Ohbuchi Laboratory
Graduate School of Engineering, University of Yamanashi, Yamanashi, Japan.

SHREC 2009

3D retrieval using machine learning

Links

SHREC 2008 Generic Models Track

News

Introduction

The objective of this track is to compare 3D model retrieval methods that employ machine learning algorithms. The track has two subcategories: (1) unsupervised learning algorithms and (2) off-line supervised learning algorithms that learn multiple classes at once. (That is, on-line supervised methods that learn a class from on-line human feedback, e.g., through relevance feedback, are NOT included.)

This track employs the SHREC 2006 database, that is, the union of the train and test sets of the Princeton Shape Benchmark (PSB). It is a collection of 1,814 polygon soup models. Please refer to the SHREC 2006 home page for an overview of that past contest. There will be more than one query set, one of which will be identical to that of SHREC 2006. The database, the tools for quantitative performance evaluation, etc. are largely borrowed from SHREC 2006, courtesy of Prof. Veltkamp and his team.

Link to SHREC 2009

Two entry categories

This track accepts methods that employ machine learning. Benchmark results will be clearly marked to indicate the category each method belongs to. The two categories are:

  1. Unsupervised methods: This category includes methods that do not use any machine learning, as well as methods that employ UNSUPERVISED learning. The training set can be anything, provided the algorithm does not use the labels (classification) of the models in the set. For example, the algorithm may use the SHREC 2006 dataset (i.e., the PSB test + train sets) or the National Taiwan University 3D model database, so long as it ignores the class labels.
  2. Supervised methods: This category includes methods that employ off-line supervised learning of multiple categories. However, on-line supervised learning, e.g., by using relevance feedback, is not allowed. (That is, all learning must be done before retrieval starts, and no training (e.g., information on classes) is allowed during the retrieval session.) The evaluation will be done using the SHREC 2006 categories. Participants are asked to submit results for the following two cases:
    1. SS: Train the algorithm by using the SHREC 2006 ground truth classes (30 classes). Evaluate the algorithm by using the same SHREC database and the same SHREC 2006 ground truth classes.
    2. PS: Train the algorithm by using the PSB train set classes (90 classes, 907 models). (Do not use the PSB test classes.) Evaluate the algorithm by using the SHREC database and an undisclosed ground truth classification set.
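The two cases above amount to a small train/evaluate configuration matrix. A minimal sketch in Python (the string values are descriptive labels taken from the text, not actual file or dataset identifiers):

```python
# The SS and PS supervised-training cases as configuration records.
# Values are descriptive only; they are not real dataset paths or IDs.
CASES = {
    "SS": {  # train and evaluate on the same SHREC 2006 classification
        "train_db": "SHREC 2006 (1,814 models)",
        "train_classes": "SHREC 2006 ground truth (30 classes)",
        "eval_db": "SHREC 2006 (1,814 models)",
        "eval_classes": "SHREC 2006 ground truth (30 classes)",
    },
    "PS": {  # train on the PSB train classes, evaluate on undisclosed classes
        "train_db": "PSB train set (907 models)",
        "train_classes": "PSB train classes (90 classes)",
        "eval_db": "SHREC 2006 (1,814 models)",
        "eval_classes": "undisclosed ground truth classification",
    },
}

for name, case in CASES.items():
    print(name, "->", case["train_classes"], "/", case["eval_classes"])
```

The PS case is the harder one: the training classes differ from the (undisclosed) evaluation classes, so it probes generalization rather than memorization.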

Machine learning algorithms can be unsupervised or supervised. It is difficult to disallow unsupervised algorithms, since many existing methods already use them, in the form of Principal Component Analysis (PCA), to filter features or to reduce their dimensionality.
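As an illustration of such unsupervised preprocessing, PCA-based dimension reduction of per-model feature vectors can be sketched with NumPy as follows. The feature matrix here is synthetic; real descriptors would come from a shape feature extractor, and the dimensions (128 in, 32 out) are arbitrary choices for the example:

```python
import numpy as np

# Synthetic stand-in: one 128-D feature vector per model in a
# 1,814-model database (the size of the SHREC 2006 collection).
rng = np.random.default_rng(0)
features = rng.normal(size=(1814, 128))

def pca_reduce(X, k):
    """Project row vectors onto their top-k principal components."""
    Xc = X - X.mean(axis=0)            # center the data
    # SVD of the centered matrix; rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T               # coordinates in the reduced space

reduced = pca_reduce(features, 32)
print(reduced.shape)  # (1814, 32)
```

No class labels are consulted anywhere, which is what places PCA in the unsupervised category.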

Supervised learning comes in two flavors: off-line and on-line. An off-line algorithm learns multiple categories (classes) from a set of labeled (classified) training 3D models prior to retrieval. We allow this form of supervised learning in this track. We decided NOT to allow on-line supervised learning, such as methods using relevance feedback.

A benchmark of an off-line supervised method requires specification of (1) the database used to train, (2) the classes of the training database, (3) the database used to evaluate, and (4) the classes of the evaluation database. For example, the task is easier for a learning algorithm if both the classes and the database entries of the training set are identical to those of the test (evaluation) set. If they differ, the learning algorithm must generalize well across classes and/or databases to perform well. We decided to use two training sets to evaluate a method's generalization capability.
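For concreteness, one common way to score a retrieval run against a ground-truth classification (used by PSB-style benchmarks) is the "first tier" measure: for a query of class C, the fraction of the top |C|-1 retrieved models that also belong to C. A minimal sketch with synthetic, well-separated features and labels (the data here is illustrative, not benchmark data):

```python
import numpy as np

# Synthetic database: 5 classes of 20 models each, with 16-D feature
# vectors shifted per class so that classes are roughly separable.
rng = np.random.default_rng(1)
labels = np.repeat(np.arange(5), 20)
features = rng.normal(size=(100, 16)) + 2.0 * labels[:, None]

def first_tier(features, labels):
    """Mean first-tier score over all queries, using L2 nearest neighbors."""
    scores = []
    for q in range(len(features)):
        dists = np.linalg.norm(features - features[q], axis=1)
        dists[q] = np.inf                        # exclude the query itself
        class_size = np.sum(labels == labels[q]) - 1
        top = np.argsort(dists)[:class_size]     # first tier = |C| - 1 results
        scores.append(np.mean(labels[top] == labels[q]))
    return float(np.mean(scores))

score = first_tier(features, labels)
print(round(score, 3))
```

Swapping in a different ground-truth classification for `labels` (e.g., SHREC 2006 classes versus an undisclosed set) changes the score without touching the features, which is exactly the generalization axis the two training sets are meant to probe.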

Instruction for participants

The following is the schedule.

Schedule

Evaluation Results

Query set 1
Runfile(s) per participant Q1
Per query Q1
All runfiles Q1
Query set 2
Runfile(s) per participant Q2
Per query Q2
All runfiles Q2
Query set 1 and set 2 combined
Runfile(s) per participant Q1Q2
Per query Q1Q2
All runfiles Q1Q2

Contact Information

Ryutarou OHBUCHI

Computer Science Department
University of Yamanashi
4-3-11 Takeda, Kofu-shi,
Yamanashi-ken, 400-8511
Japan
Phone: +81-55-220-8570
ohbuchi AT yamanashi DOT ac DOT jp