Ohbuchi Laboratory
Graduate School of Engineering, University of Yamanashi, Yamanashi, Japan.
SHREC 2009
3D retrieval using machine learning
Introduction
The objective of this track is to compare 3D model retrieval methods that
employ machine learning algorithms. This track has two subcategories:
(1) unsupervised learning algorithms and (2) off-line supervised learning
algorithms that learn multiple classes at once. (That is, on-line supervised
methods that learn a class from on-line human feedback, e.g., through
relevance feedback, are NOT included.)
This track employs the SHREC 2006 database, that is, the union of the train and test sets of the Princeton Shape Benchmark (PSB). It is a collection of 1,814 polygon soup models. Please refer to the
SHREC 2006 home page for an overview of the past contest. There will be more than one query
set, one of which will be identical to that of SHREC 2006. The database, the tools
for quantitative performance evaluation, etc., are largely borrowed from
SHREC 2006, courtesy of Prof. Veltkamp and his team.
Two entry categories
This track accepts methods that employ machine learning. Benchmark results
will be clearly marked to indicate the category a method belongs to. The
two categories are:
- Unsupervised methods: This category includes methods that do not use any machine learning, as
well as methods that employ UNSUPERVISED learning. The training set can
be anything if the algorithm does not use the labels (classification) of the
models in the set. For example, the algorithm may use the SHREC 2006 dataset
(i.e., the PSB test + train sets) or the National Taiwan University 3D model database, as long as the algorithm ignores the class labels.
- Supervised methods: This category includes methods that employ off-line supervised learning of multiple categories. However, on-line supervised
learning, e.g., by using relevance feedback, is not allowed. (That is,
all the learning must be done before the retrieval starts, and no
training (e.g., information on classes) is allowed during the retrieval
session.) The evaluation will be done using the SHREC 2006 categories. The
participants are asked to submit results for the following two cases:
- SS: Train the algorithm using the SHREC 2006 ground-truth classes (30 classes). Evaluate the algorithm using the same SHREC database
and the same SHREC 2006 ground-truth classes.
- PS: Train the algorithm using the PSB train set classes (90 classes, 907 models). (Do not use the PSB test classes.) Evaluate the algorithm using the SHREC database
and an undisclosed ground-truth classification set.
Machine learning algorithms can be unsupervised or supervised. It is difficult
to disallow unsupervised algorithms, for many existing methods already use
them, in the form of Principal Component Analysis (PCA), to filter out
features or to reduce their dimensionality.
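As an illustration, here is a minimal sketch of such an unsupervised PCA step, assuming each 3D model has already been described by a fixed-length feature vector; the dimensions and the random input below are placeholders, not part of the track.

  import numpy as np

  def pca_reduce(features, k):
      """Project per-model feature vectors (one row per model)
      onto the top-k principal components of the set."""
      centered = features - features.mean(axis=0)
      # SVD of the centered matrix; rows of vt are the principal axes.
      _, _, vt = np.linalg.svd(centered, full_matrices=False)
      return centered @ vt[:k].T

  # Example: 1,814 models with 625-dimensional raw features -> 64 dims.
  raw = np.random.rand(1814, 625)   # placeholder for real shape features
  reduced = pca_reduce(raw, 64)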
Supervised learning comes in two flavors: off-line and on-line. An off-line
algorithm learns multiple categories (classes) from a set of labeled (classified)
training 3D models prior to retrieval. We allow this form of supervised
learning in this track. We decided NOT to allow on-line supervised learning,
such as methods using relevance feedback.
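For concreteness, below is a toy sketch, not any particular participant's method, of what an allowed off-line supervised step might look like: a within-class whitening transform is learned from a labeled training set before retrieval, and retrieval then uses only the fixed transform (no learning during the session).

  import numpy as np

  def learn_whitening(features, labels, eps=1e-6):
      """Off-line step: estimate the within-class scatter from a
      labeled training set and return a whitening transform, so that
      Euclidean distance de-emphasizes within-class variation."""
      d = features.shape[1]
      sw = np.zeros((d, d))
      for c in np.unique(labels):
          x = features[labels == c]
          x = x - x.mean(axis=0)
          sw += x.T @ x
      sw /= len(features)
      # Inverse square root of the regularized within-class scatter.
      vals, vecs = np.linalg.eigh(sw + eps * np.eye(d))
      return vecs @ np.diag(vals ** -0.5) @ vecs.T

  # At retrieval time, only the fixed transform W is used:
  # dist(q, m) = np.linalg.norm(W @ q - W @ m)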
A benchmark of an off-line supervised method requires specification of (1)
the database to train on, (2) the classes of the training database, (3) the
database to evaluate on, and (4) the classes of the evaluation database. For
example, it is easier for a learning algorithm if both the classes and the
database entries of the training set are equal to those of the test (evaluation)
set. If they are different, the learning algorithm must generalize well across
classes and/or databases to perform well. We decided to use two training
sets to evaluate a method's generalization capability; the two resulting cases
are restated below.
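Restating the two cases as informal configurations (an editor's paraphrase; the strings are descriptions, not file names):

  SS = {
      "train_database": "SHREC 2006 database (PSB train + test, 1,814 models)",
      "train_classes":  "SHREC 2006 ground-truth classes (30 classes)",
      "eval_database":  "SHREC 2006 database",
      "eval_classes":   "SHREC 2006 ground-truth classes",
  }
  PS = {
      "train_database": "PSB train set (907 models)",
      "train_classes":  "PSB train classes (90 classes)",
      "eval_database":  "SHREC 2006 database",
      "eval_classes":   "an undisclosed ground-truth classification set",
  }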
Instructions for participants
The following are the steps for participation; dates are given in the Schedule section below.
- Registration:
- Register by sending email to the organizer. The registration should include:
- Entry name: Name of the team/method.
- Contact information: Name, affiliation, contact address and email of the
participant.
- Entry category: Either "Unsupervised method" or "Supervised
method".
- Download the database:
- The database will be available by March 10.
- Database for the SHREC 2009 track (identical to the SHREC 2006 database) (zip)
- The database to be used for this year's track will contain the same models
as the SHREC 2006 database, that is, the union of the PSB test set and train set, having 1,814 models. The SHREC 2006 (and 2008)
database models are randomly rotated.
- The database is exactly the same as the SHREC 2006 database. (I just downloaded
it from the SHREC 2006 site, and placed it here.)
- Software tools
- Performance evaluation tools developed by Prof. Veltkamp et al. for SHREC 2006 are still available. Use
these tools to evaluate your method(s).
- Download the query sets:
- The query set will be a union of two subsets:
- Query set 1 of the SHREC 2009 track (query set 1 is identical to the SHREC 2006 query set)
- Models are numbered from 1 to 30.
- A classification file, the one we used for SHREC 2006, can be found here. Use this classification file.
- A queries file, the one we used for SHREC 2006, can be found here.
- For a supervised algorithm, use this classification file to train the algorithm for the "SS" category.
- You may also use query set 1, the classification file, and the performance evaluator to test your algorithm, tune parameters, etc.
- Query set 2 of the SHREC 2009 track (set 2 is a new set of queries created for SHREC 2009.)
- Models are numbered from 31 to 60.
- The classification file for query set 2 will be released after the contest.
- Perform retrieval experiments:
- Perform retrieval experiments on your own ("Honor system").
- Return results and a draft paper:
- Return the results of retrieval experiment.
- For each query, rank all the models in the database in the order of decreasing
relevance. (That is, rank them in the order of increasing dissimilarity.)
- Each method may submit the results of up to two runs, e.g., using two different parameter sets.
- Clearly name the ranked-list file using the following components:
- Entry name (shortened to at most 8 letters).
- The distinction between unsupervised ("UL") and supervised ("SL") methods.
- For an SL method, add "SS" or "PS" depending on the training set: "SS" if the training is done using the SHREC 2006 classes, "PS" if the training is done using the PSB train classes.
- The query set used (Q1 or Q2, for the SHREC 2006 set or the SHREC 2009 set, respectively).
- The run (R1 or R2).
- The ranklist file name format:
- <Entry name>_<UL|SL>_<""|SS|PS>_<Q1|Q2>_<R1|R2>
- Example: the "FooBar" team uses supervised learning, training on the SHREC 2006 classes and retrieving with SHREC query set 1. The ranklist file name for run 1 would be:
- FooBar_SL_SS_Q1_R1
- Note that not all of the combinations are valid. (A minimal sketch of writing such a file follows this list.)
- Return also a draft of the 2-page summary paper describing the retrieval method. The paper should describe the retrieval method employed, its performance, and how the method and its results compare to the others in this track. The last part, the comparison, can be left empty, as it can only be filled in after all the results are in. Please use the template here for preparation.
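Below is a minimal sketch of producing one ranked-list file under the naming scheme above; the dist() function stands in for a participant's own dissimilarity measure, and the one-model-ID-per-line layout is an assumption, not the official submission format.

  def write_ranklist(entry, category, training, query_set, run,
                     query, model_ids, dist):
      # Build the file name, e.g. "FooBar_SL_SS_Q1_R1"; the training
      # component is omitted for unsupervised ("UL") entries.
      parts = [entry[:8], category]
      if training:
          parts.append(training)
      parts += [query_set, run]
      name = "_".join(parts)
      # Rank every database model by increasing dissimilarity to the
      # query, i.e., by decreasing relevance.
      ranked = sorted(model_ids, key=lambda m: dist(query, m))
      with open(name, "w") as f:
          for m in ranked:
              f.write(f"{m}\n")
      return name

  # Example: a supervised run trained on SHREC 2006 classes (SS),
  # query set 1, run 1:
  # write_ranklist("FooBar", "SL", "SS", "Q1", "R1",
  #                query_model, all_model_ids, my_distance)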
- Submit the final version of the paper:
- The deadline for the final camera-ready version of the summary paper is January 30 (see the Schedule below).
- The 2-page summary will be included in the EG 3DOR Workshop proceedings.
Schedule
- Jan. 09: Register by this date.
- Jan. 16: Submit experimental results.
- Jan. 19: Return evaluation results.
- Jan. 30: Submit final track paper by this date.
- Mar. 29: SHREC '09 Results reported at EG Workshop on 3D Object Retrieval
Evaluation Results
Query set 1
Runfile(s) per participant Q1
Per query Q1
All runfiles Q1
Query set 2
Runfile(s) per participant Q2
Per query Q2
All runfiles Q2
Query set 1 and set 2 combined
Runfile(s) per participant Q1Q2
Per query Q1Q2
All runfiles Q1Q2
Contact Information
Ryutarou OHBUCHI
Computer Science Department
University of Yamanashi
4-3-11 Takeda, Kofu-shi,
Yamanashi-ken, 400-8511
Japan
Phone: +81-55-220-8570
ohbuchi AT yamanashi DOT ac DOT jp