Our hand-tuned rules for merging components work relatively well, but there is still substantial room for improvement through learning. We discussed the following learning ideas:
1- Using CMA to fine-tune algorithm parameters. (while keeping the same rules)
2- Boosting decision trees to learn actual new rules.
3- A cascade framework that goes back and forth between grouping and classification stages.
4- A framework to merge two components at a time. (This could be used alongside the other ideas)
5- A graph algorithm that can incorporate signals that both encourage and discourage merging two components.
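To make idea 4 concrete, here is a minimal sketch of a pairwise merging loop. The scoring function is a stand-in for whatever classifier we end up training, so every name here is hypothetical: the loop rates candidate pairs and merges the best-scoring pair while the score stays above a threshold.

```python
def greedy_pairwise_merge(components, merge_score, threshold=0.5):
    """Repeatedly merge the highest-scoring pair of components while the
    learned classifier's score exceeds `threshold`.

    `components` is a list of element collections (e.g. pixel indices);
    `merge_score(a, b)` stands in for a trained merge classifier.
    """
    comps = [set(c) for c in components]
    while len(comps) > 1:
        # Score every unordered pair and pick the best candidate merge.
        score, i, j = max((merge_score(a, b), i, j)
                          for i, a in enumerate(comps)
                          for j, b in enumerate(comps) if i < j)
        if score <= threshold:
            break  # no remaining pair is worth merging
        comps[i] |= comps[j]
        del comps[j]
    return comps
```

A graph formulation (idea 5) would generalize this by also admitting negative scores that actively discourage a merge, rather than relying on a single hard threshold.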
The first step is to generate per-pixel training data; we then plan to use CMA to update the already-trained weights.
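As a rough illustration of the parameter-tuning step, the sketch below uses a simplified (1+1) evolution strategy as a stand-in for full CMA-ES (CMA additionally adapts a covariance matrix over the parameters). The `evaluate` function and its target values are placeholders for running the merging rules on a validation set.

```python
import random

def evaluate(params):
    # Placeholder merge-quality objective: in practice this would run the
    # hand-tuned merging rules with `params` on validation data and return
    # the clustering error. The "target" values below are purely illustrative.
    target = [0.5, 2.0, 0.1]
    return sum((p - t) ** 2 for p, t in zip(params, target))

def tune(params, sigma=0.5, iters=200, seed=0):
    """Simplified (1+1) evolution strategy standing in for CMA-ES:
    mutate the current best, keep the candidate if it scores better,
    and adapt the step size after each trial."""
    rng = random.Random(seed)
    best = list(params)
    best_score = evaluate(best)
    for _ in range(iters):
        cand = [p + rng.gauss(0.0, sigma) for p in best]
        score = evaluate(cand)
        if score < best_score:
            best, best_score = cand, score
            sigma *= 1.1   # expand the step size after a success
        else:
            sigma *= 0.98  # shrink it after a failure
    return best, best_score
```

Since every accepted step strictly improves the objective, this loop can only refine the already-trained weights, which matches the intent of fine-tuning while keeping the same rules.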
Aaron added a few other helpful steps:
1. Learn classifiers to predict which clusters can be merged, and use their outputs as features in clustering
2. Learn per-cluster semantic labeling classifiers, and use the results as features for merging
3. Semantic labeling formulations (e.g., cascades, decision tree fields). We should skip this for now since it doesn’t provide clustering.
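Aaron's first step and the boosting idea above both come down to learning a classifier from labeled examples. Below is a minimal sketch of one weak learner, a single decision stump over (feature vector, label) pairs with binary labels; a boosted ensemble of such stumps would stand in for the "actual new rules". All names are hypothetical.

```python
def train_stump(samples):
    """Fit a one-feature threshold classifier ("decision stump") to a list
    of (feature_vector, label) pairs with labels in {0, 1}, by exhaustively
    trying every feature, threshold, and polarity."""
    best = None  # (num_correct, feature, threshold, polarity)
    n_features = len(samples[0][0])
    for f in range(n_features):
        for thr in sorted({x[f] for x, _ in samples}):
            for polarity in (1, -1):
                correct = sum(
                    1 for x, y in samples
                    if (1 if polarity * (x[f] - thr) > 0 else 0) == y)
                if best is None or correct > best[0]:
                    best = (correct, f, thr, polarity)
    _, f, thr, pol = best
    return lambda x: 1 if pol * (x[f] - thr) > 0 else 0
```

In a boosting loop, each round would reweight the samples toward the pairs the current ensemble gets wrong and fit a fresh stump, so the learned rules concentrate on the hard merge decisions.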