http://members.cox.net/fbirchmore/goodrun.mpg
http://members.cox.net/fbirchmore/goodrun2.mpg
http://members.cox.net/fbirchmore/goodrun3.mpg
We ran my soda can detection algorithm on live video from Robart III. Apparently, the classifier I have trained is very susceptible to variations in lighting and background clutter that were not present in the training data. Despite these deficiencies, this particular classifier does quite well at eliminating almost all of the background clutter in the center of the image (only a central portion of the window is scanned, as mentioned in a previous post). The remaining false positives could be passed to another detection method for further processing, or could signal Robart III to zoom in on those areas for closer inspection.

From the results I have observed so far, I think weak classifiers that are better at capturing features unique to soda cans would work better than the Haar-like features used here. A SIFT-based detection method will probably work much better, in part because it uses keypoints that are more invariant to changes in lighting. Investigating a SIFT-based detection method will probably be my next step. Although I still need to read more about SIFT to understand it fully, I am going to look into the possibility of incorporating AdaBoost into a SIFT detection method. More information on this will appear in later blog postings.
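For readers who want a concrete picture of the first step, here is a rough sketch (in Python with OpenCV, which is not necessarily what runs on Robart III) of scanning only the central region of each live video frame with a trained cascade classifier. The cascade file name, the ROI fraction, and the detection parameters are placeholders, not the actual values used in my experiments.

```python
# Sketch: run a trained cascade over the central portion of each video frame.
# Assumptions: cascade file name, 50% central ROI, and detection parameters.
import cv2

cascade = cv2.CascadeClassifier("soda_can_cascade.xml")  # assumed file name
cap = cv2.VideoCapture(0)                                # live video source

ROI_FRACTION = 0.5  # assumed: scan only the central 50% of the frame

while True:
    ok, frame = cap.read()
    if not ok:
        break

    h, w = frame.shape[:2]
    # Crop the central region of the frame
    x0 = int(w * (1 - ROI_FRACTION) / 2)
    y0 = int(h * (1 - ROI_FRACTION) / 2)
    x1 = x0 + int(w * ROI_FRACTION)
    y1 = y0 + int(h * ROI_FRACTION)
    roi = frame[y0:y1, x0:x1]

    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Candidate detections; false positives could be handed to a second
    # detector or used to cue the robot to zoom in for a closer look.
    cans = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

    for (x, y, cw, ch) in cans:
        cv2.rectangle(frame, (x0 + x, y0 + y), (x0 + x + cw, y0 + y + ch),
                      (0, 255, 0), 2)

    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```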
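And here is a minimal sketch of the SIFT idea mentioned above: matching keypoints from a reference soda can image against a candidate region. I have not implemented this yet, so the reference image names, the ratio-test threshold, and the match-count decision rule are all assumptions for illustration only.

```python
# Sketch: SIFT keypoint matching between a reference can image and a
# candidate region. All file names and thresholds are assumptions.
import cv2

sift = cv2.SIFT_create()

template = cv2.imread("can_template.png", cv2.IMREAD_GRAYSCALE)     # assumed reference image
candidate = cv2.imread("candidate_region.png", cv2.IMREAD_GRAYSCALE)  # e.g. a zoomed-in ROI

kp_t, des_t = sift.detectAndCompute(template, None)
kp_c, des_c = sift.detectAndCompute(candidate, None)

# Brute-force matching with Lowe's ratio test to keep distinctive matches
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des_t, des_c, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Simple (assumed) decision rule: enough good matches => probable soda can
MIN_MATCHES = 10
print("can detected" if len(good) >= MIN_MATCHES else "no can")
```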