Tuesday, January 24, 2006

Understanding Adaboost

Yesterday, I talked with my professor to clarify some things about Adaboost. One of the concepts I wasn't sure about was how a weak classifier is able to determine whether an image is a license plate. How can a weak classifier be assigned a weight based on its accuracy if it doesn't even know whether an image is a license plate or not? Today, I discovered that the answer is simple: the weak classifier's role in Adaboost is NOT to determine whether an image is a license plate --the images are already labeled beforehand. Adaboost is used not in the detection process but in the training process. It simply constructs a good detector from many smaller detectors, using training images that contain ONLY a license plate or ONLY no license plate.

How Adaboost Works

From what I have read so far, I will attempt to describe how Adaboost works in the context of license plate detection (LPD). The main purpose of Adaboost is to take a bunch of weak classifiers and create a single strong classifier from them. Each weak classifier detects a single feature of an image, such as a vertical line in the upper-left corner, only slightly better than random guessing.


In its most simplified form, Adaboost receives several different inputs:

1) positive training set: This is a set of images of ONLY LICENSE PLATES.

2) negative training set: This is a set of images of ONLY NON-LICENSE PLATES, such as pictures of trees or other background objects.

3) weak classifiers: This is a set of classifiers, each of which detects a single feature in an image with a success rate slightly greater than 50%. For license plates, this could include thousands of different classifiers that take different derivatives in different parts of an image. In Dlagnekov's thesis, he divided each image into 7 different regions, and each classifier detected a different feature in a different region.

4) a set of default weights (one for each training image): these are initialized to some default value such as 1/(2 * # of license plate images). These weights will be adjusted by Adaboost during training so that later rounds concentrate on the images that earlier classifiers got wrong.
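To make input 3 concrete, here is a sketch of what one weak classifier might look like. The "vertical edge" feature and the threshold are hypothetical illustrations of my own, not features from Dlagnekov's thesis:

```python
# Hypothetical weak classifier: thresholds a single crude image feature.
# `image` is a 2D list of pixel intensities; `region` is (r0, c0, r1, c1).

def vertical_edge_strength(image, region):
    """Sum of horizontal intensity differences -- a crude vertical-edge feature."""
    (r0, c0, r1, c1) = region
    total = 0
    for r in range(r0, r1):
        for c in range(c0, c1 - 1):
            total += abs(image[r][c + 1] - image[r][c])
    return total

def weak_classify(image, region, threshold):
    """Outputs 1 ("feature present") if the feature value exceeds the threshold."""
    return 1 if vertical_edge_strength(image, region) > threshold else 0
```

A real system would have thousands of these, each looking at a different feature in a different region, and Adaboost's job is to pick out and weight the useful ones.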


With these inputs, Adaboost is trained in the following way:

For X rounds
____For each weak classifier
________run it on each positive training image.
________run it on each negative training image.
________compute and store the (weighted) error for the current weak classifier.
____End for
____Find the classifier that had the best detection rate (i.e., the lowest error) and save it to build the strong classifier.
____Lower the weights of the training images this classifier got right: new weight = old weight * ( error / (1 - error) )
End for

The error for each weak classifier is a weighted count of the training images it misclassifies: positive images for which it returns "false" and negative images for which it returns "true". Because the image weights are involved, getting a hard (heavily weighted) image wrong costs more than getting wrong an image that earlier rounds already handled.
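The training loop above can be sketched in Python. This follows the Viola-Jones style of boosting as I understand it -- the weights live on the training images, and each chosen classifier gets a vote weight of log(1/beta) in the strong classifier. The function names and the small epsilon clamp (to avoid dividing by zero when a classifier is perfect on the training set) are my own:

```python
import math

def adaboost(examples, classifiers, rounds):
    """examples: list of (x, label) pairs, label 1 = plate, 0 = non-plate.
    classifiers: list of functions mapping x to 0 or 1.
    Returns a list of (classifier, vote_weight) pairs."""
    n = len(examples)
    weights = [1.0 / n] * n          # one weight per training image
    strong = []
    for _ in range(rounds):
        # normalize weights so they sum to 1
        total = sum(weights)
        weights = [w / total for w in weights]
        # find the weak classifier with the lowest weighted error
        best_error, best_clf = None, None
        for clf in classifiers:
            error = sum(w for w, (x, label) in zip(weights, examples)
                        if clf(x) != label)
            if best_error is None or error < best_error:
                best_error, best_clf = error, clf
        best_error = max(best_error, 1e-10)   # clamp: avoid division by zero
        beta = best_error / (1.0 - best_error)
        # lower the weights of the images this classifier got right,
        # so the next round focuses on the hard images
        weights = [w * beta if best_clf(x) == label else w
                   for w, (x, label) in zip(weights, examples)]
        strong.append((best_clf, math.log(1.0 / beta)))
    return strong
```

Note that an accurate classifier (small error) gives a small beta, which both shrinks the weights of the images it classified correctly and gives it a large vote weight.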

As a result of boosting, only the best weak classifiers are kept, and these are combined into a "strong" classifier. The strong classifier simply runs all of its weak classifiers on an image, multiplies each result by the corresponding weight, and adds everything together. For example, say we have a strong classifier consisting of 2 weak classifiers with weights of .6 and .3, respectively. When run on a license plate image, the first weak classifier detects its license plate feature, so it outputs a "1". The second weak classifier doesn't detect its feature, so it outputs a "0". The combined result would be .6( 1 ) + .3( 0 ) = .6. If this number is greater than or equal to (.6 + .3) * (some threshold), the final result from the strong classifier is "true". Otherwise, the output from the strong classifier is "false".
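That weighted vote is small enough to write out directly. A minimal sketch (the function name is mine), checked against the numbers in the example above:

```python
def strong_classify(strong, x, threshold=0.5):
    """`strong` is a list of (weak_classifier, weight) pairs, where each
    weak classifier maps x to 0 or 1. Returns True if the weighted vote
    reaches the given fraction of the total weight."""
    score = sum(weight * clf(x) for clf, weight in strong)
    return score >= threshold * sum(weight for _, weight in strong)
```

With weights .6 and .3, only the first classifier firing, and a threshold of 0.5, the score is .6 and the cutoff is (.6 + .3) * 0.5 = .45, so the strong classifier says "true". Raising the threshold makes the detector stricter (fewer false positives, more missed plates).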

Detecting the Letter "e"

To help me fully understand Adaboost, my professor suggested that I create an Adaboosted algorithm for detecting the letter "e" on a sheet of paper. I will spend the next week implementing this while I help enable Robart to capture video footage.


Blogger ammar w said...

Can you ask your supervisor how the weak classifier is trained? And what type of weak classifier to use?


7:03 PM  
Blogger Dang Khoa said...

Thanks for this entry, it helps me a lot. And I have the same question as ammar.

8:39 PM  
Blogger vikas said...

I want to implement the adaboost algorithm for detection of an object in an image .. so please can you help me in doing that?

10:40 PM  
Blogger Tarun said...

I would rather be happy if you go on posting things about the adaboost classifier and how you have done the work. I also want to ask: up to what limit can I take classifiers, and how can I check that I have produced a strong classifier? You also talked about some threshold value -- how do I calculate that value?

3:40 AM  
Blogger Nagesh said...

Thanks. I have read a lot on Adaboost by Schapire and by Viola & Jones for face detection, but your blog is really helpful for me.
If you know about the adaboost algorithm by Viola & Jones, please help me out:
how are Haar-like features correlated with the adaboost algorithm for face detection?
My email: kulkarninagesh13@gmail.com
Thank You...

2:24 AM  
