Friday, November 18, 2016

Testing Classifier Performance


As I previously mentioned, I wrote a Python script to test how well our classifier detects narrow helixes. I wanted to start with a small sample size, so I took 5 ear samples from a folder in our research gdrive to test against. The results were a bit discouraging, but I realized there are a few things I can do to better train the cascade.

detect.py (source code)
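
Roughly, the script loads our trained cascade and runs it over each ear sample. This is only a sketch of what detect.py does; the cascade XML and image filenames below are placeholders, not the actual paths in our gdrive.

import cv2

# Placeholder filenames -- the real cascade XML and ear samples live in our research gdrive.
CASCADE_PATH = "narrow_helix_cascade.xml"
SAMPLE_IMAGES = ["ear_1.jpg", "ear_2.jpg", "ear_3.jpg", "ear_4.jpg", "ear_5.jpg"]

cascade = cv2.CascadeClassifier(CASCADE_PATH)

for path in SAMPLE_IMAGES:
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # detectMultiScale returns one bounding box per detection. With minNeighbors=5,
    # a candidate region must have at least 5 overlapping neighbor detections
    # before it is kept.
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    print("%s - %d narrow helixes detected" % (path, len(detections)))

    # Draw a rectangle around each detection so the results can be inspected visually.
    for (x, y, w, h) in detections:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("detections", img)
    cv2.waitKey(0)

cv2.destroyAllWindows()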

Results:

Trial 1 - 4 Narrow Helixes Detected
Trial 2 - 7 Narrow Helixes Detected
Trial 3 - 4 Narrow Helixes Detected
Trial 4 - 5 Narrow Helixes Detected
Trial 5 - 5 Narrow Helixes Detected


The results from testing were very inaccurate: each sample image contains only one narrow helix, so there should be exactly one detection per image. A few things I believe will help. First, I can provide more samples to train with. This classifier was trained with two positive samples per negative sample, while in the resources I found, many cascades were trained with a much higher ratio of negative samples to positives. Second, in my test script I set minNeighbors to 5, which means a candidate region must overlap with at least 5 neighboring detections before it is reported as a narrow helix. I believe increasing the minimum neighbors will reject more of these false positives and make detection more accurate.
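
A quick way to check that idea is to sweep minNeighbors over one sample and watch how the detection count changes; ideally it should drop toward 1. Again, the file paths below are placeholders.

import cv2

# Placeholder paths, as in the sketch above.
cascade = cv2.CascadeClassifier("narrow_helix_cascade.xml")
gray = cv2.cvtColor(cv2.imread("ear_1.jpg"), cv2.COLOR_BGR2GRAY)

# Re-run detection with progressively stricter minNeighbors values and
# print how many detections survive each setting.
for n in (5, 10, 15, 20):
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=n)
    print("minNeighbors=%d -> %d detections" % (n, len(detections)))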
