tag:blogger.com,1999:blog-42413769565163350762024-03-13T11:50:11.470-07:00CREU Project BlogCollaborative Research Experience for Undergraduates 2016-17Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.comBlogger30125tag:blogger.com,1999:blog-4241376956516335076.post-72668639087084993172017-05-05T10:21:00.001-07:002017-05-05T10:21:15.910-07:00Developing our Final ReportThroughout the first week of May, my mentor, research partner, and I met to discuss the CREU template and additional topics to include in our final report. We went through our blogs and project files to compile screenshots of our process, along with the results we obtained throughout the project.Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-20573941290634516952017-04-19T21:14:00.002-07:002017-04-19T21:14:14.771-07:004/19/17: Gathering Sample Images
<style type="text/css">
p.p1 {margin: 0.0px 0.0px 0.0px 0.0px; font: 12.0px 'Noteworthy Light'; -webkit-text-stroke: #000000}
span.s1 {font-kerning: none}
</style>
<br />
<div class="p1">
<span class="s1"><span style="font-size: small;">I’ve been trying to accurately detect a very simple object, like a watch, before moving on to our positive samples. Today I wrote scripts to download large sets of watch samples from ImageNet, </span></span><span style="-webkit-text-stroke-width: initial; font-size: small;">an image database. If the success rate for detecting watches increases within this new environment, I will apply the same techniques to the images I cropped of the narrow helix.</span></div>
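The download scripts themselves aren't shown here, so the following is only a minimal sketch of what such a script might look like, assuming a plain-text file of image URLs (one per line) like the per-synset URL lists ImageNet provided; the file name "watch_urls.txt" and the output directory are made up.

```python
# Hypothetical sketch: fetch watch samples from a plain-text list of image
# URLs (one per line). "watch_urls.txt" and "watch_samples" are made-up names.
import os
import urllib.request

def filename_for(url, index):
    """Derive a local file name, keeping the URL's extension if it has one."""
    ext = os.path.splitext(url)[1] or ".jpg"
    return "watch_%04d%s" % (index, ext)

def download_all(url_file, out_dir="watch_samples"):
    os.makedirs(out_dir, exist_ok=True)
    saved = []
    with open(url_file) as f:
        for i, line in enumerate(f):
            url = line.strip()
            if not url:
                continue
            dest = os.path.join(out_dir, filename_for(url, i))
            try:
                # Many image links in such lists are dead; just skip failures.
                urllib.request.urlretrieve(url, dest)
                saved.append(dest)
            except OSError:
                pass
    return saved

if __name__ == "__main__":
    print("downloaded %d images" % len(download_all("watch_urls.txt")))
```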
Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-36745369902812515982017-04-19T21:08:00.001-07:002017-04-19T21:08:12.841-07:004/18/17: New Environment and Cropped Samples
<style type="text/css">
p.p1 {margin: 0.0px 0.0px 0.0px 0.0px; font: 12.0px 'Noteworthy Light'; -webkit-text-stroke: #000000}
span.s1 {font-kerning: none}
</style>
<br />
<div class="p1">
<span class="s1"><span style="font-size: small;"><br /></span></span></div>
<div class="p1">
<span class="s1"><span style="font-size: small;">In my previous post I mentioned that we had a 32% accuracy rate for identifying a narrow helix. To provide an alternative environment in which to train our cascade classifier, I set up a 2 GB Ubuntu server from DigitalOcean. Using a computer with higher specs may help training performance, so I downloaded the required libraries and Python bindings for OpenCV on the server. In addition, as I discussed in my last post, we wanted to change our technique, so I manually cropped all of our positive samples to include solely the narrow helix instead of the entire ear.</span></span></div>
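I did the cropping by hand, but the step can be sketched in code. This is an illustrative version only: the ROI coordinates and directory names below are hypothetical, and the OpenCV Python bindings (cv2) are assumed to be installed.

```python
# Illustrative sketch of the cropping step: cut a fixed region of interest
# out of each full-ear positive. ROI values and directory names are made up.
import os

try:
    import cv2  # OpenCV Python bindings; only needed to actually crop
except ImportError:
    cv2 = None

def clamp_roi(x, y, w, h, img_w, img_h):
    """Keep a crop rectangle inside the image bounds."""
    x = max(0, min(x, img_w - 1))
    y = max(0, min(y, img_h - 1))
    return x, y, min(w, img_w - x), min(h, img_h - y)

def crop_all(src_dir, dst_dir, roi):
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        img = cv2.imread(os.path.join(src_dir, name))
        if img is None:
            continue  # skip anything that isn't a readable image
        h, w = img.shape[:2]
        x, y, cw, ch = clamp_roi(*roi, w, h)
        cv2.imwrite(os.path.join(dst_dir, name), img[y:y + ch, x:x + cw])

if __name__ == "__main__":
    crop_all("ears_full", "ears_helix", (10, 0, 60, 40))
```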
Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-18252013273847326192017-03-29T12:28:00.001-07:002017-03-29T12:28:32.084-07:00Narrow Helix Accuracy ReportsWe're getting to the point where we want to explore our preliminary results.<br />
To get a better sense of where we stand, I examined and collected metrics from our most successful narrow helix trials. The following are the results from five trials.<br />
<div class="p1" style="-webkit-text-stroke-color: rgb(0, 0, 0); font-family: "Noteworthy Light"; font-size: 12px; font-stretch: normal; line-height: normal;">
<span class="s1" style="font-kerning: none; text-decoration: underline;"><b>Narrow helix accuracy reports: </b></span></div>
<div class="p1" style="-webkit-text-stroke-color: rgb(0, 0, 0); font-family: "Noteworthy Light"; font-size: 12px; font-stretch: normal; line-height: normal;">
<span class="s2" style="font-kerning: none;">trial 1: 1/4</span></div>
<div class="p1" style="-webkit-text-stroke-color: rgb(0, 0, 0); font-family: "Noteworthy Light"; font-size: 12px; font-stretch: normal; line-height: normal;">
<span class="s2" style="font-kerning: none;">trial 2: 3/7</span></div>
<div class="p1" style="-webkit-text-stroke-color: rgb(0, 0, 0); font-family: "Noteworthy Light"; font-size: 12px; font-stretch: normal; line-height: normal;">
<span class="s2" style="font-kerning: none;">trial 3: 1/4</span></div>
<div class="p1" style="-webkit-text-stroke-color: rgb(0, 0, 0); font-family: "Noteworthy Light"; font-size: 12px; font-stretch: normal; line-height: normal;">
<span class="s2" style="font-kerning: none;">trial 4: 1/5</span></div>
<br />
<div class="p1" style="-webkit-text-stroke-color: rgb(0, 0, 0); font-family: "Noteworthy Light"; font-size: 12px; font-stretch: normal; line-height: normal;">
<span class="s2" style="font-kerning: none;">trial 5: 2/5</span></div>
<div class="p1" style="-webkit-text-stroke-color: rgb(0, 0, 0); font-family: "Noteworthy Light"; font-size: 12px; font-stretch: normal; line-height: normal;">
<span class="s2" style="font-kerning: none;"><br /></span></div>
<div class="p1" style="-webkit-text-stroke-color: rgb(0, 0, 0); font-family: "Noteworthy Light"; font-size: 12px; font-stretch: normal; line-height: normal;">
<span class="s2" style="font-kerning: none;"><span style="font-size: small;">Overall Accuracy: 8/25 = 32 %</span></span></div>
<div class="p1" style="-webkit-text-stroke-color: rgb(0, 0, 0); font-family: "Noteworthy Light"; font-size: 12px; font-stretch: normal; line-height: normal;">
<span style="font-size: small;">We acknowledge that this is a fairly low accuracy result, but seeing where we currently stand is helpful for deciding our next steps. We discussed ways of increasing our accuracy; one promising method is changing our technique to incorporate only the focused region of each sample (i.e., instead of a positive sample of the complete ear for detecting a narrow helix, the training sample will consist of only a narrow helix).</span></div>
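The overall figure can be reproduced directly from the per-trial counts above:

```python
# Recompute the overall accuracy from the five trials listed above.
trials = [(1, 4), (3, 7), (1, 4), (1, 5), (2, 5)]  # (correct detections, total)
hits = sum(h for h, _ in trials)
total = sum(t for _, t in trials)
print("Overall accuracy: %d/%d = %.0f%%" % (hits, total, 100.0 * hits / total))
# → Overall accuracy: 8/25 = 32%
```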
Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-48560513063223375832017-03-10T12:14:00.001-08:002017-03-10T12:14:51.806-08:00March 10th Abstract Development<br />
I haven't made much of a contribution to the research project this week because I was studying for midterms and also started feeling ill at the beginning of the week. I was able to assist with the development of the abstract that we submitted for the HU Research Day at Capitol Hill.Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-11624955912461839092017-02-28T21:41:00.004-08:002017-02-28T21:41:47.876-08:00Retraining Classifier Results<br />
This week I worked on retraining the haar cascade. Having created 1,000 positive samples, I used them along with 500 negative samples as input for our narrow helix classifier. The first attempt at training passed through only 1 stage, then terminated with the following error: "Train dataset for temp stage can not be filled. Branch training terminated."<br />
<br />
I learned that this occurred because the paths within my negative descriptor file were incorrect. I quickly fixed this and retrained. On the second attempt, stages 0 and 1 were loaded and I was able to enter stage 2, but once again I encountered another message: "Required leaf false alarm rate achieved. Branch training terminated."<br />
<br />
My third attempt to train the classifier was much more successful than the previous ones. I started from scratch, without loading the last XML file containing the results of prior stages, and reached the third stage. I then tried testing the classifier with my detect script, but I was unable to detect any narrow helixes in my sample image.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-KAPDb57BsyI/WLZe1Zum_UI/AAAAAAAAAOw/jpjAVikIpCAQf2xFKqzJFIGOKz1xkTIQwCLcB/s1600/cs-stage1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="105" src="https://3.bp.blogspot.com/-KAPDb57BsyI/WLZe1Zum_UI/AAAAAAAAAOw/jpjAVikIpCAQf2xFKqzJFIGOKz1xkTIQwCLcB/s320/cs-stage1.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-XHOccXYxZ9o/WLZe1ZM1BxI/AAAAAAAAAO0/trlInPViq7YLo-T-wwwZCiX_Jvh88qFiQCLcB/s1600/cs-stage2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="91" src="https://3.bp.blogspot.com/-XHOccXYxZ9o/WLZe1ZM1BxI/AAAAAAAAAO0/trlInPViq7YLo-T-wwwZCiX_Jvh88qFiQCLcB/s320/cs-stage2.png" width="320" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-ZZ1dz7Q_rK0/WLZe1e09xKI/AAAAAAAAAO4/FPK5-Fx7p-UMUjF-UMpD9qdi9A73FWulwCLcB/s1600/cs-stage3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="90" src="https://4.bp.blogspot.com/-ZZ1dz7Q_rK0/WLZe1e09xKI/AAAAAAAAAO4/FPK5-Fx7p-UMUjF-UMpD9qdi9A73FWulwCLcB/s320/cs-stage3.png" width="320" /></a></div>
<br />
<br />
Even though I wasn't able to detect helixes within my test images, we've made meaningful progress in training the classifier. Prior to now, training didn't proceed past the 1st stage. With some minor adjustments we should be able to accurately detect segments of the ear.<br />
<br />Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-40909952980190138802017-02-22T13:02:00.000-08:002017-02-22T13:06:13.631-08:00Project Update February 22nd<br />
<br />
I did not blog about the project last week.<br />
<br />
This week we met with a grad research student, Ayotunde. He provided some great insights and assisted us in moving further with the project. I was able to generate over 1000 images from one positive sample image.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-4K1AE136m6o/WK39MY_uB2I/AAAAAAAAAOg/XFM0qoocE3A19iJMCjuA_bKevL1p8wykgCLcB/s1600/Screen%2BShot%2B2017-02-22%2Bat%2B3.39.47%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="194" src="https://3.bp.blogspot.com/-4K1AE136m6o/WK39MY_uB2I/AAAAAAAAAOg/XFM0qoocE3A19iJMCjuA_bKevL1p8wykgCLcB/s320/Screen%2BShot%2B2017-02-22%2Bat%2B3.39.47%2BPM.png" width="320" /></a></div>
<br />
This breakthrough will allow us to provide more sample images when training our classifier. The issue that prevented us from properly detecting parts of the ear was the poor accuracy of our cascade: during training, our sample size was too small to go through many stages. Once I can create a larger number of negative samples, I will be able to build a more accurate cascade.<br />
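The generation step uses the opencv_createsamples utility; a sketch of how it can be driven from Python is below. The file names and distortion limits are illustrative, and the CLI tool must already be installed.

```python
# Sketch: drive opencv_createsamples from Python to generate many distorted
# samples from a single positive image. File names and angle limits are
# illustrative values, not the ones we actually used.
import subprocess

def createsamples_cmd(positive, negatives, vec_out, num=1000, w=24, h=24):
    """Build the opencv_createsamples argument list."""
    return ["opencv_createsamples",
            "-img", positive,
            "-bg", negatives,
            "-vec", vec_out,
            "-num", str(num),
            "-maxxangle", "0.5",
            "-maxyangle", "0.5",
            "-maxzangle", "0.5",
            "-w", str(w),
            "-h", str(h)]

if __name__ == "__main__":
    subprocess.run(createsamples_cmd("ear.png", "narrow_negatives.txt",
                                     "narrow_positives.vec"), check=True)
```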
<br />
<br />
<br />Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-48182109671604675892017-02-09T09:36:00.006-08:002017-02-09T09:36:46.303-08:00Cascade Training in Windows EnvironmentLast week we discussed the possibility of switching the tools we use for training our haar cascade classifier. Over the course of the week we set up two computers with the required software for building a classifier. The resources we used for the setup included two computer vision blogs. Links: <a href="http://www.tectute.com/2011/06/opencv-haartraining.html">[1]</a> <a href="http://www.computer-vision-software.com/blog/2009/11/faq-opencv-haartraining/">[2]</a><br />
<br />
I was successful in gathering a small sample of images and recreating most of the progress we had previously made, but the haartraining consistently failed. Currently I'm encountering a parse error when generating the .vec file. <b>(Figure Below)</b><br />
<b><br /></b>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-yelOB9ciqwk/WJyl38E6oNI/AAAAAAAAAOI/SyQ3ixoyCKcVgTCRe-r33TRlzC2HGKY5gCLcB/s1600/progres_screenshot.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="336" src="https://4.bp.blogspot.com/-yelOB9ciqwk/WJyl38E6oNI/AAAAAAAAAOI/SyQ3ixoyCKcVgTCRe-r33TRlzC2HGKY5gCLcB/s640/progres_screenshot.png" width="640" /></a></div>
<b><br /></b>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Setting up the Windows environment wasn't as easy as I anticipated, and the results didn't exceed the progress we made before. By next week I plan on resolving the createsamples issue I experienced in the Mac environment. I will reach out to the OpenCV community through forums and emails for additional advice. After fixing this issue I plan on generating a large sample of images (~500) and following the advised testing ratios and criteria mentioned in my <a href="http://hucreu2016.blogspot.com/2017/01/retraining-helix-classifier-attempts.html">previous post</a>.Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-28102903775584712532017-02-01T12:40:00.003-08:002017-02-01T12:42:09.086-08:00Weekly Recap Meeting<br />
Today Morgan and I met with Dr. Washington. We discussed our current problems with the project and the next steps to take. I'm currently trying to convert all my positive samples to 8-bit images; this may improve our cascade. We reached out to another research group with experience building classifiers for advice. We also contemplated using different tools to help with our project; in the next few days we will try using a Windows environment to train our cascade to see if we obtain better results.Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-70262138220225763262017-01-31T18:25:00.002-08:002017-01-31T18:26:24.173-08:00Retraining the Helix Classifier: Attempts and HurdlesOver the past two weeks I've been working on retraining the helix classifier to improve its accuracy. Initially, when I trained the classifier, I used 20 positive samples and 10 negative samples. For the next trials I collected 40 positive samples and 500 negative samples. The positive samples came from the Collection E Notre Dame database. The negative samples were retrieved from the <a href="https://cogcomp.cs.illinois.edu/Data/Car/">UIUC Image Database for Car Detection</a>.<br />
<div>
<br /></div>
<div>
The cascade training went poorly with 40 positives and 500 negatives: roughly 6 minutes and 30 seconds in total, and the process terminated after 2 stages.</div>
<div>
<br /></div>
<div>
After a few attempts I wasn't able to successfully detect parts of the helix. I ran multiple trials where I changed the ratio of positive to negative samples (e.g., 40 positive / 250 negative and 40 positive / 80 negative). The trial with 80 negatives was quicker, but went through only 1 stage of training, and when tested no helixes were detected.</div>
<div>
<br /></div>
<div>
Some trials trained the classifier within seconds; others were lengthy, taking over 5 minutes. I recently found a post on Stack Overflow that provided suggestions on the sample sizes and properties that gave optimal results. The ideal settings had a positive-to-negative ratio of 2:1. Many people training haar classifiers generated thousands of samples from a limited supply of positive images by applying small rotations and distortions to the original samples. These transformations can be performed using the opencv_createsamples utility. For each photo it's best to create 200 samples with this technique. Another thing I learned that will improve my training is to ensure my samples are monochrome and to scale the negatives to a size of 100 x 100. Negative images should be the same size as or larger than the positives; currently my negatives are 100 x 40, much smaller than my positive samples.</div>
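The monochrome-and-resize preparation can be sketched as follows. The directory names are made up, and cv2 (the OpenCV Python bindings) is assumed to be installed to actually run it:

```python
# Sketch of the suggested preprocessing: make the negatives monochrome and
# scale them to 100 x 100. Directory names "negatives_raw"/"negatives" are
# hypothetical.
import os

try:
    import cv2
except ImportError:
    cv2 = None

TARGET = (100, 100)  # (width, height)

def needs_resize(w, h, target=TARGET):
    """True when an image is not already at the target size."""
    return (w, h) != target

def preprocess(src_dir="negatives_raw", dst_dir="negatives"):
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        # Reading in grayscale mode makes the sample monochrome.
        img = cv2.imread(os.path.join(src_dir, name), cv2.IMREAD_GRAYSCALE)
        if img is None:
            continue
        h, w = img.shape
        if needs_resize(w, h):
            img = cv2.resize(img, TARGET)
        cv2.imwrite(os.path.join(dst_dir, name), img)

if __name__ == "__main__":
    preprocess()
```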
<div>
<br /></div>
<div>
I will apply these techniques in my next trials of testing.</div>
<div>
<br /></div>
<div>
<br /></div>
Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-86422420039880354402017-01-13T15:05:00.003-08:002017-01-13T15:05:22.556-08:00Walkthrough Meeting<br />
Today we met together and had a walkthrough of making a lobule classifier. Morgan made a lot of progress and is now dividing her samples between narrow and wide lobules.<br />
<br />
Howard University is holding a research week event, and our mentor, Dr. Washington, advised us to apply and present our findings. We plan on showing the results of our work and the motivation behind the project: an ear scheme that could recognize everyone across different groups/races, examining different data sets (e.g., Asian ears, Black ears).<br />
<br />
The submission requires an abstract and the deadline is February 26, 2017. <br />
<br />
We are going to set up another meeting next Wednesday to start drafting our abstract. In the meantime, Dr. Washington is having Morgan and me write a few paragraphs about our work at this point so we have somewhere to start our draft. We each have a responsibility to write 200 words for the next meeting.<br />
<br />
In addition, for next meeting I have to increase my image sample size from 20 to 100 samples to improve the accuracy of the helix classifier, as well as start creating a classifier for the tragus. Morgan plans on finishing her wide and narrow lobule classifier.<br />
<br />
<b>Important Dates:</b><br />
HU Research Week February 26, 2017<br />
Tapia scholarship is open. General Tapia Scholarship Applicants: February 28, 2017<br />
Grace Hopper Registration opens in FebruaryErrol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-25770833037163934282016-12-12T01:54:00.003-08:002016-12-13T19:25:24.796-08:00Recovered Drive!<br />
Great news! Last week we had a terrible accident with one of our hard drives that resulted in us not being able to access our data sets. Luckily, a classmate we reached out to knew how to repair the drive, and we were back to normal in a few days. After recovering the drive we made another backup on an external hard drive, just in case we have an issue in the future.<br />
<br />
I've created a readme doc detailing the commands, steps, and resources for my research partner to follow while we're on break. During the break I also plan on creating a classifier for the curved and triangular tragus.Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-72666919641109434572016-12-04T23:52:00.000-08:002016-12-13T19:25:27.203-08:00Progress Meeting<br />
<br />
Today Morgan and I spoke with Dr. Washington about how much progress we have made to date. We also discussed the milestones we want to tackle over the winter break. The game plan is for Morgan to deliver the lobule classifier by December 18th. The types of classifications for lobules are attached and unattached. Currently we have classifiers for the helix portion of the ear. We also recently had issues with the external hard drive used to store our collection of ear samples; we are currently trying to recover the data sets.<br />
Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-1647535983469264402016-11-18T16:39:00.001-08:002016-12-12T03:12:40.903-08:00Testing Classifier Performance<br />
As I previously mentioned, I wrote a Python script to test how well our classifier detects narrow helixes. I wanted to test a small sample size, so I took 5 ear samples from a folder within our research gdrive to test against. The results were a bit discouraging, but I realized there are a few things I can do to better train the cascade.<br />
<br />
<b>detect.py </b>(source code)<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-FBsZRDFlhDA/V_g-HyiErXI/AAAAAAAAAHg/y4WA4Srbo88QWLmn6DV1-w_4ND49vq8JwCPcB/s1600/face_sc1.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="304" src="https://2.bp.blogspot.com/-FBsZRDFlhDA/V_g-HyiErXI/AAAAAAAAAHg/y4WA4Srbo88QWLmn6DV1-w_4ND49vq8JwCPcB/s400/face_sc1.png" width="400" /></a></div>
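Since the script is only shown as a screenshot, here is a minimal sketch of the kind of detect script used, with hypothetical file names for the cascade and the test image; the OpenCV Python bindings are assumed.

```python
# Minimal sketch of a cascade-based detect script. The cascade and image
# file names are hypothetical; cv2 is required to actually run detection.
try:
    import cv2
except ImportError:
    cv2 = None

def to_corners(x, y, w, h):
    """Convert an (x, y, w, h) detection into rectangle corner points."""
    return (x, y), (x + w, y + h)

def detect(image_path, cascade_path, min_neighbors=5):
    cascade = cv2.CascadeClassifier(cascade_path)
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                     minNeighbors=min_neighbors)
    for (x, y, w, h) in found:
        p1, p2 = to_corners(x, y, w, h)
        cv2.rectangle(img, p1, p2, (0, 255, 0), 2)  # draw each hit in green
    cv2.imwrite("detections.png", img)
    return len(found)

if __name__ == "__main__":
    print("%d narrow helixes detected" % detect("ear_sample.png",
                                                "classifier/cascade.xml"))
```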
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<b><u>Results:</u></b><br />
<b><u><br /></u></b>
Trial 1 - 4 Narrow Helixes Detected<br />
<div style="text-align: right;">
Trial 2 - 7 Narrow Helixes Detected</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-o8Zefe9teqM/WE6DBbdSkoI/AAAAAAAAAM8/DFs1yFynW90rFyY39PK5F2xfdActaVh1wCLcB/s1600/trial%2B1.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-o8Zefe9teqM/WE6DBbdSkoI/AAAAAAAAAM8/DFs1yFynW90rFyY39PK5F2xfdActaVh1wCLcB/s1600/trial%2B1.png" /></a></div>
<br />
<a href="https://2.bp.blogspot.com/-XXpu9mPI3HM/WE6DGxEdJpI/AAAAAAAAANA/GpqHv4WXRKcVFm7wWz1glv8AfYzEvsYHACLcB/s1600/trial%2B2.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://2.bp.blogspot.com/-XXpu9mPI3HM/WE6DGxEdJpI/AAAAAAAAANA/GpqHv4WXRKcVFm7wWz1glv8AfYzEvsYHACLcB/s1600/trial%2B2.png" /></a><br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<div style="text-align: right;">
<br /></div>
<br />
Trial 3- 4 Narrow Helixes Detected<br />
<div style="text-align: right;">
Trial 4 - 5 Narrow Helixes Detected</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-dmMc7yRmwH8/WE6DVYU692I/AAAAAAAAANQ/VJrgicY54AccmKkvYcSe4dmUp9rGln7fQCLcB/s1600/trial%2B3.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-dmMc7yRmwH8/WE6DVYU692I/AAAAAAAAANQ/VJrgicY54AccmKkvYcSe4dmUp9rGln7fQCLcB/s1600/trial%2B3.png" /></a></div>
<a href="https://3.bp.blogspot.com/-X7mAhu949dg/WE6DVSl7awI/AAAAAAAAANM/n1URBI6VThYmjXGLzZ1NJaOIq2fLJUCcQCEw/s1600/trial%2B5.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://3.bp.blogspot.com/-X7mAhu949dg/WE6DVSl7awI/AAAAAAAAANM/n1URBI6VThYmjXGLzZ1NJaOIq2fLJUCcQCEw/s1600/trial%2B5.png" /></a><br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<div style="text-align: right;">
<br /></div>
<br />
<div style="text-align: center;">
Trial 5 - 5 Narrow Helixes Detected</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-qpNlkNVTimo/WE6DVQKRMbI/AAAAAAAAANI/G7_YqjV7Z7c1D57-OtMdGGMww7mgFejNgCEw/s1600/trial%2B4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://4.bp.blogspot.com/-qpNlkNVTimo/WE6DVQKRMbI/AAAAAAAAANI/G7_YqjV7Z7c1D57-OtMdGGMww7mgFejNgCEw/s1600/trial%2B4.png" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
The results from testing were very inaccurate; there should be only one narrow helix detected per sample image. A few things should help. First, providing more samples to train with: with this classifier there are two positive samples per negative sample, while in the resources I found, many cascades were trained with a high ratio of negative samples to positives. Also, in my test script I set minNeighbors to 5, meaning a region must accumulate at least 5 overlapping candidate detections before it is declared a narrow helix. I believe that if I increase the minimum neighbors, the detection will be more accurate.Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-83567851827017695512016-11-17T19:54:00.000-08:002016-12-12T01:42:40.114-08:00Testing Our Classifier to Detect Narrow Helixes<br />
To test the performance of our newly created classifier, I wrote a Python script that runs the classifier against a set of ear samples that were not used to train it; none of the positive samples that went into building our positive vector file were included. Our wonderful mentor Dr. Washington stressed, "Don't test on what you train!" Doing so will greatly skew the results and not provide an accurate depiction of the classifier's quality.Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-61444075596332270192016-11-17T19:50:00.005-08:002016-12-12T01:36:13.574-08:00Classifier Format<br />
The traincascade tool wrote our cascade out as multiple XML files. Our narrow_helix_cascade directory contains an XML file for each stage run during training (stage0.xml, stage1.xml, stage2.xml, etc.), params.xml contains the arguments supplied to the opencv_traincascade command, and the classifier.xml file holds the features and results from all stages of training.<br />
<br />Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-85082028073422914742016-11-17T19:50:00.001-08:002016-12-12T01:19:02.899-08:00Training Our Classifier<br />
After constructing our vector file, our next task involves using the file as input for training our classifier. This is done with the opencv_traincascade command line tool.<br />
<br />
opencv_traincascade -data classifier -vec narrow_positives.vec -bg narrow_negatives.txt\<br />
-numStages 3 -minHitRate 0.999 -maxFalseAlarmRate 0.5 -numPos 20\<br />
-numNeg 10 -w 24 -h 24 -mode ALL -precalcValBufSize 256\<br />
-precalcIdxBufSize 256<br />
<br />
<b>Parameters</b><br />
In our case, classifier is the directory where we want the classifier files to be stored. The -vec flag takes the vec file we generated in our last step, and the -bg flag takes the file that lists the paths to all the negative samples we created.<br />
<br />
-precalcValBufSize indicates the amount of memory (in MB) the program may use for precalculated feature values, 256 MB in our case. If we had a larger sample size, more memory would make processing faster, but since we have a small sample size and this is one of our first trials, we won't need much. The numbers of positive and negative samples are given with -numPos and -numNeg, and the number of stages we want the classifier to undergo is given with the -numStages parameter. -minHitRate is the minimal desired hit rate for each stage of the classifier.<br />
<br />
<br />
When trying to train the classifier, we ran into a few issues.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-Oiq9mLyXJkA/WE5qjGw-c2I/AAAAAAAAAMo/aq8P7uC4D3YNGA-1ciFdRHTJMOgKPCqSACLcB/s1600/failed%2Bat%2Bfirst%2Bstage%2Btrain%2Bcascade.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="320" src="https://3.bp.blogspot.com/-Oiq9mLyXJkA/WE5qjGw-c2I/AAAAAAAAAMo/aq8P7uC4D3YNGA-1ciFdRHTJMOgKPCqSACLcB/s320/failed%2Bat%2Bfirst%2Bstage%2Btrain%2Bcascade.png" width="317" /></a></div>
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
The attempt above failed at the first stage. At first I was missing parameters or giving them incorrect values. I also believe that the number of stages to train affected the outcome, as did this trial's lack of negative samples compared to the number of positives.<br />
<br />
Eventually we were able to get a successful run. (see below)<br />
<br />
PARAMETERS:<br />
cascadeDirName: classifier<br />
vecFileName: narrow_positives.vec<br />
bgFileName: narrow_negatives.txt<br />
numPos: 20<br />
numNeg: 10<br />
numStages: 3<br />
precalcValBufSize[Mb] : 256<br />
precalcIdxBufSize[Mb] : 256<br />
acceptanceRatioBreakValue : -1<br />
stageType: BOOST<br />
featureType: HAAR<br />
sampleWidth: 24<br />
sampleHeight: 24<br />
boostType: GAB<br />
minHitRate: 0.999<br />
maxFalseAlarmRate: 0.5<br />
weightTrimRate: 0.95<br />
maxDepth: 1<br />
maxWeakCount: 100<br />
mode: ALL<br />
Number of unique features given windowSize [24,24] : 261600<br />
<br />
===== TRAINING 0-stage =====<br />
<BEGIN<br />
POS count : consumed 20 : 20<br />
NEG count : acceptanceRatio 10 : 1<br />
Precalculation time: 0<br />
+----+---------+---------+<br />
| N | HR | FA |<br />
+----+---------+---------+<br />
| 1| 1| 0|<br />
+----+---------+---------+<br />
END><br />
Training until now has taken 0 days 0 hours 0 minutes 1 seconds.<br />
<br />
===== TRAINING 1-stage =====<br />
<BEGIN<br />
POS count : consumed 20 : 20<br />
NEG count : acceptanceRatio 10 : 0.217391<br />
Precalculation time: 0<br />
+----+---------+---------+<br />
| N | HR | FA |<br />
+----+---------+---------+<br />
| 1| 1| 0|<br />
+----+---------+---------+<br />
END><br />
Training until now has taken 0 days 0 hours 0 minutes 2 seconds.<br />
<br />
===== TRAINING 2-stage =====<br />
<BEGIN<br />
POS count : consumed 20 : 20<br />
NEG count : acceptanceRatio 4 : 0.1<br />
Required leaf false alarm rate achieved. Branch training terminated.<br />
<br />
<br />
<br />
<br />Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-86903082706996261482016-11-17T19:49:00.004-08:002016-12-04T23:54:08.465-08:00Constructing a Vec File Based on Positive Narrow Helix Samples<br />
After creating our description files of positive and negative samples, the next step towards building our classifiers is packing the positive samples into a vec file.<br />
<br />
The vec file is built with the opencv_createsamples utility, which lets us generate a large number of training samples from a small number of input images by applying distortions and transformations to the positives.<br />
<br />
We wrote shell scripts to automate a few of the OpenCV command-line tools. The shell script for createsamples is below.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-H7CRkbWKa2A/WD37f8W-bUI/AAAAAAAAALw/ZCvo2jeKxnQtYmLXVnvRpY9wEppTnqHwgCLcB/s1600/Screen%2BShot%2B2016-11-29%2Bat%2B5.04.31%2BPM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="100" src="https://1.bp.blogspot.com/-H7CRkbWKa2A/WD37f8W-bUI/AAAAAAAAALw/ZCvo2jeKxnQtYmLXVnvRpY9wEppTnqHwgCLcB/s400/Screen%2BShot%2B2016-11-29%2Bat%2B5.04.31%2BPM.png" width="400" /></a></div>
<br />
<br />
Part of the generated vec file:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-YS6R53mVL8s/WD9ViCz0HHI/AAAAAAAAAME/LCMP7eqa0oInReCLIRWceSpOJ13cYSs9wCLcB/s1600/narrow_helix%2Bvec%2Bresult.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="138" src="https://2.bp.blogspot.com/-YS6R53mVL8s/WD9ViCz0HHI/AAAAAAAAAME/LCMP7eqa0oInReCLIRWceSpOJ13cYSs9wCLcB/s320/narrow_helix%2Bvec%2Bresult.png" width="320" /></a></div>
<br />
<br />
<a href="https://1.bp.blogspot.com/-H7CRkbWKa2A/WD37f8W-bUI/AAAAAAAAALw/ZCvo2jeKxnQtYmLXVnvRpY9wEppTnqHwgCLcB/s1600/Screen%2BShot%2B2016-11-29%2Bat%2B5.04.31%2BPM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><br /></a><a href="https://1.bp.blogspot.com/-H7CRkbWKa2A/WD37f8W-bUI/AAAAAAAAALw/ZCvo2jeKxnQtYmLXVnvRpY9wEppTnqHwgCLcB/s1600/Screen%2BShot%2B2016-11-29%2Bat%2B5.04.31%2BPM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><br /></a><a href="https://1.bp.blogspot.com/-H7CRkbWKa2A/WD37f8W-bUI/AAAAAAAAALw/ZCvo2jeKxnQtYmLXVnvRpY9wEppTnqHwgCLcB/s1600/Screen%2BShot%2B2016-11-29%2Bat%2B5.04.31%2BPM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><br /></a>Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-25562218885789538052016-11-17T19:49:00.002-08:002016-11-29T13:22:06.196-08:00Creating Our Negative Samples + Negative Description File<br />
<br />
When researching different ways to develop negative samples, we found that the classifier performs best when each negative embeds a slight variant of the feature we wish to detect in an image that does not actually contain that feature.<br />
Negative images can be anything, but the classifier is more accurate if they include such variants. Ideally, negative images would look exactly like the positive samples, except that they would not contain the object we want to recognize.<br />
<br />
Using GIMP, an image manipulation program, we placed images of ears in the foreground of a backdrop.<br />
<br />
Examples of Negative Samples:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-6UjNspYSh6c/WD3u5rejZOI/AAAAAAAAALY/e2tMIjreXRAu6fPf1adQ7fAlrGATmgLNACEw/s1600/negatives6.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="211" src="https://4.bp.blogspot.com/-6UjNspYSh6c/WD3u5rejZOI/AAAAAAAAALY/e2tMIjreXRAu6fPf1adQ7fAlrGATmgLNACEw/s320/negatives6.jpg" width="320" /></a></div>
<br />
<br />
<a href="https://1.bp.blogspot.com/-mg7PoPk394g/WD3u2FgwvGI/AAAAAAAAALU/r5NaZGRGNs4AgozD5p_WtZdY0CWRGwtRgCLcB/s1600/negatives4.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="150" src="https://1.bp.blogspot.com/-mg7PoPk394g/WD3u2FgwvGI/AAAAAAAAALU/r5NaZGRGNs4AgozD5p_WtZdY0CWRGwtRgCLcB/s200/negatives4.jpg" width="200" /></a><br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
<br />
Negative Description File:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-KDr_x3Jkr-k/WD3viQfJXFI/AAAAAAAAALc/uR6WEdxzCCwPO4EznJL345escriX2IQzgCLcB/s1600/Screen%2BShot%2B2016-11-29%2Bat%2B1.44.26%2BPM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="120" src="https://3.bp.blogspot.com/-KDr_x3Jkr-k/WD3viQfJXFI/AAAAAAAAALc/uR6WEdxzCCwPO4EznJL345escriX2IQzgCLcB/s200/Screen%2BShot%2B2016-11-29%2Bat%2B1.44.26%2BPM.png" width="200" /></a></div>
Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-91056855503460542852016-11-04T20:51:00.004-07:002016-11-10T09:35:53.349-08:00Creating our description file of positive narrow helix samplesAfter collecting positive training images of narrow helixes, we cropped our sample images of ears to just the portion containing the helix. The cropping was done with an open-source object marker tool written in Python.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-R7fEhY4e_-8/WB1XrgP3sQI/AAAAAAAAAKI/W6CTuy-VG18OPzRBYQl8ewWMmOmWuZtCgCLcB/s1600/cropper_stopper.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-R7fEhY4e_-8/WB1XrgP3sQI/AAAAAAAAAKI/W6CTuy-VG18OPzRBYQl8ewWMmOmWuZtCgCLcB/s1600/cropper_stopper.png" /></a></div>
<br />
<br />
The object marker lets us specify the region of interest by drawing a bounding rectangle in each positive image, then produces a text file describing the coordinates corresponding to the location of the helixes.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://2.bp.blogspot.com/-B5INBybI0FM/WB1yycbVkgI/AAAAAAAAAKY/4jsrE6VtdfsGHhxtWAak1pm7zG8IwgwPACLcB/s1600/dimens.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="141" src="https://2.bp.blogspot.com/-B5INBybI0FM/WB1yycbVkgI/AAAAAAAAAKY/4jsrE6VtdfsGHhxtWAak1pm7zG8IwgwPACLcB/s400/dimens.png" width="400" /></a></div>
<br />
This data will be used to construct our positive vector file to eventually train our classifier.<br />
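The description file follows OpenCV's positive-sample format: image path, the number of objects, then x y width height for each bounding box. A sketch of writing one; the paths and coordinates below are made-up examples, not our real annotations:

```python
def write_positive_description(annotations, out_file):
    """annotations: {image_path: [(x, y, w, h), ...]} written in OpenCV info format."""
    with open(out_file, "w") as f:
        for path, boxes in annotations.items():
            coords = " ".join(f"{x} {y} {w} {h}" for x, y, w, h in boxes)
            f.write(f"{path} {len(boxes)} {coords}\n")

# Hypothetical crops of the narrow helix region:
write_positive_description(
    {"ears/ear01.jpg": [(48, 10, 60, 45)],
     "ears/ear02.jpg": [(52, 8, 58, 44)]},
    "narrow_positives.txt",
)
```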
<br />Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-59131975619348520802016-11-03T09:29:00.002-07:002016-11-03T09:30:32.001-07:00Next Steps<br />
The Next Steps for our project:<br />
<ul>
<li>Work through Haar-training tutorials</li>
<ul>
<li><a href="http://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html">http://coding-robin.de/2013/07/22/train-your-own-opencv-haar-classifier.html</a></li>
<li><a href="https://pythonprogramming.net/haar-cascade-object-detection-python-opencv-tutorial/">https://pythonprogramming.net/haar-cascade-object-detection-python-opencv-tutorial/</a></li>
<li><a href="https://www.cs.auckland.ac.nz/~m.rezaei/Tutorials/Creating_a_Cascade_of_Haar-Like_Classifiers_Step_by_Step.pdf">https://www.cs.auckland.ac.nz/~m.rezaei/Tutorials/Creating_a_Cascade_of_Haar-Like_Classifiers_Step_by_Step.pdf</a></li>
<li><a href="http://note.sonots.com/SciSoftware/haartraining.html#z97120d9">http://note.sonots.com/SciSoftware/haartraining.html#z97120d9</a></li>
</ul>
<li>Generate XML file for Helix haartraining</li>
<li>Verify + Test Helix Classifier by feeding dummy images</li>
<ul>
<li>Test classifier against sample images like trucks and other vehicles to ensure matches aren't returned.</li>
</ul>
</ul>
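Once the cascade XML exists, the sanity check in the last bullet could be sketched as below. The cascade and image paths are placeholders, and cv2 (opencv-python) is imported lazily inside the function so the helpers work even where OpenCV isn't installed:

```python
def filter_detections(boxes, min_size=24):
    """Drop (x, y, w, h) detections smaller than the training window."""
    return [b for b in boxes if b[2] >= min_size and b[3] >= min_size]

def detect_helixes(image_path, cascade_path="narrow_cascade/cascade.xml"):
    """Run the trained cascade over one image and return bounding boxes."""
    import cv2  # requires opencv-python
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    cascade = cv2.CascadeClassifier(cascade_path)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return filter_detections([tuple(b) for b in boxes])

# A truck photo should yield no detections; an ear photo should yield some:
# assert detect_helixes("samples/truck.jpg") == []
```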
Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-36237239602473117602016-11-03T09:02:00.001-07:002016-11-03T09:08:03.730-07:00Introduction to Haar Cascades <span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;">Now that we're starting to build our extraction tool, we needed more background to gain a better understanding of how Haar cascades work.</span><br />
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><br /></span>
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><u>Background Info:</u></span><br />
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><br /></span>
<br />
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;">A HaarCascasde is used to detect objects within images. This feature based classifier was first introduced in the Viola Jones Algorithm explained in the paper "Rapid Object Detection using a Boosted Cascade of Simple Features" by Paula Viola and Michael Jones. The detection method is based off of machine learning and applying a cascade function that is trained from negative and positive images. After the cascade function is trained it can be used to detect the desired object within sample images.</span><br />
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><br /></span>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-odnFus_v_Cg/WBtebZNulII/AAAAAAAAAJs/BbZOuhiwzmEntsow_r-aAbWGLo1Sw9zCQCLcB/s1600/haar.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="193" src="https://1.bp.blogspot.com/-odnFus_v_Cg/WBtebZNulII/AAAAAAAAAJs/BbZOuhiwzmEntsow_r-aAbWGLo1Sw9zCQCLcB/s320/haar.png" width="320" /></a></div>
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><br /></span>
<a href="https://4.bp.blogspot.com/-nkaBfCQh2_k/WBtfDEitpCI/AAAAAAAAAJw/eMPiyRnx6aI6t03g6ieyljVPgv0AWThQgCLcB/s1600/haar_features.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="168" src="https://4.bp.blogspot.com/-nkaBfCQh2_k/WBtfDEitpCI/AAAAAAAAAJw/eMPiyRnx6aI6t03g6ieyljVPgv0AWThQgCLcB/s200/haar_features.jpg" width="200" /></a><span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><br /></span><br />
<div style="text-align: center;">
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><b><u><br /></u></b></span></div>
<div style="text-align: center;">
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><b><u><br /></u></b></span></div>
<div style="text-align: center;">
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><b><u><br /></u></b></span></div>
<div style="text-align: center;">
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><b><u><br /></u></b></span></div>
<div style="text-align: center;">
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><b><u><br /></u></b></span></div>
<div style="text-align: center;">
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><b><u><br /></u></b></span></div>
<div style="text-align: center;">
<b style="color: #444444; font-family: "helvetica neue", helveticaneue, arial, sans-serif; font-size: 15px;"><u><br /></u></b></div>
<div style="text-align: center;">
<b style="color: #444444; font-family: "helvetica neue", helveticaneue, arial, sans-serif; font-size: 15px;"><u><br /></u></b></div>
<div style="text-align: center;">
<b style="color: #444444; font-family: "helvetica neue", helveticaneue, arial, sans-serif; font-size: 15px;"><u><br /></u></b></div>
<div style="text-align: center;">
<b style="color: #444444; font-family: "helvetica neue", helveticaneue, arial, sans-serif; font-size: 15px;"><u><br /></u></b></div>
<div style="text-align: center;">
<b style="color: #444444; font-family: "helvetica neue", helveticaneue, arial, sans-serif; font-size: 15px;"><u>Viola Jones Algorithm</u></b></div>
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><b><u><br /></u></b></span>
<span style="color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif;"><span style="background-color: white;"><span style="font-size: 15px;">The Viola Jones algorithm detection algorithm depends on "<b>Haar features</b>" to detect the presence of a desired image in a sample, an "<b>Integral Image</b>" which is a representation of the original image. The integral image allows a detector to evaluate features quickly, several operation are performed per pixel from an image. After each pixel is computed any Haar feature can be detected in current time regardless of the position in the image or scale of the image. "<b>AdaBoost</b>" is another vital part of the algorithm and is used for feature feature selection. Adaboost increases the speed of classification by excluding irrelevant features by focusing on a subset of Haar-like features. <b>Cascading</b> as previously mentioned is one of the major contributions to object detection from the algorithm. Cascading increases the speed of the classifier by focusing on the critical portions of the image. Non-promising regions of a sample are disregarded. Increasingly complex processing is applied only once a feature of interest is found.</span></span></span><br />
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><br /></span>
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><u>Explored Sources:</u></span><br />
<span style="background-color: white; color: #444444; font-family: "helvetica neue" , "helveticaneue" , "arial" , sans-serif; font-size: 15px;"><a href="https://www.cs.cmu.edu/~efros/courses/LBMV07/Papers/viola-cvpr-01.pdf">"Rapid Object Detection using a Boosted Cascade of Simple Features"</a></span><br />
<a href="http://docs.opencv.org/3.1.0/d7/d8b/tutorial_py_face_detection.html">Face Detection using Haar Cascades</a>Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-26509776300125236762016-11-02T16:14:00.005-07:002016-11-02T16:14:48.673-07:00Helix Distinction: Wide vs. NarrowThe helix is located in the upper portion of the ear that consists of cartilage and resembles a y-shaped curve (see diagram of ear below).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-2b-SIXW8VZs/WBprmFubqvI/AAAAAAAAAI0/8Zr1_6ZkBjo1cv3BAhmYUwC7aFs4CHZ8QCLcB/s1600/ear_diagram.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="201" src="https://3.bp.blogspot.com/-2b-SIXW8VZs/WBprmFubqvI/AAAAAAAAAI0/8Zr1_6ZkBjo1cv3BAhmYUwC7aFs4CHZ8QCLcB/s400/ear_diagram.png" width="400" /></a></div>
<br />
<br />
For feature extraction on the helix portion of the ear, we created two categories of helix: wide and narrow. We distinguish between the two by looking at the amount of cartilage contained in the sample. Sample images where the helix appears to have a lot of cartilage are considered wide, whereas helixes that are small, with a very defined outer rim, are considered narrow in our classifier.<br />
<div class="p1">
<span class="s1"><br /></span>
<span class="s1"><br /></span>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-TvpO_guzMd4/WBpzMfjDJFI/AAAAAAAAAJQ/EONeamHGoBUKSjW_4nOU2mYqO5BFe7baACLcB/s1600/Screen%2BShot%2B2016-11-02%2Bat%2B7.13.15%2BPM.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://1.bp.blogspot.com/-TvpO_guzMd4/WBpzMfjDJFI/AAAAAAAAAJQ/EONeamHGoBUKSjW_4nOU2mYqO5BFe7baACLcB/s1600/Screen%2BShot%2B2016-11-02%2Bat%2B7.13.15%2BPM.png" /></a><a href="https://2.bp.blogspot.com/-DxU3KMEgN7I/WBpzIDjbeMI/AAAAAAAAAJM/AULxlkg0Qh8sYggw6yzuMiYXCVXLGpFYwCLcB/s1600/Screen%2BShot%2B2016-11-02%2Bat%2B7.13.19%2BPM.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="https://2.bp.blogspot.com/-DxU3KMEgN7I/WBpzIDjbeMI/AAAAAAAAAJM/AULxlkg0Qh8sYggw6yzuMiYXCVXLGpFYwCLcB/s1600/Screen%2BShot%2B2016-11-02%2Bat%2B7.13.19%2BPM.png" /></a></div>
<span class="s1"><br /></span>
<span class="s1"><br /></span></div>
Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-33542309467003576802016-10-18T19:05:00.001-07:002016-10-18T19:05:46.971-07:00Research Group PresentationYesterday Morgan and I presented our progress to the rest of the researchers. The discussions about the project gave us more insight into directions we can take after we complete our initial objectives.<br />
<br />
Major points taken:<br />
<br />
<ul>
<li>Utilize our classifier in a demo security application (for validation)</li>
<li>Record and compare accuracy across multiple races (with African-American, White, Asian, etc. participants)</li>
<li>Improve our classifier to handle participants with higher levels of melanin (if time permits)</li>
</ul>
<div>
<br /></div>
<div>
Our Presentation is Below:</div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-hJQxRDmv9g8/WAbUo4IDZLI/AAAAAAAAAIA/JmC4fk2n9GEqRJg0qBFKim1lcv-mWMxTQCLcB/s1600/slide1.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="237" src="https://3.bp.blogspot.com/-hJQxRDmv9g8/WAbUo4IDZLI/AAAAAAAAAIA/JmC4fk2n9GEqRJg0qBFKim1lcv-mWMxTQCLcB/s320/slide1.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-GrzyQUC7GIg/WAbUpDZ4peI/AAAAAAAAAIE/BD-S8duOJvkEeGMG9JoLME7p5FY0W0tiACLcB/s1600/slide2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="https://3.bp.blogspot.com/-GrzyQUC7GIg/WAbUpDZ4peI/AAAAAAAAAIE/BD-S8duOJvkEeGMG9JoLME7p5FY0W0tiACLcB/s320/slide2.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://3.bp.blogspot.com/-iKz1YY7x7LI/WAbUpCGD1qI/AAAAAAAAAII/GulewEdHZvoeRGaIZZO-dTWrhOhEKjmQwCLcB/s1600/slide3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="239" src="https://3.bp.blogspot.com/-iKz1YY7x7LI/WAbUpCGD1qI/AAAAAAAAAII/GulewEdHZvoeRGaIZZO-dTWrhOhEKjmQwCLcB/s320/slide3.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://1.bp.blogspot.com/-kZXlfvZADyI/WAbUpM4HL7I/AAAAAAAAAIM/8RcYImnRF7Ii5kaHm-PWgC21WexbzbibwCLcB/s1600/slide4.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="236" src="https://1.bp.blogspot.com/-kZXlfvZADyI/WAbUpM4HL7I/AAAAAAAAAIM/8RcYImnRF7Ii5kaHm-PWgC21WexbzbibwCLcB/s320/slide4.png" width="320" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://4.bp.blogspot.com/-wCseaT9qznc/WAbUpJg2EQI/AAAAAAAAAIQ/dYljNb7XGwYax0vzIWhDUNiOZm6VkV7BwCLcB/s1600/slide5.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="237" src="https://4.bp.blogspot.com/-wCseaT9qznc/WAbUpJg2EQI/AAAAAAAAAIQ/dYljNb7XGwYax0vzIWhDUNiOZm6VkV7BwCLcB/s320/slide5.png" width="320" /></a></div>
<div>
<br /></div>
<div>
<br /></div>
Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0tag:blogger.com,1999:blog-4241376956516335076.post-82212182063093679312016-10-14T11:15:00.000-07:002016-10-14T11:15:30.427-07:00Weekly Research Recap Meeting<h3 class="post-title entry-title" itemprop="name" style="background-color: white; color: #444444; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 22px; font-stretch: normal; line-height: normal; margin: 0px; position: relative;">
<br /></h3>
<div>
<br /></div>
<div>
<br /></div>
<div class="post-body entry-content" id="post-body-2982084765931859513" itemprop="description articleBody" style="background-color: white; color: #444444; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13px; line-height: 1.4; position: relative; width: 586px;">
October 13, 2016 Meeting Breakdown</div>
<div class="post-body entry-content" id="post-body-2982084765931859513" itemprop="description articleBody" style="background-color: white; color: #444444; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13px; line-height: 1.4; position: relative; width: 586px;">
<br /></div>
<div class="post-body entry-content" id="post-body-2982084765931859513" itemprop="description articleBody" style="background-color: white; color: #444444; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13px; line-height: 1.4; position: relative; width: 586px;">
In our weekly research meeting we discussed preparing for our large research group presentation on the 17th. Morgan and I will prepare slides that introduce ourselves and the project, and give the other researchers an overview of the progress we have made.</div>
<div class="post-body entry-content" id="post-body-2982084765931859513" itemprop="description articleBody" style="background-color: white; color: #444444; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13px; line-height: 1.4; position: relative; width: 586px;">
<br /></div>
<div class="post-body entry-content" id="post-body-2982084765931859513" itemprop="description articleBody" style="background-color: white; color: #444444; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13px; line-height: 1.4; position: relative; width: 586px;">
In addition, we talked to the main lab technician and got a dedicated computer for building our classifiers. We also switched from building a Haar-cascade classifier on anti-helixes to building one on helixes. Our next objective is to find 20 positive samples of wide and narrow helixes from our database.</div>
<div class="post-body entry-content" id="post-body-2982084765931859513" itemprop="description articleBody" style="background-color: white; color: #444444; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13px; line-height: 1.4; position: relative; width: 586px;">
<br /></div>
<div class="post-body entry-content" id="post-body-2982084765931859513" itemprop="description articleBody" style="background-color: white; color: #444444; font-family: Arial, Tahoma, Helvetica, FreeSans, sans-serif; font-size: 13px; line-height: 1.4; position: relative; width: 586px;">
For research purposes we're constraining what we consider the helix to just the top portion of the ear. We discussed the possibility of using edge detection to distinguish between wide and narrow helixes, but for the moment we will rely on visual approximation.</div>
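If we later pursue the edge-detection idea, one possible sketch is below: compute a gradient-magnitude edge map of a grayscale helix crop and compare the fraction of edge pixels. The thresholds and the assumption that a well-defined narrow rim is more edge-dense are ours, not established results:

```python
import numpy as np

def edge_density(gray, threshold=50):
    """Fraction of pixels whose gradient magnitude exceeds the threshold."""
    g = gray.astype(float)
    gx = np.gradient(g, axis=1)   # horizontal intensity changes
    gy = np.gradient(g, axis=0)   # vertical intensity changes
    mag = np.hypot(gx, gy)
    return float((mag > threshold).mean())

def classify_helix(gray, cutoff=0.15):
    """Hypothetical rule: a well-defined (narrow) rim has higher edge density."""
    return "narrow" if edge_density(gray) > cutoff else "wide"

# Synthetic check: a flat patch has no edges; a high-contrast stripe has many.
flat = np.full((8, 8), 128)
stripe = np.zeros((8, 8))
stripe[:, 4:] = 255
```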
Errol Grannumhttp://www.blogger.com/profile/04359320178648870543noreply@blogger.com0