Evaluation of Text Detection
I compared the current version of our text detection with Jue's version. Because Jue's version outputs bounding boxes, I also convert segmentations into bounding boxes so the two are comparable. I count the number of true positives, false positives, and missed elements, and use them to compute precision and recall. I split the data into 34 training and 73 test images. The training images are smaller and faster to process, since I needed images that could run quickly. I ran the algorithms on both the training and test sets, even though the split is irrelevant to Jue's version of text detection; I show Jue's accuracy on my training set to help us assess the inherent difficulty of that set.
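Roughly, the counting works like the following Python sketch. The helper names, the greedy matching step, and the 0.5 IoU threshold are illustrative assumptions for this write-up rather than the exact criterion used.

import numpy as np
from scipy import ndimage

def mask_to_boxes(mask):
    # Convert a binary segmentation mask to one (x0, y0, x1, y1)
    # bounding box per connected component.
    labeled, _ = ndimage.label(mask)
    boxes = []
    for ys, xs in ndimage.find_objects(labeled):
        boxes.append((xs.start, ys.start, xs.stop, ys.stop))
    return boxes

def iou(a, b):
    # Intersection-over-union of two (x0, y0, x1, y1) boxes.
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / float(union) if union else 0.0

def precision_recall(pred_boxes, gt_boxes, thresh=0.5):
    # Greedily match each prediction to the best unmatched
    # ground-truth box; the 0.5 threshold is an assumption here.
    matched = set()
    tp = 0
    for p in pred_boxes:
        best, best_iou = None, thresh
        for i, g in enumerate(gt_boxes):
            if i in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(pred_boxes) - tp   # unmatched detections
    fn = len(gt_boxes) - tp     # missed elements
    precision = tp / float(tp + fp) if tp + fp else 0.0
    recall = tp / float(tp + fn) if tp + fn else 0.0
    return precision, recall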
Here are the visual results:
https://dl.dropboxusercontent.com/u/20022261/reports/text_segmentation_benchmark.html
Testing
Amin's code (Testing): Precision = 0.614, Recall = 0.692
Jue's code (Testing): Precision = 0.714, Recall = 0.453
Training
Amin's code (Training): Precision = 0.902, Recall = 0.830
Jue's code (Training): Precision = 0.833, Recall = 0.600
(Jue's code was not trained on these images; this is just to compare the difficulty of the training set.)
All images (Training and Testing)
Amin's code (All): Precision = 0.671, Recall = 0.724
Jue's code (All): Precision = 0.745, Recall = 0.487
Comparison of my training and testing performance:
Amin's code (Training): Precision = 0.902, Recall = 0.830
Amin's code (Testing): Precision = 0.614, Recall = 0.692