Wednesday, June 25, 2014

Stroke Width Transform using Generalized Distance Transform

The original Stroke Width Transform algorithm is described in the following paper: 
Detecting Text in Natural Scenes with Stroke Width Transform

There are two major implementations available:
1- The version Jue's MATLAB implementation uses, from bresenham_swt.m

2- A version provided by a few Cornell students: https://github.com/aperrau/DetectText

This algorithm starts with edge detection. From each edge pixel, it traces a ray perpendicular to the edge orientation at that pixel and tries to hit another edge; the distance between the two edge pixels gives an estimate of the stroke width.

My implementation of Stroke Width Transform is different. Instead of ray tracing, I use the Generalized Distance Transform. The core of my code is three MATLAB lines:

D = DT(img);       % D: distance transform -- each pixel's (squared) distance to the nearest edge
[~, R] = DT(-D);   % R: assignment -- each pixel is matched to a nearby maximum of D (a stroke-skeleton pixel)
swt = sqrt(D(R));  % stroke width estimate: the assigned skeleton pixel's distance to the edge

DT is the generalized distance transform function from here. D is the distance transform and R is the assignment (for each pixel, the index of the pixel it is matched to).
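For intuition, generalized distance transforms in the Felzenszwalb-Huttenlocher formulation are built from a 1D squared-distance transform that is applied along every row and then every column of the result, which is what makes the whole pass linear in the number of pixels. Here is a minimal MATLAB sketch of that 1D step (my transcription for illustration, not the linked code):

function [d, r] = dt1d(f)
% Lower envelope of parabolas: d(p) = min_q ((p - q)^2 + f(q)), r(p) = the argmin.
n = numel(f);
v = zeros(1, n);      % indices of the parabolas in the lower envelope
z = zeros(1, n + 1);  % boundaries between adjacent parabolas
k = 1; v(1) = 1; z(1) = -inf; z(2) = inf;
for q = 2:n
    s = ((f(q) + q^2) - (f(v(k)) + v(k)^2)) / (2*q - 2*v(k));
    while s <= z(k)   % the new parabola hides the previous one; pop it
        k = k - 1;
        s = ((f(q) + q^2) - (f(v(k)) + v(k)^2)) / (2*q - 2*v(k));
    end
    k = k + 1; v(k) = q; z(k) = s; z(k + 1) = inf;
end
d = zeros(1, n); r = zeros(1, n); k = 1;
for p = 1:n
    while z(k + 1) < p, k = k + 1; end
    d(p) = (p - v(k))^2 + f(v(k));  % squared distance -- hence the sqrt above
    r(p) = v(k);                    % the argmin; this is what becomes R
end
end

For a binary edge map, f would be 0 at edge pixels and a large finite constant (not inf, to keep the envelope arithmetic safe) everywhere else.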

Besides its simplicity, this implementation has several advantages:
1- Its complexity is O(n), where n is the number of pixels.
2- The input can be continuous grayscale values. The algorithm takes advantage of the anti-aliasing matte around strokes to provide a more accurate stroke width estimate.
3- The output is smoother. Some holes that appear in the original SWT implementation are fixed here.
Here is a visual comparison of my implementation with the old techniques.
https://dl.dropboxusercontent.com/u/20022261/reports/stroke_width_transform_benchmark.html

The first column contains the original images. The second column is a segmentation using RGB k-means. Please note that the boundary matte is preserved. The third column is the output of my SWT implementation. The fourth column is the code Jue's students currently use. The fifth column is an implementation from Cornell students. The last




Tuesday, June 24, 2014

K-means for Segmentation

Aseem suggested a simple experiment: running k-means on the RGB values of pixels. The initial segmentation outcome was promising. Two major problems were apparent in the output:

1- Boundary pixels may be assigned to the wrong cluster due to anti-aliasing.
2- Separate elements with the same color fall into one cluster.

Boundary clean-up

I tried a few techniques to fix the first problem. The following gave the best result:

1- Before running k-means to determine the cluster centers, I excluded edge pixels so that they do not form a separate cluster.
2- For pixels that fall close to a segment boundary, I perform a soft assignment. The output ends up with a clean boundary matte (see the sketch below).
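A minimal sketch of this, assuming the toolbox kmeans/pdist2 and a Canny edge mask (the detector and the number of clusters are placeholders):

img = im2double(imread('design.png'));   % H x W x 3 input design
X = reshape(img, [], 3);                 % one RGB row per pixel
E = edge(rgb2gray(img), 'canny');        % boundary pixels to exclude
[~, C] = kmeans(X(~E(:), :), 5);         % centers estimated from non-edge pixels only
[~, labels] = min(pdist2(X, C), [], 2);  % then every pixel is assigned to a center
seg = reshape(labels, size(E));

The soft assignment of step 2 would then blend the two nearest centers for pixels within a pixel or two of E, which is what recovers the matte.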

Here are the results after cleaning the boundary matte:

https://dl.dropboxusercontent.com/u/20022261/reports/segmentation_using_kmeans.html

Second level of segmentation 

In images that are segmented using only color, separate design elements tend to fall into the same cluster because they share the same color. We discussed two approaches to fix this problem:

1- Add certain features to RGB features to improve k-means clustering. One could hope that improved features would help improve clusters that k-means would generate. We discussed Stroke Width Transform, Texture features (LBP and local image statistics) and spatial features. We also discussed a metric learning process based on stochastic gradient descent to learn weights for features.

2- Separate the disjoint elements of one cluster and join them back up according to their appearance. We thought it reasonable to process text elements separately, using specialized features that could include the Stroke Width Transform.

We decided to investigate the second idea first. I began implementing a basic version of Stroke Width transform that fits our application.
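As a starting point, splitting each color cluster into its spatially disjoint parts is straightforward with connected components (a sketch; seg is the label map from the k-means step):

parts = zeros(size(seg)); next = 0;
for c = 1:max(seg(:))
    CC = bwconncomp(seg == c, 8);         % disjoint pieces of one color cluster
    for i = 1:CC.NumObjects
        next = next + 1;
        parts(CC.PixelIdxList{i}) = next; % each piece gets its own label
    end
end

The joining step would then compare pieces by appearance features such as color and stroke width.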



Friday, June 20, 2014

Segmentation using Edge Detection

There are certain differences between graphic designs and natural images that affect both the goal of segmentation and the cues available for it:




                                 Graphic Design    Natural Image
Distinct segment boundaries      Frequent          Rare
Color consistency in segments    High              Low
Disjoint elements                Many              Few


We realized that a mean-shift based segmentation algorithm does not produce appropriate results on graphic design.

We decided to try segmentation using edge detection. Two techniques are proposed for now.

1- Run an Edge-Detector; find connected components in edges and fill up the gaps

2- Run an Edge-Detector; find connected components in non-edge pixels and call each a super-pixel
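A minimal sketch of the second technique (the dilation radius is a guess, meant to close small gaps in the edge map before taking components):

E = edge(rgb2gray(img), 'canny');
E = imdilate(E, strel('disk', 1));  % thicken edges so regions do not leak through gaps
CC = bwconncomp(~E, 4);             % connected components of the non-edge pixels
sp = labelmatrix(CC);               % each component becomes a superpixel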


Update: I tried a few types of edge detectors. The problem is that these detectors are not designed to generate regions; they produce edge fragments that are often disjoint.

Here are the outputs of edge detection:

https://dl.dropboxusercontent.com/u/20022261/reports/edge_detection.html

Thursday, June 19, 2014

The problem of Segmentation and Grouping for Graphic Design

After applying segmentation techniques to graphic designs, we realized there is a significant difference between the way a graphic design and a natural image must be segmented. We are using natural image segmentation algorithms; however, our experiments show that some fairly simple heuristics give better segmentations of graphic designs than a natural image segmenter does. Several important factors make a graphic design different from a natural image.

1- Graphic design elements have more consistent colors, shapes and boundaries.

2- Graphic design elements may consist of disjoint sub-elements that need to be grouped together. There is a body of psychology research on how humans group elements in an image.


Text segmentation

Stroke Width Transform

I compared three implementations of Stroke Width Transform.

1- The version Jue's MATLAB implementation uses, from bresenham_swt.m
This version is designed for black-and-white images, so running it on grayscale images is not meaningful.

2- A version provided by a few Cornell students
http://stackoverflow.com/questions/4837124/stroke-width-transform-swt-implementation-java-c
This implementation is quick to run and produces very few false positives. However, it also misses a portion of the true positive letters.

3- A version from CCV library
http://libccv.org/doc/doc-swt/
I don't recommend this library. It is hard to install and does not produce proper output.

You can see the results here. Because Jue's MATLAB implementation of SWT was slow, I ran it on only a few images.
https://dl.dropboxusercontent.com/u/20022261/reports/stroke_width_transform_benchmark.html

Using bounding-boxes to improve text segmentation

I applied some heuristics to our text-box annotations to segment out the text. I first determine whether the text is dark on a light background or light on a dark background. Then I run k-means with two components on the pixels to separate foreground from background.
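A minimal sketch of the k-means step (the [x y w h] box format and the minority-cluster polarity heuristic are my assumptions for illustration):

patch = im2double(imcrop(img, bbox));    % bbox: one text annotation
X = reshape(patch, [], size(patch, 3));  % pixels as color rows
labels = kmeans(X, 2, 'Replicates', 3);  % two components: foreground and background
fg = labels ~= mode(labels);             % assume text is the minority cluster
mask = reshape(fg, size(patch, 1), size(patch, 2));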

Here are the results:
https://dl.dropboxusercontent.com/u/20022261/reports/text_foreground_benchmark.html

Next Steps

We planned to:
1- Use Jue's bounding boxes instead of the annotations as the starting point for text segmentation
2- Visualize the output on images
3- Use the output to determine which superpixels belong to text

Tuesday, June 17, 2014

Meeting on June 17th, 2014

We discussed the quality of superpixels here:

We brainstormed a few heuristics to improve superpixel segmentation and decided on the following ideas:

1- Get rid of low quality images
2- Remove compression noise using a median filter (sketched below)
3- Use other superpixel algorithms such as TurboPixels
4- See if Jue's paper provides auxiliary data for text pixels
5- Heuristics to use text annotations to detect pixels belonging to the text
6- Hand annotate which superpixels must go together.
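The median-filter idea could be as simple as a per-channel pass (the 3x3 window is a guess):

for c = 1:size(img, 3)
    img(:, :, c) = medfilt2(img(:, :, c), [3 3]);  % suppress JPEG speckle
end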


We noted that how superpixels relate to each other matters. We probably need various kinds of features, including color, shape and the stroke width transform. We also discussed ideas to use VOC-style detectors and RCNN features, but we planned to study them later on.

Monday, June 16, 2014

Plans for Computer Vision Analysis

Text Processing

We used text detection and OCR to extract text automatically. We use a text line detection code that generates bounding boxes for text lines. Each image takes about 1 minute to process. For OCR we use Tesseract. Text detection results are here:
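Tesseract can be driven directly from MATLAB, one cropped text line at a time (a sketch; it assumes the tesseract binary is on the PATH, and the file names are illustrative):

imwrite(lineImg, 'line.png');       % one detected text line
system('tesseract line.png line');  % writes the recognized text to line.txt
txt = strtrim(fileread('line.txt'));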

Future Plan

Here is what we discussed. After we extract the text, we can try removing it and detect the major graphic elements by parsing the image. We can over-segment the image into many superpixels. Then, using computer vision techniques, we can group and classify superpixels into elements such as graphics, photo or background. We can either extract bounding boxes from segments, or generate a heat map or a feature representation that could later be used to improve search.



Background

We decided to work on techniques involving reverse engineering graphic designs. We discussed possible goals, possible techniques and some background work.

We started by investigating ways to parse a graphic design into its elements. Parsing graphic designs can be defined in various ways. We decided to start from a good application, then define the parsing task according to the requirements of the application.

We looked at Theo, which helps beginners produce good graphic design. Theo is supposed to give the user suggestions for making a professional-looking graphic design.

Dataset

We made a dataset of 107 designs that are simple enough to experiment with. We annotated them using Mechanical Turk. (I did the annotations in a Mechanical Turk page.) We annotated the text fields and graphics fields using bounding boxes.

Pipeline

A query input design would contain a few graphics and a few text elements. Given the query, the pipeline consists of two major steps:

1- Search for relevant designs for suggestion
2- Transfer the layout, fonts and colors to the user query


Nearest Neighbor Search

We performed nearest neighbor search using a few different distance measures. Some measures consider the image as a whole while some others try to perform a binary matching between pairs of elements. We used the following measures:
1- Intersection over Union (sketched below)
2- Aspect ratio of elements
3- Color distribution
4- Density and Moments of elements
5- Saliency map
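As one concrete example, measure 1 on a pair of matched element boxes reduces to a few lines (boxes as [x y w h]):

function s = iou(a, b)
% Intersection over union of two axis-aligned boxes.
ix = max(0, min(a(1) + a(3), b(1) + b(3)) - max(a(1), b(1)));
iy = max(0, min(a(2) + a(4), b(2) + b(4)) - max(a(2), b(2)));
inter = ix * iy;
s = inter / (a(3) * a(4) + b(3) * b(4) - inter);
end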

Transfer

Our goal is to transfer the layout, fonts and colors from a database design to the query design. The elements in the query design are parsed, so the extent of the graphics and text elements is known. However, database designs are raw JPG images; they have no annotations by default. We need to process the database designs using MTurk and computer vision techniques. Our goal is to extract the layout, fonts and colors, as well as certain auxiliary annotations that help improve search.

We directly generate SVG files and view them in a browser.
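Since the transferred result is essentially boxes with styles, the SVG writer can be a short fprintf loop (a sketch; the elems struct array and its fields are made up for illustration):

W = 800; H = 600;  % canvas size (illustrative)
% elems: struct array with fields x, y, w, h, fill, str ('' for non-text elements)
f = fopen('result.svg', 'w');
fprintf(f, '<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">\n', W, H);
for e = elems
    if isempty(e.str)
        fprintf(f, '  <rect x="%d" y="%d" width="%d" height="%d" fill="%s"/>\n', ...
            e.x, e.y, e.w, e.h, e.fill);
    else
        fprintf(f, '  <text x="%d" y="%d" font-size="%d" fill="%s">%s</text>\n', ...
            e.x, e.y + e.h, e.h, e.fill, e.str);
    end
end
fprintf(f, '</svg>\n'); fclose(f);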

Preliminary Results

Preliminary results show various limitations. One major contributing factor to the quality of results is the size of the design dataset, so we need to expand our dataset from 107 images to thousands of images. Another contributing factor is how well dataset images are processed.