User:Eirisu


eirisu@ifi.uio.no

Master's Thesis:

  • Stereo Vision for Unmanned Ground Vehicles

My Master's Thesis concerns the use of optical sensors to construct 3D models of the area around an autonomous vehicle.

Timeline

October

  • Finish Road-Detection Algorithm
  • Implement the algorithm in C++ (Maybe as a ROS-node)
  • Demonstration of the working system at FFI (October 30)
  • Test 1

November

  • Development of the dirt-road version of the algorithm.
  • Do the same approaches work in this terrain?
  • Test 2

December

  • Development of the map representation (free space, road, path, obstacle, etc.)
  • Classification

January

  • Open area path segmentation

February

  • Begin Writing the Master's Thesis

March

April

May

  • Deliver Master's Thesis

Road Detection

One of the first things the Norwegian Defence Research Establishment (FFI) wants to achieve is an autonomous vehicle able to drive by itself on tarmac roads. Thus, I've begun making an algorithm that can detect where the road is in a color image.

As a first step, I've chosen to take a segment of the image and calculate a Gaussian model for this section. The assumption is that the road is always right in front of the vehicle. Then, a region-growing search is executed to find the other 8-connected pixels that belong to the Gaussian model, according to the Bayesian cost function:

\epsilon = \frac{1}{(2\pi)^{\frac{n}{2}}\,|\Sigma|^{\frac{1}{2}}} \exp\Big(-\frac{1}{2}\,(I(n,m) - \mu)^{T}\,\Sigma^{-1}\,(I(n,m) - \mu)\Big)
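A minimal sketch of this step in Python/NumPy is given below, assuming the image is an (H, W, 3) float array; the seed rectangle, the density threshold and the function names are illustrative placeholders rather than the actual implementation:

  import numpy as np
  from collections import deque

  def fit_gaussian(seed_pixels):
      # seed_pixels: (N, 3) RGB values from the patch assumed to be road
      mu = seed_pixels.mean(axis=0)
      sigma = np.cov(seed_pixels, rowvar=False)
      return mu, sigma

  def road_density(pixel, mu, sigma_inv, norm):
      # multivariate Gaussian density of one RGB pixel under the road model
      d = pixel - mu
      return norm * np.exp(-0.5 * d @ sigma_inv @ d)

  def grow_road_region(image, seed_box, threshold=1e-4):
      # image: (H, W, 3) float RGB; seed_box: (r0, r1, c0, c1) patch right in front of the vehicle
      h, w, _ = image.shape
      r0, r1, c0, c1 = seed_box
      mu, sigma = fit_gaussian(image[r0:r1, c0:c1].reshape(-1, 3))
      sigma_inv = np.linalg.inv(sigma)
      norm = 1.0 / ((2 * np.pi) ** 1.5 * np.sqrt(np.linalg.det(sigma)))  # n = 3 for RGB

      mask = np.zeros((h, w), dtype=bool)
      mask[r0:r1, c0:c1] = True
      queue = deque((r, c) for r in range(r0, r1) for c in range(c0, c1))

      # 8-connected region growing outwards from the seed patch
      while queue:
          r, c = queue.popleft()
          for dr in (-1, 0, 1):
              for dc in (-1, 0, 1):
                  rr, cc = r + dr, c + dc
                  if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                      if road_density(image[rr, cc], mu, sigma_inv, norm) > threshold:
                          mask[rr, cc] = True
                          queue.append((rr, cc))
      return mask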

Bayesian Classification in RGB Space

The results below are computed using only the RGB color space as features and the Bayesian cost function on a 373×113 pixel image. The left images show some good results: in these examples the road texture is fairly homogeneous and there is little ambiguity. The two images on the right show some bad results. In the top image, the section right in front of the car, which is used to build the Gaussian model for the road, is dominated by shadows, which causes the Gaussian model to be wrong. Shadows are a problem, but there are methods to handle them. The bottom image shows a situation where several areas display RGB values similar to the road.

Including Pixel-Position Into the Feature-Space

Texture-Based Approach

I've experimented with using the Gray-Level Co-Occurrence Matrix (GLCM) to extract textural information from the image. In my experiment, I compute the GLCM for a 31×31 pixel window surrounding each pixel in the image. Then, 26 different features are computed by applying different functions to the GLCM. I've tested different combinations of these features to see which combination provides the most robust differentiation. Below are some of the best features I found:
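As a rough sketch of how such per-window GLCM properties can be computed, assuming scikit-image is available (graycomatrix/graycoprops; older releases spell them greycomatrix/greycoprops), the snippet below shows only a handful of the standard properties rather than all 26 features; the window size, offsets and property names are illustrative:

  import numpy as np
  from skimage.feature import graycomatrix, graycoprops

  def glcm_feature_maps(gray, window=31,
                        props=("contrast", "homogeneity", "energy", "correlation")):
      # gray: (H, W) uint8 image; returns an (H, W, len(props)) array of per-pixel texture features
      half = window // 2
      padded = np.pad(gray, half, mode="reflect")
      h, w = gray.shape
      out = np.zeros((h, w, len(props)))
      for r in range(h):
          for c in range(w):
              patch = padded[r:r + window, c:c + window]
              # GLCM for a single offset (distance 1, angle 0); several offsets can be averaged
              glcm = graycomatrix(patch, distances=[1], angles=[0],
                                  levels=256, symmetric=True, normed=True)
              for k, p in enumerate(props):
                  out[r, c, k] = graycoprops(glcm, p)[0, 0]
      return out

This brute-force loop is slow; in practice the gray levels are usually quantized to fewer levels (e.g. 16 or 32) and the windows are processed in a more efficient sliding-window fashion.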

Illumination-Invariant Images

Shadows and different lighting conditions are a big problem for many segmentation algorithms. Therefore, it is beneficial to be able to filter out these differences in lighting, before segmentation is initiated.

By using knowledge about the camera's spectral response, one can compute an illumination-invariant picture with the following formula: ii_image = 0.5 + log(I_G) − α·log(I_B) − (1 − α)·log(I_R), where α is the spectral response coefficient for the particular camera, I_R, I_G and I_B are the original image separated into the three RGB color channels, and ii_image is the illumination-invariant output image.
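Written out as a small NumPy function, this is a direct transcription of the formula above; the value of α is camera-specific and simply passed in, and the small epsilon to avoid log(0) is my own addition:

  import numpy as np

  def illumination_invariant(image, alpha):
      # image: (H, W, 3) float RGB image with values in [0, 1]; alpha: camera spectral response coefficient
      eps = 1e-6  # avoid log(0) in dark pixels
      R, G, B = image[..., 0], image[..., 1], image[..., 2]
      return 0.5 + np.log(G + eps) - alpha * np.log(B + eps) - (1.0 - alpha) * np.log(R + eps)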

Results

Obstacle Detection
