Eirik Sundet

eirisu@ifi.uio.no

Master's Thesis:

  • Stereo Vision for Unmanned Ground Vehicle

My Master's Thesis concerns the use of optical sensors to construct 3D models of the area around an autonomous vehicle.

Timeline

October

  • Finish Road-Detection Algorithm
  • Implement the algorithm in C++ (Maybe as a ROS-node)
  • Demonstration of the working system at FFI (October 30)
  • Test 1

November

  • Development of the dirt-road version of the algorithm
  • Do the same approaches work in this terrain?
  • Test 2

December

  • Development of the map representation (free space, road, path, obstacle, etc.)
  • Classification

January

  • Open area path segmentation

February

  • Begin Writing the Master's Thesis

March

April

May

  • Deliver Master's Thesis

Road Detection

One of the first things the Norwegian Defence Research Establishment (FFI) wants to achieve is an autonomous vehicle able to drive by itself on tarmac roads. Thus, I have begun developing an algorithm that can detect where the road is in a color image.

As a first step, I take a segment of the image and calculate a Gaussian model for this section, under the assumption that the road is always right in front of the vehicle. Then, a region-growing search is executed to find the other 8-connected pixels that belong to the Gaussian model, according to the Bayesian cost function:

\epsilon = \frac{1}{(2\pi)^{\frac{n}{2}}\,|\Sigma|^{\frac{1}{2}}}\exp\bigg(-\frac{1}{2}\,(I(n,m) - \mu)^{\top}\,\Sigma^{-1}\,(I(n,m) - \mu)\bigg)
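
A minimal sketch of how this could look in C++ (the language planned for the implementation) is given below: it fits the Gaussian model to a seed patch assumed to lie on the road, and then grows an 8-connected region of pixels whose density under that model exceeds a threshold. The image container, the seed-patch bounds, the threshold value and the use of Eigen for the 3×3 covariance algebra are assumptions of mine, not details taken from the thesis.

// Sketch: seed-patch Gaussian model + 8-connected region growing
// (container, threshold and Eigen usage are assumed, see note above).
#include <Eigen/Dense>
#include <cmath>
#include <queue>
#include <utility>
#include <vector>

using Pixel = Eigen::Vector3d;                  // RGB value of one pixel
using Image = std::vector<std::vector<Pixel>>;  // image[row][col]

struct GaussianModel {
    Eigen::Vector3d mu;     // mean RGB value of the seed patch
    Eigen::Matrix3d sigma;  // covariance of the seed patch
};

// Fit mean and covariance from a rectangular seed patch assumed to be road.
GaussianModel fitModel(const Image& img, int r0, int r1, int c0, int c1) {
    Eigen::Vector3d mu = Eigen::Vector3d::Zero();
    int n = 0;
    for (int r = r0; r < r1; ++r)
        for (int c = c0; c < c1; ++c) { mu += img[r][c]; ++n; }
    mu /= n;
    Eigen::Matrix3d sigma = Eigen::Matrix3d::Zero();
    for (int r = r0; r < r1; ++r)
        for (int c = c0; c < c1; ++c) {
            Eigen::Vector3d d = img[r][c] - mu;
            sigma += d * d.transpose();
        }
    sigma /= (n - 1);
    return {mu, sigma};
}

// Gaussian density of one pixel under the seed model
// (the cost function above, with n = 3 features).
double density(const Pixel& x, const GaussianModel& m, const Eigen::Matrix3d& sigmaInv) {
    Eigen::Vector3d d = x - m.mu;
    double q = d.dot(sigmaInv * d);  // squared Mahalanobis distance
    double norm = std::pow(2.0 * M_PI, 1.5) * std::sqrt(m.sigma.determinant());
    return std::exp(-0.5 * q) / norm;
}

// 8-connected region growing from a seed pixel: a pixel joins the road region
// if its density under the model exceeds a hand-tuned threshold.
std::vector<std::vector<bool>> growRoad(const Image& img, const GaussianModel& m,
                                        int seedRow, int seedCol, double threshold) {
    const int rows = static_cast<int>(img.size());
    const int cols = static_cast<int>(img[0].size());
    std::vector<std::vector<bool>> road(rows, std::vector<bool>(cols, false));
    const Eigen::Matrix3d sigmaInv = m.sigma.inverse();
    std::queue<std::pair<int, int>> frontier;
    frontier.push({seedRow, seedCol});
    road[seedRow][seedCol] = true;
    while (!frontier.empty()) {
        auto [r, c] = frontier.front();
        frontier.pop();
        for (int dr = -1; dr <= 1; ++dr)
            for (int dc = -1; dc <= 1; ++dc) {
                int nr = r + dr, nc = c + dc;
                if (nr < 0 || nr >= rows || nc < 0 || nc >= cols || road[nr][nc]) continue;
                if (density(img[nr][nc], m, sigmaInv) > threshold) {
                    road[nr][nc] = true;
                    frontier.push({nr, nc});
                }
            }
    }
    return road;
}

In practice the density would likely be evaluated in log-space (or the squared Mahalanobis distance thresholded directly), which avoids numerical underflow and the repeated determinant computation; the seed pixel is simply any pixel inside the patch in front of the vehicle.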

Bayesian Classification in RGB Space

The results below are computed using only the RGB color space as features and the Bayesian cost function on a 373×113-pixel image. The images on the left show good results: in these examples the road texture is fairly homogeneous and there is little ambiguity. The two images on the right show poor results. In the top image, the section right in front of the car, which is used to build the Gaussian model for the road, is dominated by shadows, which makes the Gaussian model wrong. Shadows are a problem, but there are methods to handle them. The bottom image shows a situation where several areas display RGB values similar to those of the road.

Including Pixel-Position Into the Feature-Space

Texture-Based Approach

Obstacle Detection
