# User:Davidko


# Master

When reading the research literature on evolutionary robotics (ER), most papers are concerned with the development of some kind of controller for a given robot. The majority of ER research papers test evolution schemes that try to create a controller enabling a robot to fulfill certain tasks. The controller is often evolved in a simulator and eventually downloaded into a robot for testing in the real world. Real-world testing is needed because of the reality gap: the discrepancy between simulation and reality that arises from modelling errors in the simulator.

A part of ER that is often overlooked is the evolution of controllers together with the morphology (body) of the robot. To enable this kind of evolution, the evolved robot body must be manufactured and assembled during the runs of the evolutionary algorithm. The manufacturing could be done by rapid prototyping and human assembly, but this thesis explores the creation of an ER system that is fully automated, i.e. the robot morphology is designed, manufactured and evaluated by the ER system without any human intervention. The reasons to opt for full autonomy of an ER system are many:

1. Manual assembly of evolved robots is a tedious and time-consuming task, hindering ER research and engineering.
2. An ER system could operate in environments where it is hard or impossible to have human personnel (e.g. on Mars or in military operations).
3. The problem of autonomous manufacturing is heavily researched in disciplines such as mechanical engineering and mechatronics. ER could be a useful contribution to these disciplines for solving problems in industry, and manufacturing theory can be used in ER to accelerate its development.
4. Assembly of evolved robots places constraints on the algorithms used in ER, because the algorithm must design something that can be physically built. By treating assembly as a natural part of ER, research can be conducted to explore the relationship between evolutionary algorithms and physical assembly.

This thesis will explore the implementation of an autonomous assembly system for an ER system using robot manipulators. It will explore the controllers needed for the robot manipulators to make the system behave as intended, and will focus on using all the information that is generated before the designed robot is assembled to make the assembly process more precise. Initially, this means using the geometric description of the item to be assembled in a robot-manipulator control scheme to increase the precision of a peg-in-hole task, i.e. a classic robotic manufacturing problem where we want to automate the mating of two parts without breaking or wedging them.

Because the problem of autonomous assembly and manipulation is vast, the initial focus will be to explore promising algorithms for estimating the pose of an object from an image when the geometry of the object is known, and to use this information in a position-based visual servo controller. Another approach is to use an image-based visual servo controller. However, because this thesis relies on geometric information about the manipulated object rather than texture or colors, the image-based controller must use image features that can be extracted from geometric information.

## Thesis structure

* Introduction
* Background
* Experiments
  * Problem formulation
  * Pose estimation methods and position based visual servo controller
  * Image based (virtual) visual servo controller
  * Grasping by analyzing the geometry of the object
  * Force feedback control
  * Assembly sequence
* Results
* Discussion
* Implementation notes and documentation

## Experiments

### Problem formulation

The question to be researched is: how can information about object and workspace geometry be used to enable autonomous assembly?

### Pose estimation methods and position based visual servo controller

Algorithms: Chamfer distance, (extended Kalman filter), gradient response maps

Test pose estimators in two ways:

* Estimating the pose of a stationary object: accuracy, computation time, singularities, constraints.
* Estimating the pose of a moving target (tracking).
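To illustrate the Chamfer-distance criterion for scoring a pose hypothesis, here is a minimal 2D sketch: project the model's edge points into the image under a candidate pose, then measure their mean distance to the nearest detected scene edge via a distance transform. The function name and the toy edge image are invented for this example, not part of the thesis pipeline.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_score(scene_edges, template_points):
    """Mean distance from each template point to the nearest scene edge.

    scene_edges: boolean 2D array, True where an edge was detected.
    template_points: (N, 2) integer array of (row, col) model edge points,
        already projected into the image under a candidate pose.
    Lower scores mean a better pose hypothesis.
    """
    # Distance from every pixel to the nearest edge pixel.
    dist = distance_transform_edt(~scene_edges)
    rows, cols = template_points[:, 0], template_points[:, 1]
    return dist[rows, cols].mean()

# Toy scene: a single vertical edge along column 5 of a 10x10 image.
scene = np.zeros((10, 10), dtype=bool)
scene[:, 5] = True

# A pose that projects the model edge exactly onto column 5 scores 0;
# one shifted to column 7 scores 2 (two pixels from the nearest edge).
on_edge = np.stack([np.arange(10), np.full(10, 5)], axis=1)
shifted = np.stack([np.arange(10), np.full(10, 7)], axis=1)
print(chamfer_score(scene, on_edge))   # 0.0
print(chamfer_score(scene, shifted))   # 2.0
```

A pose estimator would minimize this score over the pose parameters; the extended Kalman filter or gradient response maps would replace or refine this brute-force scoring in the actual experiments.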

### Image based (virtual) visual servoing

Algorithms: the ViSP framework, virtual visual servoing

Test in the same way as the pose estimation methods.
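The image-based tests build on the classic visual servoing law v = -λ L⁺ (s - s*), which ViSP implements internally. As a minimal numpy sketch for point features (the function names are illustrative, not ViSP's API):

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z,
    mapping the 6-DOF camera velocity to the feature velocity."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y],
        [0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classic image-based visual servoing law: v = -lambda * L^+ (s - s*).

    s, s_star: current and desired stacked feature vectors.
    Returns a 6-vector camera velocity (vx, vy, vz, wx, wy, wz).
    """
    return -lam * np.linalg.pinv(L) @ (s - s_star)

# Two tracked points, each contributing two rows to the stacked matrix.
pts = [(0.1, 0.2, 1.0), (-0.1, 0.1, 1.2)]
L = np.vstack([point_interaction_matrix(x, y, Z) for x, y, Z in pts])
s = np.array([0.1, 0.2, -0.1, 0.1])
s_star = np.zeros(4)
v = ibvs_velocity(s, s_star, L)  # camera velocity driving s toward s_star
```

With purely geometric features (model edges instead of texture points), the same law applies but with the interaction matrix of the chosen edge features, which is the point of the virtual visual servoing formulation.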

### Grasping by analyzing the geometry of the object

Algorithms: TODO

Test grasping of different objects placed in different poses.
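Since the grasp algorithms are still to be decided, the following is only an illustrative sketch of one geometric criterion that could be tested: an antipodal check, where the axis between two contact points must lie inside both friction cones. All names and the friction coefficient are hypothetical.

```python
import numpy as np

def antipodal(p1, n1, p2, n2, mu=0.3):
    """Check whether two contacts on an object form an antipodal grasp.

    p1, p2: contact positions; n1, n2: unit inward surface normals.
    mu: friction coefficient; the grasp axis must lie inside both
        friction cones (half-angle arctan(mu)).
    """
    axis = p2 - p1
    axis = axis / np.linalg.norm(axis)
    half_angle = np.arctan(mu)
    # The axis must point roughly along n1 and roughly against n2.
    ok1 = np.arccos(np.clip(np.dot(n1, axis), -1.0, 1.0)) <= half_angle
    ok2 = np.arccos(np.clip(np.dot(n2, -axis), -1.0, 1.0)) <= half_angle
    return ok1 and ok2

# Opposing faces of a box: a valid pinch grasp.
good = antipodal(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                 np.array([1.0, 0.0]), np.array([-1.0, 0.0]))
# A normal perpendicular to the grasp axis: no force closure.
bad = antipodal(np.array([0.0, 0.0]), np.array([0.0, 1.0]),
                np.array([1.0, 0.0]), np.array([-1.0, 0.0]))
```

Candidate contact pairs would be sampled from the object's geometric model and filtered with a check like this before ranking them by a grasp-quality metric.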

### Force feedback control

Find a way to plan the applied forces using the geometric model (and perhaps a simple model of the friction).
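One common way to track a planned contact force is a damping-only admittance law, where the tool yields in proportion to the force error. A minimal 1-D sketch (the gains and the spring-contact toy environment are assumptions for illustration, not measured values):

```python
def admittance_step(x, f_meas, f_des, b=50.0, dt=0.01):
    """One integration step of a damping-only admittance law.

    x: tool position along the contact normal (positive away from
    the surface). The tool yields in proportion to the force error,
        v = (f_meas - f_des) / b,
    so excess contact force pushes the tool back off the surface.
    """
    v = (f_meas - f_des) / b
    return x + v * dt

# Toy environment: a stiff surface at x = 0 acting as a spring,
# contact force k * (-x) for x < 0.
k = 1000.0
x, f_des = -0.01, 5.0
for _ in range(200):
    f_meas = k * max(0.0, -x)
    x = admittance_step(x, f_meas, f_des)
# x converges toward -f_des / k = -0.005, where f_meas == f_des.
```

In the peg-in-hole setting the planned force from the geometric model would supply `f_des` per contact direction, with compliance along the lateral axes and force regulation along the insertion axis.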

### Assembly sequence

Implement a sequence planner (might skip).

Try to assemble a Lego structure.