User:Martijho

Revision as of 14:50, 6 November 2017

Key "words"

  • Super neural network
  • Evolved sub-models from a larger set of parameters
  • Multitask learning
  • No catastrophic forgetting
  • Embedded transfer learning

Thoughts on Thesis

- Is the search for the first path unnecessary? That search is over good combinations of the network's parameters while those same parameters are being trained for the first time. In other words: does the search provide a significant increase in transferability, or any measurable increase in performance, over simply picking a random path and training it for a set number of iterations? (A sketch of this comparison follows below.)
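A minimal sketch of that comparison, assuming a hypothetical train_and_evaluate(path, steps) helper that trains the modules on a path for a given number of steps and returns a fitness score. The grid size, population size, and training budget below are placeholders, not the thesis configuration.

  import random

  LAYERS, MODULES, ACTIVE = 3, 10, 3      # illustrative module grid, 3 active modules per layer
  TOTAL_STEPS, EVAL_STEPS = 10_000, 50    # same total training budget for both strategies

  def random_path():
      """A path is one set of active module indices per layer."""
      return [random.sample(range(MODULES), ACTIVE) for _ in range(LAYERS)]

  def mutate(path, rate=0.1):
      """Re-draw each active module index with a small probability."""
      return [[m if random.random() > rate else random.randrange(MODULES) for m in layer]
              for layer in path]

  def baseline(train_and_evaluate):
      """Pick one random path and spend the whole budget training it."""
      path = random_path()
      return path, train_and_evaluate(path, TOTAL_STEPS)

  def tournament_search(train_and_evaluate, population_size=8):
      """PathNet-style binary tournament: the loser is overwritten by a mutated copy of the winner."""
      population = [random_path() for _ in range(population_size)]
      best_path, best_fit = population[0], float("-inf")
      for _ in range(TOTAL_STEPS // EVAL_STEPS):
          a, b = random.sample(range(population_size), 2)
          fit_a = train_and_evaluate(population[a], EVAL_STEPS)
          fit_b = train_and_evaluate(population[b], EVAL_STEPS)
          winner, loser = (a, b) if fit_a >= fit_b else (b, a)
          if max(fit_a, fit_b) > best_fit:
              best_path, best_fit = population[winner], max(fit_a, fit_b)
          population[loser] = mutate(population[winner])
      return best_path, best_fit

  # Dry run with a dummy fitness: tournament_search(lambda path, steps: random.random())

Running both strategies with the same train_and_evaluate and the same total budget would give a direct answer to the question above.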

Thesis problem specification

Study the behaviour of super neural networks when they are saturated with subtasks from the same domain, as in a curriculum learning scenario. Research questions include:

  • Can we estimate the decline in needed capacity for each new sub-task learned from the curriculum?
  • Could a PathNet saturated with optimized paths for tasks from a curriculum provide one/few-shot learning?
    • What would, in that case, constitute a "saturated PathNet"? (One possible metric is sketched after this list.)
    • Is there a learning advantage to be had from this kind of learning?
  • Is there a measurable increase in performance by searching over optimal "first paths" instead of just training a selected segment of the PathNet?
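One way to make "saturated" concrete (my assumption, not a definition from the PathNet paper) is the fraction of modules already frozen by the optimal paths of previously learned tasks:

  def saturation(frozen_paths, layers=3, modules_per_layer=10):
      """Fraction of the module grid locked by earlier tasks' optimal paths.
      frozen_paths holds one path per learned task; a path is a per-layer list of module indices."""
      frozen = set()
      for path in frozen_paths:
          for layer_idx, modules in enumerate(path):
              frozen.update((layer_idx, m) for m in modules)
      return len(frozen) / (layers * modules_per_layer)

  # Two learned tasks whose paths partially overlap:
  paths = [[[0, 1, 2], [3, 4, 5], [6, 7, 8]],
           [[0, 1, 9], [0, 1, 2], [3, 4, 5]]]
  print(saturation(paths))  # ~0.53: about half of the grid is already locked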

PathNet structure

Small structure to reduce computational requirements.

  • 3 layers, 10-20 modules per layer, each module a small affine MLP (a minimal sketch follows below)
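A minimal NumPy sketch of that grid, assuming module outputs within a layer are summed before a ReLU as in the PathNet paper; the layer width, input size, and active path below are placeholders, and a task-specific readout layer would sit on top.

  import numpy as np

  LAYERS, MODULES, WIDTH, INPUT_DIM = 3, 10, 20, 8

  rng = np.random.default_rng(0)
  # weights[l][m] holds the (W, b) pair of module m in layer l
  weights = [[(rng.standard_normal((WIDTH, WIDTH if l else INPUT_DIM)) * 0.1, np.zeros(WIDTH))
              for _ in range(MODULES)] for l in range(LAYERS)]

  def forward(x, path):
      """path gives the active module indices per layer, e.g. [[0, 4, 7], [1, 2, 3], [5, 6, 9]]."""
      h = x
      for layer, active in zip(weights, path):
          summed = sum(W @ h + b for W, b in (layer[m] for m in active))
          h = np.maximum(summed, 0.0)   # ReLU on the summed module outputs
      return h

  print(forward(np.ones(INPUT_DIM), [[0, 4, 7], [1, 2, 3], [5, 6, 9]]).shape)  # (20,)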

Test scenario

An episode must be fairly quick to run. Small input dimensionality keeps both the necessary capacity of the PathNet structure and the computational time down. The scenario must also be easy to divide into subtasks (a sketch of one subtask follows the list below).

  • OpenAI gym?
    • LunarLander:
      • Hover
      • Land safely
      • Land in goal
      • Land in goal quickly
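A rough sketch of how one of these subtasks could be framed as a shaped reward on top of the standard environment, assuming the 2017-era Gym step/reset API and the Box2D-based LunarLander-v2; the hover_reward below is purely illustrative, not a decided subtask definition.

  import gym

  def hover_reward(obs):
      """Illustrative shaping for the "Hover" subtask: stay near y = 1 with low vertical speed."""
      y, vy = obs[1], obs[3]   # LunarLander observation: x, y, vx, vy, angle, ang. vel., leg contacts
      return 1.0 - abs(y - 1.0) - abs(vy)

  env = gym.make("LunarLander-v2")
  obs = env.reset()
  shaped_return = 0.0
  for _ in range(200):
      obs, _, done, _ = env.step(env.action_space.sample())  # random policy, just to exercise the loop
      shaped_return += hover_reward(obs)
      if done:
          break
  print("hover-shaped return:", shaped_return)

The other subtasks (land safely, land in goal, land in goal quickly) could be expressed the same way, as different reward functions over the same observation.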


Who cites PathNet?

Born to Learn

EPANN (Evolved Plastic Artificial Neural Networks). Mentions PathNet as an example of where evolution was used to train a network on multiple tasks: "While these results were only possible through significant computational resources, they demonstrate the potential of combining evolution and deep learning approaches."

Learning time-efficient deep architectures with budgeted super networks

Mentions PathNet as a predecessor in the super neural network family

Deep Learning for video game playing

Reviews recent deep learning advances in the context of how they have been applied to play different types of video games

Evolutive deep models for online learning on data streams with no storage

PathNet is proposed alongside PNNs (progressive neural networks) as a way to deal with changing environments. It is mentioned that both PathNet and progressive networks show good results on sequences of tasks and are a good alternative to fine-tuning for accelerating learning.

Online multi-task learning using active sampling

Cites Progressive Neural Networks for multitask learning

Hierarchical Task Generalization with Neural Programs

Mentions PathNet as a way of reusing weights

Multitask Evolution with Cartesian Genetic Programming

Mentions PathNet in a list of systems that use evolution as a tool in multitasking
