User:Martijho

From Robin


Revision as of 13:53, 6 November 2017


PathNet

Keywords

  • Super neural network
  • Evolved sub-models from a larger set of parameters
  • Multitask learning
  • No catastrophic forgetting
  • Embedded transfer learning
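The keywords above all describe one mechanism: a large fixed "super network" of modules, from which small sub-models (paths) are evolved per task. A minimal NumPy sketch of that idea follows; the layer and module dimensions are illustrative assumptions, not DeepMind's implementation.

```python
# Sketch (not the DeepMind implementation) of the PathNet idea: a fixed
# pool of small affine modules, with a path selecting which modules are
# active in each layer. All dimensions below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

L, M = 3, 10           # layers x modules per layer (as in the thesis spec)
IN, H, OUT = 8, 16, 4  # assumed input/hidden/output sizes

def make_module(n_in, n_out):
    # One small affine module: weight matrix and bias.
    return (rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out))

modules = [[make_module(IN if l == 0 else H, OUT if l == L - 1 else H)
            for _ in range(M)] for l in range(L)]

def forward(x, path):
    """path[l] lists the module indices active in layer l;
    active module outputs are summed, as in PathNet."""
    for l, active in enumerate(path):
        x = np.maximum(0, sum(W.T @ x + b for W, b in
                              (modules[l][m] for m in active)))
    return x

# A random path genotype: up to 3 active modules per layer.
path = [rng.choice(M, size=3, replace=False).tolist() for _ in range(L)]
out = forward(rng.normal(size=IN), path)
print(out.shape)  # prints (4,)
```

In the full PathNet setup, paths compete in tournaments and the winning path's modules are frozen after each task, which is what prevents catastrophic forgetting and gives the embedded transfer to later tasks.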

Thesis problem specification

  • Use a small PathNet structure (3 layers, 10-20 modules of small affine MLPs)
  • Test on a problem that is easy to divide into subtasks
    • Some RL-environment from OpenAI gym?
      • LunarLander:
        • Hover
        • Land safely
        • Land in goal
        • Land in goal quickly
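The four LunarLander subtasks above could be posed as a curriculum of shaped rewards over the same observation. In the sketch below the observation layout (x, y, vx, vy, ...) follows Gym's LunarLander, but the shaping terms and constants are purely illustrative assumptions, not from the thesis or the environment.

```python
# Hypothetical reward shaping for the four LunarLander subtasks.
# Constants and shaping terms are illustrative assumptions.
def hover_reward(obs):
    x, y, vx, vy = obs[:4]
    return -abs(vy) - abs(vx)          # penalise any motion: stay still

def land_safely_reward(obs, landed):
    x, y, vx, vy = obs[:4]
    # Bonus for a soft touchdown, small penalty for descent speed.
    return (10.0 if landed and abs(vx) + abs(vy) < 0.5 else 0.0) - 0.1 * abs(vy)

def land_in_goal_reward(obs, landed):
    x = obs[0]                          # the landing pad is centred at x = 0
    return land_safely_reward(obs, landed) - abs(x)

def land_in_goal_quickly_reward(obs, landed, steps):
    # Same as above, plus a per-step time penalty.
    return land_in_goal_reward(obs, landed) - 0.01 * steps

obs = [0.0, 1.0, 0.0, -0.2]             # hovering, drifting down slowly
print(round(hover_reward(obs), 2))      # prints -0.2
```

Training the subtasks in this order on the same PathNet, freezing the winning path between tasks, would be one way to test whether early subtasks transfer to later ones.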

Who cites PathNet?

Born to Learn

EPANN (Evolved Plastic Artificial Neural Networks). Mentions PathNet as an example of evolution being used to train a network on multiple tasks: "While these results were only possible through significant computational resources, they demonstrate the potential of combining evolution and deep learning approaches."

Learning time-efficient deep architectures with budgeted super networks

Mentions PathNet as a predecessor in the super neural network family

Deep Learning for video game playing

Reviews recent deep learning advances in the context of how they have been applied to play different types of video games.

Evolutive deep models for online learning on data streams with no storage

PathNet is proposed alongside PNNs (progressive neural networks) as a way to deal with changing environments. The paper notes that both PathNet and progressive networks show good results on sequences of tasks and are a good alternative to fine-tuning for accelerating learning.

Online multi-task learning using active sampling (https://openreview.net/pdf?id=H1XLbXEtg)

Cites Progressive Neural Networks for multitask learning

Hierarchical Task Generalization with Neural Programs (http://juxi.net/workshop/deep-learning-rss-2017/papers/Xu.pdf)

Multitask Evolution with Cartesian Genetic Programming (https://arxiv.org/pdf/1702.02217.pdf)
