User:Martijho
Revision as of 12:19, 8 November 2017
Keywords
- Super neural network
- Evolved sub-models from a larger set of parameters
- Multitask learning
- No catastrophic forgetting
- Embedded transfer learning
Thoughts on Thesis
- Is the search for the first path unnecessary? That search runs over permutations of the network's parameters at the same time as those parameters are trained for the first time. In other words: does the search provide a significant increase in transferability, or any measurable increase in performance, over simply picking a random path and training it for a set number of iterations?
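The comparison in question could be set up as a toy sketch: a PathNet-style binary tournament over paths versus a single random path. The fitness function below is a hypothetical placeholder standing in for "train the path briefly and measure performance"; population size, mutation rate, and step count are assumptions.

```python
import random

random.seed(0)
L, M, N = 3, 10, 3   # layers, modules per layer, active modules per layer

def random_path():
    """A path picks N distinct modules in each of the L layers."""
    return [tuple(sorted(random.sample(range(M), N))) for _ in range(L)]

def fitness(path):
    # Hypothetical stand-in for "train this path briefly, return a score".
    return -sum(sum(layer) for layer in path)

def mutate(path, p=0.1):
    """Replace each selected module with a random one with probability p."""
    new = []
    for layer in path:
        layer = set(layer)
        for m in list(layer):
            if random.random() < p:
                layer.discard(m)
                layer.add(random.randrange(M))
        while len(layer) < N:          # pad if a mutation collided
            layer.add(random.randrange(M))
        new.append(tuple(sorted(layer)))
    return new

def tournament_search(steps=50, pop_size=8):
    """Binary tournament as in PathNet: the winner's path overwrites
    the loser's slot, and the copy is mutated."""
    pop = [random_path() for _ in range(pop_size)]
    for _ in range(steps):
        a, b = random.sample(range(len(pop)), 2)
        if fitness(pop[a]) < fitness(pop[b]):
            a, b = b, a                # a is now the winner
        pop[b] = mutate(pop[a])
    return max(pop, key=fitness)

best = tournament_search()       # evolved first path
baseline = random_path()         # the "just pick one" alternative
```

Measuring transfer would then mean freezing `best` (or `baseline`), training a second task on the remaining modules, and comparing learning curves.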
Thesis problem specification
Studying the behaviour of super neural networks when saturated with subtasks from the same domain, such as in a curriculum-learning scenario. Research questions include:
- Can we estimate the decline in needed capacity for each new sub-task learned from the curriculum?
- Could a PathNet saturated with optimized paths for tasks from a curriculum provide one/few-shot learning?
- What would, in that case, constitute a "saturated PathNet"?
- Is there a learning advantage to be had from this kind of learning?
- Is there a measurable increase in performance by searching over optimal "first paths" instead of just training a selected segment of the PathNet?
PathNet structure
Small structure to reduce computational requirements.
- 3 layers with 10-20 modules per layer, each module a small affine MLP
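A rough sketch of that structure, assuming a hidden width of 20 and three active modules per layer (both invented for illustration), with the outputs of active modules in a layer summed as in the PathNet paper:

```python
import numpy as np

rng = np.random.default_rng(0)

L, M = 3, 10       # 3 layers, 10 modules per layer
H = 20             # hidden width of each affine module (assumed)
N_ACTIVE = 3       # modules active per layer along one path

# Each module is a small affine map: x -> relu(W @ x + b)
modules = [[(rng.standard_normal((H, H)) * 0.1, np.zeros(H))
            for _ in range(M)] for _ in range(L)]

def sample_path():
    """A path selects N_ACTIVE distinct modules in every layer."""
    return [rng.choice(M, size=N_ACTIVE, replace=False) for _ in range(L)]

def forward(x, path):
    """Outputs of the active modules in each layer are summed."""
    for layer, active in zip(modules, path):
        x = sum(np.maximum(W @ x + b, 0.0) for W, b in (layer[i] for i in active))
    return x

path = sample_path()
y = forward(rng.standard_normal(H), path)
```

At this size one forward pass touches at most 9 of the 30 modules, which is what keeps per-task training cheap.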
Test scenario
One episode must be fairly quick to run. Small input dimensionality reduces both the necessary capacity of the PathNet structure and computation time. The scenario must also be easy to divide into subtasks.
- OpenAI gym?
- LunarLander:
- Hover
- Land safely
- Land in goal
- Land in goal quickly
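The subtask split above could be expressed as reward shapings layered on top of each other. The state layout and all thresholds below are invented for illustration (the real LunarLander observation has more fields than this four-tuple):

```python
# Hypothetical reward shapings for a LunarLander-style curriculum.
# state = (x, y, vx, vy): horizontal/vertical position and velocity.

def hover(state):
    """Reward staying aloft with low velocity."""
    _, y, vx, vy = state
    return -abs(vx) - abs(vy) + (1.0 if y > 0.5 else 0.0)

def land_safely(state, landed):
    """Bonus for touching down with near-zero velocity."""
    _, _, vx, vy = state
    return 10.0 if landed and abs(vx) < 0.1 and abs(vy) < 0.1 else 0.0

def land_in_goal(state, landed, goal_x=0.0, width=0.2):
    """Safe landing plus a bonus for landing inside the goal region."""
    x, _, _, _ = state
    bonus = 5.0 if landed and abs(x - goal_x) < width else 0.0
    return land_safely(state, landed) + bonus

def land_in_goal_quickly(state, landed, steps, budget=300):
    """Goal landing plus a bonus for finishing within a step budget."""
    bonus = 5.0 if landed and steps < budget else 0.0
    return land_in_goal(state, landed) + bonus
```

Each subtask strictly contains the previous one's objective, which is the property a curriculum over a single PathNet would rely on.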
Who cites PathNet?
[https://arxiv.org/pdf/1703.10371.pdf Born to Learn] (EPANN - Evolved Plastic Artificial Neural Networks)
Mentions PathNet as an example of where evolution was used to train a network on multiple tasks: "While these results were only possible through significant computational resources, they demonstrate the potential of combining evolution and deep learning approaches."

[https://arxiv.org/pdf/1706.00046.pdf Learning time-efficient deep architectures with budgeted super networks]
Mentions PathNet as a predecessor in the super neural network family.

[https://arxiv.org/pdf/1708.07902.pdf Deep Learning for video game playing]
Reviews recent deep learning advances in the context of how they have been applied to play different types of video games.

[http://ceur-ws.org/Vol-1958/IOTSTREAMING2.pdf Evolutive deep models for online learning on data streams with no storage]
Proposes PathNet alongside PNNs as a way to deal with changing environments. Notes that both PathNet and progressive networks show good results on sequences of tasks and are a good alternative to fine-tuning for accelerating learning.

[https://openreview.net/pdf?id=H1XLbXEtg Online multi-task learning using active sampling]
Cites Progressive Neural Networks for multitask learning.

[http://juxi.net/workshop/deep-learning-rss-2017/papers/Xu.pdf Hierarchical Task Generalization with Neural Programs]
Mentions PathNet as a way of reusing weights.

[https://arxiv.org/pdf/1702.02217.pdf Multitask Evolution with Cartesian Genetic Programming]
Mentions PathNet in a list of systems that use evolution as a tool in multitasking.