From Robin
Key "words"
- Super neural network
- Evolved sub-models from a larger set of parameters (see the sketch after this list)
- Multitask learning
- No catastrophic forgetting
- Embedded transfer learning
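A minimal sketch of that path-evolution idea, PathNet's binary tournament selection (Fernando et al., 2017): two random paths are compared and the loser is overwritten by a mutated copy of the winner. Here `evaluate_path` is a hypothetical placeholder that would train and score the sub-model a path selects, and the grid sizes match the structure notes below.

```python
import random

N_LAYERS, N_MODULES, MAX_ACTIVE = 3, 10, 3  # grid sizes; see "PathNet structure" below

def random_path():
    # A path picks up to MAX_ACTIVE distinct modules in each layer.
    return [random.sample(range(N_MODULES), MAX_ACTIVE) for _ in range(N_LAYERS)]

def mutate(path, rate=0.1):
    # Re-draw each module choice independently with small probability.
    return [[random.randrange(N_MODULES) if random.random() < rate else m
             for m in layer] for layer in path]

def evolve(evaluate_path, population_size=64, tournaments=500):
    # Binary tournament selection: the loser of each pairwise comparison
    # is replaced by a mutated copy of the winner.
    population = [random_path() for _ in range(population_size)]
    for _ in range(tournaments):
        a, b = random.sample(range(population_size), 2)
        if evaluate_path(population[a]) >= evaluate_path(population[b]):
            population[b] = mutate(population[a])
        else:
            population[a] = mutate(population[b])
    return max(population, key=evaluate_path)
```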
Thesis problem specification
Studying the behaviour of super neural networks when saturated with sub-tasks from the same domain, as in a curriculum-learning scenario. Research questions include:
- Can we estimate the decline in capacity needed for each new sub-task learned from the curriculum?
- Could a PathNet saturated with optimized paths for tasks from a curriculum provide one/few-shot learning?
- What would, in that case, constitute a "saturated PathNet"?
- Is there a learning advantage to be had from this kind of training?
PathNet structure
Small structure to reduce computational requirements.
- (3 layers, 10-20 modules per layer, each a small affine MLP)
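A minimal sketch of such a grid, assuming PyTorch, 10 modules per layer, and a hypothetical hidden width of 20; within a layer, the outputs of the modules active on a path are summed, as in the original PathNet.

```python
import torch
import torch.nn as nn

class PathNet(nn.Module):
    """Grid of small affine MLP modules; a path activates a few per layer."""

    def __init__(self, in_dim, out_dim, n_layers=3, n_modules=10, width=20):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.ModuleList(
                nn.Sequential(nn.Linear(in_dim if l == 0 else width, width),
                              nn.ReLU())
                for _ in range(n_modules))
            for l in range(n_layers))
        self.head = nn.Linear(width, out_dim)  # task-specific readout layer

    def forward(self, x, path):
        # path: one list of active module indices per layer, e.g. [[0, 3], [7], [2, 5]]
        for modules, active in zip(self.layers, path):
            # Sum the outputs of the active modules in this layer.
            x = torch.stack([modules[i](x) for i in active]).sum(dim=0)
        return self.head(x)

# For LunarLander-v2: 8-dimensional observations, 4 discrete actions.
net = PathNet(in_dim=8, out_dim=4)
out = net(torch.randn(32, 8), path=[[0, 3, 7], [1, 2, 9], [4, 5, 6]])
```

Freezing the parameters along a task's winning path and re-initializing the rest is what the "no catastrophic forgetting" keyword refers to: later tasks may reuse frozen modules but never overwrite them.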
Test scenario
One episode must run fairly quickly. Small input dimensionality reduces both the necessary capacity of the PathNet structure and the computational cost. The scenario must also be easy to divide into sub-tasks (a sketch of one candidate sub-task follows the list).
- OpenAI gym?
- LunarLander:
- Hover
- Land safely
- Land in goal
- Land in goal quickly
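A sketch of how one such sub-task could be carved out, assuming the classic Gym API and LunarLander-v2. The HoverTask wrapper and its reward shaping are hypothetical stand-ins for whatever shaped rewards the curriculum ends up using.

```python
import gym

class HoverTask(gym.Wrapper):
    """Hypothetical 'hover' sub-task: reward staying near a target altitude."""

    TARGET_Y = 1.0  # assumed target height in LunarLander's observation scale

    def step(self, action):
        obs, _, done, info = self.env.step(action)
        # In LunarLander-v2, obs[1] is vertical position and obs[3] vertical speed;
        # the original landing reward is replaced by a simple shaping term.
        shaped_reward = -abs(obs[1] - self.TARGET_Y) - 0.1 * abs(obs[3])
        return obs, shaped_reward, done, info

env = HoverTask(gym.make("LunarLander-v2"))
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
```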
Who cites PathNet?
Born to Learn (EPANN - Evolved Plastic Artificial Neural Networks)
Mentions PathNet as an example of evolution being used to train a network on multiple tasks: "While these results were only possible through significant computational resources, they demonstrate the potential of combining evolution and deep learning approaches."
Learning time-efficient deep architectures with budgeted super networks
Mentions PathNet as a predecessor in the super neural network family
Deep Learning for video game playing
Reviews recent deep learning advances in the context of how they have been applied to playing different types of video games
Evolutive deep models for online learning on data streams with no storage
PathNet is proposed alongside PNNs (Progressive Neural Networks) as a way to deal with changing environments. Both PathNet and progressive networks are noted to show good results on sequences of tasks and to be a good alternative to fine-tuning for accelerating learning.
Online multi-task learning using active sampling
Cites Progressive Neural Networks for multitask learning
Hierarchical Task Generalization with Neural Programs
Mentions PathNet as a way of reusing weights