Current draft

The current draft of the thesis can be found by following this link:

Current thesis draft

Updated as of 24.04.2018

Presentations

  • Presentation of PathNet and research questions
  • Transfer learning in SNNs: (tl;dr) + First-path-experiments
  • Transfer learning in SNNs: Search-experiments
  • ML for the cool kids

Thesis structure and notes

A separate page describing the outline and section structure of the thesis.

Thesis structure and outlines


A separate page describing research questions and the experiments proposed to answer them.


Terms to use in thesis

  • Plastic Neural Network
A neural network that changes its topology or connectivity according to a learning algorithm
  • In-silico
Performed by computer
  • Modular Super Neural Network
A DNN consisting of modules of smaller NNs
  • Task-specific meme
The smallest concise unit of knowledge required to perform some task.
Example - Task = Pick something up. Meme = Ability to bend the index finger
  • Memetics
The study of information in an analogy to Darwinian evolution
  • Transferability
The ability to transfer/reuse knowledge between tasks
  • Saturation in PathNet
Most or all modules are trained and locked against further backpropagation
  • Embedded transfer learning
Knowledge transfer capability is incorporated into the machine learning structure itself (PathNet, PNNs)
  • Catastrophic forgetting
Forgetting previously learned tasks when fine-tuning parameters on a new one
  • Evolved sub-models
Using GAs to evolve paths through a larger set of parameters (PathNet functionality)

Thoughts on Thesis

- Is the search for the first path unnecessary? That search runs over permutations of modules whose parameters are being trained for the first time. In other words: does the search provide a significant increase in transferability, or any measurable increase in performance, over just picking a random path and training it for a set number of iterations?
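A minimal sketch of the comparison this question calls for; the pathnet interface used here (train_path, evolutionary_search) is a hypothetical stand-in, not the thesis implementation:

  # Hypothetical experiment: does searching for the first path beat a random one?
  # `pathnet.train_path` / `pathnet.evolutionary_search` are assumed stand-ins.
  import random

  def random_path(n_layers=3, n_modules=10, active_per_layer=3):
      """Pick `active_per_layer` random module indices in each layer."""
      return [random.sample(range(n_modules), active_per_layer)
              for _ in range(n_layers)]

  def compare_first_path(pathnet, task, budget=500):
      # Baseline: fix one random path and spend the whole budget training it.
      baseline = pathnet.train_path(random_path(), task, iterations=budget)
      # Alternative: spend the same budget on evolutionary path search.
      searched = pathnet.evolutionary_search(task, iterations=budget)
      return baseline, searched  # compare accuracy/transferability downstream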

- When training on a saturated PathNet, it might be quicker to preprocess the data once for each path (viewing the frozen path as a feature extractor), since there is no backpropagation except in the final task-specific softmax layer
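A sketch of that idea in Keras; frozen_path_model, x_train, y_train, and num_classes are assumed to exist, with the frozen path available as a Keras Model:

  # Sketch: treat a frozen path as a fixed feature extractor and train only
  # the task-specific softmax head on precomputed features.
  from keras.models import Sequential
  from keras.layers import Dense

  # Run the frozen path over the data once instead of every epoch.
  features = frozen_path_model.predict(x_train)

  head = Sequential([Dense(num_classes, activation='softmax',
                           input_shape=(features.shape[1],))])
  head.compile(optimizer='sgd', loss='categorical_crossentropy',
               metrics=['accuracy'])
  head.fit(features, y_train, epochs=10)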

- When training on a curriculum, a decrease in batch size for each increase in task difficulty might make sense. Easy examples have little "nuance" between data points, so a large batch size might increase convergence speed. Conversely, complex tasks later in the curriculum might have a lot of "detail" which will be drowned out if the batch size is kept constant.
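A minimal sketch of such a schedule; the halving rule and the curriculum/model objects are illustrative assumptions:

  # Sketch: shrink the batch size as the curriculum gets harder.
  # The halving schedule is an arbitrary illustrative choice.
  def batch_size_for_stage(stage, initial=256, minimum=16):
      """Halve the batch size for each curriculum stage, with a floor."""
      return max(minimum, initial // (2 ** stage))

  # `curriculum` is assumed to be a list of (x, y) arrays ordered by difficulty.
  for stage, (x_stage, y_stage) in enumerate(curriculum):
      model.fit(x_stage, y_stage,
                batch_size=batch_size_for_stage(stage), epochs=5)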

Thesis problem specification

Studying the behaviour of super neural networks when saturated with subtasks from the same domain, such as in a curriculum learning scenario. Research questions include:

  • Can we estimate the decline in needed capacity for each new sub-task learned from the curriculum?
  • Could a PathNet saturated with optimized paths for tasks from a curriculum provide one/few-shot learning?
    • What would, in that case, constitute a "saturated PathNet"?
    • Is there a learning advantage to be had from this kind of learning?
  • Is there a measurable increase in performance by searching over optimal "first paths" instead of just training a selected segment of the PathNet?

PathNet Implementation

The PathNet is implemented using Keras with a TensorFlow backend, in an object-oriented structure with a high level of modularity.

PathNet layers are represented as subclasses of Layer; currently only DenseLayer is implemented. These contain all modules in the layer, along with functionality for providing a log of layer information (used for saving the PathNet to disk), merging selected modules from the layer into a new model, and temporarily storing weights in the layer and loading them back (used during backend session resets). Task objects contain the task's unique softmax layer and, eventually, an optimal path, as well as functionality for providing a log (again, for saving the PathNet to disk) and for applying the unique layer to a new model.
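A rough skeleton of that structure; class and method names are paraphrased from the description above, not taken from the actual source:

  # Rough skeleton of the described structure; names are paraphrased
  # from the prose above, not copied from the real implementation.
  from keras.layers import Dense

  class Layer(object):
      """Base class for PathNet layers."""

  class DenseLayer(Layer):
      """Holds all dense modules of one PathNet layer."""
      def __init__(self, n_modules, width):
          self.modules = [Dense(width, activation='relu')
                          for _ in range(n_modules)]
          self._stashed_weights = None

      def get_log(self):
          """Layer information used when saving the PathNet to disk."""

      def merge_into(self, x, selected):
          """Apply the selected modules of this layer to tensor `x`."""

      def store_weights(self):
          """Stash module weights before a backend session reset."""
          self._stashed_weights = [m.get_weights() for m in self.modules]

      def load_weights(self):
          """Restore stashed weights after the session reset."""
          for m, w in zip(self.modules, self._stashed_weights):
              m.set_weights(w)

  class Task(object):
      """Per-task state: the unique softmax layer and an optimal path."""
      def __init__(self, num_classes):
          self.softmax = Dense(num_classes, activation='softmax')
          self.optimal_path = None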

A PathSearch class contains all implemented search algorithms (currently a tournament search and a simple evolutionary search). This class uses a provided PathNet object, which supplies paths (genotypes) and models (for fitness evaluation). The search methods return an optimal path alongside a history structure that is used in the Analytics class, where test results are stored and plotted.
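A minimal sketch of the tournament variant; pathnet.random_path and pathnet.evaluate are assumed interfaces for illustration:

  # Minimal sketch of a binary tournament over paths, in the spirit of the
  # PathNet paper. `pathnet.random_path()` and `pathnet.evaluate(path, task)`
  # are assumed interfaces, not the actual thesis API.
  import random

  def mutate(path, n_modules=10, rate=0.1):
      """Re-draw each module index with probability `rate`."""
      return [[m if random.random() > rate else random.randrange(n_modules)
               for m in layer] for layer in path]

  def tournament_search(pathnet, task, generations=100, pop_size=64):
      population = [pathnet.random_path() for _ in range(pop_size)]
      history = []
      for _ in range(generations):
          a, b = random.sample(range(pop_size), 2)
          fit_a = pathnet.evaluate(population[a], task)
          fit_b = pathnet.evaluate(population[b], task)
          winner, loser = (a, b) if fit_a >= fit_b else (b, a)
          # The loser is overwritten by a mutated copy of the winner.
          population[loser] = mutate(population[winner])
          history.append(max(fit_a, fit_b))
      best = max(population, key=lambda p: pathnet.evaluate(p, task))
      return best, history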

PathNet structure

A small structure to reduce computational requirements: 3 layers of 10-20 modules, each module a small affine MLP.
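A sketch of how such a structure and a path through it could be assembled with the Keras functional API; the concrete sizes (10 modules per layer, width 20) are one reading of the numbers above, and summing the active modules within a layer follows the PathNet paper:

  # Sketch: a pool of shared dense modules plus a model builder for one path.
  # Sizes (10 modules/layer, width 20) are one reading of the numbers above.
  from keras.layers import Input, Dense, add
  from keras.models import Model

  N_LAYERS, N_MODULES, WIDTH = 3, 10, 20

  # Shared module pool, indexed [layer][module]. Because the Dense objects
  # are shared, any two path models reuse (and co-train) the same weights.
  modules = [[Dense(WIDTH, activation='relu') for _ in range(N_MODULES)]
             for _ in range(N_LAYERS)]

  def build_path_model(path, input_dim, num_classes):
      """Build a model from `path`, a list of module-index lists per layer.
      Active module outputs are summed within each layer."""
      inp = Input(shape=(input_dim,))
      x = inp
      for layer_idx, active in enumerate(path):
          outs = [modules[layer_idx][m](x) for m in active]
          x = outs[0] if len(outs) == 1 else add(outs)
      out = Dense(num_classes, activation='softmax')(x)  # task-specific head
      return Model(inp, out)

  model = build_path_model([[0, 3, 7], [1, 2], [4, 5, 9]],
                           input_dim=8, num_classes=4)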

Test scenario

Must be fairly quick to provide one episode. Small input dimensionality reduces the necessary capacity of the PathNet structure and the computation time. The scenario must also be easy to divide into subtasks; a sketch of one candidate subtask follows the list below.

  • OpenAI gym?
    • LunarLander:
      • Hover
      • Land safely
      • Land in goal
      • Land in goal quickly
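One way to realize a subtask like "Hover" is a reward-shaping wrapper around the gym environment; the shaping terms below are illustrative guesses, not tuned rewards:

  # Illustrative 'Hover' subtask for LunarLander-v2 via reward shaping.
  # The shaping terms are guesses for illustration, not tuned values.
  import gym

  class HoverTask(gym.Wrapper):
      def step(self, action):
          obs, _, done, info = self.env.step(action)
          x, y, vx, vy = obs[0], obs[1], obs[2], obs[3]
          # Reward staying centered above the pad with low velocity.
          reward = 1.0 - abs(x) - abs(vx) - abs(vy)
          if y < 0.1:  # touching down ends the hover subtask
              done = True
          return obs, reward, done, info

  env = HoverTask(gym.make('LunarLander-v2'))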

Who cites PathNet?

Born to Learn EPANN - Evolved Plastic Artificial Neural Networks. Mentions PathNet as an example where evolution was used to train a network on multiple tasks: "While these results were only possible through significant computational resources, they demonstrate the potential of combining evolution and deep learning approaches."

Learning time-efficient deep architectures with budgeted super networks. Mentions PathNet as a predecessor in the super neural network family.

Deep Learning for video game playing. Reviews recent deep learning advances in the context of how they have been applied to playing different types of video games.

Evolutive deep models for online learning on data streams with no storage. PathNet is proposed alongside PNNs as a way to deal with changing environments. It is mentioned that both PathNet and progressive networks show good results on sequences of tasks and are a good alternative to fine-tuning for accelerating learning.

Online multi-task learning using active sampling. Cites Progressive Neural Networks for multi-task learning.

Hierarchical Task Generalization with Neural Programs. Mentions PathNet as a way of reusing weights.

Multitask Evolution with Cartesian Genetic Programming. Mentions PathNet in a list of systems that use evolution as a tool in multitasking.
