Martijho-PathNet-thesis


Revision as of 13:37, 21 March 2018


Opening

Abstract

  • What is all this about?
  • Why should I read this thesis?
  • Is it any good?
  • What's new?

Acknowledgements

  • Who is your advisor?
  • Did anyone help you?
  • Who funded this work?
  • What's the name of your favorite pet?

Introduction

From the essay. More on multi-task learning; more on transfer learning.

Raise the problem: catastrophic forgetting.

Multiple solutions: Progressive Neural Networks (PNN), PathNet (PN), Elastic Weight Consolidation (EWC)

  • Large structures (PNN, PN)
  • Limited in the number of tasks it can retain (EWC; penalty sketched below)
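
For reference, the EWC penalty from Kirkpatrick et al. (2017), which anchors parameters important to an earlier task A while training on a new task B (a standard statement of the loss, added here for context):

    L(\theta) = L_B(\theta) + \sum_i \frac{\lambda}{2} F_i \left(\theta_i - \theta^{*}_{A,i}\right)^2

where F_i is the diagonal Fisher information estimating how important parameter i was to task A. Capacity runs out as more tasks add competing anchors on the same weights.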

Optimize reuse of knowledge while still providing valid solutions to tasks. More reuse and lower capacity consumption will increase the number of tasks a structure can learn.
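A back-of-the-envelope sketch of that claim (a minimal illustration with hypothetical numbers, not from the thesis): with a fixed module pool, where each task freezes the non-reused modules on its path, higher reuse lets more tasks fit.

    # Back-of-the-envelope estimate of the claim above. All numbers are
    # hypothetical: a fixed pool of modules, a fixed path size per task,
    # and each task freezing whatever share of its path is not reused.

    def tasks_until_full(total_modules=64, path_size=12, reuse_fraction=0.0):
        """Tasks learnable before the frozen-module pool is exhausted."""
        frozen, tasks = 0, 0
        while True:
            # First task freezes its whole path; later tasks freeze only
            # the non-reused share (at least one module, to stay finite).
            new = path_size if tasks == 0 else max(
                1, round(path_size * (1 - reuse_fraction)))
            if frozen + new > total_modules:
                return tasks
            frozen += new
            tasks += 1

    for r in (0.0, 0.25, 0.5, 0.75):
        print(f"reuse={r:.2f}: fits {tasks_until_full(reuse_fraction=r)} tasks")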

Where do I start?

A question DeepMind left unanswered is how different GAs influence task learning and module reuse. Exploration vs. exploitation \ref{theoretic background on topic}
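A minimal sketch of the knob in question, assuming the pairwise tournament used in the original PathNet paper; the fitness function and mutation details are placeholders. Tournament size acts as the exploration/exploitation dial: 2 preserves exploration, larger values exploit the current best paths harder.

    import random

    def tournament_step(population, fitness, tournament_size=2,
                        mutation_rate=0.1, num_modules=10):
        """One selection step over a population of paths (lists of module indices)."""
        idx = random.sample(range(len(population)), tournament_size)
        idx.sort(key=lambda i: fitness(population[i]), reverse=True)
        winner = population[idx[0]]
        for loser in idx[1:]:
            # Losers are overwritten by a mutated copy of the winner:
            # each gene shifts by a small random offset with some probability.
            population[loser] = [
                (g + random.randint(-2, 2)) % num_modules
                if random.random() < mutation_rate else g
                for g in winner
            ]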

Why this?

Broad answers first, specify later. We know PathNet works; would it work better with different algorithms? This is the logical next step from the original paper's "unit of evolution".

Problem/hypothesis

  • What does modular PathNet training do with the knowledge?
    • More/less accuracy?
    • More/less transferability?

Test by learning the first task end-to-end, then running a PathNet search. Is there a difference in performance or reuse?
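A rough outline of that test; every function name here is a placeholder for illustration, not an existing API.

    # Hypothetical outline of the comparison: learn task 1 either end-to-end
    # or with a PathNet search, then search for a task-2 path and compare.

    def experiment_1(task1, task2):
        results = {}
        for condition in ("end_to_end", "pathnet"):
            net = make_pathnet()                    # fresh network per condition
            if condition == "end_to_end":
                train_end_to_end(net, task1)        # one fixed path, backprop only
            else:
                pathnet_search(net, task1)          # evolutionary path search
            freeze_task_modules(net, task1)         # lock in task-1 knowledge
            path2 = pathnet_search(net, task2)      # task 2 may reuse frozen modules
            results[condition] = {
                "accuracy": evaluate(net, path2, task2),
                "reuse": overlap_with_task1(net, path2),
            }
        return results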

  • Can we make reuse easier by shifting the focus of the search algorithm?
    • Original PathNet: naive search. Would higher exploitation improve module selection?

How to answer?

  • Set up simple multi-task scenarios and try them.
    • Two tasks, where the first is learned end-to-end vs. with a PathNet search.
    • List algorithms with different selection pressures and try them on multiple tasks (sketch below).
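One way to realize that list, reusing the tournament sketch from the Introduction; the task sequence and run_pathnet are hypothetical names.

    # Same task sequence under increasing selection pressure (tournament size).
    pressures = {"low": 2, "medium": 5, "high": 10}
    for label, size in pressures.items():
        for task in ("task_a", "task_b"):           # placeholder task sequence
            stats = run_pathnet(task, tournament_size=size)
            print(label, task, stats["accuracy"], stats["reuse"])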


Theoretical Background

Implementation

Experiment 1: Search versus Selection

Experiment 2: Selection Pressure

Discussion

Ending
