4th Modelling Symposium:

Introducing Deep Neural Networks

organized by Felix Ball



Updates


Overview


I am pleased to announce the 4th Modelling Symposium, which once more provides a mix of theoretical content and application-oriented analyses. The next symposium will cover Deep Neural Networks (DNNs), including a basic introduction to DNNs, common building blocks, design patterns and architectures, best practices, optimization, applications, and more. To this end, it is my pleasure to welcome Prof. Dr. Sebastian Stober as tutor for this year’s symposium.
 
Goal: Please note that DNNs are complex and that this course is meant to help you get started with DNN analyses. The workshop provides a general introduction to DNNs covering a wide range of topics. After the four days you should have an overview of different DNNs, their strengths and weaknesses, which parameters of a model are important, and which ones you might have to tweak. The course will also help you decide which information and parameters matter at the various steps of a DNN analysis, and it will help you better understand the DNN literature (e.g. whether authors omitted important information about the presented models).

 

Have a look at NoesseltLab.org if you want to know more about previous events.

 

When & Where


When

  • 26.07.2021 - 30.07.2021

 

Where

  • This year the symposium will likely be held ONLINE (unfortunately, it is not clear how the pandemic will develop and which restrictions might be imposed).

 

Wednesday off!

  • There needs to be some time to digest!

Detailed Program (subject to change)


Please note that Central European Time (i.e. the Berlin time zone) applies on all days.

1st half of the week


Monday
(Basics and CNNs)

09.00 - 10.30: General introduction

                         (machine learning basics)
                         Break

11.00 - 12.30: Convolutional Neural Networks I  (Basics)
                         Break

14.00 - 15.30: Convolutional Neural Networks II (Hands-on)
                         Break

16.00 - 17.30: Convolutional Neural Networks III

                         (Advanced)
                         Break

17.40 - 18.30: OPTIONAL - Discussing your data models


Tuesday
(common building blocks, design patterns and architectures)

09.00 - 10.30: Recurrent Neural Networks I  (Basics)
                         Break

11.00 - 12.30: Recurrent Neural Networks II (Hands-on)
                         Break

14.00 - 15.30: Attention mechanisms
                         Break

16.00 - 17.30: Transformers
                         Break

17.40 - 18.30: OPTIONAL - Discussing your data models

2nd half of the week


Thursday
(best practices [BP], optimization and introspection)

09.00 - 10.30: Best practices, optimization and

                         regularization techniques I (Basics)
                         Break

11.00 - 12.30: Best practices, optimization and

                         regularization techniques II (Hands-on)
                         Break

14.00 - 15.30: Introspection I (Basics)
                         Break

16.00 - 17.30: Introspection II (Hands-on)
                         Break

17.40 - 18.30: OPTIONAL - Discussing your data models


Friday
(Applications, transfer learning and sneak peek)

09.00 - 10.30: Present your data
                        Break

11.00 - 12.30: Possible applications (EEG and fMRI)
                         Break

14.00 - 15.30: Model compression and transfer learning
                         Break

16.00 - 17.30: Sneak peek and summary


Software, Code, Equipment, & Requirements


All information will be regularly updated, so please check for updates!
 
Software: Hands-on sessions will be based on Python and TensorFlow.
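If you have not used TensorFlow before, the following minimal sketch (illustrative only, not course material) shows the style of code the hands-on sessions build on: a small convolutional network defined and trained with TensorFlow's Keras API. All layer sizes and the input shape are placeholders.

    import tensorflow as tf

    # A toy convolutional network; all sizes are placeholders.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, kernel_size=3, activation="relu",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(pool_size=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # Standard setup for a classification task.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()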
 
Code & Equipment: The code will be provided during the symposium. We will use a computing cluster, so you do not have to worry about software installation. All you need is a laptop/PC and a stable internet connection.
 
Requirements: The hands-on sessions require general coding skills; they are not suited for absolute beginners. You should already have written pieces of code, e.g. a data analysis or an experiment. You should know Python and NumPy: loops and conditions, the different types of variables, n-dimensional arrays, what a function is, etc. Please note that we do not have time to cover basic programming. As a rough yardstick, see the short sketch below.
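If a snippet like the following looks familiar, you meet the prerequisites (a hypothetical example, not an entry test): it uses a function, a loop with a condition, and an n-dimensional NumPy array.

    import numpy as np

    def zscore(data):
        # Standardise each column of a 2-D array.
        return (data - data.mean(axis=0)) / data.std(axis=0)

    trials = np.random.randn(100, 8)  # 2-D array (trials x channels)
    outliers = [i for i, t in enumerate(trials) if np.abs(t).max() > 3]  # loop + condition
    print(zscore(trials).shape, len(outliers))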
 
Literature: This course will cover a variety of topics related to DNNs. To enhance your experience and avoid being overwhelmed (e.g. in case you have never heard of DNNs before), you should consider reading about DNNs beforehand. Here are suggestions for getting started with DNNs and computational models (more might follow):

 

Papers and books
 
Storrs & Kriegeskorte; Kriegeskorte & Douglas; Cichy & Kaiser; Goodfellow, Bengio & Courville
 
Videos
 
TensorFlow and DNNs without PhD

Speaker: Prof. Dr. Sebastian Stober


Photo credit: Jana Dünnhaupt / Universität Magdeburg

Sebastian Stober is an interdisciplinary researcher with a PhD in computer science and a background in (applied) machine learning, (music) information retrieval and cognitive neuroscience. He is especially interested in so-called “human-in-the-loop” scenarios, in which humans and machines learn from each other and together contribute to the solution of a problem. Since October 2018, he has been Professor for Artificial Intelligence at the Otto-von-Guericke-University Magdeburg. Before that, he headed a new junior research group on Machine Learning in Cognitive Science at the University of Potsdam, and from 2013 to 2015 he was a postdoctoral fellow in the labs of Adrian Owen and Jessica Grahn at the Brain and Mind Institute at Western University in London, Ontario.


Contact


ScienceEventsFB[at]gmail.com