dScience Lunch Seminar: Kickoff

Welcome to the kickoff of our new series of monthly lunch seminars! Grab some lunch and join us for a talk by Luca Galimberti from NTNU, followed by an introduction to dScience by centre leader Morten Dæhlen.

Time and place: Oct. 14, 2021 11:30 AM–1:00 PM, The Science Library



Luca Galimberti is a mathematician currently based at NTNU, after obtaining his PhD from ETH Zurich and working at the Department of Mathematics at UiO. Recently, his research has focused on machine learning and neural networks. More precisely, he is interested in using the powerful tools that mathematics offers to advance the theory in a formalized and rigorous way, and in applications geared towards quantitative finance.

dScience – Centre for Computational and Data Science – at the University of Oslo (UiO) is an interdisciplinary centre that develops and supports research within computational science and data science across UiO, together with partners in industry and the public sector. Morten Dæhlen, centre leader, will present the centre and our upcoming activities.

Register now!


11:30 – Doors open and lunch is served

12:15 – “Deep Learning and Neural Networks: an infinite dimensional perspective” by Luca Galimberti (postdoctoral fellow, NTNU)

12:50 – Introduction to dScience and upcoming events by Morten Dæhlen (centre leader, dScience)

13:00 – Mingling (and goodbye)

To participate, please fill out the registration form by October 12, so that we can order enough food and drinks! (Registration is not binding.)

Deep Learning and Neural Networks: an infinite dimensional perspective

The primary aim of any neural network architecture is to approximate an unknown mathematical function y = f(x) (x = attributes, y = response), relying only on a finite number of observations. The accuracy of this approximation depends on the distribution of the dataset and on the architecture of the network employed. The function f(x) can be arbitrarily complex. The Universal Approximation Theorem tells us that neural networks have a kind of universality: no matter what f(x) is, there is a network that approximates the unknown target f to any desired accuracy. This result is valid for any finite number of attributes x.
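To make the idea concrete, here is a minimal toy sketch (not from the talk) of the setting the abstract describes: a one-hidden-layer network with tanh units, trained by plain gradient descent on a finite sample, approximating an arbitrarily chosen continuous target f(x) = sin(x). The target, network size, and learning rate are all illustrative assumptions.

```python
import numpy as np

# Toy illustration of universal approximation: fit f(x) = sin(x)
# (a stand-in for the unknown target) from 200 observations,
# using a single hidden layer of 20 tanh units.
rng = np.random.default_rng(0)

x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)  # attributes
y = np.sin(x)                                       # responses

n_hidden = 20
W1 = rng.normal(scale=1.0, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
b2 = np.zeros(1)

lr = 0.01
for step in range(10000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)      # hidden activations, shape (200, 20)
    y_hat = h @ W2 + b2           # network output, shape (200, 1)
    err = y_hat - y

    # Backward pass for the mean squared error loss.
    grad_y = 2 * err / len(x)
    gW2 = h.T @ grad_y
    gb2 = grad_y.sum(axis=0)
    grad_h = (grad_y @ W2.T) * (1 - h ** 2)   # tanh' = 1 - tanh^2
    gW1 = x.T @ grad_h
    gb1 = grad_h.sum(axis=0)

    # Gradient descent update.
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final MSE: {mse:.4f}")
```

Adding more hidden units (or more data) drives the error down further; the theorem guarantees that, with enough units, the error can be made as small as we like for any continuous target on a compact set.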

However, in some situations and applications, we are required to approximate functions f which depend on an infinite number of attributes, a fact that prohibits us from using the Universal Approximation Theorem above.

In this talk, we will present a novel universal approximation result that is valid also for unknown functions f depending on an infinite number of variables, ensuring that neural networks exhibit the universality property above in this case as well.

About the seminar series

Once a month, dScience will invite you to join us for lunch, soft drinks and professional talks at the Science Library. In addition, we will serve lunch to PhD candidates in our lounge in Kristine Bonnevies hus every Thursday. Due to limited space (40 people), this will be first come, first served. See how to find us here.

Our lounge can also be booked by PhD candidates and postdocs on a regular basis, whether for a meeting or just to hang out – we have fresh coffee all day long!




Published Oct. 6, 2021 3:37 PM – Last modified Oct. 6, 2021 3:37 PM