Workshop with Graphcore: Next Gen AI Research on IPUs

Register by following the link below:

Register for the workshop

*Due to the coronavirus situation, the physical part of this event has been cancelled. The event will go ahead as planned on Zoom.

Introduction to the IPU 

Graphcore is a UK technology unicorn (valued at $2.8B) that has developed the IPU (Intelligence Processing Unit), a novel, deliberately different chip designed from the ground up for Machine Intelligence workloads. The IPU's unique architecture enables researchers to achieve significantly higher performance from their existing ML models and, more importantly, to open up areas of investigation that were previously not feasible.

This workshop will cover why hardware has an outsized effect on the direction of ML research, Graphcore's wager on the ML architectures still to be developed, and how we have designed a technology to meet those needs. We will also take a deep dive into the chip architecture and how to program it, with examples in TensorFlow and PyTorch.


Program

3:00-3:10pm  Welcome and Introduction

3:10-3:40pm  IPU technical deep dive

3:40-4:20pm  Research talk 1 – Dominic Masters

4:20-4:30pm  Q&A

4:30-5:00pm  Research talks 2 & 3 – Daniel Justus

5:00-5:10pm  Q&A

5:10-6:00pm  Live demo / hands-on session

5:45-6:30pm  Networking


Research talk 1 

Speaker: Dominic Masters, Research Team Lead at Graphcore 

Talk #1: Using IPUs to Make EfficientNet Efficient in Practice, Not Just in Theory 

Abstract:  

Modern convolutional neural networks have become increasingly reliant on FLOP-efficient depthwise and group convolutions, which allow them to achieve high accuracy while keeping the "theoretical" cost, measured in FLOPs, low. In practice, however, these operations are notoriously hard to accelerate on GPUs. This is particularly true for the state-of-the-art EfficientNet model. We show how these operations can easily be accelerated on IPUs and use this knowledge to train EfficientNet an order of magnitude faster.
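
To make the FLOP argument concrete, here is a minimal sketch (assuming PyTorch; the layer sizes are illustrative, not taken from EfficientNet) contrasting a standard convolution with a depthwise convolution of the same shape:

    import torch
    import torch.nn as nn

    channels, k = 64, 3

    # Standard convolution: every output channel mixes all input channels.
    standard = nn.Conv2d(channels, channels, kernel_size=k, padding=1)

    # Depthwise convolution (groups == channels): each channel is filtered
    # independently, cutting parameters and multiply-accumulates ~channels-fold.
    depthwise = nn.Conv2d(channels, channels, kernel_size=k, padding=1, groups=channels)

    x = torch.randn(1, channels, 32, 32)
    assert standard(x).shape == depthwise(x).shape  # identical output shapes

    n_std = sum(p.numel() for p in standard.parameters())  # 36,928
    n_dw = sum(p.numel() for p in depthwise.parameters())  # 640
    print(f"standard: {n_std} params, depthwise: {n_dw} params")

The roughly 64x drop in parameters and multiply-accumulates is exactly the "theoretical" saving that GPUs struggle to turn into wall-clock speed.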

Research talks 2 & 3 

Speaker: Daniel Justus, Research Scientist at Graphcore 

Talk #2: Using grouped operations on the IPU to improve language models 

Abstract:  

Attention-based language models have become the state of the art for natural language processing applications. In this presentation, we explore the use of grouped transformations in the Transformer architecture. We introduce a new layer with grouped convolutions to model short-range interactions and complement the self-attention module. Furthermore, we use grouped matrix multiplications to reduce the high computational cost of dense feed-forward layers.

Taking advantage of the IPU's superior performance on sparse operations, the resulting GroupBERT architecture achieves a 2x reduction in time-to-train for language representation learning compared to an equally accurate BERT model.
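
As a rough illustration of the grouped matrix multiplication idea, the sketch below (assuming PyTorch; the GroupedLinear layer and its dimensions are hypothetical, not the actual GroupBERT implementation) splits a dense feed-forward projection into independent feature groups:

    import torch
    import torch.nn as nn

    class GroupedLinear(nn.Module):
        """Feed-forward projection split into independent feature groups."""
        def __init__(self, d_model: int, groups: int):
            super().__init__()
            assert d_model % groups == 0
            # One small weight matrix per group instead of one dense d_model x d_model.
            self.weight = nn.Parameter(
                torch.randn(groups, d_model // groups, d_model // groups))
            self.groups = groups

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, s, d = x.shape
            xg = x.view(b, s, self.groups, d // self.groups)
            # Batched matmul: each group only mixes its own slice of features.
            out = torch.einsum("bsgi,gij->bsgj", xg, self.weight)
            return out.reshape(b, s, d)

    dense = nn.Linear(768, 768, bias=False)
    grouped = GroupedLinear(768, groups=4)
    print(sum(p.numel() for p in dense.parameters()))    # 589,824
    print(sum(p.numel() for p in grouped.parameters()))  # 147,456 (4x fewer)

With four groups, the projection uses 4x fewer parameters and FLOPs, giving the kind of block-sparse structure the abstract says the IPU handles well.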

Talk #3: Accelerating Graph Neural Networks on the IPU

Abstract: 

From social networks to the representation of molecules, the importance of graph-structured data and the demand for models that act on this data is steadily increasing. A particularly interesting and challenging use case deals with graphs that change dynamically by gaining new nodes or edges over time. Using the example of Temporal Graph Networks (E. Rossi et al., 2020), we demonstrate the advantages of the IPU for training Graph Neural Networks. 
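
For readers new to Graph Neural Networks, here is a minimal sketch (plain PyTorch, assumed for illustration; a generic message-passing step, not the TGN memory update from the talk) of the aggregate-and-update pattern these models repeat:

    import torch
    import torch.nn as nn

    num_nodes, dim = 5, 8
    x = torch.randn(num_nodes, dim)                 # node features
    edges = torch.tensor([[0, 1], [1, 2], [3, 4]])  # (source, destination) pairs

    update = nn.Linear(2 * dim, dim)

    # Aggregate: sum each node's incoming messages (its neighbours' features).
    agg = torch.zeros(num_nodes, dim)
    agg.index_add_(0, edges[:, 1], x[edges[:, 0]])

    # Update: combine each node's own state with the aggregated messages.
    x_new = torch.relu(update(torch.cat([x, agg], dim=-1)))
    print(x_new.shape)  # torch.Size([5, 8])

Temporal Graph Networks extend this pattern with a per-node memory that is updated incrementally as new nodes and edges arrive over time.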


Register now!

 
