Workshop and Tutorial Schedule

All times shown in Pacific Standard Time (UTC−8)

To sign up for workshops and tutorials, first register for the conference using the conference registration page. Some events may require additional registration; see each event for details. All events will send emails to registered participants.

Early workshop registration is free for all graduate and undergraduate students.

Lunch will be served from 12:30pm.

Date   | Time           | Type      | Name                                                                                | Location
Feb 12 | 8:30am-12:30pm | Tutorial  | T1: Introduction to the Versal ACAP AI Engine and to its programming model          | San Carlos IV
Feb 12 | 8:30am-12:30pm | Tutorial  | T3: Enabling Networking for Distributed Applications on FPGA Clusters               | San Carlos II
Feb 12 | 9:00am-12:30pm | Workshop  | W1: Workshop on Security for Custom Computing Machines                              | San Carlos III
Feb 12 | 1:30pm-5:30pm  | Tutorial  | T2: Leveraging MLIR to Design for AI Engines                                        | San Carlos IV
Feb 12 | 1:30pm-3:30pm  | Tutorial  | T4: Create your Own FPGA!: An OpenFPGA Guided Tutorial                              | San Carlos II
Feb 12 | 1:30pm-5:30pm  | Tutorial  | T6: FPGA Architecture for Deep Learning                                             | San Carlos III
Feb 12 | 3:30pm-5:30pm  | Tutorial  | T5: Introduction to the Intel FPGA AI Suite: Generate Inference IP for Intel FPGAs  | San Carlos II
Feb 12 | 6:00pm         | Reception | FPGA Welcome Reception                                                              | San Carlos Ballroom Foyer

Workshop and Tutorial Details

T1: Introduction to the Versal ACAP AI Engine and to its programming model

12th February 2023, 8:30am-12:30pm PST

Website: https://www.xilinx.com/support/university/workshops.html#americas

Please register using this link before attending the tutorial.

Organizers: Mario Ruiz (AMD), Naveen Purushotham (AMD), and Hugo Andrade (AMD)

This tutorial will briefly introduce the heterogeneous Versal Adaptive Compute Acceleration Platform (ACAP). We will focus primarily on the Adaptable Intelligent Engine (AIE), a new type of compute element in the latest Xilinx technology. The AI Engine is a tiled array of Very Long Instruction Word (VLIW), Single Instruction Multiple Data (SIMD) processing elements that provide high compute density. We will describe the AI Engine tile and AI Engine array architecture as well as the different data movement alternatives. We will also introduce the AI Engine programming model, which consists of a dataflow graph specification written in C++ and kernel descriptions written in either C or C++. The application can be compiled and executed using the AI Engine toolchain, which is part of the Vitis Unified Software Platform.

T2: Leveraging MLIR to Design for AI Engines

12th February 2023, 1:30pm-5:30pm PST

Organizers: Jack Lo (AMD), Sam Bayliss (AMD), Andra Bisca (AMD), Kristof Denolf (AMD),
Joseph Melber (AMD), Stephen Neuendorffer (AMD), Erwei Wang (AMD), Phil James-Roxby (AMD)

The AI Engine array of the AMD Versal ACAP device is a set of VLIW vector processors with adaptable interconnect. This tutorial is targeted at tool developers and system designers who are looking for fast and fully open-source design tools to support their research. Participants will first get insight into the Versal ACAP architecture, more specifically the AI Engine compute and data movement capabilities. Through small design examples expressed in the MLIR-AIE dialect and executed on an ACAP device, participants will leverage AI Engine features to optimize the performance of increasingly complex designs. This will enable them to recognize how this physical-level dialect can be connected to higher-level abstractions in the MLIR framework and to understand how logical concepts can be expressed to increase productivity and reduce complexity. The labs will be done using AWS instances, with opportunities for participants to execute their own designs on real hardware.

T3: Enabling Networking for Distributed Applications on FPGA Clusters

12th February 2023, 8:30am-12:30pm PST

Website: https://systems.ethz.ch/research/data-processing-on-modern-hardware/hacc/tutorial-fpga.html

Organizers: Zhenhao He (ETH Zürich), Gustavo Alonso (ETH Zürich), Lucian Petrica (AMD), Michaela Blott (AMD)

FPGAs are increasingly being deployed in data centers and the cloud in a variety of settings and configurations. This rapid cloud adoption means FPGAs are no longer viewed as PCIe-attached accelerators, but as first-class compute resources directly connected to the network, which opens up many opportunities for in-network processing and distributed computing on FPGAs. In this tutorial, we will present and illustrate with examples how to use several resources available to the academic research community to pursue research in distributed applications on top of FPGA clusters. First, we present the hardware platform that has been made available for research in cloud computing with FPGAs: the ETH Zürich-AMD HACC cluster. Second, we present hardware network stacks, e.g., a TCP/IP stack (EasyNet) and an RDMA stack (Coyote), that are performant and compatible with data center infrastructures. Third, we present ACCL, an open-source MPI implementation for FPGAs developed to provide a higher level of network abstraction and to simplify the use of networking in machine learning applications. We will present not only the design of these resources, but also how to deploy them in the cluster, with demos, and how to use or extend them in a hands-on session.

T4: Create your Own FPGA!: An OpenFPGA Guided Tutorial

12th February 2023, 1:30pm-3:30pm PST

Website: https://sites.google.com/view/openfpgaatfpga23

Organizers: Pierre-Emmanuel Gaillardon (RapidSilicon, University of Utah), Ganesh Gore (University of Utah), Nanditha Rao (IIT Bangalore)

OpenFPGA is an open-source framework that automates and accelerates the development cycle of customizable FPGA architectures. OpenFPGA allows users to define customized FPGA architectures using a high-level architecture description language and auto-generate the corresponding Verilog netlists, which can be used in a backend flow to produce production-ready layouts. OpenFPGA also provides native bitstream generation support for user Verilog designs, avoiding recurring engineering costs in developing CAD tools for these custom FPGAs. This tutorial will introduce the participants to OpenFPGA and showcase its capabilities and features through live demos. It will also provide hands-on training in using the OpenFPGA framework.

T5: Introduction to the Intel FPGA AI Suite: Generate Inference IP for Intel FPGAs

12th February 2023, 3:30pm-5:30pm PST

Organizer: Rama Venkata (Intel)

FPGAs can be an optimal choice for custom AI platforms with low latency and low power. This session will review the flow for the Intel® FPGA AI Suite along with the Intel® Distribution of OpenVINO™ toolkit. See how the Intel FPGA AI Suite can generate, and also help optimize, inference IP. FPGA designers and AI algorithm developers can use this flow to create high-performance FPGA designs that incorporate AI functionality. Popular, industry-standard frameworks – TensorFlow, PyTorch, and ONNX – are supported.

T6: FPGA Architecture for Deep Learning

12th February 2023, 1:30pm-5:30pm PST

Website: https://sites.google.com/view/fpga23fpgasfordl

Organizers: Andrew Boutros (University of Toronto), Vaughn Betz (University of Toronto), Aman Arora (University of Texas at Austin), Lizy K. John (University of Texas at Austin), Seyedramin Rasoulinezhad (University of Sydney), Phillip Leong (University of Sydney)

FPGA architecture has been continuously evolving over the course of the past three decades to better suit key FPGA use cases. With deep learning (DL) inference becoming a major market segment, FPGA architecture is also evolving to match its requirements. FPGA vendors are announcing new FPGA families specifically targeted at DL workloads, and many academic research efforts are proposing FPGA architecture modifications for DL. In this tutorial, we will focus on both academic and industrial FPGA architecture enhancements for DL that have been introduced in recent years. First, we will give a brief introduction to the basics of FPGA architecture and how the key components of FPGAs lead to strengths and weaknesses in DL applications. Then, we will cover DL-specific enhancements to traditional FPGA components such as logic and DSP blocks, as well as new specialized elements such as tensor blocks, computational BRAMs, and AI engine processors that have been introduced for DL. We will also highlight promising directions for future research in this area. Finally, we will have a panel discussion with representatives from major FPGA vendors and academia to present their perspectives on the future of FPGA architecture and use cases in the DL domain.

W1: Workshop on Security for Custom Computing Machines

12th February 2023, 9:00am-12:30pm PST

Website: https://sccm-workshop.github.io/

Organizers: Dustin Richmond (University of California, Santa Cruz), Ryan Kastner (University of California San Diego), Jeff Goeders (Brigham Young University), Mirjana Stojilović (EPFL)

Hardware security is an important design consideration. Recent events have raised awareness of security in general-purpose processors. As experts, we must consider: What are the equivalents for reconfigurable architectures and custom computing machines? How do we defend against threats that exist today? How do we design our systems to defend against future threats? This is increasingly important as we deploy customized hardware at unprecedented scales.