NVIDIA GTC 2022 Trip Report

Conor Hoekstra · March 28, 2022

This will be a short “trip report” on the talks I watched over the week of the virtual GTC 2022 (March 21 - 24). All talks are freely available online.

Summary

| Speaker(s) | Talk |
| --- | --- |
| Larkin | ⭐ No More Porting: Coding for GPUs with ISO C++, Fortran, and Python |
| Jones | ⭐ How CUDA Programming Works |
| AH & MM | ⛔ Deep Dive into GPU-accelerated Big Data and DS Technologies |
| Huang | 🌟⭐ GTC 2022 Keynote with NVIDIA CEO Jensen Huang (YouTube) |
| Costa | ⭐ A Deep Dive into the Latest HPC Software |
| GP & MA | Inside the NVIDIA Hopper Architecture |
| Jones | ⭐ CUDA: New Features and Beyond |
| Lelbach | ⭐ C++ Standard Parallelism (YouTube) |
| GT & VM | ⛔ Optimizing CUDA Applications for NVIDIA Hopper Architecture |
| MC & NG | Accelerating PyTorch with Native CUDA Graphs Support |
| Enemark | ⭐ Scaling Web and Visualization Apps with GPUs |
| MA, AD, NB | Large-scale Machine Learning with Snowflake and RAPIDS |
| Hammond | ⭐ Shifting through the Gears of GPU Programming |
| CWE | ⭐ Standard and CUDA C++ User Forum |
| CWE | 👋 Thrust, CUB, and libcu++ User Forum |
| CWE | NVCC CUDA Compiler Toolchain |
| Panel | ⭐ Future of Standard and CUDA C++ |


| Marker | Meaning |
| --- | --- |
| 🌟 | Keynote |
| ⭐ | Best talks / CWEs |
| 👋 | I participated in |
| ⛔ | Bad audio / don't watch |
| (YouTube) | Available on YouTube |
| CWE | Connect With the Experts (Panel Q&A) |

⭐ No More Porting: Coding for GPUs with ISO C++, Fortran, and Python

Speaker: Jeff Larkin

This was an awesome talk, one of my favorites of the conference. It shows multiple examples of refactoring code from C++ with OpenMP/OpenACC to Standard C++. The refactored code is always cleaner, smaller, and faster. The talk also gives a couple of algorithm tips for when you refactor, such as preferring std::transform_reduce to std::transform + std::reduce.

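To make that last tip concrete, here is a minimal sketch of my own (not taken from the talk's slides) of the fused pattern: a sum of squares computed in one parallel pass with std::transform_reduce, instead of materializing an intermediate range with std::transform and then calling std::reduce.

```cpp
// Minimal sketch: fuse the transform and the reduction into one call.
// std::transform_reduce avoids the intermediate range that a separate
// std::transform + std::reduce would need, and it can run in parallel.
#include <execution>
#include <functional>
#include <numeric>
#include <vector>
#include <cstdio>

int main() {
    std::vector<double> x(1'000'000, 0.5);

    // Sum of squares in a single parallel pass.
    double sum_sq = std::transform_reduce(
        std::execution::par_unseq,
        x.begin(), x.end(),
        0.0,                               // initial value
        std::plus<>{},                     // reduction
        [](double v) { return v * v; });   // per-element transform

    std::printf("sum of squares = %f\n", sum_sq);
}
```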

⭐ How CUDA Programming Works

Speaker: Stephen Jones

This was a summary of not just how CUDA programming works but why it works the way it does. If you have ever been curious about what a grid, warp, (thread) block, or thread is and how those relate to CUDA programming and writing CUDA kernels, this is the talk for you.

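As a quick refresher on that vocabulary, here is a minimal CUDA sketch of my own (not from the talk): the launch below creates a grid of thread blocks, each block contains 256 threads, and the hardware schedules those threads in warps of 32.

```cuda
// Minimal sketch of the grid / block / thread hierarchy.
// A kernel launch <<<blocks, threads_per_block>>> creates a grid of thread
// blocks; each block's threads are executed on the GPU in warps of 32.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* data, float factor, int n) {
    // Global index: which block am I in, and which thread within that block?
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* d_data = nullptr;
    cudaMalloc(&d_data, n * sizeof(float));

    const int threads_per_block = 256;  // 8 warps per block
    const int blocks = (n + threads_per_block - 1) / threads_per_block;
    scale<<<blocks, threads_per_block>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(d_data);
    std::printf("launched %d blocks of %d threads\n", blocks, threads_per_block);
}
```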

⭐ Standard and CUDA C++ User Forum

If you are interested in either Standard C++ or CUDA C++, this CWE is worth a watch. Some of the “experts” come and go as they split off into separate 1-on-1 rooms, but many of the panelists remain in the main room to answer audience questions.


🌟⭐ GTC 2022 Keynote with NVIDIA CEO Jensen Huang

Speaker: Jensen Huang, CEO of NVIDIA

The keynote was awesome (as always). It started out with a “virtual fly-through” of NVIDIA HQ in Santa Clara. It looks surreal, but that is actually what HQ looks like (minus the robots trapped in the basement); JHH designed it to feel like a futuristic spaceship. I always feel inspired after watching Jensen keynotes. The headline announcement was the Hopper GPU architecture, which several of the talks below dig into.

⭐ A Deep Dive into the Latest HPC Software

Speaker: Tim Costa

This was another great talk. It has some overlap with the No More Porting talk, but most of it is different. The highlight (for me at least) was the example of Maxwell’s equations implemented with Senders and Receivers (a C++ proposal that looks like it will land early in the C++26 cycle). I am really looking forward to the new paradigm that senders & receivers will unlock for C++.

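For a flavor of what that paradigm looks like, here is a minimal sketch assuming NVIDIA’s stdexec reference implementation of the P2300 senders & receivers proposal; the names track the proposal and may change before standardization, and this is not the Maxwell’s equations example from the talk.

```cpp
// Hedged sketch, assuming the stdexec reference implementation of P2300.
#include <stdexec/execution.hpp>
#include <exec/static_thread_pool.hpp>
#include <cstdio>

int main() {
    exec::static_thread_pool pool{4};   // a simple CPU scheduler
    auto sched = pool.get_scheduler();

    // Build a work graph lazily: schedule on the pool, then chain two steps.
    auto work = stdexec::schedule(sched)
              | stdexec::then([] { return 40; })
              | stdexec::then([](int i) { return i + 2; });

    // Nothing runs until a consumer like sync_wait connects and starts it.
    auto [answer] = stdexec::sync_wait(std::move(work)).value();
    std::printf("answer = %d\n", answer);
}
```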

⭐ CUDA: New Features and Beyond

Speaker: Stephen Jones

Another great CUDA talk from Stephen Jones. This talk covers how the architecture changes in Hopper will affect how you program GPUs with CUDA. Specifically, the Hopper architecture introduces Thread Block Clusters and Cluster Distributed Shared Memory. You can get a high-level introduction to them in Stephen’s talk; for a deeper dive, check out Optimizing CUDA Applications for NVIDIA Hopper Architecture (note: that is one of the talks with pretty bad audio, so if you are reading this in the future it might be worth searching to see if the talk has been given again with better audio quality).

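As a rough illustration (my own sketch, not material from either talk), this is roughly how Thread Block Clusters and distributed shared memory surface in CUDA’s cooperative groups API on Hopper, to the best of my understanding; it assumes sm_90 and a CUDA 12-era toolkit.

```cuda
// Hedged sketch of Thread Block Clusters and distributed shared memory.
#include <cooperative_groups.h>
#include <cstdio>
namespace cg = cooperative_groups;

// Two thread blocks per cluster; blocks in a cluster can sync together and
// read each other's shared memory ("distributed shared memory").
__global__ void __cluster_dims__(2, 1, 1) exchange(int* out) {
    __shared__ int smem;
    cg::cluster_group cluster = cg::this_cluster();

    if (threadIdx.x == 0) smem = (int)cluster.block_rank();
    cluster.sync();  // every block in the cluster has now written its smem

    // Map the peer block's shared variable into this block's address space.
    unsigned peer = cluster.block_rank() ^ 1u;
    int* peer_smem = cluster.map_shared_rank(&smem, peer);
    if (threadIdx.x == 0) out[blockIdx.x] = *peer_smem;

    cluster.sync();  // keep blocks alive until all remote reads complete
}

int main() {
    int* d_out = nullptr;
    cudaMalloc(&d_out, 4 * sizeof(int));
    exchange<<<4, 128>>>(d_out);  // 4 blocks -> 2 clusters of 2 blocks each
    cudaDeviceSynchronize();
    cudaFree(d_out);
    std::printf("done\n");
}
```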

⭐ C++ Standard Parallelism

Speaker: Bryce Adelstein Lelbach

Bryce always gives great talks and has some of the nicest slide decks around. I have seen different versions of this talk before, but there are a few new things in this one. If you’ve watched Jeff’s No More Porting talk or Tim’s A Deep Dive into the Latest HPC Software, you will recognize some of the examples. However, most of the content is totally different and there are tons of awesome modern C++ examples. My favorite is probably using std::transform_reduce to get a word count in parallel.
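Here is my own reconstruction of that word-count idea (not Bryce’s actual slides): a new word starts wherever a non-space character follows a space, so you count those transitions over adjacent pairs of characters, all in one parallel pass.

```cpp
// Hedged sketch of a parallel word count with std::transform_reduce.
#include <cctype>
#include <cstdio>
#include <execution>
#include <functional>
#include <numeric>
#include <string>

std::size_t word_count(std::string const& text) {
    if (text.empty()) return 0;
    return std::transform_reduce(
        std::execution::par,
        text.begin(), text.end() - 1,   // each character...
        text.begin() + 1,               // ...paired with its successor
        // the first word has no preceding space, so seed the count with it
        std::size_t(std::isspace((unsigned char)text.front()) ? 0 : 1),
        std::plus<>{},
        [](char left, char right) -> std::size_t {
            // count a word start where a space is followed by a non-space
            return std::isspace((unsigned char)left) &&
                   !std::isspace((unsigned char)right);
        });
}

int main() {
    std::printf("%zu\n", word_count("standard parallelism is pretty neat"));  // 5
}
```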

For the sake of transparency, I should state that Bryce and I have a podcast together.


⭐ Shifting through the Gears of GPU Programming

Speaker: Jeff Hammond

This might be my favorite talk of the conference; it was the only one I tweeted about while the conference was running. Jeff takes you through a whirlwind history of GPU computing and then compares different models and languages for accelerating code on GPUs, benchmarking three different examples to see which is fastest. Definitely worth the watch if you have 30 minutes (or 15 minutes at 2x).


⭐ Future of Standard and CUDA C++

Last but not least, this panel was definitely my favorite panel / CWE of GTC 2022. She isn’t in the listed panelists, but Daisy Hollman of Google (formerly of Sandia National Labs) was on the panel as well. It was super interesting to hear the thoughts of both NVIDIAns and non-NVIDIAns about the future of standard C++ and parallel compute, and there were some pretty lively exchanges. I would definitely recommend this one if you are interested.


Hope to See You Next Year!

This was the third virtual GTC I have “attended” since joining NVIDIA back in 2019. I’ve got my fingers crossed that in 2023 we will be able to attend in person and I can meet some of you there!

Feel free to leave a comment on the reddit thread.
