This will be a short “trip report” on the talks I watched over the week of the virtual GTC 2022 (March 21–24). All talks are freely available online.
| Symbol | Meaning |
| --- | --- |
| ⭐ | Best talks / CWEs |
| 👋 | I participated in |
| ⛔ | Bad audio / don’t watch |
| | Available on YouTube |
| CWE | Connect With the Experts (Panel Q&A) |
Speaker: Jeff Larkin
This was an awesome talk, one of my favorites of the conference. It shows multiple examples of refactoring code from C++ with OpenMP/OpenACC to Standard C++. The refactored code is always cleaner, smaller, and faster. The talk also gives a couple of algorithm tips for refactoring, such as preferring
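To give a flavor of the kind of refactor the talk demonstrates, here is my own minimal SAXPY sketch (not code from the talk): a directive-annotated loop becomes a standard algorithm call, with the parallelism requested through the execution policy.

```cpp
#include <algorithm>
#include <execution>
#include <utility>
#include <vector>

// Before, with directives (roughly):
//   #pragma acc parallel loop
//   for (int i = 0; i < n; ++i) y[i] = a * x[i] + y[i];
//
// After, in Standard C++: the loop body becomes a lambda, and the
// execution policy argument says how the algorithm may run.
template <class Policy>
void saxpy(Policy&& policy, float a,
           std::vector<float> const& x, std::vector<float>& y) {
    std::transform(std::forward<Policy>(policy),
                   x.begin(), x.end(), y.begin(), y.begin(),
                   [a](float xi, float yi) { return a * xi + yi; });
}
```

Called as `saxpy(std::execution::par_unseq, 2.0f, x, y)` this can run on CPU threads with a host compiler, and with `nvc++ -stdpar` the same source can be offloaded to the GPU.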
Speaker: Stephen Jones
This was a summary of not just how CUDA programming works but why it works the way it does. If you have ever been curious about what a grid, warp, (thread) block, or thread is and how that relates to CUDA programming and writing CUDA kernels, this is the talk for you.
If you are interested in either Standard C++ or CUDA C++, this CWE is worth a watch. Some of the “experts” come and go as they split off into separate 1-on-1 rooms, but many of the panelists remain in the main room to answer audience questions.
Speaker: Jensen Huang, CEO of NVIDIA
The keynote was awesome (as always). It started out with a “virtual fly through” of NVIDIA HQ in Santa Clara. It looks surreal but that is actually what HQ looks like (minus the robots trapped in the basement). JHH designed it to feel like a futuristic spaceship. I always feel inspired after watching Jensen keynotes. Key announcements:
- NVIDIA Announces Hopper Architecture, the Next Generation of Accelerated Computing
- NVIDIA Announces DGX H100 Systems – World’s Most Advanced Enterprise AI Infrastructure
- NVIDIA Introduces Grace CPU Superchip
- NVIDIA Introduces 60+ Updates to CUDA-X Libraries, Opening New Science and Industries to Accelerated Computing
- NVIDIA Announces Digital Twin Platform for Scientific Computing
Speaker: Tim Costa
This was another great talk. It has some overlap with the No More Porting talk, but most of it is different. The highlight (for me at least) was the example of solving Maxwell’s equations using Senders and Receivers (a C++ proposal that looks like it will land early in the C++26 cycle). I am really looking forward to the new paradigm that senders and receivers will unlock for C++.
Speaker: Stephen Jones
Another great CUDA talk from Stephen Jones. This talk covers how the architecture changes of Hopper will affect how you program GPUs with CUDA. Specifically, the Hopper architecture introduces the Thread Block Cluster and Cluster Distributed Shared Memory. You can get a high-level introduction to them in Stephen’s talk, and for a deeper dive you can check out Optimizing CUDA Applications for NVIDIA Hopper Architecture (note: this is one of the talks with pretty bad audio, so if you are reading this in the future, it might be worth googling to see if the talk has been given again with better audio quality).
Speaker: Bryce Adelstein Lelbach
Bryce always gives great talks and always has some of the nicest slide decks. I have seen different versions of this talk before, but there are a few new things in this one. If you’ve watched Jeff’s No More Porting talk or Tim’s A Deep Dive into the Latest HPC Software, you will recognize some of the examples. However, most of the content is totally different and there are tons of awesome modern C++ examples. My favorite example is probably using std::transform_reduce to get a word count in parallel.
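For the curious, here is my own reconstruction of that classic parallel word count (assuming whitespace-separated words; this is not Bryce’s exact code): zip the string with itself shifted by one character and count the space → non-space transitions, since each one starts a new word.

```cpp
#include <cctype>
#include <execution>
#include <functional>
#include <numeric>
#include <string>
#include <utility>

// Count words by counting space -> non-space transitions in parallel.
template <class Policy>
std::size_t word_count(Policy&& policy, std::string const& s) {
    if (s.empty()) return 0;
    auto is_space = [](char c) {
        return std::isspace(static_cast<unsigned char>(c)) != 0;
    };
    // If the string starts mid-word, the initial value counts that word.
    std::size_t init = is_space(s.front()) ? 0 : 1;
    return std::transform_reduce(
        std::forward<Policy>(policy),
        s.begin(), s.end() - 1,   // each "left" character...
        s.begin() + 1,            // ...paired with its right neighbor
        init, std::plus<>{},
        [is_space](char l, char r) {
            return std::size_t(is_space(l) && !is_space(r));
        });
}
```

The nice part is that the same call parallelizes just by passing `std::execution::par_unseq` as the policy.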
For the sake of transparency, I should state that Bryce and I have a podcast together.
Speaker: Jeff Hammond
This might be my favorite talk of the conference, and it was the only talk that I tweeted about during the conference. Jeff takes you through a whirlwind history of GPU computing and then compares different models / languages for accelerating your code on GPUs across three different examples to see which is fastest. Definitely worth the watch if you have 30 minutes (or 15 minutes at 2x).
> One of the highlights of @NVIDIAGTC 2022 was @science_dot's talk comparing CUDA #Python, CUDA C++, CUDA #Fortran, CuPy, Standard C++, OpenMP, OpenACC, and so much more! If you are curious which gets you the most perf, check it out: https://t.co/pEWaNzEOe4
>
> — Conor Hoekstra (@code_report) March 24, 2022
Last but not least, this panel was definitely my favorite panel / CWE of GTC 2022. The speaker list below doesn’t include her, but Daisy Hollman of Google (and formerly Sandia National Labs) was on the panel as well. It was super interesting to hear the thoughts of both NVIDIAns and non-NVIDIAns about the future of standard C++ and parallel compute. There were some pretty interesting exchanges. I would definitely recommend this talk if you are interested.
Hope to See You Next Year!
This was the third virtual GTC I have “attended” since joining NVIDIA back in 2019. I’ve got my fingers crossed that in 2023 we will be able to attend in person and I can meet some of you there!
Feel free to leave a comment on the reddit thread.