Artificial Intelligence-based tools are increasingly popular. You have probably used tools such as OpenAI ChatGPT and Google Bard for various purposes, but can you use them successfully in the context of High-Performance Computing (HPC)?
Learning outcomes
When you complete this training, you will:
- understand how Large Language Models (LLMs) work, as well as their strengths and weaknesses;
- understand that such tools are only useful if you already have a reasonable level of domain knowledge;
- understand various ways in which these tools can be used successfully;
- know what to expect from shell-GPT;
- know how to use ChatGPT and GitHub Copilot to generate tests for your code (see the sketch after this list);
- know how to use ChatGPT and GitHub Copilot to generate API documentation for your code;
- use GitHub Copilot to generate boring, boilerplate code;
- use ChatGPT and GitHub Copilot for debugging;
- use ChatGPT Advanced Data Analysis for data exploration and visualization;
- use ChatGPT from the command line;
- use Ansible LightSpeed to generate Infrastructure-as-Code.
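To give a flavour of the test and documentation generation outcomes above, here is a minimal sketch of what an assistant such as GitHub Copilot or ChatGPT can draft for a small function: a NumPy-style docstring and a pytest test. The function `kinetic_energy` and its test are made up for illustration and are not part of the training materials; always review generated code before using it.

```python
# Illustrative only: a made-up function with the kind of docstring and pytest
# test that an AI assistant can draft. Review generated code before using it.
import math


def kinetic_energy(mass, velocity):
    """Compute the classical kinetic energy of a particle.

    Parameters
    ----------
    mass : float
        Mass of the particle in kilograms.
    velocity : float
        Velocity of the particle in metres per second.

    Returns
    -------
    float
        Kinetic energy in joules.
    """
    return 0.5 * mass * velocity**2


def test_kinetic_energy():
    # A generated test typically checks a simple, hand-verifiable case:
    # 0.5 * 2.0 * 3.0**2 = 9.0 J
    assert math.isclose(kinetic_energy(2.0, 3.0), 9.0)
```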
Schedule
Total duration: 2 hours and 40 minutes.
| Subject | Duration |
|---|---|
| introduction and motivation | 5 min. |
| understanding LLMs | 15 min. |
| code generation | 15 min. |
| test generation | 15 min. |
| documentation generation | 15 min. |
| debugging and refactoring | 15 min. |
| shell-GPT | 15 min. |
| Ansible LightSpeed | 30 min. |
| There be dragons! | 30 min. |
| wrap up | 5 min. |
Training materials
Slides, example code, and hands-on material are available in the GitHub repository.
Target audience
This training is for you if you want to save time and use HPC systems more efficiently.
Prerequisites
For the code generation parts, you will need programming experience in at least one programming language. You are also expected to be familiar with Linux and an HPC environment.
Trainer(s)
- Frederik De Ceuster (KU Leuven)
- Frédéric Wautelet (University of Namur)
- Geert Jan Bex (Hasselt University)