Module 3 - Using Large Language Models (LLMs) for Coding Assistance¶
Work in progress
This page is currently under development.
This module introduces participants to the use of LLMs as coding assistants, focusing on integrating AI tools into daily programming workflows to boost productivity, improve code quality, and accelerate learning. Note that this module does not cover basic LLM usage or other LLM capabilities such as data analysis, image and video generation, or scientific writing. Participants will learn about prompt engineering, how to use ChatGPT, GitHub Copilot, Codex, and other LLMs effectively, how to recognize potential pitfalls, and how to self-learn using LLM chats. We will also show how to combine multiple LLMs into a workflow for code generation.
Learning Outcome¶
By the end of this module, participants will be able to:
- Explain what LLMs are and how they can assist with programming tasks
- Apply prompt-engineering techniques to obtain useful code from LLMs
- Use ChatGPT, GitHub Copilot, and Codex in their daily coding workflows
- Recognize the pitfalls and limitations of AI-assisted and AI-generated code
- Combine multiple LLMs into a workflow for code generation
Sections¶
- Welcome to the module (15min)
- Introduction to LLMs (30min)
- What LLMs are and how they work
- Examples of code assistants
- Benefits and limitations of using AI in programming
- AI-assisted vs. AI-generated code
- (Optional) Quiz
- Prompt Engineering (1h)
- ...
- (Optional) Quiz
- Using ChatGPT for Programming (1h30min)
- ...
- (Optional) Quiz
- Code Assistants: Codex and GitHub Copilot (1h30min)
- ...
- (Optional) Quiz
- AI-Generated Code Workflow: from ChatGPT to Google AI Studio (1h30min)
- ...
- (Optional) Quiz
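A central idea across these sessions, especially the prompt-engineering one, is that the quality of LLM-generated code depends heavily on how the request is structured. As a small taste of what the module covers, here is a minimal sketch of assembling a structured code-generation prompt; the function name and exact wording are illustrative, not part of the course material:

```python
# Sketch of a common prompt-engineering pattern for code generation:
# state a role, a task, explicit constraints, and optional context.
# The structure is the point; the wording is only an example.

def build_code_prompt(task, language, constraints, context=""):
    """Assemble a structured prompt for an LLM coding assistant."""
    parts = [
        f"You are an expert {language} programmer.",  # role
        f"Task: {task}",                              # what to do
        "Constraints:",                               # how to do it
    ]
    parts += [f"- {c}" for c in constraints]
    if context:
        # Supplying existing code helps the model match your style and APIs.
        parts.append(f"Existing code for context:\n{context}")
    parts.append("Return only the code, inside a fenced code block.")
    return "\n".join(parts)

prompt = build_code_prompt(
    task="Write a function that reads a CSV file and returns column means.",
    language="Python",
    constraints=["Use only the standard library", "Handle missing values"],
)
print(prompt)
```

The same prompt text can then be pasted into ChatGPT or sent through an LLM API; the module's later sessions compare how different assistants respond to prompts structured this way.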
Participants' projects¶
After the last session, we will organize an additional 2-hour session for all participants. The objective of this session is to allow participants to work on their projects uninterrupted, in a room with experts and peers, where they can collaboratively address questions and challenges.