AI-110 Large Language Models (LLM) Intro
Course Description: Artificial intelligence has become a critically important area for IT professionals and engineers over the past two decades, driven by the scientific breakthroughs and practical applications of deep learning and, more recently, of generative AI systems, especially Large Language Models (LLMs) such as ChatGPT. Understanding the concepts and practical usage of AI systems in general, and of LLMs in particular, is therefore becoming essential for IT and other technical professionals, as well as for managers with a technical background.
This training focuses on Large Language Models (LLMs) and gives insight into their theory and operation. Topics covered (preliminary list):
- Using LLMs in Applications
- The Foundations: Neural Networks, Deep Learning, CNN, RNN, Transfer Learning, NLP
- “Attention is all you need” – The Transformer Architecture
- Pre-training of LLMs
- LLM fine-tuning techniques: Prompt Tuning and Parameter-Efficient Fine-Tuning (PEFT)
- Reinforcement Learning from Human Feedback (RLHF)
- MLOps, LLMOps (optional)
- Ethical considerations (optional)
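As a small taste of the Transformer topic listed above, the core operation behind "Attention is all you need" can be sketched in a few lines of NumPy. This is an illustrative, simplified version of scaled dot-product attention (single head, no masking or batching), not production code:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q, K, V are 2-D arrays of shape (seq_len, d_k); returns the
    attended values and the attention weight matrix.
    """
    d_k = Q.shape[-1]
    # Similarity of every query with every key, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted mix of the value vectors
    return weights @ V, weights

# Toy example: 2 tokens, 2-dimensional embeddings
out, w = scaled_dot_product_attention(np.eye(2), np.eye(2), np.eye(2))
```

Real Transformer layers run many such attention heads in parallel and learn the projections that produce Q, K and V; the course covers those details in the Transformer Architecture module.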
Besides gaining a basic understanding of the theory of Large Language Models (LLMs), students will observe their details and operation through instructor demonstrations and their own hands-on exercises.
Course Length: 8 training hours
Structure: 50% lecture, 25% instructor demonstration, 25% hands-on lab exercises
Target audience: Technical managers as well as IT and telco professionals who want to familiarize themselves with Large Language Models and generative AI
Prerequisites: General understanding of and experience in IT systems and/or IT development