0 Foreword #

This site serves (for now) as a lecture note repository for a short course offered as part of the module “Topics in Machine Learning and Data Science” in the Bachelor's program Data Science at Katholische Universität Eichstätt-Ingolstadt (KU).

Short course “From Language Models to AI Agents” #

Location and time #

The course takes place in GEOG-101 (MIDS seminar room at the Georgianum) on the following dates:

  1. Wednesday, 19.11.2025, 8:45 - 10:15
  2. Wednesday, 03.12.2025, 8:45 - 10:15
  3. Wednesday, 10.12.2025, 8:45 - 10:15
  4. Wednesday, 17.12.2025, 8:45 - 10:15

Abstract #

Since the public release of ChatGPT in November 2022, generative artificial intelligence (AI) approaches based on large language models (LLMs) have gained immense significance. These models power a wide range of applications—from purely text-based systems such as chatbots to multimodal and agentic frameworks involving image recognition, image and video generation, code execution, database queries, and internet search.

This short course introduces students to the foundations and capabilities of transformer-based LLMs, contrasting them with earlier approaches in computational linguistics. We will examine key model features, training paradigms, and inference parameters, aiming to develop a deeper understanding of how LLM-based systems—such as ChatGPT—(often) generate useful and coherent responses.
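
To make the inference parameters concrete, here is a minimal, self-contained sketch of how temperature and top_p reshape a model's next-token distribution; the five-word vocabulary and logits are invented for illustration and not taken from any real model.

```python
import math

# Toy next-token logits for an invented five-word vocabulary.
logits = {"cat": 4.0, "dog": 3.5, "car": 2.0, "tree": 1.0, "the": 0.5}

def softmax_with_temperature(logits, temperature=1.0):
    """Convert logits to probabilities; lower temperature sharpens the distribution."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    return {tok: math.exp(v) / z for tok, v in scaled.items()}

def top_p_filter(probs, top_p=0.9):
    """Keep the smallest set of tokens whose cumulative probability reaches top_p."""
    kept, cumulative = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = p
        cumulative += p
        if cumulative >= top_p:
            break
    z = sum(kept.values())
    return {tok: p / z for tok, p in kept.items()}  # renormalize the kept tokens

for t in (0.5, 1.0, 2.0):
    print(f"temperature={t}:", top_p_filter(softmax_with_temperature(logits, t)))
```

At low temperature the probability mass concentrates on the top token (near-greedy decoding); at high temperature the distribution flattens, and top_p then determines how much of the tail remains available for sampling.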

Building on this foundation, the course will explore advanced topics such as retrieval-augmented generation (RAG), parameter-efficient fine-tuning (PEFT), and the design of multimodal and reasoning-capable models. We will also introduce emerging agentic frameworks, including the Model Context Protocol (MCP), and discuss their potential impact on future AI systems.

Participants are expected to apply course concepts in small experiments or mini-projects, for example using the GWDG LLM platform (https://chat-ai.academiccloud.de/). Results and insights gained from these explorations are to be included in their course summary.
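
For scripted experiments beyond the web interface, the sketch below shows what a call to the GWDG platform could look like, assuming it exposes an OpenAI-compatible chat-completions API; the base URL, model id, and key variable are assumptions to verify against the platform's own documentation.

```python
import os
from openai import OpenAI  # pip install openai

# Assumption: the GWDG platform offers an OpenAI-compatible API.
# Base URL and model id below are placeholders; check the platform docs.
client = OpenAI(
    base_url="https://chat-ai.academiccloud.de/v1",  # assumed endpoint
    api_key=os.environ["GWDG_API_KEY"],              # key obtained from the platform
)

response = client.chat.completions.create(
    model="meta-llama-3.1-8b-instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Explain top_p sampling in two sentences."}],
    temperature=0.7,
    top_p=0.9,
)
print(response.choices[0].message.content)
```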

Course topics #

  1. Understanding transformer-based LLMs & platforms (ChatGPT + …)
    • early approaches in computational linguistics
    • key model features
    • training paradigms + inference parameters
  2. Retrieval-augmented generation (RAG)
  3. Parameter-efficient fine-tuning (PEFT)
  4. Multimodality + reasoning
  5. Agentic AI: model context protocol (MCP) + …

Practical aspects #

  1. Relevant use cases for LLMs
  2. Public platforms for LLM usage & experiments
  3. Advanced usage of LLMs via APIs (e.g., on the command line / in an IDE)
  4. How to host LLMs yourself (see the sketch after this list)
  5. Further reading: courses, tutorials, books, videos, …
  6. Exchange: in course, in AI communities …
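
For item 4 above, a minimal sketch of talking to a self-hosted model: Ollama serves a local REST API on port 11434 by default, and the model tag below is a placeholder for whatever model has been pulled onto the machine.

```python
import requests  # pip install requests

# Ollama's default local endpoint; adjust host/port for a remote server.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",          # placeholder tag; use any pulled model
        "prompt": "What is retrieval-augmented generation?",
        "stream": False,              # return one JSON object instead of a stream
        "options": {"temperature": 0.7, "top_p": 0.9},
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```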

Not in this course: "prompt engineering", Python coding, …

ECTS credit requirements #

  1. Active participation in lessons (e.g., point out errors / omissions!)
  2. Do some extra research / experiments / a mini-project (in groups of 1-3); extra resources on request: OpenAI API keys, LLM server (2x A100)
  3. Include personal insights in (short) course summary
  4. Indicate further personal plans regarding LLMs

Not o.k.: reproduction of lecture notes (personally or by LLM)!

Sneak preview: LLM server (OpenWebUI + Ollama, 2x A100) #

Figure: OpenWebUI in model comparison mode (here a pretrained-only text model vs. a production instruction-tuned model), also showing inference controls such as temperature and top_p.
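
The comparison shown in the figure can also be reproduced from a script by sending the same prompt and settings to both models, again via the (assumed) local Ollama endpoint; the two model tags are placeholders for whichever base/instruct pair is installed on the server.

```python
import requests

# Placeholder tags: substitute the base/instruct pair installed on the server.
MODELS = ["llama3.1:8b-text", "llama3.1:8b-instruct"]
PROMPT = "The capital of Bavaria is"

for model in MODELS:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": PROMPT,
            "stream": False,
            "options": {"temperature": 0.7, "top_p": 0.9},
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(f"--- {model} ---\n{resp.json()['response']}\n")
```

A base model will typically just continue the prompt as free text, while the instruction-tuned model treats it as a question to answer, which is exactly the contrast the figure illustrates.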
