GET /congress/2025/event/82604a07-0548-4ad5-8d0b-2dd7bbf0e0e1/?format=api
HTTP 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": "82604a07-0548-4ad5-8d0b-2dd7bbf0e0e1",
    "kind": "sos",
    "name": "A Primer on LLM Security and Secure LLMOps",
    "slug": "a-primer-on-llm-security",
    "url": "https://api.events.ccc.de/congress/2025/event/82604a07-0548-4ad5-8d0b-2dd7bbf0e0e1/?format=api",
    "track": null,
    "assembly": "sos",
    "room": "ecd57af5-f5fe-4b00-bd18-005375deb4ac",
    "location": null,
    "language": "en, de",
    "description": "## Post-Session Material\r\n\r\nThank you for participating! :)\r\nThe slides are available here: [Slides (PDF) via Slideshare](https://de.slideshare.net/secret/1UNWblpgKTSoSR)\r\n\r\nYou are welcome to get in touch, either here (until Day 4) or via the contact information in the slides.\r\n\r\n## Session\r\n\r\nLarge Language Models (LLMs) have taken the world by storm. Alongside their vast potential, these models also present unique security challenges. This session will serve as a primer on LLM security and secure LLMOps, introducing key issues and concepts related to the security of LLMs and systems relying on them. For example, we will look at issues such as prompt injection, sensitive information disclosure, and risks arising from the interaction of LLMs with the “outside world” (e.g., plugins or APIs, RAG, Agentic AI). Of course, we will also briefly look at how to red-team LLMs.\r\n\r\nThis session is based on previous iterations of “A Primer on LLM Security” at Congress and, based on audience feedback, has been extended and developed further.\r\n\r\n## Target Audience and Required Previous Knowledge\r\n\r\nThis session targets beginners and does not assume (in-depth) knowledge about LLMs. 
If you have prior experience in LLM security and expect insights into the latest developments, this session is most likely not for you.\r\nPlease note that this session will not be about using LLMs in offensive or defensive cybersecurity.\r\n\r\n## Learning Objectives\r\n\r\nFrom a learning perspective, after the session, participants will be able to …\r\n\r\n* describe what LLMs are and how they fundamentally function.\r\n* describe LLMOps and outline fundamental principles of secure LLMOps.\r\n* describe common security issues related to LLMs and systems relying on LLMs.\r\n* describe what LLM red teaming is.\r\n* perform some basic attacks against LLMs to test them for common issues.\r\n\r\n## About Me\r\n\r\nMy name is Ingo, and I am currently responsible for Digital Education and Educational Technology at the University of Cologne. Relevant to this session, I have a background in computational linguistics and have been working with LLMs for quite some time – also prior to the ChatGPT moment. I am also involved in developing and providing AI infrastructure at scale. All of this is embedded within a deep interest in cyber- and information security.\r\n\r\n## Format\r\n\r\nThe session will be split into a 45-minute talk followed by 15 minutes of discussion. Participants will be provided with the slides as well as some resources for further study.\r\n\r\n## Technical Requirements\r\n\r\nAs this will not be a highly hands-on session, there are no technical requirements. If you want to experiment with some of the topics, a device capable of accessing and/or running LLMs is necessary. If you want to “go deeper,” you will need a device – e.g., a laptop – capable of running LLMs locally.\r\n\r\n## Material\r\n\r\nAfter the session, I will provide all materials, including some selected additional resources. All materials will also be provided via this page.\r\n\r\n[Slides (PDF) via Slideshare](https://de.slideshare.net/secret/1UNWblpgKTSoSR)\r\n\r\nPS: This is a slightly updated version of the workshop(s) I gave at previous iterations of Congress.",
    "schedule_start": "2025-12-28T10:30:00+01:00",
    "schedule_duration": "01:20:00",
    "schedule_end": "2025-12-28T11:50:00+01:00"
}