Our teaching activities

Explore our wide array of courses, lectures, and labs. You can find our current offerings listed below. We look forward to exciting and enriching experiences with you!

Our courses (SS24)

 

You can subscribe to our courses here.

Basic Courses

Lecture: Advanced Methods For Text Mining (SS)

This course, offered as part of the Master's Program in Media Informatics at the Bonn-Aachen International Center for Information Technology (B-IT), dives into the sophisticated realm of text mining. Acknowledging text as a prevalent medium of communication, the course grapples with the complexities of textual data mining. Students will embark on a journey through both the foundational theories of data mining and machine learning and the cutting-edge practices in text analysis. From the preliminary discussions on natural language processing to the intricate mechanisms of advanced text mining methods, the course covers a broad spectrum, including Latent Semantic Indexing, Word Embeddings, conventional and resource-efficient Recurrent Neural Networks, Attention Mechanisms, and Transformer architectures. The curriculum extends to practical applications in real-world scenarios, such as natural language inference, information extraction in financial documents, recommender systems in legal and audit sectors, and generative models for digital forensics. A particular focus will be on the exploration of emerging generative language models like GPT-4 and Bloom, highlighting their significance and the nuanced differences that set them apart.

Lecture topics

  1. Introduction to Natural Language Processing: Starting from the basics, this lecture introduces the core concepts of NLP and its evolution leading up to the dominance of Transformer models in the machine learning landscape.
     
  2. Data Mining and Machine Learning Preliminaries: A foundational overview of data mining techniques and machine learning principles, setting the stage for more advanced studies.
     
  3. Advanced Text Mining Methods: Deep dive into sophisticated text mining techniques, including Latent Semantic Indexing, Word Embeddings, and the dynamics of Recurrent Neural Networks for sequential text representation.
     
  4. Attention Mechanisms and Transformer Architectures: Exploration of the groundbreaking Attention Mechanism and Transformer models that have revolutionized natural language processing (a minimal illustrative sketch follows this list).
     
  5. Applications in the Real World: Insight into the application of text mining techniques for natural language inference, information extraction in financial documents, and recommender systems within the legal and audit domains.
     
  6. Generative Models for Digital Forensics: Examination of the role of generative models in digital forensics, showcasing their potential for innovation and problem-solving in the digital age.
     
  7. Emerging Generative Language Models: A focused discussion on the latest advancements in generative language models, including GPT-4 and Bloom, and their unique contributions to the field of text mining.
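
To make topic 4 above more tangible, here is a minimal sketch of single-head scaled dot-product attention in NumPy. It is purely illustrative and independent of the course's exercise material; the function names, array shapes, and toy data are our own choices.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Single-head attention: softmax(Q K^T / sqrt(d_k)) V.

    Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v).
    Returns the attended values (n_queries, d_v) and the attention weights.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

# Toy usage: 3 query tokens attending over 4 key/value tokens.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 16))
out, attn = scaled_dot_product_attention(Q, K, V)
print(out.shape, attn.shape)  # (3, 16) (3, 4)
```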

Lecture & Exercise

Advanced Methods For Text Mining
 

  1. Lecture: Thursday, 14:00-15:30
  2. Exercise: Thursday, 16:15-17:45
  3. Location: b-it, room 0.107
  4. Resources: Github

Lab: Explainable AI and Applications (SS)

In this lab, “Explainable AI and Applications -- Explainability of foundation models for sequential data”, we will start by reproducing existing explainability methods for deep-learning systems (especially foundation models) in the fields of biomedicine and natural language processing, where a foundation model can be thought of as an “average mind” for both domains. We will then encourage lab participants to identify limitations and explore novel solutions through experiments. Students will work in groups on a selected task.

The lab will be held online via Zoom. We offer the lab course for up to five groups with a maximum of four members per group, encouraging collaboration and peer learning.

Lab Activities

  1. Understanding the Landscape: Initiating the course with a comprehensive survey of explainable AI, defining key problems and structuring the foundation for subsequent exploration.
     
  2. Reproducing Key Findings: Students will select research papers to reproduce significant findings, applying theoretical concepts practically to affirm groundbreaking work (see the sketch after this list for a minimal example of such a method).
     
  3. Midterm Milestone: A mid-term presentation provides a platform for students to share their progress, challenges, and insights, enhancing learning through peer feedback.
     
  4. Forging New Paths: In the contribution phase, creativity is paramount as groups develop new ideas, benchmarks, datasets, or methodologies to advance the field of XAI.
     
  5. Showcasing Innovations: Teams present their original contributions, sharing their innovations with classmates and faculty, fostering an environment of intellectual growth and discovery.
     
  6. Reflecting and Reporting: The course concludes with a reflection and reporting phase, documenting the learning journey, findings, and student reflections on the process and outcomes.
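
To give a flavour of the reproduction phase mentioned above, the following is a minimal sketch of one of the simplest explanation techniques, input-gradient (saliency) attribution, applied to a toy logistic-regression model. It is a hedged illustration only: the lab itself targets foundation models, and the weights and inputs below are invented for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def saliency(w, b, x):
    """Input-gradient attribution for a logistic-regression model p = sigma(w.x + b).

    Returns dp/dx_i for each input feature; a large magnitude means the
    prediction is locally sensitive to that feature.
    """
    p = sigmoid(w @ x + b)
    return p * (1.0 - p) * w  # chain rule: sigma'(w.x + b) * w

# Toy usage with hand-picked weights and input (purely illustrative).
w = np.array([2.0, -1.0, 0.0, 0.5])
b = -0.2
x = np.array([1.0, 1.0, 5.0, 0.0])
grads = saliency(w, b, x)
print(np.abs(grads))  # the third feature (weight 0) gets zero attribution despite its large value
print(x * grads)      # "gradient x input" variant often used in practice
```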

This course represents an opportunity for students in the field of machine learning to learn more about the interpretability of AI systems.

Lab

Explainable AI and Applications
 

  1. Regular meeting: Friday, 14:00-15:30
  2. Location: Online
  3. Resources: Github

Seminar: Theory of Deep Learning (SS)

This seminar, part of the advanced studies in the Master's Program in Media Informatics at the Bonn-Aachen International Center for Information Technology (B-IT), focuses on the theoretical underpinnings of deep neural networks, particularly exploring the infinite width limit. This limit provides an analytically tractable way to examine neural network properties, enhancing our understanding of aspects like data recognition, generalizability metrics, and structurally significant features of neural networks. Participants will delve into the connection between Gaussian processes, kernel learning, and neural networks; the role of the neural tangent kernel in training; the universal language of tensor programs for architecture-independent insights; and the application of these theories to enhance model generalizability.

Seminar Topics

  1. Introduction to Neural Network Theory: Covers the basics of neural network properties under the infinite width limit and its implications for theory and practice.
     
  2. Gaussian Processes and Kernel Learning: Examination of the relationship between Gaussian processes, kernel learning, and deep neural networks.
     
  3. Neural Tangent Kernel and Training Dynamics: Analysis of how the neural tangent kernel influences neural network training and its implications for learning speed and efficiency (a minimal computation of the empirical kernel is sketched after this list).
     
  4. Tensor Programs in Neural Networks: Exploration of tensor program language as a model for deriving insights across different neural network architectures.
     
  5. Generalizability and Theoretical Applications: Insights into how theoretical advancements translate into improved generalizability in practical applications.
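
As a concrete companion to topic 3, the sketch below computes the empirical neural tangent kernel, i.e. the inner product of parameter gradients ∇θ f(x) · ∇θ f(x′), for a tiny one-hidden-layer network, with the gradients written out by hand in NumPy. It is a simplified, hedged illustration rather than part of the assigned reading; the architecture, initialisation scaling, and names are our own assumptions.

```python
import numpy as np

def param_grads(W, v, x):
    """Gradient of the scalar output f(x) = v . tanh(W x) w.r.t. all parameters.

    Returns a flat vector: df/dW (hidden_dim x input_dim entries) followed by df/dv.
    """
    h = np.tanh(W @ x)                                 # hidden activations
    dv = h                                             # df/dv_j = tanh(w_j . x)
    dW = ((1.0 - h ** 2) * v)[:, None] * x[None, :]    # df/dW_jk = v_j (1 - h_j^2) x_k
    return np.concatenate([dW.ravel(), dv])

def empirical_ntk(W, v, x1, x2):
    """Empirical NTK: inner product of parameter gradients at two inputs."""
    return param_grads(W, v, x1) @ param_grads(W, v, x2)

# Toy usage: random width-512 network, two random inputs.
rng = np.random.default_rng(0)
hidden, dim = 512, 10
W = rng.normal(size=(hidden, dim)) / np.sqrt(dim)   # 1/sqrt(fan-in) scaling (our choice)
v = rng.normal(size=hidden) / np.sqrt(hidden)
x1, x2 = rng.normal(size=dim), rng.normal(size=dim)
print(empirical_ntk(W, v, x1, x2))
```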
     

Organization and Participation

The seminar is limited to 10 students to ensure intensive learning and interaction. Each student will be assigned several scholarly articles on the theory of deep neural networks. A month of preliminary reading will help participants prepare to discuss and present their assigned topics. Additional follow-up meetings with the instructor and up to three individual sessions will facilitate deeper understanding. Interested students must email amllab@bit.uni-bonn.de by 5th March 2024 with the subject 'Seminar Registration' and a summary of their previous courses.

Seminar Schedule

  1. Kickoff Meeting: Scheduled for 15:30 on 6th May 2024 at B-IT, room 0.109 (Friedrich-Hirzebruch-Allee 6, 53115 Bonn).
     
  2. Paper Assignments and Admissions Announcement: 13th May 2024.
     
  3. Final Presentations: Set for 13:00 on 29th August 2024 in room 2.113 at B-IT.
     

Prerequisites

Participants are expected to have completed at least one course in machine learning or a related AI field. They should possess a solid foundation in AI, data science, machine learning, and pattern recognition, along with programming skills. Proficiency in statistics, linear algebra, and optimization is required, with particular emphasis on deep neural networks, kernel methods, and gradient descent-based optimization.

Seminar

Theory of Deep Learning
 

  1. Kickoff Meeting: Monday, 6th of May, 15:30, b-it, room 0.109
  2. Final Presentation: Thursday, 29th of August, 13:00, b-it, room 2.113

Lecture: Advanced Methods For Text Mining (WS)

The course content and lecture topics of this winter-semester offering are identical to those of the summer-semester edition of Advanced Methods For Text Mining described above.

Lecture & Exercise

Mining Media Data I
 

  1. Lecture: Not offered this semester
  2. Exercise: Not offered this semester
  3. Location: Not offered this semester
  4. Resources: Github