Session 3: Basics of AI

Date: 2026-03-30

We continue with the history of AI, then move on to basic concepts and terminology and the main paradigms of learning.

Plan

Results from last week

  • Discuss the plan with MikeBot3000 and decide if it is a good project for the course.
  • Compare the students' findings on the history of AI and discuss the main milestones and inventions that led to generative AI.
  • Fill in the gaps about the AI history events here.
  • Quiz from the audio sample and discuss the results.
  • Understanding the learning paradigms: supervised, reinforcement, and self-supervised learning.

Quiz: Guess the decade of this audio sample

  • Play the audio sample and let students guess the decade it is from.

History of AI - Milestones (continued)

  • General idea of deep learning.
  • We differentiate between reinforcement, supervised and self-supervised learning.
  • What happened in the last few years after GPT-3?

Learning paradigms: supervised, reinforcement, and self-supervised learning

  • We discuss the differences between supervised, reinforcement, and self-supervised learning.
  • We discuss a classroom exercise that helps students build an intuitive understanding of these learning paradigms.
  • We first find out how a human would learn the same skill using an equivalent approach.
  • Then we run and understand small implementations and simulations for each learning paradigm.
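As a minimal sketch of what such a small simulation could look like for the reinforcement learning paradigm, the following toy example (an assumption for illustration, not part of the course materials) lets an epsilon-greedy agent learn which of three slot machines pays best, purely from reward feedback:

```python
import random

def pull(arm, payout_probs):
    """Simulated environment: return reward 1 with the arm's payout probability, else 0."""
    return 1 if random.random() < payout_probs[arm] else 0

def run_bandit(payout_probs, steps=5000, epsilon=0.1, seed=0):
    random.seed(seed)
    n_arms = len(payout_probs)
    counts = [0] * n_arms      # how often each arm was pulled
    values = [0.0] * n_arms    # running mean reward per arm
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.randrange(n_arms)    # explore a random arm
        else:
            arm = values.index(max(values))   # exploit the best-looking arm
        reward = pull(arm, payout_probs)
        counts[arm] += 1
        # incremental update of the running mean
        values[arm] += (reward - values[arm]) / counts[arm]
    return values

# True payout rates are hidden from the agent; it learns them from rewards alone.
estimates = run_bandit([0.2, 0.5, 0.8])
```

After enough steps the agent's value estimates approximate the true payout rates, and the best arm ends up with the highest estimate. A nice classroom parallel: a human learning the same skill would also just keep trying machines and remember which one pays off.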

Materials

Code examples

Audio samples for the course

Another secret speech about AI, and students have to guess the year.

Audio sample 01 (Author: Maximilian Schönherr, Source: Wikipedia: Geschichte der künstlichen Intelligenz, Hans-Werner Hein, 1986)

Sources for shown image and video samples

  • Lectures by Geoffrey Hinton and Yann LeCun on "The Deep Learning Revolution" from 2019.

Results

Remarks about the audio quiz

It was not expected that the speech is from 1986. The idea of these historic examples is to show that many ideas and approaches are much older than we think. That is why it is important to learn about the history of AI: it provides context for the development of the field and leads to a better understanding of the current state of the art and potential future directions.

Notes from the lectures

Geoffrey Hinton's lecture

  • Two paradigms: logic-inspired vs biological-inspired approach
    • Logic-inspired approach: symbolic AI, expert systems, rule-based systems, knowledge representation, reasoning, planning, etc.
    • Biological-inspired approach: connectionism, neural networks, deep learning, etc.
  • The central question for neural networks is how to learn the weights. Rosenblatt showed that learning from examples is possible, but only backpropagation, developed in the 1980s, made it possible to train multi-layer neural networks and led to the deep learning revolution.
  • Backpropagation checks whether changing a weight in the network would lead to a better or worse output and updates the weights accordingly. This is not done weight by weight but in parallel for all weights: the gradients of the loss function with respect to the weights are computed and used to update all weights in the direction that minimizes the loss.
  • There are three paradigms to train a neural network: supervised learning, self-supervised learning, and reinforcement learning.
  • In the machine learning community of the 1990s and early 2000s, it was assumed that a neural network cannot learn just from data and needs some kind of prior knowledge or structure to learn.
  • Between 2008 and 2014 the deep learning methods took over natural language processing as well as computer vision. The attention mechanism was a major breakthrough to understand words based on their context.
  • Humans use coordinate frames for visual understanding, but neural nets do not.
  • Humans use a long-term and a short-term memory, but neural nets only train the long-term memory. There is currently no mechanism for a short-term memory in neural nets, but it is an active area of research.
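The gradient update described above (all weights adjusted in parallel, in the direction that reduces the loss) can be sketched for a single linear neuron with squared loss. This is a minimal illustration under simplifying assumptions, not code from the lecture:

```python
def predict(weights, bias, x):
    """Output of a single linear neuron."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def gradient_step(weights, bias, x, target, lr=0.1):
    """One gradient-descent step on loss = 0.5 * (y - target)**2."""
    y = predict(weights, bias, x)
    error = y - target                        # dLoss/dy
    grads = [error * xi for xi in x]          # dLoss/dw_i = error * x_i
    # all weights are updated at once, each against its own gradient
    new_weights = [w - lr * g for w, g in zip(weights, grads)]
    new_bias = bias - lr * error              # dLoss/db = error
    return new_weights, new_bias

# Repeated steps drive the prediction toward the target value.
w, b = [0.0, 0.0], 0.0
for _ in range(100):
    w, b = gradient_step(w, b, x=[1.0, 2.0], target=3.0, lr=0.1)
```

In a real multi-layer network, backpropagation applies the chain rule to compute these per-weight gradients through all layers, but the update rule per weight is the same idea.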

Yann LeCun's lecture

  • Supervised learning works well if many labeled examples are available.
  • Convolutional neural networks explained
  • Panoptic Feature Pyramid Networks - predecessor of segment anything model (SAM)?
  • Reinforcement learning works great but needs a long time to train, and many applications cannot easily be simulated, which makes training a reinforcement learning model hard. It is nevertheless a very powerful paradigm for learning from interaction with the environment.
  • Humans and animals are able to learn concepts like gravitation just from observing the world and interacting with it. This is a form of self-supervised learning.
  • Self-supervised learning can basically be seen as prediction and reconstruction: pretend that there is a part of the input you do not know and try to predict it from the rest of the input.
  • (stopped watching the lecture at 1:02 h:mm)
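The "hide a part of the input and predict it from the rest" idea above can be illustrated with a toy bigram model (an assumption for illustration, not LeCun's example): the training signal comes from the data itself, with no human-provided labels.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word follows which.
# The labels are just the next words in the data itself -- self-supervision.
following = defaultdict(Counter)
for left, right in zip(corpus, corpus[1:]):
    following[left][right] += 1

def predict_masked(left_word):
    """Predict a hidden word from its visible left neighbor."""
    return following[left_word].most_common(1)[0][0]

print(predict_masked("the"))  # prints "cat", the most frequent follower of "the"
```

Modern language models do the same thing at vastly larger scale: mask or truncate part of the text and learn to reconstruct it from context.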

Next homework

  • Watch the end of the lecture of Yann LeCun and take notes about the main points and concepts.
  • Work on the first task of this sheet at home and prepare examples to be discussed in the next session.