# Session 2: History and basics of AI
Date: 2026-03-23
Uwe Hahne is not available for this session, so students work individually on some tasks.
Write your results from all tasks into a single document (PDF or Markdown, in German or English) and upload it to FELIX.
## Task 1: Review and plan
Review the results of the first session and check whether the summary of what students want to learn and create is correct. Discuss whether there are any changes or additions to the list. Also discuss whether reproducing a project like MikeBot3000 (video) would be interesting for the course, as it would cover many of the topics students want to learn and create.
## Task 2: History of AI

### Goal

We want to get an overview of the history of AI, its basic concepts and terminology, and the models and inventions that led to generative AI.

### Task description
Listen to the re-recorded radio lecture by Alan Turing. It was broadcast in 1951 and gives a good impression of the state of AI at that time and of the potential Turing saw in it. Take notes on the main points of the lecture. Collect the technical terms and concepts mentioned in the lecture and look up any you are not familiar with. Write a short summary of the lecture and your thoughts about it.
Check out Chapter 2 - A brief history of deep learning of Sebastian Raschka's lecture. Take notes on the main points of the lecture and the milestones in the history of AI. Write a short summary of the lecture and your thoughts about it. Note that the lecture is from February 2021, so it does not cover the latest developments in generative AI, but it gives a good overview of the history and the main concepts. Which big inventions published after that lecture are not covered in it?
For comparison, watch the first lectures of Alfredo Canziani's course Introduction to Deep Learning Research to get a slightly different and more recent perspective on the history of AI and deep learning. You can start the first lecture at about 7:50, where the history of AI is covered, but you can also watch the whole lecture, as his way of teaching is exemplary. Also skim through lectures 02 and 03 to get a better understanding of the history of deep learning. Find out how the work of McCulloch and Pitts was combined with the ideas of Norbert Wiener and Claude Shannon in Frank Rosenblatt's first learning algorithm for a single-layer neural network.
Find out how the critique in the book "Perceptrons" by Marvin Minsky and Seymour Papert led to the AI winter, and how the backpropagation algorithm later led to the deep learning revolution in 2012.
Finally, research the following milestones in AI history and write a short summary for each of them.
### AI history events

#### McCulloch & Pitts (1943)
McCulloch and Pitts proposed a mathematical model of a neuron as a binary threshold unit. They showed that networks of such units can compute the logical functions AND, OR, and NOT, a fundamental result for the theory of neural networks.
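To make the idea concrete, here is a minimal sketch of such a threshold unit in Python. The weights and thresholds are chosen by hand to realize AND, OR, and NOT; this is an illustration of the concept, not a historical reproduction of the 1943 formalism.

```python
def mp_neuron(inputs, weights, threshold):
    """Binary threshold unit: fires (1) iff the weighted input sum
    reaches the threshold, else stays silent (0)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

def AND(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    # A negative (inhibitory) weight inverts the input.
    return mp_neuron([a], weights=[-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```

Because the unit only compares a weighted sum against a fixed threshold, everything it can compute is a linear separation of its inputs, which is exactly the limitation Minsky and Papert later made precise.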
#### Rosenblatt's Perceptron (1958)
Rosenblatt developed the first learning algorithm for a single-layer neural network, called the perceptron. The perceptron could learn to classify linearly separable patterns, but it could not learn more complex ones. The fundamental idea of letting a computer learn from data was a major breakthrough and inspired further research in machine learning and neural networks.
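A sketch of the perceptron learning rule: whenever the unit misclassifies an example, the weights are nudged toward the correct answer. On a linearly separable problem such as AND (a toy example chosen here for illustration), the rule is guaranteed to converge.

```python
def predict(w, b, x):
    """Threshold unit with learnable weights w and bias b."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train_perceptron(data, lr=0.1, epochs=20):
    """Perceptron rule: w += lr * (target - prediction) * x."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(w, b, x)  # -1, 0, or +1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# AND is linearly separable, so the perceptron learns it.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
print([predict(w, b, x) for x, _ in and_data])  # → [0, 0, 0, 1]
```

Training the same loop on the XOR truth table never converges, no matter how many epochs you allow, which is the limitation discussed in the next milestone.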
#### Minsky and Papert - Critique of Perceptrons (1969)
Minsky and Papert published a book called "Perceptrons" where they analyzed the limitations of the perceptron and showed that it could not learn to classify certain patterns, such as the XOR problem. This critique led to a decline in interest and funding for neural network research, which is often referred to as the "AI winter".
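The XOR argument can be checked in a few lines. A single threshold unit with weights $w_1, w_2$ and bias $b$ that computes XOR would have to satisfy four conditions at once:

```latex
\begin{aligned}
f(0,0)=0 &\;\Rightarrow\; b \le 0\\
f(1,0)=1 &\;\Rightarrow\; w_1 + b > 0\\
f(0,1)=1 &\;\Rightarrow\; w_2 + b > 0\\
f(1,1)=0 &\;\Rightarrow\; w_1 + w_2 + b \le 0
\end{aligned}
```

Adding the two middle inequalities gives $w_1 + w_2 + 2b > 0$, so $w_1 + w_2 + b > -b \ge 0$, which contradicts the last condition. Hence no single-layer threshold unit can represent XOR, regardless of the learning algorithm.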
#### Backpropagation
The backpropagation algorithm, developed in the 1980s, allowed for the training of multi-layer neural networks by efficiently computing the gradients of the loss function with respect to the weights. This was a major breakthrough that enabled the development of deep learning models and led to significant improvements in performance on various tasks.
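The core of backpropagation is applying the chain rule layer by layer, from the loss back to every weight. The sketch below (all sizes, parameter values, and names are illustrative, not from any particular lecture) derives the gradients of a tiny 2-2-1 sigmoid network by hand and checks one of them against a finite-difference estimate:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(params, x):
    """Tiny 2-2-1 sigmoid network; also returns the hidden activations."""
    W1, b1, W2, b2 = params
    z1 = [sum(W1[j][i] * x[i] for i in range(2)) + b1[j] for j in range(2)]
    h = [sigmoid(z) for z in z1]
    z2 = sum(W2[j] * h[j] for j in range(2)) + b2
    return h, sigmoid(z2)

def loss(params, x, t):
    _, y = forward(params, x)
    return 0.5 * (y - t) ** 2

def backprop(params, x, t):
    """Chain rule, layer by layer, from the loss back to each weight."""
    W1, b1, W2, b2 = params
    h, y = forward(params, x)
    d_z2 = (y - t) * y * (1 - y)                   # dL/dz2 (sigmoid')
    g_W2 = [d_z2 * h[j] for j in range(2)]         # dL/dW2
    g_b2 = d_z2
    d_z1 = [d_z2 * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
    g_W1 = [[d_z1[j] * x[i] for i in range(2)] for j in range(2)]
    return g_W1, d_z1, g_W2, g_b2

# Gradient check: the backprop result for one weight should match a
# finite-difference estimate of the same derivative.
params = ([[0.3, -0.2], [0.1, 0.4]], [0.0, 0.1], [0.5, -0.3], 0.2)
x, t = (1.0, 0.0), 1.0
g_W1, g_b1, g_W2, g_b2 = backprop(params, x, t)

eps = 1e-6
bumped = ([[0.3, -0.2], [0.1, 0.4]], [0.0, 0.1], [0.5 + eps, -0.3], 0.2)
numeric = (loss(bumped, x, t) - loss(params, x, t)) / eps
print(abs(numeric - g_W2[0]) < 1e-5)
```

The key efficiency point is that one backward pass yields the gradients for all weights at once, whereas the finite-difference approach needs one extra forward pass per weight.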
#### Deep Learning Revolution (2012)
In 2012, a deep convolutional neural network called AlexNet, developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, achieved a significant improvement in performance on the ImageNet classification task. This success sparked a surge of interest and research in deep learning, leading to the development of many new architectures and applications in various domains.
## Task 3: Reflection
Read through the notes you took during the video lectures again. Reflect on the following questions:
- How often did I pause the video to take notes and look up concepts? Did I understand the content without pausing, or did I need to pause frequently to understand the concepts?
- Which concepts were new to me, and which ones did I already know? Which concepts were the most difficult to understand, and which ones were easier? Which ones didn't I understand at all?
- Which lecturer was easier for me to understand, and why? Did I prefer the style of one lecturer over the other? Did I find one lecture more engaging or informative than the other? Are there other lectures or resources that helped me understand the history of AI better?
- Do I think that I can learn more about generative AI from lectures like these? Which activities in class do I think are more helpful for learning about generative AI? Do I prefer lectures, discussions, hands-on activities, or something else? How can we design the sessions in this course to make them more engaging and effective for learning about generative AI?