History of AI - Joshua Aleth¶
McCulloch & Pitts (1943)¶
McCulloch and Pitts were the first to describe a neural network mathematically. This led to the development of the McCulloch-Pitts cell. If I understand correctly, the key point is that a single neuron is like a tiny computer because it makes a yes/no (0/1) decision: it fires only when its inputs reach a threshold. Since many such neurons are connected together, the result is a complex system.
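A minimal sketch of this idea in Python (my own illustration, not code from the paper): the cell simply compares the sum of its binary inputs to a threshold.

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts cell: fire (output 1) if the sum of the
    binary inputs reaches the threshold, otherwise output 0."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, threshold 2 gives logical AND, threshold 1 gives OR.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "AND:", mp_neuron([x1, x2], 2),
                      "OR:", mp_neuron([x1, x2], 1))
```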
Sources¶
- Wikipedia
- https://blog.hnf.de/mcculloch-und-pitts-das-erste-neuronale-netz/
- https://maelfabien.github.io/deeplearning/Perceptron/#the-mcculloch-pitts-neuron-1943
Rosenblatt's Perceptron (1958)¶
The perceptron is the first neural network that could learn. It could not learn really complex things, but unlike the McCulloch-Pitts cell it could take inputs that were not Boolean. The inputs the network receives are combined through learned weights rather than the fixed rules of McCulloch & Pitts.
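A small sketch of the perceptron learning rule in Python (my own illustration; the learning rate and the OR training data are arbitrary choices): whenever the prediction is wrong, the weights are nudged toward the target.

```python
# Perceptron learning rule on a linearly separable problem (logical OR).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.1  # learning rate

for epoch in range(10):
    for (x1, x2), target in data:
        prediction = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        error = target - prediction  # -1, 0, or +1
        w1 += lr * error * x1        # nudge weights toward the target
        w2 += lr * error * x2
        b += lr * error

print("learned:", w1, w2, b)
for (x1, x2), _ in data:
    print((x1, x2), "->", 1 if w1 * x1 + w2 * x2 + b > 0 else 0)
```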
Source¶
Minsky and Papert - Critique of Perceptrons (1969)¶
Minsky and Papert argued that Rosenblatt's perceptrons (and similar models) were too limited; famously, a single-layer perceptron cannot even compute XOR. This shattered hopes for advanced technology like a machine that sees and hears, and it contributed to the onset of the AI winter.
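A quick brute-force check in Python (my own illustration) makes the XOR limitation concrete: no single threshold unit over a coarse grid of weights reproduces XOR, because its outputs are not linearly separable.

```python
import itertools

# XOR truth table: output 1 iff exactly one input is 1.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Search a grid of weights and thresholds for a single linear threshold
# unit that reproduces XOR. None exists: the positive and negative
# examples cannot be separated by one line.
grid = [i / 10 for i in range(-20, 21)]  # -2.0 ... 2.0
found = any(
    all((w1 * x1 + w2 * x2 > t) == bool(y) for (x1, x2), y in xor.items())
    for w1, w2, t in itertools.product(grid, repeat=3)
)
print("single-layer solution for XOR found:", found)  # -> False
```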
Sources¶
- https://maelfabien.github.io/deeplearning/Perceptron/#the-mcculloch-pitts-neuron-1943
- https://inamdaraditya.medium.com/the-perceptron-paradox-how-minsky-and-papert-exposed-the-limits-of-early-ai-78f93f450dc6
Backpropagation¶
Backpropagation was the solution to the problems Minsky and Papert pointed out → the end of the AI winter. Learning now works across several layers: errors are propagated backwards through the network, so the AI can now learn complex, non-linear systems.
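A minimal sketch of backpropagation with NumPy (my own illustration, assuming a small 2-4-1 sigmoid network and a squared-error loss): the output error is pushed back through the hidden layer, which is exactly what lets the network learn XOR, the problem a single layer could not solve.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# 2 inputs -> 4 hidden units -> 1 output
W1 = rng.normal(size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)
lr = 0.5  # learning rate

for step in range(20000):
    # forward pass through both layers
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: the output error is propagated back to the hidden layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient step on all weights and biases
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should be close to [0, 1, 1, 0]
```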
Source¶
Deep Learning Revolution (2012)¶
Due to technological advances such as GPUs and the availability of large datasets, deep networks could finally be trained effectively, leading to today's breakthroughs such as text-to-image and speech-to-text.
Source¶
Side results¶
simple neurons → learning → failure → solution → breakthrough
Turing Broadcast¶
It's a discussion primarily about comparing the brain to a computer/machine. Turing argues, through various approaches, that looking at the brain itself is not sufficient to determine whether something thinks; rather, one should assess whether one can distinguish a machine from a human in a given moment during a dialogue. The comparison between the two men is interesting; I don't fully understand it myself, but it is meant to build a bridge to the topic. I understand the point, but not why it should count as proof.
Chapter two of Sebastian Raschka (slides)¶
- Artificial neurons are simplified models of biological neurons that take weighted inputs and produce an output.
- Multilayer neural networks consist of multiple layers of neurons that can detect more complex patterns; by combining several layers they can solve non-linear problems (see the sketch after this list).
- Deep learning uses neural networks with many layers to solve complex tasks such as image and speech recognition.
- Deep learning requires powerful GPUs and software frameworks such as TensorFlow or PyTorch.
- Current research focuses on more efficient models, reducing data requirements, and new applications like generative AI.
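As a concrete picture of the "weighted inputs" and "multiple layers" points, here is a minimal multilayer network in PyTorch, one of the frameworks the slides mention (the layer sizes are my own arbitrary choice):

```python
import torch
from torch import nn

# A small multilayer network: each nn.Linear applies weighted inputs plus
# a bias, and the ReLU non-linearity between the layers is what allows
# the model to represent non-linear patterns.
model = nn.Sequential(
    nn.Linear(784, 128),  # e.g. a flattened 28x28 image as input
    nn.ReLU(),
    nn.Linear(128, 10),   # e.g. 10 output classes
)

x = torch.randn(1, 784)  # one fake input sample
print(model(x).shape)    # -> torch.Size([1, 10])
```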
Alfredo Canziani¶
It's an overview of the McCulloch and Pitts topic. He first describes it as the historical event and then comes to its mathematical consequences, i.e. the possibilities.
AND, OR, NOT: the binary neuron. He first describes it and the possibilities that result from it, and then he goes deeper into the mathematical process behind them.
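To complete the AND/OR picture from above: in the McCulloch-Pitts model, NOT can be built with an inhibitory input, which vetoes firing. A sketch of one common formulation (my own illustration, not Canziani's exact notation):

```python
def mp_neuron(excitatory, inhibitory, threshold):
    """McCulloch-Pitts cell with inhibition: any active inhibitory
    input blocks firing; otherwise the cell fires when the sum of
    excitatory inputs reaches the threshold."""
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# NOT: no excitatory inputs, threshold 0, and x as an inhibitory input.
for x in (0, 1):
    print("NOT", x, "=", mp_neuron([], [x], 0))
```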
Personal Reflection¶
The first video (the radio lecture of Alan Turing) was easy to understand, with a good speed and good English.
The second videos (the lectures of Sebastian Raschka) were the hardest for me. I had to stop multiple times in one lecture and research a lot of things in the pauses, so I ended up reading just the slides. In the end, lessons L2.4 and L2.5 were easy to watch and understand, but L2.2 and L2.3 were not really friendly for me: without background knowledge it was hard to understand the graphics or concepts. So my solution was to stop the videos, go to the slides, and research the things myself.
The third video (Alfredo Canziani's course Introduction to Deep Learning Research) had the better lecturer for me. It was easy to understand except for some mathematical calculations, and it was easy to follow him without falling out of the lecture; especially the examples and the direct link to the neuron were simple.
I was familiar with McCulloch's concept, not by that name, but with its clear rules and the 0/1 output. Everything else was new to me. While some things were somewhat familiar from listening, I didn't know the details.
For the course, I would be happy if there were many discussions, such as each person working on a question individually and then discussing it at the end.