Why deep learning works well
Backpropagation does not occur in the human brain.
Neural networks are function approximators that stack affine transformations followed by nonlinear transformations.
- Affine transformations: https://en.wikipedia.org/wiki/Affine_transformation, https://youtu.be/DSmXIYkp024
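The "stack of affine + nonlinear transformations" idea above can be sketched as a tiny multilayer perceptron. This is a minimal NumPy illustration, not any particular library's API; the layer sizes and the choice of ReLU as the nonlinearity are assumptions for the example.

```python
import numpy as np

def relu(x):
    # Nonlinear transformation, applied elementwise
    return np.maximum(0.0, x)

def mlp(x, params):
    """Stack of affine maps (W @ x + b), each followed by a nonlinearity."""
    for W, b in params[:-1]:
        x = relu(W @ x + b)   # affine transformation, then nonlinearity
    W, b = params[-1]
    return W @ x + b          # final affine layer, no nonlinearity

# Example: input dim 4 -> hidden dim 8 -> output dim 1 (sizes are arbitrary)
rng = np.random.default_rng(0)
params = [(rng.standard_normal((8, 4)), np.zeros(8)),
          (rng.standard_normal((1, 8)), np.zeros(1))]
y = mlp(rng.standard_normal(4), params)
print(y.shape)  # (1,)
```

Without the nonlinearity between layers, the stack would collapse into a single affine map, which is why the nonlinear step is essential to the approximation power.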
2012 - AlexNet
2013 - DQN
2014 - Encoder / Decoder, Adam Optimizer
2015 - Generative Adversarial Network, Residual Networks
2016 -
2017 - Transformer
2018 - BERT (fine-tuned NLP models)
2019 - Big Language Models
2020 - Self Supervised Learning