Autoencoders: Introduction and Practical Applications
Autoencoders are probably the first neural networks that come to mind when we think of unsupervised learning. They are widely used in many areas, especially representation learning. An autoencoder is trained via backpropagation to learn an approximation of the identity mapping, y ≈ x. In this article, I am going to explain what autoencoders are and demonstrate how they are used for representation learning.
Concept
Autoencoders are neural networks designed to learn an identity function in an unsupervised manner. Obviously, simply multiplying the input by 1 would do the job; the important feature of an autoencoder is its bottleneck layer, illustrated as a red box in the figure above. As seen, the bottleneck layer provides a compressed, low-dimensional representation of the input that captures its most important information, which makes it useful in many applications. Thus, autoencoders can be used for dimensionality reduction, much as we would use Principal Component Analysis (PCA). Besides the bottleneck layer, an autoencoder consists of an encoder network and a decoder network.
The encoder takes the input and maps it to a low-dimensional latent representation z.
The decoder takes the latent representation z and reconstructs the original input.
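As a minimal sketch of these two parts, here is a tiny linear autoencoder trained with plain gradient descent in NumPy. All of the sizes, the learning rate, and the synthetic data are my own illustrative choices, not details from the article; a real autoencoder would typically add nonlinear activations and use a deep learning framework.

```python
import numpy as np

# Minimal linear autoencoder, trained by gradient descent on the mean-squared
# reconstruction error. Hyperparameters below are illustrative choices.
rng = np.random.default_rng(0)

n, d, k = 200, 8, 2                    # samples, input dim, bottleneck dim
# Synthetic data that lies on a k-dimensional subspace of R^d, so a
# k-dimensional bottleneck can represent it with little loss.
X = rng.normal(size=(n, k)) @ rng.normal(size=(k, d))

W_enc = rng.normal(scale=0.1, size=(d, k))   # encoder weights: x -> z
W_dec = rng.normal(scale=0.1, size=(k, d))   # decoder weights: z -> x_hat
lr = 0.01

def reconstruction_loss(X, W_enc, W_dec):
    Z = X @ W_enc                      # encoder: compress input to latent z
    X_hat = Z @ W_dec                  # decoder: reconstruct the input
    return np.mean((X - X_hat) ** 2)

initial = reconstruction_loss(X, W_enc, W_dec)
for _ in range(1000):
    Z = X @ W_enc
    X_hat = Z @ W_dec
    err = X_hat - X                    # derivative of the squared error
    grad_dec = Z.T @ err / n           # gradient w.r.t. decoder weights
    grad_enc = X.T @ (err @ W_dec.T) / n   # gradient w.r.t. encoder weights
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
final = reconstruction_loss(X, W_enc, W_dec)
# The reconstruction error should drop as the network learns to copy its input.
```

With only linear layers and a squared-error loss, this model learns the same subspace PCA would recover, which is exactly the analogy drawn above; nonlinear activations are what let real autoencoders go beyond PCA.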