Autoencoders (AE)

AEs are a latent-variable model that maps data from a high-dimensional input space to a lower-dimensional latent space in order to obtain a compact, meaningful representation of the input data.

AEs have an architecture similar to FFNNs; their main difference is automatic data compression (encoding). AEs have hidden layers smaller than the input, arranged symmetrically around the middle layer(s), with only a few nodes in the middle where the data is most compressed. The layers to the left of this bottleneck (the middle layer) form the encoder, and the layers to its right (toward the output) form the decoder.
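The symmetric shape described above can be sketched as a stack of weight matrices. This is a minimal illustration, not a full implementation: the layer sizes (8 → 4 → 2 → 4 → 8) and the tanh nonlinearity are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes, symmetric around a 2-node bottleneck:
# input 8 -> 4 -> 2 (most compressed) -> 4 -> output 8.
sizes = [8, 4, 2, 4, 8]

# One weight matrix per layer transition, small random init.
weights = [rng.normal(0, 0.1, size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x):
    """Pass x through every layer; return the activation of each layer."""
    a = x
    activations = [a]
    for W in weights:
        a = np.tanh(a @ W)  # nonlinearity between layers
        activations.append(a)
    return activations

x = rng.normal(size=(1, 8))            # one sample in the 8-D input space
acts = forward(x)
code = acts[len(sizes) // 2]           # bottleneck activation: the 2-D latent code
reconstruction = acts[-1]              # decoder output, back in 8-D
print(code.shape, reconstruction.shape)  # (1, 2) (1, 8)
```

Everything left of `code` in the stack acts as the encoder; everything right of it acts as the decoder.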

AEs can be trained with backpropagation by feeding in the input and defining the error as the difference between the input and the reconstruction at the output. AEs can also be made symmetric in their weights, so that the decoding weights are the transpose of the encoding weights (tied weights).
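The training procedure above can be sketched with a single-hidden-layer linear autoencoder whose decoder reuses the encoder weights transposed (tied weights), trained by gradient descent on the reconstruction error. The toy data, the 2-D bottleneck, and the learning rate are all assumptions made for the example; the gradient is written out by hand rather than using an autodiff library.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8-D that lie near a 2-D subspace,
# so a 2-D bottleneck can reconstruct them well (illustrative choice).
Z = rng.normal(size=(200, 2))
B = rng.normal(size=(2, 8))
X = Z @ B + 0.01 * rng.normal(size=(200, 8))

# Tied weights: encode h = x W, decode x_hat = h W.T,
# so the decoding weights are the transpose of the encoding weights.
W = rng.normal(0, 0.1, size=(8, 2))
lr = 0.01

def loss(W):
    """Reconstruction error: mean squared difference between input and output."""
    Xh = (X @ W) @ W.T
    return np.mean(np.sum((Xh - X) ** 2, axis=1))

initial = loss(W)
for _ in range(2000):
    H = X @ W                      # encode
    Xh = H @ W.T                   # decode with the same (transposed) weights
    G = 2 * (Xh - X) / X.shape[0]  # d(loss)/d(Xh)
    # The gradient has two terms because W appears in both encoder and decoder.
    grad = X.T @ (G @ W) + G.T @ (X @ W)
    W -= lr * grad                 # gradient-descent step
final = loss(W)
print(initial, final)  # reconstruction error drops as training proceeds
```

In practice one would use an autodiff framework and nonlinear layers; this sketch only makes the input-vs-reconstruction error signal and the weight tying explicit.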
