Autoencoders
- Autoencoders are neural networks trained to reconstruct their input at their output.
- The reconstruction will not be perfect; as a result, the autoencoder is "lossy".
- The autoencoder architecture compresses the information in the input into a "latent" space.
- Autoencoders are considered to be trained in an unsupervised fashion: the input itself serves as the training target, so no labels are needed.
- The latent space (or code, or bottleneck):
  - contains all the information necessary to reconstruct the input.
  - can be considered a set of features that are not correlated with each other.
- In contrast, the input may be full of correlated features.
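The compress-then-reconstruct idea above can be sketched with a minimal linear autoencoder trained by plain gradient descent; the toy data, sizes, and hyperparameters here are illustrative assumptions, not part of the course material:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples with 4 correlated features
# (features 2-3 are noisy copies of features 0-1).
base = rng.normal(size=(200, 2))
X = np.hstack([base, base + 0.05 * rng.normal(size=(200, 2))])

# Linear autoencoder: 4 inputs -> 2-dimensional bottleneck -> 4 outputs.
W_enc = rng.normal(scale=0.1, size=(4, 2))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 4))  # decoder weights

lr = 0.05
for _ in range(2000):
    Z = X @ W_enc        # latent codes (the "bottleneck")
    X_hat = Z @ W_dec    # reconstruction of the input
    err = X_hat - X      # reconstruction error (also the training signal)
    # Gradient descent on the mean squared reconstruction error
    W_dec -= lr * (Z.T @ err) / len(X)
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)

mse = np.mean((X - (X @ W_enc) @ W_dec) ** 2)
print(f"reconstruction MSE: {mse:.5f}")  # small but nonzero: lossy
```

Because the two extra features are nearly copies of the first two, a 2-dimensional bottleneck is enough to reconstruct the input well, which is exactly the decorrelation property of the latent space described above.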
- Example autoencoder applications include:
- data denoising
- segmentation
- anomaly detection
- feature generation
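As one concrete instance of the applications above, anomaly detection often uses the reconstruction error itself: a sample the autoencoder reconstructs poorly is likely unlike the training data. A minimal sketch, with hypothetical error values standing in for a trained model's per-sample errors and a common mean-plus-three-standard-deviations threshold heuristic:

```python
import numpy as np

# Hypothetical per-sample reconstruction errors; in practice these would
# come from a trained autoencoder's mean squared error on each sample.
train_errors = np.array([0.010, 0.020, 0.015, 0.012])  # errors on normal data
test_errors = np.array([0.011, 0.500, 0.014])          # one unusual sample

# Heuristic: flag samples whose reconstruction error exceeds
# mean + 3 standard deviations of the errors seen on normal data.
threshold = train_errors.mean() + 3 * train_errors.std()
anomalies = np.flatnonzero(test_errors > threshold)
print(anomalies)  # indices of samples flagged as anomalous
```

The threshold rule is a simple illustrative choice; other cutoffs (percentiles, validation-tuned values) are common in practice.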
© Iran R. Roman & Camille Noufi 2022