
Autoencoder

An autoencoder is a type of artificial neural network used for unsupervised learning of efficient codings or representations of data, typically for the purpose of dimensionality reduction or feature extraction.

An autoencoder consists of two main parts (see the sketch after this list):

  1. Encoder: This part of the network compresses the input into a latent-space representation: a fixed-size internal encoding of reduced dimensionality.
  2. Decoder: The decoder reconstructs the input from this internal representation, mapping the encoded data back to the original data space.
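
To make this concrete, here is a minimal sketch of that two-part structure in PyTorch. The layer sizes are illustrative assumptions: 784-dimensional inputs (e.g., flattened 28×28 images) and a 32-dimensional latent space.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """A minimal fully connected autoencoder; layer sizes are illustrative."""

    def __init__(self, input_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        # Encoder: compresses the input into the latent-space representation.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: maps the latent representation back to the input space.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
            nn.Sigmoid(),  # assumes inputs are scaled to [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)     # fixed-size latent code
        return self.decoder(z)  # reconstruction of the input
```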

How Autoencoders Work

We can distinguish between two phases: a training phase and a usage phase.

Training Phase

  1. The autoencoder is trained to copy its input to its output. This might sound trivial, but since the internal layers of the network have fewer dimensions than the input data, the network is forced to learn a compressed representation of the data.
  2. More specifically, during training the input data is passed through the encoder, which maps it into the lower-dimensional latent space.
  3. The decoder then tries to reconstruct the original data from this encoding.
  4. In doing so, the network learns a representation (encoding) of the data that is useful for dimensionality reduction or denoising.
  5. The difference between the original input and the reconstructed output (the reconstruction error) is measured with a loss function, typically mean squared error (MSE).
  6. The network adjusts its weights using an optimization algorithm, such as stochastic gradient descent or Adam, to minimize this reconstruction error; see the training-loop sketch after this list.
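
The following training-loop sketch ties these steps together, continuing from the Autoencoder class above. The random stand-in data, epoch count, and learning rate are illustrative assumptions.

```python
model = Autoencoder()     # the sketch from above
criterion = nn.MSELoss()  # step 5: reconstruction error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: random vectors in [0, 1]. In practice this would be a
# DataLoader over real inputs such as flattened images.
data = torch.rand(256, 784)

for epoch in range(10):
    reconstruction = model(data)            # steps 2-3: encode, then decode
    loss = criterion(reconstruction, data)  # step 5: compare output to input
    optimizer.zero_grad()
    loss.backward()   # backpropagate the reconstruction error
    optimizer.step()  # step 6: adjust weights to reduce it
```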

Usage Phase

  1. Once the autoencoder is trained, you can use just the encoder part to transform input data into its reduced-dimensionality latent representation. This is useful for dimensionality reduction tasks.
  2. For denoising tasks, the entire autoencoder is used: noisy input data is fed into the network, and the output should ideally be a denoised version of that data. (Note that a denoising autoencoder is trained on corrupted inputs with the clean originals as targets.) Both uses are sketched below.
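
Here is a short sketch of both uses, continuing from the training snippet above; the noise level is an illustrative assumption.

```python
model.eval()

# 1. Dimensionality reduction: run only the encoder.
with torch.no_grad():
    latent = model.encoder(data)  # shape (256, 32): the compressed codes

# 2. Denoising: feed a corrupted input through the full autoencoder.
#    (A real denoising autoencoder would be trained on noisy inputs with
#    clean targets; this only illustrates the inference step.)
noisy = (data + 0.1 * torch.randn_like(data)).clamp(0.0, 1.0)
with torch.no_grad():
    denoised = model(noisy)       # ideally close to the clean input
```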