
Dimensionality Reduction

Dimensionality reduction refers to the process of reducing the number of input variables in a dataset. In high-dimensional datasets, it can be very difficult to visualize the relationships between variables or even to understand the dataset itself. Dimensionality reduction techniques can help handle this 'curse of dimensionality' by translating high-dimensional data into a lower-dimensional space.

How Dimensionality Reduction Works

Dimensionality reduction works by identifying and removing redundant or irrelevant features, or by creating new composite features that effectively capture the most important information from the original set. This process can be carried out using various methods.

Feature Selection is a straightforward method that involves choosing a subset of the original features without any transformation. It can be done using techniques like backward elimination, forward selection, and feature-importance ranking from random forests.
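As an illustration, forward selection can be sketched with scikit-learn's SequentialFeatureSelector, which greedily adds the feature that most improves cross-validated score. This is a minimal sketch assuming scikit-learn is installed; the Iris dataset and logistic-regression estimator are illustrative choices, not part of the original text.

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)  # 150 samples, 4 original features

# Forward selection: start from no features and greedily add the one
# that most improves the cross-validated score, until 2 remain.
selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select=2,
    direction="forward",
)
X_reduced = selector.fit_transform(X, y)

print(X.shape, "->", X_reduced.shape)  # (150, 4) -> (150, 2)
```

Setting `direction="backward"` instead performs backward elimination, starting from all features and greedily removing the least useful one.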

Feature Extraction, on the other hand, transforms the data in the high-dimensional space to a space of fewer dimensions. Techniques used include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Generalized Discriminant Analysis (GDA).
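To illustrate a supervised extraction technique, here is a minimal LDA sketch using scikit-learn (an assumed dependency): unlike PCA, LDA uses the class labels to find the directions that best separate the classes.

```python
from sklearn.datasets import load_iris
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_iris(return_X_y=True)  # 4 features, 3 classes

# LDA is supervised: it projects onto at most (n_classes - 1) axes
# chosen to maximize between-class separation.
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(X, y)

print(X_lda.shape)  # (150, 2)
```

The projected features are new composite axes, not a subset of the originals, which is what distinguishes feature extraction from feature selection.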

PCA, for example, works by identifying the hyperplane that lies closest to the data and then projecting the data onto it. The axes of this hyperplane are called principal components. The first principal component is the axis that explains the largest amount of variance in the data; the second is orthogonal to the first and explains the second-highest amount of variance, and so on.
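The variance ordering described above can be observed directly. The following sketch, assuming scikit-learn is available (the Iris dataset is an illustrative choice), projects 4-dimensional data onto its first two principal components and prints each component's share of the total variance:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)  # 150 samples, 4 features

# Project onto the 2 axes that capture the most variance.
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)

# Fraction of total variance explained by each component,
# in decreasing order by construction.
print(pca.explained_variance_ratio_)
```

Note that PCA is unsupervised: no labels are passed to `fit_transform`, so the components reflect only the spread of the data.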

In summary, dimensionality reduction helps reduce computational complexity, improve model performance, and mitigate multicollinearity in data.

