GRU

GRU, or Gated Recurrent Unit, is a type of recurrent neural network (RNN) architecture used in the field of deep learning. GRUs were proposed as a simpler, more computationally efficient alternative to LSTM (Long Short-Term Memory) units, another type of gated RNN. Like LSTMs, GRUs address the problem of vanishing gradients, a common challenge in training traditional RNNs.

How GRUs work

GRUs have two types of gates: update gates and reset gates.

Update gates determine what proportion of information is carried forward to the output. They decide how much of the past information (from previous time steps) should be passed along to the future, controlling how strongly the new input and the previous memory each influence the hidden state at every step.

Reset gates decide how much of the past information to forget when forming the new candidate state. If the reset gate is near 0, the past hidden state is ignored and only the current input matters.
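
To make the gate interactions concrete, below is a minimal single-step GRU cell in plain NumPy. It follows the standard GRU formulation (Cho et al., 2014); the weight names (W_z, U_z, and so on) are conventional, and the code is an illustrative sketch rather than a production implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, params):
    """One GRU step following the standard formulation.

    x_t:    input vector at time t, shape (input_dim,)
    h_prev: previous hidden state, shape (hidden_dim,)
    params: dict of weight matrices W_* (hidden_dim, input_dim),
            U_* (hidden_dim, hidden_dim), and bias vectors b_*.
    """
    # Update gate: how much of the past state to carry forward.
    z = sigmoid(params["W_z"] @ x_t + params["U_z"] @ h_prev + params["b_z"])
    # Reset gate: how much of the past state to use in the candidate.
    r = sigmoid(params["W_r"] @ x_t + params["U_r"] @ h_prev + params["b_r"])
    # Candidate hidden state: the past state is masked by the reset gate,
    # so r near 0 means "ignore the past, look only at the current input".
    h_tilde = np.tanh(params["W_h"] @ x_t
                      + params["U_h"] @ (r * h_prev)
                      + params["b_h"])
    # New hidden state: interpolate between the old state and the candidate.
    # z near 1 keeps the old state; z near 0 adopts the candidate.
    h_t = z * h_prev + (1.0 - z) * h_tilde
    return h_t
```

Note the role of the update gate in the last line: when z is close to 1, the previous hidden state is copied through nearly unchanged, which is precisely what lets gradients flow across many time steps.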

In practice, these gates allow the model to keep relevant information over long time steps and discard irrelevant data, thus mitigating the vanishing gradient problem.

This ability to selectively forget or remember information makes GRUs very effective for tasks involving sequential data, such as time-series analysis, natural language processing, and speech recognition.
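
In practice you would rarely hand-roll the cell; deep learning frameworks ship optimized GRU layers. As one sketch of such a task, here is a small sequence classifier built on PyTorch's torch.nn.GRU (the dimensions and the two-class task are arbitrary illustrative choices):

```python
import torch
import torch.nn as nn

# A toy sequence classifier: the GRU reads a sequence and its final
# hidden state is projected to class logits.
class GRUClassifier(nn.Module):
    def __init__(self, input_dim=16, hidden_dim=32, num_classes=2):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, seq_len, input_dim)
        _, h_n = self.gru(x)       # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1])  # logits: (batch, num_classes)

model = GRUClassifier()
dummy = torch.randn(8, 20, 16)     # batch of 8 sequences, 20 steps each
logits = model(dummy)
print(logits.shape)                # torch.Size([8, 2])
```

Setting batch_first=True means inputs are shaped (batch, sequence, features); the final hidden state h_n summarizes the whole sequence and is fed to a linear classification head.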
