Feed-forward neural networks (FNNs) are a type of artificial neural network where connections between nodes do not form a cycle. The information moves in only one direction, forward, from the input nodes, through the hidden nodes (if any) and to the output nodes. There are no cycles or loops in the network.
How Feed-forward Neural Networks work
In feed-forward neural networks, each node in the first layer (input layer) receives an input signal, then passes a transformed version of the signal to the next layer (hidden layer). This process repeats until the last layer (output layer) is reached. The nodes in the hidden and output layers apply a non-linear transformation, often referred to as an activation function, before passing on the signal.
The output of each neuron is a weighted sum of its inputs: each input is multiplied by a weight, the products are summed (usually together with a bias term), and the result is passed through the activation function. Weights are the values the network learns during the training phase; they determine how much impact each input has on the neuron's output.
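The layer-by-layer flow described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation: the layer sizes, the ReLU activation, and the random weight initialization are all arbitrary choices made for the example.

```python
import numpy as np

def relu(x):
    # ReLU activation: the non-linear transformation applied at hidden nodes
    return np.maximum(0.0, x)

def forward(x, W1, b1, W2, b2):
    # Hidden layer: weighted sum of the inputs plus a bias, then activation
    h = relu(W1 @ x + b1)
    # Output layer: another weighted combination of the hidden activations
    return W2 @ h + b2

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # 3 inputs -> 4 hidden units
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # 4 hidden units -> 2 outputs

y = forward(np.array([0.5, -1.0, 2.0]), W1, b1, W2, b2)
print(y.shape)  # (2,)
```

Information flows strictly forward here: the input vector is transformed by the hidden layer, and the hidden activations by the output layer, with no connection feeding back to an earlier layer.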
The FNN learns from the training data by adjusting these weights to reduce the difference between the predicted and actual (target) outputs. This process, often called training or learning, typically uses an optimization method such as gradient descent to minimize a loss function that measures this difference.
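The training loop can be illustrated with gradient descent on a single linear neuron, the simplest case. This is a toy sketch: the synthetic data, the learning rate of 0.1, and the 200-step budget are all assumptions chosen for the example, and the loss is mean squared error.

```python
import numpy as np

# Toy data: the target output is 2*x1 - 3*x2, so the true weights are [2, -3]
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -3.0])

w = np.zeros(2)  # weights, initialized to zero
lr = 0.1         # learning rate

for _ in range(200):
    pred = X @ w                           # predicted outputs
    grad = 2 * X.T @ (pred - y) / len(X)   # gradient of the MSE loss w.r.t. w
    w -= lr * grad                         # gradient descent step

print(w.round(2))  # weights close to [ 2. -3.]
```

Each iteration computes how the loss changes with respect to each weight and nudges the weights in the direction that reduces it; a real FNN does the same across every layer via backpropagation.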
FNNs are often used in supervised learning tasks (where training data contains both input and expected output) in varied domains including function approximation, time-series prediction, and classification.