How does a neural network make predictions?


Recently, neural networks have grabbed a lot of attention. A neural network is a computing system of interconnected nodes that works a little like the brain. By finding patterns in large, raw data sets, these systems can solve very complex problems, classify inputs, and even make difficult predictions. The most amazing part? They can keep learning as new data arrives!

A simple analogy

At the core of a neural network, its basic building block, is the perceptron. Perceptrons are useful because they break a complex input down into smaller, simpler pieces. If you were to break a picture of a face down, you would most likely think of common facial features: eyes, nose, eyebrows, mouth, ears, and so on. Each of these can be a perceptron in a single layer. In the next layer, these features can be broken down into smaller ones, for example a left and right eye, or an upper and lower lip; in yet another layer, those can be broken down further into features like a pupil, an iris, or eyelashes. Each of these features can be a perceptron, and once the image has been broken down into its smallest features, the network has the building blocks of a face.

Neural Network Diagram

For this example, image classification will be used to give a more descriptive explanation of how a neural network makes predictions. The network starts by taking a picture of a face and breaking it down into certain features; each perceptron then tells the next layer whether its feature is present (1, or true) or not (0, or false). By the end, based on how many true features were passed on compared with how many features make up a face, the network can make a prediction. If most of the features are seen, it classifies the image as a face; otherwise, it is classified as not a face (notice that it is not classified as something else: the output is either face or not a face, true or false).
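To make that idea concrete, here is a minimal sketch in Python. It is not a real image classifier: the feature names and the "votes from an earlier layer" are hypothetical, and the final perceptron simply counts how many features were reported as present and fires if most of them were.

```python
def perceptron(inputs, weights, bias):
    """Classic perceptron: weighted sum plus bias, then a hard 0/1 step."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Hypothetical outputs from an earlier layer of feature perceptrons
# (1 = feature detected in the image, 0 = not detected).
features = {"left_eye": 1, "right_eye": 1, "nose": 1, "mouth": 1, "ears": 0}

# The output perceptron weighs every feature equally; a bias of -2.5 means
# it only fires ("face") when more than half of the 5 features were seen.
votes = list(features.values())
is_face = perceptron(votes, weights=[1] * len(votes), bias=-2.5)

print("face" if is_face else "not a face")  # -> face (4 of 5 features seen)
```

In a trained network the weights and bias would be learned rather than hand-picked, but the prediction step itself is just this: weigh the evidence from the previous layer and output true or false.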

Beware, though: a perceptron and a neuron are not the same thing, even if they sound similar. A perceptron is a unit that takes weighted inputs and a bias and produces a binary output. A neuron is a generalization of the perceptron used in artificial neural networks: it still takes weighted inputs and a bias, but it produces a graded output between 0 and 1. With the sigmoid activation (output) function, strongly positive or negative weighted inputs are pushed toward 1 or 0, so a sigmoid neuron often behaves very much like a perceptron.
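A short sketch makes the difference visible. The weights and bias below are arbitrary illustrative values, not values from any trained network; the only point is that both units compute the same weighted sum, but one applies a hard threshold while the other applies the sigmoid.

```python
import math

def perceptron_output(x, w, b):
    """Perceptron: hard threshold, output is exactly 0 or 1."""
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0

def sigmoid_neuron_output(x, w, b):
    """Sigmoid neuron: same weighted sum, squashed to a value in (0, 1)."""
    z = sum(xi * wi for xi, wi in zip(x, w)) + b
    return 1 / (1 + math.exp(-z))

x, w, b = [1, 0, 1], [2.0, -1.0, 3.0], -4.0   # arbitrary example values

print(perceptron_output(x, w, b))      # 1  (weighted sum of 1.0 is above 0)
print(sigmoid_neuron_output(x, w, b))  # ~0.73 (graded confidence, not a hard 0/1)
```

The graded output is what makes neurons trainable with gradient-based methods: a small change in a weight produces a small change in the output, instead of the all-or-nothing jump of a perceptron.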

Conclusion

Overall, a neural network is built on a very simple idea, yet large networks can produce amazing results. Each neuron is responsible for classifying a single feature and counts on the neurons in the previous layer to do their jobs properly in order to make an accurate decision itself. Like any good team, they rely on trust and teamwork, and it is no wonder they are so powerful together.

