Neural networks are the technology behind modern AI breakthroughs — from image recognition to language translation. Inspired by the human brain, they learn patterns from data. This topic covers the basics of how neural networks work.
Artificial neurons are inspired by biological neurons. An artificial neuron (perceptron) receives inputs, each with a weight (its importance), sums them, adds a bias, applies an activation function, and produces an output. Layers: the input layer receives the data, hidden layers learn patterns (more layers means a deeper network), and the output layer produces the final prediction. Forward pass: data flows from the input through the hidden layers to the output, with each layer extracting increasingly complex features (edges → shapes → objects).
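The single-neuron computation above (weighted sum, plus bias, through an activation) can be sketched in a few lines of plain Python. The input values, weights, and bias here are made-up illustrative numbers, and sigmoid is just one common choice of activation:

```python
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid squashes the result into (0, 1)

# Illustrative values, not taken from any real trained network
output = neuron(inputs=[0.5, 0.8], weights=[0.4, -0.6], bias=0.1)
print(round(output, 3))
```

A full layer is just many of these neurons applied to the same inputs, each with its own weights and bias.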
Training: show the network many examples and adjust its weights to reduce errors. Loss function: measures how wrong the predictions are. Backpropagation: calculate the error at the output, propagate it backwards through the layers, and adjust each weight to reduce the error. Learning rate: how big each weight adjustment is. Epoch: one complete pass through all the training data. CNNs (convolutional neural networks): specialised for images; learned filters detect features. Try it: TensorFlow Playground (playground.tensorflow.org) lets you visualise neural network training in your browser.
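The whole training loop (forward pass, loss, backpropagation, weight update, repeated for many epochs) can be shown with a single neuron learning the logical AND function. The dataset, learning rate, and epoch count are all assumptions chosen for illustration; the gradient here is for a squared-error loss with a sigmoid activation:

```python
import math
import random

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Toy dataset (assumption): logical AND, chosen because one neuron can learn it
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5  # learning rate: how big each weight adjustment is

for epoch in range(2000):  # one epoch = one complete pass through the data
    for (x1, x2), target in data:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)  # forward pass
        error = y - target                      # how wrong this prediction is
        grad = error * y * (1 - y)              # backprop: gradient through sigmoid
        w[0] -= lr * grad * x1                  # adjust each weight...
        w[1] -= lr * grad * x2
        b -= lr * grad                          # ...and the bias, to reduce error

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)
```

Real networks do exactly this, just with millions of weights, whole layers instead of one neuron, and a framework computing the gradients automatically.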
"Deep" refers to the number of hidden layers. A neural network with many hidden layers (often tens to thousands) is a "deep" neural network, and training it is called "deep learning." More layers allow the network to learn hierarchical features: early layers learn simple patterns (edges, colours), middle layers learn combinations (shapes, textures), deep layers learn complex concepts (faces, objects). The breakthrough: GPUs made training deep networks feasible. Before ~2012, networks were shallow (1-2 hidden layers) because training was too slow.
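Stacking layers as described above is just repeated application of the same layer computation, with each layer's output feeding the next. A minimal sketch of a 2 → 3 → 2 → 1 forward pass, using made-up weights purely to show the structure:

```python
import math

def layer(inputs, weights, biases):
    """One dense layer: a weighted sum plus bias per neuron, sigmoid activation."""
    return [
        1 / (1 + math.exp(-(sum(x * w for x, w in zip(inputs, ws)) + b)))
        for ws, b in zip(weights, biases)
    ]

# Hypothetical weights for a tiny deep network (illustrative values only)
x  = [0.5, 0.8]                                                           # input layer
h1 = layer(x,  [[0.1, 0.4], [-0.3, 0.2], [0.5, -0.1]], [0.0, 0.1, -0.2])  # early layer
h2 = layer(h1, [[0.2, -0.4, 0.3], [0.1, 0.1, -0.5]], [0.05, -0.05])       # middle layer
y  = layer(h2, [[0.7, -0.6]], [0.0])                                       # output layer
print(y)
```

In a trained deep network the early layers would respond to simple patterns and the later layers to combinations of them; here the weights are arbitrary, so only the wiring is meaningful.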