
How Perceptrons Work

An artificial neuron takes input values, multiplies each by its corresponding weight, sums the results, and adds a bias.

This calculated value is then passed through an activation function to produce the final output.

Expressed as a formula, it looks like this:

Artificial Neuron Calculation Formula
y = f(w₁x₁ + w₂x₂ + ... + wₙxₙ + b)
  • y: Final output of the neuron

  • f: Activation function

  • w: Weights for each input value

  • x: Input values

  • b: Bias
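
As a minimal sketch, the same calculation can be written in Python. The function names and the default step-function threshold of 0 are illustrative assumptions, not something defined in this lesson:

```python
# A minimal sketch of the artificial neuron formula in plain Python.
# The names and the default threshold of 0 are illustrative assumptions.

def step_activation(value, threshold=0.0):
    """Return 1 if the value exceeds the threshold, otherwise 0."""
    return 1 if value > threshold else 0

def neuron_output(inputs, weights, bias, activation=step_activation):
    """y = f(w1*x1 + w2*x2 + ... + wn*xn + b)"""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(weighted_sum)
```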


Understanding with a Simple Example

Let’s look at a simple scenario: a perceptron deciding whether to “turn on the air conditioner if it’s hot (Input1) and humid (Input2)”.

Input Values

  • Temperature: 86°F (Input1)
  • Humidity: 90% (Input2)

Weights

  • Weight for temperature: 0.7
  • Weight for humidity: 0.3

Bias

  • Bias value: -10

In this case, the perceptron calculates as follows:

(Temperature × Temperature Weight) + (Humidity × Humidity Weight) + Bias
= (86 × 0.7) + (90 × 0.3) + (-10)
= 60.2 + 27 - 10
= 77.2

Since the result (77.2) does not exceed the set threshold (e.g., 98), the activation function decides to "keep the air conditioner off" (Output=0).

If the result had exceeded the threshold, the decision would have been "turn on the air conditioner" (Output=1).

This example shows how a perceptron makes simple decisions by combining inputs, weights, and bias.
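
The same numbers can be plugged into the Python sketch above; the threshold of 98 comes from this example, while the variable names are illustrative:

```python
# Reproducing the air-conditioner example with the helpers sketched earlier.
inputs = [86, 90]      # temperature (°F) and humidity (%)
weights = [0.7, 0.3]   # weights from the example
bias = -10

weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
print(weighted_sum)                                  # 77.2
print(step_activation(weighted_sum, threshold=98))   # 0 -> keep the AC off
```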


When multiple perceptrons are connected, they form a neural network; when many of these layers are stacked into a deep structure, the approach is known as Deep Learning.
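
As a rough, hypothetical sketch of that idea (the weights, biases, and layer sizes below are made up for illustration, not taken from this lesson), the neuron_output helper from earlier could be reused to wire a few perceptrons into a tiny two-layer network:

```python
# A hypothetical sketch of connecting perceptrons into layers.
# All weights and biases below are made-up illustration values.

def layer_outputs(inputs, layer_weights, layer_biases):
    """Compute the output of every neuron in a single layer."""
    return [neuron_output(inputs, w, b)
            for w, b in zip(layer_weights, layer_biases)]

hidden = layer_outputs([86, 90],
                       layer_weights=[[0.7, 0.3], [0.2, 0.8]],
                       layer_biases=[-10, -50])
final = neuron_output(hidden, weights=[1.0, 1.0], bias=-1)
print(hidden, final)   # [1, 1] 1
```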

In the next lesson, we will delve deeper into Deep Learning.

Quiz

A perceptron makes its final decision by combining input values, weights, and biases.

True
False
