Machine Learning Rules: Perceptron Learning Rule and Delta Learning Rule (LMS / Widrow-Hoff)

 


Introduction

Machine Learning is fundamentally about learning from mistakes. Just like humans improve through feedback, machines update their internal parameters using learning rules.

Two of the most important and foundational learning rules are:

  • Perceptron Learning Rule
  • Delta Learning Rule (LMS / Widrow-Hoff Rule)

These rules define how a model adjusts itself when it makes correct or incorrect predictions.


Real-World Analogy (Important for Marks)

Imagine a student preparing for an exam:

  • If the student answers correctly → no change needed
  • If the student answers incorrectly → they revise and improve

This is exactly how learning rules work:

  • The model predicts
  • Compares with actual answer
  • Adjusts itself if wrong

The Perceptron Rule behaves like a strict teacher (only corrects mistakes), while the Delta Rule behaves like a smart tutor (adjusts based on how wrong you are).


Perceptron Learning Rule

Definition

The Perceptron is one of the earliest neural network models introduced by Frank Rosenblatt. It is used for binary classification problems.


Working Mechanism

The perceptron follows these steps:

  1. Takes multiple inputs
  2. Assigns weights to each input
  3. Computes weighted sum
  4. Applies a step activation function
  5. Produces output (0 or 1)

Mathematical Model

y = f(Σ wᵢxᵢ + b)
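The weighted sum and step activation can be sketched in a few lines of Python. The weight and input values below are illustrative, not taken from the text:

```python
import numpy as np

def step(z):
    # Step activation: outputs 1 if z >= 0, else 0
    return 1 if z >= 0 else 0

def perceptron_output(weights, x, bias):
    # Weighted sum of inputs plus bias, passed through the step function
    return step(np.dot(weights, x) + bias)

# Illustrative values
weights = np.array([0.5, -0.6])
x = np.array([1.0, 0.5])
print(perceptron_output(weights, x, bias=0.1))  # 1
```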


Learning Rule

w_new = w_old + η · (t − y) · x

Here t is the target output, y is the predicted output, and η is the learning rate. When the prediction is correct, (t − y) = 0 and the weights stay unchanged.


Explanation

  • If prediction is correct → no update
  • If prediction is wrong → weights are adjusted

This makes the perceptron simple but limited.
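A minimal sketch of one perceptron update step, assuming a step activation with the bias omitted for brevity:

```python
import numpy as np

def perceptron_update(weights, x, target, lr=0.1):
    # Predict with a step activation (bias omitted for brevity)
    y = 1 if np.dot(weights, x) >= 0 else 0
    # (target - y) is 0 for a correct prediction, so the
    # weights only move when the perceptron is wrong
    return weights + lr * (target - y) * x

w = np.array([0.2, -0.5])
w = perceptron_update(w, np.array([1.0, 1.0]), target=1)
print(w)  # [ 0.3 -0.4]: the wrong prediction (0) triggered an update
```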


Example Scenario

Consider spam detection:

  • Input: Email features
  • Output: Spam (1) or Not Spam (0)

If the model wrongly classifies a spam email as non-spam, weights are updated to improve future predictions.


Limitation

The perceptron only works when data is linearly separable. It fails in problems like XOR where no straight line can separate the data.
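The XOR failure can be demonstrated directly. Because no straight line separates the XOR classes, the perceptron below never learns all four cases, no matter how many epochs it trains (the learning rate and epoch count are arbitrary choices):

```python
import numpy as np

# XOR truth table: not linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
T = np.array([0, 1, 1, 0])

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(100):
    for x, t in zip(X, T):
        y = 1 if np.dot(w, x) + b >= 0 else 0
        w += lr * (t - y) * x
        b += lr * (t - y)

preds = [1 if np.dot(w, x) + b >= 0 else 0 for x in X]
print(preds, list(T))  # predictions never match all four targets
```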


Delta Learning Rule (LMS / Widrow-Hoff Rule)

Definition

The Delta Rule improves the perceptron by introducing error-based learning. It was developed by Bernard Widrow and Ted Hoff.


Key Idea

Instead of updating only when wrong, the Delta Rule updates weights based on how much error exists.

This makes learning smoother and more accurate.


Mathematical Rule

w_new = w_old + η · (t − y) · x


Error Function

E = ½ (t − y)²


Working Process

  1. Model predicts output
  2. Error is calculated
  3. Weights are adjusted proportionally
  4. Process repeats until error is minimized
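The four steps above can be sketched as a small training loop. Unlike the perceptron, the output here is the raw weighted sum (no step function), so the error can be any real number. The input, target, and learning rate are illustrative:

```python
import numpy as np

def delta_update(weights, x, target, lr=0.1):
    # Linear output (no step function), so the error is continuous
    y = np.dot(weights, x)
    error = target - y
    # Weights move in proportion to how wrong the prediction is
    return weights + lr * error * x

# Repeat until the error is minimized
w = np.array([0.0, 0.0])
x, t = np.array([1.0, 2.0]), 1.0
for _ in range(50):
    w = delta_update(w, x, t)
print(np.dot(w, x))  # prediction approaches the target 1.0
```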

Example Scenario

Consider predicting house prices:

  • Actual price: 50 lakhs
  • Predicted price: 40 lakhs

Error = 10 lakhs

Delta Rule adjusts weights based on this error, making predictions closer to actual values over time.
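Plugging the numbers from this example into the rule shows one update step. Only the 10-lakh error comes from the text; the learning rate, input feature, and current weight are assumed values for illustration:

```python
t, y = 50.0, 40.0   # actual and predicted price (in lakhs), from the example
lr = 0.01           # assumed learning rate
x = 2.0             # assumed input feature value
w_old = 20.0        # assumed current weight

error = t - y                       # 10 lakhs, as in the text
w_new = w_old + lr * error * x      # delta rule update
print(error, w_new)                 # 10.0 20.2
```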


Why It Is Better

  • Handles continuous outputs
  • More stable learning
  • Reduces error gradually
  • Foundation for gradient descent

Comparison Between Perceptron and Delta Rule

Feature        | Perceptron Rule       | Delta Rule
Learning Type  | Based on correctness  | Based on error
Update         | Only when wrong       | Always updates
Output         | Binary                | Continuous
Stability      | Low                   | High
Intelligence   | Basic                 | Advanced






Python Example

import numpy as np

# Simple perceptron with a step activation
weights = np.array([0.2, 0.4])
learning_rate = 0.1

# Input and target
x = np.array([1, 1])
target = 1

# Prediction: weighted sum passed through a step function
output = 1 if np.dot(weights, x) >= 0 else 0

# Update rule: adjust weights only when the prediction is wrong
if output != target:
    weights = weights + learning_rate * (target - output) * x

print("Updated Weights:", weights)

Conclusion

The Perceptron Learning Rule and Delta Learning Rule are essential building blocks of machine learning.

  • The perceptron is simple and useful for basic classification
  • The delta rule introduces error-based learning and improves accuracy

Understanding these concepts helps in mastering advanced topics like neural networks.
