Machine Learning Rules: Perceptron and Delta Rule (LMS / Widrow-Hoff)
Introduction
Machine Learning is fundamentally about learning from mistakes. Just like humans improve through feedback, machines update their internal parameters using learning rules.
Two of the most important and foundational learning rules are:
- Perceptron Learning Rule
- Delta Learning Rule (LMS / Widrow-Hoff Rule)
These rules define how a model adjusts itself when it makes correct or incorrect predictions.
Real-World Analogy (Important for Marks)
Imagine a student preparing for an exam:
- If the student answers correctly → no change needed
- If the student answers incorrectly → they revise and improve
This is exactly how learning rules work:
- The model predicts
- Compares with actual answer
- Adjusts itself if wrong
The Perceptron Rule behaves like a strict teacher (only corrects mistakes), while the Delta Rule behaves like a smart tutor (adjusts based on how wrong you are).
Perceptron Learning Rule
Definition
The Perceptron is one of the earliest neural network models, introduced by Frank Rosenblatt in 1957. It is used for binary classification problems.
Working Mechanism
The perceptron follows these steps:
- Takes multiple inputs
- Assigns weights to each input
- Computes weighted sum
- Applies a step activation function
- Produces output (0 or 1)
Mathematical Model
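For inputs x1, x2, ..., xn with weights w1, w2, ..., wn and a bias b, the perceptron computes a weighted sum and passes it through a step function:

y = 1 if (w1·x1 + w2·x2 + ... + wn·xn + b) ≥ 0, otherwise y = 0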
Learning Rule
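For each training example with target t and prediction y, every weight is updated as:

wi(new) = wi(old) + η · (t − y) · xi

where η is the learning rate. Because t and y are both 0 or 1, the factor (t − y) is 0 when the prediction is correct and ±1 when it is wrong.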
Explanation
- If prediction is correct → no update
- If prediction is wrong → weights are adjusted
This makes the perceptron simple but limited.
Example Scenario
Consider spam detection:
- Input: Email features
- Output: Spam (1) or Not Spam (0)
If the model wrongly classifies a spam email as non-spam, weights are updated to improve future predictions.
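Concretely, if a spam email (t = 1) is predicted as non-spam (y = 0), the rule adds η · (1 − 0) · xi = η · xi to each weight, so the features present in that email count more toward "spam" next time.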
Limitation
The perceptron only works when data is linearly separable. It fails in problems like XOR where no straight line can separate the data.
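For example, XOR maps (0,0) → 0, (1,1) → 0, (0,1) → 1 and (1,0) → 1. The two classes sit on opposite corners of a square, so no single straight line can put both 1s on one side and both 0s on the other.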
Delta Learning Rule (LMS / Widrow-Hoff Rule)
Definition
The Delta Rule improves on the perceptron by introducing error-based learning. It was developed by Bernard Widrow and Ted Hoff in 1960 for the ADALINE network.
Key Idea
Instead of updating only when wrong, the Delta Rule updates weights based on how much error exists.
This makes learning smoother and more accurate.
Mathematical Rule
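For a linear unit with output y = w · x, each weight changes by:

Δwi = η · (t − y) · xi

The form is the same as the perceptron rule, but y is now the continuous weighted sum itself (no thresholding), so the size of the update scales with the size of the error.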
Error Function
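The rule minimizes the squared error for each example:

E = ½ · (t − y)²

The factor ½ is a convention that makes the derivative with respect to each weight come out neatly as −(t − y) · xi.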
Working Process
- Model predicts output
- Error is calculated
- Weights are adjusted proportionally
- Process repeats until error is minimized (a short sketch of this loop follows below)
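The loop below is a minimal sketch of this process for a single linear unit. The toy dataset, learning rate, and epoch count are illustrative assumptions, not values from any real task.

import numpy as np

# Toy data: each target is x1 + x2, so the ideal weights are [1, 1]
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]])
t = np.array([3.0, 3.0, 6.0])

weights = np.zeros(2)
eta = 0.05  # learning rate, kept small for stability

for epoch in range(100):
    for x, target in zip(X, t):
        y = np.dot(weights, x)      # linear prediction
        error = target - y          # signed error
        weights += eta * error * x  # delta rule: update proportional to error

print("Learned weights:", weights)  # converges toward [1, 1]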
Example Scenario
Consider predicting house prices:
- Actual price: 50 lakhs
- Predicted price: 40 lakhs
Error = 10 lakhs
Delta Rule adjusts weights based on this error, making predictions closer to actual values over time.
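To see one update, assume (purely for illustration) a learning rate η = 0.01 and a feature with value xi = 2. The weight change is Δwi = 0.01 × (50 − 40) × 2 = 0.2, which pushes the next prediction upward, toward the actual price.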
Why It Is Better
- Handles continuous outputs
- More stable learning
- Reduces error gradually
- Foundation for gradient descent, as shown below
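The last point can be made precise: differentiating E = ½ · (t − y)² with y = w · x gives ∂E/∂wi = −(t − y) · xi, so the delta update Δwi = η · (t − y) · xi is exactly one step of gradient descent on the squared error.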
Comparison Between Perceptron and Delta Rule
| Feature | Perceptron Rule | Delta Rule |
|---|---|---|
| Learning Type | Based on correctness | Based on error magnitude |
| Update | Only when wrong | Proportional to error (zero when error is zero) |
| Output | Binary | Continuous |
| Stability | Low | High |
| Basis | Error-correction heuristic | Gradient descent on squared error |
Code Example (Python)

import numpy as np

# One perceptron update step for a single training example
weights = np.array([0.2, 0.4])
bias = 0.0
learning_rate = 0.1

# Input and target (input chosen so the initial prediction is wrong)
x = np.array([-1, -1])
target = 1

# Prediction: step activation on the weighted sum
output = 1 if np.dot(weights, x) + bias >= 0 else 0

# Perceptron rule: update only when the prediction is wrong
if output != target:
    weights = weights + learning_rate * (target - output) * x
    bias = bias + learning_rate * (target - output)

print("Updated Weights:", weights)  # [0.1, 0.3] after correcting the mistake
Conclusion
The Perceptron Learning Rule and Delta Learning Rule are essential building blocks of machine learning.
- The perceptron is simple and useful for basic classification
- The delta rule introduces error-based learning and improves accuracy
Understanding these concepts helps in mastering advanced topics like neural networks.