Master Algorithms, Step by Step

Learn the theory, plug in your data, and watch the full manual calculation unfold — exactly how your exam expects it.

Built for students, by a student :)

Naive Bayes Classifier

The Naive Bayes classifier is a fast and effective machine learning algorithm based on probability. It is called 'naive' because it makes a massive assumption: it treats every feature in your dataset as independent of the others once the class is known. For example, it assumes 'Age' has absolutely no effect on 'Income'. While this is rarely true in the real world, the algorithm still performs surprisingly well, especially for tasks like spam filtering.
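The whole idea fits in a few lines of Python. This is a minimal sketch with no smoothing, and the tiny weather table below is made up purely for illustration:

```python
from collections import Counter

def naive_bayes_predict(rows, labels, query):
    """Classify `query` with categorical Naive Bayes (no smoothing)."""
    classes = Counter(labels)
    n = len(labels)
    best_class, best_score = None, -1.0
    for c, count in classes.items():
        # Start with the prior P(c)
        score = count / n
        # Multiply by P(feature = value | c) for each feature -- the 'naive' step
        for i, value in enumerate(query):
            matches = sum(1 for row, lab in zip(rows, labels)
                          if lab == c and row[i] == value)
            score *= matches / count
        if score > best_score:
            best_class, best_score = c, score
    return best_class, best_score

# Toy dataset: (Outlook, Temp) -> Play?
rows = [("Sunny", "Hot"), ("Sunny", "Mild"), ("Rain", "Mild"), ("Rain", "Cool")]
labels = ["No", "No", "Yes", "Yes"]
print(naive_bayes_predict(rows, labels, ("Rain", "Mild")))  # ('Yes', 0.25)
```

Notice how the inner loop simply multiplies per-feature probabilities together — that product is only valid because of the independence assumption.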

View Theory & Solver

K-Nearest Neighbors (KNN)

The K-Nearest Neighbors (KNN) algorithm is a simple yet powerful machine learning algorithm primarily used for classification problems. It works on a very basic logical principle: 'tell me who your neighbors are, and I will tell you who you are.' Instead of learning an explicit mathematical model, it memorizes the dataset and labels each new data point by a majority vote among its k closest existing points.
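The "memorize and vote" idea can be sketched directly, assuming Euclidean distance and a made-up 2D dataset:

```python
import math
from collections import Counter

def knn_classify(points, labels, query, k=3):
    """Label `query` by majority vote among its k nearest neighbors."""
    # Sort every stored point by its distance to the query
    dists = sorted((math.dist(p, query), lab)
                   for p, lab in zip(points, labels))
    # Vote among the k closest
    votes = Counter(lab for _, lab in dists[:k])
    return votes.most_common(1)[0][0]

points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
labels = ["A", "A", "A", "B", "B", "B"]
print(knn_classify(points, labels, (2, 2), k=3))  # 'A' -- all 3 neighbors are A
```

There is no training step at all; the full cost is paid at prediction time, which is why KNN is often called a 'lazy' learner.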

View Theory & Solver

K-Nearest Neighbors (KNN) Regression

KNN Regression operates on the exact same distance-measuring principle as KNN Classification, but it solves a different type of problem. Instead of trying to guess a category (like 'Spam' or 'Not Spam'), it predicts a continuous numerical value (like predicting the price of a house based on its square footage and bedrooms). It finds the closest neighbors and simply calculates the average of their values.
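Only the final step changes from the classification version: averaging replaces voting. A minimal sketch, with hypothetical house data (square footage, bedrooms) mapping to a price:

```python
import math

def knn_regress(points, values, query, k=2):
    """Predict a number for `query`: the mean value of its k nearest neighbors."""
    dists = sorted((math.dist(p, query), v)
                   for p, v in zip(points, values))
    nearest = [v for _, v in dists[:k]]
    return sum(nearest) / len(nearest)   # average instead of majority vote

# Hypothetical data: (square footage, bedrooms) -> price (in thousands)
points = [(1000, 2), (1500, 3), (2000, 3), (2500, 4)]
values = [200.0, 260.0, 320.0, 400.0]
print(knn_regress(points, values, (1400, 3), k=2))  # 230.0
```

One practical caveat worth knowing: because square footage dwarfs bedroom count, real implementations normally scale the features first so no single one dominates the distance.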

View Theory & Solver

Decision Tree (ID3)

A Decision Tree helps us make predictions by mapping out different choices in a tree-like structure. It works like a game of 20 questions, automatically figuring out the best questions to ask to split the data. The ID3 (Iterative Dichotomiser 3) version specifically uses the concepts of Entropy and Information Gain to mathematically decide which feature separates the data the cleanest at every step.
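The two quantities ID3 relies on can be computed in a few lines. A minimal sketch, using a deliberately tiny made-up table where one feature splits the classes perfectly:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature_index):
    """Entropy reduction achieved by splitting the data on one feature."""
    total = entropy(labels)
    n = len(labels)
    remainder = 0.0
    for value in set(row[feature_index] for row in rows):
        subset = [lab for row, lab in zip(rows, labels)
                  if row[feature_index] == value]
        remainder += (len(subset) / n) * entropy(subset)
    return total - remainder

# Toy data: (Outlook,) -> Play
rows = [("Sunny",), ("Sunny",), ("Rain",), ("Rain",)]
labels = ["No", "No", "Yes", "Yes"]
print(information_gain(rows, labels, 0))  # 1.0 -- a perfectly clean split
```

At each node, ID3 simply calls `information_gain` for every remaining feature and splits on the winner, recursing until the subsets are pure.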

View Theory & Solver

Linear Regression

Linear Regression is the foundational algorithm for predicting continuous numbers. While classification algorithms predict categories (like 'Spam' or 'Not Spam'), regression predicts exact values (like forecasting a student's exam score based on how many hours they studied). It works by drawing a straight 'Best-Fit Line' right through the middle of your dataset, allowing you to estimate future outcomes based on past trends.
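The 'Best-Fit Line' has a closed-form solution via ordinary least squares. A minimal sketch, where the study-hours data is invented and lies exactly on a line so you can verify the answer by eye:

```python
def best_fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1 * x."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y divided by variance of x
    b1 = (sum(x * y for x, y in zip(xs, ys)) - n * mean_x * mean_y) / \
         (sum(x * x for x in xs) - n * mean_x ** 2)
    # Intercept: the line must pass through the mean point
    b0 = mean_y - b1 * mean_x
    return b0, b1

# Hypothetical data: hours studied -> exam score
hours = [1, 2, 3, 4]
scores = [52, 54, 56, 58]
b0, b1 = best_fit_line(hours, scores)
print(b0, b1)  # 50.0 2.0 -- every extra hour adds 2 marks
```

Once you have `b0` and `b1`, predicting is just `b0 + b1 * x` for any new x.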

View Theory & Solver

Multiple Linear Regression

Multiple Linear Regression extends simple linear regression to handle multiple input features at once. Instead of drawing a line through 2D points, it fits a 'hyperplane' through multi-dimensional data. For example, predicting a house price based on both its size AND its age — not just one factor. The math uses matrices to solve for all the coefficients (b0, b1, b2...) simultaneously, which is exactly the kind of numerical problem your 5th-semester exams will test.
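The matrix method is the normal equation, b = (XᵀX)⁻¹Xᵀy. A minimal NumPy sketch with invented house data (size and age as the two features):

```python
import numpy as np

# Hypothetical data: (size, age) -> price; the leading column of 1s
# gives the intercept b0 its own coefficient.
X = np.array([[1.0, 10, 5],
              [1.0, 15, 3],
              [1.0, 20, 8],
              [1.0, 25, 2]])
y = np.array([200.0, 290.0, 350.0, 480.0])

# Normal equation: solve (X^T X) b = X^T y for all coefficients at once
b = np.linalg.solve(X.T @ X, X.T @ y)
print(b)  # [b0, b1, b2]
```

Solving the linear system with `np.linalg.solve` is numerically safer than explicitly inverting XᵀX, though on paper both give the same coefficients.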

View Theory & Solver

K-Means Clustering

K-Means is a powerhouse of 'Unsupervised Learning'. Unlike previous algorithms where we had a target Class Label to predict, K-Means looks at raw, unlabeled data and discovers hidden groupings all by itself. It works by placing central anchor points (called 'Centroids') into the data, and iteratively shuffling them around until the data points are neatly separated into k distinct clusters. It is heavily used in the real world for things like customer segmentation and image compression.
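The "shuffle the centroids" loop is Lloyd's algorithm: assign every point to its nearest centroid, then move each centroid to the mean of its points, and repeat. A minimal sketch with made-up 2D data and hand-picked starting centroids:

```python
import math

def kmeans(points, centroids, iterations=10):
    """Lloyd's algorithm: repeat (assign to nearest centroid, recompute means)."""
    for _ in range(iterations):
        # Assignment step: each point joins its nearest centroid's cluster
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (an empty cluster keeps its old centroid)
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*cluster)) if cluster else cen
            for cluster, cen in zip(clusters, centroids)]
    return centroids

points = [(1, 1), (1, 2), (2, 1), (9, 9), (9, 10), (10, 9)]
print(kmeans(points, centroids=[(0, 0), (5, 5)]))
```

With this data the centroids settle on roughly (1.33, 1.33) and (9.33, 9.33) — the means of the two obvious blobs. In practice the loop stops early once the centroids stop moving.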

View Theory & Solver

Apriori Algorithm

The Apriori algorithm is the foundation of recommendation engines (like 'Customers who bought this also bought...'). It is an unsupervised learning technique that searches through massive datasets to find hidden relationships between items. Instead of predicting a specific number or class, it generates 'Association Rules'. It operates on a very logical principle: if a combination of items is frequent, then all of its individual subsets must also be frequent. By aggressively filtering out infrequent items early on, it saves the computer from having to calculate every possible combination in the universe.
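The frequent-itemset half of Apriori can be sketched compactly: grow candidates level by level, and prune any candidate with an infrequent subset before ever counting it. The four-transaction basket below is invented for illustration:

```python
from itertools import combinations

def support(itemset, transactions):
    """How many transactions contain every item in `itemset`."""
    return sum(1 for t in transactions if itemset <= t)

def apriori(transactions, min_support=2):
    """Find all itemsets appearing in at least `min_support` transactions."""
    items = sorted({i for t in transactions for i in t})
    frequent, k = [], 1
    current = [frozenset([i]) for i in items
               if support(frozenset([i]), transactions) >= min_support]
    while current:
        frequent.extend(current)
        k += 1
        prev = set(current)
        # Join step: merge frequent sets into size-k candidates
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k}
        # Prune step: every (k-1)-subset must itself be frequent -- the
        # Apriori principle that saves us from counting everything
        current = [c for c in candidates
                   if all(frozenset(s) in prev for s in combinations(c, k - 1))
                   and support(c, transactions) >= min_support]
    return frequent

transactions = [{"bread", "milk"},
                {"bread", "butter"},
                {"bread", "milk", "butter"},
                {"milk", "butter"}]
for itemset in apriori(transactions):
    print(sorted(itemset))
```

Here all three items and all three pairs survive, but {bread, milk, butter} appears only once and is discarded. Turning the survivors into 'Association Rules' is then a matter of comparing supports of an itemset against its subsets.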

View Theory & Solver

Genetic Algorithm (One-Max)

Genetic Algorithms (GAs) are search and optimization techniques inspired by Charles Darwin's theory of evolution by natural selection. Instead of using complex calculus, they 'breed' solutions. The 'One-Max' problem is the classic "Hello World" of GAs. The goal is simple: evolve a random string of 0s and 1s until it becomes entirely 1s. It does this by mimicking natural selection—scoring each string's fitness, keeping the strongest 'parents', and combining their 'DNA' (bits) to create an even stronger next generation.
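The full loop — fitness, selection, crossover, mutation — fits in one small function. A minimal sketch; the population size, mutation rate, and generation count are arbitrary choices, not tuned values:

```python
import random

def one_max_ga(length=10, pop_size=20, generations=40, seed=1):
    """Evolve bit strings toward all 1s; fitness = number of 1 bits."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survive as parents
        pop.sort(key=sum, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            # Crossover: splice two parents' 'DNA' at a random cut point
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)
            child = a[:cut] + b[cut:]
            # Mutation: occasionally flip a single bit
            if rng.random() < 0.2:
                i = rng.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=sum)

best = one_max_ga()
print(best, sum(best))
```

Because the parents are carried over unchanged ('elitism'), the best fitness can never decrease from one generation to the next — a handy property when debugging a GA.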

View Theory & Solver

Genetic Algorithm (Knapsack)

If the One-Max problem is learning to walk, the Knapsack problem is learning to run an obstacle course. You are given a bag (the knapsack) with a strict weight limit, and a list of items that each have a Weight and a Benefit. Your goal is to pack the bag with the maximum possible benefit without breaking the strap! In our Genetic Algorithm, a chromosome like '101' means we pack Item 1, skip Item 2, and pack Item 3. It introduces a massive real-world concept: Constraint Handling.
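The new ingredient is the fitness function itself: it must punish chromosomes that break the weight limit. To keep the sketch tiny we brute-force all 2³ chromosomes instead of running the full evolutionary loop — the item weights and benefits are made up:

```python
from itertools import product

weights = [3, 4, 5]     # hypothetical items
benefits = [4, 5, 6]
capacity = 8

def fitness(chromosome):
    """Total benefit of packed items; 0 if the bag is over its weight limit."""
    w = sum(wt for wt, bit in zip(weights, chromosome) if bit)
    if w > capacity:
        return 0        # constraint handling: overweight solutions score nothing
    return sum(b for b, bit in zip(benefits, chromosome) if bit)

# With only 3 items, 2^3 = 8 chromosomes -- small enough to check them all
best = max(product([0, 1], repeat=3), key=fitness)
print(best, fitness(best))  # (1, 0, 1) 10 -- pack Item 1 and Item 3
```

Here '101' (pack Item 1, skip Item 2, pack Item 3) weighs exactly 8 and wins with benefit 10, while '011' would weigh 9 and is zeroed out by the penalty. In a real GA you would plug this `fitness` into the same selection/crossover/mutation loop used for One-Max.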

View Theory & Solver