## Why “Gradient” Descent?

Recently I was thinking about the gradient descent algorithm and I was bothered by one question: why do we go in the direction of…
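The question above is about the update direction. As a minimal sketch (my own illustrative code, not taken from the post), gradient descent steps against the gradient, since that is the direction of steepest local decrease:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimise a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # negative gradient = direction of steepest descent
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2(x - 3)
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # converges towards the minimiser x = 3
```

The function, learning rate, and step count here are illustrative choices; the post itself asks why this direction is the right one.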

#### Page 2 of 2

## Eigenvectors from Eigenvalues – Application

Introduction: Recently, Quanta Magazine published an article about how a new basic identity in linear algebra surfaced from applied physics. The identity is about…

Continue reading → Eigenvectors from Eigenvalues – Application

## Random Variables and Probability Functions

In this post I will build on the previous posts related to probability theory - I have defined the main results of probability from the axioms…

Continue reading → Random Variables and Probability Functions

## Market Basket Analysis – The Apriori Algorithm

Prologue: Earlier in my career I worked for a loyalty card scheme for one of the largest supermarkets in the UK. During my time I was…

Continue reading → Market Basket Analysis – The Apriori Algorithm

## “It’s Not Special” – The Best Advice I Ever Had

Impostor syndrome is a real feeling, and I get it at least twice a day with respect to what I do. Sure, I have a…

Continue reading → “It’s Not Special” – The Best Advice I Ever Had

## Probability – Definitions and Axioms

After the last post discussing set theory here, the next logical step is to discuss the concept of probability. The theory of probability is…

## Introduction to Probability – Set Theory

Recently I have been undertaking a project to commit my accrued notes relating to statistics online. I want to do this for a few reasons:…

## Nearest Neighbour Classifiers

Nearest neighbour algorithms classify unlabelled instances (data observations/cases) by assigning them to the class of the most similar instance found in the training data…
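The rule in that excerpt can be sketched minimally as a 1-nearest-neighbour classifier with Euclidean distance (my own illustrative code and toy data, not from the post):

```python
import math

def nearest_neighbour(train, query):
    """Classify `query` by the label of its closest training instance.

    `train` is a list of (features, label) pairs.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # The predicted class is simply the label of the nearest training point.
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Toy data: two well-separated clusters in 2-D
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((5.0, 5.0), "B"), ((4.8, 5.1), "B")]
print(nearest_neighbour(train, (0.2, 0.1)))  # nearest points belong to class "A"
```

Full posts typically extend this to k neighbours with a majority vote; this sketch keeps only the core idea of "assign the class of the most similar training instance".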