## Tag: Machine Learning

## Word2vec vs Fasttext – A First Look

Recently, I've had a chance to play with word embedding models. Word embedding models involve taking a text corpus and generating vector representations for…

## ML Classifier Evaluation – A First Look

Once you’ve built a machine learning classifier, the next step is to validate it and see how well it fits the data. This short post…

## Naive and Proud: Introducing the Naive Bayes Algorithm

The Naive Bayes Algorithm is a simple and elegant approach for tackling supervised learning problems in Machine Learning. This post will be a brief introduction…

## Gradient Descent: The Workhorse of Machine Learning

If you’re like me, you’ve heard a lot about Gradient Descent. You’ve heard that it is a foundational algorithm for optimising functions which all self…

## Job Searching in Data Science

Job searching can be irritating at best and hopelessly frustrating at worst. This is particularly true in Data Science, where the field is still relatively…

## Why “Gradient” Descent?

Recently I was thinking about the gradient descent algorithm and I was bothered by one question: why do we go in the direction of…

## Random Variables and Probability Functions

In this post I will build on the previous posts related to probability theory – I have defined the main results of probability from axioms…

## Nearest Neighbour Classifiers

Nearest neighbour algorithms classify unlabelled instances (data observations/cases) by assigning them to the class of the most similar instance found in the training data.…
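The nearest-neighbour idea described in that excerpt can be sketched in a few lines of Python. This is a minimal 1-nearest-neighbour classifier written for illustration, not code from the post itself; the function name and toy data are my own:

```python
from math import dist  # Euclidean distance between two points (Python 3.8+)

def nearest_neighbour(train, query):
    """Classify `query` by the label of the closest training instance.

    `train` is a list of (features, label) pairs; `query` is a feature tuple.
    """
    # Find the training pair whose feature vector is closest to the query,
    # then return that pair's label.
    _, label = min(train, key=lambda pair: dist(pair[0], query))
    return label

# Toy data: two clusters, labelled "a" and "b".
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((5.0, 5.0), "b")]
print(nearest_neighbour(train, (0.2, 0.1)))  # closest instance is in cluster "a"
```

Generalising this to k nearest neighbours means taking the k closest training instances and returning the majority label among them instead of a single label.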