Entropy, Cross-Entropy & KL-Divergence

Here is a 10-minute video by Aurélien Géron explaining entropy, cross-entropy and KL-divergence from an information theory perspective.
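
For reference, the three quantities the video covers can be computed in a few lines. Below is a minimal Python sketch; the distributions p and q are made-up examples, not anything taken from the video:

    import math

    def entropy(p):
        # H(p) = -sum_i p_i * log2(p_i): the average surprise of p, in bits
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)

    def cross_entropy(p, q):
        # H(p, q) = -sum_i p_i * log2(q_i): cost of encoding p with a code built for q
        return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

    def kl_divergence(p, q):
        # D_KL(p || q) = H(p, q) - H(p): the extra bits paid for using q instead of p
        return cross_entropy(p, q) - entropy(p)

    p = [0.5, 0.25, 0.25]   # "true" distribution (invented for this example)
    q = [0.25, 0.5, 0.25]   # model distribution
    print(entropy(p), cross_entropy(p, q), kl_divergence(p, q))   # 1.5 1.75 0.25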

He has some more interesting videos on his channel. Do check it out!

Connect-3 Game Playing Bot

We use the Minimax algorithm to predict the optimal next move after every move by the user. This work demonstrates how a complete search by Minimax always yields optimal results. To speed up the search, alpha-beta pruning is implemented to discard moves that can do no better than the moves already explored. We test two Minimax-based methods for game playing – with and without alpha-beta pruning. With pruning, the first move is computed 14x faster.
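
For readers unfamiliar with the technique, here is a minimal Python sketch of minimax with alpha-beta pruning. The state interface (is_terminal, score, moves, play) is hypothetical and stands in for the bot's actual game representation; see the repository for the real implementation.

    import math

    def minimax(state, depth, alpha, beta, maximizing):
        # `state` is assumed to expose is_terminal(), score(), moves(),
        # and play(move) -> new state; these names are placeholders.
        if depth == 0 or state.is_terminal():
            return state.score()
        if maximizing:
            best = -math.inf
            for move in state.moves():
                best = max(best, minimax(state.play(move), depth - 1, alpha, beta, False))
                alpha = max(alpha, best)
                if alpha >= beta:   # remaining moves cannot do better: prune
                    break
            return best
        else:
            best = math.inf
            for move in state.moves():
                best = min(best, minimax(state.play(move), depth - 1, alpha, beta, True))
                beta = min(beta, best)
                if alpha >= beta:
                    break
            return best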

The code is publicly available on GitHub.



Probabilistic Inference using Bayesian Networks

In this work, we study the application of Bayesian networks to probabilistic inference. We consider a hypothetical real-world scenario in which we answer queries about various events (health problems, accidents, etc.) caused by factors such as air pollution and bad road conditions.

Each event/factor is modeled as a random variable with a given probability distribution (provided as input). A variable-dependence graph is constructed, and Bayes' rule is applied over the Markov blanket of the query variables to reduce the computational effort. Detailed documentation can be found in the code.
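
As a small illustration of the kind of query involved, here is Bayes' rule applied to a toy two-node network in Python. The variables and probabilities below are invented for this sketch and are not taken from the project:

    # Toy two-node network: Pollution -> HealthProblem.
    p_pollution = {"high": 0.3, "low": 0.7}        # prior P(Pollution), made up
    p_health_given = {"high": 0.40, "low": 0.05}   # P(HealthProblem=true | Pollution), made up

    # Bayes' rule: P(Pollution | HealthProblem=true)
    evidence = sum(p_pollution[s] * p_health_given[s] for s in p_pollution)
    posterior = {s: p_pollution[s] * p_health_given[s] / evidence for s in p_pollution}
    print(posterior)   # {'high': 0.774..., 'low': 0.225...}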

The code is publicly available on GitHub.




Visualizing Perceptron Learning

This program visualizes the learning process of a perceptron. For simplicity, the perceptron learns the identity function, i.e., the line y = x as its decision boundary. Given a two-dimensional input <x, y>, each point is classified as lying above or below the line (binary classification). The perceptron's weights are updated whenever a misclassification occurs; over many examples, it learns the identity mapping.
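
The core update rule is only a few lines. Here is a minimal Python sketch of the idea, written independently of the repository code; the learning rate and sampling range are arbitrary choices:

    import random

    # A perceptron with weights (w_x, w_y) and bias b learns to separate
    # points above the line y = x (label +1) from points below it (label -1).
    w_x, w_y, b = 0.0, 0.0, 0.0
    lr = 0.1   # learning rate (arbitrary for this sketch)

    for _ in range(1000):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        target = 1 if y > x else -1
        predicted = 1 if w_x * x + w_y * y + b > 0 else -1
        if predicted != target:            # update only on misclassification
            w_x += lr * target * x
            w_y += lr * target * y
            b   += lr * target

    # the weights drift toward something roughly proportional to (-1, 1, 0),
    # i.e. the boundary -x + y = 0, which is exactly the line y = x
    print(w_x, w_y, b)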

The code is publicly available on GitHub.


The Curse of Dimensionality: Inside Out

The Curse of Dimensionality, introduced by Bellman, refers to the explosive growth of spatial dimensions and its consequences, such as an exponential increase in computational effort, large amounts of wasted space, and poor visualization capabilities. A higher number of dimensions theoretically allows more information to be stored, but in practice it rarely helps, owing to the greater likelihood of noise and redundancy in real-world data. In this article, the effects of high dimensionality are studied through various experiments, and possible solutions to counter or mitigate these effects are proposed. The source code of the experiments is publicly available on GitHub.
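
One classic symptom, the concentration of pairwise distances, can be reproduced in a few lines. The following Python sketch is written independently of the paper's experiments:

    import math, random

    def nearest_farthest_ratio(dim, n_points=500):
        # Sample random points in the unit hypercube and compare the nearest
        # and farthest neighbour of a reference point; in high dimensions the
        # two distances become nearly indistinguishable.
        pts = [[random.random() for _ in range(dim)] for _ in range(n_points)]
        ref = pts[0]
        dists = [math.dist(ref, p) for p in pts[1:]]
        return min(dists) / max(dists)

    for dim in (2, 10, 100, 1000):
        print(dim, round(nearest_farthest_ratio(dim), 3))
    # the ratio climbs toward 1 as the dimension grows: distances "concentrate",
    # and nearest-neighbour distinctions lose their meaning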

Read the Paper | Get the Code

Building an inline Turing Machine for C++

Introduction

Turing Machines are mathematical abstractions of computing systems. Suppose you are given an input string over some alphabet and you want to perform a computation on it; given enough time and memory, a Turing Machine can do the job for you. The only caveat is that the language you want to compute must itself be "computable". Moreover, some computable languages cannot be decided in reasonable time; such languages are called "hard", because it is hard to write an algorithm that computes them efficiently. This is a very abstract version of what a Turing Machine is. Let us explore it in brief.

Continue reading
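
As a companion to the excerpt above, here is a tiny Turing machine simulator written as a Python sketch, just to make the abstraction concrete (the article itself builds an inline machine for C++). The example machine flips every bit on its tape and halts at the first blank:

    def run(tape, transitions, state="start", blank="_"):
        # transitions maps (state, symbol) -> (symbol to write, move, next state)
        tape, head = list(tape), 0
        while state != "halt":
            symbol = tape[head] if 0 <= head < len(tape) else blank
            write, move, state = transitions[(state, symbol)]
            if head == len(tape):
                tape.append(blank)   # grow the tape on demand
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape)

    flip = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run("0110_", flip))   # -> "1001_"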

A new kind of interpretation

Introduction

In all of computer programming, we have been solving problems by modeling the real world in code. Consider the reverse process, where we derive a relation between data structures and the real world. Can we model code in real-world objects? In other words, can we represent information in real-world entities? My aim in the following article is to guide you through various scenarios that will (hopefully) change your perception of our environment.

"OPEN YOUR MIND"

Continue reading

The Three Pillars of Learning

[Illustration: the Knowledge Triangle resting on the Three Pillars of Learning]

This illustration was published on decentralized.in and depicts the Three Pillars of Learning. As explained by the author, the three pillars of learning are:

  1. Pillar 1: To be good at one particular thing
  2. Pillar 2: To be okay in 2-3 fields
  3. Pillar 3: To have basic knowledge of as many things as possible

The triangle at the top is the Knowledge Triangle, which refers to the interaction between research, education and innovation. The three pillars of learning form the foundation of the Knowledge Triangle: the stronger the pillars, the longer the knowledge triangle stands!