Here is a 10-minute video by Aurélien Géron explaining entropy, cross-entropy and KL-divergence using Information Theory.

He has some more interesting videos on his channel. Do check it out!


We use the Minimax algorithm to predict the next optimal move after every move by the user. This work demonstrates how a complete search by the Minimax algorithm always yields optimal results. To speed up the search, alpha-beta pruning is implemented to prune moves that cannot do better than the best move explored so far. We test the Minimax algorithm for game playing in two configurations – with and without alpha-beta pruning. The first move is computed 14x faster with alpha-beta pruning.
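To illustrate the idea, here is a minimal sketch of minimax with alpha-beta pruning over a generic game tree. The `children` and `score` callbacks are placeholders for a real move generator and evaluation function (which are not shown here), not the actual interfaces used in the repository.

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, score):
    """Minimax with alpha-beta pruning.

    `children(node)` returns the successor states of `node` (empty at a leaf);
    `score(node)` evaluates a leaf from the maximizing player's point of view.
    """
    kids = children(node)
    if depth == 0 or not kids:
        return score(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, score))
            alpha = max(alpha, value)
            if alpha >= beta:   # prune: the minimizing player will avoid this branch
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, score))
            beta = min(beta, value)
            if beta <= alpha:   # prune: the maximizing player will avoid this branch
                break
        return value
```

For example, on the tree `[[3, 5], [2, 9]]` (leaves are scores), the maximizer's optimal value is 3: the minimizer picks min(3, 5) = 3 in the left branch and min(2, 9) = 2 in the right, and the maximizer takes the better of the two.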

The code is publicly available on GitHub.

In this work, we study the application of Bayesian networks to probabilistic inference. We consider a hypothetical real-world scenario in which we answer queries about various events (health problems, accidents, etc.) caused by factors such as air pollution and bad road conditions.

Each event/factor is modeled as a random variable with a given probability distribution (provided as input). A variable-dependence graph is constructed, and Bayes' rule is applied on the Markov blanket of the query variables to reduce the computational effort. Detailed documentation can be found in the code.
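As a toy illustration of the kind of query this answers, here is a two-variable sketch (Pollution → Health problem) with made-up probabilities; the actual network structure and distributions used in the project live in the repository.

```python
# Toy two-node network: Pollution -> Health. All numbers below are
# illustrative assumptions, not values from the project.
p_pollution_high = 0.3            # prior P(Pollution = high)
p_bad_given_high = 0.6            # P(Health = bad | Pollution = high)
p_bad_given_low = 0.1             # P(Health = bad | Pollution = low)

# Marginalize to get the evidence probability P(Health = bad).
p_bad = (p_bad_given_high * p_pollution_high
         + p_bad_given_low * (1 - p_pollution_high))

# Bayes' rule: P(high | bad) = P(bad | high) * P(high) / P(bad)
p_high_given_bad = p_bad_given_high * p_pollution_high / p_bad
print(p_high_given_bad)  # 0.72
```

Observing a health problem raises the belief that pollution is high from the prior 0.3 to 0.72.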

The code is publicly available on GitHub.

This program visualizes the learning process of a perceptron. For simplicity, we consider the perceptron to learn the identity line y = x. We feed it a two-dimensional input <x, y> and classify each point as lying below or above the line (binary classification). The perceptron's weights are updated whenever a misclassification occurs. Over several examples, the perceptron learns the identity mapping.
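The training loop can be sketched as follows (without the visualization part). The learning rate and sample counts are illustrative choices, not the values used in the actual program.

```python
import random

random.seed(0)
w = [0.0, 0.0]   # weights for x and y
b = 0.0          # bias
lr = 0.1         # learning rate (illustrative)

def predict(x, y):
    """Classify a point: +1 if predicted above the line y = x, else -1."""
    return 1 if w[0] * x + w[1] * y + b > 0 else -1

# Train on random points; the true label is +1 above the line y = x, -1 below.
for _ in range(1000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    target = 1 if y > x else -1
    if predict(x, y) != target:         # update only on misclassification
        w[0] += lr * target * x
        w[1] += lr * target * y
        b += lr * target
```

Because the two classes are linearly separable, the perceptron convergence theorem guarantees the updates eventually stop; errors that remain during training cluster near the boundary y = x.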

The code is publicly available on GitHub.

The Curse of Dimensionality, introduced by Bellman, refers to the explosive growth of spatial dimensions and its resulting effects, such as an exponential increase in computational effort, large waste of space, and poor visualization capabilities. A higher number of dimensions theoretically allows more information to be stored, but in practice it rarely helps, owing to the greater likelihood of noise and redundancy in real-world data. In this article, the effects of high dimensionality are studied through various experiments, and possible solutions to counter or mitigate these effects are proposed. The source code of the experiments is publicly available on GitHub.
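One classic experiment of this kind (sketched here for illustration; not necessarily one of the exact experiments in the repository) is distance concentration: as the dimension grows, the distances from a fixed point to random points become nearly indistinguishable, which degrades nearest-neighbor methods.

```python
import math
import random

random.seed(1)

def distance_spread(dim, n_points=200):
    """Relative spread (max - min) / min of distances from the origin
    to random points drawn uniformly from the unit hypercube [0, 1]^dim."""
    dists = []
    for _ in range(n_points):
        p = [random.random() for _ in range(dim)]
        dists.append(math.sqrt(sum(x * x for x in p)))
    return (max(dists) - min(dists)) / min(dists)

low = distance_spread(2)      # large spread: near and far points both exist
high = distance_spread(1000)  # small spread: all points look equally far
print(low, high)
```

In 2 dimensions the nearest and farthest points differ by a large factor, while in 1000 dimensions all distances concentrate around a common value, so "nearest" loses much of its meaning.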

A very simple wallpaper.

Download Full Size (1920 x 1080)

One of the best cultural fests of India is coming this fall. Here’s a tiny banner that I designed in Photoshop.

Be there.

**28th Oct – 1st Nov 2015**

**BITS Pilani
Pilani Campus
Rajasthan 333031
India**

BITS Pilani, It’s Magic!

Hi there!

If you want me to describe myself in one word, I’d say, “ComputerGeekMusicLoverProgrammerProgamerScienceLover”

And I love camel case just because of this.

I have just finished my final year of high school and am about to join college. So I'm using this time to the fullest to get myself back online.

I have made many blogs since 2009, but now I'm planning to combine all of my online stuff into one place. #BackToSquareOne

So, here I go to reboot my #ServerProcessor.

See you soon…