Uniform sampling within an ellipsoid

Written for PyTorch with CUDA compatibility.

Use case: instead of sampling from a multivariate normal distribution (available via torch.distributions.multivariate_normal), one can sample uniformly within a specified confidence region of the multivariate Gaussian.

Although it is an approximation, a confidence region can be obtained from the variance along each dimension (the diagonal of the covariance matrix). Suppose we define a 3σ boundary (up to three standard deviations along each dimension is a commonly accepted confidence interval); EllipsoidSampler can then construct such an ellipsoid (with given mean = mu and semi-axis lengths = axes) and sample uniformly from within it (instead of sampling normally).
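Below is a minimal sketch of how such a sampler can work (the actual gist implementation may differ): draw a direction uniformly on the unit sphere, scale the radius by U^(1/d) so the points are uniform inside the unit ball, then stretch by the semi-axis lengths and shift by the mean.

```python
import torch

class EllipsoidSampler:
    """Uniform sampling within an axis-aligned ellipsoid (sketch)."""

    def __init__(self, mu, axes):
        # mu: (d,) center of the ellipsoid; axes: (d,) semi-axis lengths.
        self.mu = mu
        self.axes = axes
        self.dim = mu.shape[0]

    def sample(self, n):
        # Uniform directions on the unit sphere: normalize Gaussian samples.
        x = torch.randn(n, self.dim, device=self.mu.device)
        x = x / x.norm(dim=1, keepdim=True)
        # Radii scaled by U^(1/d) make the points uniform in the unit ball.
        r = torch.rand(n, 1, device=self.mu.device) ** (1.0 / self.dim)
        # Stretch the ball into the ellipsoid and shift to the mean.
        return self.mu + r * x * self.axes

# Example: a 3-sigma confidence ellipsoid from a diagonal covariance.
mu = torch.zeros(3)                    # move to CUDA for GPU sampling
sigma = torch.tensor([1.0, 2.0, 0.5])  # per-dimension standard deviations
points = EllipsoidSampler(mu, 3.0 * sigma).sample(1000)
```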

Get the code on GitHub Gist.

 


Entropy, Cross-Entropy & KL-Divergence

Here is a 10-minute video by Aurélien Géron explaining entropy, cross-entropy and KL-divergence using Information Theory.
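As a quick companion to the video, here is a small sketch computing all three quantities (in nats) for a pair of discrete distributions:

```python
import torch

p = torch.tensor([0.5, 0.25, 0.25])
q = torch.tensor([0.4, 0.4, 0.2])

entropy = -(p * p.log()).sum()         # H(p)
cross_entropy = -(p * q.log()).sum()   # H(p, q)
kl = (p * (p / q).log()).sum()         # KL(p || q)

# The identity the video builds up to: KL(p || q) = H(p, q) - H(p).
assert torch.isclose(kl, cross_entropy - entropy)
```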

He has some more interesting videos on his channel. Do check it out!

Connect-3 Game Playing Bot

We use the Minimax algorithm to predict the next optimal move after every move by the user. This work demonstrates how a complete search by Minimax always yields optimal results. To speed up the search, alpha-beta pruning is implemented to prune moves that can do no better than the moves already explored. We test Minimax both with and without alpha-beta pruning; with pruning, the first move is computed 14x faster.
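For a flavor of the algorithm, here is a minimal sketch of minimax with alpha-beta pruning. The game-state API used here (legal_moves, apply, evaluate, is_terminal) is hypothetical; the actual bot on github structures its board differently.

```python
import math

def minimax(state, depth, alpha, beta, maximizing):
    # Hypothetical state API; substitute your own board representation.
    if depth == 0 or state.is_terminal():
        return state.evaluate()  # heuristic score of the position
    if maximizing:
        best = -math.inf
        for move in state.legal_moves():
            best = max(best, minimax(state.apply(move), depth - 1,
                                     alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:    # remaining moves cannot beat this branch
                break            # prune
        return best
    else:
        best = math.inf
        for move in state.legal_moves():
            best = min(best, minimax(state.apply(move), depth - 1,
                                     alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break            # prune
        return best
```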

The code is publicly available on github.



Probabilistic Inference using Bayesian Networks

In this work, we study the application of Bayesian networks to probabilistic inference. We consider a hypothetical real-world scenario where we answer queries about various events (health problems, accidents, etc.) caused by factors such as air pollution and bad road conditions.

Each event/factor is modeled as a random variable with a given probability distribution. A variable-dependency graph is constructed, and Bayes' rule is applied on the Markov blanket of the query variables to reduce the computational effort. Detailed documentation can be found in the code.
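As a toy illustration of the core computation (the variable names and probabilities below are made up, not taken from the project), consider a two-node network Pollution → HealthProblem and a query answered with Bayes' rule:

```python
# P(Pollution) and P(HealthProblem = True | Pollution), illustrative values.
p_pollution = {True: 0.3, False: 0.7}
p_health_given_pollution = {True: 0.6, False: 0.1}

# Bayes' rule: P(P | H) = P(H | P) P(P) / sum_p P(H | p) P(p)
numerator = p_health_given_pollution[True] * p_pollution[True]
evidence = sum(p_health_given_pollution[p] * p_pollution[p]
               for p in (True, False))
posterior = numerator / evidence
print(f"P(Pollution | HealthProblem) = {posterior:.3f}")  # 0.720
```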

The code is publicly available on github.




Visualizing Perceptron Learning

This program visualizes the learning process of a perceptron. For simplicity, we have the perceptron learn the identity function: given a two-dimensional input <x, y>, it classifies each point as lying above or below the line y = x (binary classification). The perceptron's weights are updated whenever a point is misclassified. Over many examples, the perceptron learns the identity mapping.
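Here is a minimal sketch of that update rule, without the visualization (the github version handles the plotting):

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                      # weights for [x, y, bias]

for _ in range(1000):
    x, y = rng.uniform(-1, 1, size=2)
    target = 1 if y > x else -1      # ground truth: above/below y = x
    features = np.array([x, y, 1.0])
    prediction = 1 if w @ features > 0 else -1
    if prediction != target:         # update only on misclassification
        w += target * features

print("learned weights:", w)         # roughly proportional to [-1, 1, 0]
```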

The code is publicly available on github.


The Curse of Dimensionality: Inside Out

The Curse of Dimensionality, introduced by Bellman, refers to the explosive growth of spatial dimensions and its resulting effects, such as an exponential increase in computational effort, large waste of space, and poor visualization capabilities. A higher number of dimensions theoretically allows more information to be stored, but in practice it rarely helps, owing to the greater possibility of noise and redundancy in real-world data. In this article, the effects of high dimensionality are studied through various experiments, and possible solutions to counter or mitigate these effects are proposed. The source code of the experiments is publicly available on github.
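For a taste of such experiments (this particular one is a classic demonstration, not necessarily one from the paper), the sketch below shows how pairwise distances between random points concentrate as the dimension grows, eroding the contrast that nearest-neighbor methods rely on:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
for d in (2, 10, 100, 1000):
    x = rng.uniform(size=(n, d))
    sq = (x ** 2).sum(axis=1)
    # Pairwise squared distances via |a - b|^2 = |a|^2 + |b|^2 - 2 a.b
    d2 = sq[:, None] + sq[None, :] - 2 * x @ x.T
    dists = np.sqrt(np.maximum(d2[np.triu_indices(n, k=1)], 0))
    spread = (dists.max() - dists.min()) / dists.mean()
    print(f"d={d:4d}  (max - min) / mean distance = {spread:.3f}")
```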

Read the Paper | Get the Code

#GettingStarted

Hi there!

If you want me to describe myself in one word, I’d say, “ComputerGeekMusicLoverProgrammerProgamerScienceLover”
And I love camel case just because of this.

I have just finished my final year of high school and am about to join college. So, I’m using this time to the fullest to get myself back online.

I have made many blogs since 2009, but now I’m planning to combine all of my online stuff into one place. #BackToSquareOne

So, here I go to reboot my #ServerProcessor.

See you soon…