Frequentist or Bayesian, Who am I?

I am a Software Architect and an Independent Researcher who has designed and developed data products from ideation to go-to-market at enterprise scale throughout my career. I am a perpetual learner who picks up new things and makes them work. My passion is Programming and Mathematics for Deep Learning and Artificial Intelligence. My focus areas are Computer Vision and Temporal Sequences for prediction and forecasting.


Selected Writes - AI, ML, Math

Cocktail Party Problem - Eigentheory and Blind Source Separation Using ICA

We will never achieve 100% accuracy when predicting real-world events with any AI/ML algorithm, and accuracy is one simple metric that can always deceive. Why? Data observed from nature is always a mixture of multiple distinct sources, and separating them by their origin is the basis for understanding. The process of separating the signals that make up an observed mixture is called Blind Source Separation. It takes grit and competence for us human beings to come up with techniques like Independent Component Analysis (ICA) in the quest to understand the complex entities of nature.
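
Here is a minimal sketch of the idea, assuming NumPy and scikit-learn are available (the sine/square toy sources and the mixing matrix are made up for illustration, not from the post):

    # Blind source separation with FastICA: recover two independent sources
    # from their observed linear mixtures (toy signals only).
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 2000)
    s1 = np.sin(2 * t)                         # source 1: sinusoid
    s2 = np.sign(np.sin(3 * t))                # source 2: square wave
    S = np.c_[s1, s2] + 0.05 * rng.standard_normal((2000, 2))

    A = np.array([[1.0, 0.5],                  # unknown mixing matrix
                  [0.4, 1.0]])
    X = S @ A.T                                # what we actually observe

    ica = FastICA(n_components=2, random_state=0)
    S_hat = ica.fit_transform(X)               # estimated independent sources
    print(S_hat.shape)                         # (2000, 2)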

Courage and Data Literacy Required to Deploy an AI Model and Exploring Design Patterns for AI

Have you ever come across a situation where your dataset is closely linked to human beings and you are expected to optimize certain operations or processes? Did it make you feel anxious? You are not alone. Operational optimization in industrial and business processes is often focused on minimizing human error to maximize productivity and profitability - most likely by depending on machines (for support) rather than relying fully on humans for decision making. If AI is done wrong, these decisions can endanger the basic livelihood of certain sections of people involved in the process, often the ones at the bottom of the value chain.

Eigenvalue, Eigenvector, Eigenspace and Implementation of Google's PageRank Algorithm

Feature extraction techniques like Principal Component Analysis use eigenvalues and eigenvectors, via eigentheory, for dimensionality reduction in a machine learning model. An eigenvalue depicts the variance of the data distribution along a certain direction, and the eigenvector with the highest eigenvalue is the principal component of the feature set. In simple terms, eigenvalues help us find patterns inside noisy data. By the way, eigen is a German word meaning particular or proper - combined with value, it means the proper value.
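
A small sketch, assuming NumPy (the toy data, the 3x3 link matrix and the 0.85 damping factor are illustrative): the eigenvector with the largest eigenvalue of the covariance matrix is the principal component, and PageRank's power iteration converges to the dominant eigenvector of the damped link matrix.

    import numpy as np

    # --- PCA: principal component via eigen-decomposition of the covariance matrix ---
    rng = np.random.default_rng(1)
    X = rng.standard_normal((500, 3)) @ np.array([[3.0, 0.0, 0.0],
                                                  [1.0, 1.0, 0.0],
                                                  [0.0, 0.5, 0.2]])
    Xc = X - X.mean(axis=0)                     # center the features
    C = np.cov(Xc, rowvar=False)                # 3 x 3 covariance matrix

    eigvals, eigvecs = np.linalg.eigh(C)        # eigh returns ascending eigenvalues
    principal_component = eigvecs[:, -1]        # direction of largest variance
    explained = eigvals[-1] / eigvals.sum()     # share of variance it captures
    print(principal_component, round(float(explained), 3))

    # --- PageRank: power iteration toward the dominant eigenvector ---
    M = np.array([[0.0, 0.0, 1.0],              # column-stochastic link matrix:
                  [0.5, 0.0, 0.0],              # page 0 -> 1,2; page 1 -> 2; page 2 -> 0
                  [0.5, 1.0, 0.0]])
    r = np.ones(3) / 3
    for _ in range(50):
        r = 0.85 * M @ r + 0.15 / 3             # damped iteration, ranks stay a distribution
    print(r)                                    # stationary ranks of the three pages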

Why Covariance Matrix Should Be Positive Semi-Definite, Tests Using Breast Cancer Dataset

Do you keep hearing the phrase "the covariance matrix is positive semi-definite" when you dig into deep topics of machine learning and deep learning, especially on the optimization front? Does it cause a certain sense of uneasiness and make you feel anxious about the need for your existence? You are not alone. In this post we shall see the properties of a covariance matrix and the nature of its eigenvalues.
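
A quick check, assuming NumPy and scikit-learn (the post's actual tests may differ): build the covariance matrix of the breast-cancer features and confirm it is symmetric with non-negative eigenvalues, i.e. positive semi-definite.

    # A covariance matrix C = E[(x - mu)(x - mu)^T] satisfies
    # v^T C v = Var(v^T x) >= 0 for any vector v, hence it is PSD.
    import numpy as np
    from sklearn.datasets import load_breast_cancer

    X, _ = load_breast_cancer(return_X_y=True)   # 569 samples, 30 features
    C = np.cov(X, rowvar=False)                  # 30 x 30 covariance matrix

    eigvals = np.linalg.eigvalsh(C)              # real eigenvalues, ascending
    print(eigvals.min())                         # >= 0 up to floating-point error
    print(np.allclose(C, C.T))                   # symmetry check: True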

Shannon's Entropy, Measure of Uncertainty When Elections are Around

What is the most pressing issue in everyone's life? It is our inability to predict how things will turn out, i.e. uncertainty. How awesome (or depressing) it would be if we could make precise predictions and perform accurate computations to measure uncertainty.
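
A tiny worked example, assuming NumPy (the vote-share numbers are made up): Shannon's entropy is largest when the outcome is most uncertain.

    # Shannon entropy H(p) = -sum_i p_i * log2(p_i), measured in bits.
    import numpy as np

    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]                       # 0 * log(0) is taken as 0
        return float(-(p * np.log2(p)).sum())

    print(entropy([0.5, 0.5]))             # 1.0 bit  -> a dead heat, maximum uncertainty
    print(entropy([0.9, 0.1]))             # ~0.469   -> a near-certain landslide
    print(entropy([0.25] * 4))             # 2.0 bits -> four equally likely parties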


Selected Reads - Papers, Articles, Books

Density Estimation using Real NVP - GOOGLE RESEARCH/ICLR

This paper will change your perspective on AI research tangentially if you are stepping into probabilistic DNNs. Start here for unsupervised learning of probabilistic models using real-valued non-volume preserving (Real NVP) transformations, which model natural images with exact sampling, log-likelihood evaluation and latent-variable manipulation. read...

The Neural Code between Neocortical Pyramidal Neurons Depends on Neurotransmitter Release Probability - PNAS

This 1997 paper brings biophysics, electrophysiology, neuroscience, differential equations, etc. together in one place. A good starting point for understanding neural plasticity, synapses, neurotransmitters and ordinary differential equations. read...

Using AI to read Chest X-Rays for Tuberculosis Detection and evaluation of multiple DL systems - NATURE

Deep learning (DL) is used to interpret chest X-rays (CXR) to screen and triage people for pulmonary tuberculosis (TB). This study compares multiple DL systems and populations through a retrospective evaluation of three DL systems. read...

Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization - IEEE/ICCV

Grad-CAM uses the gradients of a target class flowing into the final convolutional layer to produce a coarse localization map that highlights the regions of the image most responsible for the prediction - a simple way to make CNN decisions visually explainable. read...
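
A hedged sketch of the technique, assuming PyTorch and torchvision (the ResNet-18 backbone, random weights and the chosen layer are illustrative, not the paper's own code):

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Random weights keep the sketch self-contained; use pretrained weights for real images.
    model = models.resnet18(weights=None).eval()

    activations, gradients = {}, {}

    def fwd_hook(module, inp, out):
        activations["value"] = out.detach()          # feature maps of the last conv block

    def bwd_hook(module, grad_in, grad_out):
        gradients["value"] = grad_out[0].detach()    # gradients w.r.t. those feature maps

    layer = model.layer4[-1]                         # last convolutional block
    layer.register_forward_hook(fwd_hook)
    layer.register_full_backward_hook(bwd_hook)

    x = torch.randn(1, 3, 224, 224)                  # stand-in for a preprocessed image
    scores = model(x)
    class_idx = scores.argmax(dim=1).item()
    scores[0, class_idx].backward()                  # gradients of the target class score

    # Channel weights = global-average-pooled gradients; the ReLU of the weighted
    # sum of activations is the coarse Grad-CAM localization map.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]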

Evolve Your Brain: The Science of Changing Your Mind by Joe Dispenza - BOOK

Ever wonder why you repeat the same negative thoughts in your head? Why you keep coming back for more from hurtful family members, friends, or significant others? read...

Selected Watch - Social Media/OTT Content

Eureka: Dr. V. Srinivasa Chakravarthy, Prof, CNS Lab, IITM

Interaction with Prof. Chakra, Head of the Computational Neuroscience Lab. Computational neuroscience serves to advance theory in basic brain research as well as psychiatry, and bridge from brains to machines. watch...

Quantum, Manifolds & Symmetries in ML

Conversation with Prof. Max Welling on deep learning with non-Euclidean geometric data like graphs and topology, and on allowing networks to recognize new symmetries. watch...

The Lottery Ticket Hypothesis

Yannic Kilcher's review of The Lottery Ticket Hypothesis, a paper from an MIT team on network optimization through sparse sub-networks. watch...

Backpropagation through time - RNNs, Attention etc

MIT 6.S191 Introduction to Deep Learning by Alexander Amini and Ava Soleimany. Covers intuition for recurrent networks, LSTMs, attention, gradient issues, sequence modelling, etc. watch...

What is KL-Divergence?

A cool explanation of Kullback-Leibler divergence by Kapil Sachdeva. It declutters many issues like asymmetry, log-likelihood, cross-entropy and forward/reverse KLDs. watch...
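
A small illustration of the asymmetry the video discusses, assuming NumPy (the two distributions are made up):

    # KL(p || q) = sum_i p_i * log(p_i / q_i); note it is not symmetric in p and q.
    import numpy as np

    def kl(p, q):
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        return float(np.sum(p * np.log(p / q)))

    p = [0.7, 0.2, 0.1]
    q = [0.4, 0.4, 0.2]
    print(kl(p, q))   # ~0.184
    print(kl(q, p))   # ~0.192  -> KL(p||q) != KL(q||p)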

Overfitting and Underfitting in Machine Learning

In this video, two PhD students give an intuitive explanation of overfitting and underfitting, super important concepts to understand about ML models. watch...

Attitude ? Explains Chariji - Pearls of Wisdom - @Heartfulness Meditation

Chariji was the third in the line of Raja Yoga Masters in the Sahaj Marg System of Spiritual Practice of Shri Ram Chandra Mission (SRCM). Shri Kamlesh Patel, also known as Daaji, is the current Guide of the Sahaj Marg System (known today as HEARTFULNESS) and is the President of Shri Ram Chandra Mission. watch...