Frequentist or Bayesian, Who am I?

I am a Software Architect and an Independent Researcher who has designed and developed data products from Ideation to Go To Market at enterprise scale throughout my career. I am a perpetual learner who learns new things and makes them work. My passion is Programming and Mathematics for Deep Learning and Artificial Intelligence. My focus areas are Computer Vision and Temporal Sequences for Prediction and Forecasting.


Selected Writes - AI, ML, Math

When Krishna Teaches, Arjuna Listens - But What If They’re Both Within Us and With Us (Perplexity)?

Posted Apr 20, 2025 ‐ 4 min read

Somewhere between ‘why are we here?’ and ‘why is my CI/CD broken again?’ it happened. A moment when I paused... mid-scroll, mid-code, mid-existential crisis... and thought: “Wait… was the Bhagavad Gita foreshadowing AI all along?” I know, blasphemy. But hear me out. There I was, rereading that final verse... the grand mic-drop of the Gita... and it hit me like a Wi-Fi outage during a product demo: “Wherever there is Krishna, the yogi, and Arjuna, the archer, there lies victory, prosperity, and unwavering wisdom.” And suddenly, I couldn’t unsee it. Krishna? The ultimate knowledge source, an ancient GPT with slightly better metaphors. Arjuna? The overthinking product manager trying to make sense of life, war, and OKRs. Swap the battlefield for a Jira board, and voilà... the Mahabharata meets machine learning. This isn’t a hot take. It’s a spicy samosa of science and spirituality, deep learning and deeper meaning. Because in a world where “thinking” is a team sport... and half the team is digital... maybe the real yoga is how we choose to collaborate: with each other, with our inner selves… and yes, even with AI. So let’s walk through this shloka... swords down, minds open... and decode why the Gita still has something to teach us. Even in the age of artificial enlightenment.

Pair Programming with an AI: Debugging Profile Picture Uploads with Claude-3.7

Posted Mar 02, 2025 ‐ 9 min read

I’ve been stuck on a problem for a while now. You know that kind of bug... the one that refuses to budge no matter how many times you rewrite the code, tweak the request payload, or double-check the backend logs. Today, I decided to try something different. Instead of debugging alone, I brought in a peer programmer... except, this time, my partner wasn’t human. Enter Claude-3.7 Sonnet-Thinking... an AI that didn’t just spit out code snippets but actually worked through the problem like a real collaborator. And trust me, this thing wasn’t just suggesting fixes... it was thinking, iterating, making mistakes, correcting them, and even rewriting parts of my backend and frontend in an attempt to solve the issue. For the first time, I felt like I was debugging with an AI, not just using one.

Evaluating Large Language Model-Generated Content with TruEra’s TruLens

Posted Mar 17, 2024 ‐ 41 min read

It's been an eternity since I last endured Dr. Andrew Ng's sermon on evaluation strategies and metrics for scrutinizing AI-generated content. Particularly, the cacophony about Large Language Models (LLMs), with special mentions of the illustrious OpenAI and Llama models scattered across the globe. How enlightening! It's quite a revelation, considering my acquaintances have relentlessly preached that Human Evaluation is the holy grail for GAI content. Of course, I've always been a skeptic, pondering the statistical insignificance lurking beneath the facade of human judgment. Naturally, I'm plagued with concerns about the looming specter of bias, the elusive trustworthiness of models, the Herculean task of constructing scalable GAI solutions, and the perpetual uncertainty regarding whether we're actually delivering anything of consequence. It's quite amusing how the luminaries and puppeteers orchestrating the GAI spectacle remain blissfully ignorant of the metrics that could potentially illuminate the quality of their creations. But let's not be too harsh; after all, we're merely at the nascent stages of transforming GAI content into a lucrative venture. The metrics and evaluation strategies are often relegated to the murky depths of technical debt, receiving the customary neglect from the business overlords.

AI as a Business Partner: Validating My Healthcare App Idea using GPT-4o

Posted Mar 23, 2025 ‐ 4 min read

Like every over-caffeinated founder with a “revolutionary” idea, I thought I was onto something BIG... saving doctors from the never-ending doom of paperwork. I mean, they signed up to save lives, not to moonlight as data entry clerks, right? So, with the confidence of someone who just watched a TED Talk, I got to work. AI-powered documentation assistant? Easy. A few late nights, gallons of coffee, and some speech-to-text magic later, I had a prototype. The feedback? “Oh wow, this is cool!” Doctors were intrigued. I was pumped. Was I the next Elon of healthcare tech? Then reality hit harder than a Monday morning. The initial hype faded, and the real question loomed: “Cool, but… will anyone actually use this?” Enter AI: not as my usual pair-programming buddy, but as my brutally honest business partner. No sugarcoating. No participation trophies. Just tough love and even tougher questions.

Reflexive by Default: The Role of Human Beings in an AI-Driven World

Posted Jan 13, 2025 ‐ 6 min read

Like every self-respecting tech bro armed with a half-charged MacBook and a ChatGPT tab on speed dial, I too believed I was thinking. You know... solving bugs, crafting flows, building features. Classic human stuff. Then one day, mid-debug spiral, I caught myself whispering: “ChatGPT, explain this bug like I’m five.” And boom... insight. Progress. Sanity. That’s when it hit me: I wasn’t “thinking” anymore. I was prompting. Reflexively. No long walks. No rubber duck. Just straight-up neural outsourcing. At first, it felt like cheating. Then it felt like genius. Now? It just feels normal. This post is about that shift. The one where thinking (like a human) became optional, and thinking (like an AI) became... default. Spoiler: it’s not about losing your edge. It’s about sharpening it... with silicon. Welcome to the era where brains and bots team up... and we stop pretending we’re doing it solo.


Selected Reads - Papers, Articles, Books

Density Estimation using Real NVP - GOOGLE RESEARCH/ICLR

If you are stepping into probabilistic DNNs, this paper is going to change your perspective on AI research tangentially. Start here for unsupervised learning of probabilistic models using real-valued non-volume preserving (real NVP) transformations, which model natural images through sampling, exact log-likelihood computation, and latent-variable manipulation. read...
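For intuition, the paper's core building block, the affine coupling layer, can be sketched in a few lines of NumPy. The toy `s_net`/`t_net` linear maps below stand in for the learned networks and are purely illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def affine_coupling_forward(x, s_net, t_net, d):
    # Split the input: the first d dims pass through unchanged,
    # the rest are scaled and shifted conditioned on x[:d].
    x1, x2 = x[:d], x[d:]
    s = s_net(x1)          # log-scale
    t = t_net(x1)          # translation
    y2 = x2 * np.exp(s) + t
    # log|det J| is just the sum of log-scales: no expensive Jacobian
    # determinant (the "non-volume preserving" transformation is cheap
    # to evaluate exactly).
    log_det = np.sum(s)
    return np.concatenate([x1, y2]), log_det

def affine_coupling_inverse(y, s_net, t_net, d):
    # Inversion is exact and needs no iteration.
    y1, y2 = y[:d], y[d:]
    x2 = (y2 - t_net(y1)) * np.exp(-s_net(y1))
    return np.concatenate([y1, x2])

# Toy "networks": fixed linear maps standing in for learned MLPs.
W_s = rng.normal(size=(2, 2))
W_t = rng.normal(size=(2, 2))
s_net = lambda h: np.tanh(W_s @ h)   # bounded log-scales for stability
t_net = lambda h: W_t @ h

x = rng.normal(size=4)
y, log_det = affine_coupling_forward(x, s_net, t_net, d=2)
x_rec = affine_coupling_inverse(y, s_net, t_net, d=2)
assert np.allclose(x, x_rec)   # coupling layers invert exactly
```

Stacking such layers (permuting which dimensions pass through each time) gives a flow with tractable log-likelihood, which is what makes the model trainable by maximum likelihood.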

The Neural Code between Neocortical Pyramidal Neurons Depends on Neurotransmitter Release Probability - PNAS

This 1997 paper brings biophysics, electrophysiology, neuroscience, differential equations, and more together in one place. A good starting point for understanding neural plasticity, synapses, neurotransmitters, and ordinary differential equations. read...

Using AI to read Chest X-Rays for Tuberculosis Detection and evaluation of multiple DL systems - NATURE

Deep learning (DL) is used to interpret chest X-rays (CXRs) to screen and triage people for pulmonary tuberculosis (TB). This study compared multiple DL systems and populations in a retrospective evaluation of three DL systems. read...

Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization - IEEE/ICCV

Grad-CAM uses the gradients of a target concept flowing into the final convolutional layer to produce a coarse localization map highlighting the image regions most important for the prediction, making CNN decisions more interpretable. read...
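The mechanism itself is only a few lines; here is a rough NumPy sketch of the paper's recipe, with random arrays standing in for real conv activations and backpropagated gradients:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from the last conv layer.

    activations: (C, H, W) feature maps for one image
    gradients:   (C, H, W) d(class score)/d(activations)
    """
    # 1. Global-average-pool the gradients -> one weight per channel.
    weights = gradients.mean(axis=(1, 2))             # (C,)
    # 2. Weighted combination of the feature maps.
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    # 3. ReLU: keep only features with positive influence on the class.
    cam = np.maximum(cam, 0.0)
    # Normalize to [0, 1] for overlaying on the input image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))         # stand-in conv activations
grads = rng.normal(size=(8, 7, 7))   # stand-in gradients
heatmap = grad_cam(acts, grads)
assert heatmap.shape == (7, 7)
```

In practice the low-resolution map is upsampled to the input size and overlaid on the image; the heavy lifting is just one backward pass through the network.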

Evolve Your Brain: The Science of Changing Your Mind by Joe Dispenza - BOOK

Ever wonder why you repeat the same negative thoughts in your head? Why you keep coming back for more from hurtful family members, friends, or significant others? read...

Selected Watch - Social Media/OTT Content

Eureka: Dr. V. Srinivasa Chakravarthy, Prof., CNS Lab, IITM

Interaction with Prof. Chakra, Head of the Computational Neuroscience Lab. Computational neuroscience serves to advance theory in basic brain research as well as psychiatry, and to bridge from brains to machines. watch...

Quantum, Manifolds & Symmetries in ML

Conversation with Prof. Max Welling on deep learning with non-Euclidean geometric data such as graphs and topology, and on allowing networks to recognize new symmetries. watch...

The Lottery Ticket Hypothesis

Yannic's review of The Lottery Ticket Hypothesis, a paper from an MIT team on network optimization through sparse sub-networks. watch...

Backpropagation through time - RNNs, Attention etc

MIT 6.S191 Introduction to Deep Learning by Alexander Amini and Ava Soleimany. Covers intuition for recurrent networks, LSTMs, attention, gradient issues, sequential modelling, and more. watch...

What is KL-Divergence?

A cool explanation of Kullback-Leibler divergence by Kapil Sachdeva. It declutters many issues like asymmetry, log-likelihood, cross-entropy, and forward/reverse KLD. watch...
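Two of the points the video declutters, the asymmetry of KL and its link to cross-entropy, can be verified numerically in a few lines (the two distributions below are arbitrary examples):

```python
import numpy as np

def kl(p, q):
    # D_KL(P || Q) = sum_i p_i * log(p_i / q_i)
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])

forward = kl(p, q)   # D_KL(P || Q), the "forward" KL
reverse = kl(q, p)   # D_KL(Q || P), the "reverse" KL

# Asymmetry: KL is not a metric; forward != reverse in general.
assert not np.isclose(forward, reverse)

# Cross-entropy decomposes as H(P, Q) = H(P) + D_KL(P || Q),
# which is why minimizing cross-entropy minimizes forward KL.
entropy_p = -np.sum(p * np.log(p))
cross_entropy = -np.sum(p * np.log(q))
assert np.isclose(cross_entropy, entropy_p + forward)
```

The same decomposition explains the log-likelihood connection: since H(P) is fixed by the data, maximizing average log-likelihood under Q is the same as minimizing D_KL(P || Q).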

Overfitting and Underfitting in Machine Learning

In this video, two PhD students discuss overfitting and underfitting, super important concepts for understanding ML models, in an intuitive way. watch...

Attitude? Chariji Explains - Pearls of Wisdom - @Heartfulness Meditation

Chariji was the third in the line of Raja Yoga Masters in the Sahaj Marg System of Spiritual Practice of Shri Ram Chandra Mission (SRCM). Shri Kamlesh Patel, also known as Daaji, is the current Guide of the Sahaj Marg System (known today as HEARTFULNESS) and is the President of Shri Ram Chandra Mission. watch...