Kailash Nadh's View on AI
Kailash Nadh is one of the most loved technical CTOs. Here is a simple summary of the "AI and Paradox" podcast with Kunal Shah. Kailash brings genuinely first-principles thinking to any topic. Let's summarize the video in a few minutes with some important pointers, in case I want to revisit it or someone else wants a quick glance through it :)
The questions covered in the podcast are:
- Why is AI exponentially different this time?
- Impact of AI on jobs
- What do the next 10 years hold for AI?
- Economic changes with AI
- Can you make an LLM with $100B?
- Evolution of work and jobs in the coming 10 years
- How will India evolve with AI?
- Will AI affect software companies?
- How will AI change content?
- 3 decisions as the CTO of India
- What would Kailash tweak in society or history to better humanity?
AI isn't new. Despite no agreed definition, the simplest one is recreating human-like intelligence in machines. Roots go back millennia to ancient Indian and Greek debates on Mind vs. Body. Even atomism connects to AI thinking. Practical attempts? Medieval automatons with cogwheel mechanisms that mimicked life. Modern AI started 100+ years ago, when a French mathematician first modeled neurons mathematically. By the 1940s, physical artificial neurons existed (before modern computers): electricity goes in, pulses come out. Then breakthroughs: the Perceptron (1950s), Hopfield networks (1980s), Transformers (2017). The point is: today's breakthroughs are 60-80 years of research compounding. Nothing emerges in isolation.
Why this time is exponential: To define a breakthrough, you need a baseline. Compare what computational AI achieved in 60 years versus what happened in the last 6 months. It's evident from behavior. The Holy Grail was computers understanding human language. Natural Language Processing was an entire field. Now? LLMs don't need NLP rules programmed in. They parse, respond, and reason (or create the illusion of it) in all known human and computer languages. NLP as a field? Absorbed. Gone. You don't need those algorithms anymore; one thing does it all.
Then there's emergent phenomena. One LLM shows multiple weird behaviors nobody programmed. Language is just the interface, but these models attempt reasoning, get into logical arguments, solve complex problems, and maintain chain-of-thought conversations. Each of these was a separate AI field for decades. Now one class of technology, the LLM, subsumes entire fields of research into itself. That's the exponential shift.
White-collar jobs are at risk. It's the first time technology directly attacks knowledge work. LLMs condense 50-page documents in seconds. Information processors, analysts, and researchers are becoming redundant. Worse: AI progresses from generating intelligence → automating decisions → executing actions. Even specialized professions (radiologists, lawyers) face obsolescence. The monopoly on professional judgment is ending.
Wealth inequality will explode. Post-Industrial Revolution, the gap widened even as poverty improved because total wealth grew exponentially. AI will make today's disparity "look like a joke." The wealthiest will reach a new order of magnitude. Questions: What happens to nations? How do poorer countries survive? What about conflicts?
You can't build AI with just money. Carl Sagan: "If you wish to make an apple pie from scratch, you must first invent the universe." $100B can't create an LLM from scratch. It needs decades of research, culture, published papers (like Google's Transformer work), and infrastructure. Exception: wartime scenarios where humanity unites, which are extremely rare. A breakthrough requires civilizational infrastructure, not just capital.
On planning: "I don't have long-term plans." He can't predict 2 years ahead, let alone 10. Focus on what's in your control: your choices, your projects. Pick work that produces societal good. Accept that it might not matter, but try anyway. That's what it means to be human. The show must go on.
Absurdist worldview: One conversation, one word overheard can change timelines. Butterfly effect across 8 billion humans. Coincidences shape everything. Making 50-year plans? "A nice thought exercise." Worry about your actions. Outcomes are random.
India's future? Too complex to sum up. 1.4 billion people, massive diversity. No single recommendation works. Just experiment and tinker. Can't predict what a billion people will do. Focus on individual action.
Software companies: The impact is already visible: one company canceled 6 developer hires because GPT-4/Copilot replaced the need. IT services have temporary protection: legacy tech (40-year-old systems too risky to change), business continuity fears, regulatory barriers. But it's only temporary. Disruption is delayed, not prevented.
Content creation paradox: AI democratizes creation; production cost drops to zero. Everyone can make high-quality videos/images/text from prompts. Result? A content flood. The value of content itself drops.
The shift: Technical production skill becomes worthless. Value moves to unique human perspective: ideation, curation, opinion. New creator definition: great curator + great prompter. The bottleneck shifts from execution to ideas.
Climate comparison: If planetary collapse hasn't united humans, why would AI? Wars continue. Business as usual. Don't expect rational collective action.
If you were CTO of India, what 3 decisions would you make? Refuses to answer. "You're asking me to be a completely different person—different lived experience, outlook, knowledge. I might as well be Tom Cruise." Managing a tech team of 30 people with clear goals is not the same as running technology for an entire civilization. It's way beyond comprehension—so incomprehensible he wouldn't even dare to daydream about it. "I would definitely not want such a position."
What would he tweak in history? The absurdity: tweaking one tiny event 50 years ago could completely change everything. Billions of random events shape history. But barring all those paradoxes, one thing seems obvious: 50-60 years ago, when globalization was starting and the world economy was being defined (GDP, production, consumption), maybe that should have been looked at differently. We extract from nature infinitely. Where do we pay the price? You sell me something, I pay you; the price factors in the infinite complexities of production. But when people extract from nature (minerals, water, trees, oil), nobody pays the cost. That's why we've ended up here. We've extracted unlimited resources. Nobody has slid coins into nature's piggy bank, in every figurative sense.
We should have had a different perspective on growth, progress, and economy in the technological era. One that took into consideration natural wealth. Might have curtailed insistent consumerism. Global economies might have been more sustainable. Our idea of progress and development should have been redefined to include the planet and the cost we're supposed to pay for what we take.
Connection to Kunal's insight: status-driven societies vs. wealth-driven societies. Status-driven is self-contained, mammalian, biologically relevant. Every species is status-driven and manages to coexist with its environment. Wealth-driven creates the consumption machinery. That's the fundamental problem.
Core principles:
- Nothing exists in isolation
- Uncertainty is the only certainty
- Act meaningfully despite knowing outcomes are unpredictable
- Small events compound unpredictably
- Focus on what you control
What to do:
- Don't make rigid 10-year plans
- Focus on projects that create societal good
- Develop curation and ideation skills, not just production
- Accept uncertainty and act anyway
- For orgs: Don't trust temporary protection, start experimenting now
- For creators: Develop unique perspective, become excellent curator/prompter
The paradox: We must plan and act in a world where planning is nearly impossible. The answer isn't nihilism; it's focused action on what you control plus acceptance of radical uncertainty. Keep moving forward precisely because the future is unknowable.