Ideas

Writing on AI, leadership, and what I'm learning building at enterprise scale.

13 articles
Are you a FUD of AI?
Fearful. Uncertain. Doubtful. Every engineering manager, coder, CEO, and CTO is in the same boat. Here's how to stop being FUD.

Fearful. Uncertain. Doubtful.

You're not alone. Every engineering manager, coder, CEO, and CTO is in the same boat. Except Sam Altman. He has a yacht.

This is a normal place to be. The AI wave hit fast in 2020 and hasn't slowed down. That's good news. Humanity just leapfrogged from nominal to linear to exponential growth.

We can make pictures from text. Text from pictures. People like me can even follow what's being said about me in a crowded Spanish bar. (They mostly praise me, thankfully.)

But here's the flip side: executives everywhere are under pressure to "show results." As if we live in a world without tech debt. Without managerial debt. Without reality.

You become FUD.


But you don't have to stay FUD. A few steps I remind myself of:

This is a marathon, not a sprint. If AI doesn't fit into your strategy, that's fine. It should always be the other way around — strategy first, then AI as a tool.

You don't need 16 AI projects in production. You need 2–3 good ones that actually solve problems. A large surface area looks good at first, but it bites later. Focus on 2–3 use cases with agents and reasoning models.

Don't let good talent walk out the door. Keep your builders. The ones who obsess over customer problems, solve them like it's a calling, and then do it again and again.

Is this list exhaustive? No. It's my list. It's flawed. But it's a start.

Let's make sure none of us stay FUD.

Read on LinkedIn ↗
A lot of Agentic AI. None of AGI.
The more I work with agentic AI, the more I believe AGI will always remain a North Star — powerful, worth striving toward, but never truly reached.

The more I work with agentic AI — or AI agents — the more I align with the prediction that AGI will never be achieved. It will always remain a North Star. A powerful North Star that humanity should strive toward, but never truly reach.

What I am convinced of is this: the future of work and communication will run through agents. Agents more intelligent than scripts, but not as smart as humans.
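To make "more intelligent than scripts" concrete, here is a minimal Python sketch of an agent loop. Everything in it is a stand-in I made up, not any vendor's API: a script runs a fixed sequence, while an agent picks its next action from what it observes.

def search_docs(query: str) -> str:
    # Stand-in tool: a real agent would call a search API here.
    return f"top result for '{query}'"

def summarize(text: str) -> str:
    # Stand-in tool: a real agent would call a model here.
    return text[:40] + "..."

TOOLS = {"search": search_docs, "summarize": summarize}

def agent(goal: str, max_steps: int = 3) -> str:
    memory = goal
    for step in range(max_steps):
        # A real agent would ask a reasoning model which tool fits next;
        # this trivial heuristic stands in for that decision.
        tool = "search" if step == 0 else "summarize"
        memory = TOOLS[tool](memory)
    return memory

print(agent("quarterly churn drivers"))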


How we define work

The way I define work is different from how you define work. We don't have one universal definition for anything. That's the beauty of being human — diversity of thought and perspective.

Because we lack universal definitions, we also lack universal expectations. Which means we can't agree on how to evaluate, score, or benchmark "work."

Forget work for a moment — take cricket. We don't even agree on the "best batsman." For me it's Sachin, then Kapil. Yours may be different.

Boundary erosion

Humans are uniquely creative and thrive in complex, ambiguous situations. Our brains, shaped over hundreds of thousands of years, evolved to navigate uncertainty.

Sometimes we jump when we hear leaves move. Sometimes we don't. Sometimes we survive the tiger. Sometimes we don't.

This kind of fluid judgment seems impossible for AI. Boundaries — 100% of the time — are defined by us, for us. Agents can excel within those boundaries. AGI cannot.

Resources will dry up

VCs with deep pockets, and companies with even deeper ones, will soon realize the money is in Agentic AI — not AGI. At least not for 10–20 years. Most investors want ROI in a decade, maybe two. Beyond that horizon, it becomes very difficult to justify.

Few people think in centuries. Some do. Most don't.


In short, this isn't bad news. It's the best possible outcome.

Agents give us leverage, efficiency, and new ways of creating value — without the illusion of "general intelligence." They're not the substitute for humans. They're the multiplier.

Read on LinkedIn ↗
Spikes of brilliance and valleys of incompetence
We've built machines that draft legal contracts and ace graduate exams — yet fail at basic logic puzzles. Welcome to the age of Artificial Jagged Intelligence.

We've built machines that can draft legal contracts, compose music, and ace graduate-level exams — yet they still fail at basic logic puzzles. This paradox defines the age of Artificial Jagged Intelligence: systems of dazzling capability and baffling fragility.

Artificial Jagged Intelligence — AJI — is intelligence that can be the best product manager or poet, then turn into a vegetable when asked to compute 9.11 − 9.8.

AJI will NOT take us to AGI. These two are different concepts and live in different worlds.


Why AJI is NOT AGI

Non-Emergent: Current LLMs rely on training. A lot of training. These models do not get better without it. Every time we need better performance, better accuracy, better reasoning, we need to train again. True AGI will be emergent — it will know how and from where to get information.

Non-Integrated: Can it connect to other systems and communicate without specific instruction? With MCP it sort of does — however, that assumes an MCP server is available to complete the objective.

Brittle: We run into the failings of this generation of LLMs pretty much every day. 9.8 − 9.11. How many R's in "strawberry"? If the knowledge is limited and very specific, can it really be called "General"?
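Both probes are trivially checkable in plain code, which is part of the point: two lines of deterministic Python never get these wrong, while a model trained on most of the internet sometimes does.

# The two classic brittleness probes, answered deterministically.
print(9.8 - 9.11)               # about 0.69 (float noise aside): always positive
print("strawberry".count("r"))  # 3, every run, no prompting required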


Where do we go from here

AJI will not become AGI. And I am betting against Sundar Pichai's views. Bold move.

However, AJI will pave the way for AGI. There will be a new transformer-style paper under some other title, with roots in the current ecosystem. That paper will be possible because we built so many powerful AJIs. And then it will give birth to AGI.

Innovation does not happen in an instant, a month, a year, or even a decade. It is the culmination of many events, one after another, sometimes without a goal in mind. That is how two bicycle mechanics got us the airplane. Somehow.

We live in extraordinary times.

Read on LinkedIn ↗
Chips are common. Talent is rare.
Meta is spending hundreds of millions on AI engineers. NVIDIA chips go for $30K–$40K. The pattern is clear — and it changes everything about how you lead.

Meta is spending hundreds of millions on AI engineers. OpenAI just added a $1.5M bonus for every employee. NVIDIA's new Blackwell chips go for $30K–$40K, while consumer GPUs cost under $1,000.

The pattern is clear.


People: The Only Differentiator

The best human minds — the ones who can reason, build, code, and scale — will be worth far more than the chips they run on. Any company with a budget can buy the same GPUs, the same LLMs, the same infrastructure.

What they can't buy off the shelf is elite talent: highly specialized individuals at the top of their niche, guided by exceptional leadership.

Ironically, as AI accelerates, the only sustainable advantage is the most human one: talent.


End Note

In the end, the companies that win won't be the ones with the most GPUs. They'll be the ones with the strongest people strategy. For leaders, now is the moment to double down on talent as the ultimate competitive advantage.

Read on LinkedIn ↗
It works. Not sure why?
The reason we don't understand why Claude or ChatGPT excels at some tasks may be both the biggest advantage and the biggest risk in enterprise AI adoption.

The reason we do not understand why ChatGPT or Claude does so well at some tasks may be both our biggest advantage and our biggest risk.

ChatGPT writes that email, presentation, or code, and everyone is impressed. You did a wonderful job prompting, and the results show it. Yet you are scratching your head.

You do not know WHY.


Enter Unexplained Variance

The gap between AI performance and human understanding. There are two types of AI, broadly:

Explainable: Deterministic result. Predictable path.

Unexplainable (black box): Stochastic result. Unpredictable path.

Unexplainable models win at tasks like writing a poem or recognizing patterns. They fail where regulatory compliance or deterministic results are required, since their decisions come without a rationale and read more like intuition than logic.
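A toy contrast in Python, with hypothetical names on both sides: a deterministic rule you can audit line by line, and a stochastic generator standing in for the black box.

import random

# Explainable: deterministic rule. Same input, same output, auditable path.
def approve_refund(amount: float, days_since_purchase: int) -> bool:
    return amount <= 100 and days_since_purchase <= 30

# Unexplainable: a stochastic generator standing in for an LLM.
# Same prompt, different outputs, no inspectable decision path.
def draft_apology(customer: str) -> str:
    openers = ["We're sorry,", "Apologies,", "We hear you,"]
    return f"{random.choice(openers)} {customer}."

print(approve_refund(80, 10))   # True, on every run
print(draft_apology("Ana"))     # varies from run to run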


Your Strategy for Adoption

Audit your AI systems regularly. Match variances to risk exposure. Monitor usage and applications.
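In practice, "audit" can start very small. A minimal sketch, assuming a JSONL log file and a hand-assigned risk tier per call; the field names, tiers, and model name are my placeholders, not a standard.

import json, time

AUDIT_LOG = "ai_audit.jsonl"  # hypothetical file name

def audit(model: str, prompt: str, output: str, risk: str) -> None:
    # risk: "low" for creative tasks, "high" for compliance-adjacent ones.
    record = {"ts": time.time(), "model": model, "risk": risk,
              "prompt": prompt, "output": output}
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

audit("some-model", "summarize Q3 report", "Revenue grew 4%...", risk="high")

Review the "high" entries on a schedule; that is where unexplained variance meets real exposure.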

The models keep getting better. But your strategy for how you use them matters more than which model you pick.

Read on LinkedIn ↗
Bet on Simplicity. Even More.
WhatsApp had 56 employees. Instagram had 13. Meanwhile, tech giants ship a button color change in 13 months. Complexity is a tax — and it compounds.

This matters more today than ever. The smartest leaders are betting on simple over complex. Why aren't you?

WhatsApp had 56 employees and was acquired by Facebook for $19 billion. Instagram had 13 employees and was acquired by Facebook for $1 billion. Meanwhile, tech giants famously ship changes to a blue button in 13 months.


The Complexity Tax is killing you.

Complexity doesn't just exist — it compounds. This is bigger than technical debt. It is not just slow code — it is slow thinking, hiring, and pivoting.

10% more complexity means 20% longer onboarding. 20% longer onboarding means 40% slower debugging. 40% slower debugging means 80% slower go-to-market.

None of those numbers are concrete. But they're directional. You get it.
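The pattern in those directional figures is a doubling at each stage. A tiny sketch of that, nothing more:

# Directional, not empirical: the penalty doubles at each stage.
penalty = 0.10  # 10% more complexity
for stage in ["onboarding", "debugging", "go-to-market"]:
    penalty *= 2
    print(f"{stage}: {penalty:.0%} slower")
# onboarding: 20% slower, debugging: 40% slower, go-to-market: 80% slower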


The Three Complexities

Organizational Complexity: Eliminate the exponential cost creators in your organization. You do not need more approvals. Or more process.

Product Complexity: You do not want to be a feature factory. Every new feature that does not add revenue or increase retention increases the debt on the team.

Technical Complexity: Adding a solution that resolves an edge case helping 2 customers out of 33 million will increase tech debt enormously.


What to do.

Start with Why. Measure Complexity. Optimize for Change.

Your job isn't building the most sophisticated systems. It is building the systems that let your team move faster toward the right outcomes.

Read on LinkedIn ↗
10 mediocre bets do not equal 1 bold bet.
10 × 1 does not equal 1 × 10. Scaling mediocre ideas produces mediocre results. Focus compounds. Fragmentation dissipates.

10 ✖️ 1 does not equal 1 ✖️ 10. This framework lives rent-free in my head. Thanks to Rory Sutherland for the gift of a good mental model.


Scale only works on substance

How many times have you tried running poor prompts on ChatGPT with minimal success and increasing frustration? Starting with a good prompt could have saved all those hours.

Scaling 10 mediocre ideas will not produce a bang for your buck. It will give you 10 — or probably 20 — mediocre results.

Scale should amplify substance, not noise.


Focus Outweighs Fragmentation

Deploying your already thin and scattered resources across multiple enterprise objectives will not yield ROI. It will bring down morale, burn out excellent engineers, and make you look like you don't know how to strategize.

Pick one or two really meaningful objectives. And then go big bang on them.

Focus compounds; fragmentation dissipates.


Leverage Comes from Excellence

Raw talent. In an AI age, your talent counts more than ever. Everything else is getting commoditized. Except good talent.


My takeaway

10 × 1 = 1 × 10 in mathematics. It hardly works in any other field. Focus on outcomes, not outputs.

Competitive advantage doesn't come from "doing more." It comes from focusing on the right "one" and compounding it at scale.

Read on LinkedIn ↗
Limit == Growth
Anthropic did something rare: built a product so immediately valuable that usage must be constrained. What Claude Code's usage limits tell us about product-market fit.

Anthropic has done something rare in tech and AI: built a product so immediately valuable that usage must be constrained.

I found out via a slightly disheartening email from them limiting my usage of Claude Code — a tool I have come to love for its immense context understanding and clean code quality.

Workflow-First. From the start.

Unlike existing AI coding tools, Claude Code meets developers where they actually work: the terminal. This represents a fundamental shift in thinking.

Traditional approach: Build AI capabilities, then ask users to adapt their workflow.

Claude Code's approach: Understand existing workflows, then integrate AI seamlessly. No context switching.

Why Usage Limits Signal Success

Quality over scale: Anthropic prioritizes user experience over rapid growth.

Natural demand: Users accept limitations because the core value is undeniable. I wait for the limit to reset before starting again.

Market validation: When people wait for access rather than seek alternatives, you've found something special.

The Competitive Gap

The gap isn't in AI capability — it's in workflow philosophy. Most competitors enhanced existing patterns rather than reimagining the developer experience entirely.

Key Insights for Product Builders

Meet users in their natural environment. Focus beats features. Technical excellence isn't differentiating — workflow integration matters more.

Read on LinkedIn ↗
Manus: RPA on Steroids
AI has changed the world. The question isn't when Manus arrives — it's when AI breaks the fourth wall and starts moving things in the physical world.

You must be living under a rock if you haven't heard about Manus. The name is derived from Latin — though Marathi has an equivalent word (manus: a human).

Over the past few days, this slick piece of AI + RPA + Web Design has taken the world by storm. I still don't have access to it — even though I applied at 2 AM.

By the looks of the functionality it promises, one thing is certain: AI has changed the world. And for the better.

A lot of possible futures are becoming plausible. We can perform financial analysis and build a dashboard with a command. Write research papers on topics where we are not the subject matter expert.

The question for us is: when does the fourth wall break? When does stupendous AI such as Manus break into the physical world and help move it — or build a Dyson sphere?

Read on LinkedIn ↗
Internalization for Founders
Herbert Kelman's three kinds of social influence: compliance, identification, and internalization. The most powerful — and most subtle — is the last one.

Herbert Kelman's three kinds of social influence are compliance, identification, and internalization. The most crucial and powerful of these is internalization.

Internalization occurs when the company's culture seeps into every aspect of the organization at an early stage — by following the founders' footsteps and vision, embracing their beliefs. It is also one of the most subtle forms of influence.

Sam Walton did it with Walmart in 1962. Steve Jobs and Steve Wozniak did it with Apple in 1976. Brian Chesky did it with Airbnb in 2007.

Symbols such as minimalist office spaces and sharp attention to the company's mission illustrate how this works. The story is consistently told from the founders' perspective.

This message appears repeatedly on office walls and in boardroom presentations, guiding the internalization of company values for every employee. Consistent messaging keeps the vision alive.

Read on LinkedIn ↗
So What?
Every leader needs to ask this. Every day and in every meeting. The two-word question the Greeks used 3,000 years ago that still cuts through every AI conversation.

Every leader needs to ask this. Every day and in every meeting. In every conversation.

With the GenAI revolution, a consistent ask by stakeholders to their AI teams has been: "What are we doing with LLMs — do we have ChatGPT yet?"

Ask them "So What."

Wrong Question: "How can we introduce ChatGPT to our customer service?"

Correct Question: "How can we transform customer service with tools such as ChatGPT?"


You can use the tool the Greeks used 3,000 years ago:

Colleague: How can we introduce ChatGPT to customer service?
You: So what?
Colleague: It could improve response times.
You: So what? Why does faster response time matter?
Colleague: Faster responses mean better customer satisfaction.
You: So what? How does that translate to business outcomes?
Colleague: Happier customers stay loyal and recommend us — increasing revenue and reducing churn.
You: Now we're talking. Let me push further — so what if we don't implement it?


Always connect new tools to measurable business goals. Otherwise, why bother?

So pull out "So What" when an idea gets thrown at you. Just never try it at home.

Read on LinkedIn ↗
Gatekeeping is dead. Long live guard rails.
LLMs have finally broken the chain of command. From enterprise infrastructure to data science to product management — the gatekeepers are no more.

LLMs and their hurricane have finally made gatekeeping gasp its last breath. From enterprise infrastructure to data science to product management, gatekeepers were everywhere. Now they are no more.

They stopped you from getting important information by creating information bubbles. They created monopolies so large that companies with infinite budgets controlled and defined innovation. On their turf, by their rules.

The noble goal of social science was overshadowed by ambitious — and sometimes corrupt — incentives.

And then LLMs happened.

Now, LLMs have finally broken the chain of command. The question is no longer who controls access to information. The question is who builds the guard rails that keep it safe, responsible, and genuinely useful.

Gatekeeping is dead. Long live guard rails.

The Power and Pitfalls of Reasoning by Analogy
Reasoning by analogy simplifies how we understand problems by drawing parallels between similar situations. In AI, this method is a game-changer — and a significant risk.

I came across an interesting discussion on this topic and haven't stopped thinking about it.

Reasoning by analogy simplifies how we understand and solve problems by drawing parallels between similar situations. In AI, this method can be a game-changer — but also poses significant risks.

The Pros

Understandable: Analogies make AI's decisions relatable. Think: "If it looks like a duck and quacks like a duck, it's probably a duck."

Quick Solutions: AI leverages known scenarios for fast decisions — like using a recipe to cook a similar dish.

Simplified Learning: Analogies break down complex ideas. Imagine explaining quantum physics with traffic flow.

The Cons

The same analogical leap that makes AI intuitive can make it dangerously overconfident. When a model reasons "this looks like X" and X has known outcomes, it may skip the verification that a novel situation demands.

In high-stakes domains — health, finance, compliance — this is not just inefficient. It is risky.
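Mechanically, reasoning by analogy looks a lot like nearest-neighbor matching: label the new thing after the closest thing you already know. A toy sketch, with made-up features and cases, showing both the speed and the failure mode.

known_cases = {
    (1.0, 1.0): "duck",      # looks like a duck, quacks like a duck
    (0.0, 0.0): "not a duck",
}

def by_analogy(looks: float, quacks: float) -> str:
    # Pick the known case at the smallest squared distance.
    nearest = min(known_cases,
                  key=lambda c: (c[0] - looks) ** 2 + (c[1] - quacks) ** 2)
    return known_cases[nearest]

print(by_analogy(0.9, 0.8))  # "duck": a sensible leap
print(by_analogy(0.6, 0.5))  # "duck": a borderline case gets a confident label

The second call is the danger: no verification step, just the nearest memory.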

The lesson: analogies are a starting point, not a conclusion. The human in the loop still matters.