Every developer I know has the same frustrating ritual. Open Claude Code or Cursor and ask it to do a task. The AI gives you generic code, sometimes useful (but usually not). You correct it. It apologizes. You explain again, with additional context.
Rinse, repeat, until you want to throw your laptop out the window.
David Cramer from Sentry recently shared his AI workflow where he maintains manual rules files to give LLMs context. Solid approach, but it feels like too much copy-pasting. It’s 2025, and machines can do a better job of remembering things.
It’s funny how we’ve built the most powerful reasoning systems in human history, then lobotomized them by making them forget everything after each conversation. My question: is there a better way?
Continue Reading →
This is part of my “AI in SF” series, where I share real AI engineering workflows from San Francisco startups. I recently interviewed engineers from Parabola (they’re hiring btw, more on that at the end). Here’s a technique to teach AI to learn from your mistakes.
You know that feeling when you leave the same code review comment for the third time this month? “Hey, we use relative imports here, not absolute ones.” Or “Remember to handle both null and empty string cases.” Or “This should use our ORM helper, not raw SQL.”
Your team agrees it’s important. People follow it for some time. But three weeks later, nothing’s changed, and you’re still leaving the same comments.
I recently interviewed C.J. and Zach from Parabola (a Series B data automation company) about how they use AI in their engineering workflow. They shared a simple approach that’s replaced most of their linter rules: teaching Cursor to remember code review feedback permanently.
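The full write-up is behind the link, but the core mechanic can be sketched as a Cursor project rules file that grows every time a review comment repeats. The path `.cursor/rules/code-review.mdc` and the file's exact layout are my illustrative assumptions, not Parabola's actual setup; the bullets are the recurring review comments from above.

```markdown
---
description: Team conventions captured from recurring code review comments
alwaysApply: true
---

<!-- Hypothetical example: each time a review comment repeats,
     it gets added here instead of becoming a linter rule -->
- Use relative imports, not absolute imports.
- Handle both null and empty string cases in input checks.
- Use the ORM helper for queries; never write raw SQL.
```

Because the rule is always applied, Cursor sees the feedback on every generation, so the same comment never has to be left in review a third time.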
Continue Reading →
This is part of my “AI in SF” series, where I share real AI engineering workflows from SF startups. I recently interviewed an engineer from Pallet (they’re hiring - more on that at the end). Here’s an insight that will make your AI-generated code better.
Most developers use Cursor like expensive autocomplete. They let it generate whatever code it wants, fight with inconsistent outputs, and spend more time debugging AI mistakes than they save.
There’s a better way. During my interview with Vidhur from Pallet, I learned about a simple technique that made their AI-generated code dramatically better: the “gold standard” file approach.
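The details are behind the link, but the gist as I understand it: keep one exemplary file and tell the AI to imitate it rather than invent its own conventions. A minimal sketch as a Cursor rule; the file path `src/components/UserTable.tsx` and the wording are my assumptions, not Pallet's actual configuration.

```markdown
---
description: Imitate the gold standard file when generating new code
alwaysApply: true
---

<!-- Hypothetical example: point this at the best-written file in your repo -->
When writing a new component, match the structure, naming conventions,
and error handling of src/components/UserTable.tsx.
Treat that file as the gold standard; do not introduce new patterns.
```

The design idea is that one concrete, in-repo example constrains the model far more reliably than a list of abstract style rules.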
Continue Reading →
Social media is full of people showing off their perfect little demo apps and claiming AI is revolutionary. Meanwhile, AI keeps suggesting fixes for files that don’t exist or rewriting working code into broken messes.
Does that sound familiar?
Here’s the thing — the “vibe coders” are not wrong about AI being powerful. They’re just not dealing with what you’re dealing with.
You’re working on a real codebase, with real dependencies, real business logic, and real users. Real codebases are messy.
I spent ten years of my life running a development agency, and Cursor has legitimately saved me weeks of work, but only after I stopped expecting it to just “figure things out.” Now, I’m going to share with you the workflow that made Cursor work for me on complex projects.
Continue Reading →
A mindset shift that changed the way I think about the world
In India, knowledge is currency. Three months ago, if another founder asked me about my marketing strategy, I’d give them some generic answer and change the subject. You don’t share knowledge until there’s something in it for you.
I recently moved to San Francisco. A CTO of a unicorn startup had read one of my blog articles and we started talking over DMs. When I got to SF, I asked him to meet, and he agreed.
We met in FiDi for a casual lunch. This guy runs the entire company, and he was treating me — a new founder — like an equal. He openly shared his experiences, his journey, and his insights. As we were leaving, he offered to help with connections, fundraising, whatever I needed.
He gave me a full hour of his day, just to shoot the breeze like two developers do.
This was nothing like what I was used to. Back in India, a person with even a 100-person office would have an air of arrogance. They’d guard their knowledge and time, only sharing when there was a clear benefit to them.
It was that day that I understood the beautiful “infinite sum game” being played in SF.
Continue Reading →
Reddit discovered the funniest thing in tech this week, and it shows exactly how broken the AI narrative is.
The newly released GitHub Copilot agent was given permission to make pull requests on Microsoft’s .NET runtime, and the results couldn’t be funnier.
The AI confidently submitted broken code, while human developers patiently explained why it didn’t work. Over and over again, for days.
Continue Reading →
AI code generation is error-prone. Why, then, are programmers still using it?
Everyone from YC partners to Fiverr’s CEO has been proclaiming that “90% of code is AI-generated” or that they’re becoming “AI-first” companies.
The subtext they’re forcing on us is clear: programmers who don’t embrace AI will be left behind.
But after two years of daily AI coding — from the earliest Cursor version to the latest agentic tools — I’ve uncovered the truth: AI coding tools are simultaneously terrible and necessary.
Continue Reading →