How Tech is Disrupting...Tech.
- Tauhid Rehman


By the time you read this, some of this may already be outdated. That’s the speed we’re operating at. You might be thinking, “Great—another AI post.” Fair. But there are a few ideas I’ve been sitting on that feel worth putting out into the ether.
Note: AI is a large, complex field. In this piece, I’m primarily discussing generative AI.
A few years ago, people in my professional orbit told me AI would be a bubble — especially after the ChatGPT moment in late 2022. The hype was loud. Billions flowed into data centers. Venture capital moved aggressively. At times, it felt overheated.
But I challenged the “bubble” narrative then—and I’m doubling down now.
Yes, there’s hype. Yes, there’s AI slop—deepfakes, low-quality content, endless noise. But bubbles burst when value fails to show up. What we’re seeing instead is accelerating capability. The ceiling isn’t cracking—it’s rising.
We haven’t yet seen a defining breakthrough in healthcare, education, or other human-impact domains that many expected. But technological evolution doesn’t happen overnight. AI is still in its experimental phase, driven by multiple foundation model players—OpenAI, Anthropic, Google. We’re still early.
AI Is Changing Software Development
The tech landscape is shifting at breakneck speed—nowhere more visible than in software development.
Writing code is cheaper than it’s ever been. Coding assistants are reducing the time it takes to build and ship features. What used to require writing everything line by line can now be generated, debugged, and improved from a simple prompt.
This doesn’t eliminate engineers. It changes what engineers do.
Smaller teams can now prototype what once required entire departments. Senior engineers focus on architecture and direction, while AI handles portions of implementation. Instead of scaling with junior developers, companies are leaning toward experienced engineers who can design systems, catch issues early, and make sound tradeoffs. The value is shifting.
Execution is becoming cheaper. Judgment is becoming more expensive.
This is re-perception in action. Software engineers are no longer measured by how quickly they can deliver, but by how well they can think, design, and orchestrate scalable systems.
AI Has a PR Problem
AI doesn’t just have a capabilities problem—it has a PR problem. The leaders building it can’t seem to agree on what story to tell.
Alex Karp, CEO of Palantir, said AI “will destroy humanities jobs,” basically telling an entire class of knowledge workers to find a backup plan (Business Insider).
Meanwhile, Sam Altman, CEO of OpenAI, is selling a much safer narrative—“AI won’t replace humans, but humans who use AI will replace those who don’t” (Times of India).
And then we have Dario Amodei, CEO of Anthropic, stating that there’s a “25% chance that things go really, really badly” (Axios), which sounds less like a product roadmap and more like a warning label.
Put that together and it’s no surprise people are confused. Is AI a tool, a threat, or an existential risk? Depending on who you ask, it’s all three—and that inconsistency in messaging is starting to build mistrust faster than the technology can build confidence.
AI today isn’t a rogue superintelligence plotting disruption. It’s pattern recognition. It’s copilots helping engineers deliver faster. It’s customer support systems reducing wait times and handling repetitive tasks. I’m not suggesting we remove the human from the loop—but that humans remain the final layer of oversight.
We’ve framed AI as either magic or menace. Rarely do we frame it as augmentation.
And because we haven’t controlled the narrative, the loudest voices have:
“AI will take your job.”
“AI can’t be trusted.”
“AI is just hype.”
All three contain fragments of truth—but all three miss the bigger picture.
What’s happening isn’t replacement—it’s a shift in how we define value.
Star Trek or Terminator?
When most people think about AI, they don’t think in terms of technical progress. They think in science fiction.
On one end, AI is Star Trek—a future of abundance, medical breakthroughs, and intelligent systems augmenting humanity. Aspirational. Collaborative. Expansive.
On the other end, AI is Terminator—autonomous systems out of control, mass unemployment, machines turning against their creators.
And the conversation swings wildly between these extremes.
But AI today is neither Starfleet nor Skynet. It is data, algorithms, and predictive models running on GPUs. It is workflow automation, computer vision, and generative interfaces.
It is not conscious, malicious, or benevolent.
The people who understand that shift—not those who fear it—will shape what comes next.
By the time you read this, parts of this will already be obsolete. That’s the point. And I’m optimistic about where this is going.
