Vibe Engineering: We Are All Vibe Coders Now

“Vibe coding” started as a joke with sharp edges: Andrej Karpathy coined it to describe a mode where you “forget that the code even exists.” A year later, it is no longer a joke; it is a default workflow. It is time to evolve the concept into “Vibe Engineering.” Most developers already have AI in the loop, and the tooling has shifted from autocomplete to agents that plan, edit, run, and iterate.

Vibe coding is not a niche; it is the median developer workflow

Stack Overflow’s 2025 survey data shows 84% of respondents are using or planning to use AI tools, and 47.1% say they use them daily. (Stack Overflow Survey)

If your mental model is still “some juniors dabble with ChatGPT,” you are behind the curve.

Also notable: the same survey reports more developers distrust AI accuracy (46%) than trust it (33%), and 66% are frustrated by AI answers that are “almost right.” (Stack Overflow Survey)
That tension, high adoption paired with low trust, is the vibe coding era in one sentence.

The “most common vibe coding tools” are not a mystery, and the list is getting more agentic

From Stack Overflow’s 2025 AI tooling breakdown, the mainstream “out-of-the-box” stack looks like this:

  • ChatGPT (81.7%)
  • GitHub Copilot (67.9%)
  • Google Gemini (47.4%)
  • Claude Code (40.8%)
  • Microsoft Copilot (31.3%)

Then you get the long tail: Replit, Tabnine, v0.dev, Bolt.new, Lovable.dev, Amazon CodeWhisperer, Cody, Devin, and more.

Two important meta-shifts are hiding inside that list:

  1. Chat-first is still dominant (ChatGPT is the on-ramp for almost everyone).
  2. Agent-first is accelerating (Claude Code shows up as a first-class tool, and “prompt-to-app” products are in the same shortlist).

Vibe Engineering: Tooling is converging on “multi-model, agent-enabled, repo-native” workflows

GitHub is openly moving from “Copilot writes snippets” to “Copilot, Claude, Codex, and custom agents operate inside issues and pull requests,” which is a strong signal that the center of gravity is shifting to agentic, repo-aware automation. (The Verge)

Meanwhile, the market is treating vibe coding like a real platform shift. Microsoft’s AI leadership is explicitly talking about vibe coding as a way software becomes easier to build and, therefore, easier to replace. (Business Insider)

If you want a simpler framing: the IDE is becoming an orchestration layer for agents, models, and eval loops, not just a text editor with autocomplete.

Vibe Engineering: The stats say “code is being generated,” but the real story is “verification is failing to keep up”

Sonar’s 2026 State of Code survey is blunt:

  • Developers report 42% of committed code is AI-generated or significantly AI-assisted today
  • They expect that to rise to 65% by 2027
  • 72% of developers who have tried AI coding tools use them daily
  • 96% do not fully trust AI output
  • Only 48% say they always verify before committing (SonarSource)

This is the core operational risk of “we are all vibe coders now.” The generation engine improved faster than the organizational habits around verification.

Stack Overflow’s data aligns: distrust is high, and “almost right” output is the top frustration.

The best vibe engineers are still software engineers, because the hard parts did not go away

If you want to be a good vibe coder, you are not optimizing for “less typing.” You are optimizing for:

  • Correctness under change (design, interfaces, invariants)
  • Fast feedback (tests, repro harnesses, observability)
  • Security and dependency hygiene (threat modeling, SAST, SBOM discipline)
  • Operational clarity (logs, metrics, feature flags, rollback plans)

AI can accelerate the construction step, but it does not remove the engineering realities of production systems. If anything, it makes them more important because you can now create complexity faster than you can understand it.

Even the optimistic productivity data points in this direction: Microsoft Research found developers with Copilot completed a coding task 55.8% faster, but that is speed on a bounded task, not a waiver on engineering judgment. (Microsoft)

Vibe Engineering: “Traditional engineers vs AI tools” is a losing argument, and it always was

A few adoption receipts:

  • GitHub Copilot crossed 20 million all-time users (as of July 30, 2025, per Microsoft’s earnings call reporting). (TechCrunch)
  • GitHub’s Octoverse reports that 80% of new developers use Copilot in their first week, and AI-related repos exceed 4.3 million. (The GitHub Blog)
  • Gartner predicts 75% of enterprise software engineers will use AI code assistants by 2028, up from less than 10% in early 2023. (gartner.com)

At this point, pretending Claude, ChatGPT, or Copilot aren’t useful is just self-inflicted irrelevance. The adult conversation is: how do we use them without shipping nonsense?

Vibe Engineering: Here is the playbook for engineering-grade vibe coding

If you adopt only one mindset, make it this: AI is allowed to propose; your system must prove.

Practical rules that work in real teams:

  • Never accept greenfield architecture from a single prompt. Use AI to draft options, then pick based on constraints you can articulate.
  • Force the work into tests. Require the agent to add or update tests before you even look at the implementation.
  • Add a “repro script” for every bug fix. If the agent cannot reproduce it deterministically, you do not have a fix.
  • Constrain the diff surface area. Small PRs, tight scopes, explicit acceptance criteria. Vibe coding loves sprawling edits. Fight that.
  • Treat prompts like build inputs. Capture the intent, constraints, and assumptions that produced the change. This is how you debug the future.
  • Use static analysis and linters as guardrails, not suggestions. If the toolchain says no, it is no.
  • Ban “mystery code.” If nobody on the team can explain it, it does not ship.

This is how you get the upside of speed without buying the downside of compounding technical debt.

The near-term future is more agents, more cost pressure, more scrapped projects

Gartner has already warned that a large share of “agentic AI” efforts will get canceled due to cost and unclear value, even while agentic features proliferate inside enterprise software.
So expect a split:

  • Teams that build evaluation, governance, and cost controls into their vibe coding workflow will scale it.
  • Teams that treat agents like magic will ship a lot quickly, then spend the next year unshipping it.
