Introduction
In this pillar, we will discuss what Spec-Driven Development (SDD) is, how it emerged from vibe coding, what problems it solves and how it works. Then, we will look at some of the tools in the space so that you can understand which one best fits your use case. Finally, since SDD is a new concept, we will discuss the challenges and open topics that remain before it becomes a mature methodology.
The rise of Vibe-coding
It's February 2025. Andrej Karpathy, one of the most influential AI scientists of our time, publishes a post on X. Karpathy has just coined the term "vibe coding".

The idea is simple: you describe in plain English what you want, the AI interprets the request and tries to build code that satisfies it. You don't do planning or research. You don't evaluate tradeoffs or account for risks. You just describe your functional needs, and the AI tries to fill in the gaps and implement them.

In just a few months, vibe-coding platforms emerged. The promise is simple: from now on, everyone, not just engineers, will be able to create fully working applications!
Demos are great, examples are shiny, websites created with these tools are cheap to build and start to generate revenue…
Everyone forgot one important thing though: Karpathy, at the end of his post, mentions that this form of interaction with AI is not really coding. You see stuff, say stuff, run stuff, copy-paste, fix when things don't work, and it mostly works.

If you're reading this article, you're most likely a Software Developer, an Architect or, in general, someone who knows how to code. As you may guess, this approach is very limited and cannot be used for production-grade applications.
Vibe-coding limitations
Vibe-coding looks magical in demos, but when people started using it for real projects, a long list of problems appeared. The core issue is simple: the AI guesses. And when you build software on guesses, things break in surprising (and sometimes painful) ways. Some people have even started advertising themselves as "Vibe Coding Cleanup Specialists".

Let's go through the biggest limitations, with real stories that circulated online in early 2025.

First of all, AI doesn't always respect your instructions. You write "Don't change production", "Freeze code", "Ask me first", but the AI may ignore them. For example, in mid-2025 the platform Replit Agent reportedly deleted a live production database, despite explicit code-freeze instructions. The CEO of Replit publicly apologised: "unacceptable and should never be possible."
So yes, if you're treating "vibe code" as fully autonomous production-grade code, you're playing with fire.
Code quality, readability & maintenance suffer. When AI generates large chunks of code from your prompts, you may not fully understand what it did, or why. According to one large-scale study, AI-generated snippets had a high proportion of security weaknesses (e.g., 29.5% of Python snippets had issues) when using tools like GitHub Copilot.
Many tools skip standard security practices, compliance checks (e.g., GDPR) or proper code review.
Examples of issues:
- Old/outdated libraries used via AI-generated code
- Missing parameter validation / injection risk
- Policies not enforced because the AI "just wrote something that worked"
If you're working in a regulated environment (education tech, healthcare, finance) this is a red flag.

With vibe coding, since you didn't author every line (the AI did), when something breaks you might struggle to trace it. The AI's reasoning is hidden in prompts + model behavior, not in documented architecture.
If you have real engineering knowledge, you'll recognize the mismatch:
- Vibe-coding skips much of the planning, architecture and risk analysis
- It puts trust in an AI agent that may not understand your domain and may not respect constraints
- It delivers "working code" quickly, but that code may be wrong, insecure, or brittle
- In production-grade systems, you still need design, review, governance, testing, monitoring
So the key takeaway: vibe-coding is cool and useful for prototyping and rapidly iterating on ideas (yes, you can build a side project over pizza), but it is not a replacement for proper software engineering when the stakes are high.
From vibe-coding to AI Native Engineering
Vibe coding is not well suited for actual production-grade code, right?
What if, instead of abandoning AI, we step back, remove the hype, and think about how to use AI as a real engineering tool across the software development lifecycle (SDLC)? That is the move from vibe-coding to AI Native Software Engineering.
When we treat AI as a partner rather than an "all-knowing magic box", interesting opportunities (and challenges) open up:
- Better code quality, because AI can help with repetitive, error-prone work and free humans for design, architecture, edge cases. For example, AI can generate test cases or suggest documentation.
- Faster throughput on certain tasks, as studies show AI tools help engineers save hours per week and increase productivity when used well.
- More focus on value: Engineers spend less time on boilerplate and more on the parts that really matter (business logic, reliability, maintainability).
- Better collaboration: Instead of "AI did this, hope it works" you get "AI helped me do this, I review it, we build together".
Ok, but how can I move from vibe coding to AI native engineering? The risk is to just say "Alright, I'll go back to writing code manually". There has to be a better way! A middle ground where the human and the AI collaborate to build high-quality software. Here are some of the key principles.

First of all: context-rich input instead of a simple prompt
In vibe-coding you might just say: "Build a user-login page". But in proper AI native engineering you provide the system with context: existing architecture diagrams, coding standards, dependency graph, module boundaries, style guidelines. This is also called a memory bank in some tools.
Without that context the AI generates code, but you're left deciphering it, integrating it, and maybe discovering that it violates your conventions. The research around AI native engineering emphasizes that "just prompt" is not enough. Usually, when we just give a prompt, AI agents try to perform dynamic context discovery: they read your files, search your codebase and try to understand how you usually write code.
If you want to test the power of proper context, you can provide the AI with your API spec, target frontend/backend tech stack and SLA/throughput targets, then ask "Create the endpoints for user management" rather than "make a login page". If you want to learn more about context you can read the Context pillar.
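To make this concrete, here is a minimal sketch of what such context might look like when gathered into a memory bank alongside the request. The file names and details are purely illustrative and not tied to any specific tool.

```
context/                      # illustrative "memory bank" layout
  architecture.md             # module boundaries, service diagram, data flows
  coding-standards.md         # naming conventions, error handling, lint rules
  tech-stack.md               # e.g., TypeScript + Node, PostgreSQL, REST + OpenAPI
  non-functional.md           # e.g., p95 latency targets, GDPR retention rules

Prompt: "Using the context above, create the endpoints for user management
(create, update, deactivate). Follow our error-handling conventions and
reuse the existing auth middleware."
```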
Another component is Human-in-the-loop (HITL): no full autonomy
One of the biggest issues of vibe-coding is letting the system run without proper human oversight. In production settings that is risky. In contrast: AI native engineering uses AI as a collaborator, and humans remain accountable. Architecture review, security review, integration, deployment still involve people. This layered approach reduces risk.
Think of AI as the friendly intern who can whip up drafts, but you're still the senior engineer who says "Yep, sign it off" (or "Nope, go back and fix this").
If you want to learn more about this topic, you can read the HITL Pillar.
Do you remember Divide & Conquer? It helps to break down the system so you can use AI where it helps most!
Instead of asking the AI to "Build the whole system" (vibe-coding style), you break the project into modules or phases (ever heard of epics/stories?), assign AI-assisted tasks for particular scopes, then integrate.
This "divide & conquer" makes reviews manageable and reduces the risk of chaotic AI output. Together with HITL it's a powerful tool for reviewing AI-generated code.
It also allows you to pick "safe bets" for AI use first (low-risk modules) and gradually expand, as sketched below.
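As a rough illustration, a divide & conquer breakdown for a single epic might look like this; the scopes and risk labels are made up for the example.

```
Epic: User management

- Task 1 (low risk, AI-assisted): CRUD endpoints for the User entity
  Scope: src/users/ only; follow the existing repository pattern
- Task 2 (medium risk, AI-assisted + human review): password-reset flow
  Scope: token generation, expiry handling, email templates
- Task 3 (high risk, human-led, AI for tests only): authentication middleware
  Scope: session handling, rate limiting, audit logging

Integrate and review after each task before starting the next one.
```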
Once you are using AI as a collaborator, the next step is: how do you define good specs that the system can work against? How do you formalize your context and integrate AI into your SDLC in a controlled manner? That's where spec-driven development comes in.
Spec-driven Development

At its core, Spec-Driven Development (SDD) is about flipping the old "code first, document later" workflow by letting specifications become the driving artifact for AI Native Engineering.
The key idea is simple: we treat the specification (what we want + why + constraints) as the source of truth. We give the AI that spec + context, then let it generate the code (and maybe tests, tasks, etc). Humans validate, evolve the spec, steer the AI. We do not hand over fully autonomous control.
SDD tries to overcome vibe-coding's limitations by leveraging all the concepts we mentioned earlier.
Do you like the idea? Unfortunately, as explicitly described by ThoughtWorks researcher Birgitta Böckeler: "Like with many emerging terms… the definition of 'spec-driven development' (SDD) is still in flux." Put another way: we're still figuring out how exactly to do SDD; the tools are appearing and the vocabulary is emerging, but it's not yet a mature methodology.
Because the methodology is still new, each tool developed so far has its own flavor and practices. There is no "one size fits all" version of SDD yet: your organization will need to tailor it.
This approach is similar to what FAANG engineers are doing in the industry, as reported by this thread on Reddit:

Tools and frameworks

Since SDD is emerging, a number of tools and frameworks are being built to support it. These tools reflect different ways of implementing the intuition above (spec becomes truth → AI generates → human reviews). In this section we will see how different tools and frameworks implement different versions of SDD.
Kiro
Kiro is an "Agentic AI" IDE from AWS. Instead of writing a prompt, you're writing a goal, and Kiro helps you plan, design and build across multiple files and tasks.
So if standard AI code tools are like "here's a code snippet", Kiro is more like "let's map out what we're building and how we'll build it, then generate the list of tasks and write code for each one."

Many devs using AI assistants complain: "it generated code, but I don't know how it made decisions, it doesn't fit my architecture, I lose track of changes." Kiro addresses those UX issues by:
- Planning first: The UI gives you markdown "Specs" (requirements.md, design.md, tasks.md) to review and edit before code is generated. This gives clarity and reduces surprises.
- Structured workflow panels: The interface gives you side panels for specs, tasks, hooks, and chats with agents. The familiarity helps reduce friction.
- Diff and review mentality: Instead of blind generation, you can see the changes (diffs) Kiro wants to make and approve or reject them. That gives control and reduces "AI surprise".
- Persistent context (Steering + Hooks): You define project-wide rules (steering files) so the AI aligns with your architecture, naming conventions and test strategy. Agent hooks automate repetitive tasks like updating tests, docs, etc. when certain files change.
Here's how you might use Kiro in a real project: from idea to code rollout.
- Install & setup
- Visit the Kiro website and download the version for your OS (macOS, Linux, Windows).
- Install it and sign in
- (Optional) Import your existing VS Code settings if you use VS Code so the UI feels familiar.
- Define your feature/goal (Prompt)
- In the chat or command panel, you describe what you want to build. Example: "I want to add authentication + password reset to our web app." Kiro takes that intent and then:
- Generates a requirements.md with user stories and acceptance criteria.
- You review/edit the requirements if needed.
- Design phase
- Once requirements are approved, Kiro analyses your codebase (or a scaffold) and proposes a design.md: architecture, data flows, interfaces, tech stack decisions.
- You review the design and can make edits or refine nuances (e.g., "use Postgres vs MySQL", "use React-TS for the frontend")
- Task breakdown
- Kiro breaks the design into tasks.md: discrete, actionable steps (create user model, implement login API, write unit tests, update docs, etc). Each task may link to which requirement(s) or design parts it covers. You can then pick a task to execute or run them in sequence.
- Execution & review
- Once you click "Start task" in the task list, Kiro will apply changes (create files, modify files) to your codebase. It supports two modes: supervised (you review each diff) or autopilot (you allow it to proceed but still review final results).
- You inspect diffs, test results, evaluate if code meets the spec/requirements.
- You iterate: if something doesnât align, you can adjust the spec, design or task list and re-run. Alternatively, you can chat with the panel on the right to fix the code.
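To give a feel for the artifacts, here is a rough sketch of what an excerpt of the generated requirements.md could look like for the authentication example above; the exact structure and notation Kiro produces may differ.

```
# requirements.md (illustrative excerpt)

## Requirement 1: User login
User story: As a registered user, I want to log in with email and password,
so that I can access my account.

Acceptance criteria:
- WHEN a user submits valid credentials, THE SYSTEM SHALL create a session
  and redirect to the dashboard.
- IF a wrong password is entered 5 times in a row, THEN THE SYSTEM SHALL
  lock the account for 15 minutes.

## Requirement 2: Password reset
User story: As a user who forgot my password, I want to receive a reset link by email.

Acceptance criteria:
- WHEN a reset is requested, THE SYSTEM SHALL send a single-use link
  that expires after 30 minutes.
```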
Hooks & automation
While development is underway (or for future features), you configure agent hooks: triggers that run automatically when certain events happen (file save, new file creation, commit, etc). Example: "When a new React component is added, auto-generate unit test skeleton and update docs."
These hooks help keep the workflow consistent and reduce boilerplate manual work.
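Conceptually, a hook pairs a trigger with an instruction for the agent. The sketch below conveys the idea only; it is not Kiro's actual hook file format.

```
Hook: keep-tests-and-docs-in-sync          (illustrative, not Kiro's real syntax)
Trigger: a new file matching src/components/**/*.tsx is created
Action: "Generate a unit test skeleton next to the new component and add an
         entry for it in docs/components.md."
```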
Steering and project context
You create "steering" markdown files that hold your project's conventions, architecture decisions and style guides. Kiro refers to them to guide its output (so naming, patterns and testing style follow your rules). Example steering files:
steering/tech_stack.md, steering/tests_convention.md. This means that over time Kiro "learns" your project style and fits new code to it.
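For example, a steering file could capture your stack and conventions in plain markdown; the contents below are purely illustrative.

```
# steering/tech_stack.md (illustrative)

## Stack
- Backend: Node 20 + TypeScript, Express, PostgreSQL (via Prisma)
- Frontend: React 18 + TypeScript, Vite
- Testing: Vitest for unit tests, Playwright for end-to-end

## Conventions
- Every new endpoint needs an OpenAPI entry and input validation.
- Do not add a new dependency without calling it out in the task notes.
```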
One of the big strengths of Kiro is its UI/UX and how it implements spec-driven development. As Kiro offers a lot of guidance during the workflow, it's also very easy to use and has a gentle learning curve.
Spec Kit
Want to see Spec Kit in action? Watch the video overview!

The Spec Kit toolkit is a flexible, open-source command-based framework designed to bring SDD into your AI-assisted workflow. Think of it as a lightweight shell around your favorite AI coding assistant: you install a small CLI, use slash commands in the IDE, and the toolkit helps you scaffold specs → plans → tasks, rather than diving straight into "tell the AI to write code".
Spec Kit is tool-agnostic: it works with multiple AI agents and doesn't lock you into a vendor or proprietary platform.
Here's how it works, in plain developer terms:
- Install & bootstrap
- Install the specify-cli (a small command tool) into your existing environment.
- Run something like specify init <PROJECT_NAME> (or via slash commands inside your AI assistant) and pick your coding agent of choice (e.g., GitHub Copilot, Claude Code, etc).
- The tool scaffolds a directory structure (e.g., a .specify or specs/ folder, plus prompt templates).
- Define your "constitution" (optional but recommended)
- Use a command like /speckit.constitution or similar to document your non-negotiable rules (project conventions, styles, testing mandates).
- This becomes part of the AI's context so it doesn't "go rogue" and use random libraries or ignore your style.
- Write the spec
- Use /speckit.specify (or similar) to declare what you want and why. Not the nitty-gritty of how, but the feature or requirement (e.g., "Build a user photo album with date grouping and drag-drop").
- This spec becomes the artefact the AI will use as its starting point.
- Generate the implementation plan
- Use /speckit.plan to convert the spec into the "how": tech stack, architecture decisions, module breakdown, dependencies.
- You can review/edit this plan.
- Break down into tasks
- /speckit.tasks creates granular actionable items (user stories, tasks/sub-tasks, test cases, docs updates) based on the plan.
- This lets your AI (and you) pull one task at a time rather than the AI doing "everything" in one shot.
- Execute implementation via your AI assistant
- Use /speckit.implement (or equivalent) to instruct the AI to draft code/tests/docs for those tasks, within the context of spec + plan + tasks.
- You still review, test, merge, and integrate like normal.
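Putting the steps together, a typical session might look roughly like the following; the project name and the text after each command are just examples.

```
# Illustrative Spec Kit session
$ specify init photo-album      # scaffold the spec structure, pick your coding agent

# Then, inside your AI assistant:
/speckit.constitution TypeScript + React, Vitest for tests, no new dependencies without approval
/speckit.specify Build a user photo album with date grouping and drag-drop reordering
/speckit.plan        # review the generated plan, adjust tech choices if needed
/speckit.tasks       # split the plan into small, reviewable tasks
/speckit.implement   # let the agent draft code for a task, then review the diff
```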
BMAD Method

The BMAD Method is arguably the most powerful current methodology for SDD. It stands out because it offers a full, end-to-end workflow for engineering with AI assistance, and it is extensible: you can customize agents and workflows, and tailor the system to your own domain or organization.
In short: BMAD treats the specification and planning as first-class artifacts, defines distinct agents (such as Analyst, Product Manager, Architect, Developer, Scrum-Master) each with a clear function in the workflow, and guides code generation, integration, testing, and review in a structured way.
The methodology guides you from analysis to planning, and from solutioning (architecture/design) to implementation, so you're not skipping design and context.
One cool aspect is that it is fully open source and can be installed and used in your favorite IDE or agentic environment. No vendor lock-in, and no new tool to install that is incompatible with your current workflow.
Here's a step-by-step of how BMAD typically plays out. Think: "you're still in charge, AI helps; you still review; you enforce quality".
- Analysis Phase
- The "Analyst" (AI agent) helps gather and clarify high-level intent: business goals, market/competitive context, user problem.
- You work with the agent to define the why and what at a broad level (not yet coding).
- Output: a brief but clear business case / feature description / scope document.
- Benefit: reduces ambiguity early, avoids "just ask AI to write code" without framing.
- Planning Phase
- The "Product Manager" agent converts the business case into a complete PRD (Product Requirements Document).
- The "Architect" agent takes that and produces system-level design: high-level architecture diagrams, module boundaries, data flows, interface definitions.
- You review and refine: choose tech stack, define constraints (libraries to use/avoid), coding standards, deployment model.
- Solutioning/Task Breakdown
- The "Scrum Master" agent breaks down the design into granular work items: epics, tasks, sub-tasks, dependencies, test cases, docs to update.
- Each task comes with the background context and relevant spec pointers (so the AI working on it knows "why").
- Implementation Phase
- The "Developer" agent (or the agent + you) implements the tasks: generates code, tests, docs, possibly CI/CD changes.
- AI makes the first draft; you review diffs, test results, adherence to spec and architecture.
- If something does not align, you loop back: adjust spec or task and regenerate.
- At each merge/release, you still apply standard engineering practices (code review, security scan, performance review).
- Output: Working code, tests and docs, all traceable to spec.
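To illustrate what "tasks carry their context" means in practice, a task handed to the Developer agent might contain something like the sketch below; the structure is illustrative, not BMAD's exact template.

```
# Story 2.3: Password-reset endpoint (illustrative hand-off)

Context: Epic 2 "Account security"; see prd.md §4.2 and architecture.md §3.1.
Why: locked-out users currently require manual support tickets.

Acceptance criteria:
- POST /auth/reset-request sends a single-use token by email, valid for 30 minutes.
- POST /auth/reset-confirm validates the token and updates the password hash.

Constraints: reuse the existing mailer module; no new dependencies; unit tests required.
Done when: all criteria pass in CI and the Architect's interface notes are respected.
```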
BMAD is incredibly powerful, but it is not a magic shield that prevents all errors.
First limitation: there is no strict enforcement layer for the agents yet.
BMAD defines clear roles (Analyst, Architect, Developer, QA), but the system will not force an AI agent to behave exactly within that role. If you or the model drift outside the intended workflow (maybe the "Scrum Master" starts writing code), BMAD won't automatically stop you.
This means that you can still accidentally misuse an agent or skip essential steps if you are not careful.
Second limitation: BMAD has a learning curve.
For teams new to spec-first thinking, the mental shift can feel uncomfortable at first. You must learn how to:
- write actionable specs rather than jumping straight into coding
- choose the right Agent for the right step
- maintain clean context for AI
- review output with stricter discipline
After a few cycles, teams get used to the rhythm and the structure becomes natural. Once that happens, BMAD's benefits become much more obvious and consistent.
Other players
Please note that these tools evolve quickly and new approaches are emerging almost weekly, so this is not meant to be a comprehensive list, but rather a set of pointers to start your own investigation.
Tessl describes itself as an "AI-native development" platform. Essentially, the idea is to shift from writing a lot of code manually to defining specifications (what you want) and letting AI (or a framework) generate and maintain the code under clear guardrails.
They offer two main pieces:
- a Spec Registry, where you can find many pre-built specs (10,000+ mentioned) for common libraries/patterns.
- a Framework / CLI / toolchain that integrates specs into your codebase, lets you generate code, test it, maintain it.
Here's a simplified flow:
- You write a spec file describing a component: what it does, its public API, maybe constraints or tests. (In Tessl this might use @generate, @describe, @test annotations.)
- The framework uses that spec to generate code (or link with existing code) and produce tests.
- The code is part of your project, and the spec remains a "source of truth", so future modifications reference the spec first rather than free-hand code.
- If you upgrade a library or make a change, the spec/registry help ensure agents don't hallucinate APIs or introduce unintended side-effects (one of the problems Tessl explicitly cites).
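To give a flavor of how such annotations could be used, here is a rough sketch of a component spec; the syntax is an assumption for illustration, not Tessl's documented format.

```
# slugify.spec.md (annotation syntax is illustrative, not Tessl's actual format)

@describe Converts an arbitrary title string into a URL-safe slug.

@generate function slugify(title: string): string
- lower-cases the input, trims whitespace, replaces runs of spaces with "-"
- strips characters outside [a-z0-9-]

@test slugify("Hello, World!") == "hello-world"
@test slugify("  spaced   out ") == "spaced-out"
```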
Another tool in the space is OpenSpec. It is an open-source CLI tool and workflow framework that supports "spec-driven development for AI coding assistants". In plain terms: before you ask the AI to code, you agree with the AI (and your team) on what will be built (the spec), then you execute, then you archive the spec.
It supports many AI coding tools (Claude Code, Cursor, CodeBuddy, etc.) via slash commands or CLI commands, so you don't need to commit to a specific vendor (similar to Spec Kit or BMAD).
What should I use?
Choosing between Kiro, Spec-Kit and BMAD really comes down to how much structure you want, how much change your team is willing to adopt and how deeply you want AI woven into your engineering workflow.
Kiro is a great choice if you want a guided, visual experience embedded directly inside an IDE. The UI helps you plan, design and execute features with AI side-by-side, almost like having a built-in project navigator. This is ideal if you like strong guardrails and a clear workflow presented in a friendly interface. The tradeoff is that you'll likely need to work inside a new IDE environment, which means a small disruption to your existing habits.
Spec-Kit is perfect if you want something simple to install, simple to use and easy to layer on top of your current AI agent. It uses a set of commands to generate specs, plans and tasks without forcing you into a new tool or workflow. This keeps vendor lock-in low and the learning curve gentle. It's a great "lightweight SDD starter kit" that feels natural for teams who want to adopt spec-driven thinking without changing how they code day to day.
BMAD is the right choice if you want to explore the full power of a complete, customizable SDD workflow. It manages the entire lifecycle, assigns distinct AI roles and lets you create tailored agents or processes that fit your team's domain. It offers the highest flexibility and depth, but also requires the most discipline and onboarding. You trade simplicity for control and extensibility.
In the end the tradeoffs typically revolve around UI/UX, vendor lock-in, learning curve and customization. If you want a smooth visual experience, pick Kiro. If you want simplicity and compatibility with your current AI assistant, pick Spec-Kit. If you want full control, full workflow management and room to experiment with custom agents, BMAD is the strongest option.
Current Limitations of Spec-Driven Development (SDD)

Even though SDD holds a lot of promise (i.e., using formal specifications + AI + human-in-the-loop), it is still very much an emerging methodology. As such, there are several practical limitations teams and organizations are facing right now. I deeply believe that these limitations are temporary and will be addressed in a relatively short amount of time, whether through technical solutions and tooling or through new team practices and improved workflows.
Mismatch between spec size and task complexity
One major limitation of spec-driven development is that the size and depth of a specification and the other generated artifacts don't yet scale smoothly with the size and complexity of the task.
When the work involves a large module, many dependencies or complex integrations, writing a full spec + plan + task breakdown makes sense and can bring real value.
But for smaller features or quick changes, the overhead of creating a full specification workflow becomes burdensome, often outweighing the benefits.
In effect there is no streamlined "lightweight spec" path yet: teams either skip the spec approach for minor work (losing consistency) or apply it and spend more time upfront than they gain downstream.
This imbalance means that spec-driven development currently works best for mid-to-large efforts, but struggles to fit comfortably into quick, small-scoped tasks.
Tools like BMAD have started to implement features such as "Quick Flow", which is tailored for bug fixes or small features.
Other tools are trying to smooth out this imbalance. Editors like Cursor and GitHub Copilot have introduced a "Plan" mode, which gives the AI just enough space to think before coding without forcing the developer through a full, heavyweight specification workflow. Instead of a large, formal spec, the model produces a brief one-page plan: a bit of research, a short outline of steps, and a clear explanation of what will change.
This lighter structure helps keep small tasks consistent and intentional, without slowing teams down with unnecessary ceremony. It's not a full solution yet, but it's a promising middle ground that makes spec-driven thinking feel practical even for quick fixes and small features.
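For a small change, such a plan can fit in a few lines; the example below illustrates the level of detail, not the output of any particular tool.

```
# Plan: fix off-by-one in pagination (illustrative)

Research: bug is in src/api/list.ts; "page" is 1-based in the API but treated
          as 0-based in the query builder.
Change:   compute the offset as (page - 1) * pageSize and clamp page to >= 1.
Tests:    add unit tests for page=1 and page=2; no API or schema changes.
Risk:     low; only previously broken requests change behavior.
```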
Team settings and collaborative workflows are under-defined
Another limitation: most SDD tools and workflows are oriented toward individual developers or small prototyping contexts, rather than full team, multi-role, enterprise workflows. For example, many toolkits assume a single developer writes the spec, then the AI generates code, then the same developer reviews. But in real development teams you have product owners, business analysts, architects, QA, operations, security.
What this means practically:
- Who owns the spec? How do roles align (product, architecture, dev, QA)?
- How do multiple team members contribute/edit the spec?
- How is versioning, branching, collaboration handled in the spec layer?
- How does the workflow integrate with existing team practices (sprints, agile ceremonies, code reviews, CI/CD)?
Because these aspects are not yet mature, teams risk creating process friction when adopting SDD. While some tools are already making progress, there is not yet a standard for working within multi-repo contexts.
Legacy systems, brownfield code and integration challenges
SDD works best when you are building something new (greenfield). But most organizations maintain large legacy systems (brownfield). Some SDD tools currently struggle with:
- Understanding existing codebase context and dependencies.
- Generating specs and code that integrate cleanly with existing modules, rather than assuming a fresh start.
- Aligning generated code with existing architecture, patterns, conventions and non-functional requirements.
Using SDD for legacy systems is possible, but it often involves additional overhead (reverse-engineering context, refactoring before writing specs), which takes time and needs review, and the AI might not be able to reverse-engineer all the requirements and business needs.
Tooling maturity, consistency & reproducibility
When applying spec-driven development (SDD) in practice, one of the biggest hurdles is the maturity and predictability of the tools and AI agents involved. First, teams run into reproducibility issues: unlike a traditional compiler, where the same input and settings reliably produce the same output, AI-based generation does not guarantee this. As one practitioner puts it, "output varies across tools and models. The same spec will produce different code from different agents."
Â
Another critical limitation is context and scope. Large or complex specifications, big codebases, numerous files and dependencies can exceed an agent's effective context window or lead to "context blindness", where the AI misses earlier constraints or architectural rules. The result: generated code works but does not meet the underlying intent or integration requirements.
The tooling ecosystem itself is still evolving. Many of the frameworks and agents branded for SDD are experimental, with frequent breaking changes, limited support, and few established best practices. Teams adopting SDD thus face risks of tool fatigue, migration pain, inconsistent workflows and a lack of documentation or community guidance.
Finally, the measurement and feedback layer is underdeveloped. How do you quantify the benefit of SDD plus AI tooling? What metrics show defect reduction, velocity improvement, or spec-to-code alignment? These questions remain largely unanswered in published practice.
Without robust feedback loops, teams may adopt SDD based on hype rather than clarity.
Skills, culture and change management
Finally, adopting SDD requires changes in how teams work: writing specs becomes a more central task, humans shift roles, AI becomes part of the flow. The skills and culture of teams may need to adapt. Some limitations here:
- Writing good, actionable specs is hard. Not all product owners, architects or developers currently have that skill set.
- Teams may resist the perceived overhead of spec writing.
- New roles or responsibilities (e.g., "spec owner", "AI-agent reviewer") might not yet exist in many orgs.
- There is risk of over-reliance on AI output or under-review.
- There is a learning curve to adopt these tools and learn how to use them properly, so teams or specific team members might resist the change
Summary
Spec-Driven Development is a compelling evolution of how we work with AI in software engineering: we move from ad-hoc "vibe coding" toward a structured workflow, specify → plan → tasks → implement, with AI as collaborator and humans as reviewers. But it is not yet fully mature.
Even though there are limitations, the future is bright: we can expect lightweight modes, better collaboration tooling, living specs/contracts, brownfield integration, agile-friendly workflows, mature metrics, and teams fully trained in the new way of working.