
Spec-driven Development

Introduction

In this pillar, we will discuss what Spec-Driven Development (SDD) is, how it emerged from vibe coding, what problems it solves and how it works. Then, we will discuss some of the tools in the space so that you can understand which one is best for your use case. At the end, since SDD is a new concept, we will discuss the challenges and open topics that remain before SDD becomes a mature methodology.

The rise of Vibe-coding

It’s February 2025. Andrej Karpathy, one of the most influential AI scientists of our time, publishes a post on X. With that post, Karpathy coins the term “vibe coding”.
The idea is simple: you describe in plain English what you want, the AI interprets the request and tries to build the code to satisfy it. You don’t do planning or research. You don’t evaluate tradeoffs or account for risks. You just describe your functional needs, and the AI tries to fill the gaps and implement them.
 
In just a few months, vibe coding platforms emerged. The promise is simple: from now on, everyone, not just engineers, will be able to create fully working applications!
Demos are great, examples are shiny, and websites created with these tools are cheap to build and start to generate revenue.

Everyone forgot one important thing, though: Karpathy, at the end of his post, mentions that this form of interaction with AI is not really coding. You see stuff, say stuff, run stuff, copy-paste, fix things when they don’t work, and it mostly works.
 
If you’re reading this article, you’re most probably a Software Developer, an Architect or, in general, someone who knows how to code. As you may guess, this approach is very limited and cannot be used for production-grade applications.

Vibe-coding limitations

Vibe-coding looks magical in demos, but when people started using it for real projects, a long list of problems appeared. The core issue is simple: the AI guesses. And when you build software on guesses, things break in surprising (and sometimes painful) ways. Some people even started to advertise themselves as “Vibe Coding Cleanup Specialists”.
Let’s go through the biggest limitations, with real stories that circulated online in 2025.
 
First of all, AI doesn’t always respect your instructions. You write “Don’t change production”, “Freeze code”, “Ask me first”, but the AI may ignore them. For example, in mid-2025 the platform Replit Agent reportedly deleted a live production database, despite explicit code-freeze instructions. The CEO of Replit publicly apologised: the incident was “unacceptable and should never be possible.”
So yes, if you’re treating vibe-coded output as fully autonomous, production-grade code, you’re playing with fire.
 
Code quality, readability & maintenance suffer. When AI generates large chunks of code from your prompts, you may not fully understand what it did, or why. According to one large-scale study, AI-generated snippets had a high proportion of security weaknesses (e.g., 29.5% of Python snippets had issues) when using tools like GitHub Copilot.
Many tools skip standard security practices, compliance checks (e.g., GDPR) or proper code review.
Examples of issues:
  • Old/outdated libraries used via AI-generated code
  • Missing parameter validation / injection risk
  • Policies not enforced because the AI “just wrote something that worked”
If you’re working in a regulated environment (education tech, healthcare, finance) this is a red flag.
 
With vibe coding, since you didn’t author every line (the AI did), when something breaks you might struggle to trace it. The AI’s reasoning is hidden in prompts + model behavior, not documented architecture.
 
If you have real engineering knowledge, you’ll recognize the mismatch:
  • Vibe-coding skips much of the planning, architecture and risk analysis
  • It puts trust in an AI agent that may not understand your domain and may not respect your constraints
  • It delivers “working code” quickly, but that code may be wrong, insecure, or brittle
  • In production-grade systems, you still need design, review, governance, testing, monitoring
So the key takeaway: vibe-coding is cool and useful for prototyping and rapidly iterating on ideas (yes, you can build a side-project over pizza). But it is not a replacement for proper software engineering when the stakes are high.
 

From vibe-coding to AI Native Engineering

Vibe coding is not well suited for actual production-grade code, right? What if, instead of abandoning AI, we step back, remove the hype, and think about how to use AI as a real engineering tool across the software development lifecycle (SDLC)? That is the move from vibe-coding to AI Native Software Engineering.
 
When we treat AI as a partner rather than an “all-knowing magic box”, interesting opportunities (and challenges) open up:
  • Better code quality, because AI can help with repetitive, error-prone work and free humans for design, architecture, edge cases. For example, AI can generate test cases or suggest documentation.
  • Faster throughput on certain tasks: studies show AI tools help engineers save hours per week and increase productivity when used well.
  • More focus on value: Engineers spend less time on boilerplate and more on the parts that really matter (business logic, reliability, maintainability).
  • Better collaboration: Instead of “AI did this, hope it works” you get “AI helped me do this, I review it, we build together”.
 
Ok, but how can I move from vibe coding to AI native engineering? The risk is to just say “Alright, I’ll go back to writing code manually”. There has to be a better way! A middle ground where both the human and the AI collaborate to build high-quality software. Here are some of the key principles.
 
First of all: context-rich input instead of a simple prompt
In vibe-coding you might just say: “Build a user-login page”. But in proper AI native engineering you provide the system with context: existing architecture diagrams, coding standards, the dependency graph, module boundaries, style guidelines. This is also called a memory bank in some tools.
Without that context the AI still generates code, but you’re left deciphering it, integrating it, and maybe discovering it violates your conventions. The research around AI native engineering emphasizes that “just prompt” is not enough. When we only give a prompt, AI agents will try to perform dynamic context discovery: they read your files, search your codebase and try to understand how you usually write code.
 
If you want to test the power of proper context, provide the AI with your API spec, target frontend/backend tech stack and SLA/throughput targets, then ask “Create the endpoints for user management” rather than “make a login page”. If you want to learn more about context, you can read the Context pillar.
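
To make that difference concrete, here is a minimal, purely illustrative sketch of a context-rich request (the file names, stack and targets below are invented for the example):
```
## Context
- Architecture: docs/architecture.md (services behind an API gateway)
- Coding standards: docs/style-guide.md (TypeScript, ESLint config in the repo)
- Existing module: users-service (REST, PostgreSQL), auth delegated to an identity provider
- Non-functional targets: p95 latency < 200 ms, GDPR-compliant logging

## Request
Create the endpoints for user management (create, update, deactivate) in users-service,
following the existing controller/service/repository structure and our validation rules.
```
Compare this with “make a login page”: the model no longer has to guess your stack, your conventions or your constraints.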
 
 
Another component is human-in-the-loop (HITL): no full autonomy
One of the biggest issues of vibe-coding is letting the system run without proper human oversight. In production settings that is risky. In contrast, AI native engineering uses AI as a collaborator, and humans remain accountable. Architecture review, security review, integration and deployment still involve people. This layered approach reduces risk.
Think of AI as the friendly intern who can whip up drafts, but you’re still the senior engineer who says “Yep, sign it off” (or “Nope, go back and fix this”).
If you want to learn more about this topic, you can read the HITL Pillar.
 
 
Do you remember divide & conquer? It helps you break down the system so you can use AI where it helps most!
Instead of asking for “Build the whole system” (vibe-coding style), you break the project into modules or phases (ever heard of epics/stories?), assign AI-assisted tasks for particular scopes, then integrate.
This “divide & conquer” makes reviews manageable and reduces risk of chaotic AI output. Together with HITL it’s a powerful tool to review AI-generated code.
It also allows you to pick “safe bets” for AI use first (low-risk modules) and gradually expand.
 
Once you are using AI as a collaborator, the next step is: how do you define good specs that the system can work against? How do you formalize your context and integrate AI into your SDLC in a controlled manner? That’s where spec-driven development comes in.

Spec-driven Development

 
At its core, Spec-Driven Development (SDD) is about flipping the old “code first, document later” workflow by letting specifications become the driving artifact for AI Native Engineering.
The key idea is simple: we treat the specification (what we want + why + constraints) as the source of truth. We give the AI that spec + context, then let it generate the code (and maybe tests, tasks, etc). Humans validate, evolve the spec, steer the AI. We do not hand over fully autonomous control.
SDD tries to overcome vibe-coding’s limitations by leveraging all the concepts we mentioned earlier.
 
Do you like the idea? Unfortunately, as explicitly described by ThoughtWorks researcher Birgitta Böckeler: “Like with many emerging terms, the definition of ‘spec-driven development’ (SDD) is still in flux.” Put another way: we’re still figuring out how exactly to do SDD. The tools are appearing and the vocabulary is emerging, but it is not yet a mature methodology.
 
Because the methodology is still new, each tool developed so far has its own flavor and practices. There is no “one size fits all” version of SDD yet: your organization will need to tailor it.
 
This approach is similar to what FAANG engineers are already doing in industry, as reported by a popular thread on Reddit.
 

Tools and frameworks

 
Since SDD is emerging, a number of tools and frameworks are being built to support it. These tools reflect different ways of implementing the intuition above (spec becomes truth → AI generates → human reviews). In this section we will look at how each of them implements its own flavor of SDD.

Kiro

Kiro is an “Agentic AI” IDE from AWS. Instead of writing a prompt, you write a goal, and Kiro helps you plan, design and build across multiple files and tasks.
So if standard AI code tools are like “here’s a code snippet”, Kiro is more like “let’s map out what we’re building and how we’ll build it, then generate a list of tasks and write the code for each one.”
 
Many devs using AI assistants complain: “it generated code, but I don’t know how it made its decisions, it doesn’t fit my architecture, and I lose track of changes.” Kiro addresses those UX issues by:
  • Planning first: The UI gives you markdown “Specs” (requirements.md, design.md, tasks.md) to review and edit before code is generated. This gives clarity and reduces surprises.
  • Structured workflow panels: The interface gives you side-panels for specs, tasks, hooks, chats with agents. The familiarity helps reduce friction.
  • Diff and review mentality: Instead of blind generation, you can see the changes (diffs) Kiro wants to make, approve or reject them. That gives control and reduces “AI surprise”.
  • Persistent context (Steering + Hooks): You define project-wide rules (steering files) so the AI aligns to your architecture, naming conventions, test strategy. Agent hooks automate repetitive tasks like updating tests, docs, etc when certain files change.
 
Here’s how you might use Kiro in a real project: from idea to code rollout.
  1. Install & setup
    1. Visit the Kiro website and download the version for your OS (macOS, Linux, Windows).
    2. Install it and sign in.
    3. (Optional) Import your existing VS Code settings if you use VS Code, so the UI feels familiar.
  2. Define your feature/goal (prompt)
    1. In the chat or command panel, you describe what you want to build. Example: “I want to add authentication + password reset to our web app.”
    2. Kiro takes that intent and generates a requirements.md with user stories and acceptance criteria (see the illustrative sketch after this list).
    3. You review/edit the requirements if needed.
  3. Design phase
    1. Once requirements are approved, Kiro analyses your codebase (or a scaffold) and proposes a design.md: architecture, data flows, interfaces, tech stack decisions.
    2. You review the design and can make edits or refine details (e.g., “use Postgres vs MySQL”, “use React-TS for the frontend”).
  4. Task breakdown
    1. Kiro breaks the design into tasks.md: discrete, actionable steps (create user model, implement login API, write unit tests, update docs, etc). Each task may link to which requirement(s) or design parts it covers. You can then pick a task to execute or run them in sequence.
  5. Execution & review
    1. Once you click on “Start task” in the task list, Kiro will apply changes (create files, modify files) to your codebase. It supports two modes: supervised (you review each diff) or autopilot (you allow it to proceed but still review final results).
    2. You inspect diffs, test results, evaluate if code meets the spec/requirements.
    3. You iterate: if something doesn’t align, you can adjust the spec, design or task list and re-run. Alternatively, you can chat with the panel on the right to fix the code.
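
As a purely illustrative sketch (the exact structure Kiro generates may differ), a requirements.md entry for the authentication feature above could look roughly like this:
```
# requirements.md (excerpt, illustrative)

## User story: password reset
As a registered user, I want to reset my password via email
so that I can regain access to my account.

Acceptance criteria:
- A reset link is sent to the registered email address.
- The link expires after a limited time.
- The user can set a new password that satisfies the password policy.
```
Reviewing and editing this file before any code exists is exactly where you catch misunderstandings cheaply.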
 
Hooks & automation
While development is underway (or for future features), you configure agent hooks: triggers that run automatically when certain events happen (file save, new file creation, commit, etc). Example: “When a new React component is added, auto-generate unit test skeleton and update docs.”
These hooks help keep the workflow consistent and reduce boilerplate manual work.
 
Steering and project context
You create “steering” markdown files that hold your project’s conventions, architecture decisions, style guides. Kiro refers to them to guide its output (so naming, patterns, testing style follow your rules). Example steering file: steering/tech_stack.md, steering/tests_convention.md. This means over time Kiro “learns” your project style and fits new code to that.
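
As an example, a minimal steering file (the content below is invented; Kiro just needs the rules written out in markdown) could look like:
```
# steering/tech_stack.md (illustrative)

- Backend: Python 3.12 + FastAPI; request/response bodies are typed with Pydantic models.
- Database access goes through the repository layer, never directly from route handlers.
- Every new endpoint ships with pytest unit tests and an updated OpenAPI description.
```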
 
One of Kiro’s big strengths is its UI/UX and the way it implements spec-driven development. Because Kiro offers a lot of guidance during the workflow, it is also very easy to use and has a gentle learning curve.

Spec Kit

The Spec Kit toolkit is a flexible, open-source command-based framework designed to bring SDD into your AI-assisted workflow. Think of it as a lightweight shell around your favorite AI coding assistant: you install a small CLI, use slash-commands in the IDE, and the toolkit helps you scaffold specs → plans → tasks, rather than diving straight into “tell the AI to write code”.
Spec Kit is tool-agnostic: it works with multiple AI agents and doesn’t lock you into a vendor or proprietary platform.
 
Here’s how it works, in plain developer terms (a short illustrative session follows the list):
  1. Install & bootstrap
      • Install the specify-cli (a small command-tool) into your existing environment.
      • Run something like specify init <PROJECT_NAME> (or via slash-commands inside your AI-assistant) and pick your coding-agent of choice (e.g., GitHub Copilot, Claude Code, etc).
      • The tool scaffolds a directory structure (e.g., a .specify or specs/ folder, plus prompt templates).
  2. Define your “constitution” (optional but recommended)
      • Use a command like /speckit.constitution or similar to document your non-negotiable rules (project conventions, styles, testing mandates).
      • This becomes part of the AI’s context so it doesn’t “go rogue” and use random libraries or ignore your style.
  3. Write the spec
      • Use /speckit.specify (or similar) to declare what you want and why. Not the nitty-gritty of how, but the feature or requirement (e.g., “Build user-photo album with date grouping and drag-drop”).
      • This spec becomes the artefact the AI will use as the input starting-point.
  4. Generate the implementation plan
      • Use /speckit.plan to convert the spec into the “how”: tech stack, architecture decisions, module breakdown, dependencies.
      • You can review/edit this plan.
  5. Break down into tasks
      • /speckit.tasks creates granular actionable items (user stories, tasks/sub-tasks, test cases, docs updates) based on the plan.
      • This lets your AI (and you) pull one task at a time rather than the AI doing “everything” in one shot.
  6. Execute implementation via your AI assistant
      • Use /speckit.implement (or equivalent) to instruct the AI to draft code/tests/docs for those tasks, within the context of spec + plan + tasks.
      • You still review, test, merge, and integrate like normal.
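
Putting it together, a session could look roughly like this (the project name and feature wording are invented, and the exact commands may differ between Spec Kit versions and agents):
```
# one-time setup in your terminal
specify init photo-album          # scaffolds the specs/ structure and prompt templates

# then, inside your AI coding assistant
/speckit.constitution  Tests are mandatory, no new runtime dependencies without approval
/speckit.specify       Build a user-photo album with date grouping and drag-and-drop
/speckit.plan          Reuse our existing React + TypeScript frontend and current REST API
/speckit.tasks
/speckit.implement
```
Each command produces an artifact you can read, edit and commit before moving to the next step.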
 

BMAD Method

The BMAD Method is arguably the most powerful current methodology for SDD. It stands out because it offers a full, end-to-end workflow for engineering with AI assistance, and it gives you extensibility: you can customize agents and workflows, and tailor the system to your own domain or organization.
In short: BMAD treats the specification and planning as first-class artifacts, defines distinct agents (such as Analyst, Product Manager, Architect, Developer, Scrum-Master) each with a clear function in the workflow, and guides code generation, integration, testing, and review in a structured way.
The methodology guides you from analysis to planning, solutioning (architecture/design) to implementation, so you’re not skipping design and context.
 
One cool aspect is that it is fully open source and can be installed and used in your favorite IDE or agentic environment. No vendor lock-in, and no new tool to install that breaks compatibility with your current workflow.
 
Here’s a step-by-step view of how BMAD typically plays out (an illustrative sketch of the resulting artifacts follows the list). Think: “you’re still in charge, AI helps; you still review; you enforce quality”.
  1. Analysis Phase
      • The “Analyst” (AI agent) helps gather and clarify high-level intent: business goals, market/competitive context, user problem.
      • You work with the agent to define the why and what at a broad level (not yet coding).
      • Output: a brief but clear business case / feature description / scope document.
      • Benefit: reduces ambiguity early, avoids “just ask AI to write code” without framing.
  2. Planning Phase
      • The “Product Manager” agent converts the business case into a complete PRD (Product Requirements Document).
      • The “Architect” agent takes that and produces system-level design: high-level architecture diagrams, module boundaries, data flows, interface definitions.
      • You review and refine: choose tech stack, define constraints (libraries to use/avoid), coding standards, deployment model.
  3. Solutioning/Task Breakdown
      • The “Scrum Master” agent breaks down the design into granular work items: epics, tasks, sub-tasks, dependencies, test cases, docs to update.
      • Each task comes with the background context and relevant spec pointers (so the AI working on them knows “why”).
  4. Implementation Phase
      • The “Developer” agent (or agent + you) implements the tasks: generates code, tests, docs, possibly CI/CD changes.
      • AI makes the first draft; you review diffs, test results, adherence to spec and architecture.
      • If something does not align, you loop back: adjust spec or task and regenerate.
      • At each merge/release, you still apply standard engineering practices (code review, security scan, performance review).
      • Output: Working code, tests and docs, all traceable to spec.
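
To give a feel for what this produces, the artifacts of one cycle might end up laid out like this (an illustrative layout only; actual file names and locations depend on the BMAD version and your configuration):
```
docs/
  project-brief.md        # Analysis: business case and scope
  prd.md                  # Planning: product requirements
  architecture.md         # Planning: system design, module boundaries
  stories/
    1.1-user-login.md     # Solutioning: one story with context and spec pointers
    1.2-password-reset.md
```
Because every story links back to the PRD and architecture, the traceability from code to spec mentioned above comes almost for free.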
 
BMAD is incredibly powerful, but it is not a magic shield that prevents all errors.
First limitation: there is no strict enforcement layer for the agents yet.
BMAD defines clear roles (Analyst, Architect, Developer, QA), but the system will not force an AI agent to behave exactly within that role. If you or the model drift outside the intended workflow (say the “Scrum Master” starts writing code), BMAD won’t automatically stop you.
This means that you can still accidentally misuse an agent or skip essential steps if you are not careful.
 
Second limitation: BMAD has a learning curve.
For teams new to spec-first thinking, the mental shift can feel uncomfortable at first. You must learn how to:
  • write actionable specs rather than jumping straight into coding
  • choose the right Agent for the right step
  • maintain clean context for AI
  • review output with stricter discipline
After a few cycles, teams get used to the rhythm and the structure becomes natural. Once that happens, BMAD’s benefits become much more obvious and consistent.
 

Other players

Please note that these tools evolve quickly and new approaches are emerging almost weekly, so this is not meant to be a comprehensive list, but rather a set of pointers to start your own investigation.
 
Tessl describes itself as an “AI-native development” platform. Essentially, the idea is to shift from writing a lot of code manually to defining specifications (what you want) and letting AI (or a framework) generate and maintain the code under clear guardrails.
They offer two main pieces:
  • a Spec Registry, where you can find many pre-built specs (10 000+ mentioned) for common libraries/patterns.
  • a Framework / CLI / toolchain that integrates specs into your codebase, lets you generate code, test it, maintain it.
Here’s a simplified flow:
  1. You write a spec file describing a component: what it does, its public API, maybe constraints or tests. In Tessl this might use @generate, @describe, @test annotations (see the illustrative sketch after this list).
  2. The framework uses that spec to generate code (or link with existing code) and produce tests.
  3. The code is part of your project, and the spec remains a “source of truth” so future modifications reference the spec first rather than free-hand code.
  4. If you upgrade a library or make a change, the spec/registry help ensure agents don’t hallucinate APIs or introduce unintended side-effects (one of the problems Tessl explicitly cites).
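
As a rough, invented illustration of the idea (this is not Tessl’s actual syntax; check their documentation for the real format), an annotation-style spec for a small component could read something like:
```
# Spec: slugify (illustrative mock-up only)

@describe Converts a title string into a URL-safe slug (lowercase, hyphen-separated).

@generate src/slugify.ts

@test "My First Post!" -> "my-first-post"
@test ""               -> ""
```
The point is that the behaviour, the target location and the tests live in the spec, so regenerating or upgrading the code always starts from the same source of truth.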
 
Another tool in the space is OpenSpec. It is an open-source CLI tool and workflow framework that supports “spec-driven development for AI coding assistants”. In plain terms: before you ask the AI to code, you agree with the AI (and your team) on what will be built (the spec), then you execute, then you archive the spec.
It supports many AI coding tools (Claude Code, Cursor, CodeBuddy, etc.) via slash commands or CLI commands, so you don’t need to commit to a specific vendor (similar to Spec Kit or BMAD).
 

What should I use?

Choosing between Kiro, Spec-Kit and BMAD really comes down to how much structure you want, how much change your team is willing to adopt and how deeply you want AI woven into your engineering workflow.
 
Kiro is a great choice if you want a guided, visual experience embedded directly inside an IDE. The UI helps you plan, design and execute features with AI side-by-side, almost like having a built-in project navigator. This is ideal if you like strong guardrails and a clear workflow presented in a friendly interface. The tradeoff is that you’ll likely need to work inside a new IDE environment, which means a small disruption to your existing habits.
 
Spec-Kit is perfect if you want something simple to install, simple to use and easy to layer on top of your current AI agent. It uses a set of commands to generate specs, plans and tasks without forcing you into a new tool or workflow. This keeps vendor lock-in low and the learning curve gentle. It’s a great “lightweight SDD starter kit” that feels natural for teams who want to adopt spec-driven thinking without changing how they code day to day.
 
BMAD is the right choice if you want to explore the full power of a complete, customizable SDD workflow. It manages the entire lifecycle, assigns distinct AI roles and lets you create tailored agents or processes that fit your team’s domain. It offers the highest flexibility and depth, but also requires the most discipline and onboarding. You trade simplicity for control and extensibility.
 
In the end the tradeoffs typically revolve around UI/UX, vendor lock-in, learning curve and customization. If you want a smooth visual experience, pick Kiro. If you want simplicity and compatibility with your current AI assistant, pick Spec-Kit. If you want full control, full workflow management and room to experiment with custom agents, BMAD is the strongest option.

Current Limitations of Spec-Driven Development (SDD)

 
Even though SDD holds a lot of promise (i.e., using formal specifications + AI + human-in-the-loop), it is still very much an emerging methodology. As such, there are several practical limitations teams and organizations are facing right now. I deeply believe that these limitations are just temporary and will be fixed in a relatively short amount of time, whether through technical solutions and tooling or through new team practices and improved workflows.

Mismatch between spec size and task complexity

One major limitation with spec-driven development is that the size and depth of a specification and the other generated artifacts don’t yet scale smoothly with the size and complexity of the task.
 
When the work involves a large module, many dependencies or complex integrations, writing a full spec + plan + task breakdown makes sense and can bring real value.
But for smaller features or quick changes, the overhead of creating a full specification workflow becomes burdensome, often outweighing the benefits.
In effect there is no streamlined “lightweight spec” path yet: teams either skip the spec approach for minor work (losing consistency) or apply it and spend more time upfront than they gain downstream.
This imbalance means that spec-driven development currently works best for mid-to-large efforts, but struggles to fit comfortably into quick, small scoped tasks.
 
Tools like BMAD have started to implement features such as “Quick Flow”, which is tailored for bug fixes or small features.
Other tools are trying to smooth out this imbalance. Editors like Cursor and GitHub Copilot have introduced a “Plan” mode, which gives the AI just enough space to think before coding without forcing the developer through a full, heavyweight specification workflow. Instead of a large, formal spec, the model produces a brief one-page plan: a bit of research, a short outline of steps, and a clear explanation of what will change.
This lighter structure helps keep small tasks consistent and intentional, without slowing teams down with unnecessary ceremony. It’s not a full solution yet, but it’s a promising middle ground that makes spec-driven thinking feel practical even for quick fixes and small features.
 

Team settings and collaborative workflows are under-defined

Another limitation: most SDD tools and workflows are oriented toward individual developers or small prototyping contexts, rather than full team, multi-role, enterprise workflows. For example, many toolkits assume a single developer writes the spec, then the AI generates code, then the same developer reviews. But in real development teams you have product owners, business analysts, architects, QA, operations, security.
What this means practically:
  • Who owns the spec? How do roles align (product, architecture, dev, QA)?
  • How do multiple team members contribute/edit the spec?
  • How is versioning, branching, collaboration handled in the spec layer?
  • How does the workflow integrate with existing team practices (sprints, agile ceremonies, code reviews, CI/CD)?
Because these aspects are not yet mature, teams risk creating process friction when adopting SDD. While some tools are already making progress, there is not yet a standard for working in multi-repo contexts.
 

Legacy systems, brownfield code and integration challenges

SDD works best when you are building something new (greenfield). But most organizations maintain large legacy systems (brownfield). Some SDD tools currently struggle with:
  • Understanding existing codebase context and dependencies.
  • Generating specs and code that integrate cleanly with existing modules, rather than assuming a fresh start.
  • Aligning generated code with existing architecture, patterns, conventions and non-functional requirements.
Using SDD for legacy systems is possible, but it often involves additional overhead (reverse-engineering context, refactoring before writing specs). This takes time and needs review, and the AI might not be able to reverse-engineer all the requirements and business needs.
 

Tooling maturity, consistency & reproducibility

When applying spec-driven development (SDD) in practice, one of the biggest hurdles is the maturity and predictability of the tools and AI agents involved. First, teams run into reproducibility issues: unlike a traditional compiler where the same input and settings reliably produce the same output, AI-based generation does not guarantee this. As one practitioner puts it, “output varies across tools and models. The same spec will produce different code from different agents.”
 
Another critical limitation is context and scope. Large or complex specifications, big codebases, numerous files and dependencies can exceed an agent’s effective context window or lead to “context blindness” where the AI misses earlier constraints or architectural rules. The result: generated code works but does not meet the underlying intent or integration requirements.
The tooling ecosystem itself is still evolving. Many of the frameworks and agents branded for SDD are experimental, with frequent breaking changes, limited support, and few established best practices. Teams adopting SDD thus face risks of tool fatigue, migration pain, inconsistent workflows and a lack of documentation or community guidance.
 
Finally, the measurement and feedback layer is underdeveloped. How do you quantify the benefit of SDD plus AI tooling? What metrics show defect reduction, velocity improvement, or spec-to-code alignment? These questions remain largely unanswered in published practice.
Without robust feedback loops, teams may adopt SDD with hype rather than clarity.
 

Skills, culture and change management

Finally, adopting SDD requires changes in how teams work: writing specs becomes a more central task, humans shift roles, AI becomes part of the flow. The skills and culture of teams may need to adapt. Some limitations here:
  • Writing good, actionable specs is hard. Not all product owners, architects or developers currently have that skill set.
  • Teams may resist the perceived overhead of spec writing.
  • New roles or responsibilities (e.g., “spec owner”, “AI-agent reviewer”) might not yet exist in many orgs.
  • There is risk of over-reliance on AI output or under-review.
  • There is a learning curve to adopt these tools and learn how to use them properly, so teams or specific team members might resist the change.

Summary

Spec-Driven Development is a compelling evolution of how we work with AI in software engineering: we move from ad-hoc “vibe-coding” toward a structured workflow (specify → plan → tasks → implement), with AI as a collaborator and humans as reviewers. But it is not yet fully mature.
Even though there are limitations, the future is bright: we can expect lightweight modes, better collaboration tooling, living specs/contracts, brownfield integration, agile-friendly workflows, mature metrics, and teams fully trained in the new way of working.