Ignoring Acceptance Criteria and Definition of Done


Stories are meant to be conversation starters — not final contracts. But too many teams treat them as if they were. I’ve seen sprint after sprint end with stories marked “done,” only to fail in QA because the acceptance conditions were never defined. That’s not a bug. It’s a systemic flaw.

When acceptance criteria go missing, you’re not just delaying delivery — you’re building on sand. The team guesses what “done” means. The PO assumes the feature works. QA tests what they think is correct. The result? Re-work, scope creep, and a backlog that accumulates invisible debt.

This chapter cuts through the noise. You’ll learn how to define actionable, testable criteria that prevent misunderstandings. I’ll share real examples from my years working with teams across fintech, healthcare, and SaaS — from startups to enterprise. The goal isn’t perfection. It’s clarity.

Why Missing Acceptance Criteria Breaks Agile Delivery

Acceptance criteria are not optional extras. They are the bridge between intent and implementation. Without them, a story becomes a black box.

Imagine a story: “As a user, I want to reset my password so I can regain access.” Without criteria, no one knows what “reset” means. Does it mean:

  • Send an email with a link?
  • Allow temporary access via security code?
  • Require multi-factor authentication?

That ambiguity leads to assumptions. And assumptions drive rework. I’ve seen teams spend days building a feature that didn’t even align with the user’s actual need — all because the acceptance rules were never defined.

Definition-of-done issues arise when the team’s “done” means something different from the PO’s. One team marked stories “done” after code review, only to discover the UI had never been tested. Another assumed “done” meant “deployed” — even though no regression tests had passed.

The Hidden Cost of Unwritten Rules

Here’s what happens when you skip acceptance criteria:

  • Re-work at scale: Features built without testable conditions often fail in staging or production.
  • Scope creep: The PO keeps adding “minor changes” because the original story wasn’t clear.
  • Team frustration: Developers feel stuck, not because they can’t code, but because they can’t tell when they’re done.
  • PO distrust: If delivery is inconsistent, the PO loses confidence in the team’s ability to deliver value.

It’s not just about testing. It’s about shared understanding.

How to Build Testable Story Definitions

A testable story definition isn’t about writing more. It’s about writing better.

Start with a simple rule: every story must have acceptance criteria that answer three questions:

  1. What does success look like?
  2. What are the edge cases?
  3. How will we verify it?

Let’s walk through a real example.

Before: A Vague Story

As a customer, I want to see my order history so I can track past purchases.

Too broad. No clear success conditions. No boundaries.

After: A Testable Story

As a customer, I want to see my order history so I can track past purchases.

Acceptance Criteria:

  • When I visit the order history page, I see my 10 most recent orders.
  • Each order shows the date, total amount, and status (e.g., Delivered, Processing).
  • If I have no orders, the page displays “You have no past orders.”
  • Orders older than 6 months are not shown by default.
  • Clicking an order redirects me to a details page with line items and shipping address.

This version is now testable. A QA engineer can write a scenario. A developer can implement against clear conditions.
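To show what “testable” means in practice, here is a minimal sketch of those criteria as executable checks. The `Order` model and the `visible_orders` and `empty_state_message` helpers are hypothetical names invented for illustration, not a real API:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Order:
    placed_on: date
    total: float
    status: str  # e.g. "Delivered", "Processing"


def visible_orders(orders, today, limit=10, max_age_days=183):
    """Default view: newest first, capped at `limit`,
    excluding orders older than roughly six months."""
    recent = [o for o in orders if (today - o.placed_on).days <= max_age_days]
    recent.sort(key=lambda o: o.placed_on, reverse=True)
    return recent[:limit]


def empty_state_message(orders):
    """Criterion: with no orders, show the empty-state copy."""
    return "You have no past orders." if not orders else None
```

Each criterion above maps directly to an assertion a QA engineer could write: the cap of 10, the six-month cutoff, and the empty-state message all become verifiable conditions rather than assumptions.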

Designing Actionable Acceptance Criteria

Not all criteria are created equal. Some are vague. Some are redundant. The best ones are specific, measurable, and aligned with user value.

Use this checklist when writing acceptance criteria:

  • Start with the user: Every criterion should reflect a user-facing behavior.
  • Use Given-When-Then: This format forces clarity. Given I’m on the order page, when I click “Show All”, then I see all 50 orders.
  • Include edge cases: What if there are zero orders? What if the API fails?
  • Make it executable: Can a test be written based on this? If not, rewrite it.
  • Keep it concise: More than 5 criteria often mean the story is too broad.

Ask yourself: “Could someone not on the team understand this and verify it?” If not, it’s not testable.
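The Given-When-Then criterion from the checklist can be sketched directly as a test, with each clause becoming a phase of the test body. `OrderPage` is a hypothetical page object standing in for the real UI, not an actual framework class:

```python
# Hypothetical page object; names are illustrative, not a real API.
class OrderPage:
    def __init__(self, orders, page_size=10):
        self._orders = list(orders)
        self._visible = self._orders[:page_size]  # default paginated view

    def click_show_all(self):
        self._visible = list(self._orders)

    @property
    def visible_orders(self):
        return self._visible


def test_show_all_reveals_every_order():
    # Given I'm on the order page with 50 orders
    page = OrderPage([f"order-{i}" for i in range(50)])
    # When I click "Show All"
    page.click_show_all()
    # Then I see all 50 orders
    assert len(page.visible_orders) == 50
```

Notice that the Given-When-Then comments read as the criterion itself; if a criterion can’t be mapped onto those three phases, it probably isn’t testable yet.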

Common Failure Patterns

Here are the most frequent mistakes I’ve seen:

  • Problem: “The system should work.” → Fix: replace with “When I submit the form, the confirmation message appears within 2 seconds.”
  • Problem: criteria defined only in the PO’s head. → Fix: write them down. Even if incomplete, get them on the board.
  • Problem: acceptance criteria written as tasks. → Fix: reframe “Implement authentication” as “When I’m logged in, I see the dashboard.”
  • Problem: only positive scenarios. → Fix: add “If the API is down, show a user-friendly error message.”

Linking Acceptance Criteria to Definition of Done

Definition of done is not a checklist — it’s a team agreement. But it only works when tied to real acceptance rules.

Here’s a practical pattern:

  1. Define acceptance criteria during refinement.
  2. Verify each criterion during testing.
  3. Only mark story as “done” when all criteria pass.
  4. Update the Definition of Done to reflect this.
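The gate in step 3 can be made mechanical. Here is a minimal sketch, using hypothetical `Story` and `Criterion` classes invented for illustration, in which a story simply cannot report itself as done until every written criterion has passed:

```python
from dataclasses import dataclass, field


@dataclass
class Criterion:
    text: str
    passed: bool = False


@dataclass
class Story:
    title: str
    criteria: list = field(default_factory=list)

    def is_done(self) -> bool:
        # A story with no written criteria can never be "done" —
        # this surfaces the missing refinement step instead of hiding it.
        return bool(self.criteria) and all(c.passed for c in self.criteria)
```

The deliberate design choice here is that an empty criteria list returns False: forgetting to write criteria blocks the story rather than silently waving it through.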

Example:

My team’s DoD now includes:

  • All acceptance criteria have been validated.
  • Automated tests pass for all positive and negative scenarios.
  • No known bugs in the acceptance test suite.

This prevents the “I thought it was working” trap.

When to Stop: The Minimum Viable Acceptance Criteria

Just because a story is testable doesn’t mean you need 20 criteria. You want the minimum that ensures value delivery.

Ask: “What is the simplest thing that could possibly work?”

For a login screen, the minimal testable definition might be:

  • When I enter a valid email and password, I’m redirected to the dashboard.
  • When I enter invalid credentials, an error message appears.
  • When I click “Forgot password?”, I’m taken to the reset page.

That’s enough for a working feature. Add more as needed.
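Those three minimal criteria translate into three equally minimal checks. The `LoginScreen` model below is hypothetical; the routes and error copy are illustrative assumptions, not a real implementation:

```python
# Hypothetical login model; routes and messages are illustrative.
class LoginScreen:
    def __init__(self, accounts):
        self._accounts = accounts  # email -> password

    def submit(self, email, password):
        if self._accounts.get(email) == password:
            return {"redirect": "/dashboard"}
        return {"error": "Invalid email or password."}

    def forgot_password(self):
        return {"redirect": "/reset-password"}
```

One check per criterion, nothing more: that is the “minimum viable” shape in practice. Add a lockout or rate-limit criterion only when the product actually needs it.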

Frequently Asked Questions

How do I know when I’ve written enough acceptance criteria?

When the story can be tested by someone not on the team who understands the user’s goal. If you’re still asking “what does this mean?”, keep refining.

Can acceptance criteria be written by the developer?

Not alone. They can help draft them, but acceptance criteria must emerge from collaboration between PO, dev, and QA. The goal is shared understanding.

What if the PO doesn’t know what the criteria should be?

That’s a red flag. The PO should work with users or stakeholders to define success. Use workshops or user interviews to clarify expectations. Acceptance criteria are not technical specs — they’re value specs.

Do acceptance criteria replace technical acceptance tests?

No. Acceptance criteria define *what* the system should do. Technical tests (unit, integration) verify *how* it’s built. Both are needed. But acceptance criteria define the boundary of value.

How do I handle changing acceptance criteria mid-sprint?

It’s rare, but when it happens, raise it with the PO and the team right away rather than waiting for a ceremony. The story should be re-evaluated. If the changes exceed scope, split or defer. The team must agree.

Is there a tool to help manage acceptance criteria?

Yes. Cucumber, SpecFlow, or even simple tables in Jira or Azure DevOps work. But the tool doesn’t matter — consistency and clarity do. Focus on writing criteria that anyone can read and verify.
