Test Automation Without Coding: Who Should Write E2E Tests

Robert Dey March 23, 2026

In most teams, the bottleneck in test automation is not that people can’t code. It’s that the people who know what needs testing can’t write the tests, and the people who write the tests don’t fully know what needs testing. Codeless test automation is changing who can participate in quality assurance and, more importantly, who should.

The knowledge transfer problem most teams ignore

Every time a requirement becomes a test, information is lost.

A product owner defines a user flow in a ticket. A developer reads the ticket, builds the feature, and interprets what needs to be verified. A tester then translates that interpretation into a test script. By the time the test runs in CI, it often validates something slightly different from what the PO originally intended.

This information loss compounds over time. Teams accumulate test suites that cover the technical behavior of the code rather than the business behavior of the product. Bugs slip through not because tests are absent, but because the tests answer the wrong question.

The classic response to this problem is more communication: test case reviews, acceptance criteria templates, three amigos meetings, user story workshops. These help, but they address the symptom rather than the root cause. The root cause is that the people who carry domain knowledge and the people who can write automated tests are almost never the same person.

Consider the typical flow in a sprint: A product owner writes a user story with acceptance criteria. The developer builds the feature. Either the developer writes unit tests covering the technical implementation, or the team asks a QA engineer to write E2E tests after the fact. In both cases, the person writing the test is not the person who knows the requirements best.

The PO knows that premium customers should see an automatic 10% discount when their cart exceeds €50, without entering a voucher code. The developer knows how to implement discount logic. But the test that checks the full user experience, including the moment the discount appears in the UI, the cart threshold behavior, the premium customer flag, and the absence of a required voucher field, requires all of that context at once. Rarely does one person have all of it.
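The business rule itself fits in a few lines. A minimal sketch (a hypothetical implementation for illustration, not kiteto's or any real shop's code):

```python
def cart_total(subtotal: float, is_premium: bool) -> float:
    """Hypothetical sketch of the discount rule described above:
    premium customers get an automatic 10% discount once the cart
    exceeds EUR 50 -- no voucher code involved."""
    if is_premium and subtotal > 50:
        return round(subtotal * 0.90, 2)
    return subtotal
```

The arithmetic is trivial; the E2E test is not. It has to verify the discount appears in the UI, at the right threshold, for the right customer flag, with no voucher field blocking the flow, and that is where the combined context becomes necessary.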

Who actually knows what needs to be tested?

Product owners know what the product is supposed to do. Business analysts know the edge cases. Manual testers know the common failure patterns from years of exploratory testing. These are the people who can most accurately define what “correct” looks like for a given feature.

This isn’t a new observation. The idea that those who define requirements should own the tests has a clear practical logic: they hold the domain knowledge that tests need to capture. Behavior-Driven Development (BDD), introduced in the mid-2000s, was explicitly designed to give non-technical stakeholders a role in test definition. The idea was that product owners and business analysts would write test scenarios in plain language using Gherkin syntax, and developers would implement the step definitions behind them.

BDD partly worked. It improved collaboration and made requirements more testable. But it never fully democratized test creation because Gherkin still has a learning curve. It requires structural thinking: scenarios, steps, given-when-then format. Non-technical stakeholders can read Gherkin scenarios, but writing them fluently is another matter. The collaborative intent of BDD still ended up delegated to technical team members in most organizations.

The result is that despite twenty years of effort to make testing more accessible, the fundamental barrier remains: automated tests require either code or a structured syntax that feels like code. For most product owners and business analysts, that barrier is high enough to stop participation entirely.

Why developers are not the right owners for E2E tests

Developers are good at testing what they built. Unit tests and integration tests, where the goal is to verify that a function returns the expected output given a specific input, are well-suited to developers. They understand the internal structure, know the edge cases in their implementation, and can write tests quickly at this level.

E2E tests are different in a fundamental way. An end-to-end test is not checking whether the code works. It’s checking whether the product works for a user. That requires knowledge of the user’s intent, the business rules behind the UI, the sequence of steps a real person would take, and the criteria by which a real person would judge success or failure.

Developers often write E2E tests that keep passing even when the product is broken from a user's perspective, because they test what they know the code does rather than what the user expects the product to do. The discount calculation example from an earlier article illustrates this well: a developer's test checks whether a voucher code passed via the API applies a discount. The actual requirement, that the discount applies automatically to premium customers without any voucher, never gets tested.

This isn’t a failure of skill. It’s a structural misalignment. Developers write good tests for the layer they understand best. Asking them to also carry the domain knowledge of product owners is asking them to be two people at once.

There’s also a maintenance dynamic worth noting. E2E test suites owned by developers tend to drift. As product logic changes, the tests that were once accurate gradually stop reflecting what the product is supposed to do. Developers update tests to match the new code, not necessarily to match the new requirements. This drift is invisible until a bug reaches production and someone looks at the test suite and realizes it never checked for this scenario.

The cost of the knowledge round trip

In teams where domain experts and developers are separate, every requirement-to-test journey involves at least one full translation round trip.

The product owner explains the requirement. The developer interprets it and builds the feature. The QA engineer or developer writes the test, often asking the PO to clarify edge cases they don’t understand. The PO reviews the test description or the test results and identifies gaps. The cycle repeats. This back-and-forth is so embedded in most teams’ workflows that it’s treated as normal, a necessary overhead of the development process.

It is not inevitable. The overhead exists because the person who can most accurately define the test cannot create the test. If the PO could describe the test scenario in natural language and have a working automated test produced from that description, the round trip shortens to a single step.

This is the genuine promise of codeless test automation in 2026, not just making testing faster, but putting test authorship in the hands of the people who understand what needs to be tested.

What “codeless” means now versus five years ago

The codeless testing category has existed for over a decade, but it has changed significantly. Early no-code tools relied on record-and-playback: a tester would perform actions in the browser, and the tool would record those actions as a test script. These tools were easy to start with but brittle in practice. Every UI change, a renamed button, a shifted layout, a changed selector, would break the recorded tests. Maintenance costs were high enough that many teams abandoned these tools after initial enthusiasm.

Visual testing tools improved on this by using image comparison rather than CSS selectors. They were more resilient to layout changes but introduced different problems: false positives from font rendering differences, screenshot comparisons that flagged cosmetic changes as failures.

The category has moved again. The current generation of codeless automation testing platforms uses language models to interpret intent rather than recording actions. Instead of capturing what the user clicks, they understand what the user is trying to accomplish. This produces tests that are structurally different: they describe a scenario at the intent level, which makes them more resilient to UI changes and more readable to non-technical stakeholders.

This shift matters because it changes who can write a test. Recording clicks requires you to have access to the UI and perform the scenario yourself. Describing an intent in plain text requires only domain knowledge. The barrier moves from “can you operate the browser recording tool” to “do you understand the scenario you want to verify.” For product owners and business analysts, the second question is much easier to answer.

The market is taking notice. MarketsAndMarkets projects the codeless testing market will reach $55.2 billion by 2028, up from roughly $15 billion in 2023. That growth rate reflects genuine adoption, not just marketing interest. Teams are finding that the tools have become reliable enough to use in production environments.


Could your team’s domain experts define your most critical test scenarios right now?

kiteto generates complete E2E test cases from plain text descriptions, no coding required. Product owners, business analysts, and manual testers can describe a user flow and get working Playwright code in return.


How plain-text test generation differs from earlier no-code approaches

When someone says “no-code testing,” the mental model is still often the record-and-playback tool from a decade ago. It’s worth being specific about what plain-text test generation is and isn’t.

Traditional record-and-playback tools capture actions: click the button with ID "checkout-btn", fill the input with selector "#email", assert the text equals "Order confirmed". These recordings are tied to the current state of the UI. Change the button ID and the test breaks.
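The brittleness can be caricatured in a few lines of Python. This is a toy illustration, not any real tool's recording format: each step is pinned to a concrete selector, so a single rename breaks replay even though the user flow is unchanged.

```python
# A "recording" is just a list of actions pinned to selectors.
RECORDED_SCRIPT = [
    ("click", "#checkout-btn"),
    ("fill", "#email"),
    ("assert_text", "#confirmation"),
]

def first_broken_step(script, available_selectors):
    """Return the first selector the recording can no longer find,
    or None if every step still resolves."""
    for _action, selector in script:
        if selector not in available_selectors:
            return selector
    return None

old_ui = {"#checkout-btn", "#email", "#confirmation"}
new_ui = {"#checkout-button", "#email", "#confirmation"}  # button renamed
```

Against `old_ui` every step resolves; against `new_ui` the very first step fails, even though the checkout flow itself never changed.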

Plain-text generation works at a different level. The input is a description of a scenario: “A registered user adds two items to their cart and proceeds to checkout. The system should require email address and payment method before allowing order placement. After successful payment, a confirmation page should appear with the order number.” From this description, a language model generates test code that navigates to the product pages, adds items, checks out, fills in the required fields, submits payment, and asserts that a confirmation page with an order number is displayed.

Under the hood, the generated test code contains concrete actions similar to record-and-playback: selector lookups, clicks, form inputs, assertions. The difference is in the source of truth. With record-and-playback, the recorded session is what you maintain. When the UI changes, you re-record. With plain-text generation, the description is what you maintain. When the UI changes, you keep the description and re-generate the test from it. The description is stable; the implementation can be regenerated whenever it needs to be updated.
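The maintenance difference can also be caricatured in code. In this toy sketch, simple substring matching stands in for the language model; the point is only that an intent-level step is re-resolved against whatever the current UI offers, instead of staying pinned to a recorded selector.

```python
def resolve_step(intent, ui_labels):
    """Re-resolve an intent-level step ("the user clicks checkout")
    against the current UI. Toy substring matching stands in for
    what a real model does."""
    for selector, label in ui_labels.items():
        if intent.lower() in label.lower():
            return selector
    return None

# The description never changes; only the regenerated binding does.
old_ui = {"#checkout-btn": "Checkout"}
new_ui = {"#cart-submit": "Checkout now"}  # redesign renamed the button
```

The recorded selector "#checkout-btn" no longer exists in `new_ui`, but the intent "checkout" still resolves after regeneration.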

The practical difference for non-technical users: you don’t need to perform the scenario yourself to create the test. You don’t need to know CSS selectors or XPath expressions. You describe what the user does and what should happen, and the tool handles the implementation.

A concrete scenario: from product owner description to working test

Consider a PO responsible for an e-commerce checkout flow. She knows that the three most critical paths are: guest checkout for first-time users, returning user checkout with saved payment method, and premium customer checkout with automatic discount. These are the scenarios where bugs would have immediate revenue impact.

Traditionally, getting automated tests for all three paths would require: writing detailed acceptance criteria, handing them to a developer or QA engineer, waiting for implementation, reviewing the tests, and iterating on gaps. Two weeks, if the team prioritizes it. Often more.

With a plain-text test generation approach, the PO describes the first scenario: “A new visitor to the store adds a product to the cart, clicks checkout, fills in their email address and shipping details, enters a test credit card, and completes the order. The confirmation page should display the order number and a message thanking the customer by their first name.” She submits this and kiteto runs the test directly.

She can run the test immediately inside kiteto, without setting up a test framework or touching any configuration. The result comes back with a pass or fail, a step-by-step trace, and screenshots at each stage. The test does what it’s supposed to do: verify that the checkout flow works end-to-end, from the perspective of the person who knows what that flow is supposed to accomplish.

This is what makes the approach qualitatively different from earlier tools. The PO's domain knowledge goes directly into the test artifact, with the language model handling the translation into executable code.

Where kiteto stands today and what’s coming next

Individual tests can already be generated and executed directly inside the kiteto interface. No test framework setup is required. Describe the scenario, run it, and see the results with a full step-by-step trace and screenshots.

Test suites are the next step: teams will be able to group individual tests into suites, run the full suite in one go, and schedule automated runs at a defined cadence or trigger them from a CI pipeline. This lets teams build up a library of automated checks organized around the product areas their POs and BAs own.

What this changes for your team

If domain experts can author test scenarios directly, the distribution of responsibilities in QA shifts.

Product owners and business analysts take ownership of business-critical test scenarios. They describe the flows they care about. They’re not maintaining test infrastructure, and they’re not writing Playwright code. They’re defining what needs to be verified, which is what they’re best at.

Manual testers transition from execution to design. Instead of running the same manual checklist every release cycle, they document their test expertise as automated scenarios. Their knowledge of failure patterns, edge cases, and user behavior becomes encoded in tests that run automatically. This is the manual-tester-to-automation transition the industry has talked about for years, finally made accessible to people without coding backgrounds.

Developers focus on the test layers they’re best positioned to own: unit tests and integration tests for the code they write. They review the generated E2E tests when needed and can adjust them, but they’re not the bottleneck for E2E coverage anymore.

QA engineers focus on test architecture, complex scenarios that require custom logic, performance testing, and the maintenance of the test infrastructure. Their skills are applied where they add most value, not in writing test cases for standard business flows that product owners could define more accurately anyway.

This is not a reduction of roles. It’s a reorientation of where different expertise is applied.

Why this matters now

The conditions that make this shift possible are relatively recent. Language models capable of generating reliable code from plain text descriptions have existed at production quality for only about a year. The tools built on top of them for testing are newer still.

The growth of the codeless automation testing market reflects this. The gap between “record-and-playback that breaks on every UI change” and “intent-based test generation from plain text” is significant, and teams that have tried both tend to find the latter usable in ways the former wasn’t.

The broader context of AI test automation in 2026 is that multiple layers of the testing stack are being automated simultaneously: test generation, test maintenance through self-healing, test analysis through AI-powered failure triage. Plain-text E2E test generation is one piece of this, but it’s the piece that most directly addresses the knowledge transfer problem.

The business case for better E2E coverage is well-established: production bugs cost 10-100x more to fix than bugs caught in testing, and most teams have critical paths that aren’t covered by automated tests. The question has usually been: who is going to write those tests, given the time and skill constraints teams operate under? Domain-expert-driven test authorship changes that calculation.

Conclusion

The argument for domain experts writing tests is not new. It’s been the theoretical case behind BDD and specification-by-example approaches for two decades. What’s new is that the tools have caught up with the theory. Writing a test no longer requires learning a programming language, a test framework, or a structured syntax. It requires describing a scenario in the same language you’d use to explain it to a colleague.

This shifts the bottleneck from “who can code” to “who understands the domain.” For product owners, business analysts, and manual testers, the second question is not a barrier. It’s their core competency.

Teams that adopt this approach tend to find three things: their test coverage of business-critical flows improves, the communication overhead around test authorship decreases, and the QA engineers and developers they have can focus on work that requires their specific skills. Not every team will find this transition straightforward, but the direction is clear: the people who define requirements should own the tests that verify them, and in 2026, the tools make that possible.

Frequently Asked Questions

What is codeless test automation and how does it differ from record-and-playback?

Codeless test automation is the ability to create automated tests without writing code, using visual interfaces, recorded actions, or plain text descriptions. Modern intent-based tools like kiteto accept plain text scenario descriptions and generate working test code, rather than recording clicks. This makes the resulting tests more resilient to UI changes and accessible to non-technical team members.

Can product owners really write automated tests without coding experience?

With intent-based codeless test automation, yes. Product owners describe a user scenario in plain text, and the tool generates working test code from that description. No knowledge of CSS selectors, XPath, or test frameworks is required. The PO's input is the scenario description; the tool handles the technical implementation.

Why is QA testing for non-developers becoming more common?

Because non-developers often have better domain knowledge than the developers writing the tests. Product owners and business analysts understand the business rules, edge cases, and acceptance criteria better than anyone. Codeless tools allow this domain expertise to go directly into test scenarios without requiring a technical intermediary.

How does a manual tester transition to test automation without learning to code?

Manual testers can document their existing test knowledge as plain text scenario descriptions, which codeless automation tools convert into automated tests. Their expertise in failure patterns, user flows, and edge cases becomes encoded in tests that run automatically, rather than being executed by hand each release cycle.

What is the codeless testing market size and where is it headed?

MarketsAndMarkets projects the codeless testing market will reach $55.2 billion by 2028. This growth reflects adoption of a newer generation of tools that use AI to interpret test intent rather than recording user actions. The earlier record-and-playback tools had significant maintenance problems; AI-powered intent-based tools are proving more usable in production environments.
