Release notes · 4 updates

    Onboarding, run-trace, and verdict improvements

    4 updates shipped on May 8, 2026.

    1. Update 01 of 04

      Chat-edited tests are tracked in the revisions sheet

      See exactly which conversation produced which version of a test.

      When the chat agent edits a test case, the resulting revision is now tagged with a "Chat" badge in the test's revisions sheet. Click the badge to jump back to the conversation that produced that change. Useful for understanding why a test looks the way it does, and for distinguishing human edits from agent edits at a glance.


    2. Update 02 of 04

      Failed network requests show up in the run trace

      If a request never reached the server, you'll see it now instead of having to guess.

      When a test triggers a fetch or XMLHttpRequest that fails before getting a response – DNS error, TLS error, connection refused, abort – the request now shows up in the Network panel of the run trace as a red ERROR entry with the failure reason. Previously these requests were invisible, which made it hard to tell whether your application was actually trying to talk to the API.
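The distinction matters because a request that fails before any response rejects the fetch promise with a TypeError, while an HTTP error like 500 resolves normally. A minimal sketch of how the two cases surface in browser or Node 18+ code (the function name is illustrative):

```javascript
// Classify a request the way a run trace would: a pre-response failure
// (DNS error, TLS error, connection refused, abort) rejects the promise
// and carries no status code, while an HTTP error status resolves normally.
async function classifyRequest(url) {
  try {
    const response = await fetch(url);
    return response.ok ? "ok" : `http-error ${response.status}`;
  } catch (err) {
    // The request never reached the server; there is no response to inspect.
    return `network-error: ${err.name}`;
  }
}
```

Before this change, only the resolved cases (the first branch) were visible in the Network panel; the catch branch is what now shows up as a red ERROR entry.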


    3. Update 03 of 04

      A smarter onboarding chat

      Onboarding now actively explores your app instead of asking for three tests and stopping.

      The default onboarding chat used to ask you for the first three to five tests and stop there. That was rarely enough to get a useful test suite. The new version actively crawls and explores your application, recovers when it hits a login wall or an IP allowlist, builds context recursively, and asks for the domain feedback that makes future test generation reliable. It generates tests in waves rather than in a fixed batch, and stores everything it learns as project knowledge so the agent doesn't ask you the same questions again later.

      Tests created during onboarding are now activated automatically instead of left as drafts, so your first run after signup is a real one.


    4. Update 04 of 04

      Smarter verdicts on partial test runs

      The Assessment Agent now treats "the agent didn't actually try the thing" differently from "the thing failed".

      A run where the agent only covered part of the test goal used to sometimes get a confident pass when the parts it did cover happened to work. The Assessment Agent now explicitly checks step coverage and detects when a blocker step was skipped rather than completed, and adjusts the verdict and confidence accordingly. Fewer wrong passes on partial runs, fewer wrong fails on tests that legitimately stopped early.
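The core idea can be sketched as a small decision function: separate "attempted and failed" from "never attempted", and refuse to report a confident pass when a blocking step was skipped. This is purely illustrative; the step shape, status values, and verdict names below are assumptions, not the Assessment Agent's actual implementation:

```javascript
// Hypothetical coverage-aware verdict. Each step has a status
// ("passed" | "failed" | "skipped") and an optional `blocker` flag
// marking steps the rest of the test depends on.
function assessRun(steps) {
  const attempted = steps.filter((s) => s.status !== "skipped");
  const failed = attempted.filter((s) => s.status === "failed");
  const skippedBlocker = steps.find((s) => s.status === "skipped" && s.blocker);

  if (failed.length > 0) {
    // Something was actually tried and actually broke.
    return { verdict: "fail", confidence: "high" };
  }
  if (skippedBlocker) {
    // The goal was never fully exercised, so a pass would be misleading.
    return { verdict: "inconclusive", confidence: "low" };
  }
  return {
    verdict: "pass",
    confidence: attempted.length === steps.length ? "high" : "medium",
  };
}
```

The old behavior corresponds to skipping the `skippedBlocker` check entirely: if everything attempted passed, the run passed, regardless of how much of the goal was covered.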


    Ready to end the QA bottleneck?

    See how QA.tech agents test your product in a 30-minute demo – and leave with a plan to reclaim those hours.

    Get a demo