Changelog

New updates and improvements to QA.tech

Preview Test Changes in Edit View

31.03.2025

We've introduced a new feature that lets you preview how your changes would impact the AI agent directly within the edit view. This allows you to quickly retry specific steps without needing to rerun the entire test, saving significant time during debugging and refinement.

How It Works:

  • If the AI agent clicks the wrong button or takes an unintended action, simply update your test instructions.
  • Use the new "Preview" button in the tracer panel to instantly see what the agent would do differently based on your updated instructions.

Important Note:

  • The preview shows the intended action but doesn't actually replay it on the website. This means you'll see precisely what the agent plans to do, but it won't perform the action again live.

Find this feature in the tracer panel within the edit view.


Run with Dependencies in Test Edit View

31.03.2025

We've introduced a new feature that lets you easily rerun a test together with all of its dependencies directly from the test edit view. In situations where running a single test isn't sufficient, such as needing to recreate an item before deleting it, you can now use the "Run w. Dependencies" button. By default, the latest execution of each dependency is used as the starting point.

How It Works:

  • Convenient Execution: Click "Run w. Dependencies" from the test edit view to automatically execute the current test and all the tests it depends on.
  • Ensures Proper Setup: Useful in scenarios where prior tests, like item creation, must be executed first to ensure accurate results for dependent tests, like item deletion.

This enhancement streamlines testing workflows, reducing manual steps and ensuring your tests run in the correct context every time.
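
For a rough mental model of the ordering involved, here is a small sketch of dependency-first execution. This is an illustration only, not QA.tech's implementation; the `TestCase`, `executeTest`, and `runWithDependencies` names are hypothetical.

```typescript
// Illustrative sketch only: run a test's dependencies before the test itself.
interface TestCase {
  name: string;
  dependsOn: TestCase[];
}

async function executeTest(test: TestCase): Promise<void> {
  console.log(`Running "${test.name}"...`);
  // ...actual execution would happen here...
}

// Run every dependency (recursively) before the test that needs it,
// skipping anything already executed in this run.
async function runWithDependencies(test: TestCase, done = new Set<TestCase>()): Promise<void> {
  if (done.has(test)) return;
  for (const dep of test.dependsOn) {
    await runWithDependencies(dep, done);
  }
  await executeTest(test);
  done.add(test);
}

// Example: "Delete item" depends on "Create item".
const createItem: TestCase = { name: "Create item", dependsOn: [] };
const deleteItem: TestCase = { name: "Delete item", dependsOn: [createItem] };
runWithDependencies(deleteItem); // Runs "Create item" first, then "Delete item".
```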


Enhanced Tracer: View Page and Agent Data

31.03.2025

Debugging automated tests often involves understanding exactly what data the AI agent is processing. To simplify this, we've enhanced the tracer to show all the data the agent receives, including the current page and agent context, providing clearer visibility into what's happening during test execution.

How It Works

  • Complete Visibility: Easily view the exact data being provided to the AI agent, including:
    • Page Data: See exactly what the agent sees on the webpage during each test step.
    • Agent Context: Access insights into the agent's current understanding and state.
  • Simplified Debugging: Quickly pinpoint issues and validate data inputs, making it easier to refine test goals and steps.
  • Improved Goal Clarity: Understand exactly what data is available for reference, enabling more precise and effective test scripting.

Getting Started

Access these enhanced details directly from your tracer view during any test run. Simply expand the tracer panel to view the complete data provided to your agent.
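
To make the two categories above a bit more concrete, here is a purely hypothetical sketch of the kind of shape such data could take. The type and field names below are invented for illustration and are not QA.tech's actual schema.

```typescript
// Hypothetical illustration only: not QA.tech's actual tracer schema.
interface PageData {
  url: string;                   // the page the agent is currently looking at
  title: string;                 // document title
  interactiveElements: string[]; // buttons, links, inputs the agent can act on
}

interface AgentContext {
  goal: string;              // what the agent is trying to achieve in this step
  previousActions: string[]; // steps already taken in this test run
  observations: string[];    // the agent's working notes about the page
}

const example: { page: PageData; context: AgentContext } = {
  page: { url: "https://example.com/cart", title: "Cart", interactiveElements: ["Checkout button"] },
  context: { goal: "Complete checkout", previousActions: ["Added item to cart"], observations: [] },
};
console.log(example);
```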


GitHub Status Badges

31.03.2025

Now you can add a QA.tech status badge directly to your GitHub README to quickly display the status of your automated tests. The badge clearly indicates whether tests are passing or failing and provides insight into when the last test run occurred. This helps your team easily monitor test health at a glance, streamlining development workflows and improving visibility.

How It Works:

  • Live Status: Instantly shows if your tests are passing or failing.
  • Recent Activity: Includes a timestamp to indicate when tests were last executed.
  • Easy Integration: Quickly add badges with a simple markdown snippet directly into your README.

You'll find all the instructions you need under Settings -> Integrations.
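
As a generic illustration of what such a snippet looks like in a README (the URLs below are placeholders; copy the real snippet from Settings -> Integrations):

```markdown
<!-- Placeholder URLs: copy the real snippet from Settings -> Integrations -->
[![QA.tech tests](https://example.com/your-project/badge.svg)](https://example.com/your-project)
```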


Improved Network and Console Logging

24.03.2025

When debugging test automation, tracking events across a full test session can be overwhelming, especially when identifying exactly which actions correspond to each step. With our improved network and console logging, you now have crystal-clear visibility into the flow of events, making it significantly simpler and quicker to pinpoint issues.

How It Works

  • Full Session Visibility:
    Easily view network and console events from the entire test session in one comprehensive log, giving you context and continuity.
  • Current Step Highlighting:
    Events associated with the currently executed step are distinctly highlighted, letting you instantly spot the relevant actions and responses.
  • Future Steps Greyed Out:
    Upcoming events for future steps appear greyed out, keeping your focus firmly on the present step without distractions.

Benefits

  • Rapidly locate problematic events, speeding up debugging.
  • Clearly distinguish between current, previous, and upcoming actions.
  • Effortlessly maintain context as you navigate through test execution logs.

This update makes debugging test automation smoother and more intuitive, allowing you to efficiently zero in on the exact points of interest without getting lost in the noise.


Highlight Interactive Elements on the Page

19.03.2025

It's frustrating when your automated tests fail because the AI agent misses elements it should interact with. To solve this, we've introduced a simple way to visually confirm exactly what elements your agent sees and can interact with on your site.

How It Works

  • Visual Highlights: Simply click the eye icon in the tracer to instantly highlight all interactive elements on the page that the AI agent recognizes.
  • Real-Time Debugging: Quickly identify elements the agent might have missed or overlooked, making debugging faster and simpler.
  • Under the Hood: We extract these interactive elements using some JavaScript magic combined with precise HTML parsing, ensuring accurate representation of your site's interactivity.
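
For the curious, the general idea behind this kind of highlighting can be sketched in a few lines of browser-side code. This is a simplified illustration, not the agent's actual extraction logic, which involves more thorough HTML parsing and visibility checks.

```typescript
// Simplified illustration: outline elements a user (or agent) could interact with.
const INTERACTIVE_SELECTOR =
  'a[href], button, input, select, textarea, [role="button"], [onclick]';

function highlightInteractiveElements(): number {
  const elements = document.querySelectorAll<HTMLElement>(INTERACTIVE_SELECTOR);
  elements.forEach((el) => {
    el.style.outline = '2px solid magenta'; // draw a visible outline around each element
  });
  return elements.length; // how many interactive elements were found
}

console.log(`Highlighted ${highlightInteractiveElements()} interactive elements`);
```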

When to Use This

  • You're unsure if a button or input field was properly identified.
  • The agent misses an important action during test execution.
  • You want a quick visual sanity check for complex UI interactions.

Try it out now and see exactly how your AI agent views your site. Debugging interactive tests just got simpler!


Version History for Tests

14.03.2025

We've introduced a new feature that tracks all changes made to test cases over time, including test steps and configurations. Teams can now review who made changes, when they were made, and what was changed. This addition ensures a complete audit trail for each test and offers a simple way to restore previous versions if needed.

Key Highlights:

  • Comprehensive Change Tracking: Every update to a test—be it step revisions, data tweaks, or configuration swaps—is recorded in a historical log.
  • Auditable History: View contributor names and timestamps on test modifications. Great for collaboration and accountability.
  • Enhanced Collaboration: Teams get deeper insights into when, how, and why a test evolved, creating transparency and reducing test maintenance overhead.

This feature is accessible from the test's detail page, where you'll find a "History" clock icon that provides a chronological list of versions and their modifications.


AI-Suggested Test Cases

13.03.2025

QA.tech now intelligently suggests valuable test cases by automatically analyzing your website. Our AI-powered agent can:

  1. Log into your website (if authentication details are provided).
  2. Crawl your site's structure to identify key features.
  3. Instantly suggest high-value test cases tailored to your application.
  4. Automatically set up and execute each suggested test without any manual scripting.

Benefits:

  • Generate actionable tests rapidly (over 20 tests in under 30 minutes).
  • Significantly reduce manual test scripting effort.
  • Ensure comprehensive coverage with AI-driven insights.

Test Execution Insights

05.03.2025

QA.tech now provides insights into your test executions. With clear visual analytics, you'll instantly understand your test performance and trends:

What's Included:

  • Test Execution Trend: Easily track the number of tests executed, including pass/fail status, over any period.
  • Execution Time Distribution: Understand how long your tests take to execute, highlighting the 95th percentile performance (p95).
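
For reference, the 95th percentile means that 95% of executions finished at or below that duration. A minimal way to compute it from a list of durations (illustration only, using the nearest-rank method) looks like this:

```typescript
// Compute a percentile (e.g. p95) of execution times using the nearest-rank method.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // 1-based nearest-rank index
  return sorted[Math.max(rank - 1, 0)];
}

const durationsInSeconds = [12, 15, 14, 90, 13, 16, 14, 13, 15, 17];
console.log(`p95: ${percentile(durationsInSeconds, 95)}s`); // here: 90s
```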

Automatically Test and Verify File Downloads

14.02.2025

Testing File Downloads

QA.tech's agent can now easily test the file download and export functionality of your web applications. When running a test involving file downloads, the agent will:

  1. Detect the file download trigger
  2. Wait for the download to finish
  3. Confirm the file downloaded successfully
  4. Display a clear success message with file details

Just write your test as usual, and the agent handles the rest. For example, writing "Click the export button" will automatically trigger and verify the file download.
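
As background on how download verification generally works in browser automation, here is a generic Playwright-style sketch. It is not QA.tech's agent code; the page URL and button name are hypothetical.

```typescript
// Generic Playwright example of triggering and verifying a file download.
import { test, expect } from '@playwright/test';

test('export downloads a file', async ({ page }) => {
  await page.goto('https://example.com/reports'); // hypothetical page

  // Start waiting for the download before clicking the trigger.
  const downloadPromise = page.waitForEvent('download');
  await page.getByRole('button', { name: 'Export' }).click();

  const download = await downloadPromise;  // resolves once the download starts
  const filePath = await download.path();  // waits for the download to finish
  expect(filePath).not.toBeNull();          // the file landed on disk
  console.log(`Downloaded ${download.suggestedFilename()}`);
});
```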

Things to Keep in Mind

  • Downloads have a 30-second completion limit
  • Maximum file size supported: 100MB
  • Specialized downloads (e.g., streaming media) aren't supported
  • Downloaded files are temporary and can't be re-uploaded
  • Need to verify file formats? Reach out to our support
  • Email attachment downloads are not supported—contact us for assistance with such tests