Changelog

New updates and improvements to QA.tech

Test Dependency Graph

Understanding complex test setups can be challenging, especially when tests share browser states and pass information between each other. With this update, we’ve made it simple to visualize test dependencies and interactions clearly.

How It Works

  • Visualize Dependencies: Click on the new graph icon in any previous test result to instantly see a graphical representation of your test’s dependency structure.
  • Easier Maintenance: Quickly identify dependencies between tests, enabling streamlined updates and efficient troubleshooting.
  • Enhanced Understanding: Clearly view how browser states and data are reused and passed between test cases, offering deeper insights into your test flows.

Simply navigate to a test result and click on the graph menu to explore your test dependencies visually.

Run with Dependencies in Test Edit View

We’ve introduced a new feature that lets you easily rerun a test along with all its dependencies directly from the test edit view. In situations where running a single test isn’t sufficient—such as needing to recreate an item before deleting it—you can now use the “Run w. Dependencies” button. Normally, we use the latest execution of each dependency as the starting point.

How it Works:

  • Convenient Execution: Click “Run w. Dependencies” from the test edit view to automatically execute the current test and all the tests it depends on.
  • Ensures Proper Setup: Useful in scenarios where prior tests, like item creation, must be executed first to ensure accurate results for dependent tests, like item deletion.

This enhancement streamlines testing workflows, reducing manual steps and ensuring your tests run in the correct context every time.

Preview Test Changes in Edit View

We’ve introduced a new feature that lets you preview how your changes would impact the AI agent directly within the edit view. This allows you to quickly retry specific steps without needing to rerun the entire test, saving significant time during debugging and refinement.

How It Works:

  • If the AI agent clicks the wrong button or takes an unintended action, simply update your test instructions.
  • Use the new “Preview” button in the tracer panel to instantly see what the agent would do differently based on your updated instructions.

Important Note:

  • The preview shows the intended action but doesn’t actually replay it on the website. This means you’ll see precisely what the agent plans to do, but it won’t perform the action again live.

Find this feature in the tracer panel within the edit view.

GitHub Status Badges

Now you can add a QA.tech status badge directly to your GitHub README to quickly display the status of your automated tests. The badge clearly indicates whether tests are passing or failing and provides insight into when the last test run occurred. This helps your team easily monitor test health at a glance, streamlining development workflows and improving visibility.

How It Works:

  • Live Status: Instantly shows if your tests are passing or failing.
  • Recent Activity: Includes a timestamp to indicate when tests were last executed.
  • Easy Integration: Quickly add badges with a simple markdown snippet directly into your README.

You’ll find all the instructions you need under Settings -> Integrations.
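
For illustration, a README badge snippet generally looks like the example below. The badge URL and project identifier here are placeholders, not the real QA.tech snippet; copy the exact markdown from Settings -> Integrations.

```markdown
<!-- Placeholder example only; copy the real snippet from Settings -> Integrations -->
[![QA.tech test status](https://qa.tech/YOUR_PROJECT/badge.svg)](https://qa.tech/YOUR_PROJECT)
```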

Enhanced Tracer: View Page and Agent Data

Debugging automated tests often involves understanding exactly what data the AI agent is processing. To simplify this, we’ve enhanced the tracer to show all the data the agent receives, including the current page and agent context, providing clearer visibility into what’s happening during test execution.

How It Works

  • Complete Visibility: Easily view the exact data being provided to the AI agent, including:
    • Page Data: See exactly what the agent sees on the webpage during each test step.
    • Agent Context: Access insights into the agent’s current understanding and state.
  • Simplified Debugging: Quickly pinpoint issues and validate data inputs, making it easier to refine test goals and steps.
  • Improved Goal Clarity: Understand exactly what data is available for reference, enabling more precise and effective test scripting.

Getting Started

Access these enhanced details directly from your tracer view during any test run. Simply expand the tracer panel to view the complete data provided to your agent.

Improved Network and Console Logging

When debugging test automation, tracking events across a full test session can be overwhelming, especially when identifying exactly which actions correspond to each step. With our improved network and console logging, you now have crystal-clear visibility into the flow of events, making it significantly simpler and quicker to pinpoint issues.

How It Works

  • Full Session Visibility:
    Easily view network and console events from the entire test session in one comprehensive log, giving you context and continuity.

  • Current Step Highlighting:
    Events associated with the currently executed step are distinctly highlighted, letting you instantly spot the relevant actions and responses.

  • Future Steps Greyed Out:
    Upcoming events for future steps appear greyed out, keeping your focus firmly on the present step without distractions.

Benefits

  • Rapidly locate problematic events, speeding up debugging.

  • Clearly distinguish between current, previous, and upcoming actions.

  • Effortlessly maintain context as you navigate through test execution logs.

This update makes debugging test automation smoother and more intuitive, allowing you to efficiently zero in on the exact points of interest without getting lost in the noise.

Highlight Interactive Elements on the Page

It’s frustrating when your automated tests fail because the AI agent misses elements it should interact with. To solve this, we’ve introduced a simple way to visually confirm exactly what elements your agent sees and can interact with on your site.

How It Works

  • Visual Highlights: Simply click the eye icon in the tracer to instantly highlight all interactive elements on the page that the AI agent recognizes.

  • Real-Time Debugging: Quickly identify elements the agent might have missed or overlooked, making debugging faster and simpler.

  • Under the Hood: We extract these interactive elements using some JavaScript magic combined with precise HTML parsing, ensuring an accurate representation of your site’s interactivity. A simplified sketch of the general idea follows below.
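
To give a rough sense of what extracting interactive elements means in practice, here is a minimal browser-side sketch. It is not QA.tech’s actual implementation; the selector list and visibility checks are illustrative assumptions only.

```typescript
// Minimal sketch of the general idea, not QA.tech's actual implementation:
// collect elements a user could plausibly interact with and skip invisible ones.
const INTERACTIVE_SELECTOR =
  'a[href], button, input, select, textarea, [role="button"], [onclick], [tabindex]';

function findInteractiveElements(root: Document = document): Element[] {
  return Array.from(root.querySelectorAll(INTERACTIVE_SELECTOR)).filter((el) => {
    const rect = el.getBoundingClientRect();
    const style = getComputedStyle(el);
    // Keep only elements that take up space and are not hidden via CSS.
    return rect.width > 0 && rect.height > 0 && style.visibility !== 'hidden';
  });
}

// Example: outline every detected element, loosely similar to the eye-icon highlight.
findInteractiveElements().forEach((el) => {
  (el as HTMLElement).style.outline = '2px solid magenta';
});
```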

When to Use This

  • You’re unsure if a button or input field was properly identified.

  • The agent misses an important action during test execution.

  • You want a quick visual sanity check for complex UI interactions.

Try it out now and see exactly how your AI agent views your site. Debugging interactive tests just got simpler!

Version History for Tests

We’ve introduced a new feature that tracks all changes made to test cases over time, including test steps and configurations. Teams can now review who made changes, when they were made, and what was changed. This addition ensures a complete audit trail for each test and offers a simple way to restore previous versions if needed.

Key Highlights:

  • Comprehensive Change Tracking: Every update to a test—be it step revisions, data tweaks, or configuration swaps—is recorded in a historical log.
  • Auditable History: View contributor names and timestamps on test modifications. Great for collaboration and accountability.
  • Enhanced Collaboration: Teams get deeper insights into when, how, and why a test evolved, creating transparency and reducing test maintenance overhead.

This feature is accessible from the test’s detail page, where you’ll find a “History” clock icon that provides a chronological list of versions and their modifications.

AI-Suggested Test Cases

QA.tech now intelligently suggests valuable test cases by automatically analyzing your website. Our AI-powered agent can:

  1. Log into your website (if authentication details are provided).
  2. Crawl your site’s structure to identify key features.
  3. Instantly suggest high-value test cases tailored to your application.
  4. Automatically set up and execute each suggested test without any manual scripting.

Benefits:

  • Generate actionable tests rapidly (over 20 tests in under 30 minutes).
  • Significantly reduce manual test scripting effort.
  • Ensure comprehensive coverage with AI-driven insights.

Test Execution Insights

QA.tech now provides insights into your test executions. With clear visual analytics, you’ll instantly understand your test performance and trends:

What’s Included:

  • Test Execution Trend: Easily track the number of tests executed, including pass/fail status, over any period.
  • Execution Time Distribution: Understand how long your tests take to execute, highlighting the 95th percentile performance (p95). A short sketch of how p95 is conventionally computed follows below.
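
If p95 is unfamiliar: it is the duration below which 95% of executions complete. The sketch below shows one conventional way to compute it (the nearest-rank method) as background only; it is not necessarily the exact calculation behind the chart.

```typescript
// Conventional p95 via the nearest-rank method; shown for background only,
// not necessarily the exact calculation behind the QA.tech chart.
function p95(durationsMs: number[]): number {
  const sorted = [...durationsMs].sort((a, b) => a - b);
  const rank = Math.ceil(0.95 * sorted.length); // smallest rank covering 95% of runs
  return sorted[rank - 1];
}

// 95% of these runs finish at or below the returned duration (30000 ms here).
console.log(p95([1200, 1500, 900, 30000, 1100]));
```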

Project Contexts

Project Contexts let you provide data that the AI agent will use when running all tests. This feature ensures the agent understands key aspects of your site, enabling more accurate and reliable test execution.

How it Works:

  • Centralized Information: Define project-specific details that the AI agent should consider during testing.
  • Context-Aware Testing: The agent uses this data to adapt its actions based on your website’s behavior and requirements.

What Should You Add to Project Context?

Project Contexts should include information that helps the AI understand and interact with your application effectively. Examples include:

  • Website Behavior Rules: Define interactions such as requiring hover actions before clicking menu items.
  • Domain-Specific Constraints: Specify rules such as only using Swedish company data if your service handles Swedish businesses.
  • Testing Guidelines: Ensure that tests fail on specific conditions, such as obvious typos or interactions with restricted UI elements.

Examples of Project Context Data:

  • The service is a route planning tool for American railways.
  • When creating users, always use Swedish names.
  • Always fail tests when obvious typos are found.
  • If a button is red, it should never be clicked.

You’ll find the Project Context settings under Project Settings -> Configs.

Automatically Test and Verify File Downloads

Testing File Downloads

QA.tech’s agent can now easily test the file download and export functionality of your web applications. When running a test involving file downloads, the agent will:

  1. Detect the file download trigger
  2. Wait for the download to finish
  3. Confirm the file downloaded successfully
  4. Display a clear success message with file details

Just write your test as usual, and the agent handles the rest. For example, writing “Click the export button” will automatically trigger and verify the file download.

Things to Keep in Mind

  • Downloads have a 30-second completion limit
  • Maximum file size supported: 100MB
  • Specialized downloads (e.g., streaming media) aren’t supported
  • Downloaded files are temporary and can’t be re-uploaded
  • Need to verify file formats? Reach out to our support
  • Email attachment downloads are not supported—contact us for assistance with such tests

Easier Debugging

Understand Every Step of Your AI Agent

Why We Built This

Debugging test automation can be frustrating when you don’t know why a test failed or how the AI agent reached its decision. With this update, we’ve made it incredibly easy to see what went wrong, step by step.

How It Works

  • Step-by-Step Breakdown: Follow every action the AI agent takes in real time. See what elements it interacts with, what data it processes, and why it makes certain choices.
  • Clear Error Insights: When a test fails, you’ll get precise details on what went wrong—whether it’s a missing element, an unexpected page state, or an incorrect output.
  • Agent Thought Process: Understand not just what happened but why—see the AI’s reasoning behind every interaction, making debugging much faster.
  • Visual Logs & Reports: Quickly navigate logs with timestamps, screenshots, and structured explanations of the agent’s actions.

Quick Summaries for Failed Tests

We’ve added quick summaries for failed tests, making it much easier to diagnose and understand test failures at a glance.

How it Works:

  • Hover to View Details: Quickly see why a test failed by hovering over the status in the test list.
  • Concise Failure Summaries: Get an instant overview of errors without digging into logs.
  • Run Overview: View a high-level summary of failures for an entire test run, helping you pinpoint patterns and common issues.

This feature speeds up debugging by providing essential failure insights right where you need them. No more clicking through multiple logs—just hover and analyze!

Test Plans

Test Plans allow you to organize and manage groups of test cases that can be executed together as a single unit.

Overview

Test Plans are collections of test cases that you want to run together regularly. This feature provides flexibility in running tests through multiple methods: API triggers, scheduled runs, or manual execution through the UI.
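
As a rough illustration of an API-triggered run from a CI pipeline, here is a hedged sketch. The endpoint path, payload shape, and authentication header below are placeholders for illustration, not the documented QA.tech API; use the details from your project’s integration settings instead.

```typescript
// Hypothetical sketch of triggering a test plan over HTTP from a CI step.
// The endpoint path, payload shape, and auth header are placeholders,
// not the documented QA.tech API; check your integration settings for the real values.
async function triggerTestPlan(planId: string, apiToken: string): Promise<void> {
  const response = await fetch(`https://app.qa.tech/api/test-plans/${planId}/runs`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ trigger: 'ci' }),
  });
  if (!response.ok) {
    throw new Error(`Failed to trigger test plan: ${response.status}`);
  }
}

// Example: kick off a plan from CI, reading the token from the environment.
triggerTestPlan('smoke-tests', process.env.QATECH_API_TOKEN ?? '').catch(console.error);
```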

Benefits:

  • Organizing related test cases into logical groups
  • Enabling automated execution on schedules
  • Supporting API-triggered test automation