Changelog

New updates and improvements to QA.tech

AI-Suggested Test Cases

QA.tech now intelligently suggests valuable test cases by automatically analyzing your website. Our AI-powered agent can:

  1. Log into your website (if authentication details are provided).
  2. Crawl your site’s structure to identify key features.
  3. Instantly suggest high-value test cases tailored to your application.
  4. Automatically set up and execute each suggested test without any manual scripting.

Benefits:

  • Generate actionable tests rapidly (over 20 tests in under 30 minutes).
  • Significantly reduce manual test scripting effort.
  • Ensure comprehensive coverage with AI-driven insights.
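For illustration, the crawling step (item 2 above) can be sketched as a breadth-first pass over same-site links. This is a minimal sketch, not QA.tech's actual implementation; `fetch` is a stand-in for downloading and parsing a page's HTML.

```python
from collections import deque
from urllib.parse import urljoin, urlparse

def crawl(start_url, fetch, max_pages=50):
    """Breadth-first crawl of same-site links, returning discovered pages.

    `fetch` is a callable returning a list of hrefs for a URL; a real
    crawler would download the page and extract links from its HTML.
    """
    site = urlparse(start_url).netloc
    seen, queue = {start_url}, deque([start_url])
    pages = []
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        pages.append(url)
        for href in fetch(url):
            absolute = urljoin(url, href)
            # Stay on the same site and skip already-visited URLs.
            if urlparse(absolute).netloc == site and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return pages
```

Each discovered page would then be analyzed to propose candidate test cases.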

Automatically Test and Verify File Downloads

Testing File Downloads

QA.tech’s agent can now easily test the file download and export functionality of your web applications. When running a test involving file downloads, the agent will:

  1. Detect the file download trigger
  2. Wait for the download to finish
  3. Confirm the file downloaded successfully
  4. Display a clear success message with file details

Just write your test as usual, and the agent handles the rest. For example, writing “Click the export button” will automatically trigger and verify the file download.

Things to Keep in Mind

  • Downloads have a 30-second completion limit
  • Maximum file size supported: 100MB
  • Specialized downloads (e.g., streaming media) aren’t supported
  • Downloaded files are temporary and can’t be re-uploaded
  • Need to verify file formats? Reach out to our support
  • Email attachment downloads are not supported—contact us for assistance with such tests
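The limits above can be expressed as a simple validation step. This is an illustrative check mirroring the documented 30-second and 100MB limits; the function and its signature are assumptions, not the agent's internal code.

```python
MAX_DOWNLOAD_SECONDS = 30
MAX_FILE_BYTES = 100 * 1024 * 1024  # 100MB

def verify_download(size_bytes, elapsed_seconds):
    """Return (ok, message) for a completed download, per the documented limits."""
    if elapsed_seconds > MAX_DOWNLOAD_SECONDS:
        return False, f"Download exceeded the {MAX_DOWNLOAD_SECONDS}s limit"
    if size_bytes > MAX_FILE_BYTES:
        return False, "File larger than the 100MB maximum"
    return True, f"Downloaded {size_bytes} bytes in {elapsed_seconds:.1f}s"
```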

Test Execution Insights

QA.tech now provides insights into your test executions. With clear visual analytics, you’ll instantly understand your test performance and trends:

What’s Included:

  • Test Execution Trend: Easily track the number of tests executed, including pass/fail status, over any period.
  • Execution Time Distribution: Understand how long your tests take to execute, highlighting the 95th percentile performance (p95).
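For reference, a p95 value like the one shown in the dashboard can be computed from raw execution times with the nearest-rank method. This is a generic sketch of the metric, not QA.tech's analytics code.

```python
import math

def p95(durations):
    """95th-percentile execution time using the nearest-rank method."""
    if not durations:
        raise ValueError("no durations")
    ordered = sorted(durations)
    # Nearest-rank: the smallest value covering 95% of observations.
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]
```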

Easier Debugging

Understand Every Step of Your AI Agent

Why We Built This

Debugging test automation can be frustrating when you don’t know why a test failed or how the AI agent reached its decision. With this update, we’ve made it incredibly easy to see what went wrong, step by step.

How It Works

  • Step-by-Step Breakdown: Follow every action the AI agent takes in real time. See what elements it interacts with, what data it processes, and why it makes certain choices.
  • Clear Error Insights: When a test fails, you’ll get precise details on what went wrong—whether it’s a missing element, an unexpected page state, or an incorrect output.
  • Agent Thought Process: Understand not just what happened but why—see the AI’s reasoning behind every interaction, making debugging much faster.
  • Visual Logs & Reports: Quickly navigate logs with timestamps, screenshots, and structured explanations of the agent’s actions.

Project Contexts

Provide data that the AI agent will use when running all tests. This feature ensures the agent understands key aspects of your site, enabling more accurate and reliable test execution.

How it Works:

  • Centralized Information: Define project-specific details that the AI agent should consider during testing.
  • Context-Aware Testing: The agent uses this data to adapt its actions based on your website’s behavior and requirements.

What Should You Add to Project Context?

Project Contexts should include information that helps the AI understand and interact with your application effectively. Examples include:

  • Website Behavior Rules: Define interactions such as requiring hover actions before clicking menu items.
  • Domain-Specific Constraints: Specify rules such as only using Swedish company data if your service handles Swedish businesses.
  • Testing Guidelines: Ensure that tests fail on specific conditions, such as obvious typos or interactions with restricted UI elements.

Examples of Project Context Data:

  • The service is a route planning tool for American railways.
  • When creating users, always use Swedish names.
  • Always fail tests when obvious typos are found.
  • If a button is red, it should never be clicked.

You can find the Project Context settings under Project Settings – Configs.
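The example entries above are plain-language rules; one hypothetical way to picture how they reach the agent is as a preamble to each task. The data shape and `build_agent_prompt` helper here are illustrative assumptions, not QA.tech's actual schema.

```python
# Hypothetical representation of project context entries.
PROJECT_CONTEXT = [
    "The service is a route planning tool for American railways.",
    "When creating users, always use Swedish names.",
    "Always fail tests when obvious typos are found.",
    "If a button is red, it should never be clicked.",
]

def build_agent_prompt(task, context=PROJECT_CONTEXT):
    """Prepend project context rules to the task the agent will run."""
    rules = "\n".join(f"- {rule}" for rule in context)
    return f"Project context:\n{rules}\n\nTask: {task}"
```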

Quick Summaries for Failed Tests

We've added Quick Summaries for Failed Tests, making it much easier to diagnose and understand test failures at a glance.

How it Works:

  • Hover to View Details: Quickly see why a test failed by hovering over the status in the test list.
  • Concise Failure Summaries: Get an instant overview of errors without digging into logs.
  • Run Overview: View a high-level summary of failures for an entire test run, helping you pinpoint patterns and common issues.

This feature speeds up debugging by providing essential failure insights right where you need them. No more clicking through multiple logs—just hover and analyze!

Test Plans

Test Plans allow you to organize and manage groups of test cases that can be executed together as a single unit.

Overview

Test Plans are collections of test cases that you want to run together regularly. This feature provides flexibility in running tests through multiple methods: API triggers, scheduled runs, or manual execution through the UI.

Benefits:

  • Organizing related test cases into logical groups
  • Enabling automated execution on schedules
  • Supporting API-triggered test automation
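As a sketch of API-triggered execution, the request below builds (but does not send) an HTTP call that starts a test plan run. The endpoint URL, field names, and auth scheme are hypothetical; consult QA.tech's API documentation for the real trigger endpoint.

```python
import json
from urllib import request

def build_trigger_request(plan_id, api_token):
    """Build (but don't send) an HTTP request that starts a test plan run.

    The URL and payload fields below are illustrative assumptions.
    """
    payload = json.dumps({"testPlanId": plan_id}).encode()
    return request.Request(
        f"https://app.qa.tech/api/testPlans/{plan_id}/run",  # hypothetical endpoint
        data=payload,
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```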

File Uploads with Custom Files

We’ve enhanced file upload testing capabilities to support both default and custom files, making it easier to test different file import scenarios.

How it Works:

  • Default Test Files: QA.tech provides built-in test files, including test.pdf and test.jpg, that are automatically available for use in tests.
  • Custom File Uploads: Users can upload their own files for testing specific import features or custom requirements.
  • File Selector Integration: The system captures file selector events and allows automated uploads from pre-configured test files.
  • File Type Restrictions: The AI agent detects allowed file types from input elements and ensures only matching file types are used in tests.

This feature provides greater flexibility for file-related test cases, helping teams validate upload functionality with ease.
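The file-type restriction check can be pictured as matching a candidate file against the input element's `accept` attribute. This is an illustrative sketch: it handles extension tokens and the `image/*` wildcard for a few common extensions, whereas a real implementation would consult a full MIME database.

```python
import os

def matches_accept(filename, accept):
    """Check a file against an <input type="file"> accept attribute."""
    ext = os.path.splitext(filename)[1].lower()
    image_exts = {".jpg", ".jpeg", ".png", ".gif", ".webp"}
    for token in (t.strip().lower() for t in accept.split(",")):
        if token.startswith("."):
            # Extension token, e.g. ".pdf"
            if ext == token:
                return True
        elif token == "image/*" and ext in image_exts:
            # Wildcard MIME type for common image extensions.
            return True
    return False
```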

Multiple Configs for User Role Testing and More

We’ve introduced the ability to define multiple configurations for test runs, allowing for more flexibility in how tests are executed. Now, you can create and manage different test setups based on user roles, file uploads, or other test-specific variations.

How it Works:

  • Role-Based Configurations: Define different login credentials and permissions for various user roles within a test.
  • Flexible File Handling: Set up different configurations to test with multiple file types, sizes, or formats.

This feature helps teams create more dynamic and comprehensive test cases, improving overall test coverage and reliability. Start using multiple configurations for enhanced test execution today!
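Conceptually, role-based configuration is a lookup from role to credentials. The names and fields below are assumptions for illustration, not QA.tech's actual config schema; passwords are referenced via environment-variable names rather than stored inline.

```python
# Illustrative role-to-credentials mapping (hypothetical schema).
CONFIGS = {
    "admin":  {"username": "admin@example.com",  "password_env": "ADMIN_PW"},
    "viewer": {"username": "viewer@example.com", "password_env": "VIEWER_PW"},
}

def config_for_role(role):
    """Pick the login configuration for a given user role."""
    try:
        return CONFIGS[role]
    except KeyError:
        raise ValueError(f"No config defined for role {role!r}") from None
```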

Mouse Events

We’ve added a Mouse Event Tracing feature to give you deeper insights into test execution. Now, you can visually track how the mouse moves, where it clicks, and where text is typed during test runs. This is part of QA.tech’s vision to not just click on elements but to actually act as a user.

How it Works:

  • Live Mouse Tracking: See the exact path of the mouse during test execution.
  • Click and Type Insights: Identify where clicks occur and where text is entered, helping diagnose UI interaction issues.
  • Better Debugging: Quickly pinpoint potential issues by understanding how the test interacts with elements on the screen.

This feature enhances test analysis, making it easier to refine automated test cases and optimize interactions with your application. Start visualizing mouse activity in your tests today!

Jira Integration

Alongside Trello, we’ve also introduced Jira integration to simplify bug tracking. Now, you can send bug reports directly to Jira, ensuring a smoother workflow for teams using the platform.

How it Works:

  • Direct Jira Reporting: Send bug reports to Jira without leaving the test environment.
  • Customizable Issues: Select the appropriate Jira project, issue type, and priority to keep your bug tracking structured.
  • Rich Bug Details: Automatically include relevant test data, steps to reproduce, and screenshots, reducing manual effort.

This integration helps teams efficiently track, prioritize, and resolve issues within Jira. Start streamlining your bug tracking with Jira integration today!
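For reference, a Jira bug report ultimately becomes a JSON body for Jira's `POST /rest/api/2/issue` endpoint. The field structure below follows Jira's public REST API; how QA.tech maps test data onto those fields is an assumption for illustration.

```python
def jira_issue_payload(project_key, summary, steps, priority="Medium"):
    """Build the JSON body for Jira's POST /rest/api/2/issue endpoint."""
    description = "Steps to reproduce:\n" + "\n".join(
        f"{i}. {step}" for i, step in enumerate(steps, 1)
    )
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
            "priority": {"name": priority},
        }
    }
```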

Trello Integration

We’ve made it easier to track and manage bugs with our new Trello integration. Now, you can send bug reports directly to your Trello boards, streamlining collaboration and ensuring issues are addressed efficiently.

How it Works:

  • Seamless Reporting: Send bug reports directly from the test environment to Trello without switching between tools.
  • Customizable Workflow: Choose which Trello board and list the bug report should go to, making it easier to categorize and prioritize issues.
  • Automatic Details: Reports include relevant test details and screenshots, reducing manual input and improving bug tracking accuracy.

This integration helps keep your team aligned and makes bug tracking more efficient. Start sending bug reports to Trello with ease!

Suggestions for Test Steps

We’ve introduced a new feature that enhances the test creation and editing experience by providing intelligent suggestions for writing test steps and selecting the best tools for execution. This helps streamline the process and ensures more efficient, effective test automation.

How it Works:

  • Step Guidance: When creating or editing a test case, you’ll receive suggestions on how to structure each step for better clarity and accuracy.
  • Tool Recommendations: Based on the step’s context, the system will suggest appropriate tools or actions the agent can use to complete the task effectively.
  • Efficiency Boost: Reduce errors and enhance test reliability by leveraging AI-driven insights for test case optimization.

This feature simplifies test creation, reduces the need for trial-and-error, and improves overall test execution. Start creating better test cases with smart suggestions today!

Stop Test in Edit View

We’ve introduced the ability to stop a test directly from the edit view, giving you more control during the test creation process. This is especially useful if a test isn’t running as expected and you need to make adjustments before trying again.

How it works:

  1. Pause During Editing: Stop a running test without leaving the edit view.
  2. Fix Steps: Make necessary adjustments to correct or improve the test.
  3. Run Again: Restart the test when you’re ready to continue.

This feature helps you quickly address issues and refine your tests for better results.

Duplicate tests easily

We’re introducing Test Duplication, a simple way to duplicate any existing tests. This feature is ideal for creating variations and saving time during test setup.

How it works:

  1. Open the Test Menu: Choose an existing test from your list.
  2. Duplicate with One Click: Use the “Duplicate” option to create a copy of the test.
  3. Customize as Needed: Adjust the duplicated test to explore variations or fine-tune specific parameters.
  4. Set Live: Activate the test when ready to include it in your workflows.

This update streamlines test creation, making it easier to experiment and optimize without starting from scratch.