A collection of resources on AI-driven software testing and automation, including research papers, articles, tools, and case studies aimed at improving testing efficiency and fostering innovation.
Published in: 2023 6th International Conference on Information Systems and Computer Networks (ISCON)
Summary: This paper discusses how AI, specifically machine learning (ML) and deep learning (DL), improves software testing efficiency by reducing manual effort. It compares ML and DL techniques for accelerating application testing and concludes that AI is especially beneficial for complex, time-sensitive applications.
Published in: Not specified (part of a systematic review study)
Summary: This paper categorizes AI techniques applicable to various testing activities, including test case reusability, coverage, fault detection, and manual effort reduction. It finds that AI-based methods make test automation more efficient, improve fault detection, and enable broader test coverage.
Published in: International Journal for Research in Applied Science and Engineering Technology
Summary: The study explores generative AI for automatically creating comprehensive test cases and for detecting bugs by analyzing codebases and execution traces. It suggests that generative AI significantly improves test coverage and efficiency, but that challenges such as data quality and domain specificity still need to be addressed.
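To make the test-generation idea concrete, here is a minimal sketch of one way it could look in practice: prompting a language model to draft pytest cases from a function's source. The `openai` client usage, the model name, and the prompt wording are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: prompting a language model to draft pytest cases for a function.
# Assumes the `openai` package (>=1.0) and an OPENAI_API_KEY in the environment;
# the model name and prompt are illustrative choices, not from the paper.
import inspect

from openai import OpenAI


def slugify(text: str) -> str:
    """Example function under test: lowercases text and joins words with dashes."""
    return "-".join(text.lower().split())


def draft_tests_for(func) -> str:
    """Ask the model for pytest cases covering typical, edge, and error inputs."""
    source = inspect.getsource(func)
    prompt = (
        "Write pytest test functions for the following Python function. "
        "Cover typical inputs and edge cases (empty string, extra whitespace), "
        "and return only runnable Python code.\n\n" + source
    )
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; substitute whatever is available
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Generated tests should be reviewed before being committed to the suite.
    print(draft_tests_for(slugify))
```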
Published in: Proceedings of the 2018 ACM/SPEC International Conference on Performance Engineering
Summary: This paper examines AI's role in automating the development, operation, and analysis phases of software engineering, focusing on defect prediction and log analysis. It suggests that AI strengthens defect prediction, supports more effective logging, and improves reliability prediction.
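As a rough illustration of the defect-prediction theme, the sketch below trains a classifier on per-file code metrics and ranks files by predicted defect risk. The feature set and the synthetic data are assumptions made for the example, not the paper's actual setup.

```python
# Minimal sketch of file-level defect prediction: train a classifier on historical
# code metrics and flag files that look risky in the next release.
# The feature set and synthetic data here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Features per file: lines changed, commit count, cyclomatic complexity, past defects.
X = rng.integers(0, 200, size=(500, 4)).astype(float)
# Toy label: files with heavy churn and many prior defects are more likely to be buggy.
y = ((X[:, 0] > 120) & (X[:, 3] > 50)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# Rank unseen files by predicted defect probability to focus review and testing effort.
new_files = rng.integers(0, 200, size=(5, 4)).astype(float)
print(model.predict_proba(new_files)[:, 1])
```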
Published in: TENCON 2023 - IEEE Region 10 Conference
Summary: This systematic review analyzes 20 studies on AI's role in software testing, covering areas like test case generation, defect prediction, and prioritization.
Published in: IEEE International Conference on Artificial Intelligence Testing (AITest)
Summary: The paper summarizes a panel discussion among industry experts, detailing their visions and strategies for applying AI in testing, including testing AI systems and building self-testing systems.
Summary: A comprehensive guide on implementing AI-driven test automation, covering practical aspects like choosing the right tools, setting up test environments, and integrating AI models into existing test frameworks. Includes code examples and real-world case studies.
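One common integration point for the kind of setup the guide covers is hooking a model into test collection. The `conftest.py` sketch below reorders tests by a risk score without touching the tests themselves; the `score_test()` helper is a hypothetical stand-in for whatever model is actually plugged in.

```python
# conftest.py sketch: one way to wire an AI scoring model into an existing pytest
# suite without changing the tests themselves. The score_test() helper and its
# risk logic are hypothetical placeholders for a real model's prediction.


def score_test(nodeid: str) -> float:
    """Hypothetical risk score in [0, 1]; replace with a trained model's output."""
    return 0.9 if "payment" in nodeid else 0.1


def pytest_collection_modifyitems(session, config, items):
    """Run the tests the model considers riskiest first."""
    items.sort(key=lambda item: score_test(item.nodeid), reverse=True)
```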
Summary: Explores how to create resilient automated tests using ML algorithms that adapt to UI changes. Details strategies for building self-healing mechanisms into test automation frameworks and discusses deployments that have succeeded at scale.
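For a feel of the underlying idea, here is a deliberately simple, rule-based sketch of a self-healing locator for Selenium: it tries the primary locator and falls back to recorded alternatives, logging whichever one heals the lookup. A production ML-based approach would learn and rank those alternatives; the page and locators used here are examples only.

```python
# Sketch of a simple self-healing locator for Selenium: try the primary locator,
# then fall back to alternative locators recorded for the same element, logging
# whichever one "heals" the lookup. Selenium is assumed; locators are examples.
import logging

from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.by import By

log = logging.getLogger("self_healing")


def find_with_healing(driver, primary, fallbacks):
    """primary/fallbacks are (By.<strategy>, value) tuples for the same element."""
    try:
        return driver.find_element(*primary)
    except NoSuchElementException:
        for locator in fallbacks:
            try:
                element = driver.find_element(*locator)
                log.warning("Healed %s -> %s", primary, locator)
                return element
            except NoSuchElementException:
                continue
        raise


if __name__ == "__main__":
    driver = webdriver.Chrome()
    driver.get("https://example.com/login")
    submit = find_with_healing(
        driver,
        primary=(By.ID, "submit-btn"),
        fallbacks=[(By.NAME, "submit"),
                   (By.XPATH, "//button[normalize-space()='Log in']")],
    )
    submit.click()
```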
Summary: Demonstrates practical ways to leverage GPT models for test case generation, API testing, and test documentation. Includes examples of prompt engineering for testing scenarios and integration patterns with existing test suites.
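As one possible integration pattern along these lines, the sketch below uses a structured prompt to generate boundary-value payloads for a REST endpoint, caches them to a file so the suite stays deterministic, and replays them through a parametrized `requests` test. The endpoint URL, prompt text, expected JSON shape, and model name are all assumptions.

```python
# Sketch of an integration pattern: generate API test payloads with a model once,
# cache them to a JSON file, and run them as a deterministic parametrized test.
# The endpoint, prompt wording, and model name are illustrative assumptions.
import json
import pathlib

import pytest
import requests
from openai import OpenAI

CACHE = pathlib.Path("generated_user_api_cases.json")
PROMPT = (
    'Generate test cases for POST /users expecting JSON {"name": str, "age": int}. '
    'Return a JSON array of {"payload": object, "expected_status": int} covering '
    "valid input, missing fields, wrong types, and boundary ages. Return only JSON."
)


def load_cases() -> list[dict]:
    """Generate cases once and cache them so reruns do not depend on the model."""
    if not CACHE.exists():
        client = OpenAI()
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=[{"role": "user", "content": PROMPT}],
        )
        CACHE.write_text(reply.choices[0].message.content)
    return json.loads(CACHE.read_text())


@pytest.mark.parametrize("case", load_cases())
def test_create_user(case):
    response = requests.post("https://api.example.test/users", json=case["payload"])
    assert response.status_code == case["expected_status"]
```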
Summary: A detailed walkthrough of implementing ML-based test case prioritization, including feature engineering, model selection, and integration with CI/CD pipelines. Provides code samples and performance metrics from real projects.
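A minimal sketch of the prioritization idea, assuming simple per-test features (recent failure rate, churn in covered files, duration) and a logistic-regression model; the data and feature choices are illustrative, not taken from the article.

```python
# Sketch of ML-based test prioritization: learn failure likelihood from simple
# per-test features, then emit an ordered test list for the CI pipeline.
# Features and training data are assumptions made for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical runs: [recent_failure_rate, churn_in_covered_files, duration_seconds]
X_history = np.array([
    [0.30, 120, 4.0],
    [0.05,  10, 1.2],
    [0.60, 300, 9.5],
    [0.00,   5, 0.8],
    [0.45,  90, 3.1],
    [0.10,  40, 2.0],
])
y_failed = np.array([1, 0, 1, 0, 1, 0])  # did the test fail on the next run?

model = LogisticRegression().fit(X_history, y_failed)

# Features for the tests affected by the current change set.
candidates = {
    "tests/test_checkout.py::test_discount": [0.50, 210, 5.0],
    "tests/test_profile.py::test_rename":   [0.02,  15, 1.0],
    "tests/test_cart.py::test_add_item":    [0.20,  80, 2.5],
}
scores = model.predict_proba(np.array(list(candidates.values())))[:, 1]
ordered = [name for _, name in sorted(zip(scores, candidates), reverse=True)]

# A CI job could feed this order to the runner, e.g. `pytest <ordered test ids>`.
print("\n".join(ordered))
```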
Summary: Covers advanced techniques in visual regression testing using AI, including handling dynamic content, cross-browser testing, and visual AI algorithms. Discusses practical implementation strategies and common challenges in visual testing.
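To illustrate the core mechanics without a commercial visual-AI tool, here is a simple Pillow-based sketch: regions known to hold dynamic content are masked out, then the fraction of pixels whose difference exceeds a threshold is checked against a budget. The file paths and ignore region are examples, and learned visual-AI models replace the raw pixel threshold in real tools.

```python
# Sketch of a pixel-diff visual check with masking for dynamic regions (dates,
# ads, spinners). Visual-AI tools use learned models; this Pillow-based version
# only illustrates the masking + thresholding idea. Paths are example values.
from PIL import Image, ImageChops, ImageDraw


def masked_diff_ratio(baseline_path, current_path, ignore_boxes, threshold=30):
    """Fraction of pixels whose channel difference exceeds `threshold`,
    after blacking out regions known to contain dynamic content.
    Assumes both screenshots have identical dimensions."""
    baseline = Image.open(baseline_path).convert("RGB")
    current = Image.open(current_path).convert("RGB")

    for image in (baseline, current):
        draw = ImageDraw.Draw(image)
        for box in ignore_boxes:          # box = (left, top, right, bottom)
            draw.rectangle(box, fill=(0, 0, 0))

    diff = ImageChops.difference(baseline, current)
    pixels = list(diff.getdata())
    changed = sum(1 for r, g, b in pixels if max(r, g, b) > threshold)
    return changed / len(pixels)


if __name__ == "__main__":
    ratio = masked_diff_ratio("baseline/home.png", "current/home.png",
                              ignore_boxes=[(0, 0, 1280, 80)])  # e.g. rotating banner
    assert ratio < 0.01, f"visual regression: {ratio:.2%} of pixels changed"
```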