Sagar Arora
#ATAITF: A Festival of AI Testing Tools & Techniques
About Speaker

Staff Engineer
Nagarro
With over 11 years in software testing and quality engineering, I thrive on pushing boundaries and redefining how QA and automation work. From mastering Selenium, Appium, Playwright, API Testing, and AI-driven testing to exploring the future of AI/ML in automation, cloud testing, and performance engineering, I’m always on the lookout for smarter, faster, and more efficient solutions.
At Nagarro, I’ve been at the forefront of designing automation frameworks, optimizing test strategies, implementing CI/CD pipelines, mentoring teams, and enhancing quality engineering practices. Driving transformation through innovation, I help teams elevate their game and embrace next-gen testing methodologies.
Passionate about making software testing not just better—but smarter. Let’s innovate, automate, and disrupt!
Topic – AI-Powered Dynamic Test Prioritization for Efficient Test Execution
In the fast-paced world of software development, ensuring that testing is efficient and effective is more critical than ever. With growing test suites and increasing pressure to deliver high-quality software quickly, teams often struggle to manage test execution. Traditional approaches waste time and resources because they execute the entire test suite regardless of what changed in the code or where the risk lies.
This is where AI-Powered Dynamic Test Prioritization comes in. By leveraging machine learning models, this approach dynamically prioritizes test cases based on real-time factors such as recent code changes, historical test results, and risk assessments. The model decides which tests to run first, ensuring that the most relevant tests are prioritized while reducing overall test execution time.
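To make the idea concrete, here is a minimal, hypothetical Python sketch of such a prioritizer. It is not the speaker's implementation: the `TestCase` fields, the overlap heuristic, and the 0.7/0.3 weights are all illustrative assumptions standing in for what a trained model would learn.

```python
# Hypothetical sketch: rank tests by relevance to recent code changes
# plus historical failure rate. All names and weights are illustrative.
from dataclasses import dataclass


@dataclass
class TestCase:
    name: str
    covered_files: set    # files this test exercises (assumed coverage map)
    failure_rate: float   # historical fraction of runs that failed


def priority_score(test: TestCase, changed_files: set) -> float:
    # How much of this test's coverage overlaps with the latest change set.
    change_relevance = len(test.covered_files & changed_files) / max(len(test.covered_files), 1)
    # Blend change relevance with the historical failure signal.
    return 0.7 * change_relevance + 0.3 * test.failure_rate


def prioritize(tests: list, changed_files: set) -> list:
    # Highest-scoring (most relevant, most failure-prone) tests run first.
    return sorted(tests, key=lambda t: priority_score(t, changed_files), reverse=True)


tests = [
    TestCase("test_checkout", {"cart.py", "payment.py"}, 0.40),
    TestCase("test_login",    {"auth.py"},               0.05),
    TestCase("test_search",   {"search.py", "index.py"}, 0.10),
]
ordered = prioritize(tests, changed_files={"payment.py"})
print([t.name for t in ordered])  # checkout first: it touches payment.py
```

In a real system the hand-tuned weights would be replaced by a model trained on past build outcomes, but the output contract is the same: an ordered list of tests.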
Through this method, AI not only identifies which parts of the application are most affected by the latest changes but also draws on historical data about which tests have been prone to failure. By weighing both risk factors and business impact, the model ensures that tests for critical features always execute first, giving teams faster feedback.