Hi, I'm
Roshan Ali
Senior AI QA Engineer · ERP & AI Platform QA · API & Data · Agile Shift-Left QA
5+ Years of Quality Excellence
Engineering Quality in the Age of Intelligent Systems
ISTQB Certified Test Manager (CTAL-TM) and Lean Six Sigma Green Belt with 5+ years of hands-on experience architecting quality strategies for complex, non-deterministic ERP and AI scheduling platforms. Proven track record of orchestrating end-to-end validation across functional, performance, and security-aware testing while remaining deeply embedded in API, data, and system-level verification.
Drove zero-critical-defect releases by pioneering AI-assisted testing, optimising regression coverage, and accelerating root-cause resolution across sophisticated software-hardware implementations. Specialised in designing custom constraint-driven frameworks — rules, invariants, feasibility, cost, and impact checks — for non-deterministic AI output validation.
Recognised for taking full ownership of quality outcomes, mentoring senior QA engineers, and influencing cross-functional teams to adopt shift-left practices that prevent defects before they reach production.
Tools & Expertise
A comprehensive toolkit for end-to-end quality engineering across modern enterprise environments.
Test Management & Strategy
ERP & AI Platform QA
API & Data Validation
Functional & Regression Testing
Defect Lifecycle Governance
Delivery & Agile QA
Test Automation
Leadership & Communication
Experience Timeline
Five-plus years of quality ownership and end-to-end strategy for enterprise-level products.
Architected end-to-end quality strategy for a non-deterministic ERP & AI PSO scheduling platform — single-handedly owning test strategy, risk-based prioritisation, and release readiness across 90+ sprints, consistently delivering on-time, high-confidence releases. Engineered and executed 2,500+ test cases across 510+ user stories spanning functional, regression, integration, smoke, and UAT, achieving full traceability from specs to outcomes with zero escapes to production.
Drove defect governance at enterprise scale: triaged and accelerated resolution for 1,200+ defects, enforcing reproducibility standards and severity discipline that cut average defect age by 35%. Led 80+ defect triage and quality gate reviews as the sole quality lead, aligning Dev, Product, and stakeholders. Pioneered constraint-driven validation for non-deterministic AI outputs — rules, invariants, feasibility, cost, and impact checks — that became the team's standard methodology. Mentored and upskilled 3 senior QAs, reducing review rework by 25%, and championed AI-assisted QA adoption across R&D.
Overhauled data collection and reporting processes, reducing manual inconsistencies by 30% and boosting operational data integrity by 25% across tracking and reconciliation systems — directly improving decision-making accuracy. Established 10+ operational metrics and acceptance criteria for process changes, enforcing outcome validation before every rollout and maintaining 100% Integrated Management System compliance across health, safety, quality, and environmental standards.
Featured Projects & Artifacts
Test strategies, frameworks, and QA artifacts from real enterprise engagements.
Problem-Solving Case Studies
Complex defect investigations and high-impact resolutions that prevented critical production failures.
Problem
The AI PSO scheduling engine was producing plans that passed basic functional checks but violated real-world business constraints — over-allocated resources, infeasible task sequencing, and cost estimates outside acceptable bounds. Standard test cases couldn't catch these violations because no two outputs were identical.
Investigation
Existing test cases validated specific expected outputs — inapplicable for non-deterministic engines. Mapped the domain's hard constraints (resource capacity, task dependencies, cost ceilings) and designed invariant-based checks that held true regardless of the specific schedule produced.
Resolution
Pioneered a constraint-driven validation framework — rules, invariants, feasibility checks, cost bounds, and impact assessments — that validated outputs against business rules rather than exact values. Adopted as the team's standard methodology for all AI output testing.
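The idea behind invariant-based validation can be sketched in a few lines of Python. Everything here — the task fields, the capacity model, the cost ceiling — is illustrative, not the actual framework: the point is that each generated schedule is checked against constraints that must always hold, rather than against one expected output.

```python
# Sketch of constraint-driven validation for a non-deterministic scheduler.
# Field names (id, resource, start, end, cost, depends_on) are hypothetical.

def validate_schedule(tasks, capacity, cost_ceiling):
    """Return a list of constraint violations; an empty list means the plan passes."""
    violations = []
    by_id = {t["id"]: t for t in tasks}

    # Invariant 1: resource capacity is never exceeded at any point in time.
    for t in tasks:
        overlapping = sum(
            1 for u in tasks
            if u["resource"] == t["resource"]
            and u["start"] < t["end"] and t["start"] < u["end"]
        )
        if overlapping > capacity:
            violations.append(f"resource {t['resource']} over-allocated near task {t['id']}")

    # Invariant 2: every task starts only after its dependencies finish.
    for t in tasks:
        for dep in t["depends_on"]:
            if by_id[dep]["end"] > t["start"]:
                violations.append(f"task {t['id']} starts before dependency {dep} ends")

    # Invariant 3: total plan cost stays within the agreed ceiling.
    total = sum(t["cost"] for t in tasks)
    if total > cost_ceiling:
        violations.append(f"total cost {total} exceeds ceiling {cost_ceiling}")

    return violations
```

A feasible plan yields an empty list; any plan the engine produces, however different from the last one, is judged by the same rules.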
Zero AI-related production escapes across 90+ sprints. Framework became the R&D standard for non-deterministic validation.
Problem
Functional UI tests on the IFS platform were consistently green, yet stakeholder-reported data discrepancies in scheduling outputs persisted across releases. Root cause was unknown and intermittent.
Investigation
Shifted focus below the UI layer. Used Postman for systematic API contract validation and deep SQL queries to verify data integrity across service boundaries. Identified a silent data transformation mismatch at an integration point that corrupted values before they surfaced in the UI.
Resolution
Introduced mandatory API + SQL validation layer as a standard gate in the release checklist. All cross-service data flows now verified at the data layer, not just through UI assertions.
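The shape of that data-layer gate can be illustrated with a small Python sketch. The real gate used Postman collections against the platform database; here sqlite3 stands in for the database of record, and the table and field names are stand-ins — the point is reconciling what the API returns against what is actually stored, so silent transformation mismatches surface even when the UI looks correct.

```python
import sqlite3

def reconcile(api_records, conn, table):
    """Compare API records against DB rows keyed by id; return a list of mismatches."""
    mismatches = []
    for rec in api_records:
        row = conn.execute(
            f"SELECT duration_minutes FROM {table} WHERE id = ?", (rec["id"],)
        ).fetchone()
        if row is None:
            mismatches.append((rec["id"], "missing in DB"))
        elif row[0] != rec["duration_minutes"]:
            mismatches.append((rec["id"], f"DB={row[0]} API={rec['duration_minutes']}"))
    return mismatches

# Example: a unit conversion applied twice corrupts one value in transit.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id TEXT PRIMARY KEY, duration_minutes INTEGER)")
conn.execute("INSERT INTO jobs VALUES ('J-1', 90), ('J-2', 45)")
api_payload = [
    {"id": "J-1", "duration_minutes": 90},
    {"id": "J-2", "duration_minutes": 2700},  # seconds leaked through as minutes
]
print(reconcile(api_payload, conn, "jobs"))  # flags J-2 only
```

UI assertions alone would pass here if the front end rounds or reformats the value; the reconciliation catches the corruption at the boundary where it happens.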
Eliminated this class of silent data bugs. End-to-end data correctness verified across all services and integrations from that sprint forward.
Problem
Defects were accumulating across sprints with inconsistent severity classifications, unclear ownership, and no enforced resolution timelines. The release pipeline was being blocked by ambiguous defect states and stakeholders lacked confidence in quality reporting.
Investigation
Audited the full open defect backlog. Found that 60%+ of open items lacked reproducible steps, clear severity justification, or assigned resolution accountability — making triage decisions arbitrary and slow.
Resolution
Led 80+ defect triage and quality gate review sessions, establishing reproducibility standards, severity discipline, and clear Dev accountability. Built Jira & Xray dashboards tracking defect ageing, trends, and coverage — producing weekly stakeholder-ready release reports.
Average defect age cut by 35%. Go/no-go decisions became data-driven. Release pipeline unblocked and on-time delivery sustained across 90+ sprints.
Interactive Impact Dashboard
Real numbers from 3+ years of quality ownership at IFS R&D.
40% of Defects Prevented via Spec Reviews
35% Defect Age Cut
25% Review Rework Reduced (Mentoring)
90+ Sprints · 0 Production Escapes
Sole QA lead across 90+ sprints on a non-deterministic ERP & AI scheduling platform — achieved full traceability from specs to outcomes with zero defects escaping to production across the entire engagement.
80+ defect triage and quality gate reviews conducted as sole QA lead — aligning Dev, Product, and stakeholders on every release decision with data-driven go/no-go reporting.
Reviewing 67+ tech specs before development began flagged edge cases, testability gaps, and integration risks early — preventing an estimated 40% of potential defects from ever entering the codebase.
Structured triage governance — enforcing reproducibility standards, severity discipline, and clear Dev ownership — cut the average time defects remained open by 35%.
Mentoring and upskilling 3 senior QA engineers raised test design quality and defect-writing rigour, reducing the volume of work sent back for correction by 25%.
Led 80+ defect triage and quality gate sessions — producing stakeholder-ready Jira & Xray dashboards that directly informed go/no-go release decisions for enterprise products.
Beyond the Test Cases
Building quality cultures, leading teams, and earning industry recognition.
QA Mentorship
Mentored and upskilled 3 senior QA engineers at IFS, raising test design quality, defect-writing rigour, and execution discipline — reducing review rework by 25%.
Stakeholder Management
Produced stakeholder-ready release reports (coverage, defect trends, ageing, execution status) that directly informed go/no-go decisions for enterprise product releases at IFS.
Quality Gate Ownership
Led 80+ defect triage and quality gate review sessions as sole QA lead — aligning Dev, Product, and stakeholders on release priorities and systematically unblocking pipelines.
AI-Assisted QA Innovation
Championed AI-assisted QA adoption (test design acceleration, regression optimisation, defect clustering, root-cause support) and partnered with Architecture & R&D to modernise QA ways of working across the organisation.