Acubed

Staff Software Engineer, Testing

Acubed, Sunnyvale, California, United States, 94087


WAYFINDER

Our Wayfinder team is building scalable, certifiable autonomy systems to power the next generation of commercial aircraft. Our team of experts is driving the maturation of machine learning and other core technologies for autonomous flight; we are creating a reference architecture that includes hardware, software, and a data-driven development process to allow aircraft to perceive and react to their environment. Autonomous flight is transforming the transportation industry, and our team is at the heart of this revolution.

The Opportunity/Role Description

As a Staff Software Engineer, Testing on the Wayfinder team, you will play a critical role in developing and validating safety-critical AI-based systems that empower commercial aircraft to perceive, understand, and react to their surroundings in real time. This position focuses on ensuring that the machine learning (ML) algorithms and AI models used in our systems meet the stringent safety and reliability standards required for commercial aviation certification.

Your primary responsibility will be to lead and contribute to the design, development, and implementation of the testing infrastructure for our AI-driven safety-critical systems. You will collaborate with cross-functional teams of software engineers, AI/ML experts, safety engineers, and system architects to develop and execute comprehensive testing strategies, ensuring the highest levels of trust and confidence in our AI solutions.

Responsibilities:

- Lead the development of robust and scalable testing frameworks for ML algorithms and AI systems used in safety-critical environments.
- Design and execute rigorous verification and validation strategies to ensure AI systems meet aviation certification standards.
- Collaborate with systems engineers, AI/ML researchers, and data engineers to develop tests, analyze results, and improve the performance, robustness, and reliability of neural networks and other AI models.
- Build automated testing pipelines to evaluate AI system behavior under a wide range of operational conditions, ensuring they perform safely and reliably in real-world scenarios.
- Develop tools and metrics to assess the generalization, accuracy, and failure modes of AI models, identifying areas for improvement through better data sets, feature engineering, or architecture modifications.
- Work closely with the systems and certification teams to ensure the AI systems meet all performance, embeddability, and compliance requirements.
- Conduct code reviews and mentor junior engineers to ensure best practices in testing, documentation, and code quality.
- Stay informed about advancements in AI safety testing, aviation standards, and certification processes, and help shape the testing strategies for future AI-based aviation systems.

Requirements:

- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 11+ years of experience in software development and testing, with a focus on safety-critical systems.
- Strong expertise in developing testing frameworks for complex systems, particularly for machine learning algorithms and AI-based solutions.
- Proficiency in programming languages such as Python and C/C++, and experience with automation tools and CI/CD pipelines.
- Familiarity with ML/AI frameworks (TensorFlow, PyTorch, etc.) and testing methodologies for neural networks.
- Demonstrated ability to work in a collaborative, cross-functional environment.
- Strong problem-solving skills and a commitment to quality and safety.
- Strong sense of ownership.

Preferred qualifications:

- Experience with aviation safety certification standards, such as DO-178C or equivalent, in aviation or other highly regulated industries.
- Experience in developing safety-critical systems for the aviation industry or other highly regulated sectors (e.g., automotive or healthcare).
- Knowledge of AI safety concerns, including adversarial attacks, robustness, explainability, and uncertainty estimation.
- Experience with cloud platforms and distributed testing environments.
- Strong understanding of data collection, augmentation, and synthetic data generation to improve testing outcomes.

Compensation:

The estimated salary range for this position is $164,000 to $203,000 annually, plus a target bonus and a comprehensive benefits package including health insurance, 401(k), and flight training. Your exact compensation will be determined by your location and experience.

Why Join Us?

Be a part of a dynamic team that values creativity, collaboration, and innovation. At Acubed, your contributions will directly impact our digital future. We welcome diverse perspectives and are committed to fostering an inclusive environment.

* Please Note: Acubed does not offer sponsorship of employment-based nonimmigrant visa petitions for this role.