Agtonomy
Software Engineer, Perception
Agtonomy, San Francisco, California, United States, 94199
About Us

Agtonomy is pioneering advanced automation and AI solutions to transform agriculture and beyond. Initially focused on specialty crops, our TeleFarmer platform addresses labor-intensive needs with automation, turning conventional equipment into autonomous machines. By partnering with leading manufacturers like Doosan Bobcat, we integrate smart technology into tractors and other machinery, enhancing safety and efficiency. As we expand into ground maintenance and other industrial applications, our expert team continues to address the key challenges of labor shortages, sustainability, and profitability across various industries.

About the Role

As a Perception / Machine Learning Engineer on the Autonomy Team, you will play a key role in solving challenging perception problems in outdoor vehicle automation. Leveraging your experience, you will implement state-of-the-art ML perception techniques to improve how Agtonomy's tractors perceive and understand the environments where they operate. You will work closely with embedded, localization, and planning engineers on the team to design and evolve the upstream and downstream interfaces of the perception system. This role is perfect for someone who loves applying ML to real-world problems and is excited about making robots perceive in rugged agricultural environments.

What You'll Do

- Apply machine learning to solve challenging perception problems for autonomous systems (e.g., object detection, semantic segmentation, instance segmentation, dense depth, optical flow, tracking).
- Drive the architecture, deployment, and performance characterization of our deep learning models.
- Refine and optimize models for low-latency inference on embedded hardware.
- Design and build cloud-based training and labeling pipelines.
- Collaborate with the hardware and embedded teams on sensor selection and vehicle packaging, given safety requirements.
- Write performant, well-tested software and improve code quality across the Autonomy team through code and design reviews.

What You'll Bring

- 5+ years of experience in software development for problems involving computer vision, machine learning, and robotic perception techniques.
- Foundational understanding of deep learning: model layer design, loss function intuition, and training best practices.
- Experience handling large datasets efficiently and organizing them for training and evaluation.
- Experience curating synthetic and real-world image datasets for training.
- Strong proficiency in modern C++ and Python, and experience writing efficient algorithms for resource-constrained embedded systems.
- Ability to thrive in a fast-moving, collaborative, small-team environment with lots of ownership.
- Excellent analytical, communication, and documentation skills, with a demonstrated ability to collaborate with interdisciplinary stakeholders outside of Autonomy.
- An eagerness to get your hands dirty by testing your code on real robots at real customer farms (giving "field testing" a whole new meaning!).

What Makes You a Strong Fit

- Experience architecting multi-sensor ML systems from scratch.
- Experience with compute-constrained pipelines: optimizing models to balance the accuracy vs. performance tradeoff, leveraging TensorRT, model quantization, etc.
- Experience implementing custom operations in CUDA.
- MS or PhD in Robotics, Computer Science, Computer Engineering, or a related field.
- Publications at top-tier perception/robotics conferences (e.g., CVPR, ICRA).
- Passion for sustainable agriculture and electric vehicles.

Salary and Benefits

The US base salary range for this full-time position is $160,000 to $220,000, plus equity, benefits, and unlimited PTO.

Benefits:
- 100% covered medical, dental, and vision for the employee (coverage for a partner, children, or family is an additional cost)
- Commuter benefits
- Flexible Spending Account (FSA)
- Life insurance
- Short- and long-term disability
- 401k plan
- Stock options
- Collaborative work environment alongside passionate, mission-driven folks!

Interview Process

Our interview process is generally conducted in five (5) phases:
1. Phone screen with People Operations (30 minutes)
2. Video interview with the Hiring Manager (45 minutes)
3. Coding and technical challenge (1 hour with an Autonomy Engineer)
4. Panel interview (video interviews with key stakeholders, 30 to 45 minutes each)
5. Final interviews (CEO, CFO, and VP of Engineering, 30 minutes each)