Autodesk, Inc.
Principal Platform Engineer (Cloud Infrastructure - Big Data)
Autodesk, Inc., San Francisco, California, United States, 94199
Principal Engineer, Central Data Lake Lead
Job Requisition ID #24WD77185

Autodesk's Platform Services and Emerging Technology (PSET) team is hiring a Principal Software Engineer with experience automating large-scale cloud infrastructure services. In this exciting role, you will help us automate a robust and scalable data platform used by numerous teams across the company. Reporting to our Software Engineering Manager for Platform Infrastructure, you'll help us tackle tough problems and improve our platform's reliability, resiliency, and scalability. If you enjoy developing high-quality, resilient systems and want to make a big impact with data at Autodesk, we want to meet you!

As a Lead for our Central Data Lake, you will:
- Design, develop, and operate a highly reliable, large-scale data lake
- Partner closely with Product teams to understand requirements and design new capabilities that go directly into customers' hands
- Build and scale data infrastructure that powers batch and real-time processing of billions of records daily
- Automate and administer big data applications and services in the cloud
- Help drive observability into the health of our data infrastructure and understanding of system behavior
- Guide initiatives to establish best practices across infrastructure, deployments, automation, and accessibility
- Set technical direction and influence cross-functional teams
- Ensure operational excellence of our services and meet commitments to our customers regarding reliability, availability, and performance
- Develop and implement security best practices at the data, application, infrastructure, and network layers

Minimum Qualifications
- 10+ years of hands-on experience with large-scale, data-intensive distributed systems, especially query engines, object storage, data warehouses, data lakes, data analytics, SQL/NoSQL databases, distributed file systems, and data platform infrastructure
- Experience with Apache Iceberg, Parquet, Spark, Hive, Presto, Trino, and Avro
- Experience provisioning and managing orchestration tools such as Airflow
- Excellent coding skills and experience in Python and Unix shell scripting
- Experience building, implementing, and automating APIs
- Experience building code-driven infrastructure on AWS through end-to-end automation, including integration and unit tests
- Experience with self-service platform offerings, including building APIs to deploy cloud resources
- Demonstrated experience with EKS, EMR, S3, IAM, networking, and other AWS services
- Experience with observability tools (such as Hive, Spark, Presto, Jenkins, CloudOS, Datadog)
- Experience communicating solutions and resolutions to technical and non-technical audiences
- Willingness to participate in professional development activities to stay current on industry knowledge, and excitement about breaking the status quo by seeking innovative solutions
- Experience with public clouds (AWS, Azure, GCP)
- Computer Science degree (or equivalent experience will be considered)