ResMed
Senior Data Engineer
ResMed, San Diego, California, United States, 92189
The Digital Health Technology team powers digital experiences and engagement to enhance the lives of millions of people every day through connected care. We build, deliver, and manage a portfolio of data management platforms and mobile offerings in support of our core businesses. We thrive on simple, elegant architecture and agility. You'll be immersed in a dynamic, high-growth environment and empowered to excel, take informed risks, and drive ingenuity across the enterprise.
Responsibilities:
- Create, expand, optimize, and maintain data and data pipeline architecture, and optimize data flow and collection for cross-functional teams.
- Support software developers, database architects, data analysts, and data scientists on data initiatives to ensure optimal data delivery architecture is consistent throughout ongoing projects.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Python and AWS 'big data' technologies such as Glue, Lambda, and EMR.
- Work with stakeholders, including Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Work on data tools for analytics and data science team members.
Minimum Requirements
Master's degree or equivalent in Computer Science, Statistics, Informatics, Information Systems, or a related field, and 6 years of experience in Data Engineering.
Applicants must have demonstrated experience with the following:
- 3 years of experience in a full-cycle data engineering role;
- Python coding;
- building and working with AWS Data Lakes;
- AWS Data Pipeline and CI/CD processes;
- big data tools: Hadoop, Spark, and Kafka;
- data pipeline and workflow management tools: Airflow and AWS Step Functions;
- AWS cloud services: EMR, RDS, and Redshift;
- stream-processing systems: Kinesis and Spark Streaming;
- structured, semi-structured, and unstructured datasets;
- building and optimizing 'big data' data pipelines, architecture, and data sets;
- performing root cause analysis on internal and external data and processes to answer business questions and identify opportunities for improvement;
- building processes supporting data transformation, data structures, metadata, dependency, and workload management;
- manipulating, processing, and extracting value from large, disconnected datasets;
- message queuing, stream processing, and building highly scalable 'big data' data stores;
- supporting and working with cross-functional teams in software development.
Any and all experience can be gained concurrently.
Position offered by ResMed Digital Health Inc. This is a 100% remote position reporting to San Diego, CA.