Amazon
Data Engineer II, Amazon Last Mile
Amazon, Seattle, Washington, US 98127
As part of the Last Mile Science & Technology organization, you’ll partner closely with Product Managers, Data Scientists, and Software Engineers to drive improvements in Amazon's Last Mile delivery network. You will leverage data and analytics to generate insights that accelerate the scale, efficiency, and quality of the routes we build for our drivers through our end-to-end last mile planning systems. You will develop complex data engineering solutions using the AWS technology stack (S3, Glue, IAM, Redshift, Athena). You should have deep expertise in, and a passion for, working with large data sets, building complex data processes, tuning performance, integrating data from disparate data stores, and programmatically identifying patterns. You will work with business owners to develop and define key business questions and requirements, and you will guide other engineers on industry best practices and direction. Analytical ingenuity and leadership, business acumen, effective communication, and the ability to work with cross-functional teams in a fast-paced environment are critical skills for this role.
Key job responsibilities
- Design, implement, and support data warehouse / data lake infrastructure using the AWS big data stack: Python, Redshift, QuickSight, Glue / Lake Formation, EMR / Spark / Scala, Athena, etc.
- Extract large volumes of structured and unstructured data from various sources (relational, non-relational, and NoSQL databases) and message streams, and construct complex analyses.
- Develop and manage ETL jobs to source data from various systems and create a unified data model for analytics and reporting.
- Perform detailed source-system analysis, source-to-target data analysis, and transformation analysis.
- Participate in the full ETL development cycle: design, implementation, validation, documentation, and maintenance.
- Drive programs and mentor engineers to build scalable solutions aligned with the team's long-term strategy.
Minimum Requirements
- 3+ years of data engineering experience.
- Experience with data modeling, warehousing, and building ETL pipelines.
- Experience with AWS technologies such as Redshift, S3, AWS Glue, EMR, Kinesis, Kinesis Data Firehose, Lambda, and IAM roles and permissions.
- Experience with non-relational databases and data stores (object storage, document or key-value stores, graph databases, column-family databases).