Maxar Intelligence Inc

Data Engineer

Maxar Intelligence Inc, Westminster, Colorado, United States, 80031



Maxar is looking for an enthusiastic Senior Data Engineer or Senior Staff Data Engineer to join our Data Intelligence team and further our mission of delivering high-quality analytics solutions at Maxar. This position is hybrid, with several days a week on-site with your colleagues in Westminster, CO or Longmont, CO.

Your Role:

The Data Intelligence team owns and maintains a variety of infrastructure, services, and analytics deliverables today. We have recently developed and deployed a new full-stack data lakehouse environment intended to allow more Maxar teams to build analytics solutions. Your primary focus will be to learn our existing solutions, assist in migrating them to the data lakehouse, and then use your skills to develop data-pipeline best practices while helping other departments onboard their data to this new environment. This will be a time of rapid change for the Data Intelligence team, so being flexible and adaptable is essential to success in this role.

What you'll do day-to-day:

Operate in a cloud-based AWS environment.

Work and communicate with Maxar employees as well as external consultants.

Gain exposure to one of the world's foremost repositories of geospatial data.

Design, develop, and support scalable, reliable cloud data solutions using open-source and COTS tooling.

Develop high-quality, resilient data pipelines and business solutions.

Analyze and interpret data from diverse systems for reporting applications.

Actively identify opportunities to improve our infrastructure and propose solutions to realize them.

Collaborate with a team of skilled DevOps Engineers, Data Engineers, and Business Intelligence Developers.

Minimum Requirements:

Must be a U.S. citizen and be willing and able to obtain a security clearance.

Bachelor's degree in computer science, geography, or related field. Four additional years of experience may be substituted for a degree.

Minimum of 5 years of technical experience.

Minimum of 5 years' experience with SQL, including complex queries, SQL tuning, and CTEs.

Minimum of 3 years' experience with Python.

Demonstrated experience building and orchestrating automated, production-level data pipelines and solutions (ETL/ELT).

Experience with file-based data storage, including Parquet or Iceberg.

Experience with data catalogs (e.g., Hive, AWS Glue).

General understanding of key AWS services (e.g., EC2, S3, EKS, IAM, Lambda).

Experience building and/or using data APIs.

Experience with GitHub.

Experience with structured, semi-structured, and unstructured data.

Demonstrated history of exploring and learning new technologies quickly.

Preferred Qualifications:

Active Secret clearance.

Experience with software development.

Experience with geospatial data.

Experience building data-streaming processes.

Experience using PostGIS.

Experience with any of the following: Apache Hive, Trino, Presto, Starburst, OpenMetadata, Apache Superset, Terraform, dbt, Tableau, Fivetran, AI.

Experience implementing resilient, scalable, and supportable systems in AWS.

Experience using a wide variety of open-source technologies and cloud services.

Experience developing multi-step ETLs including DAG creation and scheduling in tools such as Airflow.

Experience using containerization systems like Docker and Kubernetes.

Experience with CI/CD pipeline tools such as Jenkins.