Saxon Global

Remote Data Cloud Engineer

Saxon Global, Des Moines, Iowa, United States, 50319


Apex Systems is looking for a Remote Data Engineer. This is a contract role through the end of the year, with potential to extend.

Description: The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives, and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives.

Qualifications for Data Engineer

Advanced SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of database systems.

Experience building and optimizing 'big data' data pipelines, architectures and data sets.

Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

Strong analytic skills related to working with unstructured datasets.

Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.

Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.

They should also have experience using the following software/tools:

Experience with relational SQL and NoSQL databases: RDS, DynamoDB

Experience with data pipeline and workflow management tools: Glue ETL, Step Functions, Airflow, etc.

Experience with AWS cloud compute services: EC2, Glue, Athena, QuickSight, Lambda, Redshift

Experience with AWS cloud database/storage services: RDS, S3, Lake Formation

Experience with object-oriented/object function scripting languages: Python, Java


Required Skills: Ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. Experience with relational SQL and NoSQL databases: RDS, DynamoDB. Experience with data pipeline and workflow management tools: Glue ETL, Step Functions, Airflow, etc. Experience with AWS cloud compute services: EC2, Glue, Athena, QuickSight, Lambda, Redshift. Experience with AWS cloud database/storage services: RDS, S3

Basic Qualification:

Additional Skills:

Background Check: Yes

Notes:

Selling points for candidate:

Project Verification Info:

Candidate must be your W2 Employee: No

Exclusive to Apex: No

Face to face interview required: No

Candidate must be local: No

Candidate must be authorized to work without sponsorship: No

Interview times set: Yes

Type of project: Development/Engineering

Master Job Title: Data Analyst

Branch Code: Des Moines