Global Channel Management

Remote Lead Data Engineer

Global Channel Management, Dallas, Texas, United States, 75215


About the job

The Remote Lead Data Engineer needs 7+ years of experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales).

Remote Lead Data Engineer requires:

Strong background in math, statistics, computer science, data science, or a related discipline

Advanced knowledge of one of the following languages: Java, Scala, Python, C#

Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake

Proficient with:

Data mining/programming tools (e.g. SAS, SQL, R, Python)

Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)

Data visualization (e.g. Tableau, Looker, MicroStrategy)

Comfortable learning about and deploying new technologies and tools.

Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.

Good written and oral communication skills and ability to present results to non-technical audiences

Knowledge of business intelligence and analytical tools, technologies and techniques.

Familiarity and experience in the following is a plus:

AWS certification

Spark Streaming

Kafka Streaming / Kafka Connect

ELK Stack

Cassandra / MongoDB

CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools


Remote Lead Data Engineer duties:

Create and manage cloud resources in AWS

Ingest data from diverse sources that expose data through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement data ingestion and processing using Big Data technologies

Process and transform data using various technologies such as Spark and cloud services

Develop automated data quality checks to ensure the right data enters the platform and to verify the results of calculations
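The data quality duty above could be sketched as follows. This is a minimal illustration in plain Python of the kind of automated check described; in this role such logic would typically run as a Spark job or an Airflow task, and all field names (`account_id`, `loan_balance`) are hypothetical examples, not part of the posting.

```python
# Minimal sketch of an automated data quality check on a batch of records,
# represented here as a list of dicts for illustration. Field names are
# hypothetical placeholders for consumer-finance data.

def check_batch(rows, required_fields, min_rows=1):
    """Return a list of human-readable issues found in the batch."""
    issues = []
    # Volume check: catch empty or truncated ingestions.
    if len(rows) < min_rows:
        issues.append(f"expected at least {min_rows} rows, got {len(rows)}")
    for i, row in enumerate(rows):
        # Completeness check: required fields must be present and non-empty.
        for field in required_fields:
            if row.get(field) in (None, ""):
                issues.append(f"row {i}: missing required field '{field}'")
        # Validity check: a loan balance should never be negative.
        balance = row.get("loan_balance")
        if balance is not None and balance < 0:
            issues.append(f"row {i}: negative loan_balance {balance}")
    return issues

batch = [
    {"account_id": "A1", "loan_balance": 1200.50},
    {"account_id": "", "loan_balance": -10.0},
]
print(check_batch(batch, required_fields=["account_id", "loan_balance"]))
```

The same pattern scales to Spark by expressing each check as a DataFrame filter and counting violating rows, or by delegating to a dedicated validation framework.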