Technogen International Company, Atlanta, GA
Title: AWS Data Engineer
Location: Remote
Type: Contract
Summary:
We are expanding our Data Engineering toolset and need an experienced AWS, Kafka Streams, Python, and SQL Data Engineer. This role will design and implement our new data pipelines using AWS and Kafka Streams, and train other team members. As a Sr. Data Engineer on our Data Engineering team, you will also design, write, scale, and maintain complex data pipelines using Python within our development framework. You will contribute to the organization's success by partnering with business, analytics, and infrastructure teams to design and build datasets that inform business decisions and increase effectiveness. Collaborating across disciplines, you will identify internal and external data sources, design pipelines and table structures, define ETL strategies, and automate error handling and validation. This team works with stakeholders and divisions across the company, including the executive team, to provide timely, accurate, and reliable data to thousands of users. Your role will be critical in defining the architecture and processes that keep our Snowflake data infrastructure flexible, agile, reliable, responsive, and scalable. You will be a member of a small, high-performing team that supports the data warehouse and analytics teams throughout the company, and you will report to the Manager of Data Engineering.
Primary Responsibilities:
• Implement and manage AWS tools and environment.
• Implement and manage Apache Kafka Streams.
• Using Python, build and maintain data connections from multiple sources, including APIs, SQL databases, and AWS S3, that feed into a Snowflake cloud database system (see the illustrative sketch after this list).
• Work with internal and external users and providers to build datasets that add value to the business and allow for informed business decisions.
• Ensure data consistency, accuracy, and reliability as data and business requirements change.
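To ground the pipeline responsibilities above, the following is a minimal, illustrative Python sketch of the kind of work involved: pulling records from a REST API, staging them in S3, and loading them into Snowflake with COPY INTO. The endpoint, bucket, stage, table, and credential values are hypothetical placeholders, and a production pipeline would add validation, retries, and secrets management.

"""
Illustrative sketch only: REST API -> S3 staging -> Snowflake COPY INTO.
All endpoint, bucket, stage, table, and credential names are placeholders.
"""
import json

import boto3
import requests
import snowflake.connector


def extract_to_s3(api_url: str, bucket: str, key: str) -> None:
    # Request source records from a (hypothetical) REST endpoint.
    response = requests.get(api_url, timeout=30)
    response.raise_for_status()
    records = response.json()

    # Stage the raw payload in S3 as newline-delimited JSON.
    body = "\n".join(json.dumps(record) for record in records)
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body.encode("utf-8"))


def load_into_snowflake(key: str) -> None:
    # Connect with standard Snowflake connector parameters (placeholders).
    conn = snowflake.connector.connect(
        account="my_account",
        user="etl_user",
        password="***",  # use a secrets manager in practice
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        # COPY INTO from an external stage pointing at the S3 bucket.
        conn.cursor().execute(
            f"COPY INTO raw_events FROM @raw_s3_stage/{key} "
            "FILE_FORMAT = (TYPE = JSON)"
        )
    finally:
        conn.close()


if __name__ == "__main__":
    extract_to_s3("https://api.example.com/v1/events", "my-raw-bucket", "events/2024-01-01.json")
    load_into_snowflake("events/2024-01-01.json")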
Required Skills:
• 10 years of experience with the AWS Console, S3, Lambda, Kinesis, Glue, and CloudWatch.
• 8+ years of Kafka Streams experience (see the consumer sketch after this list).
• 8+ years of data engineering, data pipeline development, and ETL experience using Python, SQL, and Snowflake.
• Experience requesting, transforming, and ingesting data from REST and SOAP APIs.
• Proficiency in Python, SQL, cloud databases, and ETL development processes and tools.
• Strong understanding of traditional relational databases, data and dimensional modeling principles, and data normalization techniques.
• Ability to initiate, drive, and manage projects with competing priorities.
• Ability to communicate effectively with business leaders, IT leadership, and engineers.
• Must have a passion for data and helping the business turn data into information and action.
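To illustrate the Kafka and Python experience listed above, here is a minimal consumer sketch using the confluent-kafka Python client; this client choice is an assumption, and the broker address, topic, and consumer group are hypothetical placeholders. A real pipeline would add validation and error handling before staging records for Snowflake.

"""
Illustrative sketch only: consuming a (hypothetical) Kafka topic with the
confluent-kafka Python client and handing each record downstream.
"""
import json

from confluent_kafka import Consumer


def consume_events() -> None:
    consumer = Consumer({
        "bootstrap.servers": "broker:9092",  # placeholder broker
        "group.id": "data-eng-pipeline",     # placeholder consumer group
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["events"])           # placeholder topic

    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None:
                continue
            if msg.error():
                # Surface broker/partition errors to the error-handling layer.
                print(f"Kafka error: {msg.error()}")
                continue
            record = json.loads(msg.value())
            # Hand off to validation / Snowflake staging (not shown here).
            print(record)
    finally:
        consumer.close()


if __name__ == "__main__":
    consume_events()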
Bonus Skills:
• Experience with data streaming technologies like PySpark.
• Experience with pipeline technologies like dbt, Apache Airflow, or Fivetran.
• Experience with MPP technologies and databases.
• Experience with data visualization tools like Tableau or Sigma.
• Experience with containerization and orchestration tools like Docker and Kubernetes.
• Experience with Azure data product offerings and platform.
• Experience working with Salesforce and SAP data.
• Experience using Terraform or other infrastructure-as-code tools.
Required Education/Experience:
• Bachelor's degree in information systems, computer science, or a related technical field.