Tiger Analytics
Data Engineer - AWS
Tiger Analytics, Jersey City, New Jersey, United States, 07390
Tiger Analytics is a fast-growing advanced analytics consulting firm. Our consultants bring deep expertise in Data Science, Machine Learning, and AI. We are the trusted analytics partner for multiple Fortune 500 companies, enabling them to generate business value from data. Our business value and leadership have been recognized by various market research firms, including Forrester and Gartner. We are looking for top-notch talent as we continue to build the world's best analytics consulting team.
As an AWS Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines on AWS cloud infrastructure. You will work closely with cross-functional teams to support data analytics, machine learning, and business intelligence initiatives. The ideal candidate will have strong experience with AWS services, Databricks, and Apache Airflow.
Key Responsibilities:
- Design, develop, and deploy end-to-end data pipelines on AWS cloud infrastructure using services such as Amazon S3, AWS Glue, AWS Lambda, and Amazon Redshift.
- Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements.
- Build and maintain orchestration workflows using Apache Airflow to automate data pipeline execution, scheduling, and monitoring (a minimal sketch follows this list).
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver scalable data solutions.
- Optimize data pipelines for performance, reliability, and cost-effectiveness, leveraging AWS best practices and cloud-native technologies.
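To give a concrete sense of the Airflow orchestration work described above, here is a minimal DAG sketch. The DAG id, task names, schedule, and the extract/transform/load placeholders are hypothetical illustrations (assuming Apache Airflow 2.4+), not code from an actual engagement; a production pipeline would invoke real AWS and Databricks integrations instead of print statements.

```python
# Minimal illustrative Airflow DAG: extract -> transform -> load.
# All names and the schedule are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: in practice this might stage raw files from Amazon S3.
    print("extracting raw data")


def transform(**context):
    # Placeholder: in practice this might trigger a Databricks/Spark job.
    print("transforming staged data")


def load(**context):
    # Placeholder: in practice this might load results into Amazon Redshift.
    print("loading results into the warehouse")


with DAG(
    dag_id="example_etl_pipeline",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # 'schedule' requires Airflow 2.4+
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # The >> operator declares the dependency chain Airflow will enforce.
    extract_task >> transform_task >> load_task
```

Airflow then handles the scheduling, retries, and monitoring of each task run, which is the automation this responsibility refers to.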
Requirements
- 8+ years of experience building and deploying large-scale data processing pipelines in a production environment.
- Hands-on experience designing and building data pipelines on AWS cloud infrastructure.
- Strong proficiency in AWS services such as Amazon S3, AWS Glue, AWS Lambda, and Amazon Redshift.
- Strong experience with Databricks and Apache Spark for data processing and analytics.
- Hands-on experience with Apache Airflow for orchestrating and scheduling data pipelines.
- Solid understanding of data modeling and database design principles, with proficiency in both SQL and Spark SQL (see the sketch after this list).
- Experience with version control systems (e.g., Git) and CI/CD pipelines.
- Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
- Strong problem-solving skills and attention to detail.
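As a small illustration of the Spark SQL proficiency listed above, below is a minimal PySpark sketch. The S3 paths and the event_date, country, and user_id columns are hypothetical assumptions for the example, not a schema from any actual project.

```python
# Minimal PySpark sketch: express a transformation in Spark SQL.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("example-transform").getOrCreate()

# Read raw events (hypothetical S3 path; any Parquet source works the same way).
events = spark.read.parquet("s3://example-bucket/raw/events/")
events.createOrReplaceTempView("events")

# Compute daily active users per country with Spark SQL.
daily_counts = spark.sql("""
    SELECT event_date,
           country,
           COUNT(DISTINCT user_id) AS active_users
    FROM events
    GROUP BY event_date, country
""")

# Write results back out, partitioned for downstream query performance.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_active_users/"
)
```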
Benefits
This position offers an excellent opportunity for significant career development in a fast-growing and challenging entrepreneurial environment with a high degree of individual responsibility.