W3Global
BIGDATA ENGINEER (Lead Data Engineer) - Onsite/Hybrid
W3Global, Atlanta, Georgia, United States, 30383
Job Description
Must-have skills: Cloud architecture (strong), cloud development (strong), Python (strong), Snowflake, AWS Lambda.
We are looking for a strong Data Engineer/Architect to build new, modern data ingestion pipelines using the latest technologies such as AWS Athena, Lambda, Python, and Spark.
You'll be working on data pipelines and tools to provide the underlying data ingestion framework.
Technical Skills
- Be inclined toward process automation and improvement, identifying and automating repetitive tasks.
- Able to handle data engineering operations and enhancement projects with a technical-consultant mindset.
- SQL, Python, PySpark, S3, Lambda, EMR, Glue, Athena, EC2, IAM, Redshift, DMS, Airflow, Jenkins, Snowflake.
- End-to-end data solutions (ingestion, storage, integration, processing, access) on AWS.
- Migrate data from traditional relational database systems to AWS relational databases such as Amazon RDS, Aurora, and Redshift.
- 12+ years of IT experience, with a background in data engineering/analytics.
- Strong hands-on experience with cloud database platforms (Snowflake preferred), building data pipelines, and SQL and Python for data engineering.
- Experience performing, supporting, and leading all aspects of data engineering strategy.
- Excellent root cause analysis skills.
- Ensure effective data pipeline engineering, deployment, ongoing operations, and continuous improvement.
- Manage and perform data operations and data engineering requirements, including automation and optimization.
- Highly motivated self-starter, able to work in a fast-paced environment while managing competing priorities.
- Creative problem solver and highly collaborative teammate who is comfortable working as a key contributor.
- Certification in data engineering and/or cloud platforms is a plus.
- Good written and verbal communication skills; comfortable presenting findings to senior management.
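For illustration only, the kind of serverless ingestion step described above might look like the following minimal sketch: an AWS Lambda handler that lands incoming records in S3 as newline-delimited JSON, date-partitioned so the data can later be queried with Athena. The bucket, prefix, and event shape here are hypothetical placeholders, not part of the role's requirements.

```python
# Minimal sketch of a serverless ingestion step (illustrative only).
# Writes the records carried by a triggering event to S3 as JSON lines,
# partitioned by ingestion date for downstream querying with Athena.
import json
import uuid
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

BUCKET = "example-ingestion-bucket"  # hypothetical landing bucket
PREFIX = "raw/events"                # hypothetical S3 prefix


def lambda_handler(event, context):
    """Persist the event's records to S3 and report what was written."""
    records = event.get("records", [])
    if not records:
        return {"written": 0}

    now = datetime.now(timezone.utc)
    key = (
        f"{PREFIX}/ingest_date={now:%Y-%m-%d}/"
        f"{now:%H%M%S}-{uuid.uuid4().hex}.jsonl"
    )
    body = "\n".join(json.dumps(record) for record in records)
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))
    return {"written": len(records), "key": key}
```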