Future-Proof Consulting, LLC
Senior Data Engineer
Future-Proof Consulting, LLC, Washington, DC 20022, US
SUMMARY OF DUTIES
Design, develop, and maintain scalable ETL data pipelines for batch and streaming data, effectively handling both structured and unstructured formats. Utilize advanced analytics tools and methodologies to supplement and enhance the analytics platform, contributing to the organization's progression toward advanced analytical capabilities. Collaborate with business and technical teams to gather requirements and design comprehensive data solutions that align with organizational goals and DT strategies. Implement complex data operations using Python, Java, Spark, and SQL, optimizing ETL/ELT processes for performance and efficiency across diverse data types. Manage data across various platforms, leveraging cloud services and big data technologies.
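For illustration, a minimal sketch of the kind of batch ETL pipeline this role involves, written in PySpark; the bucket paths, column names, and table layout are hypothetical, and a production pipeline would add validation, incremental loads, and monitoring.

```python
# Minimal batch ETL sketch in PySpark. All paths, columns, and names
# are hypothetical placeholders, not part of any actual system.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_batch_etl").getOrCreate()

# Extract: read semi-structured JSON from a raw landing zone
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: enforce types, drop records missing required keys,
# and derive a date column to partition the output by
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "order_ts"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet for downstream analytics
(orders.write
       .mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-bucket/curated/orders/"))

spark.stop()
```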
SKILLS/EXPERIENCE
Minimum Skillset Requirements:
- Experience with Agile project management methodologies
- Expertise in one or more of: Python, PowerShell, Bash, or JavaScript
- Working knowledge of Linux, macOS, Windows, and mobile operating systems, platforms, and internals
- Working knowledge of modern computer networking technologies
Minimum Qualifications
- 7+ years of experience building ETL data pipelines for data ingestion from relational and non-relational databases
- Strong experience with PySpark, AWS, and Databricks
- 2+ years of experience building and loading data into a data lakehouse (see the sketch after this list)
- 2+ years of experience managing or cataloging data assets
- 2+ years of experience in data modeling
- 2+ years of experience providing L3 support for data pipelines
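For the lakehouse requirement above, a minimal sketch of loading curated data into a Delta Lake table, the table format commonly underlying a Databricks lakehouse. It assumes a Delta-enabled Spark environment (such as a Databricks cluster), and the path and table names are hypothetical.

```python
# Minimal lakehouse load sketch: append curated data to a Delta table.
# Assumes Delta Lake support is available (e.g., a Databricks cluster).
# Path and table names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse_load").getOrCreate()

curated = spark.read.parquet("s3://example-bucket/curated/orders/")

# Appending to a managed Delta table registered in the metastore makes
# the data discoverable for cataloging and downstream SQL analytics.
(curated.write
        .format("delta")
        .mode("append")
        .saveAsTable("analytics.orders"))
```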
Minimum Knowledge: SQL, Python, Informatica, AWS, or related ETL tools
Communications and Interpersonal Skills: Must have excellent oral and written communication and presentation skills.
This hybrid role requires you to be in the office during quarterly PI (Program Increment) planning sessions.