Intellibridge
Senior Data Engineer
Intellibridge, Washington, District of Columbia, US, 20022
Job Description:
6+ years of experience designing, building, and optimizing data infrastructure on AWS. The Senior Data Engineer will work closely with the Enterprise Data Architect and cross-functional teams to develop scalable, high-performance data solutions, ensuring data integrity, availability, and accessibility to support business analytics and decision-making, with an emphasis on data governance.
Location:
Hybrid (DC area): Chantilly office on Tuesdays and Rosslyn office on Wednesdays.
Clearance:
Active Secret Clearance
Responsibilities:
- Design, develop, and maintain robust data pipelines and ETL processes to ingest, transform, and load data from various sources into our AWS data platform.
- Collaborate with the Enterprise Data Architect to implement and optimize data models, databases, and data warehouses.
- Ensure data quality, integrity, and consistency by implementing comprehensive data validation and cleansing procedures.
- Optimize data storage and retrieval for performance, cost-efficiency, and scalability using AWS services such as Redshift, RDS, S3, Glue, and Athena.
- Develop and implement automation scripts and tools for data processing, monitoring, and maintenance.
- Work closely with data scientists, analysts, and business stakeholders to understand data requirements and deliver efficient data solutions.
- Troubleshoot and resolve data-related issues, ensuring minimal disruption to data operations.
- Implement data security and compliance measures to protect sensitive information and adhere to industry regulations.
- Provide technical guidance and mentorship to junior data engineers and other team members.
Required Skills:
- Proficiency in designing and implementing ETL processes and data pipelines using AWS services such as Glue, Data Pipeline, and Lambda.
- Extensive experience with SQL and database technologies, including Redshift, RDS, and DynamoDB.
- Strong programming skills in languages such as Python, Java, or Scala.
- Experience with data governance best practices and implementation.
- Knowledge of data modeling, data warehousing, and data architecture principles.
- Experience with big data technologies such as Hadoop, Spark, and Kafka.
- Solid understanding of data security, privacy, and compliance best practices.
Secondary Skills:
Cloud, AWS, Data Platforms, Microservices