Robert Half is hiring: Data Engineer in Baltimore
Robert Half, Baltimore, MD, US
Job Description
We are offering an exciting opportunity for a Data Engineer at our location in Baltimore, Maryland. The ideal candidate will design and maintain data pipelines, integrate data from various sources, and ensure our databases meet our business objectives. This role involves working in a diverse team focused on improving efficiency and scalability while ensuring data security and compliance.
Responsibilities:
• Develop, maintain, and manage robust, efficient data pipelines to handle large amounts of data.
• Integrate data from various sources such as APIs and third-party datasets into a unified and accessible format.
• Design, establish, and manage databases, data warehouses, and other data storage solutions to meet business goals.
• Implement and manage Extract, Transform, Load (ETL/ELT) processes to ensure accurate and efficient data flow between systems.
• Ensure high data quality by implementing appropriate monitoring, testing, and validation techniques.
• Collaborate with data integration engineers, business analysts, and software engineers to understand and meet their data requirements.
• Automate manual data processes and workflows to enhance efficiency and reduce errors.
• Adhere to data security, privacy, and regulatory compliance requirements (e.g., GDPR, HIPAA).
• Continuously monitor and optimize the performance of data systems to increase efficiency and scalability.
Requirements:
• Proficiency in Apache Kafka is a must for real-time data processing (illustrated in the sketch after this list)
• Familiarity with Apache Pig for scripting and dataflow programming
• Extensive knowledge of Apache Spark for large-scale data processing
• Demonstrable experience with cloud technologies for data storage and management
• Proficiency in data visualization to present data in a meaningful way
• Ability to implement algorithms that solve complex data problems
• Strong background in analytics for interpreting trends and patterns
• Mastery of Apache Hadoop for distributed data processing
• Experience in API development for seamless data integration
• Proficiency in AWS technologies for cloud-based data solutions
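
For candidates wondering how these tools fit together day to day, below is a minimal sketch of the kind of Kafka-to-Spark streaming pipeline the requirements above describe. It is illustrative only, not Robert Half's actual stack: the broker address, topic, event schema, and output paths are hypothetical placeholders, and running it requires the spark-sql-kafka connector on the Spark classpath.

# Minimal PySpark Structured Streaming sketch: consume JSON events from
# Kafka, parse them against a schema, and land them as Parquet files.
# All connection details below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Assumed shape of the incoming JSON events.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("source", StringType()),
    StructField("value", DoubleType()),
])

# Subscribe to a Kafka topic; broker and topic names are placeholders.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "events")
       .load())

# Kafka delivers raw bytes; cast the message value to a string and
# parse it as JSON using the schema above.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*"))

# Write the parsed rows to Parquet; the checkpoint directory lets the
# job recover from failures without duplicating output files.
query = (events.writeStream
         .format("parquet")
         .option("path", "/data/events")            # placeholder path
         .option("checkpointLocation", "/chk/events")
         .start())

query.awaitTermination()

The same read-parse-validate-write pattern underlies the ETL/ELT and data-quality responsibilities listed earlier; the checkpoint location is what gives the file sink its fault-tolerance guarantees.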