ESR Healthcare
Data Engineer, Chantilly, VA
ESR Healthcare, Chantilly, Virginia, United States, 22021
Send your resume if you are a fit.
BASIC QUALIFICATIONS:
A Bachelor's degree in Engineering, Computer Science, Statistics, or Applied Mathematics plus a minimum of 2 years of relevant experience is required, or a Master's degree in a related technical discipline.
CLEARANCE REQUIREMENTS:
A TS/SCI security clearance with the ability to obtain a Polygraph is required at time of hire. Candidates must be able to obtain the Polygraph within a reasonable amount of time from date of hire. Applicants selected will be subject to a U.S. Government security investigation and must meet eligibility requirements for access to classified information. Due to the nature of work performed within our facilities, U.S. citizenship is required.
As a Data Engineer, you’ll lead modeling and simulation activities as you participate in requirements analysis and management, functional analysis, performance analysis, system design, trade studies, and systems integration and test (verification). It’s your chance to step up to the challenge and prove you’re ready to lead the world.
REPRESENTATIVE DUTIES AND TASKS:
Support the Insider Threat mission by working with various security system data owners to automate data integration and collection strategies.
Work closely with the data science team to ensure data cleanliness and accuracy.
Design, develop, and implement scalable ETL processes that ingest disparate datasets into a Hadoop infrastructure.
Develop processes to identify data drift and malformed records.
Develop technical documentation and standard operating procedures.
Lead technical tasks for small teams or projects.
KNOWLEDGE, SKILLS, AND ABILITIES:
Working knowledge of entity resolution systems.
Experience with messaging systems like Kafka.
Experience with NoSQL and/or graph databases like MongoDB or ArangoDB.
Experience with SQL and databases such as MongoDB, Oracle, and Postgres.
Working experience with ETL processing.
Working experience with data workflow products like StreamSets or NiFi.
Working experience with Python RESTful API services and JDBC.
Experience with Hadoop and Hive/Impala.
Experience with Cloudera Data Science Workbench is a plus.
Understanding of PySpark.
Creative thinker with the ability to multi-task.
Excellent understanding of data engineering concepts, principles, and theories.