CereCore
Sr. Data Engineer
CereCore, Nashville, Tennessee, United States, 37247
Classification: Contract
Contract Length: 1 year
Address: Nashville, TN
Job ID: 16525770
CereCore® provides EHR implementations, IT and application support, IT managed services, technical staffing, strategic IT consulting, and advisory services to hospitals and health systems nationwide. Our heritage is in the hallways of some of America’s top-performing hospitals. We have served as leaders in finance, operations, technology, and as clinicians turned power users and innovators. At CereCore, we know firsthand the power that aligned technology can provide in delivering care. As a wholly-owned subsidiary of HCA Healthcare, we are committed to bringing the expertise we have gained as operators to deliver IT services that emphatically address the needs of health systems across the United States. Our team of over 600 clinical and technical professionals has implemented EHR systems in more than 400 facilities and provides managed services support to tens of thousands of health system employees. We work tirelessly to provide healthcare organizations specialized IT services that support the delivery of patient care. The Link to Life-Saving Care.
Position Summary
The Data Engineer serves as a primary development resource for writing, testing, implementing, documenting, and maintaining NextGen solutions for GCP Cloud enterprise data initiatives. The role requires working closely with data teams, frequently in a matrixed environment as part of a broader project team. Because GCP/Hadoop technology and practice are emerging and fast-evolving, the position requires staying well-informed of technological advancements and putting new innovations into effective practice.
In addition, this position requires a candidate who can analyze business requirements and design, construct, test, and implement solutions with minimal supervision.
This candidate will have a track record of participation in successful projects in a fast-paced, mixed-team environment.
Responsibilities
This role will provide application development for specific business environments.
Build and support a GCP-based ecosystem designed for enterprise-wide analysis of structured, semi-structured, and unstructured data.
Bring new data sources into GCP, transform them, and load them into databases; support regular requests to move data from one cluster to another (see the ingestion sketch after this list).
Develop a strong understanding of the relevant product area, codebase, and/or systems.
Demonstrate proficiency in data analysis, programming, and software engineering.
Work closely with the Lead Architect and Product Owner to define, design, and build new features and improve existing products.
Produce high-quality code with good test coverage, using modern abstractions and frameworks.
Work independently and complete tasks on schedule by exercising strong judgment and problem-solving skills.
Collaborate closely with team members to execute development initiatives using Agile practices and principles.
Participate in the deployment, change, configuration, management, administration, and maintenance of deployment processes and systems.
Prioritize workload effectively to meet deadlines and work objectives.
Work in an environment with rapidly changing business requirements and priorities.
Work collaboratively with Data Scientists and business and IT leaders throughout the company to understand their needs and use cases.
Work closely with management, architects, and other teams to develop and implement projects.
Actively participate in technical group discussions and adopt new technologies to improve development and operations.
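As a hedged illustration of the ingestion responsibility above (a minimal sketch, not a prescribed implementation), the following Python snippet loads a new CSV source from Cloud Storage into BigQuery using the google-cloud-bigquery client. The project, bucket, and table names are hypothetical placeholders, not details from this posting.

# Minimal ingestion sketch: load a CSV file from Cloud Storage into BigQuery.
# "example-project", the bucket, and the table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # skip the header row
    autodetect=True,      # infer the schema from the file
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://example-bucket/new_source/encounters.csv",  # hypothetical source file
    "example-project.analytics.encounters_raw",       # hypothetical destination table
    job_config=job_config,
)
load_job.result()  # block until the load job completes
print(f"Loaded {load_job.output_rows} rows.")

In practice the destination schema would be declared explicitly rather than autodetected, but the call shape above is the core of a routine "bring a new source into GCP" task.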
Requirements
Good understanding of best practices and standards for GCP data process design and implementation.
Two or more years of hands-on experience with the GCP platform, including experience with many of the following components (a brief Pub/Sub sketch follows this list):
Cloud Run, GKE, Cloud Functions
Spark Streaming, Kafka, Pub/Sub
Bigtable, Firestore, Cloud SQL, Cloud Spanner
JSON, Avro, Parquet
Python, Java, Terraform
BigQuery, Dataflow, Data Fusion
Cloud Composer, Dataproc, CI/CD, Cloud Logging
Vertex AI, NLP, GitHub
Ability to multitask and to balance competing priorities.
Ability to define and utilize best practice techniques and to impose order in a fast-changing environment.
Must have strong problem-solving skills.
Strong verbal, written, and interpersonal skills, including a desire to work within a highly matrixed, team-oriented environment.
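To ground the streaming components named above, here is a minimal sketch, assuming the google-cloud-pubsub Python client, of publishing a message and pulling it back synchronously. The project, topic, and subscription names are hypothetical, and the topic and subscription are assumed to already exist.

# Minimal Pub/Sub sketch: publish one JSON event and pull it back.
# "example-project", "adt-events", and "adt-events-sub" are hypothetical names.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "adt-events")

# publish() returns a future; result() blocks until the server
# acknowledges the message and returns its message ID.
future = publisher.publish(topic_path, b'{"event": "admit", "mrn": "12345"}')
print("published message", future.result())

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("example-project", "adt-events-sub")

# Synchronously pull up to 10 messages, then acknowledge them.
response = subscriber.pull(request={"subscription": subscription_path, "max_messages": 10})
ack_ids = [msg.ack_id for msg in response.received_messages]
if ack_ids:
    subscriber.acknowledge(request={"subscription": subscription_path, "ack_ids": ack_ids})
for msg in response.received_messages:
    print(msg.message.data.decode())

A production pipeline would more likely consume the topic via a streaming Dataflow job or a streaming subscription callback; the synchronous pull is used here only to keep the sketch self-contained.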
Preferred
A successful candidate may have:
Experience in the healthcare domain
Experience with patient data
Hardware/Operating Systems:
Linux, UNIX
GCP
Distributed, highly scalable processing environments.
Databases:
RDBMS – MS SQL Server/Teradata/Oracle
NoSQL – HBase, Cassandra, MongoDB, in-memory, columnar, and other emerging technologies
Source Control Systems – Git, Mercurial
Continuous Integration Systems – Jenkins or Bamboo
Certifications (a plus, but not required):
Google Cloud Professional Data Engineer
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.