Dexian
F1-OPT Junior Data Engineer
Dexian, Houston, Texas, United States, 77246
This is a remote position.

F1-OPT Junior Data Engineer (1 year experience, hybrid)
F1-OPT visa holders only

DISCLAIMER: We're building our future team! Submit your application and we'll keep your skills in mind for upcoming opportunities that match your experience. While we can't guarantee immediate placement, US/Canada residents will get priority during screening. Thanks for your interest!

Hiring Type:
Full-Time

Base Salary:
$56K-$66K per annum

Position Summary
Join a fast-paced, innovative, and collaborative environment focused on providing an AIOps platform that enhances the intelligence of healthcare infrastructure. Work closely with subject matter experts and colleagues to build and scale machine learning and AI solutions that detect, predict, and recommend fixes for issues before they impact systems, improving the efficiency, reliability, and performance of CVS Health's IT operations.

Key Responsibilities include:
- Data pipeline development: Design, implement, and manage data pipelines for extracting, transforming, and loading data from various sources into data lakes for processing, analytics, and correlation.
- Data modeling: Create and maintain data models that ensure data quality, scalability, and efficiency.
- Develop and automate processes to clean, transform, and prepare data for analytics, ensuring data accuracy and consistency.
- Data integration: Integrate data from disparate sources, both structured and unstructured, to provide a unified view of key infrastructure platform and application data.
- Utilize big data technologies such as Kafka to process and analyze large volumes of data efficiently.
- Implement data security measures to protect sensitive information and ensure compliance with data and privacy regulations.
- Create and maintain documentation for data processes, data flows, and system configurations.
- Performance optimization: Monitor and optimize data pipelines and systems for performance, scalability, and cost-effectiveness.

Characteristics of this role:
- Team player: Willing to teach, share knowledge, and work with others to make the team successful.
- Communication: Exceptional verbal, written, organizational, and presentation skills.
- Creativity: Ability to take written and verbal requirements and generate innovative ideas.
- Attention to detail: Systematically and accurately research future solutions and current problems.
- Strong work ethic: The innate drive to do work extremely well.
- Passion: A drive to deliver better products and services than customers expect.

Required Qualifications
- 2+ years of programming experience in languages such as Python, Java, and SQL.
- 2+ years of experience with ETL tools and database management (relational and non-relational).
- 2+ years of experience with data modeling techniques and tools to design efficient, scalable data structures.
- Skills in data quality assessment, data cleansing, and data validation.

Preferred Qualifications
- Knowledge of big data technologies and cloud platforms.
- Experience with technologies like PySpark, Databricks, and Azure Synapse.

Education
Bachelor’s degree in Computer Science, Information Technology, or related field, or equivalent working experience.