VDart
Data Engineer
VDart, Seattle, Washington, us, 98127
Job Title : Data Engineer
Duration : 6 months, with possible 6-month extensions (long-term contract)
Location : Miami, FL or Seattle, WA (hybrid: 3 days onsite, 2 days remote)
Job Description:
As a data engineer, you will play a crucial role in building and maintaining data infrastructure and pipelines for our revenue management systems. You will collaborate closely with data science and business stakeholders to ensure their data requirements are met consistently, while also ensuring that the data pipelines are robust, scalable, and efficient. Your expertise in modern software development practices and data engineering frameworks will be instrumental in implementing our data workflows and machine learning models in production.
Key Responsibilities:
Data Pipelines: Develop and maintain robust, scalable data pipelines to ingest, process, and transform large data sets using tools including, but not limited to, Python, Snowflake, and Bash
ML Ops Integration: Implement ML Ops practices to streamline the deployment, monitoring, and management of machine learning models in production environments
Cloud Services: Utilize AWS or other cloud services as required for data engineering, data science, and ML modeling
Dev Ops Practices: Apply modern software development practices, including CI/CD, version control, and automated testing, to ensure efficient and reliable data engineering processes
Collaboration: Work closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver robust, scalable data pipelines that meet them
Documentation: Maintain comprehensive documentation of data workflows, pipelines, and processes
Performance Optimization: Continuously monitor and optimize data processes and systems for performance, scalability, and cost-efficiency
Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field
Experience: Minimum 5 years of experience in data engineering or a related role, with a proven record of building and maintaining data pipelines and infrastructure
SQL Skills: Advanced proficiency in SQL and experience with relational databases such as Snowflake and Oracle
Programming: Strong programming skills in Python or a similar language, with experience developing data processing scripts and applications
Scripting: Proficiency in Bash scripting for automating data workflows and processes
Cloud Services: Hands-on experience with AWS or other cloud platforms, including services such as S3, EC2, RDS, Lambda, and Glue
Dev Ops and ML Ops: Hands-on experience with modern software development practices and CI/CD pipelines for data and machine learning workflows
Analytical Skills: Strong analytical and problem-solving skills, with the ability to translate business requirements into technical solutions
Communication: Excellent communication and collaboration skills, with the ability to work effectively in a remote, distributed team environment
Preferred Qualifications:
Visualization: Experience with data visualization tools such as Power BI, Tableau, or similar platforms
Machine Learning: Experience with machine learning frameworks and libraries such as TensorFlow, PyTorch, or Scikit-learn
Certifications: Relevant certifications in Snowflake, AWS, data engineering, or machine learning