NR Consulting
Machine Learning Engineer IV
NR Consulting, Cincinnati, Ohio, United States, 45208
Title: Machine Learning Engineer IV
Location: Cincinnati, OH
Type: Contract
Duration: Long Term
Description: Join our Data Science Enablement squad as a Senior Machine Learning Engineer. You will use an existing batch inference model to establish a secure, automated deployment pipeline. The role involves both engineering and change management, including architecture and training, with a focus on educating data scientists and other Data Science Enablement members on MLOps. Once the foundational deployment framework is in place, you will enable additional MLOps capabilities such as MLflow, A/B testing, real-time endpoints, and further automation with Model Risk Management (MRM).
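For context only, the sketch below shows what a first automated deployment step for an existing model artifact might look like using the SageMaker Python SDK: wrap the trained artifact and expose it as a real-time endpoint. The S3 path, role ARN, entry-point script, endpoint name, and instance type are hypothetical placeholders; in this role the equivalent step would be driven from CI/CD (e.g., Jenkins or GitHub Actions with Terraform) rather than an ad-hoc script.

import sagemaker
from sagemaker.sklearn.model import SKLearnModel

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical execution role ARN

# Wrap an already-trained model artifact (e.g., the existing batch model) for deployment.
model = SKLearnModel(
    model_data="s3://my-mlops-bucket/models/credit-model/model.tar.gz",  # hypothetical artifact location
    role=role,
    entry_point="inference.py",   # hypothetical inference handler script
    framework_version="1.2-1",
    sagemaker_session=session,
)

# Deploy as a real-time HTTPS endpoint; batch scoring would use model.transformer(...) instead.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="credit-model-rt",  # hypothetical endpoint name
)

print(predictor.endpoint_name)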
Key Responsibilities:
Develop and implement a secure, automated deployment pipeline.
Educate and mentor team members on MLOps practices.
Balance engineering tasks with change management and training.
Enhance MLOps capabilities with advanced tools and techniques.
Preferred Experience:
Experience in highly regulated industries like banking, finance, or healthcare.
Qualifications:
Minimum of 3-5+ years of experience in machine learning and MLOps.
Proven experience with AWS SageMaker and building end-to-end machine learning models.
Experience with data integration and management using IBM DB2 and Snowflake (or similar databases).
Strong understanding of CI/CD pipelines and automation tools.
Proficiency in programming languages such as Python, R, SQL, and/or Java.
Use of Fifth Third standard DevOps tools such as Jira, Terraform, GitHub, and Jenkins.
Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
Squad outcomes:
Future (2025 & Beyond): Utilize AWS SageMaker to expand the Feature Store and introduce a Model Registry, CI/CD, and real-time models for our large data science credit models.
The squad is currently working on an in-house build of a Feature Store to speed up the modeling process for our Data Science department, using a combination of Snowflake and Cloud Pak for Data (more on this later). Today, data scientists build model features (customer attributes) in their own Jupyter notebooks; those features feed their own models but are never reusable by others, which is the reason for the Feature Store.
The squad is also building a real-time scoring framework for our loan/card application process. Today that process is batch and can be almost 31 days behind.
Technology used: Docker, Kafka, Snowflake, Feature Store.
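To illustrate the Feature Store idea (define customer attributes once, register them centrally, let every model read the same records), here is a minimal sketch using SageMaker Feature Store. The in-house build on Snowflake and Cloud Pak for Data may look quite different; the feature group name, bucket, role ARN, and columns are hypothetical.

import time
import pandas as pd
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical execution role ARN

# Example customer attributes of the kind that today live only in individual notebooks.
features = pd.DataFrame({
    "customer_id": ["C001", "C002"],
    "avg_monthly_balance": [2450.75, 310.10],
    "open_trade_lines": [4, 1],
    "event_time": [time.time(), time.time()],
})
features["customer_id"] = features["customer_id"].astype("string")  # Feature Store requires string dtype, not object

feature_group = FeatureGroup(name="customer-credit-features", sagemaker_session=session)  # hypothetical name
feature_group.load_feature_definitions(data_frame=features)  # infer the schema from the DataFrame

# Create the group with an online store (low-latency reads for real-time scoring)
# and an offline store (S3/Athena for building training sets).
feature_group.create(
    s3_uri="s3://my-mlops-bucket/feature-store",  # hypothetical offline store location
    record_identifier_name="customer_id",
    event_time_feature_name="event_time",
    role_arn=role,
    enable_online_store=True,
)

# Creation is asynchronous; wait until the group is active before ingesting.
while feature_group.describe()["FeatureGroupStatus"] == "Creating":
    time.sleep(5)

# Ingest rows so multiple models and teams can reuse the same features.
feature_group.ingest(data_frame=features, max_workers=2, wait=True)

A real-time scoring path would then fetch the latest record for a customer from the online store (for example via the boto3 sagemaker-featurestore-runtime client's get_record call) instead of recomputing features in a notebook.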
Technical Skills: Must Have
Amazon SageMaker
CI/CD
DevOps tools like Jira, Terraform, GitHub, or Jenkins
Docker
Experience in a highly regulated industry such as Banking/Financial/Healthcare
GitHub
Python
SQL
Terraform
Nice To Have
DBT
Java
Snowflake