The Hartford
Staff Data Platform Engineer - GenAI
The Hartford, Little Ferry, New Jersey, US, 07643
At The Hartford, we are seeking a
GEN AI Data Engineer
who is responsible for building fault-tolerant infrastructure to support Generative AI applications and for designing, developing, and deploying data pipelines that solve complex problems and drive innovation at scale.

We are driven by a strong determination to create a meaningful impact and take pride in being an insurance company that extends far beyond policies and coverages. When you join our team, you open the door to endless opportunities for personal and professional growth, as well as the chance to empower others in reaching their aspirations. You will help bring the transformative power of Generative AI to re-imagine the 'art of the possible', serve our internal customers, and transform our businesses.

We are founding a dedicated Generative AI platform engineering team to build our internal developer platform and are looking for an experienced
Staff Data Platform Engineer - Generative AI
to help us build the foundation of our Generative AI capability. You will work on a wide range of initiatives, whether that's building ETL pipelines, training a retrieval re-ranker, working with the DevSecOps team to build the CI/CD pipeline, designing Generative AI infrastructure that conforms to our strict security standards and guardrails, or collaborating with the data science team to improve the accuracy of LLM models.

This role requires versatility and expertise across a wide range of skills. Someone with a diverse background and experience, and who is an engineer at heart, will fit into this role seamlessly. The Generative AI team comprises multiple cross-functional groups that work in unison to ensure a sound move from our research activities to scalable solutions. You will collaborate closely with our cloud, security, infrastructure, enterprise architecture, and data science teams to conceive and execute essential functionality.

This role can have a hybrid or remote work arrangement. Candidates who live near one of our office locations will be expected to work in an office three days a week (Tuesday through Thursday). Candidates who do not live near an office will have a remote work arrangement, with the expectation of coming into an office as business needs arise. Candidates must be eligible to work in the US without sponsorship now or in the future.

Responsibilities:
- Design and build fault-tolerant infrastructure to support the Generative AI architecture (RAG, summarization, agents, etc.).
- Ensure code is delivered without vulnerabilities by enforcing engineering practices, code scanning, etc.
- Build and maintain IaC (Terraform/CloudFormation) and CI/CD (Jenkins) scripts, CodePipeline, uDeploy, and GitHub Actions.
- Partner with shared service teams such as Architecture, Cloud, and Security to design and implement platform solutions.
- Collaborate with the Data Science team to develop a self-service internal developer Generative AI platform.
- Design and build the data ingestion pipeline for fine-tuning LLM models.
- Create templates (Architecture as Code) implementing the application's architectural topology.
- Build a feedback system using human-in-the-loop (HITL) review for supervised fine-tuning.

Qualifications:
- Bachelor's degree in Computer Science, Computer Engineering, or a related technical field.
- 4+ years of experience with the AWS cloud.
- At least 8 years of experience designing and building data-intensive solutions using distributed computing.
- 8+ years building and shipping software and/or platform infrastructure solutions for enterprises.
- Experience with CI/CD pipelines, automated testing, automated deployments, Agile methodologies, and unit and integration testing tools.
- Experience building scalable serverless applications (real-time/batch) on the AWS stack (Lambda + Step Functions).
- Knowledge of distributed NoSQL database systems.
- Experience with data engineering, ETL technology, and conversational UX is a plus.
- Experience with HPCs, vector embeddings, and hybrid/semantic search technologies.
- Experience with AWS OpenSearch, Step Functions/Lambda, SageMaker, API Gateway, and ECS/Docker is a plus.
- Proficiency in customization techniques across the stages of the RAG pipeline, including model fine-tuning, retrieval re-ranking, and hierarchical navigable small world (HNSW) graphs, is a plus.
- Strong proficiency in embeddings, ANN/KNN search, vector stores, database optimization, and performance tuning.
- Extensive programming experience with Python and Java.
- Experience with LLM orchestration frameworks such as LangChain, LlamaIndex, etc.
- Foundational understanding of Natural Language Processing and Deep Learning.
- Excellent problem-solving skills and the ability to work in a collaborative team environment.
- Excellent communication skills.