SherlockTalent
AI DevOps Data Ingestion Engineer
Job Title: AI DevOps Data Ingestion Engineer
Location: Bay Area, CA
Job Type: Founding-Level SWE, Full-Time
Salary: Founders-level equity and $200K+
Job ID: 7330
SherlockTalent is Hiring: AI DevOps Data Ingestion Engineer
Are you passionate about designing scalable data pipelines and ensuring robust infrastructure for seamless data flow?
SherlockTalent is seeking an AI DevOps Data Ingestion Engineer to bridge the gap between infrastructure and data engineering, building and maintaining data ingestion pipelines while automating deployment processes.
Key Responsibilities:
- Design, develop, and maintain scalable and efficient data ingestion pipelines to collect and process data from diverse sources such as CRMs, marketing automation platforms, and sales systems (a sketch of this kind of task follows this list).
- Automate infrastructure provisioning and pipeline deployments using Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Implement and manage CI/CD pipelines to support the deployment and monitoring of data systems.
- Collaborate with data scientists and analysts to ensure data integrity, normalization, and transformation meet analytics needs.
- Optimize data ingestion workflows for performance, scalability, and reliability in cloud environments like AWS, GCP, or Azure.
- Monitor data pipelines, troubleshoot issues, and maintain high availability and fault tolerance across systems.
- Ensure security and compliance for data systems by implementing best practices for access control, encryption, and monitoring.
- Provide support to cross-functional teams by ensuring data infrastructure aligns with organizational goals.
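For candidates who want a concrete picture of the first responsibility above, here is a minimal, illustrative sketch of an ingestion task: pulling records from a CRM REST API and landing them in S3 as raw JSON. The endpoint, bucket, and field names are hypothetical placeholders chosen for this example, not details of any SherlockTalent system.

```python
"""Minimal ingestion sketch: pull CRM contacts from a REST API and land them in S3.

All endpoints, bucket names, and field names below are hypothetical placeholders.
"""
import json
import os
from datetime import datetime, timezone

import boto3
import requests

CRM_API_URL = os.environ.get("CRM_API_URL", "https://example-crm.invalid/api/v1/contacts")
RAW_BUCKET = os.environ.get("RAW_BUCKET", "example-raw-data-lake")


def fetch_contacts(session: requests.Session) -> list[dict]:
    """Fetch one page of contacts from the (hypothetical) CRM API."""
    resp = session.get(CRM_API_URL, timeout=30)
    resp.raise_for_status()
    return resp.json().get("results", [])


def land_in_s3(records: list[dict]) -> str:
    """Write raw records to S3 as newline-delimited JSON, partitioned by date."""
    s3 = boto3.client("s3")
    now = datetime.now(timezone.utc)
    key = f"crm/contacts/dt={now:%Y-%m-%d}/contacts_{now:%H%M%S}.jsonl"
    body = "\n".join(json.dumps(r) for r in records)
    s3.put_object(Bucket=RAW_BUCKET, Key=key, Body=body.encode("utf-8"))
    return key


if __name__ == "__main__":
    with requests.Session() as session:
        contacts = fetch_contacts(session)
    key = land_in_s3(contacts)
    print(f"Landed {len(contacts)} records at s3://{RAW_BUCKET}/{key}")
```

In a production pipeline this step would typically run on a schedule (e.g., an orchestrator or a containerized job) and feed downstream normalization and transformation stages.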
Required Skills:
- Strong experience with cloud platforms (AWS, GCP, Azure) and associated data services (e.g., S3, BigQuery, Redshift, or similar).
- Expertise in building data ingestion pipelines using tools like Apache Kafka, Apache NiFi, or AWS Glue (see the Kafka sketch after this list).
- Hands-on experience with CI/CD tools such as Jenkins, GitLab CI/CD, or GitHub Actions.
- Proficiency in scripting and programming languages such as Python, Bash, or Go for automation.
- Familiarity with containerization and orchestration tools like Docker and Kubernetes.
- Strong knowledge of database systems (SQL and NoSQL) and data transformation techniques.
- Solid understanding of monitoring and logging tools such as Prometheus, Grafana, or ELK stack.
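Likewise, as a hedged illustration of the Kafka and Python skills listed above, the sketch below publishes ingested records to a Kafka topic with kafka-python; the broker address, topic name, and record shapes are assumptions made for this example only.

```python
"""Sketch: publish ingested records to a Kafka topic using kafka-python.

Broker address, topic name, and record contents are illustrative assumptions.
"""
import json

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # assumed local broker for illustration
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    acks="all",  # wait for full acknowledgement for durability
)

records = [
    {"source": "crm", "id": 1, "email": "alice@example.com"},
    {"source": "crm", "id": 2, "email": "bob@example.com"},
]

for record in records:
    # Key by source system so records from the same source land on the same partition,
    # preserving per-source ordering for downstream consumers.
    producer.send("raw-ingest", key=record["source"].encode("utf-8"), value=record)

producer.flush()  # block until all buffered messages are delivered
producer.close()
```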
Preferred Qualifications:
- Experience with data lake or data warehouse architectures.
- Knowledge of big data technologies such as Spark, Hadoop, or Flink.
- Background in implementing data security best practices in cloud environments.

Why Join Us?
As an AI DevOps Data Ingestion Engineer at SherlockTalent, you’ll have the opportunity to work on complex and impactful projects, ensuring that our systems are scalable, secure, and optimized for data-driven success. Join a team that thrives on innovation, collaboration, and delivering excellence.

Apply Now:
Be part of a dynamic environment where your expertise in DevOps and data engineering will shape the future of our data infrastructure. Join SherlockTalent today!