
Staff Engineer - Cloudera-Hadoop - Big Data - Federal - 2nd Shift

CV Library, Santa Clara, California, US, 95053


Job Description

Please Note: This position will include supporting our US Federal Government Cloud Infrastructure.

This position requires passing a ServiceNow background screening, US FedPASS (US Federal Personnel Authorization Screening Standards). This includes a credit check, a criminal/misdemeanor check, and a drug test. Any employment is contingent upon passing the screening. Due to Federal requirements, only US citizens, US naturalized citizens, or US Permanent Residents holding a green card will be considered.

As a Staff DevOps Engineer - Hadoop Admin on our Big Data Federal Team, you will help deliver 24x7 support for our Government Cloud infrastructure.

The Federal Big Data Team has 3 shifts that provide 24x7 production support for our Big Data Government cloud infrastructure.

Below are some highlights:

- 4-day work week (Wednesday to Saturday OR Sunday to Wednesday)
- No on-call rotation
- Shift bonuses for 2nd and 3rd shifts

This is a 2nd Shift position with work hours from 3pm to 2am Pacific Time.

The Big Data team plays a critical and strategic role in ensuring that ServiceNow can exceed the availability and performance SLAs of the ServiceNow Platform-powered customer instances deployed across the ServiceNow cloud and Azure cloud. Our mission is to:

Deliver state-of-the-art monitoring, analytics, and actionable business insights by employing new tools, Big Data systems, an Enterprise Data Lake, AI, and Machine Learning methodologies that improve efficiencies across a variety of functions in the company (Cloud Operations, Customer Support, Product Usage Analytics, and Product Upsell Opportunities), enabling a significant impact on both top-line and bottom-line growth. The Big Data team is responsible for:

- Collecting, storing, and providing real-time access to large amounts of data.
- Providing real-time analytic tools and reporting capabilities for various functions, including:
  - Monitoring, alerting, and troubleshooting
  - Machine Learning, anomaly detection, and prediction of P1s
  - Capacity planning
  - Data analytics and deriving actionable business insights

What you get to do in this role:

- Deploy, monitor, maintain, and support Big Data infrastructure and applications in ServiceNow Cloud and Azure environments.
- Architect and drive end-to-end Big Data deployment automation, from vision to delivery, covering the Big Data foundational modules (Cloudera CDP), prerequisite components, and applications, leveraging Ansible, Puppet, Terraform, Jenkins, Docker, and Kubernetes across all ServiceNow environments.
- Automate Continuous Integration / Continuous Deployment (CI/CD) data pipelines for applications leveraging tools such as Jenkins, Ansible, and Docker.
- Performance-tune and troubleshoot Hadoop components and other data analytics tools in the environment: HDFS, YARN, Hive, HBase, Spark, Kafka, RabbitMQ, Impala, Kudu, Redis, Hue, Kerberos, Tableau, Grafana, MariaDB, and Prometheus.
- Provide production support to resolve critical Big Data pipeline and application issues, mitigating or minimizing any impact on Big Data applications.
- Collaborate closely with Site Reliability Engineers (SRE), Customer Support (CS), Developers, QA, and System Engineering teams to replicate complex issues, leveraging broad experience with UI, SQL, full-stack, and Big Data technologies.
- Enforce data governance policies in Commercial and Regulated Big Data environments.
