Ampcus
Ampcus Inc. is a certified global provider of a broad range of technology and business consulting services. We are in search of a highly motivated candidate to join our talented team.
Job Title:
Information Technology
Location(s):
Reston, VA
Experience Summary:
This is a Cloudera Big Data senior administrator position, not a developer position. The role requires experience building Cloudera clusters and setting up NiFi, SOLR, HBase, Kafka, and Knox in the cloud using CDP Public Cloud v7.2.17 or higher. Duties include setting up high availability for services such as Hue, Hive, HBase REST, SOLR, and Impala on new clusters built on the BDPaaS platform. The candidate must be able to write shell scripts that health-check services and respond appropriately to warning or failure conditions; monitor the health of all services running in the production cluster using Cloudera Manager; access databases and metastore tables and write Hive and Impala queries using Hue; and take responsibility for monitoring the health of services across all clusters. The role works closely with other teams, such as application development, security, and platform support, to identify and implement the configuration changes needed to improve service performance.
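The health-check scripting described above can be sketched in Python. The sketch below assumes the Cloudera Manager REST API as the health source; the host, API version, cluster name, and field names are illustrative placeholders, not details from this posting:

```python
import json
import urllib.request

# Hypothetical endpoint -- adjust host, port, API version, cluster name,
# and add authentication to match the actual deployment.
CM_SERVICES_URL = "https://cm.example.com:7183/api/v41/clusters/MyCluster/services"

def classify(health_summary):
    """Map a service health summary (GOOD/CONCERNING/BAD) to an action."""
    if health_summary == "GOOD":
        return "ok"
    if health_summary == "CONCERNING":
        return "warn"
    return "alert"  # BAD, DISABLED, or anything unexpected

def unhealthy_services(services):
    """Return (name, action) pairs for every service that is not GOOD."""
    return [
        (s["name"], classify(s.get("healthSummary", "BAD")))
        for s in services
        if s.get("healthSummary") != "GOOD"
    ]

def poll_cluster(url=CM_SERVICES_URL):
    """Fetch the service list and report anything needing attention."""
    with urllib.request.urlopen(url) as resp:  # credentials omitted
        payload = json.load(resp)
    return unhealthy_services(payload.get("items", []))
```

In practice a monitor like this would run from cron, page on any `alert` result, and log `warn` results for follow-up.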
Required Skills:
Cloudera CDP Public Cloud v7.2.17 or higher
Apache Kafka - strong administration & troubleshooting skills
  Kafka Streams API
  Stream processing with KStreams & KTables
  Kafka integration with IBM MQ
  Kafka broker management
  Topic/offset management
Apache NiFi - administration
  Flow management
  Registry server management
  Controller service management
  NiFi to Kafka/HBase/SOLR integration
HBase - administration
  Database management
  Troubleshooting
SOLR - administration
  Managing logging levels
  Managing shards & high availability
  Collection management
  Rectifying resource-intensive & long-running SOLR queries
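The topic/offset management duty above usually comes down to tracking consumer-group lag per partition. A minimal sketch of that calculation (the offset maps would come from a Kafka admin tool; the shapes here are assumptions for illustration):

```python
def partition_lag(end_offsets, committed_offsets):
    """Per-partition lag: log-end offset minus the group's committed offset.

    Both arguments map (topic, partition) -> offset. A partition with no
    committed offset is treated as fully lagging from offset 0.
    """
    return {
        tp: end - committed_offsets.get(tp, 0)
        for tp, end in end_offsets.items()
    }

def total_lag(end_offsets, committed_offsets):
    """Sum of lag across all partitions -- a common alerting metric."""
    return sum(partition_lag(end_offsets, committed_offsets).values())
```

Operationally, the same numbers are available from the stock `kafka-consumer-groups.sh --describe` tooling; computing them in a script lets an administrator threshold and alert on lag automatically.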
Additional Must-Have Skills include:
Proficiency with AWS EC2, S3, EBS, and EFS.
Ensure Cloudera installation and configuration is at optimal specifications (CDP, CDSW, Hive, Spark, NiFi).
Design and implement big data pipelines and automated data flows using Python/R and NiFi.
Assist and provide expertise in automating the entire project lifecycle.
Perform incremental updates and upgrades of the Cloudera environment to newer versions.
Assist with new use cases (e.g., analytics/ML, data science, data ingest and processing) and infrastructure (including new cluster deployments, cluster migration, expansion, major upgrades, COOP/DR, and security).
Assist in testing, governance, data quality, training, and documentation efforts.
Move data and use YARN to allocate resources and schedule jobs.
Manage job workflows with Hue.
Implement comprehensive security policies across the Hadoop cluster using Ranger.
Troubleshoot potential issues with Kerberos, TLS/SSL, Models, and Experiments, as well as other workload issues that data scientists might encounter once an application is running.
Support the Big Data / Hadoop databases throughout the development and production lifecycle.
Troubleshoot and resolve database integrity, performance, blocking and deadlocking, replication, log shipping, connectivity, and security issues; perform performance tuning and query optimization using monitoring and troubleshooting tools.
Create, test, and implement scripting for automation support.
Production experience with the Kafka ecosystem (Kafka brokers, Connect, ZooKeeper) is ideal.
Implement and support streaming technologies such as Kafka, Spark, and Kudu.
Ampcus is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, protected veteran status, or disability.