Photon
Hadoop Data Engineer
Photon, Delaware, New Jersey, United States
The Hadoop Data Engineer is a hybrid role requiring experience in database management, clustered compute, operating system integration, cloud concepts, storage solutions, application processing, and advanced monitoring techniques. The resource usually has experience in multiple disciplines, including Cloud, Linux, and Hadoop, and must be able to lead complex projects and manage competing priorities with a high level of technical acumen and strong communication skills.
Job Description:

Required Skills:
Experience with multiple large-scale Enterprise Hadoop or Big Data environments (Databricks, Cloudera, HDInsight, or others) focused on operations, design, capacity planning, cluster setup, security, performance tuning, and monitoring
Experience with the full Cloudera CDH/CDP distribution to install, configure and monitor all services in the Cloudera stack
Strong understanding of core Hadoop services such as HDFS, MapReduce, Kafka, Spark and Spark Streaming, Hive, Impala, HBase, Kudu, Sqoop, and Oozie
Experience administering and supporting RHEL Linux operating systems, databases, and hardware in an enterprise environment
Expertise in typical system administration and programming tasks such as storage capacity management, debugging, and performance tuning
Proficient in shell scripting (e.g., Bash, ksh); see the capacity-check sketch after this list
Experience setting up, configuring, and managing security for Hadoop clusters using Kerberos integrated with LDAP/AD at an enterprise level (see the Kerberos sketch after this list)
Experience operating at large enterprise scale, including resource-separation concepts and the physical resources (storage, memory, network, and compute) those environments need to operate
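
As an illustrative aside (not part of the posting's requirements), the shell-scripting and storage capacity management skills above might look like the following minimal Bash sketch, which checks cluster-wide HDFS usage via hdfs dfsadmin -report. The 90% threshold is an assumed value, and the command typically needs to run as an HDFS-privileged user.

#!/usr/bin/env bash
# Minimal sketch: alert when cluster-wide HDFS usage crosses a threshold.
# Assumes the hdfs CLI is on PATH and the caller may run dfsadmin;
# the 90% threshold is illustrative, not a value from this posting.
set -euo pipefail

THRESHOLD=90

# dfsadmin -report prints a summary line like "DFS Used%: 71.23%";
# take the first match, strip the percent sign, and truncate to an integer.
used=$(hdfs dfsadmin -report | awk '/^DFS Used%/ {gsub(/%/, "", $3); print int($3); exit}')

if (( used >= THRESHOLD )); then
  echo "WARNING: HDFS usage at ${used}% (threshold ${THRESHOLD}%)" >&2
  exit 1
fi
echo "HDFS usage OK: ${used}%"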
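
Similarly, a hedged sketch of the Kerberos skill above, authenticating against a secured cluster; the principal name and keytab path are hypothetical placeholders, not values from this posting.

#!/usr/bin/env bash
# Minimal sketch: obtain a Kerberos ticket from a keytab, then verify
# that Kerberized HDFS access works. Principal and keytab path below
# are assumed placeholders.
set -euo pipefail

PRINCIPAL="hdfs-ops@EXAMPLE.COM"                # assumed principal
KEYTAB="/etc/security/keytabs/hdfs-ops.keytab"  # assumed keytab path

kinit -kt "$KEYTAB" "$PRINCIPAL"   # acquire a ticket non-interactively
klist                              # show the acquired ticket cache
hdfs dfs -ls /                     # fails without a valid ticket on a Kerberized cluster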
Desired Skills:
Experience in version control systems (Git)
Experience with Spectrum Conductor or Databricks with Apache Spark (see the spark-submit sketch after this list)
Experience in different programming languages (Python, Java, R)
Experience with enterprise database administration platforms
Experience with large analytic tools, including SAS, search, machine learning, and log aggregation
Experience with Hadoop distributions in the cloud (AWS, Azure, Google Cloud) is a plus
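
For the Spark experience above, a minimal sketch of submitting a job to a YARN-managed Hadoop cluster; the JAR path, entry-point class, and executor sizing are illustrative assumptions, not values from this posting.

#!/usr/bin/env bash
# Minimal sketch: submit a Spark application to YARN in cluster mode.
# The class, JAR path, and resource sizes below are assumed examples.
set -euo pipefail

spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.etl.DailyLoad \
  --num-executors 8 \
  --executor-memory 4g \
  --executor-cores 2 \
  /opt/jobs/daily-load.jar --date "$(date +%F)"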