Cloud Analytics Technologies LLC
Sr. Hadoop ETL Developer
Cloud Analytics Technologies LLC, Raleigh, North Carolina, United States, 27601
Work Authorization: US Citizen, Green Card, H-1B, GC-EAD, OPT-EAD, L2-EAD, TN Visa
Local Candidates Preferred. Non-local candidates must be willing to cover their own interview travel and relocation costs.

Job Description
Role: Sr. Hadoop ETL Developer
Duration: 9+ months
Rate: $60/hr on C2C (max)

Responsibilities:
- Hadoop ETL Developer with Hadoop and other Software as a Service (SaaS) experience.
- BS degree in Computer Science or equivalent.
- 8 years of IT experience with Java and MapReduce ETL.
- Minimum 4 years of experience with Hadoop, developing Big Data/Hadoop applications.
- Languages: Java and scripting.
- Unix/Linux environment scripting skills.
- Experience with a Hortonworks or Cloudera commercial release.
- Streaming ingest of JSON entities over HTTPS using Flume.
- Batch ingest of domain data into HBase using SFTP.
- Strong working experience with the Hadoop ecosystem, including MapReduce, HDFS, Sqoop, Pig, Hive, HBase, Flume, Impala, Kafka, Spark, Oozie, and ZooKeeper.
- Experience with distributed systems, large-scale non-relational data stores, MapReduce systems, data modeling, and big data systems.
- Experience writing MapReduce programs and UDFs for both Hive and Pig in Java.
- Experience with Hive queries, Pig Latin scripts, and MapReduce programs for analyzing, processing, and loading data into databases for visualization.
- Experience working with agile methodologies and suggesting process improvements.
- Expert knowledge of troubleshooting and performance tuning at the source, mapping, target, and session levels.
- Experience performing real-time analytics on NoSQL databases such as HBase and Cassandra.
- Experience with the Oozie workflow engine for scheduling time-based jobs that perform multiple actions.
- Experience importing and exporting data between relational databases and HDFS, Hive, and HBase using Sqoop.
- Experience using Flume to channel data from different sources into HDFS.
- RDBMS experience: Oracle, SQL Server, MySQL.
- Experience writing Apache Spark or Impala applications on a Big Data distribution in an active cluster environment.
- Experience coding and testing standardization, normalization, load, extract, and Avro models to filter and validate data.
- Highly adept at quickly mastering new technologies, with a keen awareness of industry developments and next-generation programming solutions.

Preferred Qualifications:
- OpenStack or AWS experience preferred.

Equal Opportunity Employer
Cloud Big Data Technologies is an equal opportunity employer inclusive of female, minority, disability, and veterans (M/F/D/V). Hiring, promotion, transfer, compensation, benefits, discipline, termination, and all other employment decisions are made without regard to race, color, religion, sex, sexual orientation, gender identity, age, disability, national origin, citizenship/immigration status, veteran status, or any other protected status. Cloud Big Data Technologies will not make any posting or employment decision that does not comply with applicable laws relating to labor and employment, equal opportunity, employment eligibility requirements, or related matters. Nor will Cloud Big Data Technologies require in a posting or otherwise U.S. citizenship or lawful permanent residency in the U.S. as a condition of employment except as necessary to comply with law, regulation, executive order, or federal, state, or local government contract.