Jobsbridge
Senior Hadoop Developer
Jobsbridge, Newark, New Jersey, US 07175
Jobs Bridge Inc is among the fastest growing IT staffing / professional services organizations with its own job portal. Jobs Bridge works extremely closely with a large number of IT organizations in the most in-demand technology skill sets.

Job Description
Skills: Hadoop, MapReduce, Hive, HiveQL, AWS, PL/SQL, Spark, Scala
Location: Newark, NJ
Total Experience: 8 yrs.
Max Salary: Not Mentioned
Employment Type: Direct Jobs (Full Time)
Domain: Any

Responsibilities
The developer will be responsible for analyzing requirements; prototyping data analysis solutions (primarily in HiveQL or Spark, with UNIX scripting); designing, developing, and unit testing solutions; and facilitating solution deployment and support.

Qualifications
Candidates need to have strong capabilities in HiveQL and UNIX scripting. Candidates should have experience with the Hadoop ecosystem and with working on large data sets. The system will consist of batch analytic processing on large sets of data. Experience with Spark is preferred.

Required Skills/Qualifications:
Strong HiveQL and SQL development skills
Performance tuning of MapReduce/Hive
Programming experience with Hadoop/MapReduce in Java
UNIX/shell scripting
Designing distributed solutions for parallel processing of large data
Full SDLC experience (requirements analysis, design, development, unit testing, deployment, support)
Experience with Spark/Scala programming
Experience with AWS cloud technologies
Experience with Agile development methodologies
Experience with big-data technologies in the Hadoop ecosystem
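The role combines HiveQL batch analytics with UNIX scripting, so as a purely illustrative sketch (not part of the posting), here is the kind of shell wrapper such work might involve. The table name, output directory, and query are hypothetical, and the script only prints the command it would submit so it can run without a Hadoop cluster:

```shell
#!/bin/sh
# Hypothetical example: a UNIX wrapper that assembles a HiveQL batch query.
# TABLE and OUT_DIR are illustrative names, not taken from the posting.
TABLE="web_logs"
OUT_DIR="/tmp/hive_batch_demo"

# A simple batch aggregation over a large data set.
QUERY="INSERT OVERWRITE DIRECTORY '${OUT_DIR}'
SELECT page, COUNT(*) AS hits FROM ${TABLE} GROUP BY page;"

# On a real cluster this would be submitted with: hive -e "$QUERY"
# Here we only print the command so the sketch is runnable anywhere.
echo "hive -e \"${QUERY}\""
```

In practice a script like this would typically be scheduled (e.g. via cron or a workflow engine) as part of the batch analytic processing the posting describes.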