CyberTec
Sr. Data Engineer - Hybrid (Local Preferred)
CyberTec, Wilmington, North Carolina, United States, 28412
Job Title: Sr. Data Engineer - Hybrid (Local Preferred)
Location: Charlotte, NC (hybrid, in office)
Job Type: Hybrid
Duration: 12 months
Openings: 1
Note: Make sure to vet the candidate well before submitting.

Summary / About the Role:
The project is a supply chain network planning effort to determine the optimized channel.

Skill Requirements:
- Mandatory skills: Hadoop, Hive, HBase, ETL, Spark, and Talend
- Secondary skills: Java

Job Description:
Maintain and migrate Talend ETL jobs to Spark, maintain Hadoop jobs, and redesign the data streaming from legacy systems using Java.

Qualifications

Minimum Qualifications:
- Bachelor's Degree in Engineering, Computer Science, CIS, or a related field (or equivalent work experience in a related field)
- 7-8 years of experience in Data, BI, or Platform Engineering, Data Warehousing/ETL, or Software Engineering
- 4 years of experience working on projects involving the implementation of solutions using software development life cycles (SDLC)

Skills & Abilities:
- Good to have: Experience with cloud managed services for the Client/Ops tech stack, such as Vertex AI, Client OPS, etc.
- Well versed in feature engineering tools and techniques such as PCA, Featuretools, Pyfeat, etc.
- Well versed in model development techniques such as collaborative filtering or look-alike models
- Strong communication and data presentation skills; ability to communicate with data-driven stories
- Ability to quickly adapt to new technologies, tools, and techniques
- Flexible and responsive; able to perform in a fast-paced, dynamic work environment and meet aggressive deadlines
- Ability to work with technical and non-technical team members

Preferred Qualifications:
- Master's Degree in Computer Science, CIS, or a related field
- 5 years of IT experience developing and implementing business systems within an organization
- 5 years of experience working with defect or incident tracking software
- 5 years of experience writing technical documentation in a software development environment
- 3 years of experience working with an IT Infrastructure Library (ITIL) framework
- 3 years of experience leading teams, with or without direct reports
- 5 years of experience working with source code control systems
- Experience working with Continuous Integration/Continuous Deployment (CI/CD) tools
- 5 years of experience in systems analysis, including defining technical requirements and performing high-level design for complex solutions

Data Engineering:
- 7 years of experience with Hadoop or any cloud Big Data components (specific to the Data Engineering role)
- Expertise in Java/Scala/Python, SQL, scripting, Teradata, Hadoop (Sqoop, Hive, Pig, MapReduce), Spark (Spark Streaming, MLlib), Kafka, or equivalent cloud Big Data components (specific to the Data Engineering role)