H-E-B Grocery

Staff Software Engineer

H-E-B Grocery, Dallas, Texas, United States, 75215


Responsibilities

Company Name: H-E-B, LP
Job Location: 3890 W. Northwest Hwy., Suite 400, Dallas, TX 75220
Job Title: Staff Software Engineer
Minimum Salary: $149,781
Education: Bachelor's degree in Electronics and Communication Engineering, Computer Science, or related.
SOC Code: 15-1252
SOC Occupation Title: Software Developers
Duration: Regular Hire
Work Week: Full-time
Supervision Experience Required: No
Travel Required: No - Employer will allow remote/telecommuting throughout the US.

Experience: 7 years of experience with Data Engineering, or related. Requires the following skills: 7 years of experience developing big data pipelines using Spark, Scala, MapReduce, Pig, Sqoop, and Hive; transferring data to BigQuery from various sources and analyzing it; and working with the GCP cloud technologies Dataproc and Cloud Storage. Must have hands-on experience in code migration and data migration for extracting, transforming, and loading data using ETL tools (Syncsort, DMExpress) on UNIX and Windows; creating BI reports for business users using Looker; developing Teradata SQL scripts with procedures, functions, and packages to implement business logic; scheduling ETL workflows using Oozie, AutoSys, Automic, Crontab, and Apache Airflow; and data modeling and performance tuning using the versioning tools GitHub and TortoiseSVN.

Job duties: Research, design, and develop computer and network software or specialized utility programs for a statewide supermarket chain. Analyze user needs and develop software solutions, applying principles and techniques of computer science, engineering, and mathematical analysis. Update software or enhance existing software capabilities. Design, develop, and modify software systems, using scientific analysis and mathematical models to predict and measure the outcomes and consequences of the design. Develop automated shell scripts to transfer data from other sources into Hadoop. Develop Spark RDDs and DataFrames to take advantage of built-in in-memory processing and improve application performance. Work with the Hadoop admin to set up edge nodes and install the required software on the cluster. Design and implement a data pipeline enabling near-real-time systems, such as a microservices architecture, for the business decision-making system. Analyze data to design scalable algorithms using Spark. Develop generic applications such as data pulls and set up notebooks in the cluster.