United Software Group
Data Engineer - W2 Contract (Texas residents only)
United Software Group, Dallas, TX, United States
Data Engineer
ETL: Must be proficient (at least 7 years)
PL/SQL: Must be proficient
Data Lake experience: Preferred
Snowflake: Preferred
Goldman Sachs experience with Alloy Services: Plus
Responsibilities:
• Develop, maintain, and optimize data pipelines to extract, transform, and load large datasets from diverse sources into our data ecosystem.
• Design and implement efficient and scalable data models that align with business requirements, ensuring data integrity and performance.
• Collaborate with cross-functional teams to understand data needs and deliver solutions that meet those requirements.
• Work closely with data scientists, analysts, and software engineers to ensure seamless integration of data solutions into larger systems.
• Identify and resolve data quality issues, ensuring accuracy, reliability, and consistency of the data infrastructure.
• Continuously monitor and improve data pipelines and processes, identifying opportunities for automation and optimization.
• Stay updated with emerging trends, technologies, and best practices in data engineering, data modeling, and backend Java engineering.
• Provide technical guidance and mentorship to junior team members, fostering their growth and development.
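To make the pipeline work above concrete, here is a minimal extract-transform-load sketch of the kind of task the role describes, using Python's standard library with an in-memory SQLite database as the target. The table name, column names, and sample rows are all hypothetical examples, not anything specified by the posting.

```python
# Illustrative ETL sketch only; the "sales" table and its columns are
# hypothetical stand-ins for a real source system and warehouse target.
import sqlite3

def extract():
    # Stand-in for pulling rows from a source (file, API, or database).
    return [
        {"id": 1, "amount": "10.50", "region": " east "},
        {"id": 2, "amount": "7.25", "region": "WEST"},
    ]

def transform(rows):
    # Normalize types and clean values before loading.
    return [
        (r["id"], float(r["amount"]), r["region"].strip().lower())
        for r in rows
    ]

def load(rows, conn):
    # Load the cleaned rows into the target table.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS sales "
        "(id INTEGER PRIMARY KEY, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(total)  # 17.75
```

In practice the extract and load steps would point at real systems and an orchestrator such as Airflow would schedule the run, but the extract/transform/load separation shown here is the core pattern.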
Requirements:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 5+ years of hands-on experience as a Data Engineer, working on complex data projects and implementing data modeling solutions.
Data Engineering
Must have:
• Solid understanding of SQL and expertise in working with relational databases (e.g., PostgreSQL, MySQL).
• In-depth knowledge of data modeling techniques and experience with data modeling tools.
• Proficiency in designing and optimizing data pipelines using ETL/ELT frameworks and tools (e.g., Informatica, Apache Spark, Airflow, AWS Glue).
• Working knowledge of data warehousing.
• Familiarity with cloud-based data platforms and services (e.g., Snowflake, AWS, Google Cloud, Azure).
• Experience with version control systems (e.g., Git) and agile software development methodologies.
• Strong communication skills to effectively convey technical concepts to both technical and non-technical stakeholders.
• Excellent problem-solving skills and the ability to work independently and collaboratively in a fast-paced environment.
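As a hypothetical illustration of the level of SQL fluency the must-have list asks for, the snippet below runs a window-function query (latest order per customer) against SQLite; the `orders` table and its sample data are invented for the example.

```python
# Hypothetical example: window function picking the most recent
# order per customer. Table and data are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
INSERT INTO orders VALUES
  ('acme', '2024-01-05', 100.0),
  ('acme', '2024-02-10', 250.0),
  ('bolt', '2024-01-20', 75.0);
""")
rows = conn.execute("""
SELECT customer, order_date, amount,
       ROW_NUMBER() OVER (PARTITION BY customer
                          ORDER BY order_date DESC) AS rn
FROM orders
""").fetchall()
latest = [r for r in rows if r[3] == 1]  # most recent order per customer
print(latest)
```

The same `ROW_NUMBER() OVER (PARTITION BY ...)` pattern carries over to PostgreSQL, MySQL 8+, and Snowflake.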
Good to Have:
• Java 8, REST APIs, microservices, and the Spring Boot framework
• UNIX scripting
Primary Skillset: Data Engineering
• Solid understanding of SQL and expertise in working with relational databases (e.g., DB2, MySQL).
• Data modeling knowledge.
• Cloud-based data platforms like Snowflake.
• Working knowledge of data warehousing.
• Cloud-based data platforms and services (e.g., AWS, Google Cloud, Azure).
• Alteryx (good to have).
• ETL/ELT tools like Informatica, Apache Spark.
• UNIX scripting.
Good to have:
• GS-specific tools such as Alloy Registry, the LEGEND tool for Alloy data modelling, PURE, Data Browser, Data Lake, etc.
Kindly Note:
- The first interview will be conducted via Zoom.
- Client interviews will be conducted in-person.