Tata Consultancy Services

Senior Data Engineer

Tata Consultancy Services, Elkhorn, Nebraska, United States, 68022


Technical/Functional Skills

Staff Data Engineering role - someone with hands-on experience with cloud infrastructure, PySpark/Spark (strong), CI/CD, basic software engineering, data engineering, etc. This role will be responsible for handling multiple products and multiple stakeholders, so we are looking for a candidate with extensive hands-on experience who enjoys learning and sharing the latest trends in the data engineering space.

Tech Stack: Snowflake, Databricks, Python, PySpark, SQL, and Azure.

Experience Required

As a Staff Data Engineer, the candidate will be part of a Data Engineering team focused on making critical data available to our business teams. Using the agile framework, the candidate will build end-to-end pipelines based on rigorous engineering standards and coding practices to deliver data that is accessible and of the highest quality. A Staff Data Engineer will also contribute to the modernization of our architecture and tools to help increase our output, scalability, and speed.

Roles & Responsibilities

A Staff Data Engineer will design and develop highly scalable and extensible data pipelines that enable collection, storage, distribution, modeling, and analysis of large data sets from many channels. This position requires an innovative software engineer who is passionate about data and data quality. The ideal candidate will have strong data warehousing and API integration experience and the ability to develop scalable data pipelines that make data management and analytics/reporting faster, more insightful, and more efficient.

• Lead the data engineering team to develop, test, document, and support scalable data pipelines.
• Build out new data integrations, including APIs, to support continuing increases in data volume and complexity.
• Establish and follow data governance processes and guidelines to ensure data availability, usability, consistency, integrity, and security.
• Build and implement scalable solutions that align with our data governance standards and architectural roadmaps for data integrations, data storage, reporting, and analytic solutions.
• Collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
• Design and develop data integrations and a data quality framework; write unit/integration/functional tests and document work.
• Design, implement, and automate deployment of our distributed system for collecting and processing streaming events from multiple sources.
• Perform data analysis needed to troubleshoot data-related issues and aid in their resolution.
• Guide and mentor junior engineers on coding best practices and optimization.

Qualifications:

• Education: 4-year college degree or equivalent combination of education and experience. An academic background in Computer Science, Mathematics, Statistics, or a related technical field is preferred.
• 8 years of relevant work experience in analytics, data engineering, business intelligence, or a related field.
• Skilled in object-oriented programming (Python in particular).
• Strong experience with Python, PySpark, and SQL.
• Strong experience with Databricks and Snowflake.
• Experience developing integrations across multiple systems and APIs.
• Experience with or knowledge of Agile software development methodologies.
• Experience with cloud-based databases, specifically Azure technologies (e.g., Azure Data Lake, ADF, Azure DevOps, and Azure Functions).
• Experience writing and tuning SQL queries in a business environment with large-scale, complex datasets.
• Experience with data warehouse technologies, including creating ETL and/or ELT jobs.
• Excellent problem-solving and troubleshooting skills.
• Process oriented with strong documentation skills.
• Experience designing data schemas and operating SQL/NoSQL database systems is a plus.