Paladin Consulting
Sr. Data Engineer (Richardson, TX)
Paladin Consulting, Richardson, Texas, United States, 75080
Paladin Consulting is currently hiring a Sr. Data Engineer to join our team working onsite at our client's office located in Richardson, TX.
We work with companies that offer environments where our employees can contribute, learn, and advance their careers. We treat you like you are part of the family.
Job Title: Sr. Data Engineer
Work Location: Richardson, TX; hybrid, 3 days per week onsite
Duration: Long-term contract with option to hire
Education/Experience Required: Bachelor's degree (BA/BS) in a related field such as information systems, mathematics, or computer science.
Job Description & Responsibilities:
- Plan, analyze, develop, maintain, and enhance client systems; support systems of moderate to high complexity.
- Participate in the design, specification, implementation, and maintenance of systems.
- Design, code, test, and document software programs of moderate complexity per the requirement specifications.
- Design, develop, and maintain scalable data pipelines using Snowflake, dbt, SnapLogic, and ETL tools.
- Participate in design reviews and technical briefings for specific applications.
- Integrate data from various sources, ensuring consistency, accuracy, and reliability.
- Develop and manage ETL/ELT processes to support data warehousing and analytics.
- Assist in preparing requirement specifications; analyze data; design and develop data-driven applications, including documenting and revising user procedures and/or manuals.
- Resolve medium- to high-complexity software development issues that arise in a production environment.
- Utilize Python for data manipulation, automation, and integration tasks.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using SQL Server, PostgreSQL, SSIS, T-SQL, and PL/SQL.
- Work with stakeholders, including the Product, Data, Design, Frontend, and Backend teams, to assist with data-related technical issues and support their data infrastructure needs.
- Write complex SQL, T-SQL, and PL/SQL queries, stored procedures, functions, and cursors in SQL Server and PostgreSQL; peer-review other team members' code.
- Analyze long-running queries, functions, and procedures; design and develop performance optimization strategies.
- Create and manage SSIS packages and/or Informatica to perform day-to-day ETL activities; use a variety of strategies for complex data transformations within an ETL tool.
- Perform DBA activities such as maintaining system health, performance tuning, managing database access, deploying to higher environments, and providing on-call support; shell scripting and Python scripting are a plus.
- Employ continuous integration and continuous delivery (CI/CD) tools for optimal productivity.
- Collaborate with scrum team members during daily standups and actively engage in sprint refinement, planning, review, and retrospectives.
- Analyze, review, and alter programs to increase operating efficiency or adapt to new requirements.
- Write documentation describing program development, logic, coding, and corrections.

Skills & Qualifications:
- Bachelor's degree (BA/BS) in a related field such as information systems, mathematics, or computer science.
- Typically 6 years of relevant work experience; consideration given to an equivalent combination of education and experience.
- Excellent written and verbal communication skills; strong organizational and analytical skills.
- Expertise in data extraction, transformation, and loading; data analysis; data profiling; and SQL tuning.
- Expertise in relational and dimensional databases on engines such as SQL Server, PostgreSQL, and Oracle.
- Strong experience designing and developing enterprise-scale data warehouse systems using Snowflake.
- Strong expertise designing and developing reusable, scalable data products with data quality scores and integrity checks.
- Strong expertise developing complex end-to-end data workflows using data ingestion tools such as SnapLogic, ADF, and Matillion.
- Experience with AWS/Azure cloud technologies, Agile methodologies, and DevOps is a big plus.
- Experience architecting cloud-native solutions across multiple B2B and B2B2C data domains.
- Experience architecting modern APIs for the secure sharing of data across internal application components as well as with external technology partners.
- Experience with data orchestration tools such as Apache Airflow and Chronos with a Mesos cluster.
- Expertise in designing and developing data transformation models in dbt.
- Ability to compare and analyze statistical information to identify patterns, relationships, and problems, and to use that information to design conceptual and logical data models and flowcharts for presentation to management.
- Experience developing CI/CD pipelines in Jenkins or Azure DevOps.
- Knowledge of Python for data manipulation and automation.
- Knowledge of data governance frameworks and best practices.
- Experience integrating with source code versioning tools like GitHub.

For more information or to view other opportunities, visit us at www.paladininc.com.
Paladin Consulting is an Equal Opportunity (EEO) employer.