Saxon Global
Data Engineer Level II
Saxon Global, Union, Kentucky, United States, 41091
Job Description
As a Senior Data Engineer, you will have the opportunity to build solutions that ingest, store, and distribute our big data to be consumed by data scientists and our products. Our data engineers use Python, Hadoop, Databricks, PySpark, Hive, and other data engineering technologies and visualization tools to deliver data capabilities and services to our scientists, products, and tools.
Requirements
• 5+ years of professional data development experience
• 3+ years developing with Hadoop/HDFS/Databricks and SQL/NoSQL databases (Oracle, SQL Server, Mongo)
• 3+ years of experience with PySpark/Spark
• 3+ years of experience developing with Python, Java, or Scala
• Full understanding of ETL and data warehousing concepts
• Exposure to version control systems (Git, SVN)
• Exposure to CI/CD pipelines
• Strong understanding of Agile principles (Scrum)
• Bachelor's degree (Computer Science, Management Information Systems, Mathematics, Business Analytics, or STEM)
Preferred Skills
• Experience with Azure
• Exposure to NoSQL databases (Mongo, Cassandra)
• Experience with Databricks
• Exposure to service-oriented architecture
• Exposure to BI tooling (Tableau, Power BI, Cognos, etc.)
• Proficiency with relational data modeling and/or data mesh principles
• Experience with CI/CD (Continuous Integration/Continuous Delivery)
Key Responsibilities
• Participate in the design and development of Hadoop and cloud-based solutions
• Perform unit and integration testing
• Program hands-on using TDD, usually in a pair-programming environment
• Design and develop applications for all data warehousing components, including real-time data ingestion, transformations, aggregations, and the related data quality strategy
• Design and implement multi-source data channels and ETL processes
• Collaborate with architects and lead engineers to ensure consistent development practices
• Mentor junior engineers
• Participate in retrospective reviews
• Participate in estimation for new work and releases
• Collaborate with other engineers to solve complex problems and bring new perspectives
• Drive improvements in people, practices, and procedures
• Embrace new technologies and an ever-changing environment
Required Skills: Python
Background Check: Yes
Drug Screen: Yes
Notes:
Selling points for candidate:
Project Verification Info:
Candidate must be your W2 Employee: Yes
Exclusive to Apex: No
Face to face interview required: No
Candidate must be local: No
Candidate must be authorized to work without sponsorship: No
Interview times set: No
Type of project: Development/Engineering
Master Job Title: Other
Branch Code: Cincinnati