Saxon Global
Sr. Data Engineer
Saxon Global, Cincinnati, Ohio, United States, 45208
Take ownership of features and drive them to completion through all phases of the 84.51° SDLC. This includes internal and external facing applications as well as process improvement activities:
• Design and develop Azure solutions
• Implement automated unit and integration testing
• Collaborate with architecture and lead engineers to ensure consistent development practices
• Participate in retrospective reviews
• Participate in the estimation process for new work and releases
• Collaborate with other engineers to solve complex problems and bring new perspectives to them
• Drive improvements in data engineering practices, procedures, and ways of working
• Embrace new technologies and an ever-changing environment
Requirements
• 5+ years of professional data development experience
• 3+ years of experience developing with Azure and SQL (Oracle, SQL Server)
• 3+ years of experience with PySpark/Spark
• 2+ years of experience with Azure Data Factory and/or Azure Databricks
• Experience working with large-scale data sets and distributed systems
• Full understanding of ETL and data warehousing concepts
• Exposure to version control software (Git, GitHub SaaS)
• Strong understanding of Agile principles (Scrum)
• Bachelor's degree (Computer Science, Management Information Systems, Mathematics, Business Analytics, or STEM)
Bonus points for experience in the following:
• Proficiency in relational data modeling
• Understanding of data mesh principles
• Experience with CI/CD (Continuous Integration/Continuous Delivery)
• Experience with Python library development
• Experience with Structured Streaming (Spark or otherwise)
• Experience with Kafka and/or Azure Event Hubs
• Experience with GitHub SaaS / GitHub Actions
• Experience with Snowflake
• Exposure to service-oriented architecture
• Exposure to BI tooling (Tableau, Power BI, Cognos, etc.)
Required Skills: Azure Databricks, SQL, PySpark
Background Check: Yes
Drug Screen: Yes
Notes:
Selling points for candidate:
Project Verification Info:
Candidate must be your W2 Employee: Yes
Exclusive to Apex: No
Face to face interview required: No
Candidate must be local: No
Candidate must be authorized to work without sponsorship: No
Interview times set: No
Type of project: Development/Engineering
Master Job Title: Other
Branch Code: Cincinnati