RICEFW Technologies

Data Engineer - Databricks, Python, and SQL

RICEFW Technologies, Scottsdale, Arizona, US 85261


Responsibilities

- Develop data pipelines to ingest, load, and transform data from multiple sources.
- Leverage the Data Platform, running on Google Cloud, to design, optimize, deploy, and deliver data solutions in support of scientific discovery.
- Use programming languages such as Java, Scala, and Python, along with open-source RDBMS and NoSQL databases and cloud-based data store services such as MongoDB, DynamoDB, ElastiCache, and Snowflake.
- Continuously deliver technology solutions from product roadmaps, adopting Agile and DevOps principles.
- Collaborate with digital product managers and deliver robust cloud-based solutions that drive powerful experiences.
- Design and develop data pipelines, including Extract, Transform, Load (ETL) programs that extract data from various sources and transform it to fit the target model.
- Test and deploy data pipelines to ensure compliance with data governance and security policies.
- Move from implementation to ownership of real-time and batch processing and of data governance policies.
- Maintain and enforce the business contracts on how data should be represented and stored.
- Ensure that technical delivery is fully compliant with Security, Quality, and Regulatory standards.
- Keep relevant technical documentation up to date in support of the lifecycle plan for audits/reviews.
- Proactively engage in experimentation and innovation to drive relentless improvement, e.g., adopting new data engineering tools/frameworks.
- Implement ETL processes, moving data between systems including S3, Snowflake, Kafka, and Spark.
- Work closely with our Data Scientists, SREs, and Product Managers to ensure software is high quality and meets user requirements.

Required Qualifications

- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience as a data engineer building ETL/ELT data pipelines.
- Experience with data engineering best practices across the full software development life cycle, including coding standards, code reviews, source control management (Git), continuous integration, testing, and operations.
- Experience with Python and SQL; Java, C#, C++, Go, Ruby, or Rust is good to have.
- Experience with Agile, DevOps, and automation (of testing, build, deployment, CI/CD, etc.), and Airflow.
- Experience with Docker, Kubernetes, and shell scripting.
- 2+ years of experience with a public cloud (AWS, Microsoft Azure, Google Cloud).
- 3+ years of experience with distributed data/computing tools (MapReduce, Hadoop, Hive, EMR, Kafka, Spark, Gurobi, or MySQL).
- 2+ years of experience working on real-time data and streaming applications.
- 2+ years of experience with NoSQL implementations (DynamoDB, MongoDB, Redis, ElastiCache).
- 2+ years of data warehousing experience (Redshift, Snowflake, Databricks, etc.).
- 2+ years of experience with UNIX/Linux, including basic commands and shell scripting.
- Experience with visualization tools such as SSRS, Excel, Power BI, Tableau, Google Looker, and Azure Synapse.
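The core of this role is building extract-transform-load pipelines in Python and SQL. As a rough illustration of that pattern only, here is a minimal, self-contained sketch; the table, column names, and sample data are hypothetical, and SQLite stands in for a real warehouse target such as Snowflake or Databricks:

```python
# Minimal ETL sketch (illustrative only; a production pipeline would read
# from sources like S3 or Kafka and load into a warehouse, not SQLite).
import csv
import io
import sqlite3

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse rows from a CSV source (stand-in for a raw feed)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: cast types and normalize fields to fit the target model."""
    return [
        (row["id"].strip(), row["name"].strip().title(), float(row["amount"]))
        for row in rows
        if row.get("amount")  # drop records missing a required field
    ]

def load(records: list[tuple], conn: sqlite3.Connection) -> int:
    """Load: upsert into the target table (stand-in for the warehouse)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(id TEXT PRIMARY KEY, name TEXT, amount REAL)"
    )
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", records)
    conn.commit()
    return len(records)

raw = "id,name,amount\n1, alice ,10.5\n2,bob,\n3,carol,7\n"
conn = sqlite3.connect(":memory:")
n = load(transform(extract(raw)), conn)
print(n)  # 2 — the row with no amount is dropped during transform
```

In practice each stage would also be scheduled (e.g., as an Airflow task) and validated against governance and security policies before deployment, as the responsibilities above describe.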

Required Skills: Python, SQL
Additional Skills: Python Developer, Data Engineer
Background Check: Yes