Equiliem
Data Engineer
Equiliem, Myrtle Point, Oregon, United States, 97458
Job Description: Bachelor's degree or higher in Computer Science, or a combination of relevant education, experience, and training.
6+ years of experience in Data Engineering.
4+ years of experience using Python for data processing, with proficiency in libraries such as Pandas, NumPy, PySpark, pyodbc, pymssql, Requests, Boto3, simple-salesforce, and json.
3+ years of experience in Data Warehouse technologies - Databricks and Snowflake.
Strong Data Engineering fundamentals (ETL, Modeling, Lineage, Governance, Partitioning & Optimization, Migration).
Strong Databricks-specific skills (Apache Spark, Databricks SQL, Delta Lake, Delta Sharing, Notebooks, Workflows, RBAC, Unity Catalog, Encryption & Compliance).
Strong SQL skills (query performance, Stored Procedures, Triggers, schema design) and knowledge of one or more RDBMS and NoSQL DBs like MSSQL/MySQL and DynamoDB/MongoDB/Redis.
Cloud Platform Expertise: AWS and/or Azure.
Experience with one or more ETL tools like Apache Airflow/AWS Glue/Azure Data Factory/Talend/Alteryx.
Excellent knowledge of coding and architectural design patterns.
Passion for troubleshooting, investigation, and root-cause analysis.
Excellent written and verbal communication skills.
Ability to multitask in a high-energy environment.
Experience with Agile methodologies and knowledge of Git, Jenkins, GitLab, Azure DevOps, and tools like Jira/Confluence.
Nice to have:
Tools like Collibra and Hackolade.
Migration Strategy and Tooling
Data Migration Tools: Experience with migration tools and frameworks, or custom-built solutions, to automate moving data from Snowflake to Databricks (a minimal sketch follows this list).
Testing and Validation: Ensuring data consistency and validation post-migration with testing strategies like checksums, row counts, and query performance benchmarks (see the validation sketch below).
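
For illustration, here is a minimal sketch of the kind of custom migration job the bullet above could involve: copying one table from Snowflake into a Delta table on Databricks with PySpark. It assumes a Databricks cluster with the Snowflake Spark connector installed; the connection options, table names, and catalog path are hypothetical placeholders, not specifics from this role.

    # Minimal Snowflake -> Databricks (Delta Lake) copy job - a sketch only.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical connection options; in practice, pull credentials from a
    # Databricks secret scope rather than hard-coding them.
    sf_options = {
        "sfURL": "myaccount.snowflakecomputing.com",
        "sfUser": "etl_user",
        "sfPassword": "***",
        "sfDatabase": "SALES_DB",
        "sfSchema": "PUBLIC",
        "sfWarehouse": "ETL_WH",
    }

    # Read the source table through the Snowflake Spark connector.
    df = (
        spark.read.format("snowflake")
        .options(**sf_options)
        .option("dbtable", "ORDERS")
        .load()
    )

    # Land it as a managed Delta table (Unity Catalog three-level name assumed).
    df.write.format("delta").mode("overwrite").saveAsTable("main.sales.orders")

In a real migration this read/write shape would typically be wrapped in a Databricks Workflow or Airflow DAG and run per table.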
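
And a companion sketch for the post-migration validation bullet: comparing row counts and an order-independent checksum between source and target. It reuses the hypothetical sf_options and table names from the migration sketch above.

    # Post-migration validation sketch: row counts plus a content checksum.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    source = (
        spark.read.format("snowflake")
        .options(**sf_options)            # hypothetical options, defined above
        .option("dbtable", "ORDERS")
        .load()
    )
    target = spark.table("main.sales.orders")

    def fingerprint(df):
        # Hash each row's concatenated columns with crc32, then sum the
        # hashes; summing makes the checksum independent of row order.
        row_hash = F.crc32(
            F.concat_ws("|", *[F.col(c).cast("string") for c in df.columns])
        )
        return df.agg(
            F.count("*").alias("rows"), F.sum(row_hash).alias("checksum")
        ).first()

    src, tgt = fingerprint(source), fingerprint(target)
    assert (src["rows"], src["checksum"]) == (tgt["rows"], tgt["checksum"]), \
        "post-migration mismatch between Snowflake and Databricks"

Query performance benchmarks are usually handled separately, by replaying a representative query set against both platforms and comparing latencies.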
Comments for Suppliers: Preferred location is Beaverton WHQ - please note if the candidate is local.
If remote, the candidate must work PST hours - please confirm willingness to work PST hours in the notes.
The candidate will be working with two lead engineers, one FTE engineer, and four other ETWs.