JobRialto
Snowflake Developer
JobRialto, Atlanta, Georgia, United States, 30383
Job Summary:
We are seeking an experienced Data Engineer with a strong focus on PySpark and graph databases (Neo4j, Neptune DB, or similar). The ideal candidate will have extensive experience in data engineering and development, along with a track record of working effectively with business and IT stakeholders. This role requires proficiency in big data technologies, cloud platforms, and data warehousing solutions.
Key Responsibilities:
• Develop and maintain data engineering solutions with a focus on PySpark and graph databases (Neo4j, Neptune DB, or similar).
• Collaborate with business and IT stakeholders to understand requirements and deliver solutions.
• Design and implement data models, database architecture, and schema design.
• Write and debug Python and Spark code.
• Utilize big data technologies such as Hadoop, Hive, and Kafka.
• Work with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
• Write and optimize SQL queries for relational databases (e.g., PostgreSQL, MySQL, SQL Server).
• Implement data warehousing solutions such as Redshift, Snowflake, Databricks, or Google BigQuery.
• Design and manage data lake architectures and data storage solutions.
• Implement CI/CD pipelines and use version control systems (e.g., Git).
• Troubleshoot complex issues and provide effective solutions.
• Communicate and collaborate effectively within a team environment.
Required Qualifications:
• 5-8 years of experience in data engineering, with a focus on PySpark and graph databases (Neo4j, Neptune DB, or similar).
• 5-8 years of development experience working with business and IT stakeholders.
• Strong understanding of data modeling, database architecture, and schema design.
• Proficiency in Python and Spark, with strong coding and debugging skills.
• Experience with big data technologies such as Hadoop, Hive, and Kafka.
• Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP).
• Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
• Experience with data warehousing solutions such as Redshift, Snowflake, Databricks, or Google BigQuery.
• Familiarity with data lake architectures and data storage solutions.
• Knowledge of CI/CD pipelines and version control systems (e.g., Git).
• Excellent problem-solving skills and the ability to troubleshoot complex issues.
• Strong communication and collaboration skills, with the ability to work effectively in a team environment.
Preferred Qualifications:
• Strong problem-solving skills and attention to detail.
• Ability to work effectively in a team environment.
• Excellent communication skills.
Certifications (if any):
• Relevant certifications in data engineering or related fields are a plus.
Education:
Bachelor's degree