Glint Tech Solutions LLC

Data Engineer (Contractor)

Glint Tech Solutions LLC, Phoenix, Arizona, United States, 85003


About the Job

Position Overview:

We are seeking an experienced Senior Data Engineer with a strong background in building, deploying, and supporting data ingestion and batch applications on Google Cloud Platform (GCP). The ideal candidate will have extensive experience with BigQuery, Cloud Storage, Dataproc, and Cloud Composer/Airflow, along with proficiency in SQL, Python, PySpark, and Hive. You will collaborate with cross-functional teams to design and implement robust data solutions that drive business intelligence and analytics.

Key Responsibilities:

Design, build, and maintain scalable data pipelines for data ingestion and processing on GCP.

Leverage BigQuery, Cloud Storage, Dataproc, and Cloud Composer/Airflow to create and optimize data workflows and batch processing applications.

Develop and optimize complex SQL queries for data transformation and analysis.

Write and maintain production-level Python scripts and PySpark jobs for data processing and analysis.

Implement data governance and ensure data quality, accuracy, and security throughout the data lifecycle.

Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver actionable insights.

Monitor and troubleshoot data pipelines, ensuring timely data delivery and performance optimization.

Stay up to date with emerging technologies and best practices in big data engineering and analytics.

Qualifications:

Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.

5+ years of experience in data engineering or a related role, with a focus on big data technologies.

Strong proficiency in GCP services, particularly BigQuery, Cloud Storage, Dataproc, and Cloud Composer/Airflow.

Expertise in SQL and experience with data manipulation in Hive and other big data frameworks.

Solid programming skills in Python, with hands-on experience in PySpark.

Familiarity with Hadoop and Spark ecosystems.

Strong analytical and problem-solving skills with attention to detail.

Excellent communication and collaboration skills, with the ability to work effectively in a team environment.