Inficare
Data Engineer
Inficare, Phoenix, Arizona, United States, 85003
Role: Data Engineer
Location: Phoenix, AZ (onsite)
Duration: Long-term contract
Job Description:
Certified in GCP
The client is looking for someone with:
1. Strong GCP and Python skills
2. Knowledge of migration projects
3. Experience building reusable, idempotent pipelines (see the sketch after this list)
4. An emphasis on data engineering principles in the pipelines being built
5. Comprehensive knowledge of the core GCP services
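For illustration only, below is a minimal sketch of one way an idempotent load step could look on GCP, assuming the google-cloud-bigquery client library; the project, dataset, table, and column names are placeholders, not details from this posting. Because the MERGE is keyed on a unique id, re-running the step against the same staging data does not create duplicate rows.

# Hypothetical sketch of an idempotent BigQuery load step.
# Project, dataset, table, and column names are placeholders.
from google.cloud import bigquery

def merge_staged_events(run_date: str) -> None:
    """Upsert one day's staged events; safe to re-run because the MERGE is keyed on event_id."""
    client = bigquery.Client()
    sql = f"""
    MERGE `my_project.analytics.events` AS target
    USING `my_project.staging.events_{run_date}` AS source
    ON target.event_id = source.event_id
    WHEN MATCHED THEN
      UPDATE SET target.payload = source.payload, target.event_ts = source.event_ts
    WHEN NOT MATCHED THEN
      INSERT (event_id, payload, event_ts)
      VALUES (source.event_id, source.payload, source.event_ts)
    """
    client.query(sql).result()  # blocks until the MERGE job completes

if __name__ == "__main__":
    merge_staged_events("20240101")  # e.g. reads staging table my_project.staging.events_20240101

A partition-overwrite load (WRITE_TRUNCATE against a date partition) is another common way to get the same rerun-safe behavior.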
6+ years of experience in Data Engineering with an emphasis on Data Warehousing and Data Analytics.
4+ years of experience with one of the leading public clouds.
4+ years of experience in the design and build of scalable data pipelines that handle extraction, transformation, and loading.
4+ years of experience with Python or Scala, with working knowledge of Notebooks.
2+ years of hands-on experience on GCP Cloud data implementation projects (Dataflow, Dataproc, Cloud Composer, BigQuery, Cloud Storage, GKE, Airflow, etc.).
At least 2 years of experience in Data Governance and Metadata Management.
Ability to work independently, solve problems, and keep stakeholders updated.
Analyze, design, develop, and deploy solutions per business requirements.
Strong understanding of relational and dimensional data modeling.
Experience with DevOps and CI/CD related technologies.
Excellent written and verbal communication skills, including experience with technical documentation and the ability to communicate with senior business managers and executives.
JOB SUMMARY & PRINCIPAL DUTIES:
Solid experience with, and understanding of, the considerations for large-scale solutioning and operationalization of data warehouses, data lakes, and analytics platforms on GCP is a must.
Monitor the Data Lake and Warehouse to ensure that the appropriate support teams are engaged at the right times.
Design, build, and test scalable data ingestion pipelines; perform end-to-end automation of the ETL process for the various datasets being ingested.
Participate in peer reviews and provide feedback to engineers, keeping development best practices and business and technical requirements in view.
Determine the best way to extract application telemetry data, structure it, and send it to the proper tool for reporting (Kafka, Splunk).
Work with business and cross-functional teams to gather and document requirements to meet business needs.
Provide support as required to ensure the availability and performance of ETL/ELT jobs.
Provide technical assistance and cross-training to business and internal team members.
Collaborate with business partners on continuous improvement opportunities.
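As a rough illustration of the end-to-end ETL automation described above, the sketch below shows a minimal Cloud Composer (recent Airflow 2.x) DAG using the TaskFlow API; the schedule, bucket paths, and task bodies are assumptions made for the example, not part of this posting.

# Hypothetical Cloud Composer (Airflow 2.x) DAG sketch; paths and names are placeholders.
from datetime import datetime

from airflow.decorators import dag, task

@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["etl"])
def daily_events_etl():
    @task
    def extract(ds=None) -> str:
        # Point at the raw file landed for the logical date and return its GCS path.
        return f"gs://example-bucket/raw/{ds}/events.json"

    @task
    def transform(path: str) -> str:
        # Placeholder transform; in practice this step might be delegated to Dataflow or Dataproc.
        return path.replace("/raw/", "/curated/").replace(".json", ".parquet")

    @task
    def load(path: str) -> None:
        # Load the curated file into BigQuery (an idempotent pattern is sketched earlier).
        print(f"loading {path}")

    load(transform(extract()))

daily_events_etl()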