Hewlett Packard
Principal Data Engineer
Hewlett Packard, Vancouver, Washington, United States, 98662
Do Big Data, AI, and cloud-native data lakes/data warehouses get you excited? Does the thought of building sustainable Big Data customer engagement solutions interest you? How about working with leading-edge technologies like Databricks on AWS? Can you see yourself integrating AI solutions with the best data tools in the world? Do you want to build CRM solutions at enterprise scale? Are you excited to provide best-in-class customer data privacy and build HP's relationship of trust with customers?
The Principal Data Engineer is a team leader who sets the direction for, and then implements, a modern customer engagement solution for HP's consumer subscription business. You will work with external and internal business partners to capture business requirements and help develop data-driven solutions that support the business, driving your work to completion with hands-on development responsibilities.
This position applies deep subject matter knowledge to solve common and complex business issues and recommend appropriate alternatives. The daily work consists of solving problems of diverse complexity and scope while exercising judgment within generally defined policies and practices.
Responsibilities:
Works with other architects and data engineers to establish secure and performant data architectures, enhancements, updates, and programming changes for portions and subsystems of the data platform, repositories, or models for structured/unstructured data.
Writes and executes complete testing plans, protocols, and documentation for assigned portion of data system or component; identifies defects and creates solutions for issues with code and integration into data system architecture.
Typically interacts with high-level individual contributors, managers, and program teams on a daily or weekly basis.
Helps define and drive portions of project requirements for data exchanges and business requirements with external and internal teams.
AI Objectives:
Develop a framework and environment for data conditioning and modeling.
Implement ML models trained to support efficient and effective campaign communications.
Integrate AI for orchestration of communications, products and services, channels, and touchpoints that are the most meaningful to the customer and their relationship with the company.
We are looking for world-class talent who brings the following key skills and experience to this role:
Bachelor's or Master's degree in Computer Science, Information Systems, Engineering or equivalent.
7+ years of relevant experience with detailed knowledge of data warehouse technical architectures, infrastructure components, ETL/ELT, and reporting/analytic tools.
Cloud-based data warehouses such as Redshift, Snowflake, etc.
Hadoop, Spark, Hive, and Delta Lake.
Databricks, AWS EMR, AWS Glue, etc.
Monitoring tools/frameworks such as Splunk, CloudWatch, etc.
Docker, Kubernetes, ECR, etc.
Parquet, Avro, Delta Lake, Spark.
CI/CD tools such as Jenkins, Codeway, etc., and source control tools such as GitHub.
Python, PySpark, Scala, and Java.
Experience building lambda, kappa, microservice, and batch architectures.
Designing data systems/solutions to manage complex data.
Nice to Have:
Experience with transformation tools such as dbt.
Building real-time streaming data pipelines.
Pub/sub streaming technologies such as Kafka, Kinesis, Spark Streaming, etc.
Marketing Communications and Data Privacy.
The base pay range for this role is $137,000 to $211,000 annually, with additional opportunities for pay in the form of bonus and/or equity (applies to US candidates only). Pay varies by work location, job-related knowledge, skills, and experience.
Benefits:
Health insurance
Dental insurance
Vision insurance
Long term/short term disability insurance
Employee assistance program
Flexible spending account
Life insurance
Generous time off policies, including:
4-12 weeks fully paid parental leave based on tenure
11 paid holidays
Additional flexible paid vacation and sick leave (US benefits overview)
The compensation and benefits information is accurate as of the date of this posting. The Company reserves the right to modify this information at any time, with or without notice, subject to applicable law.