Perpay Inc
Junior Data Engineer
Perpay Inc, Philadelphia, Pennsylvania, United States, 19117
About the Role:
As a Junior Data Engineer at Perpay, you will help build and maintain our data pipelines and architectures, supporting our data products and insights. You will work alongside data scientists, analysts, and other team members to advance Perpay’s mission of creating inclusive financial products that improve the lives of our members. With the launch of our credit card, the need for effective data engineering has grown to meet our expanding modeling, reporting, and analytical requirements.
You will gain exposure to a wide variety of projects across different business domains, ensuring our data infrastructure supports essential functions in risk, commerce, marketing, operations, and more. Your work will directly impact our customers by enabling automated, efficient data-driven services.
We are looking for a Junior Data Engineer who is quantitative, eager to learn, and passionate about data. The ideal candidate has a foundational understanding of data engineering principles and is excited to grow their expertise in building and maintaining data pipelines, implementing ETL processes, and supporting data governance initiatives. You should be comfortable working in a fast-paced, entrepreneurial environment and handling multiple tasks with various stakeholders.
Why You’ll Love It Here:
Learning Opportunities: Gain hands-on experience and learn from experienced professionals in the field.
Variety: Work on a diverse set of projects that will expose you to different areas of data engineering and business functions.
Growth: Opportunities for career advancement and professional development.
Collaborative Environment: Join a team that values collaboration and continuous improvement.
Our greatest strength is our people, and we’d love for you to be one of them!
Responsibilities:
Assist in the development and maintenance of ETL pipelines using tools like AWS Glue, Apache Airflow, and Fivetran
Support data producers in understanding data sources and contribute to the design and implementation of data models using Redshift and Snowflake
Implement basic data governance practices, including metadata management and data lineage tracking with tools such as Apache Atlas
Collaborate with team members to develop scalable data solutions, ensuring data quality and reliability
Help identify and resolve data-related issues, applying optimization techniques like indexing and partitioning
Learn and contribute to the ongoing development of a modern data architecture, gaining exposure to advanced data engineering practices
Stay current with industry trends and best practices, continuously developing technical skills in data engineering
What You’ll Bring:
Bachelor’s degree in a quantitative/technical field (Computer Science, Statistics, Engineering, Mathematics, Physics, Chemistry)
0-2 years of experience in data engineering or related fields, with a strong eagerness to learn and grow
Basic proficiency in SQL and Python, with a willingness to learn cloud data platforms such as AWS, GCP, or Azure
Familiarity with data warehouse solutions like Redshift or Snowflake and data orchestration tools like Apache Airflow is a plus
Understanding of data modeling and ETL processes, with a keen interest in data governance and quality practices
Strong problem-solving skills and the ability to work collaboratively in a team environment
Excellent communication skills and a proactive approach to learning and development
Hey, we know not everybody checks all the boxes, so if you’re interested, please apply because you could be just what we’re looking for!