Argo AI

Sr. Data Engineer

Argo AI, Chicago, Illinois, United States, 60290


The Senior Data Engineer is a full-time, permanent position located on-site in our Chicago office. The ideal candidate has a proven track record of implementing data ingestion and transformation pipelines for large-scale organizations, 8+ years of experience, and a love of problem solving, translating requirements into data metrics and schemas that are easy to understand. We are seeking someone with deep technical skills across technologies such as Snowflake, Talend, AWS storage integration solutions, and DevOps, as well as strong interpersonal skills that enable them to guide people through conversations and gather the information they need.

Who You Are

You're someone who wants to see the impact of your work making a difference every day by driving outcomes and results for stakeholders and solving client needs. Your friends describe you as thorough, analytical, detail-oriented, and someone who gets things done. You have high standards, lead by example, and take pride in Argo like we do. Most of all, you love solving difficult business and technical challenges to make a difference in the lives of other people.

Qualifications

Core Skills: SQL • ETL • Talend • Snowflake • Scripting • Git • Batch and streaming technologies

• 8+ years' experience in Data Warehousing and Data Engineering.
• 3+ years' strong experience with Snowflake and Talend.
• Strong SQL and PL/SQL skills, with the ability to write queries and data extracts.
• Experience working with file formats such as Parquet, Avro, and JSON.
• Good understanding of Snowflake database architecture and the ability to design and build optimal data processing pipelines.
• Demonstrated skill in designing highly scalable ETL processes with complex data transformations across multiple data formats, including data cleansing, data quality assessment, error handling, and monitoring.
• Expertise in building and managing large-volume data processing platforms (both streaming and batch) is a must.
• Design, develop, manage, and monitor complex ETL data pipelines and support them across all environments.
• Experience with Python, JavaScript, or other scripting languages is a plus.
• Proficiency with platforms and technologies that support DevOps and the SDLC, leveraging CI/CD principles and best practices.
• Ability to work with developers to build CI/CD pipelines and self-service build tools that automate deployment processes.
• Working knowledge of orchestration frameworks such as Airflow, UC4, and AWS Step Functions is a plus.
• Knowledge of containerization (Docker/Kubernetes) is a plus.
• Provide support and troubleshooting for data platforms; must be willing to provide escalated on-call support for complicated and/or critical incidents.
• Work well within an Agile environment or have sound Agile Scrum team knowledge.
• Ability to manage and prioritize multiple assignments and to work independently and as part of a team.
• Provide technical guidance and mentoring to other team members.
• Good communication and cross-functional skills.
