Qarik

Data Engineer (Contract)

Qarik, Chicago, Illinois, 60290


Qarik Overview

Qarik Group, LLC is a technology consulting firm that combines senior-level expertise and experience to help clients see further and go faster, solving big business problems. We have a saying at Qarik that sums up our culture: "Greatness grows greatness." It reflects how we support each other by sharing expertise, experience, and opportunity. One person's insight smooths the way for another to succeed. And it's not just about supporting each other; it's as much about how we help our clients' businesses thrive. By using what we collectively know and combining ideas, breathtaking things happen, not least of which is how your career can go further and faster than you ever imagined possible. So, if you have greatness to give, we've got the perfect place to help it grow.

About Our Work

We work "with" our clients, not "for" them. We embrace agility instead of rigid deliverables and milestones: establish a vision, create a theme-based roadmap, and then execute. We work in the trenches with our clients' engineers to learn from each other's experiences and capabilities. We frequently have a stake in their success, ensuring that everyone is aiming toward the same goals. We check badges and egos at the door and bring incredible people together to achieve great outcomes. This allows everyone to bring their whole self to the team; you should never feel like you have to be somebody else.

Overview of the Role

We are looking for an experienced Data Engineer to join our growing team. In this role, you will work closely with our clients to execute effective data strategies, encompassing production-quality ETL and streaming data processing pipelines, data lake design, data governance, and the development of reports and dashboards. Your responsibilities will include understanding clients' objectives and the insights they aim to derive, allowing you to tailor system designs and implementations to meet those needs. You will also communicate relevant concepts to both technical and non-technical audiences, and architect and deliver innovative solutions to complex problems. Our ideal team member possesses strong mathematical and statistical expertise, natural curiosity, a creative mindset, and proficiency with tools such as Databricks, Jupyter Notebooks, Pandas, and scikit-learn, along with experience with AWS and CI/CD practices.

Key Responsibilities

- Client Data Strategy: Guide clients through their data journey with comprehensive knowledge of the GCP and AWS data product offerings and ecosystems.
- ETL and Data Processing: Develop data ingest and ETL integrations using tools such as Apache Beam and Databricks to ensure optimal performance across streaming and batch modes.
- Programming and Analysis: Use Python, Jupyter Notebooks, Pandas, and scikit-learn for data manipulation, analysis, and machine learning tasks.
- SQL Development: Write efficient, modular SQL to answer complex data questions with optimal queries.
- Visualization and Reporting: Design, build, and deliver visualization capabilities such as reporting and dashboarding via off-the-shelf systems (e.g., Looker, Tableau) as well as custom-developed metrics visualizations.
- Infrastructure Management: Architect, create, and manage the technological infrastructure of a data platform, incorporating CI/CD practices for streamlined deployment and maintenance.
- Data Innovation: Evaluate business needs and objectives; generate ideas for data innovation that meet clients' business goals and enable business leaders to perform exploratory analytics.
- Collaboration and Communication: Communicate relevant data concepts effectively to both technical and non-technical stakeholders, ensuring alignment and understanding across teams.

Key Criteria

Technical Expertise:
- Programming Languages: Proficiency in Python, SQL, and at least one other relevant programming language (e.g., R, Java).
- Data Tools: Strong experience with Databricks, Jupyter Notebooks, Pandas, and scikit-learn for data processing and analysis.
- Cloud Platforms: Experience with both GCP and AWS data services and ecosystems.
- Data Warehousing: Strong experience with data warehousing solutions (e.g., BigQuery) and transactional databases (e.g., Spanner, AlloyDB, MySQL, Postgres).
- CI/CD: Familiarity with CI/CD tools and practices for automating data pipeline deployments.

Experience and Skills:
- Data Engineering Experience: Previous experience as a Data Engineer or in a similar role working with big data pipelines.
- Visualization and Analytics: Proficiency with visualization, reporting, and analytics tools such as Looker and Tableau.
- Big Data Tools: Knowledge of other big data tools such as Hadoop and Flink is a plus.
- SQL Proficiency: Fluency in SQL or similar data manipulation syntax for relational databases.
- Communication Skills: Effective communication, interpersonal, and organizational skills to collaborate with clients and team members.
- Self-Motivation: Ability to self-motivate and manage your own tasks and projects according to an agreed roadmap.

Qarik offers a competitive and comprehensive employee compensation and benefits package.

Qarik is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity and expression, national origin, disability, or protected veteran status.

For further information, please contact us at careers@qarik.com.