TechnoGen


TechnoGen, Denver, Colorado, United States, 80285


Title: Scala Developer IV

Location: Remote (Denver, CO)

Duration: Contract

Requirements

Job Description:

- 3+ years' Scala programming experience
- Spark streaming experience (using this for batch jobs is a plus)
- Kafka

Plusses

- Akka Streams data library experience
- Core Java development background
- SQL database experience
- Python
- Lambda

Company Overview

Client is a leading broadband communications company and the second largest cable operator in the United States. Spectrum is the premier TV, Internet, and voice service from Charter; similar to how Comcast brands themselves as Xfinity for their TV and internet services. Charter provides a full range of services, including Spectrum TV, Spectrum Internet, Spectrum Voice, and Spectrum Mobile. They have over 28 million customers, service 41 states in the US, and have 98,000 internal employees.

- Design and build scalable data pipelines to ingest, transform, and deliver company data to internal and external stakeholders, including multiple reporting teams and external media partners.
- Develop and maintain data pipelines for real-time and batch processing (hourly).
- Implement new data exports and update business logic for various data feeds.
- Contribute to the development of billion-scale streaming data pipeline infrastructure.
- Ensure data quality and integrity throughout the ingestion and processing lifecycle.

Typical Day-to-Day

- Participate in a brief morning status meeting (15 minutes) and a developer sync with the greater team (10-30 minutes).
- Work on assigned tasks with minimal meetings outside of weekly team syncs.
- Agile development environment with 2-week sprints and production releases on Tuesdays during working hours.
- Work hours based on Mountain Standard Time (MST).
- On-call responsibilities (Mon-Fri, 8am-7pm MST, about once every six weeks).

Skills And Experience

- 3+ years of experience coding in Scala with a strong foundation in functional programming concepts.
- Experience with Apache Spark Structured Streaming (in Scala), with a focus on migrating existing projects and building new features using Spark.
- Experience with Apache Kafka for real-time message queuing, with an understanding of consumer and producer offset management and at-least-once and at-most-once semantics.
- Familiarity with container orchestration concepts, preferably Kubernetes.
- Basic understanding of AWS cloud technologies: S3, Athena, Lambda, IAM, EMR.
- Comfortable using SQL for data exploration.
- Experience supporting large-scale data feeds.
- Nice to have: experience building infrastructure with Terraform.

We are looking for a candidate with a strong foundation in Scala who is transitioning to a data background, rather than a PySpark data engineer who is new to Scala or streaming concepts.

Additional Information
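For candidates unfamiliar with the delivery-semantics terminology above: whether a Kafka-style consumer commits its offset before or after processing a record is exactly what distinguishes at-most-once from at-least-once delivery. The toy Scala sketch below (not code from this role's pipeline; all names are illustrative) simulates a consumer that crashes mid-record and then restarts from its last committed offset, showing that commit-after-process replays the in-flight record while commit-before-process drops it.

```scala
object OffsetSemantics {
  // Simulate one consumer run that crashes mid-record at `crashIndex`
  // (after the first of its two actions, before the second), then a
  // clean restart that resumes from the last committed offset.
  // Returns every record that was processed, including duplicates.
  def run(records: Vector[String], commitFirst: Boolean, crashIndex: Int): Vector[String] = {
    var committed = 0 // offset of the next record to read after a restart
    val processed = scala.collection.mutable.ArrayBuffer.empty[String]

    // Returns false if the consumer crashed while handling record i.
    def step(i: Int, crashHere: Boolean): Boolean = {
      if (commitFirst) {
        committed = i + 1                 // at-most-once: commit first...
        if (crashHere) return false       // ...crash => record lost
        processed += records(i)
      } else {
        processed += records(i)           // at-least-once: process first...
        if (crashHere) return false       // ...crash => record replayed later
        committed = i + 1
      }
      true
    }

    // First run: crashes mid-record at crashIndex.
    var i = committed
    var alive = true
    while (alive && i < records.length) {
      alive = step(i, crashHere = i == crashIndex)
      i += 1
    }
    // Restart: resume from the last committed offset, no crash this time.
    i = committed
    while (i < records.length) { step(i, crashHere = false); i += 1 }
    processed.toVector
  }
}
```

With records `Vector("a", "b", "c")` and a crash while handling "b", commit-after-process yields `Vector("a", "b", "b", "c")` (duplicate "b": at-least-once), while commit-before-process yields `Vector("a", "c")` ("b" lost: at-most-once).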

- This role offers the opportunity to work on challenging and impactful projects within a collaborative team environment.
- You will be instrumental in building and maintaining the data infrastructure that supports critical business decisions.
- Per company policy, full-time conversion will require the candidate to be local to Denver or certain hub locations at this time.