GEICO

Senior Engineer - Data ETL (SQL & Spark)

GEICO, Chevy Chase, Maryland, United States, 20815


Position Summary

GEICO is seeking an experienced Senior Engineer with a passion for building high-performance, low-maintenance, zero-downtime platforms and applications. You will help drive our insurance business transformation as we transition from a traditional IT model to a tech organization with engineering excellence as its mission, while co-creating a culture of psychological safety and continuous improvement.

Position Description

Our Senior Engineer is a key member of the engineering staff, working across the organization to provide a frictionless experience to our customers and maintain the highest standards of protection and availability. Our team thrives on delivering high-quality technology products and services in a hyper-growth environment where priorities shift quickly. The ideal candidate has broad and deep technical knowledge, typically ranging from front-end UIs through back-end systems and all points in between.

Candidates must have expertise in SQL and strong knowledge of data engineering ETL concepts. They should have experience with at least one programming language (Python, Java, etc.) and be able to support new data development as well as maintain existing pipelines. Preferred experience includes a background in Databricks, dbt, Python, Airflow, Azure Data Factory, and/or Kafka. Team members will work to deliver customer needs within our enterprise data warehouse, which may include data ingestion, flattening, alerting, testing, transformation, optimization, and/or data mart development.

Position Responsibilities

As a Senior Engineer, you will:

- Design and implement a data ingestion platform
- Scope, design, and build scalable, resilient distributed systems
- Build product definitions and leverage your technical skills to drive toward the right solution
- Engage in cross-functional collaboration throughout the entire software lifecycle
- Lead design sessions and code reviews with peers to elevate the quality of engineering across the organization
- Define, create, and support reusable application components/patterns from a business and technology perspective
- Build the processes required for optimal extraction, transformation, and loading of data
- Work with other teams to design, develop, test, implement, and support technical solutions using full-stack development tools and technologies
- Perform unit tests and conduct reviews with other team members to make sure code is rigorously designed, elegantly coded, and effectively tuned for performance
- Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal and external technology communities, and mentoring other members of the engineering community
- Mentor other engineers
- Consistently share best practices and improve processes within and across teams

Qualifications

- Experience developing new and enhancing existing data processing components (data ingest, data transformation, data store, data management, data quality)
- Advanced programming and big data experience
- Understanding of data warehouse concepts, including data modeling and OLAP
- Experience working with cloud data solutions (Delta Lake, Iceberg, Hudi, Snowflake, Redshift, or equivalent)
- Experience with data formats such as Parquet, Avro, ORC, XML, and JSON
- Experience designing, developing, implementing, and maintaining solutions for data ingestion and transformation projects
- Experience working with streaming applications (Spark Streaming, Flink, Kafka, or equivalent)
- Experience with data processing/transformation using ETL/ELT tools such as dbt (Data Build Tool) or Databricks
- Experience with programming languages such as Python, Scala, Spark, and Java
- Experience with container orchestration services, including Docker and Kubernetes
- Strong working knowledge of SQL and the ability to write, debug, and optimize SQL queries and ETL jobs to reduce the execution window or resource utilization
- Experience with cloud computing (AWS, Microsoft Azure, Google Cloud)
- Exposure to messaging technologies such as Kafka, ActiveMQ, RabbitMQ, or similar
- Experience with REST and microservices is a big plus
- Experience developing systems that are scalable, resilient, and highly available
- Experience with Infrastructure as Code
- Experience with CI/CD deployment and test automation (ADO, Jenkins, Gradle, Artifactory, or equivalents)
- Experience with containerization (examples include Docker and Kubernetes)
- Experience with version control systems such as Git
- Experience with load testing and load testing tools
- Advanced understanding of monitoring concepts and tooling
- Experience with Elasticsearch, Dynatrace, ThousandEyes, InfluxDB, Prometheus, Grafana, or equivalents
- Experience architecting and designing new and existing systems
- Advanced understanding of DevOps concepts
- Strong problem-solving ability
- Ability to excel in a fast-paced environment
- Knowledge of developer tooling across the software development life cycle (task management, source code, building, deployment, operations, real-time communication)

Experience

- 4+ years of professional software development in at least one of the following: Java, Spark, Scala, Python
- 3+ years of experience with architecture and design
- 3+ years of experience with AWS, GCP, Azure, or another cloud service
- 2+ years of experience with open-source frameworks

Education

Bachelor’s degree in Computer Science, Information Systems, or equivalent education or work experience
