GEICO

Staff Software Engineer - Platform (Finance Data) REMOTE

GEICO, Chevy Chase, Maryland, United States, 20815


Position Description

Our Staff Engineer is a lead member of the engineering staff, working across the organization to provide a frictionless experience to our customers and maintain the highest standards of protection and availability. Our team thrives on building and delivering high-quality technology products and services while influencing best practices in a hyper-growth environment as priorities evolve. The ideal candidate has broad and deep technical knowledge, typically ranging from managing backend resources to system reliability and all points in between. They will have advanced experience and deep expertise in platform and data engineering to build and manage a Finance Data Lake, with multiple edge source integrations, from the ground up.

Position Responsibilities

- Take ownership and proactively drive execution and management of an end-to-end Data Lakehouse for Finance Data
- Focus on multiple areas and provide leadership to the engineering teams
- Own the complete solution across its entire life cycle
- Influence and build vision with engineering leadership, team members, customers, and other engineering teams to solve complex problems in building enterprise-class business applications
- Be accountable for the automation, quality, usability, and performance of the solutions
- Lead design sessions and code reviews to elevate the quality of engineering across the organization
- Utilize programming languages such as Python and SQL, NoSQL databases, container orchestration and infrastructure tooling including Terraform, Docker, and Kubernetes, and a variety of Azure tools and services to build an event-driven big data streaming platform for an ELT data pipeline
- Mentor junior team members professionally to help them realize their full potential
- Consistently share best practices and improve processes within and across teams

Qualifications

- Fluency and specialization in at least two modern languages, such as Java or Python, including object-oriented design and PowerShell scripting
- Experience building the architecture and design (architecture, design patterns, reliability, and scaling) of new and existing systems
- Experience with event-driven big data streaming infrastructure and ETL/ELT frameworks (e.g., Spark Streaming, Flink, Kafka, Hive, Hadoop, Airflow)
- Experience deploying highly robust and scalable data pipelines processing petabytes of data
- Experience working with Hadoop, SQL, and NoSQL platforms
- Experience with file formats such as Iceberg, Avro, JSON, and Parquet
- Fluency with DevOps concepts, containerization, test automation, CI/CD, and infrastructure as code, using tools such as GitHub, Kubernetes, Docker, Terraform, Helm, Ansible, and Chef
- Experience with the Azure ecosystem: Azure DevOps, Azure Data Lake, Azure Data Factory, Azure Databricks, Azure Storage
- Experience with observability and monitoring platforms for telemetry, alerting, SLAs, and SLOs, using Grafana, Azure Monitor, Application Insights, Dynatrace, or equivalents
- Experience with performance tuning of applications processing large amounts of data
- Experience with load testing and quality assurance
- Strong verbal and written communication skills

Experience

- 6+ years of professional experience in data software development, programming languages, and big data technologies
- 4+ years of experience with open-source frameworks
- 3+ years of experience with architecture and design
- 3+ years of experience with AWS, GCP, Azure, or another cloud service

Education

- Bachelor's degree in Computer Science, Information Systems, or equivalent education or work experience
