TRM Labs

Senior or Staff Software Engineer, Data Platform

TRM Labs, San Francisco, California, United States, 94199


TRM is on a mission to build a safer financial system for billions of people. We deliver a blockchain intelligence data platform to financial institutions, crypto companies, and governments to fight cryptocurrency fraud and financial crime. We consider our business — and our profit — as a way to move towards our mission sustainably and at scale.

The Data Platform team collaborates with an experienced group of data scientists, engineers, and product managers to build highly available and scalable data infrastructure for TRM's products and services. As a Software Engineer on the Data Platform team, you will build and operate mission-critical systems and data services that analyze blockchain transaction activity at petabyte scale, ultimately helping to build a safer financial system for billions of people.

The impact you'll have here:

- Build highly reliable data services to integrate with dozens of blockchains.
- Develop complex ETL pipelines that transform and process petabytes of structured and unstructured data in real time.
- Design data models for optimal storage and retrieval, supporting sub-second latency for querying blockchain data.
- Oversee the deployment and monitoring of large database clusters with an unwavering focus on performance and high availability.
- Collaborate across departments, partnering with data scientists, backend engineers, and product managers to design and implement novel data models that enhance TRM's products.

What we're looking for:

- A Bachelor's degree (or equivalent) in Computer Science or a related field.
- A proven track record, with 8+ years of hands-on experience architecting distributed systems and guiding projects from initial ideation through to successful production deployment.
- Exceptional programming skills in Python and a strong command of SQL or SparkSQL.
- Versatility across the full spectrum of data engineering, with depth in one or more of the following areas:

- In-depth experience with data stores such as Iceberg, Trino, BigQuery, StarRocks, and Citus.
- Proficiency in data pipeline and workflow orchestration tools such as Airflow and dbt.
- Expertise in data processing and streaming technologies, including Spark, Kafka, and Flink.
- Competence in deploying and monitoring infrastructure on public cloud platforms, using tools such as Docker, Terraform, Kubernetes, and Datadog.
- Proven ability to load, query, and transform extensive datasets.
