DRW
Senior Data Engineer
DRW, Chicago, Illinois, United States, 60290
DRW is a diversified trading firm with over three decades of experience bringing sophisticated technology and exceptional people together to operate in markets around the world. We value autonomy and the ability to quickly pivot to capture opportunities, so we operate using our own capital and trade at our own risk.

Headquartered in Chicago with offices throughout the U.S., Canada, Europe, and Asia, we trade a variety of asset classes including Fixed Income, ETFs, Equities, FX, Commodities, and Energy across all major global markets. We have also leveraged our expertise and technology to expand into three non-traditional strategies: real estate, venture capital, and cryptoassets.

We operate with respect, curiosity, and open minds. The people who thrive here share our belief that it's not just what we do that matters, it's how we do it.
DRW is a place of high expectations, integrity, innovation, and a willingness to challenge consensus.

As a Senior Data Engineer on our Unified Platform team, you will play an integral role in designing and building an innovative data platform used by Traders, Quantitative Researchers, and Back-Office personnel to analyze financial markets, identify trading opportunities, establish new strategies, and ensure smooth back-office processes.

Technical requirements summary:
- Experience designing and building data-intensive distributed systems
- Experience working within modern batch and streaming data ecosystems
- Expert in Java/Scala or Python, with experience in SQL and Bash
- Able to own, organize, and steer team projects
- Contribute to project management and project reporting
- Lead and mentor junior members of the team on engineering best practices and code quality

What you will do in this role:
- Help design, build, and manage DRW's Unified Data Platform and support its users.
- Work closely with Traders and Researchers to identify appropriate data sources, and implement processes to onboard and manage new data sources for analysis to unlock future trading opportunities.
- Design and develop data solutions that help discover, purchase, organize, track usage of, manage rights to, and control the quality of data sets, addressing the needs of various DRW trading teams and strategies.
- Monitor data ingestion pipelines and data quality to ensure the stability, reliability, and quality of the data. Contribute to the monitoring and quality-control software and processes.

What you will need in this role:
- 7+ years of experience working with modern data technologies and/or building data-intensive distributed systems
- Expert-level skills in Java/Scala or Python, with a proven ability to produce high-quality, maintainable code
- Strong familiarity with SQL and Bash
- Experience leveraging and building cloud-native technologies for scalable data processing
- Prior experience with both batch and streaming systems, and an understanding of the limitations those paradigms impose
- Experience with an array of data processing technologies (e.g., Flink, Spark, Polars, Dask)
- Experience with an array of data storage technologies (e.g., S3, RDBMS, NoSQL, Delta/Iceberg, Cassandra, ClickHouse, Kafka)
- Experience with an array of data formats and serialization systems (e.g., Arrow, Parquet, Protobuf/gRPC, Avro, Thrift, JSON)
- Experience managing complex ETL pipelines (e.g., Kubernetes, Argo Workflows, Airflow, Prefect, Dagster)
- Prior experience with schema governance and schema evolution
- Prior experience developing data quality control processes to detect data gaps or inaccuracies
- A desire to mentor less experienced team members and champion both engineering best practices and high code-quality standards
- Strong technical problem-solving skills
- A proven ability to work in an agile, fast-paced environment, prioritize multiple tasks and projects, and efficiently handle the demands of a trading environment