Twist Bioscience

Sr Data Architect

Twist Bioscience, San Francisco, California, United States, 94199


The Scientific Computing Team at Twist is looking for a Data Architect to design and oversee the implementation of our data infrastructure. This role is essential for supporting our cutting-edge products in genomics, diagnostics and personalized medicine through robust and scalable data solutions.

As a Data Architect with a focus on data engineering, you will play a pivotal role in designing and implementing Twist's data infrastructure. You will work closely with cross-functional teams to ensure our data architecture supports the company's strategic objectives, while staying actively involved in hands-on data engineering tasks.

Key Responsibilities:

Develop and maintain scalable, flexible and high-performance data architecture.

Create data models, schemas and architecture blueprints to support business intelligence, analytics and data science initiatives.

Ensure data architecture aligns with business goals and complies with data governance and security policies.

Build, optimize and maintain efficient ETL/ELT pipelines to process large volumes of data.

Implement data integration solutions to connect various data sources, including APIs, databases and third-party services.

Collaborate with data engineers to develop and enhance data pipelines using modern data engineering tools and frameworks.

Work closely with data scientists, analysts, and other stakeholders to understand data needs and deliver solutions.

Provide technical leadership and mentorship to junior data engineers and team members.

Foster a culture of continuous improvement by identifying opportunities to optimize data processes and workflows.

Establish monitoring and alerting mechanisms to ensure data integrity and system performance.

Troubleshoot and resolve data-related issues promptly to minimize disruption to business operations.

What You’ll Bring to the Team:

Programming experience in SQL and Python.

Comfort with cloud-based data technologies such as AWS, Databricks, Snowflake and dbt.

Experience with data visualization tools (Tableau preferred).

Strong interpersonal skills and the ability to work well in a team.

Attention to detail and the ability to work efficiently in a fast-paced environment.

Critical and strategic thinking, including drawing logical conclusions, anticipating obstacles, and approaching problem solving with an open mind.

Preferred Qualifications:

Bachelor's or Master's degree in Computer Science, Information Systems or a related field, with proven experience delivering data solutions.

Minimum of 5 years of relevant industry experience in a field such as computer science, molecular biology, biochemistry, genetics, genomics, or chemistry.

Proficiency in Snowflake for data warehousing, including administration and advanced data modeling, with expertise in automating workflows and optimizing performance across data systems.

Expertise in building and maintaining scalable ETL/ELT pipelines using tools like Matillion, dbt, Fivetran and HVR for data transformation and integration.

Proficiency in SQL and programming languages such as Python and Java, with experience in big data technologies like Hadoop, Spark, and Databricks.

Expertise in data visualization tools like Tableau with a strong ability to create insightful reports and dashboards.

Strong understanding of data quality, data cataloging and governance practices to ensure data integrity and compliance.

Excellent problem-solving skills, the ability to work both independently and collaboratively, and strong communication skills for conveying complex technical concepts to non-technical stakeholders.
