Alphanumeric Systems
Scientific Knowledge Engineer
Alphanumeric Systems, Durham, North Carolina, United States, 27703
Alphanumeric is hiring a SCIENTIFIC KNOWLEDGE ENGINEER to work in Research Triangle Park, NC, with our client of 20 years, a company committed to improving lives through medical and pharmaceutical advancements.
The Onyx Research Data Platform organization represents a major investment by R&D and Digital & Tech, designed to deliver a step-change in our ability to leverage data, knowledge, and prediction to find new medicines. We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data / metadata / knowledge platforms, and AI/ML and analysis platforms, all geared toward:
* Building a next-generation, metadata- and automation-driven data experience for scientists, engineers, and decision-makers, increasing productivity and reducing time spent on 'data mechanics'
* Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent
* Aggressively engineering our data at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real-time
The Scientific Knowledge Engineering team, which sits within the Onyx Product Management organization, is responsible for the data modeling, ontology definition and management, vocabulary mapping, and other key metadata activities that ensure Onyx platforms and data assets speak the language of science. The team plays a central role in delivering the R&D Knowledge Graph - the semantic layer that connects all of our data and metadata systems - as well as the core metadata experiences that ultimately allow us to build products and services that both delight our customers and enable impressive automation and intelligence.
This role is responsible for maximizing the value of our data assets over their lifetime, bringing purpose to data by translating highly technical information from domain experts into an appropriate data model - complete with supporting ontology and vocabulary - that can be used to effectively structure and index the data. The role works with Product Managers and R&D subject matter experts to capture the language of science (data models, ontologies, standards, etc.) in data products, acting as the voice of the 'Knowledgebase' and of asset interoperability and value. This includes responsibility for understanding and translating computational methods back through the data chain to maximize the quality and speed of data from source, driving experimental multivariate analysis and data-driven decision-making.
* Define schemas and data models of scientific information required for the creation of value-adding data products, including accountability for the quality control of mapping specifications to be industrialized by data engineering and maintained in platform-provisioned tooling.
* Accountable for the quality control (through validation and verification) of mapping specifications to be industrialized by data engineering and maintained in platform-provisioned tooling, e.g., models, schemas, and controlled vocabularies (a minimal illustration follows this list).
* Work with Product Managers and engineers to confidently convert business needs into defined, deliverable business requirements that enable the integration of large-scale biology data to predict, model, and stabilize therapeutically relevant protein complex and antigen conformations for drug and vaccine discovery.
* Collaborate with external groups to align data standards with industry and academic ontologies, ensuring that data standards are defined with usage and analytics in mind. The role may also provide data source profiling and advisory consultancy to R&D outside of Onyx.
* Support effective ingestion of data by understanding the entry requirements set by platform engineering teams and ensuring that the 'barrier for entry' is met, e.g., scientific information has the appropriate metadata to be indexed, structured, integrated, and standardized as needed. This may require articulating engineering standards and metadata requirements to third parties to ensure efficient and automated ingestion at scale.
* Provide bespoke subject matter expertise for R&D data, translating deep science into data for actionable insights.
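To make the mapping-specification responsibilities above more concrete, the sketch below shows one minimal, hypothetical way a controlled-vocabulary mapping could be captured in Python with rdflib. The namespaces, term names, and external identifier are illustrative placeholders rather than Onyx conventions, and rdflib itself is an assumed tool choice, not one named in this posting.

```python
# Minimal, hypothetical sketch of a controlled-vocabulary mapping.
# All namespaces, terms, and identifiers below are illustrative placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/onyx/vocab/")     # placeholder internal namespace
OBO = Namespace("http://purl.obolibrary.org/obo/")   # external reference-ontology namespace

g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

# Declare an internal concept and map it to an external reference term so that
# downstream data products can be structured and indexed against a shared standard.
assay = EX["CellViabilityAssay"]
g.add((assay, RDF.type, SKOS.Concept))
g.add((assay, SKOS.prefLabel, Literal("cell viability assay", lang="en")))
g.add((assay, SKOS.exactMatch, OBO["OBI_0000000"]))  # placeholder external identifier

print(g.serialize(format="turtle"))
```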
Basic Qualifications
* Bachelor's degree (Bioinformatics, Biomedical Science, Biomedical Engineering, Molecular Biology, or Computer Science)
* Biology-related work experience
* 5-8 years of job-related experience with an established track record of delivery
* Working experience querying relational databases (SQL)
* Experience with industry-standard data management / metadata platforms, e.g., Collibra, Datahub, Datum, Informatica
* Data modeling, quality, analysis, and profiling (working experience with any data quality tool, e.g., SAS, Ataccama, Informatica Data Quality, Talend, OpenRefine)
* Experience with industry-standard tools for building data protocols, e.g., Avro, Protocol Buffers, Thrift
* Experience with at least one programming language - e.g., Python - for scripting vocabulary mappings, building data models, etc. (a brief sketch follows this list)
* Awareness of RDF, ontologies, and reference data
* Experience with open-source ontology tools, data formats, and languages (Protégé, SPARQL, OWL, SKOS, SHACL, RML)
* Specific experience with Knowledge Graph efforts and with ontology/taxonomy tools such as CENtree, TopBraid, Smartlogic Semaphore, etc.
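As a small, hypothetical illustration of the SPARQL and 'scripting vocabulary mappings' items above, the sketch below runs a simple verification query with rdflib, listing internal concepts that still lack an exactMatch to an external ontology - a typical check when quality-controlling mapping specifications. The file name, graph contents, and the choice of rdflib are assumptions made only for this example.

```python
# Hypothetical verification script: report concepts with no external mapping.
from rdflib import Graph

g = Graph()
g.parse("vocab_mappings.ttl", format="turtle")  # hypothetical export of the mapping graph

query = """
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
SELECT ?concept ?label WHERE {
    ?concept a skos:Concept ;
             skos:prefLabel ?label .
    FILTER NOT EXISTS { ?concept skos:exactMatch ?external . }
}
"""

for row in g.query(query):
    print(f"Unmapped concept: {row.concept} ({row.label})")
```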
Preferred Qualifications
* Demonstrated comfort operating and leading a matrixed team across organizational boundaries
* Membership of a data standards group, industry committee, board, or consortium
* Specific experience with ontology and Knowledge Graph efforts
* Experience in technical writing and documentation