Cynet Systems
Data Engineer - ML and Graph - Remote / Telecommute
Cynet Systems, New York, New York, United States
Job Description:
Pay Range: $65/hr - $70/hr
- 8+ years of experience as a backend/data engineer building graph systems and graph databases.
- 5+ years of experience with machine learning and/or natural language processing.
- Degree in Computer Science, Machine Learning, Data Science, or a related field, with expertise in knowledge representation.
- Strong proficiency in graph theory, graph algorithms, and graph databases (e.g., Neo4j, TigerGraph), coupled with extensive knowledge of vector databases (OpenSearch, Milvus, ...).
- Proficient in Databricks, Python, PySpark, and Scala for developing and maintaining data engineering pipelines, with expertise in Apache Spark, Flink, and containerization.
- Experienced with cloud platforms (AWS, Azure, Google Cloud) and skilled in working with various databases and data warehouses for efficient data processing and storage.
- Expertise working with graph data models and databases (Neo4j, TigerGraph), or with graph query languages (Gremlin, SPARQL, Cypher).
- Expertise architecting, designing, and building data pipelines and acquiring the data needed to build and evaluate models, using tools like Databricks, Dataflow, Apache Beam, or Spark.