Technogen International Company

Data Engineer

Technogen International Company, Madison, WI


Please Note: As of July 22, 2021, our team requires that all candidate submissions include a LinkedIn profile. Please do not submit candidates who do not have a LinkedIn profile.

  • Mid-Sr level (at least 2 years of experience)
  • ASAP through 12/31/23 (potential to extend up to 3 years or convert)
  • 100% remote; potential to convert, but must live near a hub
  • Start date: ASAP in January
  • This team works on a Change Data Capture (CDC) pipeline: optimizing pipelines for efficiency and making pipeline changes (e.g., creating a Lambda function) to improve the pipeline and reduce CPU usage (see the Lambda sketch after this list).
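
As a purely illustrative aid for the CDC bullet above, here is a minimal sketch of a Lambda that drops no-op change records before they reach the rest of the pipeline. The event shape, the CDC_TARGET_BUCKET environment variable, and the filtering rule are assumptions for the sketch, not details from this posting.

# Minimal CDC-filtering Lambda sketch (illustrative only).
# Assumes change records arrive as JSON strings in event["records"] and
# that CDC_TARGET_BUCKET is configured; both are hypothetical here.
import json
import os
import uuid

import boto3

s3 = boto3.client("s3")
TARGET_BUCKET = os.environ.get("CDC_TARGET_BUCKET", "example-cdc-staging")


def handler(event, context):
    """Drop no-op CDC updates and land the rest in S3 for the ETL pipeline."""
    changes = [json.loads(r) for r in event.get("records", [])]

    # Skip records whose image did not actually change, so downstream
    # Glue jobs spend fewer CPU hours reprocessing no-ops.
    meaningful = [c for c in changes if c.get("before") != c.get("after")]

    if meaningful:
        key = f"cdc/filtered/{uuid.uuid4()}.json"
        s3.put_object(
            Bucket=TARGET_BUCKET,
            Key=key,
            Body=json.dumps(meaningful).encode("utf-8"),
        )

    return {"received": len(changes), "forwarded": len(meaningful)}

In practice the trigger (Kinesis, DynamoDB Streams, etc.) and the record format would come from the team's existing CDC setup.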

Projects:
  • V11 (legacy platform) and V4 (new platform)
  • Core technology skillset: Python, AWS, SQL, Spark, and ETL experience required.
  • V11 uses AWS (Glue, Lambda, S3) and SQL.
  • V4 uses AWS, Spoon (repository), and Pentaho. Exposure to Pentaho is a plus (training will be provided).
  • Strong work experience building data ingestion and ETL pipelines using AWS managed services such as AWS Glue, AWS Data Pipeline, and Amazon S3 (see the Glue job sketch after this list).
  • Data engineering knowledge and prior experience with data lakes and data warehouses (e.g., star schema storage), plus an understanding of different database architectures
  • Data modeling, data mapping, and SQL expertise, with the ability to leverage them to develop complex data transformation and storage solutions (see the star-schema example after this list)
  • Excellent data analysis, SQL query writing, ETL, and incident management skills
  • GCP experience is a plus, as the team will eventually migrate to Google Cloud.
  • Excellent communication skills; an independent self-starter.
  • Type of projects/data: policy data; claims data; quote data; user patterns
  • ETL ingestion, data storage (excluding Tableau), and ETL validation consume roughly 70% of the engineer's time; the remaining 30% is spent building new solutions and working on internal side projects.
  • Looking for someone to keep the lights on and perform maintenance work on the legacy system.
  • AWS experience required. GCP experience is a plus.
  • On-call support (hours adjusted to stay within a 40-hour work week); on-call rotation estimated every 5-7 weeks.
  • Familiarity with how BI tools ingest data
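
To illustrate the AWS Glue / S3 ingestion bullet above, here is a minimal Glue PySpark job sketch. The S3 paths, the nested "after" field, and the policy_year partition column are hypothetical placeholders, not details from this posting.

# Minimal AWS Glue PySpark job sketch (illustrative; paths and columns are assumed).
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Ingest raw CDC output landed in S3 (hypothetical path).
policies = spark.read.json("s3://example-cdc-staging/cdc/filtered/")

# Light transformation: keep rows with a current image and write partitioned
# Parquet into the data lake for downstream warehouse loads and BI ingestion.
(
    policies.filter(policies["after"].isNotNull())
    .select("after.*")
    .write.mode("append")
    .partitionBy("policy_year")
    .parquet("s3://example-data-lake/policies/")
)

job.commit()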
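
For the data modeling and star-schema bullets, a short Spark SQL example of the kind of transformation implied: loading a fact table by joining staged claims to a policy dimension. All table, column, and path names are assumed for illustration only.

# Illustrative star-schema load using Spark SQL (names and paths assumed).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("claims-star-schema").getOrCreate()

# Staged claims records and the policy dimension (hypothetical locations).
spark.read.parquet("s3://example-data-lake/claims/").createOrReplaceTempView("stg_claims")
spark.read.parquet("s3://example-warehouse/dim_policy/").createOrReplaceTempView("dim_policy")

# Build the fact table by resolving the surrogate key from the policy dimension,
# a typical star-schema transformation.
fact_claims = spark.sql("""
    SELECT d.policy_key,
           c.claim_id,
           CAST(c.claim_date AS DATE) AS claim_date,
           c.claim_amount
    FROM stg_claims c
    JOIN dim_policy d
      ON c.policy_number = d.policy_number
""")

fact_claims.write.mode("overwrite").parquet("s3://example-warehouse/fact_claims/")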

Required Skills : Python, AWS, SQL, Spark, and ETL experience (see details above)
Basic Qualification :
Additional Skills :
Rank : B1
Requested Date : 2023-01-05