MARS Solutions Group

Sr. Data Engineer

MARS Solutions Group, Milwaukee, Wisconsin, United States, 53244


Req #: 2424
Position: Sr. Data Engineer
# of positions: 1
Location: Remote - within client's footprint: Arizona, Connecticut, Florida, Iowa, Illinois, Indiana, Maine, Massachusetts, Michigan, Minnesota, Missouri, New Jersey, New York, Ohio, Pennsylvania, Rhode Island, Texas, and Wisconsin; also Nevada, Nebraska, New Hampshire, and South Carolina.
Hours: Normal business hours
Interview process: 1-2 rounds, video

Job description:

Project Overview:
Finastra announced they are exiting the Loan Origination platform business and will sunset the Ambit CME product currently used at ASB. The replacement effort covers:
* Loan Origination System Replacement + Booking & Funding Solutions
* Most likely option is Moody's Loan Origination System (LOS)
* Small Business Portal Integrations
* Doc Prep & Collateral Management
* Multiple key system integrations

Job Responsibilities:
* Build out/expand current DRR processing that exists in Snowflake to include Commercial loans
* Build and deliver three one-time data migration files for CreditLens (see the sketch after this list)
* Build and deliver two batch recurring data import files for CreditLens
* Support the development and delivery of a standard data extract file from CreditLens to DRR only
* Modify Snowflake to switch from handling the current Optimist data to handling new CreditLens data
* Develop technical documentation to be leveraged in the Phase 2 project
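For context on the one-time CreditLens migration files called out above, a minimal sketch of such an extract from Snowflake to a flat file might look like the following. This is illustrative only: the connection settings, table, columns, and output file name are hypothetical placeholders, not the client's actual schema or deliverable layout.

```python
import csv
import os

import snowflake.connector  # requires the snowflake-connector-python package

# All connection parameters and object names below are hypothetical placeholders.
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",
    database="LOANS",
    schema="STAGING",
)

# Hypothetical extract of active commercial loans for a one-time migration file.
EXTRACT_SQL = """
    SELECT loan_id, borrower_name, commitment_amount, origination_date
    FROM commercial_loans
    WHERE status = 'ACTIVE'
"""

try:
    with conn.cursor() as cur:
        cur.execute(EXTRACT_SQL)
        with open("creditlens_onetime_loans.csv", "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow([col[0] for col in cur.description])  # header row from column metadata
            for row in cur:
                writer.writerow(row)
finally:
    conn.close()
```

In practice the file layout, delimiters, and field mappings would come from the CreditLens import specifications rather than the ad hoc names used here.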

The Data Engineer will have a strong background in data engineering, with extensive experience in designing, building, and maintaining scalable data pipelines and architectures. As a Data Engineer, you will play a critical role in shaping our data infrastructure, ensuring the availability, reliability, and performance of our data systems.
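Several of the responsibilities below center on fault-tolerant batch and streaming ingestion from REST APIs with solid error handling. A minimal sketch of one such batch pull, assuming a hypothetical endpoint, response shape, and landing file, might look like this:

```python
import json
import time
import urllib.error
import urllib.request

API_URL = "https://api.example.com/loans?page={page}"  # hypothetical endpoint
MAX_RETRIES = 3


def fetch_page(page: int) -> dict:
    """Fetch one page of records, retrying transient failures with backoff."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            with urllib.request.urlopen(API_URL.format(page=page), timeout=30) as resp:
                return json.load(resp)
        except (urllib.error.URLError, TimeoutError):
            if attempt == MAX_RETRIES:
                raise  # surface the failure to the orchestrator after the final attempt
            time.sleep(2 ** attempt)  # exponential backoff before retrying


def run_batch() -> None:
    """Land raw API responses as newline-delimited JSON for downstream ELT."""
    with open("loans_raw.ndjson", "w") as out:
        page = 1
        while True:
            payload = fetch_page(page)
            records = payload.get("results", [])  # assumed response key
            if not records:
                break
            for record in records:
                out.write(json.dumps(record) + "\n")
            page += 1


if __name__ == "__main__":
    run_batch()
```

The same structure extends to messaging- or ETL-tool-based ingestion; only the source connector and landing target change.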

* Design, develop, test, and deploy streaming and batch ingestion methods and pipelines across a variety of data domains leveraging programming languages, application integration software, messaging technologies, REST APIs and ETL/ELT tools.

* Build high-throughput, low-latency, and fault-tolerant data pipelines by applying best practices to data mapping, code development, error handling, and automation.

* As part of an agile team, design, develop and maintain an optimal data pipeline architecture using both structured data sources and big data for both on-premises and cloud-based environments.

* Develop and automate ETL/ELT code using scripting languages and ETL tools to support all reporting and analytical data needs.

* Following DataOps best practices, enable orchestration of data, tools, environments, and code.

* Design and build dimensional data models to support the data warehouse initiatives.

* Identify, design, and implement internal process improvements: automating manual processes, optimizing data pipeline performance, re-designing infrastructure for greater scalability and access to information.

* Participate in requirements gathering sessions to distill technical requirements from business requests.

* Collaborate with business partners to productionize, optimize, and scale enterprise analytics.

Education:
* Bachelor's Degree, or equivalent combination of education and experience, in Computer Science, Information Technology, or Engineering required.
* Master's Degree in Computer Science, Information Technology, or Engineering preferred.

Required experience:
* Minimum of 3-5 years of data engineering focused on using ETL or ELT patterns to build automated data pipelines, or related technologies, required.

* 1-2 years with cloud platforms (AWS, Azure, GCP, Snowflake) and Python programming, or proficient SQL coding and performance-tuning skills, required.

Preferred experience:
* Thorough understanding of relational, columnar, and NoSQL database architectures and industry best practices for development.
* Understanding of dimensional data modeling for designing and building data warehouses.
* Experience with Infrastructure as Code (IaC) tools such as Terraform.
* Experience with big data streaming technologies such as Kafka.
* Experience with parsing data formats such as XML/JSON and leveraging external APIs.

Additional Skills Preferred:

* Strong problem-solving and analytical skills.

* Excellent communication and collaboration skills.

* Ability to work in a fast-paced, dynamic environment.

* Attention to detail and commitment to data quality.