Saxon Global

Data Engineer

Saxon Global, McLean, Virginia, US, 22107


Data Engineer

Freddie Mac

McLean, VA (must be in VA or willing to relocate)

6+ month contract - $60/hr

Position Overview

Freddie Mac's Investments & Capital Markets Division is currently seeking a Senior Data Engineer who enjoys data and building data storage platforms from the ground up. The ideal candidate has a passion for data analysis and technology, and for helping people leverage technology to transform their business processes and analytics. As a Data Engineer, you will be part of a team responsible for supporting a wide range of internal customers. You will draw on all the skills in your toolkit to analyze, design, and develop data storage and data analytics solutions using data lake patterns that help our customers run more effective operations and make better business decisions.

Your Work Falls Into Two Primary Categories:

Strategy Development and Implementation

- Develop data filtering, transformation, and loading requirements
- Define and execute ETLs using Apache Spark on Hadoop, among other data technologies
- Determine appropriate translations and validations between source data and target databases
- Implement business logic to cleanse and transform data
- Design and implement appropriate error handling procedures
- Develop project, documentation, and storage standards in conjunction with data architects
- Monitor performance, and troubleshoot and tune ETL processes as appropriate using tools in the AWS ecosystem
- Create and automate ETL mappings to consume loan-level data from source applications to target applications
- Execute end-to-end implementation of the underlying data ingestion workflow
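To make the cleanse/transform and error-handling responsibilities above concrete, here is a minimal, pure-Python sketch of one such ETL step. The field names (loan_id, upb, state) and validation rules are hypothetical illustrations, not Freddie Mac's actual schema; in practice this logic would be expressed as Spark transformations running on Hadoop/EMR.

```python
# Hypothetical loan-level cleanse/transform step with an error channel.
# Field names and rules are illustrative only.

def cleanse_loan_record(raw: dict) -> dict:
    """Validate and transform one raw record; raise ValueError on bad data."""
    loan_id = str(raw.get("loan_id", "")).strip()
    if not loan_id:
        raise ValueError("missing loan_id")
    try:
        upb = float(raw["upb"])  # unpaid principal balance
    except (KeyError, TypeError, ValueError):
        raise ValueError(f"invalid upb for loan {loan_id!r}")
    if upb < 0:
        raise ValueError(f"negative upb for loan {loan_id!r}")
    return {
        "loan_id": loan_id,
        "upb": round(upb, 2),
        "state": str(raw.get("state", "")).upper() or None,
    }

def run_etl(records):
    """Route cleansed rows to the target; capture failures for review."""
    loaded, errors = [], []
    for rec in records:
        try:
            loaded.append(cleanse_loan_record(rec))
        except ValueError as exc:
            errors.append({"record": rec, "error": str(exc)})
    return loaded, errors

loaded, errors = run_etl([
    {"loan_id": " A100 ", "upb": "250000.456", "state": "va"},
    {"loan_id": "", "upb": 100.0},  # fails validation, goes to error channel
])
```

Routing invalid rows to a separate error channel rather than failing the whole batch is one common pattern for the "appropriate error handling procedures" this role calls for.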

Operations and Technology

- Leverage and align work to appropriate resources across the team to ensure work is completed in the most efficient and impactful way
- Understand the capabilities of, and current trends in, the Data Engineering domain

Qualifications

- At least 5 years of experience developing in Java and Python
- Bachelor's degree or equivalent work experience in statistics, data science, or a related field
- Experience working with different databases and understanding of data concepts (including data warehousing, data lake patterns, and structured and unstructured data)
- 3+ years of experience with data storage/Hadoop platform implementation, including 3+ years of hands-on experience implementing and performance-tuning Hadoop/Spark
- Implementation and tuning experience, specifically with Amazon Elastic MapReduce (EMR)
- Experience implementing AWS services in a variety of distributed computing, enterprise environments
- Experience writing automated unit, integration, regression, performance, and acceptance tests
- Solid understanding of software design principles

Key to Success in This Role

- Strong consultation and communication skills
- Ability to work with and collaborate across the team, including where silos exist
- Deep curiosity to learn about new trends and how to do things better
- Ability to use data to help inform strategy and direction

Top Personal Competencies to Possess

- Seek and Embrace Change - Continuously improve work processes rather than accepting the status quo
- Growth and Development - Know or learn what is needed to deliver results and successfully compete

Preferred Skills

- Understanding of Apache Hadoop and the Hadoop ecosystem, with experience in one or more relevant tools (Sqoop, Flume, Kafka, Oozie, Hue, Zookeeper, HCatalog, Solr, Avro)
- Deep knowledge of Extract, Transform, Load (ETL) and distributed processing techniques such as MapReduce
- Experience with columnar databases like Snowflake and Redshift
- Experience building and deploying applications in AWS (EC2, S3, Hive, Glue, EMR, RDS, ELB, Lambda, etc.)
- Experience building production web services
- Experience with cloud computing and storage services
- Knowledge of the mortgage industry