Bank of America Corporation

Data Engineer - GenAI Platform

Bank of America Corporation, Charlotte, NC


Job Description:

At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day.

One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being.

Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization.

Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

This job is responsible for driving efforts to develop and deliver complex data solutions that accomplish technology and business goals. Key responsibilities include leading code design and delivery tasks involving the integration, cleaning, transformation, and control of data in operational and analytical data systems. Job expectations include liaising with vendors, working with stakeholders and Product and Software Engineering teams to implement data requirements, analyzing performance, and researching and troubleshooting issues within system engineering domains.

The Data Engineer will build out data pipelines to source large volumes of structured data (e.g., KDB) and unstructured data (e.g., research documents, term sheets), then classify and store that data to meet GenAI requirements. The Data Engineer will also design, develop, and engineer the platform for high performance and scalability.

Responsibilities:
  • Leads story refinement and delivery of requirements through the delivery lifecycle and assists team members in resolving technical complexities
  • Codes complex solutions to integrate, clean, transform, and control data, builds processes supporting data transformation, data structures, metadata, data quality controls, dependency, and workload management, assembles complex data sets, and communicates required information for deployment
  • Leads documentation of system requirements, collaborates with development teams to understand data requirements and feasibility, and leverages architectural components to develop client requirements
  • Leads testing teams to develop test plans, contributes to existing test suites including integration, regression, and performance, analyzes test reports, identifies test issues and errors, and leads triage of underlying causes
  • Leads work efforts with technology partners and stakeholders to close gaps in data management standards adherence, negotiates paths forward by thinking outside the box to identify and communicate solutions to complex problems, and leverages knowledge of information systems, techniques, and processes
  • Leads complex information technology projects to ensure on-time delivery and adherence to release processes and risk management and defines and builds data pipelines to enable data-informed decision making
  • Mentors Data Engineers to enable continuous development and monitors key performance indicators and internal controls


Required Qualifications

Technical
  • Proficient in data engineering practices and using design and architectural patterns.
  • 4+ years' experience as a Data Engineer or in a similar role, including Extract, Transform, Load (ETL) and/or Extract, Load, Transform (ELT) processes.
  • Proficiency in working with unstructured and structured data.
  • Experience with data processing frameworks such as Hadoop, Spark, or similar technologies.
  • Experience with data platforms such as SQL databases, HDFS, and NoSQL databases (e.g., MongoDB).
  • Extensive experience with object-oriented programming (OOP) and scripting languages (Python preferred).
  • Technical expertise with data models, data mining, and segmentation techniques.
  • Experience working in multiple technology deployment lanes (development through production).
  • Experience with DevOps processes and CI/CD tooling (Jira, Git/Bitbucket, Jenkins, Datical, Artifactory, Ansible), orchestration, and automation.
  • Containerization technologies such as Docker and Kubernetes.

Non-Technical
  • Ability to communicate effectively to a wide range of audiences (business stakeholders, developer & support teams).
  • Detail oriented & highly organized.
  • Adaptable to shifting & competing priorities.
  • Problem solving skills to diagnose & resolve complex issues.
  • Committed and proactive in ensuring a high quality of service.


Desired Qualifications

  • Familiarity with AI and deep learning, modeling techniques, and the Generative AI application stack.
  • Familiarity with containerization and orchestration technologies such as Docker, Kubernetes and OpenShift.
  • Experience creating visualization dashboards (Tableau).
  • Experience with vector databases such as Redis.


Skills:
  • Analytical Thinking
  • Application Development
  • Data Management
  • DevOps Practices
  • Solution Design
  • Agile Practices
  • Collaboration
  • Decision Making
  • Risk Management
  • Test Engineering
  • Architecture
  • Business Acumen
  • Data Quality Management
  • Financial Management
  • Solution Delivery Process


Education Requirements - Bachelor's Degree or Equivalent Work Experience

Shift:
1st shift (United States of America)

Hours Per Week:
40