Testing Solutions GmbH

Software Engineer
September 06, 2022

Testing Solutions GmbH, Charlotte, North Carolina, United States, 28245


Do you want your voice heard and your actions to count? Discover your opportunity with Mitsubishi UFJ Financial Group (MUFG), the 5th largest financial group in the world. Across the globe, we’re 180,000 colleagues, striving to make a difference for every client, organization, and community we serve. We stand for our values, building long-term relationships, serving society, and fostering shared and sustainable growth for a better world.

With a vision to be the world’s most trusted financial group, it’s part of our culture to put people first, listen to new and diverse ideas, and collaborate toward greater innovation, speed, and agility. This means investing in talent, technologies, and tools that empower you to own your career.

Join MUFG, where being inspired is expected and making a meaningful impact is rewarded.

Responsibilities

- Perform data sourcing analysis, data quality checks, automated testing, exception processing, error handling and notification, and correction processing of source system data as it enters and is processed through the EDP.
- Build data pipelines at scale by extracting, cleaning, and transforming data using Python, Bash scripting, Spark, SQL, and Hive.
- Build dashboards, reports, and application performance monitoring using Tableau Desktop and Splunk.
- Design and build infrastructure for big data workloads using AWS S3, Elastic MapReduce (EMR), Glue catalog, Step Functions, Redshift, CloudWatch, and CloudTrail.
- Automate and orchestrate data pipeline components in the workflow of fetching/moving data from different source systems.
- Support ingestion and transformation pipelines that handle data for analytical or operational uses across a broad range of line-of-business needs and enterprise data domains.
- Work with Business IT teams to proactively identify data quality issues, and coordinate with development groups to ensure data accuracy for business analysts, leadership groups, and other end users, aiding ongoing operational insights.
- Analyze business and technical requirements for data integration from various data sources, and execute the extract, transform, and load (ETL) of data from disparate sources across the organization.
- Develop components and applications by studying operations and designing and developing reusable services and solutions that support the automated ingestion, profiling, and handling of structured and unstructured data.
- Design and implement a robust set of controls and reconciliation tools and platforms to support point-to-point and end-to-end comprehensiveness controls and G/L reconciliations.
- Design and build RESTful APIs, automated testing systems, event monitoring, and notification systems.
- Work with data providers and data consumers to build and deploy scalable models and standard output formats (SOFs) to production.
- Provide clear documentation of design decisions and workflows, and work with partners, including the Business, Enterprise Architecture, Infrastructure, and Chief Data Office, to assist with data-related technical issues and support their data infrastructure needs.
- Handle user inquiries and provide level 3 production support, including onsite/offshore collaboration as needed.
- Maintain best practices to facilitate optimized software development and continuous integration/continuous delivery (CI/CD).
- Lead, support, and coordinate code migration activities.
- Perform peer reviews and quality reviews of code and scripts.

Qualifications

Education: Bachelor’s degree in Computer Science, Computer Engineering, Management Information Systems, or a related field (or foreign equivalent degree).

Experience: 2 years of technical experience building data pipelines at scale by extracting, cleaning, and transforming data using Python, Bash scripting, Spark, SQL, and Hive; building dashboards, reports, and application performance monitoring using Tableau Desktop and Splunk; designing and building infrastructure for big data workloads using AWS S3, Elastic MapReduce (EMR), Glue catalog, Step Functions, Redshift, CloudWatch, and CloudTrail; and automating and orchestrating data pipeline components in the workflow of fetching/moving data from different source systems. Experience must include work in the banking industry.

Other: Required to work nights and weekends and to be on call during non-business hours as needed for testing and deployment purposes.

Location: Charlotte, NC 28244

Reference internal requisition #10056134-WD.

We are committed to leveraging the diverse backgrounds, perspectives, and experience of our workforce to create opportunities for our people and our business. Equal Opportunity Employer: Minority/Female/Disability/Veteran.
