
Deloitte

Cleveland, Ohio, US, 44101


Deloitte Consulting LLP seeks a Consulting, Consultant in Cleveland, Ohio and various unanticipated Deloitte office locations and client sites nationally.

Work You'll Do

Provide advisory and implementation services for large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to help companies unlock the value of big data technology investments. Work across all phases of the Agile and Waterfall lifecycles for the design, development, testing, and implementation of Master Data Management, Data Governance, Data Analytics, and Data Integration solutions for enterprise-level clients. Provide technical recommendations for optimized data access and retention from various data warehouses. Implement data integration solutions using dimensional modeling, granularity, and source-to-target mapping to integrate new BI/DW requirements. Perform data management, including master data, metadata, data architecture, data governance, data quality, and data modeling. Design database queries, triggers, procedures, functions, and packages for reporting and data analytics. Design and develop data cleansing routines utilizing typical data quality functions, including standardization, transformation, rationalization, linking, and matching. Implement data enrichment, lookup, filtering, and data cleansing routines and solutions to improve the quality of the ingested data.

50% travel required nationally. Telecommuting permitted. Work location includes various unanticipated Deloitte office locations and client sites nationally.

Requirements

Bachelor's degree or foreign equivalent in Business Administration, any STEM field, or a related field. Must have 1 year of related work experience in the job offered or in a related occupation. Position requires 1 year of related work experience in each of the following:

- Participating in various aspects of the full data delivery lifecycle for implementations, including requirements gathering; scripting utilizing Python and Unix; code deployments using Jenkins and Urban Code; and change requests, including escalations and urgent issues, utilizing ServiceNow, Jira, Git, and production support;
- Designing database queries, triggers, procedures, functions, and packages for reporting and data analytics using SQL, Hive, and Spark-SQL;
- Providing industry insight and analytical support using regression modeling for pattern matching, data visualization using Tableau, and predictive modeling utilizing Python;
- Utilizing Big Data technologies, including Hadoop and Spark frameworks, Apache Sqoop for data ingestion, Confluent Kafka and Apache Kafka for real-time data streaming, Hadoop Distributed File System (HDFS) for data storage, Apache Pig for cleaning, Apache Hive, Spark Scala, and Spark-SQL for transformation, and Apache Oozie for scheduling;
- Delivering Data Warehousing and Analytics solutions utilizing Hive, Teradata, Microsoft Excel, MySQL, and Microsoft SQL Server (MSSS);
- Performing data analysis, including inspecting, cleansing, and transforming data using Scala, Python, and Unix shell scripting, and modeling for enabling data mining and resolving data mapping and data modeling issues using SQL Server Management Studio; and
- Developing, testing, and automating Extract, Transform, Load (ETL) procedures using Informatica, the Apache Spark framework, and relational database stored procedures and functions to perform data warehousing and data integration functions.

Other

Hours: M-F, 40 hours/week. If offered employment, must have legal right to work in the U.S. EOE, including disability/veterans.
