Cameron Craig Group

Enterprise Data Architecture Consultant

Cameron Craig Group, Chicago, Illinois, United States, 60290


Salary: $160,000 - $200,000 + annual incentive bonus + full benefits + signing bonus + paid relocation
Sponsorship: No
This position is open to fully remote candidates.

Data Architecture Consultant

Join the Enterprise Architecture team as a Data Architecture Consultant. This role is responsible for developing data architecture plans and driving them to fruition in collaboration with business and IT. You will play a key role in driving data and analytics initiatives including cloud data transformation, data governance, data quality, data standards, CRM, MDM, Generative AI, and data science. You will define cloud reference architectures to promote reusable patterns and best practices for data integration and consumption. Additionally, you will guide the data science team in implementing data models and analytics models, and serve as a data science architect delivering technology and architecture services to the data science community. You will also guide application development teams in the data design of complex solutions within a large data ecosystem, ensuring alignment with data architecture principles, standards, strategies, and target states.

Responsibilities

- Create, maintain, and govern architectural views and blueprints depicting the Business and IT landscape in its current, transitional, and future state.
- Recommend long-term direction on strategic advancements within the technical portfolio.
- Define and maintain standards for artifacts containing architectural content within the operating model.
- Build a Community of Practice for solutions architecture while leveraging architectural tools, processes, and practices.
- Offer insight, guidance, and direction on the usage of emerging trends and technical capabilities.

Qualifications

- Strong cloud data architecture knowledge (preference for Microsoft Azure).
- Experience developing architecture strategies and plans to enable cloud data transformation, MDM, data governance, and data science capabilities.
- Ability to design reusable data architecture and best practices that support batch/streaming ingestion; efficient batch, real-time, and near-real-time integration/ETL; integration of quality rules; and structuring of data for analytic consumption by end users.
- Ability to lead software evaluations, including RFP development, capabilities assessment, formal scoring models, and delivery of executive presentations supporting a final recommendation.
- Well-versed in the data domains (Data Warehousing, Data Governance, MDM, Data Quality, Data Standards, Data Catalog, Analytics, BI, Operational Data Store, Metadata, Unstructured Data, non-traditional data and multi-media, ETL, ESB).
- Capable of communicating data concepts to varied forums and audiences (including business and technology) and advocating for the importance of data quality, data standards, managed replication, and co-existent/federated data in an enterprise data ecosystem.
- Experience with big data technologies such as Cloudera, Spark, Sqoop, Hive, HDFS, Flume, Storm, and Kafka.
- Experience with Microsoft CoPilot and exposure to TensorFlow and Python NLTK are a plus.
- Typically requires 17 years of relevant experience or a combination of related experience, education, and training.

Must Have

- Strong cloud data architecture knowledge (preference for Microsoft Azure).
- Experience developing architecture strategies and plans to enable cloud data transformation, MDM, data governance, and data science capabilities.
- Ability to design reusable data architecture and best practices that support batch/streaming ingestion; efficient batch, real-time, and near-real-time integration/ETL; integration of quality rules; and structuring of data for analytic consumption by end users.
- Ability to lead software evaluations, including RFP development, capabilities assessment, formal scoring models, and delivery of executive presentations supporting a final recommendation.
- Well-versed in the data domains (Data Warehousing, Data Governance, MDM, Data Quality, Data Standards, Data Catalog, Analytics, BI, Operational Data Store, Metadata, Unstructured Data, non-traditional data and multi-media, ETL, ESB).
- Capable of communicating data concepts to varied forums and audiences (including business and technology) and advocating for the importance of data quality, data standards, managed replication, and co-existent/federated data in an enterprise data ecosystem.
- Experience with big data technologies such as Cloudera, Spark, Sqoop, Hive, HDFS, Flume, Storm, and Kafka.

Preferred

- Experience with Microsoft CoPilot.
- Exposure to TensorFlow and Python NLTK is a plus.

Benefits include medical, dental, vision, disability, and life insurance; 401(k) with company match; parental leave; paid time off; paid company holidays; and time off to volunteer in your community.

Unless noted above, applicants MUST be authorized to work in the US without visa sponsorship. US citizens and Green Card holders ONLY. We do not provide relocation assistance for those living outside the continental US. Please apply only if you meet the specific requirements of the job listing, are able to work in the location listed, and are comfortable with the salary range indicated above. Thanks for your interest. We look forward to working with you.
