TalentElixir Consulting
Cloud Data Architect
TalentElixir Consulting, Houston, Texas, United States, 77246
About the Job
Our client is looking for a Cloud Data Architect with a minimum of 5 years of experience designing, building, and supporting solutions. You will help them store and process large volumes of batch, streaming, structured, and unstructured data drawn from a wide variety of hybrid cloud environments. You will be expected to be self-driven, take charge of identifying and fixing problems, and excel in a collaborative, constantly changing environment. This position is responsible for designing, building, and supporting a next-generation cloud data platform that will form the technical foundation for a data-driven organization.
Daily Responsibilities:
- Lead the architecture, design, and development of a next-generation cloud data platform that serves as the central repository for data across the organization
- Perform hands-on solution development leveraging modern cloud data technologies, tools, languages, and platforms
- Define and establish data architecture principles, standards, guidelines, and patterns
- Balance service delivery with continuous process improvement and automation to architect reliable and stable data platform capabilities
- Work with functional leads and business leaders to ensure the technical strategy is aligned with company and departmental vision
- Lead, manage, and guide technical project teams
- Drive the adoption of a DevOps/SRE culture across teams
- Mentor other team members
- Support vendor management activities with key vendors such as cloud providers, and drive early adoption activities and vendor technology roadmap discussions

Required Skills:
- 8+ years of experience in a hands-on software/data engineering role, including data modeling, data integration, data ingestion, data transformation, data mining, and data warehousing
- Experience designing and building cloud-native data platforms (AWS preferred)
- Experience designing and building ETL and ELT data pipelines for batch and streaming data using cloud-native serverless technologies such as AWS Glue and Lambda
- Advanced knowledge of SQL and data warehouse concepts
- Experience building solutions to measure data quality
- Experience with Snowflake or other cloud-native data warehouses
- Experience with cloud-native SQL and NoSQL DBaaS platforms such as AWS RDS and MongoDB Atlas
- Experience building data ingestion pipelines using Python and PySpark
- Experience with streaming data platforms such as Apache Kafka or AWS Kinesis, and with event-driven architectures
- Solid understanding of DevOps and SRE, with experience implementing CI/CD, test automation, infrastructure as code (Terraform), metrics, and monitoring
- Experience using DevOps platforms and tools (Azure DevOps, GitHub)
- Proficiency with both Linux and Windows
- Experience with containerized applications using Docker
- Experience with agile methodologies, including Scrum and Kanban
- Experience leading teams of data engineers and software developers spanning multiple time zones
- Strong problem-solving and analytical skills
- Exceptional verbal and written communication
- Bachelor's degree in Computer Science, Engineering, MIS, or equivalent experience

Desired Skills:
- Experience building solutions with Docker and Kubernetes (EKS, AKS, GKE)
- Experience building ETL and ELT data pipelines using the Informatica Cloud platform
- Experience with data quality and catalog tools (e.g., Informatica, Collibra)
- Knowledge of data governance practices, of business and technology issues related to the management of enterprise information assets, and of approaches to data protection
- Familiarity and experience with data protection and information security in the cloud
- Familiarity with cloud-native technologies and platforms for artificial intelligence and machine learning (e.g., SageMaker)