JobRialto

Machine Learning / Artificial Intelligence Engineer

JobRialto, Pleasanton, CA, United States


Must Haves:

Strong project experience in Machine Learning, Big Data, NLP, Deep Learning, and RDBMS is a must.

Strong project experience with Amazon Web Services and Cloudera Data Platform is a must.

4-5 years of experience building data pipelines using Python, MLlib, PyTorch, TensorFlow, NumPy/SciPy/Pandas, Spark, and Hive.

4-5 years of programming experience with AWS, Linux, and data science notebooks is a must.

Strong experience with REST API development using Python frameworks (Django, Flask, etc.); an illustrative sketch follows this list.

Microservices/web service development experience using the Spring framework is highly desirable.
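
For illustration only (not a client specification), a minimal sketch of the kind of REST prediction endpoint described above, using Flask; the model file name, route, and request/response shapes are assumptions.

import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load a hypothetical pre-trained model once at startup.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body such as {"features": [[5.1, 3.5, 1.4, 0.2]]}.
    payload = request.get_json(force=True)
    predictions = model.predict(payload["features"]).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

An equivalent endpoint could be built with Django REST Framework; Flask is shown here only because it is the lighter of the two frameworks named.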

Deliverables or Tasks:

The tasks for the AI/ML Engineer include, but are not limited to, the following:

Provide technical leadership, develop the technical vision, gather requirements, and translate client user requirements into technical architecture.

Design, build and scale Machine Learning systems across multiple domains.

Design and implement NLP algorithms.

Design and implement an integrated Big Data platform and analytics solution.

Design and implement data collectors to collect and transport data to the Big Data Platform.

Technical Knowledge and Skills:

Consultant resources shall possess most of the following technical knowledge and experience:

4-5 years of strong programming experience in Python, Java, Scala, and SQL.

Proficient in Machine Learning algorithms: Supervised Learning (Regression, Classification, SVM, Decision Trees, etc.), Unsupervised Learning (Clustering), and Reinforcement Learning.

Strong hands-on experience building, deploying, and productionizing ML models using MLlib, TensorFlow, PyTorch, Keras, Python scikit-learn, etc. (an illustrative scikit-learn sketch follows this list).

Hands-on experience building data pipelines using Hadoop ecosystem components: Sqoop, Hive, Spark, Spark SQL, and HBase (an illustrative PySpark sketch follows this list).

Data processing and analysis experience with Pandas, NumPy, Matplotlib/Seaborn, etc., and with Big Data technologies (Hadoop/Spark).

Must have Natural Language Processing (NLP) and Computer Vision experience.

Ability to evaluate and choose the best-suited ML algorithms, perform feature engineering, and optimize Machine Learning models is mandatory.

Strong fundamentals in algorithms, data structures, statistics, predictive modeling, and distributed systems are a must.

Strong experience with Data Science notebooks and tools such as Jupyter, Zeppelin, RStudio, PyCharm, etc.

Strong Mathematics and Statistics background (Linear Algebra, Calculus, Probability and Statistics).

4+ years of hands-on development, deployment, and production support experience in a Hadoop environment.

Proficient in Big Data, SQL, relational databases, and NoSQL databases for data retrieval and analysis.

Must have experience developing HiveQL queries and UDFs for analyzing semi-structured/structured datasets.

Expertise in Unix/Linux environments: writing scripts and scheduling/executing jobs.

Experience with AWS and other cloud platforms.

Experience using Git and Eclipse.

Experience creating and managing RESTful APIs using Python and Java frameworks.

Experience in Docker and Kubernetes containerization.

Hands-on experience ingesting and processing various file formats such as Avro, Parquet, SequenceFiles, text files, etc.

Successful track record of building automation scripts/code using Java, Bash, Python, etc., and experience with the production support issue-resolution process.
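
For illustration only, a minimal sketch of building and evaluating a supervised classifier with scikit-learn, in the spirit of the model-building skills above; the dataset, algorithm choice, and hyperparameters are placeholders.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder dataset; in practice the features would come from the Big Data platform.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a tree-ensemble classifier (one of the supervised methods listed above).
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

# Evaluate on the held-out split before any further feature engineering or tuning.
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))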

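Similarly, a minimal sketch of a batch data pipeline with PySpark that ingests Parquet files and writes an aggregate to a Hive table; the paths, column names, and table name are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-pipeline")
    .enableHiveSupport()
    .getOrCreate()
)

# Ingest raw events stored as Parquet (Avro, sequence, or text files are read similarly).
events = spark.read.parquet("hdfs:///data/raw/events")

# Aggregate daily event counts per user with DataFrame/Spark SQL functions.
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)

# Persist the result to a Hive table for downstream analysis.
daily_counts.write.mode("overwrite").saveAsTable("analytics.daily_event_counts")
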
Preferred Skills:

Machine Learning, Big Data, NLP, Deep Learning, Python, MLlib, PyTorch, TensorFlow, NumPy/SciPy/Pandas, Spark, Hive, Data Science Notebooks, SQL, API, Unix/Linux, AWS

Education: Bachelor's Degree