TECH OFFER

Enabling Rapid Machine Learning Development and Operations (MLOps)

KEY INFORMATION

TECHNOLOGY CATEGORY:
Infocomm - Artificial Intelligence
TECHNOLOGY READINESS LEVEL (TRL):
LOCATION:
Singapore
ID NUMBER:
TO174560

TECHNOLOGY OVERVIEW

While data is at the heart of every Machine Learning (ML) application, each of the many ML tools, frameworks, and libraries involved requires its own configuration, dependencies, and skillset. Managing this variety of components in the ML pipeline is as time-consuming and complex as data collection and preparation itself, and is a particular challenge for small-to-medium enterprises trying to build new ML applications. Containerisation has become a popular way of packaging an ML application, together with all its libraries, frameworks, and dependencies, into a reusable container, minimising the pain of setting up a new computing instance each time an ML application needs to be executed. In this manner, multiple microservice containers can be orchestrated to execute on demand, each providing a small slice of functionality that links to the next.

This technology offer is a unified container orchestration platform that manages the end-to-end Machine Learning pipeline, from data preparation to model deployment. It aligns with the iterative nature of AI application development and simplifies the provisioning of the computing infrastructure needed to support model development and deployment. The technology is purpose-built for small-to-medium businesses with limited technical manpower and resources: it lets such organisations collaborate internally, through reusable and replayable development workspaces, on on-premise computing infrastructure or on public cloud platforms.
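To make the containerised-microservice idea concrete, the following is a minimal, illustrative sketch of the kind of Kubernetes Deployment such a platform might generate for a single notebook workspace microservice. The manifest uses the standard Kubernetes `apps/v1` Deployment schema; the names, labels, image choice, and resource limits are assumptions for illustration, not the platform's actual output.

```yaml
# Illustrative only: one notebook microservice in the pipeline,
# expressed as a standard Kubernetes Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: notebook-workspace        # hypothetical workspace name
  labels:
    app: notebook-workspace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: notebook-workspace
  template:
    metadata:
      labels:
        app: notebook-workspace
    spec:
      containers:
        - name: jupyter
          # Public community image standing in for a platform-built image
          image: jupyter/scipy-notebook:latest
          ports:
            - containerPort: 8888   # Jupyter's default port
          resources:
            limits:
              cpu: "2"
              memory: 4Gi
```

In practice the platform would template out one such manifest per pipeline stage (data preparation, training, serving, and so on) and let Kubernetes schedule and scale them.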

TECHNOLOGY FEATURES & SPECIFICATIONS

The platform comprises a suite of micro-services using container-based computing orchestrated through Kubernetes. The micro-services abstract the complexity of ML pipeline setup and automate the provisioning of infrastructure, computing resources, machine learning development tools, and the other underlying dependencies required by each stage of the AI development pipeline. Key features include:

  • Smart AI-assisted Resource Management Automator
  • Unique ability to specify deployment cluster size (i.e., number of machines and nodes)
  • Python-based development in notebooks (Jupyter or Google Colab)
  • Ready-to-go integration with Apache Spark for distributed deep learning
  • Supports Apache Spark and Hadoop for big data processing
  • Supports popular Deep Learning (DL) frameworks, e.g. TensorFlow and PyTorch
  • Manages various storage types including Hadoop Distributed File System (HDFS), AWS S3, NoSQL databases (Apache Cassandra)
  • Data pipelining through Kafka
  • Fully automated deployment provisions CPU and GPU clusters over public clouds including Google Cloud Platform (GCP), Amazon Web Services (AWS) and Microsoft Azure
  • Enables rapid scaffolding by launching a customisable development environment in minutes, with pre-configured microservices such as databases, dashboards, etc.
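The feature list above amounts to declaring pipeline stages and letting the platform resolve how to provision and run them. The sketch below illustrates that idea in plain Python: stages are declared with their dependencies, and a topological sort yields a valid execution order. The `Stage` class, field names, and image names are hypothetical, invented for this example; the real platform exposes its own interface.

```python
# Minimal sketch (hypothetical API): modelling an ML pipeline as
# container-backed stages and resolving a dependency-respecting order.
from dataclasses import dataclass, field
from graphlib import TopologicalSorter  # standard library, Python 3.9+

@dataclass
class Stage:
    name: str
    image: str                             # container image backing the microservice
    depends_on: list = field(default_factory=list)

def execution_order(stages):
    """Return stage names ordered so every stage runs after its dependencies."""
    graph = {s.name: set(s.depends_on) for s in stages}
    return list(TopologicalSorter(graph).static_order())

# Hypothetical four-stage pipeline: ingest -> prepare -> train -> deploy
pipeline = [
    Stage("deploy",  "platform/serve:latest", depends_on=["train"]),
    Stage("train",   "platform/train:latest", depends_on=["prepare"]),
    Stage("prepare", "platform/prep:latest",  depends_on=["ingest"]),
    Stage("ingest",  "platform/ingest:latest"),
]

print(execution_order(pipeline))
# → ['ingest', 'prepare', 'train', 'deploy']
```

An orchestration platform would additionally map each stage to real infrastructure (launching containers, attaching storage, scaling workers), which this sketch deliberately omits.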

POTENTIAL APPLICATIONS

This technology serves as a development and deployment platform that enables SMEs to rapidly develop and deploy AI applications. Alternatively, it can be used as a training tool to help software engineers transition into developing AI applications as AI engineers.

UNIQUE VALUE PROPOSITION

As an integrated AI development and deployment platform, it offers the following benefits:

  • Easy and intuitive usage - Easy access to data and tools by abstracting away the low-level details of configuration and dependency setup (zero configuration)
  • Scalability - Effective pipeline management with automatic scaling up/down based on demand at each stage of the AI pipeline, plus centralised management of libraries, tools, and datasets
  • Efficiency - An optimisation engine enhances distributed computing and maximises the sharing of computing resources
  • Flexibility - Supports GPU/TPU/CPU clusters on public clouds and on-premise deployments
  • Reusability - Sandboxed micro-services across the entire AI pipeline translate into isolated, reusable workspaces for different roles (data scientist, DevOps engineer, ML developer, etc.)

RELATED TECH OFFERS
Generative AI Technology Developed for B2B Sales Automation and Acceleration
Generative AI Technology for Business Process Automation and Customer Engagement Improvement
SeaLLMs - Large Language Models for Southeast Asia
Digital Twin Platform for Quick Conversion of Point Cloud Data to BIM
Automating Medical Certificate Submission using Named Entity Recognition Model
Physical Climate Risk Analytics
Autonomous Built Environment Inspection
Building Explainable, Verifiable, Compact & Private AI Solutions For Critical Applications
Highly Sensitive, Multiplex, Spectroscopic - Portable Gas Sensing System
Osteoporosis Prediction Enabled by Automated AI System