
TECH OFFERS

Discover new technologies by our partners

Leveraging our wide network of partners, we have curated numerous enabling technologies available for licensing and commercialisation across different industries and domains. Enterprises interested in these technology offers, or in collaborating with partners that have complementary technological capabilities, can reach out to explore co-innovation opportunities.

Non-invasive Blood Glucose Evaluation And Monitoring (BGEM) Technology For Diabetic Risk Assessment
The latest Singapore National Population Health Survey has reported a concerning diabetes trend. In 2019-2020, 9.5% of adult residents had diabetes, dropping slightly to 8.5% in 2021-2022. About 1 in 12 (8.5%) residents aged 18 to 74 had diabetes, with an age-standardised prevalence of 6.8% after accounting for population ageing. Among those with diabetes, close to 1 in 5 (18.8%) were undiagnosed, and 61.3% did not meet glucose control targets. Prediabetes is also prevalent, with 35% of individuals with prediabetes progressing to type 2 diabetes within eight years if no lifestyle changes are made. Untreated type 2 diabetes can lead to severe health issues. Tackling this challenge requires a holistic approach focused on awareness, early diagnosis, and lifestyle adjustments for both diabetes and prediabetes. Recognising the need for innovation in this area, the technology owner has developed a cost-effective, non-invasive, AI-powered solution, Blood Glucose Evaluation And Monitoring (BGEM), that detects glucose dysregulation in individuals to monitor and evaluate diabetic risk. BGEM allows users to track their blood glucose levels regularly, identify adverse trends and patterns, and adopt early intervention and lifestyle changes to prevent or delay the onset of diabetes. Clinically validated in 2022, with a research paper published in October 2023, the technology is open for licensing to senior care and home care providers, telehealth platforms, health wearables companies, and more.
Optimisation of Aquatic Feed with Underutilized Okara
In Singapore, more than 30,000 kg of okara is generated from soya milk and tofu production. Because of its high insoluble dietary fibre content and unique, pungent smell, okara is often discarded as a waste product. Despite its low palatability, okara is rich in nutrients, and the technology owner has therefore developed a cost-effective formulation that incorporates okara into abalone feed. The formulation can potentially be adapted and customised for other aquatic species. The technology owner is seeking partners to license and commercialise the technology.
Biodegradable, Organic Solvent-Free Nanoencapsulation of Hydrophobic Actives
Many active ingredients in formulated products suffer from degradation triggered by light, heat, mechanical stress or volatile loss, as well as incompatibility with other ingredients or excipients. Encapsulation of the actives could be a solution; however, existing methods, including nanoemulsions, liposomes, nanostructured lipids and spray drying, involve undesirable steps that rely on organic solvents, surfactants, alcohols, non-biodegradable polymers and high-shear processes, making economical scale-up difficult. The technology provider has developed a novel method for producing a biodegradable polymer that is based on green chemistry and is easily scalable. Through a simple, novel nanoencapsulation process, the technology enables encapsulation of most actives at the submicron scale to form water-based formulations without the aforementioned undesirable steps. The result is a low-cost, scalable nanoencapsulation platform based on an amphiphilic, biodegradable block copolymer, whose polymeric chain entanglement and nano-sized effects deliver superior performance and stability. The technology provider is seeking collaborations with partners, including actives manufacturers and owners of formulated products, who are interested in adopting this encapsulation technology for hydrophobic actives in applications such as insect repellents, pesticides, skincare, aromatherapy products, and pharmaceuticals.
Accelerating Vision-based Artificial Intelligence Development with Pre-trained Models
Vision-based Artificial Intelligence (AI) models require substantial time to train, fine-tune and deploy in production. Even after deployment, the same process is repeated whenever performance degrades and re-training on a new dataset becomes necessary; this maintenance continues throughout the model's lifetime to ensure optimal performance. Rather than embarking on the time-consuming and painful process of collecting or acquiring data to train and tune an AI model from scratch, many organisations turn to pre-trained models to accelerate development. This technology consists of a suite of pre-trained models for detecting food, human behaviours and facial features, and for counting people. The models operate on video footage and static images obtained from cameras, are tuned and trained on various use cases, and are accessible via API calls or embedded within software as a Software Development Kit (SDK) library. They can be deployed as AI-as-a-Service on a microservices platform, with customer data protected using blockchain technology, and can be further tuned to meet specific customer requirements.
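For illustration only, a people-counting model exposed through an API might be called as sketched below; the endpoint, authentication scheme and response fields are hypothetical and do not represent the technology owner's actual interface.
```python
import requests

# Hypothetical endpoint and API key: the real service's URL, auth scheme
# and response schema are not specified in this technology offer.
API_URL = "https://api.example.com/v1/models/people-counter/predict"
API_KEY = "YOUR_API_KEY"

def count_people(image_path: str) -> int:
    """Send a still image to a (hypothetical) pre-trained people-detection
    model and return the number of people found."""
    with open(image_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    response.raise_for_status()
    detections = response.json().get("detections", [])
    return sum(1 for d in detections if d.get("label") == "person")

if __name__ == "__main__":
    print(count_people("lobby_snapshot.jpg"))  # placeholder image path
```
An SDK deployment would typically wrap the same prediction call in a local library function rather than a network request.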
Enhanced Biogel Formulation for Dental Clear Aligner
In dental treatment, clear aligners are a successful alternative to conventional fixed appliances (braces) for achieving physiological orthodontic tooth movement (OTM). However, their control of tooth movement is not absolute, and attachments, usually made of tooth-coloured dental composite resins, must be bonded at precise locations to allow the aligners to grip the teeth and guide them into their new positions. This procedure takes up clinical time and increases costs for clinicians. Moreover, the attachments protrude from the surface of the teeth, making the appliance clearly visible, and may increase patient discomfort by scratching the inside of the mouth. Upon completion of treatment, the attachments must be removed, during which the enamel surfaces of the teeth may be scratched or damaged. This invention introduces a Biogel material that is applied as an interface layer between the clear aligner and the clinical crowns of the teeth. The Biogel is a two-part (base and catalyst) mixture that sets into a semi-solid form after the clear aligner is seated on the teeth. This thin interface engages the undercuts of the teeth and grips the dentition, enhancing the transfer of active orthodontic forces from the aligner to the teeth without the need for attachments. The Biogel adheres to the internal surfaces of the clear aligner rather than to the teeth, so it can be peeled off cleanly and replaced as required.
Green Plastics from Carbon Dioxide and Renewable Feedstock
To date, the primary feedstock for plastic production is oil, which accounts for more than 850 million metric tons of greenhouse gas emissions per year. Hence, there is increasing demand for green plastics, i.e. plastic materials produced from renewable sources. This technology offer is a method for synthesising green plastics from carbon dioxide (CO2) and renewable feedstock. The green plastics produced are non-isocyanate polyurethanes (NIPUs) that can be tuned to be anionic, cationic, oil-soluble and cross-linkable, enabling a wide range of applications. These NIPUs are non-irritating to skin, have high bio-content and can potentially be made biodegradable. The technology owner is looking for partners in industries such as personal and consumer care, coatings, and lubricant additives, among others, for further co-development of the solution, and is also keen to license the technology.
Intelligent Body Pose Tracking for Posture Assessment
Most existing training applications offer good programmes for guiding users towards individual fitness goals, and some even come with guided video workouts led by professional trainers. However, such applications have little or no capability to assess whether correct posture is maintained during exercise; poor posture can reduce exercise effectiveness and may even cause injury, e.g. an arched back during push-ups. This solution combines video/image processing, human pose recognition and machine learning to address the twin challenges of accurately counting repetitions and verifying correct execution of exercises in an automated manner, without requiring any additional wearable hardware or sensors. The software-only solution advises users on the correct execution of repetitive movement sequences, e.g. sit-ups and push-ups, and is deployable on a wide range of affordable camera-enabled devices such as mobile phones, tablets and laptops. It can be easily integrated into existing applications to enhance functionality, and is applicable to the sports and healthcare industries to help users perform exercises correctly, effectively and in an unencumbered manner.
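As a minimal sketch (not the technology owner's implementation), posture assessment and repetition counting from 2D pose keypoints can be framed as joint-angle checks plus a simple state machine; the keypoint names and angle thresholds below are assumptions for illustration.
```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, each an (x, y) pair."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0]) - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

class PushUpCounter:
    """Counts push-up repetitions and flags an arched back from per-frame keypoints
    produced by any off-the-shelf pose-estimation model."""

    def __init__(self, down_angle=90, up_angle=160, back_tolerance=20):
        self.down_angle = down_angle          # elbow angle at the bottom of a rep
        self.up_angle = up_angle              # elbow angle at the top of a rep
        self.back_tolerance = back_tolerance  # allowed deviation from a straight back
        self.reps = 0
        self.phase = "up"

    def update(self, kp):
        """kp: dict of keypoint name -> (x, y) for one video frame."""
        elbow = joint_angle(kp["shoulder"], kp["elbow"], kp["wrist"])
        back = joint_angle(kp["shoulder"], kp["hip"], kp["knee"])
        warnings = []
        if abs(180 - back) > self.back_tolerance:
            warnings.append("keep your back straight")
        if self.phase == "up" and elbow < self.down_angle:
            self.phase = "down"
        elif self.phase == "down" and elbow > self.up_angle:
            self.phase = "up"
            self.reps += 1
        return self.reps, warnings
```
In practice the keypoints would be supplied per frame by the pose-recognition stage, and the thresholds would be tuned per exercise; the state machine above is only the counting and form-checking idea.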
Video-level Assisted Data Labelling for Industrial Applications
Existing publicly available datasets, such as COCO, are built to be general-purpose and therefore lack domain specificity. When such public datasets are used to train deep learning models for industrial use cases and applications, e.g. detection of electronic components, they often yield sub-par performance because of the disparity between objects typically found in industrial environments and the data residing in public datasets. Closing this gap normally requires significant pixel-level supervision (annotation), where every pixel in every frame has to be annotated manually to compensate for the difference in training data. This solution is a deep-learning-based technique for instance segmentation in industrial environments that reduces the annotation effort from pixel-level to video-level. With instance segmentation, the goal is not just to detect and localise objects within a scene, but also to determine the different classes and the number of instances, i.e. recognising multiple objects of the same type as distinct instances. This aids scene understanding, and the resulting model can be deployed for productivity measurement or process improvement. Incremental learning is used to ensure that only the parts of the model that need to be updated with new data are changed, reducing the time taken for re-training and model updates.
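As a point of reference only (this is not the offered technology), the instance-segmentation output described above, per-class labels and instance counts, can be obtained from an off-the-shelf COCO-pretrained model such as torchvision's Mask R-CNN; the image filename is a placeholder, and the domain gap this offer addresses is exactly what such a general-purpose model suffers from on industrial footage.
```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# Load a generic COCO-pretrained instance-segmentation model (torchvision >= 0.13).
model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = convert_image_dtype(read_image("assembly_line_frame.jpg"), torch.float)
with torch.no_grad():
    output = model([image])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'

# Count detected instances per class above a confidence threshold.
counts = {}
for label, score in zip(output["labels"].tolist(), output["scores"].tolist()):
    if score >= 0.7:
        counts[label] = counts.get(label, 0) + 1
print(counts)  # e.g. {1: 3} -> three instances of COCO class 1 ("person")
```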
No-code User Interface (UI) Guidance and Walkthroughs
With the rapid pace of digitalisation, many existing systems and processes are becoming increasingly complex. Many users struggle to achieve their desired outcomes because of wide variance in digital proficiency; what is intuitive to one user may not be intuitive to another. Simply put, it is not possible to build a User Interface (UI) that is completely intuitive for every user profile, i.e. there is no one-size-fits-all interface. Similarly, many Frequently Asked Questions (FAQs) and user guides are poorly maintained or written so generically that they take little or no account of a user's role or level of proficiency. This technology offer is a no-code solution that can be deployed on websites to provide just-in-time (JIT), tutorial-style overlays that bridge the gap between digital workflows and human usage. These customisable overlays serve as guided walkthroughs to simplify employee onboarding and/or provide external users with a curated customer experience (CX). With this solution, highly reusable, step-by-step Standard Operating Procedures (SOPs), user manuals and interactive guides can be easily created to give end-users clear, simple instructions and assistance when needed. This, in turn, improves their understanding of proper software product usage and provides a painless user interface experience.