Computer Vision and Pattern Discovery

BII - Computer Vision and Pattern Discovery Group Photo

Research

The Computer Vision and Pattern Discovery for BioImages group uses advanced computer vision, machine learning and mathematical models to build better machines for improving health care and discovering biological knowledge. The group analyses images of tissues, histological slides, radiology images and 2D/3D live-cell assays acquired with a wide variety of imaging devices.

Video Analysis on Ultrasound of the Heart


BII - Computer Vision and Pattern Discovery for Bioimages Figure 1
Figure 1: In the apical four-chamber view, the volume of the left ventricle of the heart is measured in the diastolic phase, with good confidence in the annotation of the left ventricle.

Heart disease is the number one cause of death in the world, and early detection is key to treatment. Echocardiography is the most widely used tool for detecting heart disease, but it has several disadvantages. Firstly, image analysis is manual and takes up to 30 minutes per patient. Secondly, sonographer shortages are common and manual results vary widely. Lastly, the hardware and software needed are expensive, costing approximately $200k. We aim to eliminate the manual processes and the expensive hardware used by doctors by developing intelligent software that can review echo results to determine whether a patient has heart disease, while offering the option to review why our system rendered its decisions. We have unique access to proprietary data for training our deep learning models, clinical outcome data, in-house clinical expertise, and proprietary image processing and DICOM workflow techniques.
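The end-diastolic volume shown in Figure 1 is conventionally obtained from a traced endocardial border with the method of disks (Simpson's rule). The sketch below is a minimal, generic version of that calculation; the function name and the example diameters are hypothetical, and it is not necessarily the computation performed by our software.

    import numpy as np

    def lv_volume_single_plane(diameters_mm, long_axis_mm):
        """Single-plane method of disks: V = (pi / 4) * (L / n) * sum(d_i ** 2).

        diameters_mm : n disk diameters (mm) measured perpendicular to the LV long
                       axis, e.g. from a traced end-diastolic endocardial contour.
        long_axis_mm : length L of the LV long axis (mm).
        Returns the volume in millilitres (1 mL = 1000 mm^3).
        """
        d = np.asarray(diameters_mm, dtype=float)
        volume_mm3 = (np.pi / 4.0) * (long_axis_mm / d.size) * np.sum(d ** 2)
        return volume_mm3 / 1000.0

    # Hypothetical diameters for 20 disks from an end-diastolic A4C contour.
    diameters = np.linspace(12.0, 45.0, 20)
    print(f"EDV ~ {lv_volume_single_plane(diameters, 80.0):.1f} mL")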

Analysis of Angiogram Using Machine Learning Techniques

Coronary angiography is the gold-standard imaging technique for visualizing the coronary arteries, aiding the diagnosis of coronary artery disease and guiding patient management. Iodine-based contrast is injected into the coronary arteries and multiple moving X-ray images are acquired from different view angles around the patient's torso. Cardiologists are trained to interpret the coronary angiogram, but this takes time and there may be interobserver disagreement. In a new collaboration with the National Heart Centre Singapore, we are exploring artificial intelligence approaches to analyzing X-ray video sequences, with the goal of developing a quantitative assessment tool for repeatable and objective angiographic measurements.
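As an illustration of the kind of frame-level processing such a tool might build on, the sketch below enhances the contrast-filled vessels in a single angiogram frame with the standard Frangi vesselness filter from scikit-image. This is a generic baseline rather than our method, and the input file name is a placeholder.

    import numpy as np
    from skimage import filters, exposure
    from skimage.io import imread

    # Load a single grayscale frame from the angiogram sequence
    # ("frame.png" is a hypothetical file name).
    frame = imread("frame.png", as_gray=True).astype(float)

    # Contrast-filled coronary arteries appear as dark tubular structures on X-ray,
    # so enhance dark ridges (black_ridges=True) over a range of vessel scales.
    vesselness = filters.frangi(frame, sigmas=np.arange(1, 8, 2), black_ridges=True)

    # Rescale and threshold to obtain a rough vessel mask for downstream measurements.
    vesselness = exposure.rescale_intensity(vesselness, out_range=(0.0, 1.0))
    vessel_mask = vesselness > filters.threshold_otsu(vesselness)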

BII - Computer Vision and Pattern Discovery for Bioimages Figure 2
Figure 2: A typical frame of an angiogram video used for image analysis of coronary heart disease.

Intra-tumor Heterogeneity Through the Lens of Image Analysis

According to the World Health Organization, cancer is one of the major causes of death globally, estimated to be responsible for 9.6 million deaths in 2018. This deadly disease starts in one cell or a small group of cells that acquire mutations in their genetic material and become abnormal. These abnormal cells then grow in an uncontrolled manner, come together and form tumors; this is cancer. Because cancer is a reiterative evolutionary process and abnormal cells are susceptible to further mutations during their lifetime, tumors are composed of groups of abnormal cells with different genetic material (and hence different biological capabilities), a phenomenon called intra-tumor heterogeneity. This heterogeneity within tumors leads to therapeutic failure and drug resistance, making it one of the key difficulties in cancer treatment.

We are developing deep learning models to predict intra-tumor heterogeneity and reveal the histological features behind it by analyzing histopathology images. We aim to support medical professionals in diagnosis, treatment planning, medication management and precision medicine for cancer, in order to better address increased healthcare demands in the future.
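A common way to apply deep learning to histopathology images is to tile them into fixed-size patches, classify each patch, and assemble the patch predictions into a spatial map. The sketch below outlines such a patch-based workflow under assumed choices (a ResNet-18 backbone, 224-pixel patches and three placeholder classes); it illustrates the general approach rather than our specific models.

    import torch
    import torch.nn as nn
    from torchvision import models, transforms
    from PIL import Image

    PATCH = 224  # patch size in pixels

    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Small backbone with a three-way head; the number of classes is a placeholder.
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 3)
    model.eval()

    def score_patches(image_path):
        """Tile an H&E image into non-overlapping patches and return per-patch
        class probabilities, which can be stitched into a heterogeneity map."""
        image = Image.open(image_path).convert("RGB")
        w, h = image.size
        scores = {}
        with torch.no_grad():
            for y in range(0, h - PATCH + 1, PATCH):
                for x in range(0, w - PATCH + 1, PATCH):
                    patch = preprocess(image.crop((x, y, x + PATCH, y + PATCH)))
                    probs = torch.softmax(model(patch.unsqueeze(0)), dim=1)
                    scores[(x, y)] = probs.squeeze(0)
        return scores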

Whole Slide Image Stack Registration for 3D Blood Vessel Analysis


BII - Computer Vision and Pattern Discovery for Bioimages Figure 3
Figure 3: A 3D reconstruction of a blood vessel in the imaged tissue volume after applying the proposed registration algorithm and before regional registration. 

Whole slide imaging (WSI) has increased the availability of data to scientists and physicians for research, training, and more accurate diagnosis. With the help of this technology, more complex analyses of the tissue volume are now possible, such as 3D analysis of the vascular network in the tissue volume of interest. A 3D analysis, however, requires reconstruction of the tissue volume from the acquired images. This task is not trivial, as the process of cutting the tissue volume into thin slices and mounting them on glass slides may impose different deformations on each individual slice.

The majority of proposed WSI registration algorithms perform registration on the whole tissue slice in consecutive glass slides. In such algorithms, the registration results are not always desirable, as the registration is adversely affected by deformed tissue regions.

As a result, regional registration is found to be more effective. In order to perform an accurate regional registration, a rough registration of the consecutive tissue slides is crucial. We propose a robust algorithm for rough registration of whole slide images, and we show that using our registration algorithm followed by a regional registration provides accurate and more robust registration results.
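For illustration, a rough, global pre-alignment of two consecutive slide thumbnails can be obtained with standard feature matching and a RANSAC-estimated similarity transform, as in the OpenCV sketch below. This is a generic baseline under assumed inputs (grayscale thumbnails of the two slides), not the proposed algorithm.

    import cv2
    import numpy as np

    def rough_align(fixed_gray, moving_gray, max_features=2000):
        """Roughly align a moving slide thumbnail to a fixed one with a
        feature-based similarity transform (rotation, translation, uniform scale)."""
        orb = cv2.ORB_create(max_features)
        kp_f, des_f = orb.detectAndCompute(fixed_gray, None)
        kp_m, des_m = orb.detectAndCompute(moving_gray, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_m, des_f), key=lambda m: m.distance)[:200]

        src = np.float32([kp_m[m.queryIdx].pt for m in matches])
        dst = np.float32([kp_f[m.trainIdx].pt for m in matches])

        # RANSAC keeps the estimate robust to locally deformed or torn regions.
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        h, w = fixed_gray.shape
        return cv2.warpAffine(moving_gray, M, (w, h))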

Use of Deep Learning to Grade Acne

Acne vulgaris is one of the most common skin diseases afflicting humanity. It is caused by overactive sebaceous glands and is clinically characterized by comedones, papules, pustules, nodules and, in some cases, scarring.

Grading is a subjective method of determining the severity of acne, based on observing the dominant lesions, evaluating the presence or absence of inflammation, and estimating the extent of involvement. The Investigator Global Assessment (IGA) score is normally used to classify the level of severity from 0 to 4, where 0 is the lowest and 4 the highest grade, according to the presence of different types of acne lesions and their density in the region of interest, i.e. the facial or truncal view.

In the process of acne grading, different types of acne lesions are observed and counted by doctors to evaluate the presence or absence of inflammation. This screening process is tedious and time consuming, and can lead to a high number of false positives. Therefore, an automated acne grading system is needed to help dermatologists and skin specialists with screening, both before and after treatment. In this project, we work with dermatologists from the National Skin Centre to develop an automated acne grading system, which will use deep learning architectures to classify a given image on the IGA scale of 0-4 according to the level of acne severity.

BII - Computer Vision and Pattern Discovery for Bioimages Figure 4
Figure 4: Confusion matrix for grading of Investigator Global Assessment (IGA) score separated into three classes.

In our current dataset, we have a class imbalance problem; therefore, we convert the five-class problem (IGA scores 0-4) into a three-class problem (low, medium and high), where IGA scores 0-1 are considered low, IGA score 2 medium, and IGA scores 3-4 high. We then designed and trained a CNN model for the three-class classification problem. Our model achieves classification accuracies of 70%, 69% and 62% for low (Class A), medium (Class B) and high (Class C), respectively, as shown in the confusion matrix above.
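The label mapping and the per-class accuracies can be expressed compactly as follows. This is a minimal sketch of the bookkeeping around the model, with class names A/B/C as used in Figure 4; the helper names are hypothetical and the CNN itself is not shown.

    import numpy as np

    # IGA grade (0-4) -> severity class used in this project.
    IGA_TO_CLASS = {0: "A", 1: "A", 2: "B", 3: "C", 4: "C"}   # A=low, B=medium, C=high

    def to_three_class(iga_scores):
        """Collapse per-image IGA grades into the three-class labels used for training."""
        return np.array([IGA_TO_CLASS[int(s)] for s in iga_scores])

    def per_class_accuracy(confusion):
        """Per-class accuracy (recall) from a confusion matrix with rows = true class
        and columns = predicted class, as reported in Figure 4."""
        confusion = np.asarray(confusion, dtype=float)
        return np.diag(confusion) / confusion.sum(axis=1)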

Novel Networks for Regressing Distributions

Deep learning methods have shown superior performance in many machine learning applications due to their ability to model high-level abstractions in the data. However, conventional neural networks are not efficient for regression on distributions. Since each node in a neural network encodes just a real value, the network cannot encode distributions compactly, resulting in many parameters. To that end, we propose a novel network which generalizes the neural network structure by encoding an entire probability distribution in each node. Our network, called the distribution regression network (DRN), exhibits non-linear level sets in the transformation at each node, increasing the degree of non-linearity in each layer. On several real-world datasets for distribution-to-distribution regression, DRN achieves higher accuracies than conventional neural networks while using far fewer parameters.
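To see why conventional networks are parameter-hungry here, consider the usual baseline in which every distribution is discretized into a fixed number of bins and each bin becomes one real-valued input node. The sketch below implements such a conventional feed-forward baseline (not DRN) under assumed sizes (100 bins, three input distributions); the first fully connected layer alone already needs tens of thousands of weights, whereas DRN keeps one whole distribution per node.

    import torch
    import torch.nn as nn

    BINS = 100   # bins used to discretise each distribution
    N_IN = 3     # number of input distributions

    class DistributionToDistributionMLP(nn.Module):
        """Each input distribution occupies BINS input nodes (one real value per bin),
        so the first layer alone has N_IN * BINS * hidden weights - the parameter
        blow-up that DRN avoids by encoding one distribution per node."""
        def __init__(self, bins=BINS, n_in=N_IN, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_in * bins, hidden),
                nn.ReLU(),
                nn.Linear(hidden, bins),
            )

        def forward(self, x):                          # x: (batch, n_in * bins)
            return torch.softmax(self.net(x), dim=-1)  # each output row sums to 1

    # Toy usage with random, normalised input histograms and a random target.
    x = torch.rand(8, N_IN, BINS)
    x = (x / x.sum(dim=-1, keepdim=True)).reshape(8, -1)
    target = torch.rand(8, BINS)
    target = target / target.sum(dim=-1, keepdim=True)

    model = DistributionToDistributionMLP()
    pred = model(x)
    loss = nn.functional.kl_div(pred.log(), target, reduction="batchmean")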

As an extension, we used DRN for the more complex task of performing forward prediction on time sequences of distributions. We also studied how the test accuracy varies with the size of the training set, since the amount of data may be limited given the seasonality of time-series data. We found that DRN requires two to five times less training data than neural networks to achieve similar accuracies. However, DRN is a feedforward network and does not explicitly model time dependencies. Hence, we propose a new recurrent architecture for DRN, named the recurrent distribution regression network (RDRN). RDRN and DRN outperform neural network models, with RDRN achieving similar or better accuracies than DRN. Given the importance of distributions in capturing the characteristics of a population and the effects of noise, DRN and RDRN have applications in a wide range of fields such as human population studies, bioinformatics and finance.

BII - Computer Vision and Pattern Discovery for Bioimages Figure 5
Figure 5a,5b: (Left) An example network for our distribution regression network (DRN). DRN generalizes the conventional neural network by encoding an entire probability distribution within each network node. (Right) Our recurrent distribution regression network (RDRN) extends the DRN model with a recurrent architecture and performs regression on an input sequence of probability distributions.

Members

 Deputy Director (Training and Talent), Senior Principal Investigator  LEE Hwee Kuan   |    [View Bio]  
 Senior Post-Doctoral Research Fellow CHENG Zi Yi, Nicholas  
 Senior Post-Doctoral Research Fellow PAKNEZHAD Mahsa 
 Senior Post-Doctoral Research Fellow LIU Wei
 Post-Doctoral Research Fellow SINGH Malay
 Post-Doctoral Research Fellow TAN Wei Ping Eddy
 Research Officer LIN Li
 Research Officer D/O RANJIT SINGH Kavita Kaur
 PhD Student CHEN Brian 
 PhD Student CHEONG Jiasheng Isaac 
 PhD Student COPPOLA Davide
 PhD Student PARK Sojeong 

Selected Publications
