
Medical image segmentation based on self-supervised learning


Accurate segmentation of internal organs, tissues, and lesions from medical images (CT, MRI, etc.) is of great value in the diagnosis and treatment of various diseases. Given that the world is increasingly seeing new kinds of pathogens, the need to detect novel types of lesions and tumors, and to monitor whole or custom parts of various anatomical structures (lungs, kidneys, liver, heart, etc.), has become all the more imperative. Supervised techniques for training segmentation models require large and reliable annotation sets, which are difficult to procure for medical images for the following reasons:

1) Image acquisition is expensive, complex, and not standardized across medical equipment.

2) Segmentation annotation requires subject-matter expertise, unlike in the case of natural images.

3) The need to adapt to new types of segmentation targets (e.g., a new kind of lesion, or a new sub-region of an anatomical structure) arises very often in the medical domain.

Learning to segment medical images under minimal supervision is therefore significant in the present evolving scenario. The availability of large amounts of unlabeled medical image data opens up the scope for learning the nature of the image data in a self-supervised setting. The strength of self-supervision-based approaches has so far been under-exploited in segmentation work in the medical image domain. In this proposal we aim to develop novel and robust techniques for identifying "super-pixels" in medical images based on self-supervised learning. The proposed work will be evaluated on multiple datasets covering the lungs (COVID-19 Lesion Segmentation dataset), abdominal organs (Abd-CT, Abd-MRI & CHAOS datasets), and the heart (Card MRI dataset), and will be compared against supervised techniques.
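To make the super-pixel notion concrete, the sketch below clusters pixels on joint intensity-and-position features, in the spirit of SLIC-style superpixels. This is a generic illustration, not the proposal's method: the function name, parameters, and the plain k-means loop are all assumptions for demonstration, and a self-supervised approach would replace the raw intensity feature with learned per-pixel embeddings.

```python
import numpy as np

def simple_superpixels(img, n_segments=16, compactness=0.1, n_iter=10, seed=0):
    """Crude SLIC-style superpixels: k-means over (intensity, row, col).

    `img` is a 2-D grayscale array; returns an integer label map of the
    same shape. Illustrative sketch only -- hypothetical helper, not the
    proposed self-supervised technique.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Feature per pixel: intensity plus spatial coordinates; the spatial
    # terms are normalized and scaled by `compactness` to trade off
    # appearance similarity against spatial coherence.
    feats = np.stack([img.ravel().astype(float),
                      compactness * ys.ravel() / h,
                      compactness * xs.ravel() / w], axis=1)
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), n_segments, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest cluster center.
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for k in range(n_segments):
            mask = labels == k
            if mask.any():
                centers[k] = feats[mask].mean(axis=0)
    return labels.reshape(h, w)

# Usage on a synthetic "scan": each pixel gets one of n_segments labels.
scan = np.random.default_rng(1).random((32, 32))
label_map = simple_superpixels(scan, n_segments=8)
```

In a self-supervised pipeline, the intensity column of `feats` would typically be swapped for features produced by a network pretrained on unlabeled scans, so that the resulting super-pixels respect learned anatomy rather than raw gray values.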

  • Dr. Viswanath Gopalakrishnan