François Chung, Ph.D.
Multimodal prior

INRIA project @Sophia-Antipolis, France (2009). Model-based image segmentation requires prior information about the appearance of a structure in the image. Instead of relying on Principal Component Analysis (PCA), as in Statistical Appearance Models (SAM), we propose a prior based on a regional clustering of intensity profiles that does not require accurate pointwise registration.

This appearance prior, denoted Multimodal Prior Appearance Model (MPAM), is built upon the Expectation-Maximization (EM) algorithm with regularized covariance matrices and includes spatial regularization. The number of appearance regions is determined by a novel model order selection criterion. The prior is described on a reference mesh where each vertex has a probability of belonging to each intensity profile class.
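The clustering step can be sketched as EM for a Gaussian mixture over intensity profiles, where each profile receives soft class probabilities. The sketch below is illustrative only: the function name, the farthest-point initialization, and the simple `reg * I` covariance regularization are assumptions standing in for the paper's actual regularization scheme and model order selection criterion.

```python
import numpy as np

def em_profile_clustering(X, k, n_iter=50, reg=1e-3, seed=0):
    # Cluster intensity profiles X (n_profiles x profile_len) into k
    # appearance classes via EM for a Gaussian mixture. Covariances are
    # regularized by adding reg * I -- a simple stand-in for the paper's
    # regularized-covariance scheme (names and defaults are illustrative).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Farthest-point initialization of the class mean profiles.
    mu = [X[rng.integers(n)]]
    for _ in range(1, k):
        d2 = np.min([((X - m) ** 2).sum(axis=1) for m in mu], axis=0)
        mu.append(X[np.argmax(d2)])
    mu = np.array(mu)
    cov = np.stack([np.cov(X.T) + reg * np.eye(d)] * k)
    w = np.full(k, 1.0 / k)
    log_resp = np.zeros((n, k))
    for _ in range(n_iter):
        # E-step: class responsibilities from Gaussian log-densities.
        for j in range(k):
            diff = X - mu[j]
            inv = np.linalg.inv(cov[j])
            _, logdet = np.linalg.slogdet(cov[j])
            log_resp[:, j] = np.log(w[j]) - 0.5 * (
                (diff @ inv * diff).sum(axis=1)
                + logdet + d * np.log(2 * np.pi))
        resp = np.exp(log_resp - log_resp.max(axis=1, keepdims=True))
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update weights, means and regularized covariances.
        nk = resp.sum(axis=0)
        w = nk / n
        mu = (resp.T @ X) / nk[:, None]
        for j in range(k):
            diff = X - mu[j]
            cov[j] = ((resp[:, j, None] * diff).T @ diff / nk[j]
                      + reg * np.eye(d))
    # Per-profile class probabilities and class mean profiles.
    return resp, mu
```

The returned responsibilities play the role of the per-vertex class probabilities; in practice one would add the spatial regularization on the mesh and select `k` with a model order criterion, neither of which is shown here.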

We tested our method on 7 liver meshes segmented from Computed Tomography (CT) images and 4 tibia meshes segmented from Magnetic Resonance (MR) images. For both structures, outward intensity profiles of 10 samples taken every millimeter were generated from meshes of around 4000 vertices.
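Profile extraction of this kind can be sketched as sampling the image along each vertex's outward normal. In the sketch below, `intensity_at` is a hypothetical callable mapping (n, 3) points in millimeters to intensities (e.g. a wrapper around trilinear interpolation of the CT/MR volume); whether sampling starts at the vertex itself or one step outside it is an implementation choice not specified here.

```python
import numpy as np

def extract_profiles(vertices, normals, intensity_at,
                     n_samples=10, step_mm=1.0):
    # For each mesh vertex, sample the image intensity at n_samples
    # points spaced step_mm apart along the outward unit normal.
    # intensity_at is a hypothetical (n, 3) -> (n,) sampling callable.
    offsets = step_mm * np.arange(1, n_samples + 1)  # 1..n_samples mm outward
    pts = (vertices[:, None, :]
           + offsets[None, :, None] * normals[:, None, :])
    return intensity_at(pts.reshape(-1, 3)).reshape(len(vertices), n_samples)
```

With ~4000 vertices and 10 samples each, this yields a 4000 x 10 profile matrix per mesh, the input to the clustering step.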

The main advantage of our approach is that appearance regions are extracted without requiring an accurate pointwise registration. Another advantage is that a meaningful prior may be built from very few datasets (in fact, a single dataset suffices). Furthermore, the prior is multimodal and can therefore cope with large variations in appearance.

Reference

  • Related publication: F. Chung et al., MICCAI 2009 (conference proceedings).