Automatic Brain Tumor Segmentation from MR Images via a Multimodal Sparse Coding Based Probabilistic Model
Y. Li; Q. Dou; J. Yu; F. Jia; J. Qin; P. A. Heng
2015
Conference | Proc. of the 5th Pattern Recognition in NeuroImaging Workshop, 2015
Venue | Stanford, CA, USA
Abstract | Accurate segmentation of brain tumors from MR images is crucial for the diagnosis and treatment of brain cancer. We propose a novel automated brain tumor segmentation method based on a probabilistic model combining sparse coding and a Markov random field (MRF). We formulate brain tumor segmentation as a pixel-wise labeling problem over three classes: tumor, edema, and healthy tissue. For each class, dictionary learning is performed independently on multi-modality gray-scale patches. A sparse representation is then extracted over a joint dictionary constructed by combining the three independent dictionaries. Finally, we build a probabilistic model that estimates the maximum a posteriori (MAP) labeling by introducing the sparse representation into the likelihood and into the prior under the MRF assumption. Compared with traditional methods, which employ hand-crafted low-level features to construct the probabilistic model, our model better represents the characteristics of a pixel and its relation to its neighbors via the sparse coefficients obtained from the learned dictionary. We validated our method on the MICCAI 2012 BRATS challenge brain MRI dataset and achieved comparable or better results than state-of-the-art methods.
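The pixel-labeling step the abstract describes can be sketched in a toy form: code each patch sparsely over a joint dictionary (here via a small orthogonal matching pursuit), then assign the class whose atoms carry the most coefficient energy. This is an illustrative sketch only, not the authors' implementation: the dictionaries below are random rather than learned from training patches, and the MRF prior over neighboring pixels is omitted.

```python
import numpy as np

def omp(D, x, sparsity):
    """Greedy orthogonal matching pursuit: sparse code of x over columns of D."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    sol = np.zeros(0)
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if idx not in support:
            support.append(idx)
        # Re-fit x on the selected atoms, then update the residual.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coeffs[support] = sol
    return coeffs

def classify_pixel(joint_D, class_slices, x, sparsity=3):
    """Label a patch by which class's atoms capture the most coefficient energy."""
    a = omp(joint_D, x, sparsity)
    energies = [np.sum(a[s] ** 2) for s in class_slices]
    return int(np.argmax(energies))

rng = np.random.default_rng(0)
d, k = 64, 20  # patch dimension, atoms per class (illustrative sizes)
# Three per-class dictionaries (tumor, edema, healthy), unit-norm columns.
dicts = [rng.standard_normal((d, k)) for _ in range(3)]
dicts = [Dc / np.linalg.norm(Dc, axis=0) for Dc in dicts]
joint_D = np.hstack(dicts)  # joint dictionary, as in the paper's construction
class_slices = [slice(i * k, (i + 1) * k) for i in range(3)]

# Synthetic "tumor" patch: a combination of class-0 atoms plus noise.
x = dicts[0][:, 2] + 0.5 * dicts[0][:, 7] + 0.01 * rng.standard_normal(d)
label = classify_pixel(joint_D, class_slices, x)
print(label)  # the class-0 (tumor) atoms dominate the sparse code
```

In the full method, these per-pixel energies would feed the likelihood term of the MAP objective, with the MRF prior smoothing labels across neighboring pixels.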
Indexed by | EI
Language | English
Content type | Conference paper
Source URL | http://ir.siat.ac.cn:8080/handle/172644/6794
Collection | 深圳先进技术研究院_集成所 (Shenzhen Institutes of Advanced Technology, Institute of Integration)
Recommended citation (GB/T 7714) | Y. Li, Q. Dou, J. Yu, et al. Automatic Brain Tumor Segmentation from MR Images via a Multimodal Sparse Coding Based Probabilistic Model[C]. In: Proc. of the 5th Pattern Recognition in NeuroImaging Workshop, 2015. Stanford, CA, USA.