Manuscript received September 19, 2022; revised December 26, 2022; accepted January 10, 2023.
Abstract—Early screening of cervical lesions is of great significance for pathological diagnosis. Owing to the complexity of cell morphological changes and the limitations of medical images, accurate segmentation of cervical cells remains a challenging task. In this paper, an isomorphic multi-branch modulated deformable convolution residual model is proposed to extract features that improve the segmentation of small cells and of overlapping cytoplasmic boundaries. Regional feature extraction, bounding-box recognition, and a pixel-level mask branch at the final stage are then integrated and optimized within the cascade region-based convolutional neural network (Cascade R-CNN) to complete the segmentation of cervical cells with higher accuracy. The proposed framework was evaluated on the public dataset of the ISBI 2014 cervical cell segmentation challenge. Experimental results show that the network model achieves an average accuracy of 81.1% in cervical cell segmentation and an accuracy of 77% on small targets. To some extent, it can assist pathologists in the early screening of cervical cancer.
Index Terms—Cervical cell, instance segmentation, Cascade R-CNN, modulated deformable convolution, residual network, deep learning
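For orientation, the following is a minimal PyTorch sketch (not the authors' released code) of a residual block whose 3×3 convolution is replaced by a modulated deformable convolution, built on torchvision.ops.DeformConv2d; the channel sizes, the single-branch layout, and the class name ModulatedDeformResBlock are illustrative assumptions rather than the paper's exact architecture.

import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class ModulatedDeformResBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        k = kernel_size
        # A plain conv predicts, per output location, the 2*k*k sampling
        # offsets and the k*k modulation masks of the deformable kernel.
        self.offset_mask = nn.Conv2d(channels, 3 * k * k, k, padding=k // 2)
        self.dcn = DeformConv2d(channels, channels, k, padding=k // 2)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        om = self.offset_mask(x)
        k2 = om.shape[1] // 3
        # First 2*k*k channels are offsets; the rest, squashed to (0, 1)
        # by a sigmoid, modulate each sampled value (DCNv2-style).
        offset = om[:, : 2 * k2]
        mask = torch.sigmoid(om[:, 2 * k2 :])
        out = self.bn(self.dcn(x, offset, mask))
        return self.relu(out + x)  # residual connection

# Usage: spatial size is preserved, so the block can stand in for a
# standard residual unit inside a feature-extraction backbone.
x = torch.randn(1, 64, 56, 56)
y = ModulatedDeformResBlock(64)(x)
assert y.shape == x.shape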
Yanjing Ding and Weiwei Yue are with the School of Physics and Electronics, Shandong Normal University, Jinan, China.
Qinghua Li is with the College of Artificial Intelligence and Big Data for Medical Sciences, SDFMU, Jinan, China.
*Correspondence: liqinghua1977@163.com (Q.L.)
Cite: Yanjing Ding, Weiwei Yue, and Qinghua Li*, "Automated Segmentation of Cervical Cell Images Using IMBMDCR-Net," International Journal of Machine Learning, vol. 13, no. 4, pp. 163–172, 2023.
Copyright © 2023 by the authors. This is an open-access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.