ABSTRACT
To achieve highly precise medical image segmentation, this paper presents ConvMedSegNet, a novel convolutional neural network with a U-shaped architecture that integrates two key modules: the multi-receptive field depthwise convolution module (MRDC) and the guided fusion module (GF). The MRDC module captures texture information at varying scales through multi-scale convolutional layers; by expanding the network's width, this information strengthens the correlation of global features. This strategy preserves the inherent inductive biases of convolution while enhancing the network's ability to model global dependencies. The GF module, in turn, performs multi-scale feature fusion by connecting the encoder and decoder. Through guided fusion, it transfers information between features separated by substantial distances, minimizing the loss of critical information. In experiments on public medical image datasets including BUSI and ISIC2018, ConvMedSegNet outperforms several advanced competing methods. The code is available at https://github.com/csust-yixin/ConvMedSegNet.
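To make the two modules concrete, the following is a minimal PyTorch sketch of how an MRDC block and a GF block could be realized. The kernel sizes (3, 5, 7), the channel widths, the sigmoid gating, and the bilinear upsampling are illustrative assumptions rather than the authors' exact design, which is available in the linked repository.

```python
# Minimal sketch of the MRDC and GF modules, under the assumptions stated above.
import torch
import torch.nn as nn


class MRDC(nn.Module):
    """Multi-receptive field depthwise convolution (sketch).

    Depthwise convolutions with several kernel sizes capture texture at
    multiple scales; concatenating their outputs widens the network, and a
    pointwise convolution mixes the scales back together.
    """

    def __init__(self, channels: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # One depthwise branch per receptive field size (sizes are assumed).
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes
        )
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), channels, 1)
        self.act = nn.GELU()

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(self.fuse(multi_scale))


class GF(nn.Module):
    """Guided fusion between an encoder and a decoder feature (sketch).

    The decoder feature is upsampled to the encoder's resolution and turned
    into a spatial gate that reweights the skip connection before the two
    streams are summed, so distant features guide what the skip passes on.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, enc_feat, dec_feat):
        dec_up = self.up(dec_feat)
        return enc_feat * self.gate(dec_up) + dec_up


# Shape check: a 64-channel encoder map at 32x32 fused with a decoder map at 16x16.
enc = torch.randn(1, 64, 32, 32)
dec = torch.randn(1, 64, 16, 16)
out = GF(64)(MRDC(64)(enc), dec)
print(out.shape)  # torch.Size([1, 64, 32, 32])
```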