1.
Comput Biol Med ; 173: 108331, 2024 May.
Article in English | MEDLINE | ID: mdl-38522252

ABSTRACT

Medical image segmentation is a research focus and a foundation for developing intelligent medical systems. Deep learning has recently become the standard approach to medical image segmentation and has achieved significant success, advancing image reconstruction, disease diagnosis, and surgical planning. However, semantic learning is often inefficient owing to the lack of supervision on feature maps, so high-quality segmentation models typically rely on numerous, accurate data annotations. Learning robust semantic representations in latent spaces remains a challenge. In this paper, we propose a novel semi-supervised learning framework that learns vital attributes of medical images, constructing generalized representations from diverse semantics to perform medical image segmentation. We first build a self-supervised component that achieves context recovery by reconstructing the spatial layout and intensity of medical images, which yields semantic representations in the feature maps. We then combine the semantic-rich feature maps and apply a simple linear semantic transformation to convert them into a segmentation. The proposed framework was tested on five medical segmentation datasets. Quantitative assessments show that our method achieves the highest scores on the IXI (73.78%), ScaF (47.50%), COVID-19-Seg (50.72%), PC-Seg (65.06%), and Brain-MR (72.63%) datasets. Finally, we compared our method with the latest semi-supervised learning methods, obtaining DSC values of 77.15% and 75.22% and ranking first on two representative datasets. The experimental results show both that the proposed linear semantic transformation applies effectively to medical image segmentation and that it is simple and easy to use for pursuing robust segmentation in semi-supervised learning. Our code is available at: https://github.com/QingYunA/Linear-Semantic-Transformation-for-Semi-Supervised-Medical-Image-Segmentation.
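
A minimal sketch of one plausible reading of the "simple linear semantic transformation": a per-pixel linear map over feature channels, i.e. a 1x1 convolution from semantic feature maps to segmentation logits. The encoder, the LinearSemanticHead class, and all shapes below are hypothetical stand-ins, not the authors' implementation (see their repository above for the real code).

import torch
import torch.nn as nn

class LinearSemanticHead(nn.Module):
    # A 1x1 convolution is a per-pixel linear transformation over the
    # channel axis, mapping semantic features to per-class logits.
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.linear = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.linear(features)  # (B, num_classes, H, W)

# Toy stand-in for the paper's semantic encoder.
encoder = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
head = LinearSemanticHead(in_channels=64, num_classes=2)

image = torch.randn(4, 1, 128, 128)   # batch of grayscale slices
logits = head(encoder(image))         # (4, 2, 128, 128)
mask = logits.argmax(dim=1)           # per-pixel class prediction

Because the head is a single linear map, the segmentation quality rests entirely on how semantically rich the encoder's feature maps are, which is consistent with the abstract's emphasis on self-supervised context recovery.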


Subject(s)
COVID-19 , Semantics , Humans , Brain , Supervised Machine Learning , Image Processing, Computer-Assisted
2.
IEEE J Biomed Health Inform ; 28(5): 2569-2580, 2024 May.
Article in English | MEDLINE | ID: mdl-38498747

ABSTRACT

Acupoints (APs) have proven positive effects on disease diagnosis and treatment, yet intelligent techniques for automatic AP detection are not yet mature, so AP localization still depends largely on manual positioning. In this paper, we use machine learning to recognize APs versus non-APs from skin conductance, which could assist AP detection and localization in clinical practice. First, we collect skin conductance at the traditional Five-Shu Points and their corresponding non-APs with wearable sensors, establishing a dataset of over 36,000 samples across 12 AP types. Then, electrical features are extracted from the time domain, the frequency domain, and a nonlinear perspective, and typical machine learning algorithms (SVM, RF, KNN, NB, and XGBoost) are applied to recognize APs and non-APs. XGBoost achieves the best precision, 66.38%. Moreover, we quantify the impact of differences among AP types and among individuals, and propose a pairwise feature generation method to weaken their effect on recognition precision; using the generated pairwise features improves recognition precision by 7.17%. This work systematically realizes the automatic recognition of APs and non-APs and helps push forward the intelligent development of AP research and Traditional Chinese Medicine theory.
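
A minimal sketch of one plausible reading of the pairwise feature generation: pair each AP sample with a non-AP sample from the same subject and use their difference as the feature vector, so that subject-level offsets cancel before classification. The synthetic data, the pairwise_features helper, and the XGBClassifier settings below are hypothetical illustrations, not the authors' pipeline.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Toy stand-in for per-sample feature vectors extracted from skin
# conductance (time-domain, frequency-domain, and nonlinear features).
n_pairs, n_features = 500, 12
ap_feats = rng.normal(0.5, 1.0, size=(n_pairs, n_features))      # acupoints
non_ap_feats = rng.normal(0.0, 1.0, size=(n_pairs, n_features))  # non-acupoints

def pairwise_features(a, b):
    # Hypothetical pairwise feature: the difference between two samples
    # from the same subject, cancelling subject-level offsets.
    return a - b

# Label 1: (AP, non-AP) pair order; label 0: (non-AP, AP) order.
X = np.vstack([pairwise_features(ap_feats, non_ap_feats),
               pairwise_features(non_ap_feats, ap_feats)])
y = np.concatenate([np.ones(n_pairs), np.zeros(n_pairs)]).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("precision:", precision_score(y_te, clf.predict(X_te)))

Differencing paired measurements is a standard way to remove per-subject baselines, which matches the abstract's goal of weakening the impact of individual differences on recognition precision.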


Subject(s)
Acupuncture Points , Galvanic Skin Response , Machine Learning , Signal Processing, Computer-Assisted , Humans , Galvanic Skin Response/physiology , Algorithms , Male , Wearable Electronic Devices , Female , Adult , Young Adult