Results 1 - 20 of 496
1.
Int. j. morphol ; 42(3): 826-832, jun. 2024. ilus, tab
Article in English | LILACS | ID: biblio-1564601

Abstract

SUMMARY: The study aims to demonstrate the success of deep learning methods in sex prediction using the hyoid bone. Neck Computed Tomography (CT) images of people aged 15-94 years were retrospectively reviewed. The neck CT images were cleaned using the RadiAnt DICOM Viewer (version 2023.1) program, leaving only the hyoid bone. A total of 7 images, in the anterior, posterior, superior, inferior, right, left, and right-anterior-upward directions, were obtained from each patient's segmented hyoid bone image. 2170 images were obtained from 310 hyoid bones of males, and 1820 images from 260 hyoid bones of females. The 3990 images were expanded to 5000 images by data augmentation. The dataset was divided into 80 % for training, 10 % for testing, and another 10 % for validation. The deep learning models DenseNet121, ResNet152, and VGG19 were compared. An accuracy of 87 % was achieved with the ResNet152 model and 80.2 % with the VGG19 model. The highest accuracy among the compared models, 89 %, was achieved by the DenseNet121 model. This model had a specificity of 0.87, a sensitivity of 0.90, and an F1 score of 0.89 in women, and a specificity of 0.90, a sensitivity of 0.87, and an F1 score of 0.88 in men. It was observed that sex could be predicted from the hyoid bone using the deep learning methods DenseNet121, ResNet152, and VGG19; thus, a method that had not previously been tried on this bone was used. This study also brings us one step closer to strengthening and perfecting the use of these technologies, which will reduce the subjectivity of the methods and support the expert in the decision-making process of sex prediction.
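
As an illustration of the workflow the abstract describes (an 80/10/10 split and fine-tuning of DenseNet121 for a two-class female/male problem), a minimal PyTorch sketch could look like the following; the folder layout, preprocessing, and hyperparameters are assumptions, not the study's actual settings.

```python
# Minimal sketch of fine-tuning DenseNet121 for binary (female/male) classification
# with an 80/10/10 split, as in the abstract; paths and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import random_split, DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
full = datasets.ImageFolder("hyoid_images/", transform=tfm)   # hypothetical folder layout
n = len(full)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_ds, val_ds, test_ds = random_split(full, [n_train, n_val, n - n_train - n_val])

model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 2)  # female / male

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
loader = DataLoader(train_ds, batch_size=32, shuffle=True)
model.train()
for epoch in range(10):                      # epoch count is a placeholder
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```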




Subject(s)
Humans , Male , Female , Adolescent , Adult , Middle Aged , Aged , Aged, 80 and over , Young Adult , Tomography, X-Ray Computed , Sex Determination by Skeleton , Deep Learning , Hyoid Bone/diagnostic imaging , Predictive Value of Tests , Sensitivity and Specificity , Hyoid Bone/anatomy & histology
2.
Article in Chinese | WPRIM | ID: wpr-1006526

Abstract

Lung adenocarcinoma is a prevalent histological subtype of non-small cell lung cancer with different morphologic and molecular features that are critical for prognosis and treatment planning. In recent years, with the development of artificial intelligence technology, its application in the study of pathological subtypes and gene expression of lung adenocarcinoma has gained widespread attention. This paper reviews the research progress of machine learning and deep learning in pathological subtype classification and gene expression analysis of lung adenocarcinoma, summarizes the problems and challenges at the present stage, and anticipates future directions for artificial intelligence in lung adenocarcinoma research.

3.
Article in Chinese | WPRIM | ID: wpr-1017252

Abstract

Objective To discuss the value of a clinical radiomic nomogram (CRN) and a deep convolutional neural network (DCNN) in distinguishing atypical pulmonary hamartoma (APH) from atypical lung adenocarcinoma (ALA). Methods A total of 307 patients were retrospectively recruited from two institutions. Patients in institution 1 were randomly divided into the training set (n=184: APH=97, ALA=87) and the internal validation set (n=79: APH=41, ALA=38) in a ratio of 7:3, and patients in institution 2 were assigned as the external validation set (n=44: APH=23, ALA=21). A CRN model and a DCNN model were established, respectively, and the performances of the two models were compared by the DeLong test and receiver operating characteristic (ROC) curves. A human-machine competition was conducted to evaluate the value of AI in the Lung-RADS classification. Results The areas under the curve (AUCs) of the DCNN model were higher than those of the CRN model in the training, internal and external validation sets (0.983 vs 0.968, 0.973 vs 0.953, and 0.942 vs 0.932, respectively); however, the differences were not statistically significant (P=0.23, 0.31 and 0.34, respectively). In the radiologist-AI competition experiment, AI tended to downgrade more Lung-RADS categories in APH and affirm more Lung-RADS categories in ALA than the radiologists. Conclusion Both DCNN and CRN have high value in distinguishing APH from ALA, with the former performing better. AI is superior to radiologists in evaluating the Lung-RADS classification of pulmonary nodules.
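
The AUC comparison reported above can be reproduced in outline as follows; the label and probability arrays are synthetic placeholders, and since the DeLong test is not available in scikit-learn, a paired bootstrap of the AUC difference is shown as a stand-in.

```python
# Sketch of comparing two classifiers' ROC AUCs (CRN vs DCNN style); y_true, p_crn and
# p_dcnn are hypothetical labels and predicted probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)            # placeholder labels (APH=0, ALA=1)
p_crn = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 200), 0, 1)
p_dcnn = np.clip(y_true * 0.7 + rng.normal(0.15, 0.2, 200), 0, 1)

print("CRN  AUC:", roc_auc_score(y_true, p_crn))
print("DCNN AUC:", roc_auc_score(y_true, p_dcnn))

# Paired bootstrap of the AUC difference as a simple stand-in for the DeLong test
diffs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) < 2:
        continue
    diffs.append(roc_auc_score(y_true[idx], p_dcnn[idx]) -
                 roc_auc_score(y_true[idx], p_crn[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"95% CI of AUC difference: [{lo:.3f}, {hi:.3f}]")
```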

4.
Article in Chinese | WPRIM | ID: wpr-1018528

Abstract

Objective: Glioblastoma (GBM) and brain metastases (BMs) are the two most common malignant brain tumors in adults. Magnetic resonance imaging (MRI) is a commonly used method for screening and evaluating the prognosis of brain tumors, but the specificity and sensitivity of conventional MRI sequences in the differential diagnosis of GBM and BMs are limited. In recent years, deep neural networks have shown great potential in diagnostic classification and in the establishment of clinical decision support systems. This study aims to apply radiomics features extracted by deep learning techniques to explore the feasibility of accurate preoperative classification of newly diagnosed GBM and solitary brain metastases (SBMs), and to further explore the impact of multimodality data fusion on the classification task. Methods: Standard-protocol cranial MRI sequence data from 135 newly diagnosed GBM patients and 73 patients with SBMs confirmed by histopathologic or clinical diagnosis were retrospectively analyzed. First, structural T1-weighted, contrast-enhanced T1-weighted (T1C), and T2-weighted images were selected as the 3 inputs to the model; regions of interest (ROIs) were manually delineated on the registered three modal MR images, and multimodality radiomics features were obtained. Dimensionality was reduced using a random forest (RF)-based feature selection method, and the importance of each feature was further analyzed. Secondly, a contrastive disentanglement method was used to find the shared and complementary features between the different modal features. Finally, the response of each sample to GBM and SBMs was predicted by fusing the 2 kinds of features from different modalities. Results: The radiomics features combined with machine learning and the multimodal fusion method had good discriminatory ability for GBM and SBMs. Furthermore, compared with single-modal data, the multimodal fusion models using machine learning algorithms such as support vector machine (SVM), logistic regression, RF, adaptive boosting (AdaBoost), and gradient boosting decision tree (GBDT) achieved significant improvements, with area under the curve (AUC) values of 0.974, 0.978, 0.943, 0.938, and 0.947, respectively. Our contrastive disentangled multimodal MR fusion method performed well, with AUC, accuracy (ACC), sensitivity (SEN) and specificity (SPE) on the test set of 0.985, 0.984, 0.900, and 0.990, respectively. Compared with other multimodal fusion methods, AUC, ACC, and SEN in this study all achieved the best performance. In the ablation experiment verifying the effect of each module component, AUC, ACC, and SEN increased by 1.6%, 10.9% and 15.0%, respectively, when the 3 loss functions were used simultaneously. Conclusion: A deep learning-based contrastive disentangled multimodal MR radiomics feature fusion technique helps to improve GBM and SBMs classification accuracy.
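
The classical half of this pipeline (random-forest feature ranking over fused radiomics features followed by an SVM evaluated by AUC) might look roughly like the sketch below; the feature matrix is a synthetic placeholder, and the paper's contrastive disentanglement network is not reproduced.

```python
# Sketch: RF-based feature ranking over multimodal radiomics features, then an SVM
# classifier scored by AUC. X is a hypothetical fused radiomics feature matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 208                                  # 135 GBM + 73 SBM cases, as in the abstract
y = np.array([1] * 135 + [0] * 73)
X = rng.normal(size=(n, 300))            # placeholder fused radiomics features

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:30]   # keep the 30 most important features

X_tr, X_te, y_tr, y_te = train_test_split(X[:, top], y, test_size=0.3,
                                          stratify=y, random_state=0)
svm = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, svm.predict_proba(X_te)[:, 1]))
```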

5.
Article in Chinese | WPRIM | ID: wpr-1019036

Abstract

In the past 20 years, artificial intelligence has developed rapidly and is increasingly applied in the medical field, including medical image-assisted diagnosis and treatment, health management, and disease risk prediction. This paper summarizes the application status of artificial intelligence-assisted detection and diagnosis systems based on deep learning in anorectal diseases, as well as new methods related to the diagnosis and treatment of anorectal diseases at home and abroad. It mainly reviews the research progress of artificial intelligence technology in the diagnosis and treatment of anal fistula, perianal abscess, hemorrhoids and other anorectal diseases.

6.
Article in Chinese | WPRIM | ID: wpr-1019608

Abstract

Objective To study the feasibility of deep learning-based automatic contouring of the pelvic intestinal tube on radiotherapy images. Methods A total of 100 patients with a diagnosis of rectal cancer who received radiotherapy in Zhongshan Hospital, Fudan University from 2019 to 2021 were randomly selected. Sixty cases were randomly enrolled to train the models, and the other 40 cases were used for testing. Based on the original small intestine model in the automatic segmentation software AccuContour, 60, 40 and 20 (2 groups) of the model cases were used to train the models Rec60, Rec40, Rec20A and Rec20B, with manual contouring as ground truth. The 40 test cases were used to evaluate the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95) and average symmetric surface distance (ASSD) between the manual contouring and the original model as well as model Rec60. The DSC of the 5 groups of auto-segmentations were compared as well. Paired t tests were performed for each pair of the original model and the 4 trained models. Results The small bowel contoured by the trained models was more similar to the manual contouring; the trained models could better distinguish the boundary of the intestinal tube and distinguish the small bowel from the colon. The average DSC, HD95 and ASSD of Rec60 were 0.16 higher (P<0.001), 12.4 lower (P<0.001) and 5.14 lower (P<0.001) than those of the original model, respectively. According to the paired t tests, there were no statistical differences in DSC between the 4 trained models and the original model. No statistical difference was observed between Rec60 and Rec40, while both were significantly different from the two Rec20 models. There was no statistical difference between Rec20A and Rec20B. Conclusion For radiotherapy images, model training can effectively improve the accuracy of intestinal tube delineation. Forty cases are enough for training an optimal automatic segmentation model for the pelvic intestinal tube in the AccuContour software.
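
The Dice similarity coefficient used to score the auto-contours against manual ground truth can be computed as in this small sketch; the masks are synthetic, and HD95 and ASSD would additionally require surface-distance computations (e.g. via SciPy distance transforms).

```python
# Minimal sketch of the Dice similarity coefficient between an automatic contour and
# the manual ground truth; both masks are hypothetical boolean arrays of the same shape.
import numpy as np

def dice(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """2*|A intersect B| / (|A| + |B|), returning 1.0 when both masks are empty."""
    a, b = auto_mask.astype(bool), manual_mask.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

auto = np.zeros((64, 64, 64), bool); auto[20:40, 20:40, 20:40] = True
manual = np.zeros((64, 64, 64), bool); manual[22:42, 20:40, 20:40] = True
print(f"DSC = {dice(auto, manual):.3f}")
```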

7.
Article in Chinese | WPRIM | ID: wpr-1019896

Abstract

Objective To improve the accuracy of automatic identification of herbal slice images with complex backgrounds. Methods The experiment was conducted on a collected and organized dataset of Tibetan herbal slice images. The RGB, HOG, and LBP features of the slices were analyzed. An improved HOG algorithm was used to fuse multiple features, and a deep learning network was utilized for image recognition. Results The proposed method of multi-feature fusion combined with deep learning achieved an identification accuracy of 91.68% on 3610 Tibetan herbal slice images with complex backgrounds. Furthermore, the average identification accuracy for 20 common traditional Chinese medicine slices, such as Chuan Beimu, Hawthorn, and Pinellia, reached 98.00%. This method outperformed existing methods for identifying herbal slices in complex backgrounds, indicating its feasibility and wide applicability for the identification of other traditional Chinese herbal medicines. Conclusion The fusion of multiple features effectively captures the distinguishing characteristics of herbal slices in complex backgrounds. The method exhibits high recognition rates for Tibetan herbal slices with complex and heavily occluded backgrounds, and can be applied to the recognition of traditional Chinese herbal medicines and herbal slices in natural scenes. This approach shows promising prospects for practical applications.
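
A plausible reading of the feature-extraction step is sketched below: HOG and LBP descriptors are extracted from one slice image and concatenated into a single feature vector; the file name and parameter choices are illustrative and do not reflect the paper's improved HOG algorithm.

```python
# Sketch of extracting and concatenating HOG and LBP features from one slice image,
# as a stand-in for multi-feature fusion; path and parameters are hypothetical.
import numpy as np
from skimage import io, color, transform
from skimage.util import img_as_ubyte
from skimage.feature import hog, local_binary_pattern

img = io.imread("tibetan_slice_001.jpg")            # hypothetical image path
gray = transform.resize(color.rgb2gray(img), (128, 128))

hog_vec = hog(gray, orientations=9, pixels_per_cell=(16, 16), cells_per_block=(2, 2))

lbp = local_binary_pattern(img_as_ubyte(gray), P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

fused = np.concatenate([hog_vec, lbp_hist])          # feature vector fed to the classifier
print(fused.shape)
```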

8.
Article in Chinese | WPRIM | ID: wpr-1020046

Abstract

Children's bronchial lumens are relatively narrow, their pulmonary interstitial tissue is better developed than their elastic tissue, and their ciliary clearance is weak, which makes children more prone to pulmonary infection and pneumonia. The development of artificial intelligence (AI) and its application in medicine are changing traditional disease diagnosis, assessment and treatment. AI with deep learning at its core is increasingly used in the diagnosis and prognosis evaluation of pneumonia in children, which is conducive to early diagnosis and accurate assessment of the disease. Apart from novel coronavirus pneumonia and acute respiratory distress syndrome, researchers have rarely paid attention to other viral pneumonias, bacterial pneumonia, mycoplasmal pneumonia, and fungal pneumonia. Meanwhile, there are still problems, such as small datasets, small sample sizes, incomplete algorithms, and little attention paid to pneumonia types and subtypes. In the future, a large-sample dataset of children's pulmonary infections should be established, and learning about AI should be promoted among medical students and medical staff, so as to explore the value of AI in children's pulmonary infection and leverage its auxiliary role in clinical decision-making related to diagnosis and treatment.

9.
Journal of Practical Radiology ; (12): 135-139, 2024.
Article in Chinese | WPRIM | ID: wpr-1020175

Abstract

Objective To analyze the effects of deep learning image reconstruction (DLIR) and adaptive statistical iterative reconstruction V (ASIR-V) on the image quality of chest CT in patients with pulmonary nodules, and to evaluate the differences among image reconstruction techniques in the detection efficiency of computer-aided diagnosis (CAD) for pulmonary nodules. Methods The chest CT screening data of eighty patients with pulmonary nodules were reconstructed with ASIR-V 80%, DLIR-low (DLIR-L), DLIR-medium (DLIR-M) and DLIR-high (DLIR-H), respectively. The objective and subjective image quality of the four groups were compared and analyzed. Objective image quality included the CT value of the region of interest (ROI), noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and image average gradient. The diagnostic efficacy of CAD in detecting pulmonary nodules on the reconstructed images of the four groups was further evaluated. Results There was no significant difference in the CT value of the ROI among the four groups of reconstructed images (P>0.05). The noise, SNR and CNR of DLIR-H images were similar to those of ASIR-V 80% (P>0.05), but significantly better than those of DLIR-L and DLIR-M (P<0.05). The average gradients of DLIR-L, DLIR-M and DLIR-H images were significantly higher than that of ASIR-V 80% (P<0.05). The subjective image quality scores of DLIR-L, DLIR-M and DLIR-H images were significantly higher than those of ASIR-V 80% (P<0.05), and the subjective image quality score of DLIR-H images was the highest. CAD showed the highest true positive rate in detecting pulmonary nodules on DLIR-H images (P<0.05), and the highest false positives per patient on ASIR-V 80% images (P<0.05). Conclusion The noise, SNR and CNR of DLIR-H images are similar to those of ASIR-V 80%, with significantly higher image clarity and subjective image quality scores. DLIR-H has advantages in CAD detection of pulmonary nodules, making it an ideal image reconstruction technology for chest CT pulmonary nodule screening.
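
The objective quality metrics listed above (noise, SNR and CNR from ROI statistics) can be computed as in this small sketch; the image and ROI coordinates are placeholders.

```python
# Sketch of noise, SNR and CNR from ROI statistics on a CT slice; ROIs are hypothetical.
import numpy as np

def roi_stats(image: np.ndarray, sl: tuple) -> tuple:
    roi = image[sl]
    return float(roi.mean()), float(roi.std())

ct = np.random.default_rng(0).normal(40, 12, (512, 512))             # placeholder HU image

mean_lesion, _ = roi_stats(ct, (slice(200, 220), slice(200, 220)))   # nodule ROI
mean_bg, noise = roi_stats(ct, (slice(300, 340), slice(300, 340)))   # background ROI

snr = mean_lesion / noise
cnr = abs(mean_lesion - mean_bg) / noise
print(f"noise={noise:.1f} HU  SNR={snr:.2f}  CNR={cnr:.2f}")
```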

10.
Journal of Practical Radiology ; (12): 140-144, 2024.
Article in Chinese | WPRIM | ID: wpr-1020176

Abstract

Objective To explore the feasibility of automatically measuring the abduction angle after total hip arthroplasty (THA) on postoperative radiographs by using deep learning algorithms. Methods The data were retrospectively collected. A total of 381 cases were used to develop the deep learning model. Two radiologists annotated the key points on the images (the lateral-superior and medial-inferior points of the acetabular cups, and the teardrops). The data were split into a training dataset (304 cases), a tuning dataset (38 cases), and a test dataset (39 cases). A 2D U-Net model was trained to segment the key points, and the abduction angle was measured automatically. After development of the model, an external validation dataset was collected (143 cases). The Dice similarity coefficient (DSC) and mean absolute error (MAE) were used to evaluate the prediction efficiency of the model in the test dataset and the external validation dataset. The Bland-Altman test was used to analyze the agreement between the abduction angle measured automatically by the model and the physician measurement. Results The DSC were 0.870-0.905 and 0.690-0.750 in the test dataset and the external validation dataset, and the corresponding MAE were 0.311-0.561 and 0.951-1.310. In the Bland-Altman analysis, only 6.52% (3/46) and 2.08% (3/144) of the abduction angle measurements in the test dataset and external validation dataset were outside the 95% limits of agreement (LoA). In the qualitative evaluation of the abduction angle, the agreement of the model with the physician was 97.8% and 90.3% in the test dataset and the external validation dataset. Conclusion It is feasible to use deep learning algorithms to automatically measure the abduction angle after THA on X-ray images, achieving accuracy similar to that of physicians.
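
Once the four landmarks are segmented, the cup abduction angle is commonly measured as the angle between the cup axis (lateral-superior to medial-inferior point) and the inter-teardrop line; assuming that convention here, a minimal sketch with hypothetical pixel coordinates follows (the study's U-Net and post-processing are not reproduced).

```python
# Sketch of deriving the abduction angle from the four landmarks the model segments
# (cup lateral-superior and medial-inferior points, left and right teardrops).
import numpy as np

def abduction_angle(cup_lat_sup, cup_med_inf, teardrop_l, teardrop_r) -> float:
    cup_vec = np.asarray(cup_lat_sup, float) - np.asarray(cup_med_inf, float)
    ref_vec = np.asarray(teardrop_r, float) - np.asarray(teardrop_l, float)
    cos = abs(np.dot(cup_vec, ref_vec)) / (np.linalg.norm(cup_vec) * np.linalg.norm(ref_vec))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# hypothetical pixel coordinates (x, y) on one radiograph
print(abduction_angle((620, 410), (540, 470), (300, 520), (700, 520)))   # ~36.9 degrees
```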

11.
Journal of Practical Radiology ; (12): 145-150, 2024.
Article in Chinese | WPRIM | ID: wpr-1020177

Abstract

Objective To analyze the feasibility and efficacy of a deep convolutional neural network (DCNN) model based on chest CT images for evaluating bone mineral density (BMD). Methods A total of 2 096 central-level images of the lumbar 1 and 2 vertebral bodies from 1 048 health check subjects were used for the experiments and analysis in this retrospective study. According to the results of quantitative computed tomography (QCT) BMD measurement, the subjects were divided into three categories: normal, osteopenia, and osteoporosis (OP). A DCNN segmentation model was constructed based on chest CT images [training set (n=1 096), tuning set (n=200), and test set (n=800)], and the segmentation performance was evaluated using the Dice similarity coefficient (DSC) to assess the consistency with the manually sketched vertebral body region. Then, DCNN classification model 1 (fused features of the lumbar 1 and 2 vertebral bodies) and model 2 (image features of lumbar 1 alone) were developed based on the training set (n=530). Model performance was compared in a test set (n=418) by receiver operating characteristic (ROC) curve analysis. Results When the training set (n=300) was adopted, the DSC value was 0.950 in the test set. The sensitivity, specificity and area under the curve (AUC) of model 1 and model 2 in diagnosing osteopenia and OP were 0.716, 0.960, 0.952; 0.941, 0.948, 0.980; 0.638, 0.954, 0.940; and 0.843, 0.959, 0.978, respectively. The AUC value of model 1 for the normal category was higher than that of model 2 (0.990 vs 0.983, P=0.033), while there was no significant difference in AUC values for osteopenia and OP (P=0.210, 0.546). Conclusion A DCNN may have the potential to evaluate bone mass based on chest CT images, and is expected to become an effective tool for OP screening.

12.
Journal of Practical Radiology ; (12): 572-576, 2024.
Article in Chinese | WPRIM | ID: wpr-1020257

Abstract

Objective To develop and validate a deep learning model for automatic identification of liver CT contrast-enhanced phases. Methods A total of 766 patients with liver contrast-enhanced CT images were retrospectively collected. A three-phase classification model and an arterial phase (AP) classification model were developed to automatically identify the liver CT contrast-enhanced phases as early arterial phase (EAP) or late arterial phase (LAP), portal venous phase (PVP), and equilibrium phase (EP). In addition, contrast-enhanced liver CT images of 221 patients from 5 different hospitals were used for external validation. The annotation results of radiologists were used as the reference standard to evaluate model performance. Results In the external validation datasets, the accuracy in identifying each enhanced phase reached 90.50%-99.70%. Conclusion The automatic identification model of liver CT contrast-enhanced phases based on a residual network may provide an efficient, objective, and unified image quality control tool.

13.
Article in Chinese | WPRIM | ID: wpr-1020714

Abstract

Objective To explore the feasibility and value of deep learning technology for quality control of echocardiography images. Methods A total of 180 985 echocardiography images collected from Sichuan Provincial People's Hospital between 2015 and 2022 were selected to establish the experimental dataset. Two task models for the echocardiography standard-view quality assessment method were trained: intelligent recognition of seven types of views (six standard views and other views) and quality scoring of the six standard views. The predictions of the models on the test set were compared with the sonographers' annotations to assess the accuracy, feasibility, and timeliness of the two models. Results The overall classification accuracy of the standard-view recognition model was 98.90%, the precision was 98.17%, the recall was 98.18% and the F1 value was 98.17%, with classification results close to the expert recognition level. The average PLCC of the six standard-view quality scoring models was 0.933, the average SROCC was 0.929, the average RMSE was 7.95 and the average MAE was 4.83, and the prediction results were in strong agreement with the expert scores. The single-frame inference time after deployment on a 3090 GPU was less than 20 ms, meeting real-time requirements. Conclusion The echocardiography standard-view quality assessment method can provide objective and accurate quality assessment results, promoting the development of echocardiography image quality control management toward real-time, objective, and intelligent operation.
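
The agreement metrics quoted for the quality-scoring model (PLCC, SROCC, RMSE, MAE) can be computed with SciPy and NumPy as below; the expert and model scores are hypothetical.

```python
# Sketch of the agreement metrics between model quality scores and expert scores.
import numpy as np
from scipy.stats import pearsonr, spearmanr

expert = np.array([92, 85, 78, 60, 88, 95, 70, 81], float)   # hypothetical expert scores
model = np.array([90, 83, 75, 66, 85, 93, 74, 79], float)    # hypothetical model scores

plcc, _ = pearsonr(model, expert)
srocc, _ = spearmanr(model, expert)
rmse = np.sqrt(np.mean((model - expert) ** 2))
mae = np.mean(np.abs(model - expert))
print(f"PLCC={plcc:.3f}  SROCC={srocc:.3f}  RMSE={rmse:.2f}  MAE={mae:.2f}")
```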

14.
The Journal of Practical Medicine ; (24): 893-897,903, 2024.
Article in Chinese | WPRIM | ID: wpr-1020846

Abstract

Deep learning is a machine learning method in the field of artificial intelligence that simulates the workings of the human brain's neural networks to solve complex problems. It has been used in many important studies and applications in the field of medicine, such as diagnostic imaging, biomedical data processing, drug research and development, and personalized medicine, improving the accuracy and efficiency of diagnosis and treatment. In the field of assisted reproduction, deep learning can efficiently identify well-developed embryos, suitable oocytes, or sperm during the intervention process, assisting medical staff in making more accurate choices to improve pregnancy rates and reduce the risk of multiple pregnancies. This paper summarizes the latest advances in the application of deep learning in assisted reproduction technology over the past 5 years, and provides an outlook for future research.

15.
Article in Chinese | WPRIM | ID: wpr-1021155

Abstract

Objective To analyze the relationship of the volumes of 87 brain regions with postnatal age and neurobehavior in full-term neonates. Methods A total of 75 full-term newborns [gestational age (39.38±1.22) weeks; male/female (51/24); postnatal age (11.11±6.67) days] without abnormalities on brain MRI (three-dimensional T1-weighted imaging, 3D T1WI) at our hospital between November 2010 and September 2017 were retrospectively included. Based on an 87-region brain template, the neonatal brains were divided into 87 regions and their volumes were calculated using the V-shape Bottleneck network (VB-Net) deep learning segmentation technique. Pearson partial correlation and regression analysis were used to explore the relationship of the volume of each brain region with postnatal age and neurobehavioral scores. Results After adjusting for gestational age, birth weight, head circumference, body length and sex, 66.7% of the regional brain volumes (58/87 brain regions) significantly increased with postnatal age (correlation coefficient r: 0.2-0.7, P<0.05). The volumes of gray matter in the bilateral lentiform nucleus, left caudate nucleus, right occipital lobe, right inferior temporal lobe, and bilateral anterior temporal lobe strongly correlated with postnatal age (r>0.50, P<0.05). The gray matter volume of the right occipital lobe increased linearly with age (slope: 100.67) and was positively correlated with behavioral scores (r=0.324, P<0.01). Conclusion Most regional brain volumes increase with postnatal age during the neonatal period, and the fastest growth occurs in primary sensorimotor-related brain regions, presenting spatial heterogeneity. Some brain regions grow with the development of behavioral ability.
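
A Pearson partial correlation of the kind reported here (regional volume versus postnatal age while controlling for gestational age, birth weight, head circumference, body length and sex) can be obtained by correlating regression residuals; the sketch below uses synthetic data and is not the study's code.

```python
# Sketch of a partial correlation via residualization: regress both variables on the
# covariates, then correlate the residuals. All arrays are synthetic placeholders.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 75
covars = rng.normal(size=(n, 5))     # gestational age, birth weight, head circumference, length, sex
postnatal_age = rng.uniform(1, 28, n)
volume = 500 + 8 * postnatal_age + covars @ rng.normal(size=5) + rng.normal(0, 20, n)

res_age = postnatal_age - LinearRegression().fit(covars, postnatal_age).predict(covars)
res_vol = volume - LinearRegression().fit(covars, volume).predict(covars)

r, p = pearsonr(res_age, res_vol)
print(f"partial r = {r:.2f}, P = {p:.3g}")
```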

16.
Article in Chinese | WPRIM | ID: wpr-1021410

Abstract

BACKGROUND: Based on different machine learning algorithms, how to carry out clinical research on lumbar disc herniation with the help of various algorithmic models has become a trend and hot spot in the current development of intelligent medicine. OBJECTIVE: To review the characteristics of different machine learning algorithmic models in the diagnosis and treatment of lumbar disc herniation, and to summarize the respective advantages and application strategies of algorithmic models used for the same purpose. METHODS: PubMed, Web of Science, EMBASE, CNKI, WanFang, VIP and China Biomedical (CBM) databases were searched for relevant articles on machine learning in the diagnosis and treatment of lumbar disc herniation. Finally, 96 articles were included for analysis. RESULTS AND CONCLUSION: (1) Different machine learning algorithm models provide intelligent and accurate application strategies for the clinical diagnosis and treatment of lumbar disc herniation. (2) Traditional statistical methods and decision trees in supervised learning are simple and efficient in exploring risk factors and establishing diagnostic and prognostic models. The support vector machine is suitable for small data sets with high-dimensional features; as a nonlinear classifier, it can be applied to the recognition, segmentation and classification of normal or degenerative intervertebral discs, and to establishing diagnostic and prognostic models. Ensemble learning can make up for the shortcomings of a single model; it can handle high-dimensional data and improve the precision and accuracy of clinical prediction models. Artificial neural networks improve the learning ability of the model and can be applied to intervertebral disc recognition, classification and building clinical prediction models. On the basis of the above uses, deep learning can also optimize images and assist surgical operations; it is the most widely used model with the best performance in the diagnosis and treatment of lumbar disc herniation. Clustering algorithms in unsupervised learning are mainly used for disc segmentation and classification of different herniated segments, whereas the clinical application of semi-supervised learning is relatively limited. (3) At present, machine learning has certain clinical advantages in the identification and segmentation of lumbar intervertebral discs, classification and grading of degenerative intervertebral discs, automatic clinical diagnosis and classification, construction of clinical predictive models, and surgical assistance. (4) In recent years, the research strategy of machine learning has shifted to neural networks and deep learning, and deep learning algorithms with stronger learning ability will be the key to realizing intelligent medical treatment in the future.

17.
Article in Chinese | WPRIM | ID: wpr-1022014

Abstract

BACKGROUND: MRI is important for the diagnosis of early knee osteoarthritis. MRI image recognition and intelligent segmentation of knee osteoarthritis using deep learning methods is a hot topic in artificial intelligence image diagnosis. OBJECTIVE: Through deep learning of MRI images of knee osteoarthritis, to automatically segment the femur, tibia, patella, cartilage, meniscus, ligaments, muscles and effusion of the knee, and then measure the volume of knee effusion and the muscle content. METHODS: 100 normal knee joints and 100 knee osteoarthritis patients were selected and randomly divided into a training dataset (n=160), a validation dataset (n=20), and a test dataset (n=20) according to the ratio of 8:1:1. The coarse-to-fine sequential training method was used to train the 3D-UNET network deep learning model. A coarse MRI segmentation model of the knee sagittal plane was trained first, the rough segmentation results were used as a mask, and then the fine segmentation model was trained. The T1WI and T2WI images of the sagittal plane of the knee joint and the label files of each structure were input, and DeepLab v3 was used to segment the bone, cartilage, ligament, meniscus, muscle, and effusion of the knee; 3D reconstruction and the automatic measurement results (muscle content and volume of knee effusion) were finally displayed to complete the deep learning application program. The MRI data of 26 normal subjects and 38 patients with knee osteoarthritis were screened for validation. RESULTS AND CONCLUSION: (1) The 26 normal subjects included 13 females and 13 males, with a mean age of (34.88±11.75) years. The mean muscle content of the knee joint was (1 051 322.94±2 007 249.00) mL with a median of 631 165.21 mL, and the mean volume of effusion was (291.85±559.59) mL with a median of 0 mL. (2) There were 38 patients with knee osteoarthritis, including 30 females and 8 males, with a mean age of (68.53±9.87) years. The mean muscle content was (782 409.18±331 392.56) mL with a median of 689 105.66 mL, and the mean volume of effusion was (1 625.23±5 014.03) mL with a median of 178.72 mL. (3) There was no significant difference in muscle content between normal subjects and knee osteoarthritis patients. The volume of effusion in patients with knee osteoarthritis was higher than that in normal subjects, and the difference was significant (P<0.05). (4) These findings indicate that intelligent segmentation of MRI images by deep learning can overcome the defects of past manual segmentation. More accurate evaluation of knee osteoarthritis is needed, and image segmentation should be processed more precisely in the future to improve the accuracy of the results.
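
The reported volumes follow directly from the segmentation masks: counting voxels with a given label and multiplying by the voxel volume from the image spacing, as in this hypothetical sketch (label values and spacing are placeholders, not the study's settings).

```python
# Sketch of reading a structure's volume (e.g. joint effusion) off a labelled segmentation.
import numpy as np

def label_volume_ml(seg: np.ndarray, label: int, spacing_mm: tuple) -> float:
    voxel_mm3 = float(np.prod(spacing_mm))
    return seg[seg == label].size * voxel_mm3 / 1000.0     # mm^3 -> mL

seg = np.zeros((20, 256, 256), np.uint8)
seg[8:12, 100:140, 100:140] = 7                            # pretend label 7 = effusion
print(f"effusion volume ~ {label_volume_ml(seg, 7, (3.0, 0.5, 0.5)):.1f} mL")
```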

18.
Modern Hospital ; (6): 93-98, 2024.
Article in Chinese | WPRIM | ID: wpr-1022208

Abstract

Objective With a focus on emerging infectious diseases and diseases of unknown cause, this study aims to realize multi-point trigger monitoring of infectious diseases through key monitoring sites and key populations. Methods Artificial intelligence, deep learning, big data and other information technologies were used to build an intelligent infectious disease information center with patients' disease files as the core, construct core capacities for infectious disease surveillance, early warning and situation prediction, and predict and evaluate the importance of infectious disease warning signals. Results The system covered 1 425 primary-level medical institutions, 18 hospitals, 2 580+ schools, 4 134 pharmacies, 4 laboratories and the civil affairs departments, and detected 55 kinds of infectious diseases and 6 kinds of syndrome monitoring signals. Since its launch, 121 000 active notification cards have been issued, more than 54 000 new notification cards have been added, and 35.256 million multi-source monitoring events and 14.4 million disease files have been recorded. Conclusion By expanding monitoring content and channels, we realized early monitoring, auxiliary investigation and multi-mode visual early warning of infectious diseases, built a multi-point trigger mechanism, and moved infectious disease surveillance forward.

19.
Article in Chinese | WPRIM | ID: wpr-1022504

Abstract

Primary liver cancer (hereinafter referred to as liver cancer) is one of the most common and deadly malignancies, posing a serious threat to human health. In recent years, advancements in artificial intelligence (AI) have opened up possibilities for the comprehensive enhancement of liver cancer diagnosis and treatment. AI technologies in liver cancer mainly include machine learning (ML) and deep learning (DL) models, with DL being a subtype of ML based on neural network structures. The application of ML and DL models in liver cancer has demonstrated tremendous potential, but there are still many issues that need to be addressed, including enhancing the generalizability and interpretability of results. The authors elaborate on the application progress of AI in the field of liver cancer in recent years, and explore the current challenges and future exploration directions.

20.
Article in Chinese | WPRIM | ID: wpr-1026182

Abstract

Video-based intelligent action recognition remains challenging in the field of computer vision. This review analyzes the state-of-the-art methods of video-based intelligent action recognition, including machine learning methods with handcrafted features, deep learning methods with automatically extracted features, and multi-information fusion methods. In addition, the important medical applications and limitations of these technologies over the past decade are introduced, and interdisciplinary views on future applications to improve human health are also shared.
