Results 1 - 20 of 497
1.
Rev. colomb. cir ; 39(5): 691-701, September 16, 2024. fig
Article in Spanish | LILACS | ID: biblio-1571841

Abstract

Introduction. The comprehensive training of residents goes beyond theoretical knowledge and operative technique. Faced with the complexity, uncertainty, and dynamism of modern surgery, it is necessary to redefine the understanding of surgical education and to promote adaptive capabilities in future surgeons so that they can manage this environment effectively. These aspects refer to adaptive expertise. Methods. This narrative review proposes a definition of surgical education with an emphasis on adaptive expertise, and an approach for its adoption in practice. Results. Based on the available literature, surgical education represents a dynamic process situated at the intersection of the complexity of surgical culture, workplace learning, and quality in health care, aimed at developing cognitive, manual, and adaptive capacities in the future surgeon that allow them to provide high-value care in a collective work system while strengthening their professional identity. A resident's adaptive expertise is a fundamental capacity for maximizing his or her performance in the face of these characteristics of surgical education. Six strategies for strengthening this capacity are found in the available literature. Conclusion. Adaptive expertise is an expected and necessary capacity in the surgical resident to deal with the complexity of surgical education. There are practical strategies that can help strengthen it, which should be evaluated in future studies.


Subject(s)
Humans , Education, Medical, Graduate , Deep Learning , Professional Competence , General Surgery , Vocational Education , Metacognition
2.
Int. j. morphol ; 42(3): 826-832, Jun. 2024. ilus, tab
Article in English | LILACS | ID: biblio-1564601

Abstract

SUMMARY: The study aims to demonstrate the success of deep learning methods in sex prediction using the hyoid bone. Neck computed tomography (CT) images of people aged 15-94 years were retrospectively reviewed. The neck CT images were cleaned using the RadiAnt DICOM Viewer (version 2023.1) program, leaving only the hyoid bone. A total of 7 images in the anterior, posterior, superior, inferior, right, left, and right-anterior-upward directions were obtained from each patient's isolated hyoid bone image. 2,170 images were obtained from 310 hyoid bones of males, and 1,820 images from 260 hyoid bones of females. The 3,990 images were expanded to 5,000 images by data augmentation. The dataset was divided into 80% for training, 10% for testing, and 10% for validation. It was evaluated with the deep learning models DenseNet121, ResNet152, and VGG19. An accuracy rate of 87% was achieved with the ResNet152 model and 80.2% with the VGG19 model. The highest rate among the compared models was 89% with the DenseNet121 model. This model had a specificity of 0.87, a sensitivity of 0.90, and an F1 score of 0.89 for women, and a specificity of 0.90, a sensitivity of 0.87, and an F1 score of 0.88 for men. It was observed that sex could be predicted from the hyoid bone using the deep learning methods DenseNet121, ResNet152, and VGG19. Thus, a method that had not been tried on this bone before was used. This study also brings us one step closer to strengthening and perfecting the use of technologies that will reduce the subjectivity of the methods and support the expert in the decision-making process of sex prediction.
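For orientation only, a minimal PyTorch sketch of the kind of transfer-learning setup the abstract describes (DenseNet121 fine-tuned for binary sex classification with an 80/10/10 split); the folder layout, image size, and training hyperparameters are assumptions, not the authors' settings.

# Minimal sketch: DenseNet121 fine-tuned for binary sex classification from hyoid-bone images.
# Paths, image size, and hyperparameters are illustrative assumptions, not the study's settings.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

full = datasets.ImageFolder("hyoid_images/", transform=tfm)   # folders female/, male/ (hypothetical layout)
n = len(full)
n_train, n_test = int(0.8 * n), int(0.1 * n)
train_ds, test_ds, val_ds = random_split(full, [n_train, n_test, n - n_train - n_test])

model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, 2)  # two classes: female, male

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):                                        # epoch count is arbitrary here
    model.train()
    for x, y in DataLoader(train_ds, batch_size=16, shuffle=True):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# simple accuracy check on the held-out test split
model.eval()
correct = 0
with torch.no_grad():
    for x, y in DataLoader(test_ds, batch_size=16):
        correct += (model(x).argmax(1) == y).sum().item()
print(f"test accuracy: {correct / len(test_ds):.2%}")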


Subject(s)
Humans , Male , Female , Adolescent , Adult , Middle Aged , Aged , Aged, 80 and over , Young Adult , Tomography, X-Ray Computed , Sex Determination by Skeleton , Deep Learning , Hyoid Bone/diagnostic imaging , Predictive Value of Tests , Sensitivity and Specificity , Hyoid Bone/anatomy & histology
3.
Article in Chinese | WPRIM | ID: wpr-1006526

Abstract

Lung adenocarcinoma is a prevalent histological subtype of non-small cell lung cancer with diverse morphologic and molecular features that are critical for prognosis and treatment planning. In recent years, with the development of artificial intelligence technology, its application in the study of pathological subtypes and gene expression of lung adenocarcinoma has gained widespread attention. This paper reviews the research progress of machine learning and deep learning in pathological subtype classification and gene expression analysis of lung adenocarcinoma, summarizes current problems and challenges, and looks ahead to future directions for artificial intelligence in lung adenocarcinoma research.

4.
Herald of Medicine ; (12): 661-666, insert 1, 2024.
Article in Chinese | WPRIM | ID: wpr-1023764

Abstract

Objective To develop a YOLO-V5 deep learning model capable of accurately identifying medication packaging boxes in outpatient and emergency pharmacies, aiming to assist pharmacists in achieving "zero dispensing error". Methods A total of 2,560 images of packaging boxes from 136 different drugs were collected and labeled to form the deep learning dataset. The dataset was split into training and validation sets at a ratio of 4:1. The YOLO-V5 deep learning algorithm was employed to train on the dataset (training epochs: 500, batch size: 4, learning rate: 0.01). Precision (Pr) and mean average precision (mAP) were used as measures of model performance. Results The Pr of the four sub-models of YOLO-V5 on the training set all reached 1.00. The mAP_0.5 of YOLO-V5x was 0.95, higher than those of YOLO-V5s (0.94), YOLO-V5l (0.94), and YOLO-V5m (0.94). The mAP_0.5:0.95 of YOLO-V5l and YOLO-V5x was 0.85, higher than those of YOLO-V5s (0.84) and YOLO-V5m (0.84). Training time and model size were 82.56 hours and 166.00 MB for YOLO-V5x, the highest among the four models. The detection speed for one image was 11 ms for YOLO-V5s, the fastest among the four models. Conclusion YOLO-V5 can accurately identify drug packaging in outpatient and emergency pharmacies. Implementing an artificial-intelligence-assisted drug dispensing system is feasible for pharmacists to achieve "zero dispensing error".
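As a rough illustration of the reported detection metrics, the following sketch computes box IoU and a simplified per-image precision at the 0.5 IoU threshold underlying mAP_0.5; it omits confidence ranking and per-class averaging, and the box coordinates are hypothetical.

# Sketch of the IoU matching behind precision / mAP_0.5; greatly simplified
# (no confidence ranking, no per-class averaging). Box values are hypothetical.
def iou(a, b):
    # boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def precision_at_05(preds, gts):
    # a prediction counts as a true positive if it overlaps an unmatched
    # ground-truth box with IoU >= 0.5
    matched, tp = set(), 0
    for p in preds:
        for i, g in enumerate(gts):
            if i not in matched and iou(p, g) >= 0.5:
                matched.add(i)
                tp += 1
                break
    return tp / len(preds) if preds else 0.0

print(precision_at_05([(10, 10, 60, 60)], [(12, 8, 58, 62)]))  # ~1.0 for this toy case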

5.
Article in Chinese | WPRIM | ID: wpr-1024456

Abstract

Objective To observe the value of a YOLOX target detection model for automatically identifying endovascular interventional instruments on digital subtraction angiography (DSA) images. Methods DSA data of 37 patients who underwent abdominal endovascular interventional therapy were retrospectively analyzed. A total of 4,435 DSA images were captured and taken as the dataset, which was divided into a training set (n=3,991) and a verification set (n=444) at a ratio of 9:1. Six kinds of endovascular interventional instruments were labeled. The YOLOX algorithm was applied for deep learning on the training set to build a target detection model, and the efficacy of the model for automatically identifying endovascular interventional instruments on DSA images was evaluated on the verification set. Results A total of 6,668 labels were placed on the 4,435 DSA images, covering the Terumo 0.035 in loach guide wire (n=587), Cook Lunderquist super-stiff guide wire (n=990), Optimed 5F graduated pigtail catheter (n=1,680), Cordis MPA multi-functional catheter (n=667), Boston Scientific V-18 controllable guide wire (n=1,330), and Terumo 6F long sheath (n=1,414). The training set contained 527, 875, 1,466, 598, 1,185, and 1,282 of the above labels, respectively, while the verification set contained 60, 115, 214, 69, 145, and 132, respectively. The pixel accuracy of the YOLOX target detection model for automatically identifying the above instruments in the verification set was 95.23%, 97.32%, 99.18%, 98.97%, 97.60%, and 98.19%, respectively, with a mean pixel accuracy of 97.75%. Conclusion The YOLOX target detection model can automatically identify endovascular interventional instruments on DSA images.
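A minimal sketch of the per-class accuracy bookkeeping reported above (correct identifications over verification-set labels for each instrument, then the mean over classes); the counts used here are placeholders, not the study's data.

# Per-class identification accuracy and the mean over classes; counts are placeholders.
from collections import Counter

correct = Counter({"loach_guidewire": 57, "lunderquist": 112, "pigtail_5F": 212,
                   "mpa_catheter": 68, "v18_guidewire": 141, "long_sheath_6F": 130})
total = Counter({"loach_guidewire": 60, "lunderquist": 115, "pigtail_5F": 214,
                 "mpa_catheter": 69, "v18_guidewire": 145, "long_sheath_6F": 132})

per_class = {k: correct[k] / total[k] for k in total}
mean_acc = sum(per_class.values()) / len(per_class)
for k, v in per_class.items():
    print(f"{k}: {v:.2%}")
print(f"mean accuracy: {mean_acc:.2%}")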

6.
Article in Chinese | WPRIM | ID: wpr-1025693

Abstract

Gastrointestinal stromal tumors (GISTs), which have a certain malignant potential, are currently the most common subepithelial tumors of the gastrointestinal tract. Early diagnosis and prediction of malignant potential are very important for formulating a treatment plan and determining the prognosis of GIST. Deep learning technology has made significant progress in the diagnosis of digestive tract diseases, and it can also effectively assist physicians in diagnosing GIST and predicting its malignant potential preoperatively. Applications of deep learning technology in the diagnosis of GIST include CT, gastrointestinal endoscopy, and endoscopic ultrasound. This paper reviews the application of deep learning technology in the diagnosis and prediction of the malignant potential of GIST.

7.
Article in Chinese | WPRIM | ID: wpr-1026182

Abstract

Video-based intelligent action recognition remains challenging in the field of computer vision. This review analyzes the state-of-the-art methods of video-based intelligent action recognition, including machine learning methods with handcrafted features, deep learning methods with automatically extracted features, and multi-information fusion methods. In addition, the important medical applications and limitations of these technologies over the past decade are introduced, and interdisciplinary views on future applications to improve human health are shared.

8.
Article in Chinese | WPRIM | ID: wpr-1026191

Abstract

A non-invasive deep learning method is proposed for reconstructing arterial blood pressure signals from photoplethysmography signals. The method employs U-Net as a feature extractor, and a module referred to as the bidirectional temporal processor is designed to extract time-dependent information on an individual-model basis. The bidirectional temporal processor module utilizes a BiLSTM network to effectively analyze time series data in both the forward and backward directions. Furthermore, a deep supervision approach, which involves training the model to focus on various aspects of the data features, is adopted to enhance the accuracy of the predicted waveforms. The differences between actual and predicted values are 2.89±2.43, 1.55±1.79, and 1.52±1.47 mmHg for systolic blood pressure, diastolic blood pressure, and mean arterial pressure, respectively, suggesting the superiority of the proposed method over existing techniques and demonstrating its application potential.
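A minimal PyTorch sketch of the general idea (a 1D encoder/decoder with a BiLSTM "bidirectional temporal processor" in the bottleneck mapping a PPG window to an ABP waveform); layer sizes and depths are assumptions, and skip connections and deep supervision are omitted, so this is not the authors' architecture.

# Rough sketch: a 1D U-Net-style encoder/decoder with a BiLSTM bottleneck mapping
# a PPG window to an ABP waveform. Layer sizes and depths are illustrative only.
import torch
import torch.nn as nn

class PPG2ABP(nn.Module):
    def __init__(self, ch=32, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(1, ch, 9, padding=4), nn.ReLU(),
            nn.Conv1d(ch, ch, 9, stride=2, padding=4), nn.ReLU(),   # downsample x2
        )
        self.bilstm = nn.LSTM(ch, hidden, batch_first=True, bidirectional=True)
        self.dec = nn.Sequential(
            nn.ConvTranspose1d(2 * hidden, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(ch, 1, 9, padding=4),
        )

    def forward(self, x):                         # x: (batch, 1, time)
        h = self.enc(x)                           # (batch, ch, time/2)
        h, _ = self.bilstm(h.transpose(1, 2))     # (batch, time/2, 2*hidden)
        return self.dec(h.transpose(1, 2))        # (batch, 1, time)

model = PPG2ABP()
abp = model(torch.randn(2, 1, 512))               # fake 512-sample PPG windows
print(abp.shape)                                  # torch.Size([2, 1, 512])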

9.
Article in Chinese | WPRIM | ID: wpr-1026226

Abstract

Objective To propose a novel algorithm model based on YOLOv7 for detecting small lesions in ultrasound images of hepatic cystic echinococcosis. Methods The original feature extraction backbone was replaced with the lightweight feature extraction backbone network GhostNet to reduce the number of model parameters. To address the low detection accuracy observed when the evaluation index CIoU was used as the loss function of YOLOv7, ECIoU was substituted for CIoU, which further improved detection accuracy. Results The model was trained on a self-built dataset of ultrasound images of small hepatic cystic echinococcosis lesions. The results showed that the improved model had a size of 59.4 G and a detection accuracy of 88.1% for mAP@0.5, outperforming the original model and surpassing other mainstream detection methods. Conclusion The proposed model can detect and classify the location and category of lesions in ultrasound images of hepatic cystic echinococcosis more efficiently.
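For context, a sketch of the standard CIoU loss that the abstract takes as its starting point; the ECIoU variant adopted in the study is not reproduced here, and the box values are hypothetical.

# Sketch of the standard CIoU loss (boxes as x1, y1, x2, y2).
# The ECIoU variant used in the study is not reproduced here.
import math

def ciou_loss(p, g, eps=1e-9):
    # overlap
    iw = max(0.0, min(p[2], g[2]) - max(p[0], g[0]))
    ih = max(0.0, min(p[3], g[3]) - max(p[1], g[1]))
    inter = iw * ih
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    iou = inter / (area(p) + area(g) - inter + eps)
    # center distance over the diagonal of the smallest enclosing box
    pcx, pcy = (p[0] + p[2]) / 2, (p[1] + p[3]) / 2
    gcx, gcy = (g[0] + g[2]) / 2, (g[1] + g[3]) / 2
    cw = max(p[2], g[2]) - min(p[0], g[0])
    ch = max(p[3], g[3]) - min(p[1], g[1])
    rho2 = (pcx - gcx) ** 2 + (pcy - gcy) ** 2
    c2 = cw ** 2 + ch ** 2 + eps
    # aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan((g[2] - g[0]) / (g[3] - g[1] + eps))
                              - math.atan((p[2] - p[0]) / (p[3] - p[1] + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return 1 - iou + rho2 / c2 + alpha * v

print(ciou_loss((10, 10, 50, 60), (12, 12, 48, 58)))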

10.
Article in Chinese | WPRIM | ID: wpr-1026323

Abstract

Shoulder pain ranks third among musculoskeletal pain, with a relatively high incidence in the population. Early diagnosis of shoulder diseases is crucial. Deep learning (DL) in shoulder joint imaging is conducive to the clinical diagnosis, treatment, and prognosis evaluation of shoulder diseases. This article reviews the research progress of DL in shoulder joint imaging.

11.
Article in Chinese | WPRIM | ID: wpr-1026361

Abstract

Purpose To explore the value of three-dimensional densely connected convolutional networks (3D-DenseNet) in the MRI-based differential diagnosis of high-grade gliomas (HGGs) and single brain metastases (BMs), and to compare the diagnostic performance of models built with different sequences. Materials and Methods T2WI and contrast-enhanced T1WI (T1C) imaging data of 230 cases of HGGs and 111 cases of BMs confirmed by surgical pathology at Lanzhou University Second Hospital from June 2016 to June 2021 were retrospectively collected, and the volume of interest under the 3D model was delineated in advance as the input data. All data were randomly divided into a training set (n=254) and a validation set (n=87) at a ratio of 7:3. Based on 3D-DenseNet, T2WI, T1C, and two-sequence fusion prediction models (T2-net, T1C-net, and TS-net) were constructed. The predictive efficiency of each model was evaluated and compared using the receiver operating characteristic curve, and the predictive performance of models built with different sequences was compared. Results The area under the curve (AUC) values of T1C-net, T2-net, and TS-net in the training and validation sets were 0.852, 0.853, 0.802, 0.721, 0.856, and 0.745, respectively. The AUC and accuracy of the validation set of T1C-net were significantly higher than those of T2-net and TS-net, and the AUC and accuracy of the validation set of TS-net were significantly higher than those of T2-net. There was a significant difference between the T1C-net and T2-net models (P<0.05), while there were no statistical differences between TS-net and T2-net, or between T1C-net and TS-net (P>0.05). The T1C-net model based on 3D-DenseNet had the best performance: the accuracy of the validation set was 80.5%, the sensitivity was 90.9%, and the specificity was 62.5%. Conclusion The 3D-DenseNet model based on conventional MRI sequences has good diagnostic performance, and the model built on the T1C sequence performs best in differentiating HGGs from BMs. Deep learning models can be a potential tool to identify HGGs and BMs and to guide the clinical formulation of precise treatment plans.
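A minimal sketch of the ROC/AUC comparison used to rank the sequence-specific models, using scikit-learn; the labels and scores below are synthetic stand-ins for the validation set.

# ROC/AUC comparison across several models; labels and scores are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=87)                  # 0 = BM, 1 = HGG (synthetic)
scores = {
    "T1C-net": y_true * 0.5 + rng.random(87) * 0.6,   # fake model outputs
    "T2-net":  y_true * 0.3 + rng.random(87) * 0.8,
    "TS-net":  y_true * 0.4 + rng.random(87) * 0.7,
}

for name, s in scores.items():
    auc = roc_auc_score(y_true, s)
    fpr, tpr, _ = roc_curve(y_true, s)
    print(f"{name}: AUC = {auc:.3f} ({len(fpr)} ROC points)")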

12.
Article in Chinese | WPRIM | ID: wpr-1026369

Abstract

Purpose To explore the feasibility of deep learning-based segmentation of extra-pelvic regions and metastases in advanced prostate cancer based on the metastasis reporting and data system for prostate cancer (MET-RADS-P). Materials and Methods Four datasets (68, 91, 57, and 263 patients with head, neck, chest, and abdomen metastases, respectively) from January 2017 to January 2022 at Peking University First Hospital were retrospectively collected for the development of the classification model of scanning range and the segmentation models of different regions and metastases according to the scanning sites (head, neck, chest, and abdomen). In addition, 90 patients with pathologically confirmed prostate cancer who underwent whole-body MRI were collected for external validation of the developed model. Manual annotation of the regions and metastases was used as the reference standard for model evaluation. The evaluation indexes included the dice similarity coefficient (DSC) and volumetric similarity (VS). Results In the external validation set, the classification accuracies for head, neck, chest, and abdomen were 100% (90/90), 98.89% (89/90), 96.67% (87/90), and 94.44% (85/90), respectively. The DSC and VS values of the segmentation model for organs in different regions ranged from 0.86±0.10 to 0.99±0.01 and from 0.89±0.10 to 0.99±0.01, respectively. The DSC and VS values of the segmentation model for metastases in different regions ranged from 0.65±0.07 to 0.72±0.13 and from 0.74±0.04 to 0.82±0.13, respectively. Conclusion The 3D U-Net model based on deep learning may achieve the segmentation of extra-pelvic regions and metastases in advanced prostate cancer.
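A short sketch of the two reported segmentation metrics on binary masks, the Dice similarity coefficient (DSC) and volumetric similarity (VS); the toy masks are illustrative.

# Dice similarity coefficient (DSC) and volumetric similarity (VS) on binary masks.
import numpy as np

def dsc(pred, ref):
    inter = np.logical_and(pred, ref).sum()
    return 2 * inter / (pred.sum() + ref.sum() + 1e-9)

def volumetric_similarity(pred, ref):
    vp, vr = pred.sum(), ref.sum()
    return 1 - abs(vp - vr) / (vp + vr + 1e-9)

pred = np.zeros((64, 64, 64), bool); pred[20:40, 20:40, 20:40] = True   # toy masks
ref  = np.zeros((64, 64, 64), bool); ref[22:42, 20:40, 20:40] = True
print(f"DSC = {dsc(pred, ref):.3f}, VS = {volumetric_similarity(pred, ref):.3f}")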

13.
Article in Chinese | WPRIM | ID: wpr-1026373

Abstract

Purpose To evaluate the image quality improvement of deep learning iterative reconstruction (DLIR) on pediatric head CT images of head injury, and to compare the performance of DLIR and conventional adaptive statistical iterative reconstruction-Veo (ASIR-V) with respect to noise and image texture in children's head trauma CT. Materials and Methods A total of 80 cases of children's head low-dose CT performed at Beijing Children's Hospital, Capital Medical University from December 7th to 11th, 2020 were retrospectively selected. The scan voltage was 120 kV and the scan current was 150-220 mA. The raw data were reconstructed into 5 mm thick-slice and 0.625 mm thin-slice brain window and bone window images. 50% ASIR-V and high-weight DLIR (DL-H) images were reconstructed, respectively. A 4-point system was used to subjectively evaluate the display of the sulci, brain matter, and bone, and the number of lesions in each group was counted. The CT values and image noise values of gray matter and white matter were measured, the contrast-to-noise ratio was calculated, and the blur metric index was measured in the same slice. The differences between the two reconstruction methods were compared. Results Compared with 50% ASIR-V images, DL-H significantly improved the display of the sulci and ventricles as well as the brain parenchyma (W=5.5-22.2, all P<0.05) at both slice thicknesses. There was no statistically significant difference in the display of the sulci and ventricles between 5 mm 50% ASIR-V and 0.625 mm DL-H images (W=0.9, 2.0; P=0.32, 0.05, respectively). In terms of bone display, all images achieved the maximum score of 4.0. A total of 35 lesions were found in the 80 patients on 5 mm 50% ASIR-V and DL-H images, including 12 hemorrhagic lesions, 1 intracranial gas, 9 fractures, and 13 soft tissue swellings. In terms of objective evaluation, the noise level of DL-H images was significantly lower than that of 50% ASIR-V images (t=21.4-35.7, all P<0.05), and there was no statistically significant difference in noise or contrast-to-noise ratio between 5 mm 50% ASIR-V and 0.625 mm DL-H images (t=1.7-2.2, all P≥0.05). The blur metric index showed that DL-H was superior to 50% ASIR-V images (t=6.1, 10.0, both P<0.05), and there was no statistically significant difference in the blur metric index between 0.625 mm DL-H and 5 mm 50% ASIR-V images (t=2.6, P=0.28). Conclusion DLIR can improve the CT image quality and image texture in children's head trauma. The 0.625 mm DL-H image quality is close to that of the 5 mm 50% ASIR-V image, which can meet diagnostic requirements and makes it possible to further reduce the radiation dose.
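A small sketch of the objective measurements mentioned (ROI mean CT value, noise taken as the ROI standard deviation, and the gray/white-matter contrast-to-noise ratio); the ROI values are synthetic placeholders.

# ROI mean, noise (ROI standard deviation) and gray/white-matter CNR; values are synthetic.
import numpy as np

rng = np.random.default_rng(1)
gm_roi = rng.normal(38, 3, size=200)   # HU values in a gray-matter ROI (synthetic)
wm_roi = rng.normal(28, 3, size=200)   # HU values in a white-matter ROI (synthetic)

noise = wm_roi.std(ddof=1)             # image noise taken as the ROI standard deviation
cnr = abs(gm_roi.mean() - wm_roi.mean()) / noise
print(f"noise = {noise:.2f} HU, CNR = {cnr:.2f}")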

14.
Article in Chinese | WPRIM | ID: wpr-1026377

Abstract

Purpose To explore the application value of combining 18F-FDG PET images with interpretable deep learning radiomics (IDLR) models in the differential diagnosis of primary Parkinson's disease (IPD) and atypical Parkinson's syndromes. Materials and Methods This cross-sectional study was conducted using the Parkinson's Disease PET Imaging Benchmark Database of Huashan Hospital, Fudan University, from March 2015 to February 2023. A total of 330 Parkinson's disease patients underwent 18F-FDG PET imaging; both 18F-FDG PET images and clinical scale information were collected for all subjects. The study included two cohorts, a training group (n=270) and a testing group (n=60), with 211 cases in the IPD group, 59 cases in the progressive supranuclear palsy (PSP) group, and 60 patients in the multiple system atrophy (MSA) group. Clinical information was compared between groups. An IDLR model was developed to extract feature indicators. Under the supervision of radiomics features, IDLR features were selected from the features collected by neural network extractors, and a binary support vector machine model was constructed from the selected features for images in the testing group. The constructed IDLR model, a traditional radiomics model, and a standard uptake ratio model were separately used to calculate the performance metrics and area under the curve values for pairwise classification between the IPD/PSP/MSA groups. The study conducted independent classification and testing in the two cohorts using 100 repetitions of 10-fold cross-validation. Brain-related regions of interest were displayed through feature mapping, using gradient-weighted class activation maps to highlight and visualize the most relevant information in the brain. The output heatmaps of different disease groups were examined and compared with clinical diagnostic locations. Results The IDLR model showed promising results for differentiating between patients with Parkinson's syndromes. It achieved the best classification performance and had the highest area under the curve values compared with the other models, including the standard uptake ratio model (Z=1.22-3.23, all P<0.05) and the radiomics model (Z=1.31-2.96, all P<0.05). In the test set, the area under the curve values for the IDLR model were 0.9357 for differentiating MSA from IPD, 0.9754 for MSA from PSP, and 0.9825 for IPD from PSP. The IDLR model also showed consistency between its filtered feature maps and the visualization of gradient-weighted class activation mapping slice heat maps in the radiomics regions of interest. Conclusion The IDLR model has potential for the differential diagnosis of IPD and atypical Parkinson's syndromes on 18F-FDG PET images.
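A minimal scikit-learn sketch of the binary SVM with repeated 10-fold cross-validation described for the selected IDLR features; the feature matrix, labels, kernel, and number of repeats are assumptions.

# Binary SVM with repeated stratified 10-fold cross-validation; data are synthetic.
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))            # 120 scans x 30 selected features (synthetic)
y = rng.integers(0, 2, size=120)          # e.g. 0 = IPD, 1 = MSA

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)  # repeat count illustrative
scores = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC over {len(scores)} folds: {scores.mean():.3f}")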

15.
China Medical Equipment ; (12): 29-33,43, 2024.
Article in Chinese | WPRIM | ID: wpr-1026519

Abstract

Objective: To explore the predictive value of a deep learning model based on a three-dimensional deep residual network (3D Res-UNet) for the dose accuracy of the postoperative volumetric modulated arc therapy (VMAT) plan for endometrial carcinoma. Methods: A total of 154 VMAT radiotherapy plans for endometrial carcinoma from The First People's Hospital of Neijiang were retrospectively collected. The dataset was divided by random sampling into a training set of 108 cases, a validation set of 15 cases, and a test set of 31 cases at a ratio of 7:1:2. The approved doses of the clinically applied plans were used as the "gold standard" to compare the difference between the radiotherapy doses predicted by 3D Res-UNet and the clinical radiotherapy doses. Results: There were statistical differences in the conformity index (CI) of the target area and the average dose (Dmean) between deep learning and the clinical gold standard (t=-3.115, -0.124, P<0.05); the difference in bladder V40 of the organs at risk (OAR) was significant (t=0.510, P<0.05), and the difference in rectum V50 was significant (t=-2.121, P<0.05). The predicted dose of the left femoral head V30 was significantly lower than the clinical dose (t=0.415, P<0.05). The predicted dose of the right femoral head V30 was significantly lower than the clinical dose (t=-3.102, P<0.05). The predicted pelvic Dmean was significantly higher than the clinical dose (t=1.224, P<0.05). The predicted small intestine V40 was significantly higher than the clinical dose (t=0.461, P<0.05). There were no statistically significant differences in the other indicators (P>0.05). The dose difference maps showed little difference between the predicted and clinical results, and the predicted dose-volume histograms basically coincided with those of clinical application. Conclusion: The 3D Res-UNet model can effectively predict the three-dimensional spatial dose of the postoperative VMAT plan for endometrial carcinoma, which can guide clinical radiotherapy work.
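A small sketch of the paired comparison between predicted and clinically approved dose metrics (for example, bladder V40) with a paired t-test; the dose values are synthetic.

# Paired t-test between predicted and clinically approved dose metrics; values are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
clinical_v40 = rng.normal(45.0, 3.0, size=31)             # % volume receiving 40 Gy (synthetic)
predicted_v40 = clinical_v40 + rng.normal(0.5, 1.0, 31)   # model prediction (synthetic)

t, p = stats.ttest_rel(predicted_v40, clinical_v40)
print(f"paired t = {t:.3f}, P = {p:.4f}")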

16.
Article in Chinese | WPRIM | ID: wpr-1026750

Abstract

Radiomics-based early prediction and treatment efficacy evaluation are critical for personalized treatment strategies in patients with colorectal cancer liver metastases (CCLM). Owing to its high degree of artificial intelligence (AI) participation, repeatability, and reliable performance, deep learning (DL) based on convolutional neural networks enhances the predictive efficacy of the models, making its potential clinical application more promising. Following the gradual construction of multimodal fusion models and multicenter large-sample databases, radiomics and DL will become increasingly essential in the management of CCLM. This review focuses on the main steps of radiomics and DL, summarizes the value of their application in early prediction and treatment efficacy evaluation of different treatment modalities in CCLM, and looks forward to the potential of their in-depth application in the clinical management of CCLM.

17.
Article in Chinese | WPRIM | ID: wpr-1026827

Abstract

Objective To explore a method for objective identification of color information in sublingual vein diagnosis of traditional Chinese medicine (TCM). Methods Combined with computer vision, compact fully convolutional networks (CFCNs) and 19 deep learning classification models were used, and a double pulse rectangle algorithm was designed as a means of segmenting and recognizing sublingual veins and extracting color information. Results The accuracy of segmentation of the tongue bottom obtained by the method of removing reflection + data expansion + data post-processing was 0.9559, the F1 value was 0.9473, and the mIoU value was 0.9000. The accuracy of segmentation of sublingual veins obtained by the method of removing reflection + tongue input + data expansion + erosion and dilation was 0.7784, the F1 value was 0.7383, and the mIoU value was 0.5851, which were obviously superior to the current classic or improved U-Net models. For the color classification of sublingual veins, the best classification model was DenseNet161-bc-early_stopping, with an accuracy rate of 0.8037. Conclusion The deep learning method has a certain effect on identifying the color information of sublingual veins in TCM, providing a new method for the research of quantitative color detection technology for sublingual vein diagnosis in TCM.
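A short sketch of the segmentation scores quoted above (pixel accuracy, F1, and mean IoU over the background and vein classes) computed on binary masks; the toy masks are illustrative.

# Pixel accuracy, F1 and mean IoU for a binary segmentation; toy masks only.
import numpy as np

def scores(pred, ref):
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    tn = np.logical_and(~pred, ~ref).sum()
    acc = (tp + tn) / pred.size
    f1 = 2 * tp / (2 * tp + fp + fn + 1e-9)
    iou_fg = tp / (tp + fp + fn + 1e-9)
    iou_bg = tn / (tn + fp + fn + 1e-9)
    return acc, f1, (iou_fg + iou_bg) / 2   # mIoU over foreground and background

pred = np.zeros((128, 128), bool); pred[40:90, 30:60] = True   # toy masks
ref  = np.zeros((128, 128), bool); ref[42:92, 30:62] = True
acc, f1, miou = scores(pred, ref)
print(f"accuracy={acc:.4f}  F1={f1:.4f}  mIoU={miou:.4f}")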

18.
Article in Chinese | WPRIM | ID: wpr-1027093

Abstract

Objective: To explore the feasibility and application value of an automated method for generating surgical records for resection of benign soft tissue tumors based on dense video descriptions. Methods: The Transformer deep learning model was used to establish an automated surgical record generation system to analyze the surgical videos of 30 patients with benign soft tissue tumors who had been admitted to the Department of Orthopedics, Xijing Hospital, Air Force Military Medical University from September 2021 to August 2023. The patient data were randomly divided into training, validation, and test sets at a ratio of 8:1:1. On the test set, 7 evaluation indexes, BLEU-1, BLEU-2, BLEU-3, BLEU-4, Meteor, Rouge, and CIDEr, were used to evaluate the text quality of the surgical records generated by the model. The generated surgical record text was compared with that of the classical dense video description algorithm, dense video captioning with parallel decoding (PDVC). Results: On the test set, the automated surgical record generation system achieved BLEU-1, BLEU-2, BLEU-3, BLEU-4, Rouge, Meteor, and CIDEr of 16.80, 15.23, 13.01, 11.68, 16.01, 12.67, and 62.30, respectively. The classical algorithm PDVC achieved BLEU-1, BLEU-2, BLEU-3, BLEU-4, Rouge, Meteor, and CIDEr of 15.63, 14.17, 11.90, 10.45, 12.97, 11.99, and 53.64, respectively. The automated surgical record generation system resulted in significant improvements over PDVC on all evaluation indexes; BLEU-4, Rouge, Meteor, and CIDEr were improved by 1.23, 3.04, 0.68, and 8.66, respectively, demonstrating that the proposed system can better capture the key information in the video and thereby generate more effective text records. Conclusion: As the automated surgical record generation system shows good performance in generating surgical records for resection of benign soft tissue tumors based on dense video descriptions, it can be applied in clinical practice.
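For illustration, a sketch of BLEU-n scoring of a generated record against a reference using NLTK; the sentences are toy examples and the smoothing choice is an assumption.

# BLEU-1 to BLEU-4 for a toy generated sentence against a toy reference (NLTK).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = ["the tumor was excised with a 1 cm margin".split()]
candidate = "the tumor was excised with a clear margin".split()

smooth = SmoothingFunction().method1
for n in range(1, 5):
    weights = tuple([1.0 / n] * n + [0.0] * (4 - n))
    score = sentence_bleu(reference, candidate, weights=weights, smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")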

19.
Article in Chinese | WPRIM | ID: wpr-1027173

Abstract

Objective: To investigate the clinical value of a deep learning model based on contrast-enhanced ultrasound (CEUS) video in the differential diagnosis of benign and malignant liver tumors. Methods: Between May 2010 and June 2022, 1,213 patients who underwent CEUS examination for liver masses in the Affiliated Hospital of Southwest Medical University were retrospectively collected, and the enrolled patients were divided into training and independent test cohorts with December 31, 2021 as the time cut-off. In the training cohort, the TimeSformer algorithm was used as the infrastructure; multiple fixed-time segments were obtained from the CEUS arterial-phase videos using a sliding window over the video, and the classification result for the entire video was obtained after fusing the features of the multiple segments, so as to build a deep learning model based on CEUS videos. In the independent test cohort, ROC curves were used to verify the validity of the model, which was compared with three radiologists with different CEUS experience (R1, R2, and R3, with 3, 6, and 10 years of CEUS experience, respectively). Results: A total of 1,213 patients with liver masses were included in the study, including 1,066 patients in the training cohort (426 cases of malignancy) and 147 patients in the independent test cohort (50 cases of malignancy). The area under the curve (AUC) value of the deep learning model was 0.93±0.01 in the training cohort and 0.89±0.01 in the independent test cohort, and the accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 80.42%, 74.19%, 92.00%, 94.52%, and 65.71%, respectively. Among the three radiologists, R1 had the lowest diagnostic performance, with accuracy, sensitivity, specificity, PPV, and NPV of 67.83%, 51.61%, 98.00%, 97.96%, and 52.13%, respectively, while the corresponding indicators for R3 were 82.52%, 76.36%, 94.00%, 95.95%, and 68.12%, respectively. McNemar's test showed that the difference between R1 and the deep learning model was statistically significant (P<0.001), while the differences between R2 and R3 and the deep learning model were not statistically significant (P=0.720, 0.868). In addition, the analysis time of the model for a single case was (340.24±16.32) ms, while the average analysis time of the radiologists was 62.9 s. Conclusions: The deep learning model based on CEUS can better identify benign and malignant liver masses and may reach the diagnostic level of experienced radiologists.
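A minimal sketch of the sliding-window idea described above: fixed-length clips are taken from the arterial-phase video, scored by a clip-level classifier, and the clip outputs are fused into one video-level prediction; the placeholder classifier below stands in for TimeSformer, and the clip length, stride, and fusion by averaging are assumptions.

# Sliding-window clip scoring with simple logit averaging; the clip model is a placeholder.
import torch
import torch.nn as nn

clip_len, stride = 16, 8
video = torch.randn(1, 3, 96, 224, 224)          # (batch, channels, frames, H, W), synthetic

clip_model = nn.Sequential(                       # placeholder clip-level classifier, not TimeSformer
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(3, 2)
)

logits = []
for start in range(0, video.shape[2] - clip_len + 1, stride):
    clip = video[:, :, start:start + clip_len]    # one fixed-time segment
    logits.append(clip_model(clip))
video_logits = torch.stack(logits).mean(dim=0)    # fuse the segment outputs
print(video_logits.softmax(dim=-1))               # benign vs malignant probabilities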

20.
Chinese Journal of Radiology ; (12): 209-215, 2024.
Article in Chinese | WPRIM | ID: wpr-1027302

Abstract

Objective: To explore the value of radiomics and deep learning in predicting the efficacy of initial transarterial chemoembolization (TACE) for hepatocellular carcinoma (HCC). Methods: This was a cohort study. The imaging and clinical information of HCC patients treated with TACE in the Second Affiliated Hospital of Harbin Medical University from January 2015 to January 2021 were collected retrospectively. A total of 265 patients were divided into a response group (175 cases) and a non-response group (90 cases) according to the modified Response Evaluation Criteria in Solid Tumors (mRECIST) 1 to 2 months after initial TACE. At a ratio of 8:2, the patients were randomly divided into a training set (212 cases: 140 responders and 72 non-responders) and a test set (53 cases: 35 responders and 18 non-responders). Univariate and multivariate logistic regression was used to screen clinical variables and construct a clinical model. Radiomics features were extracted from the preoperative CT images, and a radiomics model was constructed after feature dimensionality reduction. Using deep learning, three residual network (ResNet) models (ResNet18, ResNet50, and ResNet101) were established; their effectiveness was compared, and they were integrated to build the best-performing deep learning model. Univariate and multivariate logistic regression was used to combine the three models pairwise to establish combined models. The receiver operating characteristic curve was used to evaluate the performance of the models in distinguishing between the TACE response and non-response groups. Results: In the test set, the areas under the curve (AUC) of the clinical model and the radiomics model in distinguishing response from non-response after TACE were 0.730 (95% CI 0.569-0.891) and 0.775 (95% CI 0.642-0.907), respectively. The AUC of ResNet18, ResNet50, and ResNet101 were 0.719, 0.748, and 0.533, respectively. The AUC of the deep learning model obtained by integrating ResNet18 and ResNet50 was 0.806 (95% CI 0.665-0.946). After pairwise fusion, the combined deep learning-radiomics model showed the highest performance, with an AUC of 0.843 (95% CI 0.730-0.956), which was better than those of the deep learning-clinical model (AUC 0.838, 95% CI 0.719-0.957) and the radiomics-clinical model (AUC 0.786, 95% CI 0.648-0.898). Conclusions: The combined radiomics and deep learning model has high performance in preoperatively predicting the curative effect of TACE in patients with HCC.
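A minimal sketch of fusing two single-modality outputs (a radiomics score and a deep learning score) into a combined model with logistic regression and comparing AUCs; the scores are synthetic and fitting and evaluating on a single set is a simplification.

# Logistic-regression fusion of two per-patient scores and AUC comparison; data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=212)                    # TACE response label (synthetic)
radiomics = y * 0.6 + rng.random(212)               # fake per-patient radiomics scores
deep = y * 0.7 + rng.random(212)                    # fake per-patient deep learning scores

X = np.column_stack([radiomics, deep])
combo = LogisticRegression().fit(X, y)
p = combo.predict_proba(X)[:, 1]

print(f"radiomics AUC: {roc_auc_score(y, radiomics):.3f}")
print(f"deep learning AUC: {roc_auc_score(y, deep):.3f}")
print(f"combined AUC: {roc_auc_score(y, p):.3f}")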

Search details