Results 1 - 3 of 3
1.
Biomed Eng Lett; 13(4): 715-728, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37872984

ABSTRACT

High-quality cardiopulmonary resuscitation (CPR) is the most important factor in promoting resuscitation outcomes; therefore, monitoring the quality of CPR is strongly recommended in current CPR guidelines. Recently, transesophageal echocardiography (TEE) has been proposed as a potential real-time feedback modality because physicians can obtain clear echocardiographic images without interfering with CPR. CPR quality could be further optimized if the myocardial ejection fraction (EF) could be calculated in real time during CPR. We conducted a study to derive a protocol that automatically detects systole and diastole and calculates EF from TEE images acquired from patients with cardiac arrest. The data were augmented using thin-plate spline transformation to address the problem of insufficient data. The deep learning model was built on ResUNet++, and a monogenic filtering method was applied to clarify the ventricular boundary. The performance of the model with the monogenic filter was compared with that of the existing model. The left ventricle was segmented in the midesophageal long-axis (ME LAX) view, and the left and right ventricles were segmented in the ME four-chamber view. In most of the results, the model with the monogenic filter performed better; in the remaining cases the existing model performed better, but the difference was very small. With this trained model, the effect of CPR can be analyzed quantitatively by segmenting the ventricles and measuring their degree of contraction during systole and diastole. Supplementary Information: The online version contains supplementary material available at 10.1007/s13534-023-00293-9.
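
The abstract does not give implementation details for the monogenic filtering step, but one common way to realize it is the Riesz-transform local-energy computation sketched below in Python; the log-Gabor wavelength and bandwidth are illustrative assumptions, not parameters reported by the authors.

import numpy as np

def monogenic_local_energy(img, wavelength=16.0, sigma_on_f=0.55):
    # Local energy/phase of the monogenic signal, obtained by combining an
    # isotropic log-Gabor bandpass filter with the first-order Riesz transform.
    # wavelength and sigma_on_f are illustrative defaults, not values from the paper.
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    U, V = np.meshgrid(u, v)                      # frequency grids, shape (rows, cols)
    radius = np.sqrt(U ** 2 + V ** 2)
    radius[0, 0] = 1.0                            # avoid division by zero at DC

    f0 = 1.0 / wavelength                         # centre frequency of the bandpass
    log_gabor = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    log_gabor[0, 0] = 0.0                         # remove the DC component

    riesz1 = 1j * U / radius                      # Riesz kernels in the frequency domain
    riesz2 = 1j * V / radius

    spectrum = np.fft.fft2(img)
    bp = np.real(np.fft.ifft2(spectrum * log_gabor))           # even (bandpassed) part
    r1 = np.real(np.fft.ifft2(spectrum * log_gabor * riesz1))  # odd parts
    r2 = np.real(np.fft.ifft2(spectrum * log_gabor * riesz2))

    energy = np.sqrt(bp ** 2 + r1 ** 2 + r2 ** 2)              # boundary strength map
    phase = np.arctan2(np.sqrt(r1 ** 2 + r2 ** 2), bp)         # local phase
    return energy, phase

The resulting local-energy (or phase) map could then be stacked with the raw TEE frame as an additional input channel for the ResUNet++ segmentation network, which is one plausible reading of how the filter "clarifies the ventricular boundary".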

2.
Retina; 42(8): 1465-1471, 2022 Aug 01.
Article in English | MEDLINE | ID: mdl-35877965

ABSTRACT

PURPOSE: We used deep learning to predict the final central foveal thickness (CFT), changes in CFT, final best-corrected visual acuity (BCVA), and BCVA changes following noncomplicated idiopathic epiretinal membrane surgery. METHODS: Data of patients who underwent noncomplicated epiretinal membrane surgery at Severance Hospital from January 1, 2010, to December 31, 2018, were reviewed. Patient age, sex, hypertension and diabetes status, and preoperative optical coherence tomography scans were noted. For image analysis and model development, a pre-trained VGG16 was adopted. The mean absolute error and coefficient of determination (R²) were used to evaluate model performance. The study involved 688 eyes of 657 patients. RESULTS: For final CFT, the mean absolute error was lowest in the model that considered only clinical and demographic characteristics, while the highest accuracy was achieved by the model that considered all clinical and surgical information. For CFT changes, models utilizing clinical and surgical information showed the best performance. However, our best model failed to predict the final BCVA and BCVA changes. CONCLUSION: A deep learning model predicted the final CFT and CFT changes in patients 1 year after epiretinal membrane surgery. CFT prediction showed the best results when demographic factors, comorbid diseases, and surgical techniques were considered.
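
The abstract names a pre-trained VGG16 but does not describe the full architecture, so the following PyTorch sketch is only one plausible arrangement: VGG16 image features fused with a placeholder vector of tabular clinical/surgical inputs and passed to a small regression head that predicts final CFT; the mean absolute error reported in the study corresponds to an L1 loss.

import torch
import torch.nn as nn
from torchvision import models

class CFTRegressor(nn.Module):
    # Sketch only: a pretrained VGG16 encodes the preoperative OCT scan and its
    # features are fused with tabular inputs (age, sex, hypertension, diabetes,
    # surgical details, ...). The tabular width of 8 is an assumed placeholder.
    def __init__(self, n_tabular=8):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.backbone = vgg.features              # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d((7, 7))
        self.head = nn.Sequential(
            nn.Linear(512 * 7 * 7 + n_tabular, 256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, 1),                     # predicted final CFT (micrometres)
        )

    def forward(self, oct_image, tabular):
        # oct_image: (B, 3, 224, 224); a grayscale OCT B-scan can be replicated
        # to three channels to match VGG16's expected input.
        x = self.pool(self.backbone(oct_image)).flatten(1)
        x = torch.cat([x, tabular], dim=1)         # fuse image and clinical features
        return self.head(x)

model = CFTRegressor()
loss_fn = nn.L1Loss()                              # mean absolute error, as reported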


Subjects
Deep Learning, Epiretinal Membrane, Epiretinal Membrane/diagnosis, Epiretinal Membrane/surgery, Humans, Retrospective Studies, Optical Coherence Tomography, Visual Acuity, Vitrectomy/methods
3.
Sensors (Basel); 21(20), 2021 Oct 14.
Article in English | MEDLINE | ID: mdl-34696057

ABSTRACT

In this study, we aimed to develop a new automated method for kidney volume measurement in children using ultrasonography (US) with image pre-processing and hybrid learning, and to formulate an equation for the expected kidney volume. The volumes of 282 kidneys (141 subjects, <19 years old) with normal function and structure were measured using US. The volumes of 58 kidneys in 29 subjects who underwent both US and computed tomography (CT) were determined by image segmentation and compared with those calculated by the conventional ellipsoidal method and by CT using intraclass correlation coefficients (ICCs). An equation for the expected kidney volume was developed using multivariate regression analysis, and manual image segmentation was automated using hybrid learning to calculate the kidney volume. The ICCs for the volumes determined by image segmentation and by the ellipsoidal method differed significantly, and the ICC for the volume calculated by hybrid learning was significantly higher than that for the ellipsoidal method. The volume determined by image segmentation was significantly correlated with weight, body surface area, and height. The expected kidney volume was calculated as 2.22 × weight (kg) + 0.252 × height (cm) + 5.138. This method will be valuable for establishing an age-matched normal kidney growth chart through the accumulation and analysis of large-scale data.
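
For concreteness, the reported regression equation and the conventional ellipsoidal estimate it was compared against can be written out as below; the π/6 prolate-ellipsoid formula is the usual form of the ellipsoidal method (assumed here, not quoted in the abstract), and the example measurements are invented for illustration.

import math

def expected_kidney_volume_ml(weight_kg, height_cm):
    # Regression equation quoted in the abstract (single-kidney volume, mL).
    return 2.22 * weight_kg + 0.252 * height_cm + 5.138

def ellipsoidal_kidney_volume_ml(length_cm, width_cm, depth_cm):
    # Conventional prolate-ellipsoid estimate from three US diameters:
    # V = pi/6 * L * W * D (the usual form of the "ellipsoidal method"; assumed).
    return math.pi / 6.0 * length_cm * width_cm * depth_cm

# Invented example: a 20 kg, 110 cm child with an 8.0 x 4.0 x 3.5 cm kidney.
print(expected_kidney_volume_ml(20.0, 110.0))        # ~77.3 mL
print(ellipsoidal_kidney_volume_ml(8.0, 4.0, 3.5))   # ~58.6 mL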


Subjects
Artificial Intelligence, X-Ray Computed Tomography, Adult, Child, Humans, Computer-Assisted Image Processing, Kidney/diagnostic imaging, Ultrasonography, Young Adult