1.
Int J Urol; 30(12): 1103-1111, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37605627

ABSTRACT

OBJECTIVES: To develop diagnostic algorithms of multisequence prostate magnetic resonance imaging for cancer detection and segmentation using deep learning, and to explore the value of dynamic contrast-enhanced imaging in multiparametric imaging compared with biparametric imaging. METHODS: We collected 3227 multiparametric imaging sets from 332 patients, including 218 cancer patients (291 biopsy-proven foci) and 114 noncancer patients. Diagnostic algorithms of T2-weighted, T2-weighted plus dynamic contrast-enhanced, biparametric, and multiparametric imaging were built using 2578 sets, and their performance for clinically significant cancer was evaluated using 649 sets. RESULTS: Biparametric and multiparametric imaging had the following region-based performance: sensitivity of 71.9% and 74.8% (p = 0.394) and positive predictive value of 61.3% and 74.8% (p = 0.013), respectively. In side-specific analyses of cancer images, the specificity was 72.6% and 89.5% (p < 0.001) and the negative predictive value was 78.9% and 83.5% (p = 0.364), respectively. False-negative cancer foci on multiparametric imaging were smaller (p = 0.002) and more frequently of grade group ≤2 (p = 0.028) than true-positive foci. In the peripheral zone, false-positive regions on biparametric imaging turned out to be true negative on multiparametric imaging more frequently than in the transition zone (78.3% vs. 47.2%, p = 0.018). In contrast, T2-weighted plus dynamic contrast-enhanced imaging had lower specificity than T2-weighted imaging alone (41.1% vs. 51.6%, p = 0.042). CONCLUSIONS: With deep learning, multiparametric imaging provides superior specificity and positive predictive value to biparametric imaging, especially in the peripheral zone. Dynamic contrast-enhanced imaging helps reduce overdiagnosis in multiparametric imaging.
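
The abstract does not describe the network itself; the minimal PyTorch sketch below only illustrates how biparametric (T2-weighted + diffusion) and multiparametric (adding dynamic contrast-enhanced) inputs could be fed to the same detector by stacking sequences as image channels. The layer sizes and the 128x128 patch size are assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' model): the same small CNN accepts either
# biparametric (T2W + DWI, 2 channels) or multiparametric (adding DCE, 3 channels)
# input simply by stacking the MRI sequences as image channels.
import torch
import torch.nn as nn

def make_detector(in_channels: int) -> nn.Sequential:
    """Tiny patch-level classifier producing a logit for 'clinically significant cancer'."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1),
    )

bp_model = make_detector(in_channels=2)   # biparametric: T2W + DWI
mp_model = make_detector(in_channels=3)   # multiparametric: T2W + DWI + DCE

bp_batch = torch.randn(4, 2, 128, 128)    # dummy batch of 4 image patches
mp_batch = torch.randn(4, 3, 128, 128)
print(torch.sigmoid(bp_model(bp_batch)).shape)  # torch.Size([4, 1])
print(torch.sigmoid(mp_model(mp_batch)).shape)  # torch.Size([4, 1])
```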


Subject(s)
Deep Learning, Prostatic Neoplasms, Male, Humans, Prostate/diagnostic imaging, Prostate/pathology, Contrast Media, Magnetic Resonance Imaging/methods, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/pathology, Magnetic Resonance Spectroscopy, Retrospective Studies
2.
Skin Res Technol; 29(8): e13414, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37632180

ABSTRACT

BACKGROUND: Appropriate skin treatment and care warrant an accurate prediction of skin moisture. However, current diagnostic tools are costly and time-consuming. Stratum corneum moisture content has been measured with moisture content meters or from a near-infrared image. OBJECTIVE: Here, we establish an artificial intelligence (AI) alternative to conventional skin moisture content measurements. METHODS: Skin feature factors positively or negatively correlated with skin moisture content were created and selected using scikit-learn's PolynomialFeatures(3). Then, an integrated AI model taking a visible-light skin image and the skin feature factors as inputs was trained with 914 skin images, the corresponding skin feature factors, and the corresponding skin moisture contents. RESULTS: A regression-type AI model using only a visible-light skin image proved insufficient. To improve the accuracy of skin moisture content prediction, we searched, through feature engineering ("creation of new factors"), for new features correlated with moisture content among various combinations of the existing skin features, and found that factors created by combining the brown spot count, the pore count, and/or the visually assessed skin roughness give significant correlation coefficients. Then, an integrated deep-learning AI model using a visible-light skin image and these factors significantly improved skin moisture content prediction. CONCLUSION: Skin moisture content is related to the brown spot count, the pore count, and/or the visually assessed skin roughness, so that stratum corneum moisture content can be better inferred from an ordinary visible-light skin photograph combined with these skin feature factors.
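
scikit-learn's PolynomialFeatures(3) generates all polynomial combinations of the base features up to degree 3; the sketch below illustrates this feature-engineering step. The base feature names and the synthetic data are placeholders, not the study's dataset.

```python
# Minimal sketch of degree-3 feature engineering followed by correlation-based selection.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
n = 914  # number of skin images reported in the abstract
# Assumed base features: brown spot count, pore count, visually assessed roughness.
X = rng.normal(size=(n, 3))
moisture = rng.normal(size=n)  # placeholder target (skin moisture content)

poly = PolynomialFeatures(3)    # all polynomial terms up to degree 3
X_poly = poly.fit_transform(X)  # shape (n, 20), including the bias column
names = poly.get_feature_names_out(["spots", "pores", "roughness"])

# Keep the engineered factors most strongly correlated with moisture content.
corr = np.array([abs(np.corrcoef(X_poly[:, j], moisture)[0, 1])
                 if X_poly[:, j].std() > 0 else 0.0
                 for j in range(X_poly.shape[1])])
top = np.argsort(corr)[::-1][:5]
print(list(zip(names[top], corr[top].round(3))))
```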


Subject(s)
Artificial Intelligence, Skin, Humans, Skin/diagnostic imaging, Epidermis, Administration, Cutaneous, Light
3.
BMC Urol; 21(1): 102, 2021 Aug 05.
Article in English | MEDLINE | ID: mdl-34353306

ABSTRACT

BACKGROUND: The recent increase in the use of medical images places a further interpretation burden on physicians. A plain X-ray is a low-cost examination with low-dose radiation exposure and high availability, although diagnosing urolithiasis with it is not always easy. Since the advent of convolutional neural networks and deep learning in the 2000s, computer-aided diagnosis (CAD) has had a great impact on automatic image analysis in the urological field. The objective of our study was to develop a CAD system with a deep learning architecture to detect urinary tract stones on plain X-rays and to evaluate the model's accuracy. METHODS: We collected plain X-ray images of 1017 patients with a radio-opaque upper urinary tract stone. X-ray images (n = 827 and 190) were used as the training and test data, respectively. We used a 17-layer residual network as the convolutional neural network architecture for patch-wise training. The training data were used repeatedly until the best model accuracy was achieved within 300 runs. The F score, the harmonic mean of the sensitivity and positive predictive value (PPV), which represents the balance between the two, was used to evaluate the model's accuracy. RESULTS: Using deep learning, we developed a CAD model that needed 110 ms to provide an answer for each X-ray image. The best F score was 0.752, and the sensitivity and PPV were 0.872 and 0.662, respectively. When limited to proximal ureteral stones, the sensitivity and PPV were 0.925 and 0.876, respectively, and they were lowest for mid-ureteral stones. CONCLUSION: CAD of plain X-rays may be a promising method to detect radio-opaque urinary tract stones with satisfactory sensitivity, although the PPV could still be improved. The CAD model detects urinary tract stones quickly and automatically and has the potential to become a helpful screening modality for diagnosing urolithiasis, especially for primary care physicians. Further study using a higher volume of data would improve the diagnostic performance of CAD models for detecting urinary tract stones on plain X-rays.
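
The F score quoted above is the harmonic mean of sensitivity and PPV; a two-line check against the reported figures:

```python
# F score = harmonic mean of sensitivity (recall) and PPV (precision).
def f_score(sensitivity: float, ppv: float) -> float:
    return 2 * sensitivity * ppv / (sensitivity + ppv)

# Check against the values reported in the abstract.
print(round(f_score(0.872, 0.662), 3))  # ~0.753, consistent with the reported best F score of 0.752
```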


Subject(s)
Deep Learning, Diagnosis, Computer-Assisted, Neural Networks, Computer, Radiography, Urinary Calculi/diagnostic imaging, Adolescent, Adult, Aged, Aged, 80 and over, Datasets as Topic, Female, Humans, Male, Middle Aged, Radiographic Image Interpretation, Computer-Assisted, Sensitivity and Specificity
4.
BJU Int; 122(3): 411-417, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29772101

ABSTRACT

OBJECTIVE: To develop a computer-aided diagnosis (CAD) algorithm with a deep learning architecture for detecting prostate cancer on magnetic resonance imaging (MRI), to promote global standardisation and diminish variation in the interpretation of prostate MRI. PATIENTS AND METHODS: We retrospectively reviewed data from 335 patients with a prostate-specific antigen level of <20 ng/mL who underwent MRI and extended systematic prostate biopsy with or without MRI-targeted biopsy. The data were divided into a training data set (n = 301), which was used to develop the CAD algorithm, and two evaluation data sets (n = 34). A deep convolutional neural network (CNN) was trained using MR images labelled as 'cancer' or 'no cancer' as confirmed by the above-mentioned biopsy. Using the CAD algorithm that showed the best diagnostic accuracy with one of the two evaluation data sets, the data set not used for that selection was analysed, and receiver operating characteristic (ROC) curve analysis was performed. RESULTS: Graphics processing unit computing required 5.5 h to learn to analyse 2 million images. The time required for the CAD algorithm to evaluate a new image was 30 ms/image. The two algorithms showed area under the curve values of 0.645 and 0.636, respectively, in the validation data sets. The number of patients mistakenly diagnosed as having cancer was 16 of 17 and seven of 17 in the two validation data sets, respectively. Zero and two cancers were overlooked in the two validation data sets, respectively. CONCLUSION: We developed a CAD system using a CNN algorithm for the fully automated detection of prostate cancer on MRI, which has the potential to provide reproducible interpretation and a greater level of standardisation and consistency.
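
The ROC analysis itself is standard; below is a minimal sketch of computing an AUC from per-patient CNN scores, using synthetic placeholder labels and scores rather than the study's data.

```python
# Minimal sketch (synthetic data, not the study's): AUC from per-patient CNN scores.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

labels = np.array([1] * 8 + [0] * 9)  # biopsy outcome for 17 patients (placeholder)
rng = np.random.default_rng(0)
scores = np.clip(0.4 + 0.15 * labels + rng.normal(0, 0.2, 17), 0, 1)  # CNN "cancer" probabilities (placeholder)

auc = roc_auc_score(labels, scores)               # the paper reports AUCs of 0.645 and 0.636
fpr, tpr, thresholds = roc_curve(labels, scores)  # points of the ROC curve
print(f"AUC = {auc:.3f}")
```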


Subject(s)
Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Prostate/pathology, Prostatic Neoplasms/diagnostic imaging, Adult, Aged, Aged, 80 and over, Algorithms, Area Under Curve, Humans, Male, Middle Aged, Neural Networks, Computer, Prostate/diagnostic imaging, Retrospective Studies
5.
IEEE Int Conf Rehabil Robot; 2011: 5975341, 2011.
Article in English | MEDLINE | ID: mdl-22275546

ABSTRACT

In this paper, a study of assistive devices with multimodal feedback is conducted to evaluate how efficiently haptic and auditory information supports users' mouse operations. Haptic feedback, generated by a combination of motor-driven wheels, is provided through a haptic mouse. Meanwhile, audio feedback is provided either as synthesized directional speech or as an audio signal. Based on these interfaces, a set of experiments is conducted to compare their efficiency. The measurement criteria are the distance to the target circle in pixels, the operational time for the task in milliseconds, and the user's opinion of the understandability and comfort of each feedback modality, recorded on discrete rating scales. The experimental results show that, with feedback modalities matched to the user, efficiency can be improved through either a reduction in operational time or an increase in pointing accuracy. The evaluation also takes into account the user's satisfaction when using the device for the predefined cursor-movement task, which is occasionally difficult for the user to understand and interpret. As an example application adopting the proposed interface system, a web browser application is implemented and described in this paper.
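
A minimal sketch of the two objective metrics described above, pixel distance to the target circle and operational time in milliseconds; the function name and the example coordinates are illustrative only, not taken from the paper.

```python
# Sketch (assumed, not from the paper) of the pointing-task metrics.
import math
import time

def distance_to_target(cursor: tuple[float, float],
                       target_center: tuple[float, float],
                       target_radius: float) -> float:
    """Pixel distance from the cursor to the edge of the target circle (0 if inside)."""
    d = math.dist(cursor, target_center)
    return max(0.0, d - target_radius)

start = time.perf_counter()
# ... the user performs the cursor-movement task here ...
elapsed_ms = (time.perf_counter() - start) * 1000.0

print(round(distance_to_target((130.0, 95.0), (100.0, 100.0), 20.0), 1))  # 10.4 pixels
print(f"{elapsed_ms:.1f} ms")
```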


Subject(s)
Blindness/physiopathology, Self-Help Devices, User-Computer Interface, Adult, Computers, Female, Humans, Male, Young Adult