Results 1 - 8 of 8
2.
Radiol Phys Technol; 16(2): 325-337, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37097551

ABSTRACT

The Japan Association of Radiological Technologists (JART) and the Japan Medical Imaging and Radiological Systems Industries Association jointly conducted a nationwide survey to reveal the current situation of diagnostic displays in Japan, using a questionnaire on the performance and quality control (QC) of diagnostic displays for mammography and common use. The questionnaire for radiological technologists (RTs) was distributed via email to 4519 medical facilities throughout Japan that employed RTs affiliated with JART; 613 (13.6%) facilities responded. Diagnostic displays with suitable maximal luminance (500 cd/m² or higher for mammography and 350 cd/m² or higher for common use) and resolution (5 megapixels for mammography) were widely used. However, while 99% of the facilities recognized the necessity of QC, only approximately 60% implemented it. This gap arose from several barriers to QC implementation, such as insufficient devices, time, staff, and knowledge, and a lack of recognition of QC as a duty. Implementing QC can help avoid incidents or accidents caused by decreased luminance, variation in luminance response, and the influence of ambient light. The barriers discouraging QC implementation mainly relate to a lack of human resources and budgets. Therefore, to popularize the QC of diagnostic displays in all facilities, it is crucial to identify countermeasures that eliminate these barriers and to continue active efforts toward popularization.


Subject(s)
Mammography, Radiology, Humans, Japan, Quality Control, Radiology/methods, Surveys and Questionnaires
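The suitability criteria cited in this abstract reduce to a simple threshold check. The sketch below encodes them; the Display dataclass and check_display helper are hypothetical illustrations for this listing, not part of the JART questionnaire or any published QC tool.

```python
# Minimal sketch of a display-compliance check using the thresholds cited in the
# abstract above (>= 500 cd/m² and 5 MP for mammography, >= 350 cd/m² for common use).
# Display and check_display are hypothetical names for illustration only.
from dataclasses import dataclass

@dataclass
class Display:
    max_luminance_cd_m2: float  # measured maximal luminance
    megapixels: float           # native resolution in megapixels
    use: str                    # "mammography" or "common"

def check_display(d: Display) -> list[str]:
    """Return a list of QC findings; an empty list means the thresholds are met."""
    findings = []
    if d.use == "mammography":
        if d.max_luminance_cd_m2 < 500:
            findings.append("luminance below 500 cd/m² (mammography threshold)")
        if d.megapixels < 5:
            findings.append("resolution below 5 megapixels (mammography threshold)")
    else:
        if d.max_luminance_cd_m2 < 350:
            findings.append("luminance below 350 cd/m² (common-use threshold)")
    return findings

print(check_display(Display(480.0, 5.0, "mammography")))
# ['luminance below 500 cd/m² (mammography threshold)']
```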
5.
Nucleic Acids Res; 49(W1): W193-W198, 2021 Jul 02.
Article in English | MEDLINE | ID: mdl-34104972

ABSTRACT

Exon skipping using antisense oligonucleotides (ASOs) has recently proven to be a powerful tool for mRNA splicing modulation. Several exon-skipping ASOs have been approved to treat genetic diseases worldwide. However, a significant challenge is the difficulty of selecting an optimal sequence for exon skipping: the efficacy of ASOs is often unpredictable because of the numerous factors involved. To address this gap, we have developed a computational method using machine-learning algorithms that incorporates many parameters as well as experimental data to design highly effective ASOs for exon skipping. eSkip-Finder (https://eskip-finder.org) is the first web-based resource to help researchers identify effective exon-skipping ASOs. eSkip-Finder features two sections: (i) a predictor of the exon-skipping efficacy of novel ASOs and (ii) a database of exon-skipping ASOs. The predictor rapidly analyzes a given set of exon/intron sequences and ASO lengths to identify effective ASOs for exon skipping, based on a machine-learning model trained on experimental data. We confirmed that the predictions correlated well with the in vitro skipping efficacy of sequences not included in the training data. The database lets users search for ASOs by queries such as gene name, species, and exon number.


Subject(s)
Nucleic Acid Databases, Exons, Machine Learning, Antisense Oligonucleotides/chemistry, Software, Internet, Introns, RNA Splicing, Sequence Analysis
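The predictor described above maps candidate ASO target sites to predicted skipping efficacies. The following is a minimal sketch of that idea under stated assumptions: the features (GC content, length, dinucleotide counts) and the gradient-boosting regressor are illustrative stand-ins, not the published eSkip-Finder feature set or model, and the model here is fit on random dummy data purely so the example runs.

```python
# Sketch: enumerate candidate ASO target windows along an exon and score them with a
# trained regression model, as an eSkip-Finder-style predictor does conceptually.
from itertools import product
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

KMERS = ["".join(p) for p in product("ACGU", repeat=2)]  # all 16 dinucleotides

def featurize(target: str) -> np.ndarray:
    """Simple illustrative features for one candidate target site."""
    gc = (target.count("G") + target.count("C")) / len(target)
    kmer_counts = [target.count(k) for k in KMERS]  # non-overlapping counts; fine for a sketch
    return np.array([gc, len(target), *kmer_counts], dtype=float)

def candidate_sites(exon: str, aso_len: int):
    """All contiguous windows of the requested ASO length along the exon."""
    for i in range(len(exon) - aso_len + 1):
        yield i, exon[i:i + aso_len]

# Placeholder model fit on random dummy data, standing in for a model trained on
# curated experimental skipping efficacies.
rng = np.random.default_rng(0)
X_dummy = rng.random((50, 2 + len(KMERS)))
y_dummy = rng.random(50)
model = GradientBoostingRegressor().fit(X_dummy, y_dummy)

exon = "GGCUAGCUAGGCAUCGAUCGGAUCCGAUAGCUAGGCUA"  # toy exon sequence
scored = [(pos, model.predict(featurize(site)[None, :])[0])
          for pos, site in candidate_sites(exon, aso_len=20)]
best_pos, best_score = max(scored, key=lambda t: t[1])
print(f"best window starts at {best_pos}, predicted efficacy {best_score:.2f}")
```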
7.
Head Neck; 42(9): 2581-2592, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32542892

ABSTRACT

BACKGROUND: There are no published reports evaluating the ability of artificial intelligence (AI) in the endoscopic diagnosis of superficial laryngopharyngeal cancer (SLPC). We present our newly developed diagnostic AI model for SLPC detection. METHODS: We used RetinaNet for object detection. SLPC and normal laryngopharyngeal mucosal images obtained with narrow-band imaging were used for the learning and validation data sets. Each independent data set comprised 400 SLPC and 800 normal mucosal images. The diagnostic AI model was constructed stage-wise and evaluated at each learning stage using the validation data sets. RESULTS: In the validation data sets (100 SLPC cases), the median tumor size was 13.2 mm; flat, elevated, and depressed types were found in 77, 21, and 2 cases, respectively. Sensitivity, specificity, and accuracy improved as learning images were added, reaching 95.5%, 98.4%, and 97.3%, respectively, after training on all SLPC and normal mucosal images. CONCLUSIONS: The novel AI model is helpful for the detection of laryngopharyngeal cancer at an early stage.


Subject(s)
Deep Learning, Neoplasms, Artificial Intelligence, Humans, Narrow Band Imaging, Sensitivity and Specificity
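The abstract names RetinaNet as the detector. As a rough sketch of how such a model is run at inference time, the snippet below uses torchvision's COCO-pretrained RetinaNet as a stand-in for the study's model, which was trained on narrow-band-imaging frames; the file name and the 0.5 score threshold are illustrative assumptions, not the paper's operating point.

```python
# Sketch: single-image inference with a RetinaNet detector (torchvision COCO weights
# standing in for the study's lesion-detection model).
import torch
from torchvision.io import read_image
from torchvision.models.detection import retinanet_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = retinanet_resnet50_fpn(weights="DEFAULT")
model.eval()

# Hypothetical endoscopic frame on disk; read as uint8 CHW, then scale to float [0, 1].
img = convert_image_dtype(read_image("endoscopy_frame.png"), torch.float)

with torch.no_grad():
    pred = model([img])[0]  # dict with "boxes", "scores", "labels"

keep = pred["scores"] > 0.5  # illustrative confidence threshold
for box, score in zip(pred["boxes"][keep], pred["scores"][keep]):
    x1, y1, x2, y2 = box.tolist()
    print(f"detection at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), score {score:.2f}")
```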
8.
Surg Endosc; 34(11): 4924-4931, 2020 Nov.
Article in English | MEDLINE | ID: mdl-31797047

ABSTRACT

BACKGROUND: Automatic surgical workflow recognition is a key component in developing context-aware computer-assisted surgery (CA-CAS) systems. However, automatic surgical phase recognition focused on colorectal surgery has not been reported. We aimed to develop a deep learning model for automatic surgical phase recognition based on laparoscopic sigmoidectomy (Lap-S) videos that could be used for real-time phase recognition, and to clarify the accuracy of automatic surgical phase and action recognition from visual information. METHODS: The dataset contained 71 Lap-S cases. The videos were divided into static frames at 1/30-s intervals (30 fps). Each Lap-S video was manually divided into 11 surgical phases (Phases 0-10), and every frame was manually annotated with the surgical action. A convolutional neural network (CNN)-based deep learning model was trained on the training data and validated on a set of unseen test data. RESULTS: The average surgical time was 175 min (± 43 min SD), and the durations of individual surgical phases also varied widely between cases. Each surgery started in the first phase (Phase 0) and ended in the last phase (Phase 10), with phase transitions occurring 14 (± 2 SD) times per procedure on average. The accuracy of automatic surgical phase recognition was 91.9%, and the accuracies of automatic surgical action recognition for extracorporeal action and irrigation were 89.4% and 82.5%, respectively. Moreover, the system performed real-time automatic surgical phase recognition at 32 fps. CONCLUSIONS: The CNN-based deep learning approach enabled the recognition of surgical phases and actions in 71 Lap-S cases based on manually annotated data. The system performed automatic surgical phase recognition and automatic target surgical action recognition with high accuracy. Moreover, this study showed the feasibility of real-time automatic surgical phase recognition at a high frame rate.


Subject(s)
Colectomy/methods, Sigmoid Colon/surgery, Deep Learning, Laparoscopy/methods, Computer-Assisted Surgery/methods, Computer Systems, Humans, Operative Time, Retrospective Studies, Workflow
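Frame-level phase recognition of the kind described above can be framed as image classification with one class per phase. The sketch below shows that framing with an 11-way head (Phases 0-10); the ResNet-18 backbone and input size are illustrative assumptions, as the abstract does not specify the architecture.

```python
# Sketch: per-frame surgical phase classification with a CNN backbone and an 11-way
# head (Phases 0-10, per the abstract). Architecture details are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

NUM_PHASES = 11  # Phases 0-10

model = resnet18(weights=None)  # pass weights="DEFAULT" to start from ImageNet features
model.fc = nn.Linear(model.fc.in_features, NUM_PHASES)  # replace the classifier head
model.eval()

frame = torch.rand(1, 3, 224, 224)  # one preprocessed video frame (dummy tensor here)
with torch.no_grad():
    logits = model(frame)
    phase = logits.argmax(dim=1).item()
print(f"predicted surgical phase: {phase}")
```

Running this classifier on every frame of a 30-fps stream, as the study does in real time, amounts to one forward pass per frame plus any temporal smoothing applied on top.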