Results 1 - 12 of 12
1.
Artif Intell Med ; 150: 102822, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38553162

ABSTRACT

BACKGROUND: Stroke is a prevalent disease with a significant global impact. Effective assessment of stroke severity is vital for an accurate diagnosis, appropriate treatment, and optimal clinical outcomes. The National Institutes of Health Stroke Scale (NIHSS) is a widely used scale for quantitatively assessing stroke severity. However, the current manual scoring of NIHSS is labor-intensive, time-consuming, and sometimes unreliable. Applying artificial intelligence (AI) techniques to automate the quantitative assessment of stroke on vast amounts of electronic health records (EHRs) has attracted much interest. OBJECTIVE: This study aims to develop an automatic, quantitative stroke severity assessment framework through automating the entire NIHSS scoring process on Chinese clinical EHRs. METHODS: Our approach consists of two major parts: Chinese clinical named entity recognition (CNER) with a domain-adaptive pre-trained large language model (LLM) and automated NIHSS scoring. To build a high-performing CNER model, we first construct a stroke-specific, densely annotated dataset "Chinese Stroke Clinical Records" (CSCR) from EHRs provided by our partner hospital, based on a stroke ontology that defines semantically related entities for stroke assessment. We then pre-train a Chinese clinical LLM coined "CliRoberta" through domain-adaptive transfer learning and construct a deep learning-based CNER model that can accurately extract entities directly from Chinese EHRs. Finally, an automated, end-to-end NIHSS scoring pipeline is proposed by mapping the extracted entities to relevant NIHSS items and values, to quantitatively assess the stroke severity. RESULTS: Results obtained on a benchmark dataset CCKS2019 and our newly created CSCR dataset demonstrate the superior performance of our domain-adaptive pre-trained LLM and the CNER model, compared with the existing benchmark LLMs and CNER models. 
The high F1 score of 0.990 ensures the reliability of our model in accurately extracting entities for the subsequent automatic NIHSS scoring. Our automated, end-to-end NIHSS scoring approach then achieved excellent inter-rater agreement (0.823) and intraclass consistency (0.986) with the ground truth, and significantly reduced the processing time from minutes to a few seconds. CONCLUSION: Our proposed automatic, quantitative framework for assessing stroke severity demonstrates exceptional performance and reliability by scoring the NIHSS directly from diagnostic notes in Chinese clinical EHRs. This study also contributes a new clinical dataset, a pre-trained clinical LLM, and an effective deep learning-based CNER model. Deploying these algorithms can improve the accuracy and efficiency of clinical assessment, and help improve the quality, affordability and productivity of healthcare services.


Subject(s)
Artificial Intelligence, Stroke, Humans, Reproducibility of Results, Natural Language Processing, Language, Stroke/diagnosis, Electronic Health Records, China
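The final stage described in this abstract, mapping extracted entities to NIHSS items and values, can be sketched as a simple lookup-and-aggregate step. The entity labels, item names, and score table below are purely illustrative placeholders, not the paper's actual schema:

```python
# Hypothetical sketch: map NER-extracted findings to NIHSS items and sum the
# score. Labels and value tables are illustrative, not the paper's ontology.

NIHSS_VALUE_MAP = {
    ("consciousness", "alert"): ("1a_LOC", 0),
    ("consciousness", "drowsy"): ("1a_LOC", 1),
    ("facial_palsy", "partial"): ("4_facial_palsy", 2),
    ("motor_arm", "no_effort_against_gravity"): ("5_motor_arm", 3),
}

def score_nihss(entities):
    """Aggregate (entity_type, normalized_value) pairs into per-item scores.

    If an item is reported more than once, keep the highest (worst) score.
    Returns the per-item dict and the total NIHSS score.
    """
    items = {}
    for etype, value in entities:
        mapped = NIHSS_VALUE_MAP.get((etype, value))
        if mapped is None:
            continue  # entity not relevant to NIHSS scoring
        item, score = mapped
        items[item] = max(items.get(item, 0), score)
    return items, sum(items.values())

items, total = score_nihss([
    ("consciousness", "drowsy"),
    ("facial_palsy", "partial"),
    ("motor_arm", "no_effort_against_gravity"),
])
# total == 6 for this illustrative input
```

Taking the worst score per item is one reasonable policy when a note mentions the same deficit more than once; the paper does not specify its tie-breaking rule.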
2.
IEEE Internet Things J ; 10(5): 3995-4005, 2023 Mar.
Article in English | MEDLINE | ID: mdl-38046398

ABSTRACT

Edge computing has gained prominence with the rise of the Internet of Things (IoT). Edge-enabled solutions offer efficient computing and control at the network edge to resolve scalability and latency concerns. However, it is challenging for edge computing to handle diverse IoT applications, as they produce massive amounts of heterogeneous data. IoT-enabled frameworks for Big Data analytics face numerous challenges in their existing structural design, such as the high volume of data storage and processing, data heterogeneity, and processing time, among others. Moreover, existing proposals lack effective parallel data loading and robust mechanisms for handling communication overhead. To address these challenges, we propose an optimized IoT-enabled big data analytics architecture for edge-cloud computing using machine learning. In the proposed scheme, an edge intelligence module is introduced to process and store big data efficiently at the edges of the network, with the integration of cloud technology. The scheme is composed of two layers: IoT-edge and Cloud-processing. Data injection and storage are carried out with an optimized MapReduce parallel algorithm, and an optimized Yet Another Resource Negotiator (YARN) is used to manage the cluster efficiently. The proposed design is experimentally evaluated on an authentic dataset using Apache Spark, and a comparative analysis is conducted against existing proposals and traditional mechanisms. The results demonstrate the efficiency of our proposed work.
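The MapReduce pattern at the core of the described pipeline can be illustrated with a tiny single-machine sketch over simulated sensor readings. This stdlib-only toy shows the map/reduce contract, not the paper's optimized Spark/YARN implementation:

```python
# Minimal map-reduce sketch (single machine, stdlib only) of the kind of
# per-sensor aggregation an edge layer might run; illustrative only.
from functools import reduce
from itertools import chain

def mapper(record):
    # Emit (sensor_id, (value, 1)) pairs so averages can be reduced later.
    sensor_id, value = record
    return [(sensor_id, (value, 1))]

def reducer(acc, pair):
    # Fold each (value, count) pair into a running per-key sum and count.
    key, (v, n) = pair
    total, count = acc.get(key, (0.0, 0))
    acc[key] = (total + v, count + n)
    return acc

readings = [("temp", 20.0), ("temp", 22.0), ("hum", 55.0)]
partials = chain.from_iterable(mapper(r) for r in readings)
acc = reduce(reducer, partials, {})
averages = {k: t / n for k, (t, n) in acc.items()}
# averages == {"temp": 21.0, "hum": 55.0}
```

In a real deployment the map and reduce phases run in parallel across the cluster; the single-process `reduce` here only mirrors the data flow.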

3.
Comput Med Imaging Graph ; 102: 102136, 2022 12.
Article in English | MEDLINE | ID: mdl-36375284

ABSTRACT

Worldwide, breast cancer is one of the most frequent and deadly diseases among women. Early, accurate detection of metastatic cancer is a significant factor in raising the survival rate among patients. Diverse Computer-Aided Diagnostic (CAD) systems applying medical imaging modalities have been designed for breast cancer detection, and the impact of deep learning in improving their performance is undeniable. Among medical image modalities, histopathology (HP) images contain richer phenotypic details and help keep track of cancer metastasis. Nonetheless, metastasis detection in whole slide images (WSIs) remains problematic because of the enormous size of these images and the massive cost of labelling them. In this paper, we develop a reliable, fast and accurate CAD system for metastasis detection in breast cancer while applying only a small amount of annotated data at lower resolution, saving considerable time and cost. Unlike other works that apply patch classification for tumor detection, we exploit attention modules, added to regression and classification, to extract tumor parts simultaneously. We then use dense prediction for mask generation and identify individual metastases in WSIs. Experimental outcomes demonstrate the efficiency of our method: it provides more accurate results than other methods that use the full dataset, and it is about seven times faster than an expert pathologist while producing even more accurate tumor detections.


Subject(s)
Breast Neoplasms, Humans, Female, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology
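The last step this abstract describes, turning a dense prediction mask into individual metastases, is essentially connected-component analysis. The flood-fill labeling below is a generic stand-in for that step, not the paper's pipeline, and the tiny mask is made up for illustration:

```python
# Illustrative sketch: once a dense tumor-probability map is thresholded into
# a binary mask, individual metastases can be separated as connected
# components. Generic 4-connected flood-fill labeling, stdlib only.

def label_components(mask):
    """4-connected component labeling of a binary 2-D mask (list of lists)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                current += 1            # start a new lesion label
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and mask[cy][cx] and not labels[cy][cx]:
                        labels[cy][cx] = current
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return labels, current

mask = [
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 1],
]
labels, n = label_components(mask)
# n == 2: one lesion top-left, one bottom-right
```

Production systems would additionally filter components by size and operate on far larger tiled masks.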
5.
Comput Med Imaging Graph ; 87: 101810, 2021 01.
Article in English | MEDLINE | ID: mdl-33279760

ABSTRACT

Accurate diagnosis of Parkinson's Disease (PD) at its early stages remains a challenge for modern clinicians. In this study, we utilize a convolutional neural network (CNN) approach to address this problem. In particular, we develop a CNN-based network model highly capable of discriminating PD patients from healthy controls based on Single Photon Emission Computed Tomography (SPECT) images. A total of 2723 SPECT images are analyzed in this study, of which 1364 are from the healthy control group and the other 1359 from the PD group. An image normalization process is carried out to enhance the regions of interest (ROIs) from which the network must learn distinguishing features. A 10-fold cross-validation is implemented to evaluate the performance of the network model. Our approach demonstrates outstanding performance, with an accuracy of 99.34%, sensitivity of 99.04% and specificity of 99.63%, outperforming all previously published results. Given its high performance and ease of use, our approach has the potential to revolutionize the diagnosis and management of PD.


Asunto(s)
Aprendizaje Profundo , Enfermedad de Parkinson , Humanos , Redes Neurales de la Computación , Enfermedad de Parkinson/diagnóstico por imagen , Tomografía Computarizada de Emisión de Fotón Único
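The three metrics this abstract reports follow directly from a binary confusion matrix. The counts below are back-derived so that they reproduce the quoted percentages on the stated cohort sizes (1359 PD, 1364 controls); they are an illustration, not published confusion-matrix values:

```python
# Sketch of the reported evaluation metrics; the counts are chosen to be
# consistent with the abstract's percentages, not taken from the paper.

def classification_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)  # true-positive rate over PD cases
    specificity = tn / (tn + fp)  # true-negative rate over healthy controls
    return accuracy, sensitivity, specificity

# 1346 + 13 = 1359 PD images; 1359 + 5 = 1364 healthy-control images.
acc, sens, spec = classification_metrics(tp=1346, tn=1359, fp=5, fn=13)
# acc ≈ 0.9934, sens ≈ 0.9904, spec ≈ 0.9963
```

With these counts the formulas yield 99.34% accuracy, 99.04% sensitivity and 99.63% specificity, matching the reported figures.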
6.
IEEE Trans Pattern Anal Mach Intell ; 43(7): 2400-2412, 2021 Jul.
Article in English | MEDLINE | ID: mdl-31940520

ABSTRACT

Data stream analysis aims at extracting discriminative information for classification from continuously incoming samples. It is extremely challenging to detect novel data while incrementally updating the model efficiently and stably, especially for high-dimensional and/or large-scale data streams. This paper proposes an efficient framework for novelty detection and incremental learning on unlabeled chunk data streams. First, an accurate factorization-free kernel discriminative analysis (FKDA-X) is put forward by solving a linear system in the kernel space. FKDA-X produces a Reproducing Kernel Hilbert Space (RKHS) in which unlabeled chunk data can be detected and classified among multiple known classes in a single decision model with a deterministic classification boundary. Moreover, based on FKDA-X, two optimized methods, FKDA-CX and FKDA-C, are proposed. FKDA-CX uses the micro-cluster centers of the original data as input to achieve excellent performance in novelty detection, while FKDA-C and incremental FKDA-C (IFKDA-C), which use the class centers of the original data as input, are extremely fast in online learning. Theoretical analysis and experimental validation on under-sampled and large-scale real-world datasets demonstrate that the proposed algorithms make it possible to learn unlabeled chunk data streams at significantly lower computational cost than, and with accuracy comparable to, the state-of-the-art approaches.
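The center-based idea behind FKDA-C can be caricatured in Euclidean space: assign each point in an incoming chunk to the nearest known-class center, and flag it as novel when it is too far from all of them. The actual method works in an RKHS via a factorization-free linear system, which this toy deliberately omits; the centers and threshold below are made up:

```python
# Toy sketch of center-based chunk classification with novelty detection,
# loosely in the spirit of using class centers as model input. Euclidean
# distance stands in for the paper's kernel-space machinery.
import math

def classify_chunk(chunk, class_centers, novelty_threshold):
    """Assign each point to the nearest known-class center, or 'novel'."""
    results = []
    for point in chunk:
        best_label, best_dist = None, float("inf")
        for label, center in class_centers.items():
            d = math.dist(point, center)
            if d < best_dist:
                best_label, best_dist = label, d
        results.append(best_label if best_dist <= novelty_threshold else "novel")
    return results

centers = {"A": (0.0, 0.0), "B": (10.0, 0.0)}
chunk_labels = classify_chunk([(0.5, 0.2), (9.6, 0.1), (5.0, 8.0)], centers, 2.0)
# chunk_labels == ["A", "B", "novel"]
```

Points flagged "novel" would then seed a new class before the model is updated incrementally.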

7.
J Digit Imaging ; 32(4): 582-596, 2019 08.
Article in English | MEDLINE | ID: mdl-31144149

ABSTRACT

Deep learning is by now firmly established as a robust tool in image segmentation. It has been widely used to separate homogeneous areas as the first and critical component of diagnosis and treatment pipelines. In this article, we present a critical appraisal of popular methods that have employed deep-learning techniques for medical image segmentation. Moreover, we summarize the most common challenges incurred and suggest possible solutions.


Subject(s)
Deep Learning, Diagnostic Imaging/methods, Image Processing (Computer-Assisted)/methods, Humans
8.
Sci Rep ; 8(1): 13884, 2018 Sep 17.
Article in English | MEDLINE | ID: mdl-30224678

ABSTRACT

The classical wavelet packet transform has been widely applied in information processing, which suggests that the quantum wavelet packet transform (QWPT) can play an important role in quantum information processing. In this paper, we design quantum circuits for a generalized tensor product (GTP) and a perfect shuffle permutation (PSP). Next, we propose, for the first time, multi-level and multi-dimensional (1D, 2D and 3D) QWPTs, including a Haar QWPT (HQWPT) and a D4 QWPT (DQWPT) based on the periodization extension, together with their inverse transforms, and prove their correctness based on the GTP and PSP. Furthermore, we analyze the quantum costs and time complexities of the proposed QWPTs and obtain precise results: the time complexity of an HQWPT on 2^n elements is at most 6, which illustrates the high efficiency of the proposed QWPTs. Simulation experiments demonstrate that the proposed QWPTs are correct and effective.
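For orientation, the classical transform that the Haar quantum circuits implement is a cascade of one-level Haar steps. The sketch below shows a single orthonormal Haar analysis/synthesis step on a length-2^n signal; in a wavelet *packet* transform both the approximation and detail halves would be decomposed recursively. This is a classical illustration, not the quantum circuit construction:

```python
# Classical one-level Haar wavelet step with orthonormal 1/sqrt(2) scaling;
# the building block of the (classical) Haar wavelet packet transform.
import math

def haar_step(signal):
    """Split an even-length signal into approximation and detail halves."""
    assert len(signal) % 2 == 0
    s = 1 / math.sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfectly reconstruct the signal from one analysis step."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

x = [4.0, 2.0, 5.0, 5.0]
a, d = haar_step(x)
assert all(abs(u - v) < 1e-12 for u, v in zip(haar_inverse(a, d), x))
```

Because the step is orthonormal, the same pair of butterflies inverts it exactly, which is what makes it amenable to a unitary quantum implementation.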

9.
Med Phys ; 42(4): 1808-17, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25832071

ABSTRACT

PURPOSE: Electromagnetically guided endoscopic procedures, which aim at accurately and robustly localizing the endoscope, involve multimodal sensory information during interventions. However, how to integrate this information for precise and stable endoscopic guidance remains challenging. To tackle this challenge, this paper proposes a new framework, on the basis of an enhanced particle swarm optimization method, to effectively fuse this information for accurate and continuous endoscope localization. METHODS: The authors use the particle swarm optimization method, a stochastic evolutionary computation algorithm, to fuse the multimodal information, including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since evolutionary computation methods are usually limited by premature convergence and fixed evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor) observations to boost the particle swarm optimization, and adaptively update the evolutionary parameters in accordance with spatial constraints and the current observations, resulting in advantageous performance of the enhanced algorithm. RESULTS: The experimental results demonstrate that the proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors' framework was about 3.0 mm and 5.6°, while the previous methods showed at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, significantly better than the other methods (at least 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29.
CONCLUSIONS: A robust electromagnetically guided endoscopy framework was proposed on the basis of an enhanced particle swarm optimization method using the current observation information and adaptive evolutionary factors. The authors' proposed framework greatly reduced the guidance errors, from (4.3 mm, 7.8°) to (3.0 mm, 5.6°), compared to state-of-the-art methods.


Subject(s)
Algorithms, Electromagnetic Phenomena, Endoscopy/methods, Surgery (Computer-Assisted)/methods, Endoscopes, Endoscopy/instrumentation, Monte Carlo Method, Phantoms (Imaging), Surgery (Computer-Assisted)/instrumentation, Tomography (X-Ray Computed)/instrumentation, Tomography (X-Ray Computed)/methods, Video Recording
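The optimizer underlying this framework can be shown in its generic form. The sketch below is textbook particle swarm optimization on a 1-D toy objective; the paper's enhancements (observation-driven updates and adaptive parameters) are not reproduced, and the coefficients are common defaults, not the authors' settings:

```python
# Generic particle swarm optimization sketch on a toy 1-D objective.
import random

def pso(objective, n_particles=20, iters=60, lo=-10.0, hi=10.0, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = list(pos)                      # each particle's best position
    gbest = min(pbest, key=objective)      # swarm-wide best position
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] += vel[i]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i]
        gbest = min(pbest, key=objective)
    return gbest

best = pso(lambda x: (x - 3.0) ** 2)
# best should converge close to the minimizer at 3.0
```

In the guidance setting, each particle would instead encode a candidate 6-DoF endoscope pose, and the fitness would combine image similarity with distance to the sensor measurement.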
10.
Med Image Anal ; 24(1): 282-296, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25660001

ABSTRACT

This paper proposes an observation-driven adaptive differential evolution algorithm that fuses bronchoscopic video sequences, electromagnetic sensor measurements, and computed tomography images for accurate and smooth bronchoscope three-dimensional motion tracking. Currently, an electromagnetic tracker with a position sensor fixed at the bronchoscope tip is commonly used to estimate bronchoscope movements. The large tracking error from directly using sensor measurements, which may be heavily degraded by patient respiratory motion and the magnetic field distortion of the tracker, limits clinical applications. How to effectively use sensor measurements for precise and stable bronchoscope electromagnetic tracking remains challenging. We exploit an observation-driven adaptive differential evolution framework to address this challenge and boost tracking accuracy and smoothness. Our framework differs from other adaptive differential evolution methods in two advantageous respects: (1) the current observation, comprising sensor measurements and bronchoscopic video images, is used in the mutation equation and the fitness computation, respectively; and (2) the mutation factor and the crossover rate are determined adaptively on the basis of the current image observation. The experimental results demonstrate that our framework provides much more accurate and smooth bronchoscope tracking than the state-of-the-art methods. Our approach reduces the tracking error from 3.96 to 2.89 mm, improves the tracking smoothness from 4.08 to 1.62 mm, and increases the visual quality from 0.707 to 0.741.


Asunto(s)
Broncoscopía/métodos , Interpretación de Imagen Asistida por Computador/métodos , Imagenología Tridimensional/métodos , Reconocimiento de Normas Patrones Automatizadas/métodos , Técnica de Sustracción , Cirugía Asistida por Computador/métodos , Algoritmos , Humanos , Aumento de la Imagen/métodos , Reproducibilidad de los Resultados , Sensibilidad y Especificidad
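The base algorithm being adapted here is standard differential evolution (DE/rand/1/bin). The sketch below minimizes a toy objective with fixed mutation factor F and crossover rate CR; the paper's contributions, folding the current observation into mutation and fitness and adapting F and CR per image, are not reproduced:

```python
# Generic DE/rand/1/bin differential evolution sketch on a toy objective.
import random

def differential_evolution(objective, dim=2, pop_size=15, iters=80,
                           F=0.6, CR=0.9, lo=-5.0, hi=5.0, seed=7):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # Pick three distinct donors other than the current individual.
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)   # force at least one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    trial.append(a[j] + F * (b[j] - c[j]))  # mutation
                else:
                    trial.append(pop[i][j])                 # crossover keep
            # Greedy selection: keep the trial if it is no worse.
            if objective(trial) <= objective(pop[i]):
                pop[i] = trial
    return min(pop, key=objective)

best = differential_evolution(lambda v: sum(x * x for x in v))
# best should end up close to the origin
```

In the tracking application, each individual would encode a candidate bronchoscope pose, and the fitness would measure similarity between the real video frame and a virtual rendering from CT.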
11.
Comput Methods Programs Biomed ; 118(2): 147-57, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25547498

ABSTRACT

Registration of pre-clinical images to physical space is indispensable for computer-assisted endoscopic interventions in operating rooms. Electromagnetically navigated endoscopic interventions are increasingly performed in current diagnosis and treatment. Such interventions use an electromagnetic tracker with a miniature sensor, usually attached at the endoscope's distal tip, to track endoscope movements in real time in a pre-clinical image space. Spatial alignment between the electromagnetic tracker (or sensor) and the pre-clinical images must be performed to navigate the endoscope to target regions. This paper proposes an adaptive marker-free registration method that uses a multiple point selection strategy. The method seeks to address the assumption that the endoscope is operated along the centerline of an intraluminal organ, an assumption that is easily violated during interventions. We introduce an adaptive strategy that generates multiple points from sensor measurements and endoscope tip center calibration. From these generated points, we adaptively choose the optimal point, the one closest to its assigned centerline of the hollow organ, to perform registration. The experimental results demonstrate that our adaptive strategy significantly reduced the target registration error, from 5.32 to 2.59 mm in static phantom validation and from at least 7.58 mm to 4.71 mm in dynamic phantom validation, compared to currently available methods.


Subject(s)
Electromagnetic Phenomena, Endoscopy/methods, Calibration, Tomography (X-Ray Computed)
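The selection step of the strategy described above, choosing the candidate point closest to the organ centerline, can be sketched directly. Candidate generation from sensor measurements and tip calibration is simplified away; the centerline is modeled as a 3-D polyline, and all coordinates are made up:

```python
# Sketch of the multiple-point selection idea: among candidate tip points,
# pick the one nearest the organ centerline (a polyline of 3-D points).
import math

def point_to_segment_distance(p, a, b):
    """Distance from point p to the segment a-b in 3-D."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(x * x for x in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = tuple(a[i] + t * ab[i] for i in range(3))
    return math.dist(p, closest)

def select_point(candidates, centerline):
    """Return the candidate with the smallest distance to the centerline."""
    def dist_to_centerline(p):
        return min(point_to_segment_distance(p, centerline[k], centerline[k + 1])
                   for k in range(len(centerline) - 1))
    return min(candidates, key=dist_to_centerline)

centerline = [(0.0, 0.0, 0.0), (0.0, 0.0, 10.0)]
candidates = [(3.0, 0.0, 5.0), (0.5, 0.2, 4.0), (2.0, 2.0, 9.0)]
best_point = select_point(candidates, centerline)
# best_point == (0.5, 0.2, 4.0), the candidate nearest the centerline
```

The selected points would then feed a standard point-based registration between tracker and image space.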
12.
Comput Med Imaging Graph ; 30(6-7): 377-82, 2006.
Article in English | MEDLINE | ID: mdl-17067783

ABSTRACT

The most notable characteristic of the heart is its movement. Detection of dynamic information describing cardiac movement, such as amplitude, speed and acceleration, facilitates interpretation of normal and abnormal function. In recent years, the Omni-directional M-mode Echocardiography System (OMES) has been developed as a process that builds motion information from a sequence of echocardiography image frames. OMES detects cardiac movement through the construction and analysis of Position-Time Grey Waveform (PTGW) images at feature points on the boundaries of the ventricles. Image edge detection plays an important role in determining the feature boundary points and their moving directions as the basis for extracting PTGW images, and Spiral Architecture (SA) has proved efficient for image edge detection. SA is a hexagonal image structure in which an image is represented as a collection of hexagonal pixels. Two operations, spiral addition and spiral multiplication, are defined on SA; they correspond to image translation and rotation, respectively. In this paper, we perform ventricle boundary detection on SA using various defined chain codes, determining the gradient direction of each boundary point at the same time. PTGW images at each boundary point are obtained through a series of spiral additions according to the directions of the boundary points. Unlike OMES, our new approach is unaffected by the translational movement of the heart. As a result, three curves representing the amplitude, speed and acceleration of cardiac movement can easily be drawn from the PTGW images obtained. Our approach is more efficient and accurate than OMES, and our results contain a more robust and complete description of cardiac motion.


Subject(s)
Algorithms, Echocardiography/methods, Heart/physiology, Image Interpretation (Computer-Assisted)/methods, Movement/physiology, Myocardial Contraction/physiology, Subtraction Technique, Image Enhancement/methods, Information Storage and Retrieval/methods, Reproducibility of Results, Sensitivity and Specificity