Results 1 - 11 of 11
1.
PLoS One ; 19(8): e0308755, 2024.
Article in English | MEDLINE | ID: mdl-39146357

ABSTRACT

Postoperative nausea and vomiting (PONV) is a common adverse effect of anesthesia. Identifying risk factors for PONV is crucial because it is associated with longer stays in the post-anesthesia care unit, readmissions, and higher perioperative costs. This retrospective study used artificial intelligence to analyze data from 37,548 adult patients (aged ≥20 years) who underwent surgery under general anesthesia at Tohoku University Hospital from January 1, 2010 to December 31, 2019. To evaluate PONV, patients who experienced nausea and/or vomiting or used antiemetics within 24 hours after surgery were identified from postoperative medical and nursing records. We created a model that predicts the probability of PONV using gradient tree boosting, a machine learning algorithm widely used in many applications due to its efficiency and accuracy; the model was implemented with the LightGBM framework. Data were available for 33,676 patients. Total blood loss was identified as the strongest contributor to PONV, followed by sex, total infusion volume, and patient age. Other identified risk factors were duration of surgery (60-400 min), no blood transfusion, use of desflurane for maintenance of anesthesia, laparoscopic surgery, lateral positioning during surgery, absence of propofol for maintenance of anesthesia, and epidural anesthesia at the lumbar level. The duration of anesthesia and the use of either sevoflurane or fentanyl were not identified as risk factors for PONV. We used artificial intelligence to evaluate the extent to which risk factors contribute to the development of PONV. Intraoperative total blood loss was identified as the potential risk factor most strongly associated with PONV, although it may correlate with the duration of surgery and insufficient circulating blood volume. The use of sevoflurane and fentanyl and the anesthesia time were not identified as risk factors for PONV in this study.


Subject(s)
Machine Learning; Postoperative Nausea and Vomiting; Humans; Postoperative Nausea and Vomiting/etiology; Postoperative Nausea and Vomiting/epidemiology; Male; Female; Risk Factors; Middle Aged; Adult; Retrospective Studies; Aged; Anesthesia, General/adverse effects; Antiemetics/therapeutic use; Antiemetics/adverse effects
2.
Comput Methods Programs Biomed ; 245: 108000, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38237449

ABSTRACT

BACKGROUND AND OBJECTIVE: High-resolution radiographic images play a pivotal role in the early diagnosis and treatment of skeletal muscle-related diseases. Introducing single-image super-resolution (SISR) models into the radiology field is a promising way to enhance image quality. However, the conventional image pipeline, which can learn a mixed mapping between SR and denoising from the color space and inter-pixel patterns, poses a particular challenge for radiographic images with limited pattern features. To address this issue, this paper introduces a novel approach: the Orientation Operator Transformer (O2former). METHODS: We incorporate an orientation operator in the encoder to enhance sensitivity to the denoising mapping and to integrate an orientation prior. Furthermore, we propose a multi-scale feature fusion strategy to amalgamate features captured by different receptive fields with the directional prior, thereby providing a more effective latent representation for the decoder. Based on these components, we propose a transformer-based SISR model, O2former, specifically designed for radiographic images. RESULTS: The experimental results demonstrate that our method achieves the best or second-best performance on the objective metrics compared with the competitors at a ×4 upsampling factor. Qualitatively, more fine details are observed to be recovered. CONCLUSIONS: In this study, we propose a novel framework called O2former for radiological image super-resolution, which improves the reconstruction model's performance by introducing an orientation operator and a multi-scale feature fusion strategy. Our approach is promising for further advancing the field of radiographic image enhancement.


Subject(s)
Radiographic Image Enhancement; Radiology; Radiography; Benchmarking; Electric Power Supplies
3.
Sensors (Basel) ; 23(21)2023 Oct 31.
Article in English | MEDLINE | ID: mdl-37960560

ABSTRACT

JPEG is the international standard for still-image encoding and the most widely used compression algorithm because of its simple encoding process and low computational complexity. Recently, many methods have been developed to improve the quality of JPEG images using deep learning. However, these methods require high-performance devices because they must perform neural network computation to decode images. In this paper, we propose a method to generate high-quality images using deep learning without changing the decoding algorithm. The key idea is to reduce and smooth colors and gradient regions in the original images before JPEG compression. This reduction and smoothing suppress block noise and pseudo-contours in the compressed images. Furthermore, high-performance devices are unnecessary for decoding. The proposed method consists of two components: a color transformation network using deep learning and a pseudo-contour suppression model using signal processing. The experimental results showed that the proposed method outperforms standard JPEG on quality measurements correlated with human perception.
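The key idea above, preprocessing an image before standard JPEG compression rather than post-processing the decoded image, can be illustrated with a crude sketch. The actual method uses a learned color-transformation network and a signal-processing pseudo-contour model; the naive quantization step and box filter below are simple stand-ins, not the paper's components.

```python
import numpy as np

def preprocess_for_jpeg(img, levels=32, kernel=3):
    """Reduce colors, then smooth, before standard JPEG compression.
    A rough sketch of the idea only: quantization stands in for the
    learned color transformation, a box filter for contour suppression."""
    # Color reduction: quantize each channel to `levels` values
    step = 256 // levels
    quantized = (img // step) * step + step // 2
    # Smoothing: simple box filter to soften hard quantization edges
    pad = kernel // 2
    padded = np.pad(quantized.astype(np.float32),
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(quantized, dtype=np.float32)
    for dy in range(kernel):
        for dx in range(kernel):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / kernel**2).astype(np.uint8)

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
smoothed = preprocess_for_jpeg(img)
# `smoothed` would then be fed to any off-the-shelf JPEG encoder,
# and decoding remains completely standard.
```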

4.
IEEE Trans Image Process ; 32: 5837-5851, 2023.
Article in English | MEDLINE | ID: mdl-37889809

ABSTRACT

Scene-text image synthesis techniques that aim to naturally compose text instances on background scene images are very appealing for training deep neural networks due to their ability to provide accurate and comprehensive annotation information. Prior studies have explored generating synthetic text images on two-dimensional and three-dimensional surfaces using rules derived from real-world observations. Some of these studies have proposed generating scene-text images through learning; however, owing to the absence of a suitable training dataset, unsupervised frameworks have been explored to learn from existing real-world data, which might not yield reliable performance. To ease this dilemma and facilitate research on learning-based scene text synthesis, we introduce DecompST, a real-world dataset prepared from some public benchmarks, containing three types of annotations: quadrilateral-level BBoxes, stroke-level text masks, and text-erased images. Leveraging the DecompST dataset, we propose a Learning-Based Text Synthesis engine (LBTS) that includes a text location proposal network (TLPNet) and a text appearance adaptation network (TAANet). TLPNet first predicts the suitable regions for text embedding, after which TAANet adaptively adjusts the geometry and color of the text instance to match the background context. After training, those networks can be integrated and utilized to generate the synthetic dataset for scene text analysis tasks. Comprehensive experiments were conducted to validate the effectiveness of the proposed LBTS along with existing methods, and the experimental results indicate the proposed LBTS can generate better pretraining data for scene text detectors. Our dataset and code are made available at: https://github.com/iiclab/DecompST.

5.
IEEE Trans Image Process ; 30: 9306-9320, 2021.
Article in English | MEDLINE | ID: mdl-34752394

ABSTRACT

Scene text erasing, which replaces text regions with reasonable content in natural images, has drawn significant attention in the computer vision community in recent years. There are two potential subtasks in scene text erasing: text detection and image inpainting. Both subtasks require considerable data to achieve better performance; however, the lack of a large-scale real-world scene-text removal dataset does not allow existing methods to realize their potential. To compensate for the lack of pairwise real-world data, we made considerable use of synthetic text after additional enhancement and subsequently trained our model only on the dataset generated by the improved synthetic text engine. Our proposed network contains a stroke mask prediction module and background inpainting module that can extract the text stroke as a relatively small hole from the cropped text image to maintain more background content for better inpainting results. This model can partially erase text instances in a scene image with a bounding box or work with an existing scene-text detector for automatic scene text erasing. The experimental results from the qualitative and quantitative evaluation on the SCUT-Syn, ICDAR2013, and SCUT-EnsText datasets demonstrate that our method significantly outperforms existing state-of-the-art methods even when they are trained on real-world data.

6.
Molecules ; 26(11)2021 May 24.
Article in English | MEDLINE | ID: mdl-34073745

ABSTRACT

Feature extraction is essential for estimating the chemical properties of molecules using machine learning. Recently, graph neural networks have attracted attention for feature extraction from molecules. However, existing methods focus only on specific structural information, such as node relationships. In this paper, we propose a novel graph convolutional neural network that performs feature extraction while simultaneously considering multiple structures. Specifically, we propose feature extraction paths specialized for node, edge, and three-dimensional structures. Moreover, we propose an attention mechanism to aggregate the features extracted by these paths. The attention aggregation enables us to select useful features dynamically. The experimental results showed that the proposed method outperformed previous methods.
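The attention aggregation over per-path features can be sketched as follows. The scoring vector `w` and the feature dimension are hypothetical placeholders; in the paper the mechanism is learned end-to-end inside the network.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_aggregate(path_features, w):
    """Aggregate per-path molecule features with attention weights.
    path_features: (num_paths, feat_dim), e.g. the node-, edge-, and
    3D-structure paths; w: (feat_dim,) scoring vector (hypothetical)."""
    scores = path_features @ w   # one relevance score per path
    alpha = softmax(scores)      # attention weights, sum to 1
    return alpha @ path_features # weighted sum over the paths

rng = np.random.default_rng(0)
paths = rng.normal(size=(3, 16))  # node / edge / 3D feature paths
w = rng.normal(size=16)
agg = attention_aggregate(paths, w)
```

Because the weights are input-dependent, the model can emphasize, say, the 3D-structure path for conformation-sensitive properties and the node path for others.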

7.
Sensors (Basel) ; 21(4)2021 Feb 09.
Article in English | MEDLINE | ID: mdl-33572435

ABSTRACT

Recently, interest has surged in intelligent sensors that use text detection. However, detecting small text remains challenging. To solve this problem, we propose a novel text detection CNN (convolutional neural network) architecture that is sensitive to text scale. We extract multi-resolution feature maps from multi-stage convolution layers, which prevents information loss and maintains the feature size. In addition, we designed the CNN's proposal stages with the receptive field size in mind. The experimental results show the importance of the receptive field size.
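The receptive field size emphasized above can be computed analytically for a stack of convolution layers. The helper below uses the standard recurrence over kernel sizes and strides; the layer configuration shown is illustrative, not the paper's architecture.

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers given as (kernel, stride)
    pairs: each layer grows the field by (k - 1) times the cumulative
    stride ("jump") of the layers before it."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# Example: three 3x3 stride-1 convs followed by one 3x3 stride-2 conv
rf = receptive_field([(3, 1), (3, 1), (3, 1), (3, 2)])
```

Matching this number to the expected text height per proposal stage is one plausible reading of designing stages "with the receptive field size in mind."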

8.
IEEE Comput Graph Appl ; 40(1): 99-111, 2020.
Article in English | MEDLINE | ID: mdl-31380748

ABSTRACT

The automated generation of fonts containing a large number of characters is in high demand. For example, a typical Japanese font requires over 1000 characters. Unfortunately, professional typographers create the majority of fonts, resulting in significant financial and time investments for font generation. The main contribution of this article is the development of a method that automatically generates a target typographic font containing thousands of characters, from a small subset of character images in the target font. We generate characters other than the subset so that a complete font is obtained. We propose a novel font generation method with the capability to deal with various fonts, including a font composed of distinctive strokes, which are difficult for existing methods to handle. We demonstrated the proposed method by generating 2965 characters in 47 fonts. Moreover, objective and subjective evaluations verified that the generated characters are similar to the original characters.

9.
J Imaging ; 6(4)2020 Apr 06.
Article in English | MEDLINE | ID: mdl-34460722

ABSTRACT

Pansharpening is a method for generating high-spatial-resolution multi-spectral (MS) images from panchromatic (PAN) and multi-spectral images. A common challenge in pansharpening is reducing the spectral distortion caused by increasing the resolution. In this paper, we propose a method for reducing spectral distortion based on the intensity-hue-saturation (IHS) method, targeting satellite images. The IHS method improves the resolution of an RGB image by replacing the intensity of the low-resolution RGB image with that of the high-resolution PAN image. The spectral characteristics of the PAN and MS images differ, and this difference may cause spectral distortion in the pansharpened image. Although many solutions for reducing spectral distortion using a modeled spectrum have been proposed, the quality of the results depends on the image dataset. In the proposed technique, we model a low-spatial-resolution PAN image according to a relative spectral response graph, and then calculate the corrected intensity using the model and the observed dataset. Experiments were conducted on three IKONOS datasets, and the results were evaluated using several major quality metrics. This quantitative evaluation demonstrated the stability of the pansharpened images and the effectiveness of the proposed method.
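The baseline IHS replacement the abstract builds on can be sketched as follows; the paper's contribution, correcting the intensity with a relative spectral response model, is omitted here, and the simple channel mean used as the intensity is an assumption.

```python
import numpy as np

def ihs_pansharpen(ms, pan):
    """Basic IHS pansharpening: replace the intensity of an (already
    upsampled) RGB multispectral image with the high-resolution PAN
    band by injecting the PAN-minus-intensity detail into each channel.
    ms: (H, W, 3) float RGB, pan: (H, W) float, both scaled to [0, 1]."""
    intensity = ms.mean(axis=2)    # I component of the IHS transform
    detail = pan - intensity       # spatial detail to inject
    return np.clip(ms + detail[..., None], 0.0, 1.0)

rng = np.random.default_rng(0)
ms = rng.random((32, 32, 3))       # toy upsampled MS image
pan = rng.random((32, 32))         # toy PAN band
sharp = ihs_pansharpen(ms, pan)
```

The spectral distortion discussed in the abstract arises exactly because `intensity` computed from the MS bands does not match what the PAN sensor actually measured, which is what the proposed spectral-response correction addresses.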

10.
IEEE Trans Neural Netw ; 20(11): 1783-96, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19770092

ABSTRACT

Exponential principal component analysis (e-PCA) has been proposed to reduce the dimension of the parameters of probability distributions using Kullback-Leibler divergence as a distance between two distributions. It also provides a framework for dealing with various data types, such as binary and integer data, for which the Gaussian assumption on the data distribution is inappropriate. In this paper, we introduce a latent variable model for e-PCA. Assuming a discrete distribution on the latent variable leads to mixture models with constraints on their parameters; this provides a framework for clustering on a lower-dimensional subspace of exponential family distributions. We derive a learning algorithm for these mixture models based on the variational Bayes (VB) method. Although intractable integration is required to implement the algorithm for a subspace, an approximation technique using Laplace's method allows us to carry out clustering on an arbitrary subspace. Combined with the estimation of the subspace, the resulting algorithm performs simultaneous dimensionality reduction and clustering. Numerical experiments on synthetic and real data demonstrate its effectiveness for extracting the structure of data as a visualization technique and its high generalization ability as a density estimation model.


Subject(s)
Algorithms; Artificial Intelligence; Bayes Theorem; Computer Simulation; Neural Networks, Computer; Principal Component Analysis; Data Interpretation, Statistical; Mathematics; Models, Theoretical
11.
IEEE Trans Image Process ; 16(8): 2139-49, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17688218

ABSTRACT

Template matching is widely used in many image and signal processing applications. This paper proposes a novel template matching algorithm, called algebraic template matching. Given a template and an input image, algebraic template matching efficiently calculates similarities between the template and partial images of the input image over various widths and heights. The partial image most similar to the template is detected in the input image for any location, width, and height. In the proposed algorithm, a polynomial that approximates the template image is matched against the input image instead of the template image itself. The proposed algorithm is especially effective when the width and height of the template image differ from those of the partial image to be matched. An algorithm using Legendre polynomials is proposed for efficient approximation of the template image. This algorithm not only reduces computational costs but also improves the quality of the approximated image. It is shown theoretically and experimentally that the computational cost of the proposed algorithm is much smaller than that of existing methods.
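The core idea, matching a polynomial approximation of the template rather than the template itself, can be illustrated with a 1-D Legendre least-squares fit; the paper works in 2-D and handles arbitrary scales, so this is only a toy sketch.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_approx_1d(signal, degree=8):
    """Approximate a 1-D template signal by a degree-`degree` Legendre
    polynomial via least squares on [-1, 1]. Returns the reconstructed
    signal and the Legendre coefficients (degree + 1 of them)."""
    x = np.linspace(-1.0, 1.0, signal.size)
    coeffs = legendre.legfit(x, signal, degree)
    return legendre.legval(x, coeffs), coeffs

t = np.cos(np.linspace(0.0, np.pi, 64))   # toy 1-D template
approx, coeffs = legendre_approx_1d(t)
```

Once the template is a low-degree polynomial, evaluating it at a rescaled coordinate grid is cheap, which is one way to see why similarities over many widths and heights can be computed efficiently.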


Subject(s)
Algorithms; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Computer Graphics; Image Enhancement/methods; Information Storage and Retrieval/methods; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted