Results 1 - 20 of 45

1.
Sci Rep ; 14(1): 14994, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38951207

ABSTRACT

Manually extracted agricultural phenotype information is highly subjective and of low accuracy, while information extracted from images is susceptible to interference from haze. Furthermore, the agricultural image dehazing methods used for extracting such information are limited by unclear texture details and poor color representation in the restored images. To address these limitations, we propose AgriGAN (unpaired image dehazing via a cycle-consistent generative adversarial network) to enhance dehazing performance for agricultural plant phenotyping. The algorithm incorporates an atmospheric scattering model to improve the discriminator and employs a whole-detail consistent discrimination approach to increase discriminator efficiency, thereby accelerating convergence towards the Nash equilibrium of the adversarial network. Finally, by training with the adversarial loss combined with the cycle-consistency loss, clear images are obtained after the dehazing process. Experimental evaluations and comparative analyses were conducted to assess the algorithm's performance, demonstrating improved accuracy in dehazing agricultural images while preserving detailed texture information and mitigating color deviation.
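
The atmospheric scattering model referenced above is the standard haze-formation relation I(x) = J(x)·t(x) + A·(1 − t(x)). The sketch below (with hypothetical function names, not the AgriGAN code) shows how a hazy image can be synthesized from a clear image, a transmission map, and a global atmospheric light, which is the physical relation the improved discriminator builds on.

```python
import numpy as np

def transmission_from_depth(depth, beta=1.0):
    """Transmission falls off exponentially with scene depth: t = exp(-beta * d)."""
    return np.exp(-beta * depth)

def apply_atmospheric_scattering(clear, t, A):
    """Standard haze formation model: I = J * t + A * (1 - t).

    clear : haze-free image J, float array in [0, 1], shape (H, W, 3)
    t     : transmission map in [0, 1], shape (H, W)
    A     : global atmospheric light, scalar or length-3 array
    """
    t = t[..., None]                      # broadcast over the color channels
    return clear * t + np.asarray(A) * (1.0 - t)
```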

2.
J Imaging ; 10(7)2024 Jul 11.
Article in English | MEDLINE | ID: mdl-39057735

ABSTRACT

Hazy weather degrades image quality, making images blurry and reducing contrast. This renders object edges and features unclear, leading to lower detection accuracy and reliability. To enhance haze removal effectiveness, we propose an image dehazing and fusion network based on the encoder-decoder paradigm (UIDF-Net). This network leverages the Image Fusion Module (MDL-IFM) to fuse the features of dehazed images, producing clearer results. Additionally, to better extract haze information, we introduce a haze encoder (Mist-Encode) that effectively processes different frequency features of images, improving the model's performance in image dehazing tasks. Experimental results demonstrate that the proposed model achieves superior dehazing performance compared to existing algorithms on outdoor datasets.

3.
Sensors (Basel) ; 24(14)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39066026

ABSTRACT

In foggy weather, outdoor safety helmet detection often suffers from low visibility and unclear objects, hindering optimal detector performance. Moreover, safety helmets typically appear as small objects at construction sites, prone to occlusion and difficult to distinguish from complex backgrounds, further exacerbating the detection challenge. Therefore, the real-time and precise detection of safety helmet usage among construction personnel, particularly in adverse weather conditions such as foggy weather, poses a significant challenge. To address this issue, this paper proposes the DST-DETR, a framework for foggy weather safety helmet detection. The DST-DETR framework comprises a dehazing module, PAOD-Net, and an object detection module, ST-DETR, for joint dehazing and detection. Initially, foggy images are restored within PAOD-Net, which enhances the AOD-Net model by introducing a novel convolutional module, PfConv, guided by the parameter-free average attention module (PfAAM). This module enables more focused attention on crucial features in lightweight models, thereby enhancing performance. Subsequently, the MS-SSIM + ℓ2 loss function is employed to bolster the model's robustness, making it adaptable to scenes with intricate backgrounds and variable fog densities. Next, within the object detection module, the ST-DETR model is designed to address small objects. By refining the RT-DETR model, its capability to detect small objects in low-quality images is enhanced. The core of this approach lies in utilizing the variant ResNet-18 as the backbone to make the network lightweight without sacrificing accuracy, followed by effectively integrating the small-object layer into the improved BiFPN neck structure, resulting in CCFF-BiFPN-P2. Various experiments were conducted to qualitatively and quantitatively compare our method with several state-of-the-art approaches, demonstrating its superiority. The results validate that the DST-DETR algorithm is better suited for safety helmet detection in foggy construction scenarios.
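
The MS-SSIM + ℓ2 objective mentioned above is commonly implemented as a weighted sum of a structural term and a pixel-wise term. A minimal sketch follows; the `pytorch_msssim` package, the weighting factor `alpha = 0.84`, and the function name are assumptions for illustration, not details taken from the DST-DETR paper.

```python
import torch.nn.functional as F
from pytorch_msssim import ms_ssim  # third-party package, assumed available

def ms_ssim_l2_loss(pred, target, alpha=0.84):
    """Weighted combination of (1 - MS-SSIM) and mean squared error."""
    structural = 1.0 - ms_ssim(pred, target, data_range=1.0)  # images scaled to [0, 1]
    pixelwise = F.mse_loss(pred, target)
    return alpha * structural + (1.0 - alpha) * pixelwise
```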

4.
Sensors (Basel) ; 24(11)2024 May 26.
Article in English | MEDLINE | ID: mdl-38894221

ABSTRACT

To address the incomplete dehazing, color distortion, and loss of detail and edge information that existing algorithms exhibit when processing underground coal mine images, an image dehazing algorithm named CAB CA DSConv Fusion gUNet (CCDF-gUNet) is proposed. First, Dynamic Snake Convolution (DSConv) is introduced to replace traditional convolutions, enhancing the feature extraction capability. Second, residual attention convolution blocks are constructed to simultaneously focus on local and global information in images. Additionally, the Coordinate Attention (CA) module is utilized to learn the coordinate information of features so that the model can better capture the key information in images. Furthermore, a fusion loss function is introduced to jointly account for the detail and structural consistency of images. Finally, on the public Haze-4K test set, the algorithm achieves a Peak Signal-to-Noise Ratio (PSNR) of 30.72 dB, a Structural Similarity (SSIM) of 0.976, and a Mean Squared Error (MSE) of 55.04; on a self-made underground coal mine dataset, the corresponding values are 31.18 dB, 0.971, and 49.66. The experimental results show that the algorithm performs well in dehazing, effectively avoids color distortion, and retains image details and edge information, providing a theoretical reference for image processing of coal mine surveillance video.
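
For reference, the PSNR and MSE figures quoted above are related by PSNR = 10·log10(peak² / MSE). A small sketch of both metrics is given below (function names are mine; SSIM is omitted, but an implementation exists as `skimage.metrics.structural_similarity`):

```python
import numpy as np

def mse(img, ref):
    """Mean squared error between two images on the same intensity scale."""
    diff = img.astype(np.float64) - ref.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(img, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(img, ref)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)
```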

5.
Sensors (Basel) ; 24(11)2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38894379

ABSTRACT

In foggy weather, captured images are adversely affected by natural environmental factors, resulting in reduced contrast and diminished visibility. Traditional image dehazing methods typically rely on prior knowledge, but their efficacy diminishes in practical, complex environments. Deep learning methods have shown promise in single-image dehazing tasks, but they often struggle to fully leverage depth and edge information, leading to blurred edges and incomplete dehazing. To address these challenges, this paper proposes a depth-guided bilateral grid feature fusion dehazing network. The network extracts depth information through a dedicated module, derives bilateral grid features via a U-Net, employs the depth information to guide the sampling of the bilateral grid features, reconstructs features using a dedicated module, and finally estimates dehazed images through two convolutional layers and residual connections with the original images. The experimental results demonstrate the effectiveness of the proposed method on public datasets, successfully removing fog while preserving image details.

6.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931757

ABSTRACT

Remote sensing images are inevitably affected by haze degradation with complex appearance and non-uniform distribution, which remarkably reduces the effectiveness of downstream remote sensing visual tasks. However, most current methods principally operate in the original pixel space of the image, which hinders the exploration of the frequency characteristics of remote sensing images and results in these models failing to fully exploit their representation ability to produce high-quality images. This paper proposes a frequency-oriented remote sensing dehazing Transformer named FOTformer to explore information in the frequency domain and eliminate disturbances caused by haze in remote sensing images. It contains three components. Specifically, we develop a frequency-prompt attention evaluator to estimate the self-correlation of features in the frequency domain rather than the spatial domain, improving the image restoration performance. We propose a content reconstruction feed-forward network that captures information between different scales in features and integrates and processes global frequency-domain information and local multi-scale spatial information in Fourier space to reconstruct the global content under the guidance of the amplitude spectrum. We design a spatial-frequency aggregation block to exchange and fuse features from the frequency domain and spatial domain of the encoder and decoder, facilitating the propagation of features from the encoder stream to the decoder and alleviating the problem of information loss in the network. The experimental results show that FOTformer achieves more competitive performance than other remote sensing dehazing methods on commonly used benchmark datasets.
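
As an illustration of the kind of Fourier-space processing described above, the snippet below splits a feature map into its amplitude and phase spectra and reassembles it with `torch.fft`. It is a minimal sketch of the general technique, not FOTformer's actual modules, and the function names are my own.

```python
import torch

def split_amplitude_phase(feat):
    """Decompose a feature map of shape (N, C, H, W) into Fourier amplitude and phase."""
    spec = torch.fft.fft2(feat, norm="ortho")
    return torch.abs(spec), torch.angle(spec)

def merge_amplitude_phase(amplitude, phase):
    """Rebuild the spatial feature map from amplitude and phase."""
    spec = torch.polar(amplitude, phase)           # amplitude * exp(i * phase)
    return torch.fft.ifft2(spec, norm="ortho").real
```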

7.
Neural Netw ; 176: 106314, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38669785

ABSTRACT

Recently, unsupervised algorithms have achieved remarkable performance in image dehazing. However, the CycleGAN framework can lead to confusion in generator learning due to inconsistent data distributions, and the DisentGAN framework lacks effective constraints on generated images, resulting in the loss of image content details and color distortion. Moreover, Squeeze-and-Excitation channel attention employs only fully connected layers to capture global information, lacking interaction with local information and resulting in inaccurate feature weight allocation for image dehazing. To solve the above problems, in this paper we propose an Unsupervised Bidirectional Contrastive Reconstruction and Adaptive Fine-Grained Channel Attention Network (UBRFC-Net). Specifically, an Unsupervised Bidirectional Contrastive Reconstruction Framework (BCRF) is proposed to establish bidirectional contrastive reconstruction constraints, not only avoiding the generator learning confusion of CycleGAN but also enhancing the constraint capability for clear images and the reconstruction ability of the unsupervised dehazing network. Furthermore, an Adaptive Fine-Grained Channel Attention (FCA) mechanism is developed that uses a correlation matrix to capture the correlation between global and local information at various granularities and promotes interaction between them, achieving more efficient feature weight assignment. Experimental results on challenging benchmark datasets demonstrate the superiority of our UBRFC-Net over state-of-the-art unsupervised image dehazing methods. This study introduces an enhanced unsupervised image dehazing approach, addressing limitations of existing methods and achieving superior dehazing results. The source code is available at https://github.com/Lose-Code/UBRFC-Net.


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Unsupervised Machine Learning; Image Processing, Computer-Assisted/methods; Algorithms; Humans; Deep Learning
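
For context on the Squeeze-and-Excitation attention criticized above, a minimal PyTorch sketch of that baseline is shown below: global average pooling followed by two fully connected layers. It illustrates the design the abstract argues ignores local information, not the proposed FCA module.

```python
import torch.nn as nn

class SEChannelAttention(nn.Module):
    """Squeeze-and-Excitation channel attention built only from global pooling and FC layers."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))        # squeeze: global information only
        return x * w.view(n, c, 1, 1)          # excite: per-channel reweighting
```
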
8.
Sensors (Basel) ; 24(7)2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38610257

ABSTRACT

Images obtained in an unfavorable environment may be affected by haze or fog, leading to fuzzy image details, low contrast, and loss of important information. Recently, significant progress has been achieved in the realm of image dehazing, largely due to the adoption of deep learning techniques. Owing to the lack of modules specifically designed to learn the unique characteristics of haze, existing deep neural network-based methods are impractical for processing images containing haze. In addition, most networks primarily focus on learning clear image information while disregarding potential features in hazy images. To address these limitations, we propose an innovative method called contrastive multiscale transformer for image dehazing (CMT-Net). This method uses the multiscale transformer to enable the network to learn global hazy features at multiple scales. Furthermore, we introduce feature combination attention and a haze-aware module to enhance the network's ability to handle varying concentrations of haze by assigning more weight to regions containing haze. Finally, we design a multistage contrastive learning loss incorporating different positive and negative samples at various stages to guide the network's learning process to restore real and non-hazy images. The experimental findings demonstrate that CMT-Net provides exceptional performance on established datasets and exhibits superior visual outcomes.
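
The multistage contrastive loss described above uses clear images as positive samples and hazy images as negative samples. One common single-stage formulation of such a contrastive regularizer (a sketch under my own naming, not the CMT-Net loss) pulls the dehazed output toward the positive and pushes it away from the negative in a shared feature space:

```python
import torch.nn.functional as F

def contrastive_regularization(anchor_feat, positive_feat, negative_feat, eps=1e-7):
    """Ratio of the distance to the clear (positive) features over the distance to the
    hazy (negative) features; minimizing it pulls the dehazed result toward the clear
    image and away from the hazy input."""
    d_pos = F.l1_loss(anchor_feat, positive_feat)
    d_neg = F.l1_loss(anchor_feat, negative_feat)
    return d_pos / (d_neg + eps)
```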

9.
Sci Prog ; 107(1): 368504231221407, 2024.
Article in English | MEDLINE | ID: mdl-38314759

ABSTRACT

Foggy images hamper image analysis and measurement because of their low definition and blurred details. Despite numerous studies on haze in natural images, the recovery effect is not ideal when processing hazy images that contain sky areas: the dark channel prior technique misestimates atmospheric light values and produces halo artefacts in such images. An improved dark channel prior single-image defogging technique based on image segmentation and joint filtering is therefore proposed. First, an estimation method for the atmospheric illumination value based on image segmentation is proposed. The probability density distribution function of the hazy grey image is constructed during segmentation; this distribution, the K-means clustering technique, and the atmospheric illumination estimation method are combined to improve the segmentation of sky and non-sky areas in hazy images. Based on the segmentation threshold, the number of pixels in the sky and non-sky areas, together with the normalisation results, is counted to calculate the atmospheric illumination values. Second, to address the halo artefact phenomenon, a method for optimising the image transmittance map using joint filtering is proposed. The transmittance map is optimised by combining fast-guided filtering and weighted least-squares filtering to retain edge information and smooth gradient changes in the interior regions. Finally, gamma correction and automatic level optimisation are used to improve the brightness and contrast of the defogged images. The experimental results show that the proposed technique can effectively achieve sky segmentation. Compared to the traditional dark channel prior technique, the proposed technique suppresses halo artefacts and improves image detail recovery. Compared to other techniques, it exhibits excellent performance in subjective and objective evaluations.
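
For reference, a minimal sketch of the baseline dark channel prior pipeline that the abstract builds on is given below: He et al.'s formulation with a simple brightest-pixel atmospheric light estimator. The paper replaces that estimator with its segmentation-based method; the function names and parameter values here are illustrative assumptions.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels followed by a local minimum filter."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_atmospheric_light(img, dark, top_fraction=0.001):
    """Average the pixels with the brightest dark-channel values (classic estimator)."""
    n = max(1, int(dark.size * top_fraction))
    idx = np.argsort(dark.ravel())[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Transmission t = 1 - omega * dark(I / A), then recover J = (I - A) / max(t, t0) + A.
    img: float image in [0, 1] with shape (H, W, 3)."""
    dark = dark_channel(img, patch)
    A = estimate_atmospheric_light(img, dark)
    t = 1.0 - omega * dark_channel(img / A, patch)
    return (img - A) / np.maximum(t, t0)[..., None] + A
```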

10.
Neural Netw ; 173: 106165, 2024 May.
Article in English | MEDLINE | ID: mdl-38340469

ABSTRACT

Single image dehazing is a challenging computer vision task that underpins other high-level applications, e.g., object detection, navigation, and positioning systems. Recently, most existing dehazing methods have followed a "black box" recovery paradigm that obtains the haze-free image from its corresponding hazy input by network learning. Unfortunately, these algorithms ignore the effective utilization of relevant image priors and non-uniform haze distribution problems, causing insufficient or excessive dehazing performance. In addition, they pay little attention to image detail preservation during the dehazing process, thus inevitably producing blurry results. To address the above problems, we propose a novel priors-assisted dehazing network (called PADNet), which fully explores relevant image priors from two new perspectives: attention supervision and detail preservation. For one thing, we leverage the dark channel prior to constrain the attention map generation that denotes the haze pixel position information, thereby better extracting non-uniform feature distributions from hazy images. For another, we find that the residual channel prior of the hazy images contains rich structural information, so it is natural to incorporate it into our dehazing architecture to preserve more structural detail information. Furthermore, since the attention map and dehazed image are simultaneously predicted during the convergence of our model, a self-paced semi-curriculum learning strategy is utilized to alleviate the learning ambiguity. Extensive quantitative and qualitative experiments on several benchmark datasets demonstrate that our PADNet can perform favorably against existing state-of-the-art methods. The code will be available at https://github.com/leandepk/PADNet.


Subjects
Algorithms; Benchmarking; Learning
11.
Sensors (Basel) ; 24(2)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38276379

ABSTRACT

Image dehazing has become a crucial prerequisite for most outdoor computer vision applications. The majority of existing dehazing models can address the haze removal problem; however, they fail to preserve colors and fine details. To address this problem, we introduce a novel high-performing attention-based dehazing model (ADMC2-net) that successfully incorporates both RGB and HSV color spaces to maintain color properties. The model consists of two parallel densely connected sub-models (RGB and HSV) followed by a new efficient attention module. This attention module comprises pixel-attention and channel-attention mechanisms to obtain more haze-relevant features. Experimental analyses validate that our proposed model, ADMC2-net, achieves superior results on synthetic and real-world datasets and outperforms most state-of-the-art methods.
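
A pixel-attention mechanism of the kind mentioned above typically learns a single-channel spatial weight map in [0, 1] and multiplies it onto the features so that haze-relevant regions receive more weight. The sketch below follows the common FFA-Net-style formulation and is an illustrative assumption, not ADMC2-net's exact module.

```python
import torch.nn as nn

class PixelAttention(nn.Module):
    """Learn a (N, 1, H, W) weight map and reweight every spatial position."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels // 8, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 8, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.net(x)  # weight map broadcasts across the channel dimension
```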

12.
Neural Netw ; 172: 106107, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38232424

ABSTRACT

Image dehazing has received extensive research attention as images collected in hazy weather are limited by low visibility and information dropout. Recently, disentangled representation learning has made excellent progress in various vision tasks. However, existing networks for low-level vision tasks lack efficient feature interaction and delivery mechanisms in the disentanglement process or an evaluation mechanism for the degree of decoupling in the reconstruction process, rendering direct application to image dehazing challenging. We propose a self-guided disentangled representation learning (SGDRL) algorithm with a self-guided disentangled network to realize multi-level progressive feature decoupling through sharing and interaction. The self-guided disentangled (SGD) network extracts image features using the multi-layer backbone network, and attribute features are weighted using the self-guided attention mechanism for the backbone features. In addition, we introduce a disentanglement-guided (DG) module to evaluate the degree of feature decomposition and guide the feature fusion process in the reconstruction stage. Accordingly, we develop SGDRL-based unsupervised and semi-supervised single image dehazing networks. Extensive experiments demonstrate the superiority of the proposed method for real-world image dehazing. The source code is available at https://github.com/dehazing/SGDRL.


Subjects
Algorithms; Learning; Software; Weather
13.
Sensors (Basel) ; 23(21)2023 Nov 02.
Article in English | MEDLINE | ID: mdl-37960616

ABSTRACT

A binocular vision-based approach for the restoration of images captured in a scattering medium is presented. The scene depth is computed by triangulation using stereo matching. Next, the atmospheric parameters of the medium are determined with an introduced estimator based on the Monte Carlo method. Finally, image restoration is performed using an atmospheric optics model. The proposed approach effectively suppresses optical scattering effects without introducing noticeable artifacts in processed images. The accuracy of the proposed approach in the estimation of atmospheric parameters and image restoration is evaluated using synthetic hazy images constructed from a well-known database. The practical viability of our approach is also confirmed through a real experiment for depth estimation, atmospheric parameter estimation, and image restoration in a scattering medium. The results highlight the applicability of our approach in computer vision applications in challenging atmospheric conditions.
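
The pipeline described above combines stereo triangulation with the atmospheric optics model. A minimal sketch of those two steps is given below; the function names and the simple exponential transmission model are illustrative, and the paper's Monte Carlo estimator for the atmospheric parameters is not reproduced.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m, eps=1e-6):
    """Triangulation for a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / np.maximum(disparity, eps)

def restore_with_depth(hazy, depth, A, beta, t_min=0.1):
    """Atmospheric optics model: t = exp(-beta * Z), then J = (I - A) / t + A.
    A (atmospheric light) and beta (scattering coefficient) are assumed known."""
    t = np.maximum(np.exp(-beta * depth), t_min)[..., None]
    return (hazy - A) / t + A
```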

14.
Sensors (Basel) ; 23(22)2023 Nov 17.
Article in English | MEDLINE | ID: mdl-38005632

ABSTRACT

The tunnel construction area poses significant challenges for the use of vision technology due to the presence of nonhomogeneous haze fields and low-contrast targets. However, existing dehazing algorithms display weak generalization, leading to dehazing failures, incomplete dehazing, or color distortion in this scenario. Therefore, an adversarial dual-branch convolutional neural network (ADN) is proposed in this paper to deal with the above challenges. The ADN uses two branches, a knowledge transfer sub-network and a multi-scale dense residual sub-network, to process the hazy image and then aggregates their channels. The aggregated output is passed through a discriminator that distinguishes real from generated images, motivating the network to improve its performance. Additionally, a tunnel haze field simulation dataset (Tunnel-HAZE) is established based on the characteristics of nonhomogeneous dust distribution and artificial light sources in the tunnel. Comparative experiments with existing advanced dehazing algorithms indicate improvements of 4.07 dB in PSNR (Peak Signal-to-Noise Ratio) and 0.032 in SSIM (Structural Similarity). Furthermore, a binocular measurement experiment conducted in a simulated tunnel environment demonstrated a 50.5% reduction in the relative measurement error compared to the hazy image. The results demonstrate the effectiveness and application potential of the proposed method in tunnel construction.

15.
Sensors (Basel) ; 23(19)2023 Sep 27.
Article in English | MEDLINE | ID: mdl-37836932

ABSTRACT

To address the color distortion and loss of detail information in most dehazing algorithms, an end-to-end image dehazing network based on multi-scale feature enhancement is proposed. Firstly, a feature extraction enhancement module is used to capture the detailed information of hazy images and expand the receptive field. Secondly, the channel attention mechanism and pixel attention mechanism of the feature fusion enhancement module are used to dynamically adjust the weights of different channels and pixels. Thirdly, a context enhancement module is used to enhance the contextual semantic information, suppress redundant information, and obtain a more detailed haze density image. As a result, our method removes haze while preserving image color and detail. The proposed method achieved a PSNR of 33.74 dB, an SSIM of 0.9843, and an LPIPS distance of 0.0040 on the SOTS-outdoor dataset. Compared with representative dehazing methods, it demonstrates better dehazing performance on synthetic hazy images. Together with dehazing experiments on real hazy images, the results show that our method effectively improves dehazing performance while preserving more image details and achieving color fidelity.

16.
Sensors (Basel) ; 23(17)2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37687940

ABSTRACT

The degradation of visual quality in remote sensing images caused by haze presents significant challenges in interpreting and extracting essential information. To effectively mitigate the impact of haze on image quality, we propose an unsupervised generative adversarial network specifically designed for remote sensing image dehazing. This network includes two generators with identical structures and two discriminators with identical structures. One generator is focused on image dehazing, while the other generates images with added haze. The two discriminators are responsible for distinguishing whether an image is real or generated. The generator, employing an encoder-decoder architecture, is designed based on the proposed multi-scale feature-extraction modules and attention modules. The proposed multi-scale feature-extraction module, comprising three distinct branches, aims to extract features with varying receptive fields. Each branch comprises dilated convolutions and attention modules. The proposed attention module includes both channel and spatial attention components. It guides the feature-extraction network to emphasize haze and texture within the remote sensing image. For enhanced generator performance, a multi-scale discriminator is also designed with three branches. Furthermore, an improved loss function is introduced by incorporating color-constancy loss into the conventional loss framework. In comparison to state-of-the-art methods, the proposed approach achieves the highest peak signal-to-noise ratio and structural similarity index metrics. These results convincingly demonstrate the superior performance of the proposed method in effectively removing haze from remote sensing images.
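
The color-constancy loss added to the conventional loss framework above is often implemented with a gray-world assumption: the mean intensities of the three color channels of the dehazed output should stay close to one another. The sketch below shows that common form as a hedged illustration; the paper may define its term differently.

```python
def color_constancy_loss(img):
    """Penalize pairwise differences between the per-image R, G, B channel means.
    img: tensor of shape (N, 3, H, W) with values in [0, 1]."""
    mean_rgb = img.mean(dim=(2, 3))                     # (N, 3)
    d_rg = (mean_rgb[:, 0] - mean_rgb[:, 1]) ** 2
    d_rb = (mean_rgb[:, 0] - mean_rgb[:, 2]) ** 2
    d_gb = (mean_rgb[:, 1] - mean_rgb[:, 2]) ** 2
    return (d_rg + d_rb + d_gb).mean()
```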

17.
J Imaging ; 9(9)2023 Sep 11.
Article in English | MEDLINE | ID: mdl-37754947

ABSTRACT

Image dehazing, a fundamental problem in computer vision, involves the recovery of clear visual cues from images marred by haze. Over recent years, deploying deep learning paradigms has spurred significant strides in image dehazing tasks. However, many dehazing networks aim to enhance performance by adopting intricate network architectures, complicating training, inference, and deployment procedures. This study proposes an end-to-end U-Net dehazing network model with recursive gated convolution and attention mechanisms to improve performance while maintaining a lean network structure. In our approach, we leverage an improved recursive gated convolution mechanism to substitute the original U-Net's convolution blocks with residual blocks and apply the SK fusion module to revamp the skip connection method. We designate this novel U-Net variant as the Dehaze Recursive Gated U-Net (DRGNet). Comprehensive testing across public datasets demonstrates the DRGNet's superior performance in dehazing quality, detail retrieval, and objective evaluation metrics. Ablation studies further confirm the effectiveness of the key design elements.

18.
Neural Netw ; 167: 1-9, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37598543

ABSTRACT

Most existing learning-based dehazing methods require a large and diverse collection of paired hazy/clean images, which is intractable to obtain. Therefore, existing dehazing methods resort to training on synthetic images, which may result in a domain shift when treating real scenes. In this paper, we propose a novel lightweight unsupervised dehazing network that directly predicts clear images from the original hazy images without any reference images; it consists of an interactive fusion module (IFM) and an iterative optimization module (IOM). Specifically, IFM interactively fuses multi-level features to make up for the missing information among deep and shallow features, while IOM iteratively optimizes dehazed results to obtain pleasing visual effects. In particular, based on the observation that hazy images usually suffer from quality degradation, four non-reference visual-quality-driven loss functions are designed to enable the network to be trained in an unsupervised way, including a dark channel loss, a contrast loss, a saturation loss, and an edge sharpness loss. Extensive experiments on two synthetic datasets and one real-world dataset demonstrate that our method performs favorably against state-of-the-art unsupervised dehazing methods and even matches some supervised methods in terms of metrics such as PSNR, SSIM, and UQI.


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer
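
The abstract names four non-reference losses but does not give their formulas here; the sketch below shows plausible forms for three of them (dark channel, saturation, contrast) purely as an illustration of how such quality-driven terms can be computed from the network output alone. The exact definitions used in the paper may differ.

```python
import torch.nn.functional as F

def dark_channel_loss(img, patch=15):
    """A well-dehazed image should have a small dark channel (dark channel prior)."""
    channel_min = img.min(dim=1, keepdim=True).values
    dark = -F.max_pool2d(-channel_min, kernel_size=patch, stride=1, padding=patch // 2)
    return dark.mean()

def saturation_loss(img, eps=1e-6):
    """Haze reduces saturation, so reward a larger per-pixel max-min gap."""
    sat = (img.max(dim=1).values - img.min(dim=1).values) / (img.max(dim=1).values + eps)
    return -sat.mean()

def contrast_loss(img):
    """Reward a larger per-image standard deviation as a simple contrast proxy."""
    return -img.std(dim=(1, 2, 3)).mean()
```
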
19.
Sensors (Basel) ; 23(13)2023 Jun 27.
Article in English | MEDLINE | ID: mdl-37447828

ABSTRACT

Image dehazing based on convolutional neural networks has achieved significant success; however, there are still some problems, such as incomplete dehazing, color deviation, and loss of detailed information. To address these issues, in this study, we propose a multi-scale dehazing network with dark channel priors (MSDN-DCP). First, we introduce a feature extraction module (FEM), which effectively enhances the ability of feature extraction and correlation through a two-branch residual structure. Second, a feature fusion module (FFM) is devised to combine multi-scale features adaptively at different stages. Finally, we propose a dark channel refinement module (DCRM) that implements the dark channel prior theory to guide the network in learning the features of the hazy region, ultimately refining the feature map that the network extracted. We conduct experiments using the Haze4K dataset, and the achieved results include a peak signal-to-noise ratio of 29.57 dB and a structural similarity of 98.1%. The experimental results show that the MSDN-DCP can achieve superior dehazing compared to other algorithms in terms of objective metrics and visual perception.


Subjects
Algorithms; Benchmarking; Learning; Neural Networks, Computer; Signal-To-Noise Ratio
20.
Front Bioeng Biotechnol ; 11: 1054991, 2023.
Article in English | MEDLINE | ID: mdl-37274169

ABSTRACT

Background: Osteoporosis is a common degenerative disease with high incidence among aging populations. However, in regular radiographic diagnostics, asymptomatic osteoporosis is often overlooked and does not include tests for bone mineral density or bone trabecular condition. Therefore, we proposed a highly generalized classifier for osteoporosis radiography based on the multiscale fractal, lacunarity, and entropy distributions. Methods: We collected a total of 104 radiographs (92 for training and 12 for testing) of lumbar spine L4 and divided them into three groups (normal, osteopenia, and osteoporosis). In parallel, 174 radiographs (116 for training and 58 for testing) of calcaneus from health and osteoporotic fracture groups were collected. The texture feature data of all the radiographs were pulled out and analyzed. The Davies-Bouldin index was applied to optimize hyperparameters of feature counting. Neighborhood component analysis was performed to reduce feature dimension and increase generalization. A support vector machine classifier was trained with only the most effective six features for each binary classification scenario. The accuracy and sensitivity performance were estimated by calculating the area under the curve. Results: Interpretable feature trends of osteoporotic pathological changes were depicted. On the spine test dataset, the accuracy and sensitivity of binary classifiers were 0.851 (95% CI: 0.730-0.922), 0.813 (95% CI: 0.718-0.878), and 0.936 (95% CI: 0.826-1) for osteoporosis diagnosis; 0.721 (95% CI: 0.578-0.824), 0.675 (95% CI: 0.563-0.772), and 0.774 (95% CI: 0.635-0.878) for osteopenia diagnosis; and 0.935 (95% CI: 0.830-0.968), 0.928 (95% CI: 0.863-0.963), and 0.910 (95% CI: 0.746-1) for osteoporosis diagnosis from osteopenia. On the calcaneus test dataset, they were 0.767 (95% CI: 0.629-0.879), 0.672 (95% CI: 0.545-0.793), and 0.790 (95% CI: 0.621-0.923) for osteoporosis diagnosis. Conclusion: This method showed the capacity of resisting disturbance on lateral spine radiographs and high generalization on the calcaneus dataset. Pixel-wise texture features not only helped to understand osteoporosis on radiographs better but also shed new light on computer-aided osteopenia and osteoporosis diagnosis.
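
The classification pipeline described above (feature standardization, neighborhood component analysis for dimensionality reduction, and a support vector machine scored by AUC) can be sketched with scikit-learn roughly as follows. The synthetic data, the parameter choices, and the six-component NCA step are stand-ins for illustration, not the study's actual features or settings.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Stand-in for the multiscale fractal / lacunarity / entropy texture features.
X, y = make_classification(n_samples=104, n_features=24, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    NeighborhoodComponentsAnalysis(n_components=6, random_state=0),
    SVC(kernel="rbf", probability=True, random_state=0),
)
clf.fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC on the held-out split: {auc:.3f}")
```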
