Results 1 - 7 of 7
1.
Comput Methods Programs Biomed ; 240: 107718, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37451230

ABSTRACT

BACKGROUND AND OBJECTIVES: Cervical cancer affects around 0.5 million women per year, resulting in over 0.3 million fatalities. Repeated screening for cervical cancer is therefore of utmost importance, and computer-assisted diagnosis is key to scaling it up. Current recognition algorithms, however, perform poorly on whole-slide image (WSI) analysis, fail to generalize across staining methods and unevenly distributed subtype images, and provide sub-optimal clinical-level interpretations. Herein, we developed CervixFormer, an end-to-end, multi-scale Swin transformer-based adversarial ensemble learning framework to assess pre-cancerous and cancer-specific cervical malignant lesions on WSIs. METHODS: The proposed framework consists of (1) a self-attention generative adversarial network (SAGAN) that generates synthetic images during patch-level training to address the class imbalance problem; (2) a multi-scale transformer-based ensemble learning method for cell identification at various stages, including atypical squamous cells (ASC) and atypical squamous cells of undetermined significance (ASCUS), which have not been demonstrated in previous studies; and (3) a fusion model that concatenates the ensemble-based results and produces the final outcome. RESULTS: The proposed method was first evaluated on a private dataset of 717 annotated samples from six classes, obtaining a high recall and precision of 0.940 and 0.934, respectively, in roughly 1.2 minutes. To further examine the generalizability of CervixFormer, we evaluated it on four independent, publicly available datasets: the CRIC cervix, Mendeley LBC, SIPaKMeD Pap Smear, and Cervix93 Extended Depth of Field image datasets. CervixFormer performed consistently well on two-, three-, four-, and six-class classification of smear- and cell-level datasets.
For clinical interpretation, we used GradCAM to visualize a coarse localization map highlighting important regions in the WSI. Notably, CervixFormer extracts features mostly from the cell nucleus and partially from the cytoplasm. CONCLUSIONS: Compared with existing state-of-the-art benchmark methods, CervixFormer performs better in terms of recall, accuracy, and computing time.
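The abstract does not detail how the fusion model combines the ensemble members' outputs; a minimal hypothetical sketch, assuming the common approach of averaging per-model class probabilities, could look like this (the `fuse_predictions` helper and the example probabilities are illustrative, not the paper's implementation):

```python
# Hypothetical sketch of an ensemble fusion step: average the class-probability
# vectors produced by each ensemble member, then take the argmax.
def fuse_predictions(member_probs):
    """member_probs: list of per-model probability vectors of equal length."""
    n_models = len(member_probs)
    n_classes = len(member_probs[0])
    fused = [sum(p[c] for p in member_probs) / n_models
             for c in range(n_classes)]
    return fused.index(max(fused)), fused

# Two hypothetical models voting over three cervical-cell classes
label, fused = fuse_predictions([[0.7, 0.2, 0.1], [0.5, 0.4, 0.1]])
```

Averaging keeps the fused vector a valid probability distribution; a learned fusion layer, as the paper's wording suggests, would replace the fixed average with trainable weights.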


Subject(s)
Papanicolaou Test , Uterine Cervical Neoplasms , Female , Humans , Uterine Cervical Neoplasms/diagnostic imaging , Early Detection of Cancer/methods , Cervix Uteri/diagnostic imaging , Cervix Uteri/pathology , Diagnosis, Computer-Assisted
2.
Sci Rep ; 12(1): 5795, 2022 04 06.
Article in English | MEDLINE | ID: mdl-35388054

ABSTRACT

The abrupt and continuous scale variation in a crowded scene makes it challenging to improve crowd counting accuracy in an image. Existing crowd counting techniques generally use multi-column or single-column dilated convolution to tackle the scale variation caused by perspective distortion. However, the columns of a multi-column network tend to learn near-identical features, while the standard dilated convolution (SDC), despite its expanded receptive field, has a sparse pixel sampling rate that makes it difficult to obtain relevant contextual information. Moreover, features at multiple scales are not extracted unless a costly inception-based model is used. To mitigate these drawbacks of SDC, we propose a hierarchical dense dilated deep pyramid feature extraction convolutional neural network (CNN) for single-image crowd counting (HDPF). It comprises three modules: a general feature extraction module (GFEM), a deep pyramid feature extraction module (PFEM), and a fusion module (FM). The GFEM obtains task-independent general features, whereas the PFEM obtains the relevant contextual information through the dense pixel sampling rate of its densely connected dense stacked dilated convolutional modules (DSDCs). Owing to the dense connections among the DSDCs, the final feature map acquires multi-scale information with an expanded receptive field compared to SDC. The dense pyramid structure effectively propagates the features extracted by lower dilated convolutional layers (DCLs) to the middle and higher DCLs, which results in better estimation accuracy. The FM fuses the incoming features extracted by the other modules. The proposed technique is tested through simulations on three well-known datasets: Shanghaitech (Part-A), Shanghaitech (Part-B), and Venice. The results justify its relative effectiveness on the selected performance metrics.
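The receptive-field argument above can be made concrete with a small sketch. Assuming 3x3 kernels at stride 1 (a common configuration, not stated in the abstract), the effective receptive field of stacked dilated convolutions grows with the sum of the dilation rates, while the pixel sampling stays dense:

```python
# Hypothetical sketch: effective receptive field of stacked KxK dilated
# convolutions at stride 1. Each layer with dilation d widens the field
# by (K - 1) * d, so a stack of increasing dilations expands the field
# without the sparse sampling of a single large dilation.
def receptive_field(kernel, dilations):
    return 1 + sum((kernel - 1) * d for d in dilations)

rf_standard = receptive_field(3, [4])      # single SDC layer, dilation 4
rf_dense = receptive_field(3, [1, 2, 4])   # dense stack of dilations 1, 2, 4
```

The stacked version covers a wider span (15 vs. 9 pixels here) while its intermediate layers sample every pixel, which is the contextual-information advantage the abstract attributes to the DSDCs.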


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Gene Fusion , Image Processing, Computer-Assisted/methods , Specimen Handling
4.
Front Neurosci ; 16: 1050777, 2022.
Article in English | MEDLINE | ID: mdl-36699527

ABSTRACT

Alzheimer's disease (AD) is a progressive neurodegenerative disease affecting the elderly population all over the world. Detecting the disease at an early stage in the absence of a large-scale annotated dataset is crucial to clinical treatment for the prevention and early detection of AD. In this study, we propose a transfer learning-based approach to classify the various stages of AD. The proposed model can distinguish between normal control (NC), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and AD. To this end, we apply tissue segmentation to extract the gray matter from MRI scans obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. We use this gray matter to tune a pre-trained VGG architecture while freezing the features learned on the ImageNet database, achieved by adding a layer together with step-wise freezing of the existing blocks in the network. This not only assists transfer learning but also helps the network learn new features efficiently. Extensive experiments are conducted, and the results demonstrate the superiority of the proposed approach.
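The step-wise freezing idea can be sketched as a simple schedule over the network's blocks. The `freezing_schedule` helper below is hypothetical (the abstract does not give the exact schedule); it assumes the common convention that later blocks are unfrozen first while early, generic ImageNet features stay frozen:

```python
# Hypothetical sketch of step-wise block freezing for transfer learning:
# all pre-trained blocks start frozen; at each fine-tuning stage, one more
# block (counting from the output end) becomes trainable.
def freezing_schedule(n_blocks, stage):
    """Return per-block trainable flags: at stage s, the last s blocks train."""
    return [i >= n_blocks - stage for i in range(n_blocks)]

# A VGG-like network with 5 convolutional blocks at fine-tuning stage 2:
# the two blocks nearest the classifier are trainable, the rest frozen.
flags = freezing_schedule(5, 2)
```

In a deep-learning framework the flags would set each block's parameters' `requires_grad` (or equivalent); the scheduling logic itself is framework-independent.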

5.
Sensors (Basel) ; 21(10)2021 May 17.
Article in English | MEDLINE | ID: mdl-34067707

ABSTRACT

Crowd counting is a challenging task due to large perspective, density, and scale variations. CNN-based crowd counting techniques have achieved significant performance in sparse to dense environments. However, crowd counting in scenes with high perspective variation is harder because the same number of pixels can represent different density levels, and such large variations in object size within the same spatial area make accurate counting difficult. Further, although existing CNN-based crowd counting methods extract rich deep features, these features are used locally and dissipate while propagating through intermediate layers, resulting in high counting errors, especially in dense scenes with high perspective variation. In addition, class-specific responses along the channel dimension are underestimated. To address the above-mentioned issues, we propose a CNN-based dense feature extraction network for accurate crowd counting. The proposed model comprises three main modules: (1) a backbone network, (2) dense feature extraction modules (DFEMs), and (3) a channel attention module (CAM). The backbone network obtains general features with strong transfer learning ability. Each DFEM is composed of multiple densely connected sub-modules called dense stacked convolution modules (DSCMs), so that features extracted by lower and middle-lower layers propagate to higher layers through dense connections. In addition, combinations of the task-independent general features obtained by the former modules and the task-specific features obtained by the later ones yield high counting accuracy in scenes with large perspective variation. Further, to exploit the class-specific response between background and foreground, the CAM is incorporated at the end to obtain high-level features along the channel dimension for better counting accuracy.
We evaluated the proposed method on three well-known datasets: Shanghaitech (Part-A), Shanghaitech (Part-B), and Venice. The results justify its relative effectiveness on the selected performance metrics compared to state-of-the-art techniques.
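The channel attention idea can be illustrated with a minimal numeric sketch. The squeeze-and-gate form below (global average per channel, sigmoid gate, channel rescaling) is one common realization and is assumed here for illustration; the abstract does not specify the CAM's exact internals:

```python
import math

# Hypothetical sketch of a channel attention step: squeeze each channel's
# feature map to a scalar (global average), pass it through a sigmoid gate,
# and rescale the whole channel by the resulting weight. Channels with a
# stronger (foreground-like) response receive larger weights.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(channels):
    """channels: list of 2-D feature maps, each a list of rows of floats."""
    weights = []
    for fmap in channels:
        flat = [v for row in fmap for v in row]
        weights.append(sigmoid(sum(flat) / len(flat)))  # squeeze + gate
    scaled = [[[v * w for v in row] for row in fmap]
              for fmap, w in zip(channels, weights)]
    return scaled, weights

# Two 2x2 channels: a strong (foreground-like) one and an all-zero one
scaled, w = channel_attention([[[2.0, 2.0], [2.0, 2.0]],
                               [[0.0, 0.0], [0.0, 0.0]]])
```

A trained CAM would learn the gating from data (e.g., via small fully connected layers) instead of this fixed sigmoid-of-mean, but the reweighting mechanism is the same.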

6.
Sensors (Basel) ; 20(1)2019 Dec 19.
Article in English | MEDLINE | ID: mdl-31861734

ABSTRACT

Traditional handcrafted crowd-counting techniques are currently being transformed into intelligent crowd-counting techniques via machine learning and artificial intelligence. This paradigm shift offers many advanced features for the adaptive monitoring and control of dynamic crowd gatherings. Adaptive monitoring, identification/recognition, and management of diverse crowd gatherings can improve many crowd-management tasks in terms of efficiency, capacity, reliability, and safety. Despite many challenges, such as occlusion, clutter, irregular object distribution, and nonuniform object scale, convolutional neural networks are a promising technology for intelligent image crowd counting and analysis. In this article, we review, categorize, and analyze (in terms of limitations and distinctive features) the latest convolutional-neural-network-based crowd-counting techniques and provide a detailed evaluation of their performance. We also highlight their potential applications. Finally, we conclude by presenting our key observations, providing a strong foundation for future research directions in designing convolutional-neural-network-based crowd-counting techniques. Further, the article discusses new advances toward understanding crowd counting in smart cities using the Internet of Things (IoT).

7.
Sensors (Basel) ; 15(11): 29149-81, 2015 Nov 17.
Article in English | MEDLINE | ID: mdl-26593924

ABSTRACT

Most applications of underwater wireless sensor networks (UWSNs) demand reliable data delivery over long periods in an efficient and timely manner. However, the harsh and unpredictable underwater environment makes routing more challenging than in terrestrial WSNs. Most existing schemes deploy mobile sensors or a mobile sink (MS) to maximize data gathering, but the relatively high deployment cost prevents their usage in most applications. Thus, this paper presents an autonomous underwater vehicle (AUV)-aided efficient data-gathering (AEDG) routing protocol for reliable data delivery in UWSNs. To prolong the network lifetime, AEDG employs an AUV for data collection from gateways and uses a shortest path tree (SPT) algorithm to associate sensor nodes with the gateways. The AEDG protocol also limits the number of nodes associated with each gateway to minimize the network's energy consumption and to prevent the gateways from overloading. Moreover, gateways are rotated over time to balance the network's energy consumption. To prevent data loss, AEDG allows dynamic data collection at the AUV depending on the limited number of member nodes associated with each gateway. We also develop a sub-optimal elliptical trajectory for the AUV by using a connected dominating set (CDS) to further facilitate network throughput maximization. The performance of AEDG is validated via simulations, which demonstrate its effectiveness in comparison to two existing UWSN routing protocols in terms of the selected performance metrics.
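The SPT-based association with a per-gateway membership cap can be sketched in a few lines. The sketch below is hypothetical: it uses hop count via multi-source BFS as the shortest-path metric (the paper's SPT may weight links differently), and nodes rejected by a full gateway are simply left unassigned rather than reassigned:

```python
from collections import deque

# Hypothetical sketch of SPT-style gateway association with a membership
# cap: a multi-source BFS from all gateways assigns each sensor to its
# hop-nearest gateway; a gateway that has reached the cap rejects further
# members (left unassigned in this simplified sketch).
def associate(adj, gateways, cap):
    """adj: node -> list of neighbors; returns {sensor: gateway}."""
    dist, owner = {}, {}
    members = {g: [] for g in gateways}
    q = deque()
    for g in gateways:
        dist[g], owner[g] = 0, g
        q.append(g)
    while q:                                   # multi-source BFS
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v], owner[v] = dist[u] + 1, owner[u]
                q.append(v)
    assignment = {}
    for v in sorted(dist, key=dist.get):       # admit closest nodes first
        g = owner[v]
        if v not in gateways and len(members[g]) < cap:
            members[g].append(v)
            assignment[v] = g
    return assignment

# One gateway g1, three sensors; cap of 2 leaves the farthest node out
adj = {'g1': ['a', 'b'], 'a': ['g1'], 'b': ['g1', 'c'], 'c': ['b']}
res = associate(adj, ['g1'], cap=2)
```

Admitting nodes in order of distance mirrors the abstract's goal of capping membership to avoid overloading a gateway while keeping the associations energy-cheap.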


Subject(s)
Computer Communication Networks , Models, Theoretical , Wireless Technology , Algorithms , Oceans and Seas