Results 1 - 20 of 70
1.
J Med Imaging (Bellingham) ; 11(2): 024502, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38510544

ABSTRACT

Purpose: The diagnosis of primary bone tumors is challenging as the initial complaints are often non-specific. The early detection of bone cancer is crucial for a favorable prognosis. Lesions may be found incidentally on radiographs obtained for other reasons, but these early indications are often missed. We propose an automatic algorithm to detect bone lesions in conventional radiographs to facilitate early diagnosis. Detecting lesions in such radiographs is challenging. First, the prevalence of bone cancer is very low; any method must show high precision to avoid a prohibitive number of false alarms. Second, radiographs taken in health maintenance organizations (HMOs) or emergency departments (EDs) suffer from inherent diversity due to different X-ray machines, technicians, and imaging protocols. This diversity poses a major challenge to any automatic analysis method. Approach: We propose training an off-the-shelf object detection algorithm to detect lesions in radiographs. The novelty of our approach stems from a dedicated preprocessing stage that directly addresses the diversity of the data. The preprocessing consists of self-supervised region-of-interest detection using a vision transformer (ViT), and foreground-based histogram equalization that enhances contrast in the relevant regions only. Results: We evaluate our method via a retrospective study that analyzes bone tumors on radiographs acquired from January 2003 to December 2018 under diverse acquisition protocols. Our method obtains 82.43% sensitivity at a 1.5% false-positive rate and surpasses existing preprocessing methods. For lesion detection, our method achieves 82.5% accuracy and an IoU of 0.69. Conclusions: The proposed preprocessing method enables effective handling of the inherent diversity of radiographs acquired in HMOs and EDs.
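
The foreground-restricted contrast enhancement described above can be pictured as ordinary histogram equalization whose transfer function is computed only from pixels inside a detected region of interest. The sketch below is a minimal illustration under assumed names and a fixed 8-bit range, not the paper's implementation, and the ViT-based ROI detection that would produce the mask is not reproduced here.

```python
import numpy as np

def foreground_histogram_equalization(image, mask, n_bins=256):
    """Equalize contrast using only pixels inside a foreground mask.

    image : 2-D uint8 array (grayscale radiograph)
    mask  : boolean array of the same shape, True for foreground pixels
    """
    fg = image[mask]
    # Histogram and cumulative distribution computed from foreground pixels only
    hist, _ = np.histogram(fg, bins=n_bins, range=(0, 255))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12)
    # Map intensities through the foreground-derived transfer function
    lut = np.round(cdf * 255).astype(np.uint8)
    equalized = lut[image]
    # Leave background pixels untouched
    out = image.copy()
    out[mask] = equalized[mask]
    return out
```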

2.
IEEE Trans Image Process ; 33: 108-122, 2024.
Article in English | MEDLINE | ID: mdl-38039164

ABSTRACT

We present two deep unfolding neural networks for the simultaneous tasks of background subtraction and foreground detection in video. Unlike conventional neural networks based on deep feature extraction, we incorporate domain-knowledge models by considering a masked variation of the robust principal component analysis (RPCA) problem. With this approach, we separate video clips into low-rank and sparse components, respectively corresponding to the backgrounds and the foreground masks indicating the presence of moving objects. Our models, coined ROMAN-S and ROMAN-R, map the iterations of two alternating direction method of multipliers (ADMM) schemes to trainable convolutional layers, and the proximal operators are mapped to non-linear activation functions with trainable thresholds. This approach leads to lightweight networks with enhanced interpretability that can be trained on limited data. In ROMAN-S, the correlation in time of successive binary masks is controlled with side information based on ℓ1-ℓ1 minimization. ROMAN-R enhances the foreground detection by learning a dictionary of atoms to represent the moving foreground in a high-dimensional feature space and by using reweighted ℓ1-ℓ1 minimization. Experiments are conducted on both synthetic and real video datasets, for which we also include an analysis of the generalization to unseen clips. Comparisons are made with existing deep unfolding RPCA neural networks, which do not use a mask formulation for the foreground, and with a 3D U-Net baseline. Results show that our proposed models outperform other deep unfolding networks, as well as the untrained optimization algorithms. ROMAN-R, in particular, is competitive with the U-Net baseline for foreground detection, with the additional advantages of providing video backgrounds and requiring substantially fewer training parameters and smaller training sets.
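
To make the unfolding idea concrete, the sketch below shows how a proximal operator becomes a trainable activation: a convolutional update followed by soft-thresholding with a learnable threshold, one such block per unrolled iteration. This is a generic PyTorch illustration of deep unfolding, not the ROMAN-S/ROMAN-R architecture; the class names, kernel size, and single-channel setup are assumptions.

```python
import torch
import torch.nn as nn

class LearnedSoftThreshold(nn.Module):
    """Proximal operator of the l1 norm with a trainable threshold,
    used as the non-linear activation of an unfolded iteration."""
    def __init__(self, init_threshold=0.1):
        super().__init__()
        self.threshold = nn.Parameter(torch.tensor(float(init_threshold)))

    def forward(self, x):
        return torch.sign(x) * torch.clamp(torch.abs(x) - self.threshold, min=0.0)

class UnfoldedSparseLayer(nn.Module):
    """One unrolled iteration: a convolutional update followed by the
    learned proximal step producing the sparse (foreground) estimate."""
    def __init__(self, channels=1):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.prox = LearnedSoftThreshold()

    def forward(self, residual):
        return self.prox(self.conv(residual))
```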

3.
Sci Rep ; 13(1): 16450, 2023 09 30.
Article in English | MEDLINE | ID: mdl-37777523

ABSTRACT

Post-operative urinary retention is a medical condition where patients cannot urinate despite having a full bladder. Ultrasound imaging of the bladder is used to estimate urine volume for early diagnosis and management of urine retention. Moreover, the use of bladder ultrasound can reduce the need for an indwelling urinary catheter and the risk of catheter-associated urinary tract infection. Wearable ultrasound devices combined with machine-learning based bladder volume estimation algorithms reduce the burdens of nurses in hospital settings and improve outpatient care. However, existing algorithms are memory and computation intensive, thereby demanding the use of expensive GPUs. In this paper, we develop and validate a low-compute memory-efficient deep learning model for accurate bladder region segmentation and urine volume calculation. B-mode ultrasound bladder images of 360 patients were divided into training and validation sets; another 74 patients were used as the test dataset. Our 1-bit quantized models with 4-bits and 6-bits skip connections achieved an accuracy within [Formula: see text] and [Formula: see text], respectively, of a full precision state-of-the-art neural network (NN) without any floating-point operations and with an [Formula: see text] and [Formula: see text] reduction in memory requirements to fit under 150 kB. The means and standard deviations of the volume estimation errors, relative to estimates from ground-truth clinician annotations, were [Formula: see text] ml and [Formula: see text] ml, respectively. This lightweight NN can be easily integrated on the wearable ultrasound device for automated and continuous monitoring of urine volume. Our approach can potentially be extended to other clinical applications, such as monitoring blood pressure and fetal heart rate.
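
A common way to obtain 1-bit weights while keeping a network trainable is sign binarization with a straight-through gradient estimator, sketched below in PyTorch. This illustrates the general binarization mechanism only; the paper's specific quantization scheme, including the 4-bit and 6-bit skip connections, is not reproduced, and the scaling by the mean weight magnitude is an assumption.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """Sign-binarize weights in the forward pass; pass gradients straight
    through in the backward pass (clipped to the linear region)."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1.0).float()

def binary_conv2d(x, weight, bias=None, **kw):
    """Convolution with 1-bit weights, rescaled by the mean weight magnitude."""
    alpha = weight.abs().mean()
    return torch.nn.functional.conv2d(x, BinarizeSTE.apply(weight) * alpha, bias, **kw)
```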


Subjects
Urinary Bladder, Urinary Retention, Humans, Urinary Bladder/diagnostic imaging, Algorithms, Neural Networks, Computer, Ultrasonography/methods, Urinary Retention/diagnostic imaging
4.
Bioinformatics ; 39(6)2023 06 01.
Article in English | MEDLINE | ID: mdl-37267161

ABSTRACT

MOTIVATION: Imaging Spatial Transcriptomics techniques characterize gene expression in cells in their native context by imaging barcoded probes for mRNA with single-molecule resolution. However, the need to acquire many rounds of high-magnification imaging data limits the throughput and impact of existing methods. RESULTS: We describe the Joint Sparse method for Imaging Transcriptomics (JSIT), an algorithm for decoding lower-magnification Imaging Spatial Transcriptomics data than that used in standard experimental workflows. JSIT incorporates codebook knowledge and sparsity assumptions into an optimization problem, which is less reliant on well-separated optical signals than current pipelines. Using experimental data obtained by performing Multiplexed Error-Robust Fluorescence in situ Hybridization on tissue from mouse brain, we demonstrate that JSIT enables improved throughput and recovery performance over standard decoding methods. AVAILABILITY AND IMPLEMENTATION: A software implementation of JSIT, together with example files, is available at https://github.com/jpbryan13/JSIT.
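
The decoding problem can be pictured as recovering a sparse, non-negative gene-abundance vector from a pixel's intensity trace given the barcode codebook. The sketch below solves that per-pixel problem with a projected ISTA iteration; it is a simplified stand-in with assumed names and a single-pixel formulation, not the JSIT algorithm, which couples pixels and operates on lower-magnification data.

```python
import numpy as np

def decode_pixel(y, codebook, lam=0.1, n_iter=200):
    """Sparse non-negative decoding of one pixel's intensity trace.

    y        : (n_rounds,) measured intensity across imaging rounds
    codebook : (n_rounds, n_genes) binary barcode matrix
    Returns a sparse abundance vector over genes (ISTA with a
    non-negativity constraint).
    """
    L = np.linalg.norm(codebook, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(codebook.shape[1])
    for _ in range(n_iter):
        grad = codebook.T @ (codebook @ x - y)
        # Proximal step: soft-threshold, then project onto the non-negative orthant
        x = np.maximum(x - grad / L - lam / L, 0.0)
    return x
```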


Subjects
Gene Expression Profiling, Transcriptome, Animals, Mice, In Situ Hybridization, Fluorescence/methods, Gene Expression Profiling/methods, Software, Algorithms
5.
IEEE J Biomed Health Inform ; 27(6): 2806-2817, 2023 06.
Article in English | MEDLINE | ID: mdl-37028312

ABSTRACT

Non-contact technology for monitoring the vital signs of multiple individuals, such as respiration and heartbeat, has been investigated in recent years due to the rising cardiopulmonary morbidity, the risk of disease transmission, and the heavy burden on medical staff. Frequency-modulated continuous wave (FMCW) radars have shown great promise in meeting these needs, even using a single-input-single-output (SISO) setup. However, contemporary techniques for non-contact vital signs monitoring (NCVSM) via SISO FMCW radar are based on simplistic models and have difficulty coping with noisy environments containing multiple objects. In this work, we first develop an extended model for multi-person NCVSM via SISO FMCW radar. Then, by utilizing the sparse nature of the modeled signals in conjunction with typical human cardiopulmonary features, we present accurate localization and NCVSM of multiple individuals in a cluttered scenario, even with only a single channel. Specifically, we provide a joint-sparse recovery mechanism to localize people and develop a robust method for NCVSM called Vital Signs-based Dictionary Recovery (VSDR), which uses a dictionary-based approach to search for the respiration and heartbeat rates over high-resolution grids corresponding to human cardiopulmonary activity. The advantages of our method are illustrated through examples that combine the proposed model with in-vivo data of 30 individuals. We demonstrate accurate human localization in a noisy scenario that includes both static and vibrating objects and show that our VSDR approach outperforms existing NCVSM techniques based on several statistical metrics. The findings support the widespread use of FMCW radars with the proposed algorithms in healthcare.
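
The dictionary-based rate search at the core of this kind of method can be sketched as scoring each candidate rate on a fine grid by how much signal energy its in-phase/quadrature sinusoidal atoms capture, then picking the best-scoring rate. The function below is an illustrative simplification with assumed names and a plain sinusoidal dictionary; the paper's joint-sparse localization and full VSDR formulation are not reproduced.

```python
import numpy as np

def estimate_rate(signal, fs, rate_grid_hz):
    """Pick the respiration/heartbeat rate whose sinusoidal dictionary atoms
    best match the radar displacement signal.

    signal       : (n,) slow-time phase/displacement samples
    fs           : slow-time sampling rate in Hz
    rate_grid_hz : candidate rates, e.g. 0.1-0.5 Hz for respiration
    """
    t = np.arange(len(signal)) / fs
    x = signal - signal.mean()
    scores = []
    for f in rate_grid_hz:
        atom_c = np.cos(2 * np.pi * f * t)
        atom_s = np.sin(2 * np.pi * f * t)
        # Energy captured by the in-phase/quadrature pair at this rate
        scores.append(np.dot(x, atom_c) ** 2 + np.dot(x, atom_s) ** 2)
    return rate_grid_hz[int(np.argmax(scores))]
```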


Subjects
Radar, Signal Processing, Computer-Assisted, Humans, Vital Signs, Monitoring, Physiologic/methods, Heart Rate, Algorithms
6.
Ultrasound Med Biol ; 49(3): 677-698, 2023 03.
Article in English | MEDLINE | ID: mdl-36635192

ABSTRACT

Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions. Conventionally, reconstruction algorithms have been derived from physical principles. These algorithms rely on assumptions and approximations of the underlying measurement model, limiting image quality in settings where these assumptions break down. Conversely, more sophisticated solutions, based on statistical modeling, careful parameter tuning, or increased model complexity, can be sensitive to different environments. Recently, deep learning-based methods, which are optimized in a data-driven fashion, have gained popularity. These model-agnostic techniques often rely on generic model structures and require vast training data to converge to a robust solution. A relatively new paradigm combines the power of the two: leveraging data-driven deep learning while exploiting domain knowledge. These model-based solutions yield high robustness and require fewer parameters and less training data than conventional neural networks. In this work, we provide an overview of these techniques from the recent literature and discuss a wide variety of ultrasound applications. We aim to inspire the reader to perform further research in this area and to address the opportunities within the field of ultrasound signal processing. We conclude with a future perspective on model-based deep learning techniques for medical ultrasound.


Subjects
Deep Learning, Neural Networks, Computer, Ultrasonography, Algorithms, Radiography, Image Processing, Computer-Assisted/methods
7.
Diagnostics (Basel) ; 12(12)2022 Dec 18.
Article in English | MEDLINE | ID: mdl-36553220

ABSTRACT

Antral follicle count (AFC) is a non-invasive biomarker used to assess ovarian reserve through transvaginal ultrasound (TVUS) imaging. Antral follicles' diameters are usually in the range of 2-10 mm. The primary aim of ovarian reserve monitoring is to measure the size of ovarian follicles and the number of antral follicles. Manual follicle measurement is limited by operator time, expertise, and the subjectivity of delineating the two axes of the follicles. This necessitates an automated framework capable of quantifying follicle size and count in a clinical setting. This paper proposes a novel Harmonic Attention-based U-Net, HaTU-Net, to precisely segment the ovary and follicles in ultrasound images. We replace the standard convolution operation with a harmonic block that convolves the features with a window-based discrete cosine transform (DCT). Additionally, we propose a harmonic attention mechanism that helps to promote the extraction of rich features. The suggested technique allows for capturing the most relevant features, such as boundaries, shape, and textural patterns, in the presence of various noise sources (i.e., shadows, poor contrast between tissues, and speckle noise). We evaluated the proposed model on our in-house private dataset of 197 patients undergoing TVUS exams. The experimental results on an independent test set confirm that HaTU-Net achieved a Dice coefficient score of 90% for ovaries and 81% for antral follicles, an improvement of 2% and 10%, respectively, compared to a standard U-Net. Further, we accurately measure follicle size, yielding recall and precision rates of 91.01% and 76.49%, respectively.
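
One plausible reading of a DCT-based convolution block is a convolution whose spatial filters are fixed 2-D DCT basis functions, combined by a learned 1x1 mixing layer, as sketched below in PyTorch. This is an assumption-laden illustration of the general idea, not the published HaTU-Net harmonic block or its attention mechanism; the filter size and normalization are choices made only for the example.

```python
import numpy as np
import torch
import torch.nn as nn

def dct2_filters(k=3):
    """k*k separable 2-D DCT-II basis functions, each a k x k filter."""
    n = np.arange(k)
    basis = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * k))
    basis[0] /= np.sqrt(2)
    filters = np.einsum('ik,jl->ijkl', basis, basis).reshape(k * k, 1, k, k)
    return torch.tensor(filters, dtype=torch.float32)

class DCTConvBlock(nn.Module):
    """Fixed DCT filtering of each input channel, followed by a learned
    1x1 mixing convolution, standing in for a standard spatial convolution."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.register_buffer('dct', dct2_filters(k))
        self.mix = nn.Conv2d(in_ch * k * k, out_ch, kernel_size=1)
        self.k = k

    def forward(self, x):
        b, c, h, w = x.shape
        x = x.reshape(b * c, 1, h, w)
        x = nn.functional.conv2d(x, self.dct, padding=self.k // 2)
        x = x.reshape(b, c * self.k * self.k, h, w)
        return self.mix(x)
```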

8.
IEEE Trans Image Process ; 31: 3553-3564, 2022.
Article in English | MEDLINE | ID: mdl-35544506

ABSTRACT

Background foreground separation (BFS) is a popular computer vision problem where dynamic foreground objects are separated from the static background of a scene. Typically, this is performed using consumer cameras because of their low cost, human interpretability, and high resolution. Yet, cameras and the BFS algorithms that process their data have common failure modes due to lighting changes, highly reflective surfaces, and occlusion. One solution is to incorporate an additional sensor modality that provides robustness to such failure modes. In this paper, we explore the ability of a cost-effective radar system to augment the popular Robust PCA technique for BFS. We apply the emerging technique of algorithm unrolling to yield real-time computation, feedforward inference, and strong generalization in comparison with traditional deep learning methods. We benchmark on the RaDICaL dataset to demonstrate both quantitative improvements of incorporating radar data and qualitative improvements that confirm robustness to common failure modes of image-based methods.

9.
Article in English | MEDLINE | ID: mdl-35312618

ABSTRACT

Traditional beamforming of medical ultrasound images relies on sampling rates significantly higher than the actual Nyquist rate of the received signals. This results in large amounts of data to store and process, imposing hardware and software challenges on the development of ultrasound machinery and algorithms, and impacting the resulting performance. In light of the capabilities demonstrated by deep learning methods over the past years across a variety of fields, including medical imaging, it is natural to consider their ability to recover high-quality ultrasound images from partial data. Here, we propose an approach for deep-learning-based reconstruction of B-mode images from temporally and spatially sub-sampled channel data. We begin by considering sub-Nyquist sampled data, time-aligned in the frequency domain and transformed back to the time domain. The data are further sampled spatially so that only a subset of the received signals is acquired. The partial data is used to train an encoder-decoder convolutional neural network (CNN), using as targets minimum-variance (MV) beamformed signals that were generated from the original, fully-sampled data. Our approach yields high-quality B-mode images, with up to two times higher resolution than previously proposed reconstruction approaches (NESTA) from compressed data as well as delay-and-sum (DAS) beamforming of the fully-sampled data. In terms of contrast-to-noise ratio (CNR), our results are comparable to MV beamforming of the fully-sampled data, and provide up to 2 dB higher CNR values than DAS and NESTA, thus enabling better and more efficient imaging than what is used in clinical practice today.


Subjects
Deep Learning, Image Processing, Computer-Assisted, Algorithms, Image Processing, Computer-Assisted/methods, Phantoms, Imaging, Ultrasonography/methods
10.
J Comput Biol ; 29(1): 45-55, 2022 01.
Article in English | MEDLINE | ID: mdl-34986029

ABSTRACT

Non-negative matrix factorization (NMF) is a fundamental matrix decomposition technique that is used primarily for dimensionality reduction and is increasing in popularity in the biological domain. Although finding a unique NMF is generally not possible, there are various iterative algorithms for NMF optimization that converge to locally optimal solutions. Such techniques can also serve as a starting point for deep learning methods that unroll the algorithmic iterations into layers of a deep network. In this study, we develop unfolded deep networks for NMF and several regularized variants in both a supervised and an unsupervised setting. We apply our method to various mutation data sets to reconstruct their underlying mutational signatures and their exposures. We demonstrate the increased accuracy of our approach over standard formulations in analyzing simulated and real mutation data.
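
As background for the unfolding, the classical multiplicative-update NMF iteration is sketched below; each loop pass corresponds to one layer of an unrolled network, in which the fixed update rules would be replaced by trainable per-layer parameters. The function, its initialization, and the plain Frobenius-loss updates are illustrative assumptions, not the paper's supervised or regularized variants.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_layers=50, eps=1e-9, seed=0):
    """Unrolled multiplicative-update NMF: V (m x n, non-negative) ~ W @ H.

    Each loop iteration corresponds to one 'layer'; unfolding would turn the
    fixed update rules into trainable per-layer parameters."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(n_layers):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update exposures
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update signatures
    return W, H
```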


Subjects
Algorithms, DNA Mutational Analysis/statistics & numerical data, Deep Learning, Breast Neoplasms/genetics, Computational Biology, Computer Simulation, Databases, Genetic/statistics & numerical data, Female, Humans, Mutation, Neural Networks, Computer, Supervised Machine Learning, Unsupervised Machine Learning
11.
IEEE Trans Med Imaging ; 41(3): 571-581, 2022 03.
Article in English | MEDLINE | ID: mdl-34606447

ABSTRACT

Lung ultrasound (LUS) is a cheap, safe, and non-invasive imaging modality that can be performed at the patient's bedside. However, to date, LUS is not widely adopted, due to the lack of trained personnel required for interpreting the acquired LUS frames. In this work we propose a framework for training deep artificial neural networks for interpreting LUS, which may promote broader use of LUS. When using LUS to evaluate a patient's condition, both anatomical phenomena (e.g., the pleural line, presence of consolidations) and sonographic artifacts (such as A- and B-lines) are of importance. In our framework, we integrate domain knowledge into deep neural networks by inputting anatomical features and LUS artifacts in the form of additional channels, containing pleural and vertical-artifact masks, along with the raw LUS frames. By explicitly supplying this domain knowledge, standard off-the-shelf neural networks can be rapidly and efficiently fine-tuned to accomplish various tasks on LUS data, such as frame classification or semantic segmentation. Our framework allows for a unified treatment of LUS frames captured by either convex or linear probes. We evaluated our proposed framework on the task of COVID-19 severity assessment using the ICLUS dataset. In particular, we fine-tuned simple image classification models to predict per-frame COVID-19 severity scores. We also trained a semantic segmentation model to predict per-pixel COVID-19 severity annotations. Using the combined raw LUS frames and the detected lines for both tasks, our off-the-shelf models performed better than complicated models specifically designed for these tasks, exemplifying the efficacy of our framework.
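
The channel-stacking idea is easy to picture: the raw frame and the binary masks encoding the pleural line and vertical artifacts are concatenated along the channel axis and fed to an off-the-shelf backbone whose first convolution is widened accordingly. The sketch below uses torchvision's ResNet-18 with assumed channel and class counts and randomly initialized weights; it shows the input construction only, not the paper's fine-tuning setup or its segmentation models.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def build_lus_classifier(n_extra_channels=2, n_classes=4):
    """Off-the-shelf ResNet-18 whose first convolution accepts the raw LUS
    frame plus mask channels (e.g. pleural-line and vertical-artifact masks)."""
    model = resnet18(weights=None)  # randomly initialized for this sketch
    model.conv1 = nn.Conv2d(1 + n_extra_channels, 64, kernel_size=7,
                            stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, n_classes)
    return model

# Stack the frame and the two masks along the channel dimension
frame = torch.rand(1, 1, 224, 224)           # grayscale LUS frame
pleural_mask = torch.zeros(1, 1, 224, 224)   # binary pleural-line mask
vertical_mask = torch.zeros(1, 1, 224, 224)  # binary vertical-artifact mask
x = torch.cat([frame, pleural_mask, vertical_mask], dim=1)
logits = build_lus_classifier()(x)
```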


Subjects
COVID-19, COVID-19/diagnostic imaging, Humans, Lung/diagnostic imaging, Neural Networks, Computer, SARS-CoV-2, Ultrasonography/methods
12.
Article in English | MEDLINE | ID: mdl-34699355

ABSTRACT

Efficient ultrasound (US) systems that produce high-quality images can improve current clinical diagnosis capabilities by making the imaging process much more affordable and accessible to users. The most common technique for generating B-mode US images is delay-and-sum (DAS) beamforming, where an appropriate delay is introduced to signals sampled and processed at each transducer element. However, sampling rates that are much higher than the Nyquist rate of the signal are required for high-resolution DAS beamforming, leading to large amounts of data, making remote processing of channel data impractical. Moreover, the production of US images that exhibit high resolution and good image contrast requires a large set of transducer elements, which further increases the data size. Previous works suggest methods for reduction in sampling rate and in array size. In this work, we introduce compressed Fourier domain convolutional beamforming, combining Fourier domain beamforming (FDBF), sparse convolutional beamforming, and compressed sensing methods. This allows reducing both the number of array elements and the sampling rate in each element while achieving high-resolution images. Using in vivo data, we demonstrate that the proposed method can generate B-mode images using 142 times less data than DAS. Our results pave the way toward efficient US and demonstrate that high-resolution US images can be produced using sub-Nyquist sampling in time and space.
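
For reference, the delay-and-sum baseline described above amounts to computing, for each image point, the travel time to every array element, sampling each channel at the corresponding delay, and summing. The sketch below is a simplified single-line DAS beamformer with assumed variable names and no apodization or interpolation, included only to ground the comparison; the compressed Fourier-domain convolutional beamformer itself is not reproduced.

```python
import numpy as np

def das_beamform_line(channel_data, element_x, fs, c, image_z, line_x=0.0):
    """Delay-and-sum beamforming of one image line from raw channel data.

    channel_data : (n_elements, n_samples) received RF traces
    element_x    : (n_elements,) lateral positions of the array elements [m]
    fs           : sampling rate [Hz];  c : speed of sound [m/s]
    image_z      : (n_depths,) axial positions along the image line [m]
    """
    n_elements, n_samples = channel_data.shape
    line = np.zeros(len(image_z))
    for i, z in enumerate(image_z):
        # Two-way travel time: transmit down to depth z, receive at each element
        rx_dist = np.sqrt((element_x - line_x) ** 2 + z ** 2)
        delays = (z + rx_dist) / c
        idx = np.round(delays * fs).astype(int)
        valid = idx < n_samples
        line[i] = channel_data[np.arange(n_elements)[valid], idx[valid]].sum()
    return line
```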


Subjects
Algorithms, Image Processing, Computer-Assisted, Image Processing, Computer-Assisted/methods, Phantoms, Imaging, Transducers, Ultrasonography/methods
13.
Article in English | MEDLINE | ID: mdl-34224351

ABSTRACT

Deep learning for ultrasound image formation is rapidly garnering research support and attention, quickly rising as the latest frontier in ultrasound image formation, with much promise to balance both image quality and display speed. Despite this promise, one challenge with identifying optimal solutions is the absence of unified evaluation methods and datasets that are not specific to a single research group. This article introduces the largest known international database of ultrasound channel data and describes the associated evaluation methods that were initially developed for the challenge on ultrasound beamforming with deep learning (CUBDL), which was offered as a component of the 2020 IEEE International Ultrasonics Symposium. We summarize the challenge results and present qualitative and quantitative assessments using both the initially closed CUBDL evaluation test dataset (which was crowd-sourced from multiple groups around the world) and additional in vivo breast ultrasound data contributed after the challenge was completed. As an example quantitative assessment, single plane wave images from the CUBDL Task 1 dataset produced a mean generalized contrast-to-noise ratio (gCNR) of 0.67 and a mean lateral resolution of 0.42 mm when formed with delay-and-sum beamforming, compared with a mean gCNR as high as 0.81 and a mean lateral resolution as low as 0.32 mm when formed with networks submitted by the challenge winners. We also describe contributed CUBDL data that may be used for training of future networks. The compiled database includes a total of 576 image acquisition sequences. We additionally introduce a neural-network-based global sound speed estimator implementation that was necessary to fairly evaluate the results obtained with this international database. The integration of CUBDL evaluation methods, evaluation code, network weights from the challenge winners, and all datasets described herein are publicly available (visit https://cubdl.jhu.edu for details).
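
The gCNR figure quoted above is defined as one minus the overlap between the amplitude histograms of a target region and a background region, so it ranges from 0 (indistinguishable) to 1 (perfectly separable). A minimal sketch of that computation, with an assumed bin count, is:

```python
import numpy as np

def gcnr(target_pixels, background_pixels, n_bins=256):
    """Generalized contrast-to-noise ratio: 1 minus the overlap of the
    target and background amplitude histograms."""
    lo = min(target_pixels.min(), background_pixels.min())
    hi = max(target_pixels.max(), background_pixels.max())
    bins = np.linspace(lo, hi, n_bins + 1)
    p_t, _ = np.histogram(target_pixels, bins=bins)
    p_b, _ = np.histogram(background_pixels, bins=bins)
    p_t = p_t / p_t.sum()
    p_b = p_b / p_b.sum()
    return 1.0 - np.minimum(p_t, p_b).sum()
```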


Subjects
Deep Learning, Image Processing, Computer-Assisted, Neural Networks, Computer, Phantoms, Imaging, Ultrasonography
14.
Article in English | MEDLINE | ID: mdl-34185640

ABSTRACT

The most common technique for generating B-mode ultrasound (US) images is delay-and-sum (DAS) beamforming, where the signals received at the transducer array are sampled before an appropriate delay is applied. This necessitates sampling rates exceeding the Nyquist rate and the use of a large number of antenna elements to ensure sufficient image quality. Recently, we proposed methods to reduce the sampling rate and the array size relying on image recovery using iterative algorithms based on compressed sensing (CS) and the finite rate of innovation (FRI) frameworks. Iterative algorithms typically require a large number of iterations, making them difficult to use in real time. In this article, we propose a reconstruction method from sub-Nyquist samples in the time and spatial domain, which is based on unfolding the iterative shrinkage thresholding algorithm (ISTA), resulting in an efficient and interpretable deep network. The inputs to our network are the subsampled beamformed signals after summation and delay in the frequency domain, requiring only a subset of the US signal to be stored for recovery. Our method allows reducing the number of array elements, sampling rate, and computational time while ensuring high-quality imaging performance. Using in vivo data, we demonstrate that the proposed method yields high-quality images while reducing the data volume traditionally used up to 36 times. In terms of image resolution and contrast, our technique outperforms previously suggested methods as well as DAS and minimum-variance (MV) beamforming, paving the way to real-time applicable recovery methods.


Subjects
Algorithms, Image Processing, Computer-Assisted, Phantoms, Imaging, Ultrasonography
15.
Opt Express ; 29(9): 12772-12786, 2021 Apr 26.
Article in English | MEDLINE | ID: mdl-33985027

ABSTRACT

Image scanning microscopy (ISM), an upgraded successor of the ubiquitous confocal microscope, facilitates up to two-fold improvement in lateral resolution, and has become an indispensable element in the toolbox of the bio-imaging community. Recently, super-resolution optical fluctuation image scanning microscopy (SOFISM) integrated the analysis of intensity-fluctuations information into the basic ISM architecture, to enhance its resolving power. Both of these techniques typically rely on pixel-reassignment as a fundamental processing step, in which the parallax of different detector elements to the sample is compensated by laterally shifting the point spread function (PSF). Here, we propose an alternative analysis approach, based on the recent high-performing sparsity-based super-resolution correlation microscopy (SPARCOM) method. Through measurements of DNA origami nano-rulers and fixed cells labeled with organic dye, we experimentally show that confocal SPARCOM (cSPARCOM), which circumvents pixel-reassignment altogether, provides enhanced resolution compared to pixel-reassigned based analysis. Thus, cSPARCOM further promotes the effectiveness of ISM, and particularly that of correlation based ISM implementations such as SOFISM, where the PSF deviates significantly from spatial invariance.
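
For context, classical pixel reassignment shifts the image acquired by each detector element by a fraction (typically half) of that element's offset before summing; cSPARCOM is an alternative to exactly this step. The sketch below illustrates the reassignment idea with SciPy; the array layout and the fixed fractional shift are assumptions made for the illustration.

```python
import numpy as np
from scipy.ndimage import shift

def pixel_reassignment(ism_stack, detector_offsets, alpha=0.5):
    """Sum detector-element images after shifting each one back by a fraction
    (typically 0.5) of its offset from the central detector element.

    ism_stack        : (n_detectors, H, W) image recorded by each detector element
    detector_offsets : (n_detectors, 2) element offsets in image pixels (dy, dx)
    """
    out = np.zeros(ism_stack.shape[1:])
    for img, (dy, dx) in zip(ism_stack, detector_offsets):
        out += shift(img, (-alpha * dy, -alpha * dx), order=1)
    return out
```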

16.
Eur Radiol ; 31(12): 9654-9663, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34052882

ABSTRACT

OBJECTIVES: In the midst of the coronavirus disease 2019 (COVID-19) outbreak, chest X-ray (CXR) imaging is playing an important role in the diagnosis and monitoring of patients with COVID-19. We propose a deep learning model for detection of COVID-19 from CXRs, as well as a tool for retrieving similar patients according to the model's results on their CXRs. For training and evaluating our model, we collected CXRs from inpatients hospitalized in four different hospitals. METHODS: In this retrospective study, 1384 frontal CXRs of COVID-19-confirmed patients imaged between March and August 2020, and 1024 matching CXRs of non-COVID patients imaged before the pandemic, were collected and used to build a deep learning classifier for detecting patients positive for COVID-19. The classifier consists of an ensemble of pre-trained deep neural networks (DNNs), specifically ResNet34, ResNet50, ResNet152, and VGG16, and is enhanced by data augmentation and lung segmentation. We further implemented a nearest-neighbors algorithm that uses DNN-based image embeddings to retrieve the images most similar to a given image. RESULTS: Our model achieved an accuracy of 90.3% (95% CI: 86.3-93.7%), specificity of 90% (95% CI: 84.3-94%), and sensitivity of 90.5% (95% CI: 85-94%) on a test dataset comprising 15% (350/2326) of the original images. The AUC of the ROC curve is 0.96 (95% CI: 0.93-0.97). CONCLUSION: We provide deep learning models, trained and evaluated on CXRs, that can assist medical efforts and reduce medical staff workload in handling COVID-19. KEY POINTS: • A machine learning model was able to detect chest X-ray (CXR) images of patients who tested positive for COVID-19 with accuracy and detection rate above 90%. • A tool was created for finding existing CXR images with imaging characteristics most similar to a given CXR, according to the model's image embeddings.
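
The retrieval tool described in the second key point reduces to a nearest-neighbor search in the DNN embedding space. A minimal cosine-similarity version, with assumed array shapes, is sketched below; the paper's exact embedding extraction and distance metric are not specified here.

```python
import numpy as np

def retrieve_similar(query_embedding, gallery_embeddings, k=5):
    """Return the indices of the k gallery CXRs whose DNN embeddings are
    closest (highest cosine similarity) to the query image's embedding."""
    q = query_embedding / np.linalg.norm(query_embedding)
    g = gallery_embeddings / np.linalg.norm(gallery_embeddings, axis=1, keepdims=True)
    similarity = g @ q
    return np.argsort(-similarity)[:k]
```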


Subjects
COVID-19, Humans, Neural Networks, Computer, Retrospective Studies, SARS-CoV-2, X-Rays
17.
Proc Natl Acad Sci U S A ; 118(17)2021 04 27.
Article in English | MEDLINE | ID: mdl-33888586

ABSTRACT

Federated learning (FL) enables edge devices, such as Internet of Things devices (e.g., sensors), servers, and institutions (e.g., hospitals), to collaboratively train a machine learning (ML) model without sharing their private data. FL requires devices to exchange their ML parameters iteratively, and thus the time it requires to jointly learn a reliable model depends not only on the number of training steps but also on the ML parameter transmission time per step. In practice, FL parameter transmissions are often carried out by a multitude of participating devices over resource-limited communication networks, for example, wireless networks with limited bandwidth and power. Therefore, the repeated FL parameter transmission from edge devices induces a notable delay, which can be larger than the ML model training time by orders of magnitude. Hence, communication delay constitutes a major bottleneck in FL. Here, a communication-efficient FL framework is proposed to jointly improve the FL convergence time and the training loss. In this framework, a probabilistic device selection scheme is designed such that the devices that can significantly improve the convergence speed and training loss have higher probabilities of being selected for ML model transmission. To further reduce the FL convergence time, a quantization method is proposed to reduce the volume of the model parameters exchanged among devices, and an efficient wireless resource allocation scheme is developed. Simulation results show that the proposed FL framework can improve the identification accuracy and convergence time by up to 3.6% and 87% compared to standard FL.
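
The probabilistic device selection step can be sketched as sampling, without replacement, a subset of devices with probabilities proportional to an importance score (for example, the expected improvement in convergence speed or the local gradient norm). The snippet below is a generic illustration with assumed inputs, not the paper's exact selection rule or its quantization and wireless resource-allocation components.

```python
import numpy as np

def select_devices(importance, n_selected, rng=None):
    """Sample devices for one FL round with probability proportional to an
    importance score (e.g. local gradient norm or expected loss reduction)."""
    rng = rng or np.random.default_rng()
    p = np.asarray(importance, dtype=float)
    p = p / p.sum()
    return rng.choice(len(p), size=n_selected, replace=False, p=p)
```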

18.
Pancreas ; 50(3): 251-279, 2021 03 01.
Article in English | MEDLINE | ID: mdl-33835956

ABSTRACT

ABSTRACT: Despite considerable research efforts, pancreatic cancer is associated with a dire prognosis and a 5-year survival rate of only 10%. Early symptoms of the disease are mostly nonspecific. The premise of improved survival through early detection is that more individuals will benefit from potentially curative treatment. Artificial intelligence (AI) methodology has emerged as a successful tool for risk stratification and identification in general health care. In response to the maturity of AI, Kenner Family Research Fund conducted the 2020 AI and Early Detection of Pancreatic Cancer Virtual Summit (www.pdac-virtualsummit.org) in conjunction with the American Pancreatic Association, with a focus on the potential of AI to advance early detection efforts in this disease. This comprehensive presummit article was prepared based on information provided by each of the interdisciplinary participants on one of the 5 following topics: Progress, Problems, and Prospects for Early Detection; AI and Machine Learning; AI and Pancreatic Cancer-Current Efforts; Collaborative Opportunities; and Moving Forward-Reflections from Government, Industry, and Advocacy. The outcome from the robust Summit conversations, to be presented in a future white paper, indicate that significant progress must be the result of strategic collaboration among investigators and institutions from multidisciplinary backgrounds, supported by committed funders.


Subjects
Artificial Intelligence, Biomarkers, Tumor/genetics, Carcinoma, Pancreatic Ductal/genetics, Early Detection of Cancer/methods, Genomics/methods, Pancreatic Neoplasms/genetics, Carcinoma, Pancreatic Ductal/diagnosis, Carcinoma, Pancreatic Ductal/therapy, Humans, Interdisciplinary Communication, Pancreatic Neoplasms/diagnosis, Pancreatic Neoplasms/therapy, Prognosis, Survival Analysis
19.
Article in English | MEDLINE | ID: mdl-33755562

ABSTRACT

Real-time 3-D ultrasound (US) provides a complete visualization of inner body organs and blood vasculature, crucial for the diagnosis and treatment of diverse diseases. However, 3-D systems require massive hardware due to the huge number of transducer elements and the consequent data size. This increases cost significantly and limits both frame rate and image quality, thus preventing 3-D US from becoming common practice in clinics worldwide. A recent study presented a technique called the sparse convolutional beamforming algorithm (SCOBA), which obtains improved image quality while allowing notable element reduction in the context of 2-D focused imaging. In this article, we build upon previous work and introduce a nonlinear beamformer for 3-D imaging, called COBA-3D, consisting of 2-D spatial convolution of the in-phase and quadrature received signals. The proposed technique considers diverging-wave transmission and achieves improved image resolution and contrast compared with standard delay-and-sum beamforming while enabling a high frame rate. Incorporating 2-D sparse arrays into our method creates SCOBA-3D: a sparse beamformer that offers significant element reduction and thus allows performing 3-D imaging with the resources typically available for 2-D setups. To create 2-D thinned arrays, we present a scalable and systematic way to design 2-D fractal sparse arrays. The proposed framework paves the way for affordable ultrafast US devices that perform high-quality 3-D imaging, as demonstrated using phantom and ex-vivo data.


Subjects
Image Processing, Computer-Assisted, Imaging, Three-Dimensional, Algorithms, Phantoms, Imaging, Ultrasonography
20.
Entropy (Basel) ; 23(1)2021 Jan 13.
Article in English | MEDLINE | ID: mdl-33450996

ABSTRACT

Quantizers play a critical role in digital signal processing systems. Recent works have shown that the performance of acquiring multiple analog signals using scalar analog-to-digital converters (ADCs) can be significantly improved by processing the signals prior to quantization. However, the design of such hybrid quantizers is quite complex, and their implementation requires complete knowledge of the statistical model of the analog signal. In this work we design data-driven task-oriented quantization systems with scalar ADCs, which determine their analog-to-digital mapping using deep learning tools. These mappings are designed to facilitate the task of recovering underlying information from the quantized signals. By using deep learning, we circumvent the need to explicitly recover the system model and to find the proper quantization rule for it. Our main target application is multiple-input multiple-output (MIMO) communication receivers, which simultaneously acquire a set of analog signals and are commonly subject to constraints on the number of bits. Our results indicate that, in a MIMO channel estimation setup, the proposed deep task-based quantizer is capable of approaching the optimal performance limits dictated by indirect rate-distortion theory, achievable using vector quantizers and requiring complete knowledge of the underlying statistical model. Furthermore, for a symbol detection scenario, it is demonstrated that the proposed approach can realize reliable bit-efficient hybrid MIMO receivers capable of setting their quantization rule in light of the task.
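
A common construction for learned scalar quantizers, used here only as an illustrative assumption rather than the paper's exact design, replaces the hard staircase of a b-bit ADC with a sum of shifted tanh functions whose amplitudes and shifts are trained jointly with the downstream task network; the staircase is recovered as the sharpness grows.

```python
import torch
import torch.nn as nn

class SoftScalarQuantizer(nn.Module):
    """Differentiable stand-in for a b-bit scalar ADC: a sum of shifted tanh
    functions that approaches a staircase as the sharpness grows, letting the
    quantization levels be learned jointly with the task network."""
    def __init__(self, n_bits=2, sharpness=10.0):
        super().__init__()
        n_levels = 2 ** n_bits
        self.amplitudes = nn.Parameter(torch.ones(n_levels - 1))
        self.shifts = nn.Parameter(torch.linspace(-1.0, 1.0, n_levels - 1))
        self.sharpness = sharpness

    def forward(self, x):
        # x: analog samples of any shape; returns soft-quantized values
        steps = self.amplitudes * torch.tanh(self.sharpness * (x.unsqueeze(-1) - self.shifts))
        return steps.sum(dim=-1)
```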
