Results 1 - 20 of 103
1.
Sensors (Basel) ; 22(24)2022 Dec 15.
Article in English | MEDLINE | ID: mdl-36560248

ABSTRACT

A robust, accurate estimation of fluid flow is the main building block of a distributed virtual flow meter. Unfortunately, a big leap in algorithm development would be required for this objective to come to fruition, mainly due to the inability of current machine learning algorithms to make predictions outside the training data distribution. To improve predictions outside the training distribution, we explore the continual learning (CL) paradigm for accurately estimating the characteristics of fluid flow in pipelines. A significant challenge facing CL is catastrophic forgetting. In this paper, we provide a novel approach to the forgetting problem: compressing the distributed sensor data to increase the capacity of the CL memory bank using a compressive learning algorithm. Through extensive experiments, we show that our approach provides around 8% accuracy improvement over other CL algorithms when applied to a real-world distributed sensor dataset collected from an oilfield. Noticeable accuracy improvement is also achieved when using our proposed approach with the CL benchmark datasets, achieving state-of-the-art accuracies for the CIFAR-10 dataset on the blurry10 and blurry30 settings of 80.83% and 88.91%, respectively.
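The memory-bank idea in this abstract, spending a fixed replay budget on compressed exemplars rather than raw ones, can be sketched in a toy form. Everything here (the class name, block-averaging as the "compressor", FIFO eviction, the 4x factor) is illustrative and not the paper's algorithm:

```python
import numpy as np

class CompressedMemoryBank:
    """Toy replay buffer: stores downsampled copies of sensor windows
    so more exemplars fit in the same float budget (illustrative only)."""

    def __init__(self, budget_floats, window_len, factor=4):
        self.factor = factor                      # compression ratio
        self.stored_len = window_len // factor    # floats kept per sample
        self.capacity = budget_floats // self.stored_len
        self.samples = []

    def add(self, window):
        # "Compress" by block-averaging; a real system would use a
        # learned compressive encoder instead.
        w = np.asarray(window, dtype=float)
        w = w[: self.stored_len * self.factor]
        compressed = w.reshape(self.stored_len, self.factor).mean(axis=1)
        if len(self.samples) >= self.capacity:
            self.samples.pop(0)                   # FIFO eviction
        self.samples.append(compressed)
```

With a budget of 1,024 floats and 64-sample windows, raw storage would hold 16 exemplars while 4x compression holds 64, which illustrates the capacity gain a compressed memory bank buys for replay-based CL.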

2.
Sensors (Basel) ; 22(3)2022 Feb 06.
Article in English | MEDLINE | ID: mdl-35161977

ABSTRACT

Respiratory diseases constitute one of the leading causes of death worldwide and directly affect the patient's quality of life. Early diagnosis and patient monitoring, which conventionally include lung auscultation, are essential for the efficient management of respiratory diseases. Manual lung sound interpretation is a subjective and time-consuming process that requires high medical expertise. The capabilities of deep learning could be exploited to design robust lung sound classification models. In this paper, we propose a novel hybrid neural model that implements the focal loss (FL) function to deal with training data imbalance. Features initially extracted from short-time Fourier transform (STFT) spectrograms via a convolutional neural network (CNN) are given as input to a long short-term memory (LSTM) network that memorizes the temporal dependencies between data and classifies four types of lung sounds: normal, crackles, wheezes, and both crackles and wheezes. The model was trained and tested on the ICBHI 2017 Respiratory Sound Database and achieved state-of-the-art results using three different data splitting strategies, namely sensitivity 47.37%, specificity 82.46%, score 64.92% and accuracy 73.69% for the official 60/40 split; sensitivity 52.78%, specificity 84.26%, score 68.52% and accuracy 76.39% using interpatient 10-fold cross validation; and sensitivity 60.29% and accuracy 74.57% using leave-one-out cross validation.
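The focal loss mentioned above has a standard published form (Lin et al.); a minimal NumPy version, not the paper's exact implementation, is:

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0):
    """Multiclass focal loss: down-weights easy examples so training
    focuses on hard/minority-class samples. `probs` are softmax outputs
    of shape (N, C); `targets` are integer class labels of shape (N,)."""
    p_t = probs[np.arange(len(targets)), targets]        # prob of true class
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))
```

With gamma = 0 this reduces to ordinary cross-entropy; raising gamma shrinks the contribution of confidently correct (easy) examples, which is how the loss counteracts class imbalance during training.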


Subject(s)
Quality of Life , Respiratory Sounds , Auscultation , Humans , Lung/diagnostic imaging , Neural Networks, Computer , Respiratory Sounds/diagnosis
3.
Radiology ; 299(1): E167-E176, 2021 04.
Article in English | MEDLINE | ID: mdl-33231531

ABSTRACT

Background There are characteristic findings of coronavirus disease 2019 (COVID-19) on chest images. An artificial intelligence (AI) algorithm to detect COVID-19 on chest radiographs might be useful for triage or infection control within a hospital setting, but prior reports have been limited by small data sets, poor data quality, or both. Purpose To present DeepCOVID-XR, a deep learning AI algorithm to detect COVID-19 on chest radiographs, that was trained and tested on a large clinical data set. Materials and Methods DeepCOVID-XR is an ensemble of convolutional neural networks developed to detect COVID-19 on frontal chest radiographs, with reverse-transcription polymerase chain reaction test results as the reference standard. The algorithm was trained and validated on 14 788 images (4253 positive for COVID-19) from sites across the Northwestern Memorial Health Care System from February 2020 to April 2020 and was then tested on 2214 images (1192 positive for COVID-19) from a single hold-out institution. Performance of the algorithm was compared with interpretations from five experienced thoracic radiologists on 300 random test images using the McNemar test for sensitivity and specificity and the DeLong test for the area under the receiver operating characteristic curve (AUC). Results A total of 5853 patients (mean age, 58 years ± 19 [standard deviation]; 3101 women) were evaluated across data sets. For the entire test set, accuracy of DeepCOVID-XR was 83%, with an AUC of 0.90. For 300 random test images (134 positive for COVID-19), accuracy of DeepCOVID-XR was 82%, compared with that of individual radiologists (range, 76%-81%) and the consensus of all five radiologists (81%). DeepCOVID-XR had a significantly higher sensitivity (71%) than one radiologist (60%, P < .001) and significantly higher specificity (92%) than two radiologists (75%, P < .001; 84%, P = .009). AUC of DeepCOVID-XR was 0.88 compared with the consensus AUC of 0.85 (P = .13 for comparison). 
With consensus interpretation as the reference standard, the AUC of DeepCOVID-XR was 0.95 (95% CI: 0.92, 0.98). Conclusion DeepCOVID-XR, an artificial intelligence algorithm, detected coronavirus disease 2019 on chest radiographs with a performance similar to that of experienced thoracic radiologists in consensus. © RSNA, 2020 Supplemental material is available for this article. See also the editorial by van Ginneken in this issue.
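The McNemar comparisons of sensitivity and specificity reduce to counts of discordant pairs on the same 300 images. A sketch of the continuity-corrected chi-square statistic (the paper does not state which variant of the test it used):

```python
def mcnemar_chi2(b, c):
    """McNemar test statistic with continuity correction for two paired
    classifiers: b = cases only classifier A got right, c = cases only
    classifier B got right. Compare against the chi2(1) critical value
    3.84 for significance at p < .05."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)
```

For example, if the algorithm alone were correct on 30 images and a radiologist alone on 12 (hypothetical counts), the statistic is (|30 - 12| - 1)^2 / 42 ≈ 6.88, exceeding 3.84, so the paired difference would be significant at p < .05.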


Subject(s)
Artificial Intelligence , COVID-19/diagnostic imaging , Lung/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Thoracic/methods , Algorithms , Datasets as Topic , Female , Humans , Male , Middle Aged , SARS-CoV-2 , Sensitivity and Specificity , United States
4.
NMR Biomed ; 34(1): e4405, 2021 01.
Article in English | MEDLINE | ID: mdl-32875668

ABSTRACT

Highly accelerated real-time cine MRI using compressed sensing (CS) is a promising approach to achieve high spatio-temporal resolution and clinically acceptable image quality in patients with arrhythmia and/or dyspnea. However, its lengthy image reconstruction time may hinder its clinical translation. The purpose of this study was to develop a neural network for reconstruction of non-Cartesian real-time cine MRI k-space data faster (<1 min per slice with 80 frames) than graphics processing unit (GPU)-accelerated CS reconstruction, without significant loss in image quality or accuracy in left ventricular (LV) functional parameters. We introduce a perceptual complex neural network (PCNN) that trains on complex-valued MRI signal and incorporates a perceptual loss term to suppress incoherent image details. This PCNN was trained and tested with multi-slice, multi-phase, cine images from 40 patients (20 for training, 20 for testing), where the zero-filled images were used as input and the corresponding CS reconstructed images were used as practical ground truth. The resulting images were compared using quantitative metrics (structural similarity index (SSIM) and normalized root mean square error (NRMSE)) and visual scores (conspicuity, temporal fidelity, artifacts, and noise scores), individually graded on a five-point scale (1, worst; 3, acceptable; 5, best), and LV ejection fraction (LVEF). The mean processing time per slice with 80 frames for PCNN was 23.7 ± 1.9 s for pre-processing (Step 1, same as CS) and 0.822 ± 0.004 s for dealiasing (Step 2, 166 times faster than CS). Our PCNN produced higher data fidelity metrics (SSIM = 0.88 ± 0.02, NRMSE = 0.014 ± 0.004) compared with CS. While all the visual scores were significantly different (P < 0.05), the median scores were all 4.0 or higher for both CS and PCNN. 
LVEFs measured from CS and PCNN were strongly correlated (R2 = 0.92) and in good agreement (mean difference = -1.4% [2.3% of mean]; limit of agreement = 10.6% [17.6% of mean]). The proposed PCNN is capable of rapid reconstruction (25 s per slice with 80 frames) of non-Cartesian real-time cine MRI k-space data, without significant loss in image quality or accuracy in LV functional parameters.
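Of the two data-fidelity metrics reported, NRMSE is simple to state; one common convention normalizes by the reference's dynamic range (the paper does not specify its normalization):

```python
import numpy as np

def nrmse(reference, estimate):
    """Normalized root-mean-square error between a reference image
    (here, the CS reconstruction used as practical ground truth) and a
    network output, normalized by the reference's dynamic range."""
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimate, dtype=float)
    rmse = np.sqrt(np.mean((ref - est) ** 2))
    return float(rmse / (ref.max() - ref.min()))
```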


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Magnetic Resonance Imaging, Cine , Neural Networks, Computer , Aged , Data Compression , Female , Humans , Male
5.
Opt Express ; 28(8): 12108-12120, 2020 Apr 13.
Article in English | MEDLINE | ID: mdl-32403711

ABSTRACT

Light field microscopy (LFM) is an emerging technology for high-speed wide-field 3D imaging that captures the 4D light field of 3D volumes. However, its 3D imaging capability comes at the cost of lateral resolution. In addition, the lateral resolution is not uniform across depth in light field deconvolution reconstructions. To address these problems, we propose a snapshot multifocal light field microscopy (MFLFM) imaging method. The underlying concept of the MFLFM is to collect multiple focally shifted light fields simultaneously. We show that by focal stacking those focally shifted light fields, the depth-of-field (DOF) of the LFM can be further improved without sacrificing lateral resolution. Also, if all differently focused light fields are utilized together in the deconvolution, the MFLFM can achieve a high and uniform lateral resolution within a larger DOF. We present a custom-built MFLFM system created by placing a diffractive optical element at the Fourier plane of a conventional LFM. The optical performance of the MFLFM is analyzed. Both simulations and proof-of-principle experimental results are provided to demonstrate the effectiveness and benefits of the MFLFM. We believe that the proposed snapshot MFLFM has the potential to enable high-speed, high-resolution 3D imaging applications.

6.
Sensors (Basel) ; 20(18)2020 Sep 16.
Article in English | MEDLINE | ID: mdl-32948056

ABSTRACT

Pansharpening is a technique that fuses a low spatial resolution multispectral image and a high spatial resolution panchromatic one to obtain a multispectral image with the spatial resolution of the latter while preserving the spectral information of the multispectral image. In this paper we propose a variational Bayesian methodology for pansharpening. The proposed methodology uses the sensor characteristics to model the observation process and Super-Gaussian sparse image priors on the expected characteristics of the pansharpened image. The pansharpened image, as well as all model and variational parameters, are estimated within the proposed methodology. Using real and synthetic data, the quality of the pansharpened images is assessed both visually and quantitatively and compared with other pansharpening methods. Theoretical and experimental results demonstrate the effectiveness, efficiency, and flexibility of the proposed formulation.

7.
Opt Express ; 26(21): 27381-27402, 2018 Oct 15.
Article in English | MEDLINE | ID: mdl-30469808

ABSTRACT

Realizing both high temporal and spatial resolution across a large volume is a key challenge for 3D fluorescence imaging. Toward this objective, we introduce an interferometric multifocus microscopy (iMFM) system, a combination of multifocus microscopy (MFM) with two opposing objective lenses. We show that the proposed iMFM is capable of simultaneously producing interferometric images at multiple focal planes, providing axial super-resolution and hence isotropic 3D resolution with a single exposure. We design and simulate the iMFM microscope by employing two special diffractive optical elements. The point spread function of this new iMFM microscope is simulated and the image formation model is given. For reconstruction, we use the Richardson-Lucy deconvolution algorithm with total variation regularization for 3D extended object recovery, and a maximum likelihood estimator (MLE) for single molecule tracking. A method for determining the initial axial position of the molecule is also proposed to improve the convergence of the MLE. We demonstrate both theoretically and numerically that isotropic 3D nanoscopic localization accuracy is achievable over an axial imaging range of 2 µm when tracking a fluorescent molecule in three dimensions, and that the diffraction-limited axial resolution can be improved by 3-4 times in single-shot wide-field 3D extended object recovery. We believe that iMFM will be a useful tool for imaging 3D dynamic events that require both high temporal and spatial resolution.
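The reconstruction step names the Richardson-Lucy algorithm; its basic multiplicative update, shown here in 1-D and without the paper's total variation regularization, is:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50, eps=1e-12):
    """Plain 1-D Richardson-Lucy deconvolution. Each iteration blurs the
    current estimate with the PSF, compares it to the observation, and
    reapplies the (mirrored) PSF to the ratio as a correction factor."""
    observed = np.asarray(observed, dtype=float)
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()                        # flux-preserving kernel
    est = np.full(observed.shape, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / (blurred + eps)
        est = est * np.convolve(ratio, psf[::-1], mode="same")
    return est
```

Because the update multiplies a nonnegative estimate by nonnegative factors, positivity is preserved, and the estimate sharpens toward the underlying object as iterations proceed.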

8.
Angew Chem Int Ed Engl ; 57(34): 10910-10914, 2018 Aug 20.
Article in English | MEDLINE | ID: mdl-29940088

ABSTRACT

Nonlinear unmixing of hyperspectral reflectance data is one of the key problems in quantitative imaging of painted works of art. The approach presented interrogates a hyperspectral image cube by first decomposing it into a set of reflectance curves representing pure basis pigments and second estimating the scattering and absorption coefficients of each pigment in a given pixel to produce estimates of the component fractions. This two-step algorithm uses a deep neural network to qualitatively identify the constituent pigments in any unknown spectrum and then, based on the pigment(s) present, applies Kubelka-Munk theory to estimate the pigment concentration on a per-pixel basis. Using hyperspectral data acquired on a set of mock-up paintings and a well-characterized illuminated folio from the 15th century, the performance of the proposed algorithm is demonstrated for pigment recognition and quantitative estimation of concentration.
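The quantitative second step rests on single-constant Kubelka-Munk theory, in which the mixture's K/S curve is a concentration-weighted sum of the pure pigments' K/S curves. A least-squares sketch of that step (the identification network of step one is omitted, and the pigment spectra here are placeholders):

```python
import numpy as np

def k_over_s(R):
    """Kubelka-Munk function: K/S = (1 - R)^2 / (2R) for reflectance R."""
    return (1.0 - R) ** 2 / (2.0 * R)

def unmix(R_mix, R_pigments):
    """Estimate pigment fractions from a mixture reflectance spectrum
    under single-constant KM theory: solve the linear system
    sum_i c_i * (K/S)_i = (K/S)_mix in the least-squares sense."""
    A = np.stack([k_over_s(r) for r in R_pigments], axis=1)  # (bands, pigments)
    c, *_ = np.linalg.lstsq(A, k_over_s(R_mix), rcond=None)
    return c / c.sum()                                       # normalized fractions
```

Because K/S is linear in concentration while reflectance is not, unmixing is done in K/S space and the result is mapped back to reflectance only when needed.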

9.
Opt Express ; 25(1): 250-262, 2017 Jan 09.
Article in English | MEDLINE | ID: mdl-28085818

ABSTRACT

Compressed sensing has been discussed separately in spatial and temporal domains. Compressive holography has been introduced as a method that allows 3D tomographic reconstruction at different depths from a single 2D image. Coded exposure is a temporal compressed sensing method for high speed video acquisition. In this work, we combine compressive holography and coded exposure techniques and extend the discussion to 4D reconstruction in space and time from one coded captured image. In our prototype, digital in-line holography was used for imaging macroscopic, fast moving objects. The pixel-wise temporal modulation was implemented by a digital micromirror device. In this paper we demonstrate 10× temporal super resolution with multiple depths recovery from a single image. Two examples are presented for the purpose of recording subtle vibrations and tracking small particles within 5 ms.

10.
Opt Express ; 23(12): 15992-6007, 2015 Jun 15.
Article in English | MEDLINE | ID: mdl-26193574

ABSTRACT

We present a prototype compressive video camera that encodes scene movement using a translated binary photomask in the optical path. The encoded recording can then be used to reconstruct multiple output frames from each captured image, effectively synthesizing high speed video. The use of a printed binary mask allows reconstruction at higher spatial resolutions than has been previously demonstrated. In addition, we improve upon previous work by investigating tradeoffs in mask design and reconstruction algorithm selection. We identify a mask design that consistently provides the best performance across multiple reconstruction strategies in simulation, and verify it with our prototype hardware. Finally, we compare reconstruction algorithms and identify the best choice in terms of balancing reconstruction quality and speed.

11.
J Opt Soc Am A Opt Image Sci Vis ; 32(11): 2002-20, 2015 Nov 01.
Article in English | MEDLINE | ID: mdl-26560915

ABSTRACT

The image processing technique known as superresolution (SR) has the potential to allow engineers to specify lower resolution and, therefore, less expensive cameras for a given task by enhancing the base camera's resolution. This is especially true in the remote detection and classification of objects in the environment, such as aircraft or human faces. Performing each of these tasks requires a minimum image "sharpness" which is quantified by a maximum resolvable spatial frequency, which is, in turn, a function of the camera optics, pixel sampling density, and signal-to-noise ratio. Much of the existing SR literature focuses on SR performance metrics for candidate algorithms, such as perceived image quality or peak SNR. These metrics can be misleading because they also credit deblurring and/or denoising in addition to true SR. In this paper, we propose a new, task-based metric where the performance of an SR algorithm is, instead, directly tied to the probability of successfully detecting critical spatial frequencies within the scene.

12.
Sci Rep ; 14(1): 7803, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38565586

ABSTRACT

Room temperature semiconductor radiation detectors (RTSDs) for X-ray and γ-ray detection are vital tools for medical imaging, astrophysics and other applications. CdZnTe (CZT) has been the main RTSD for more than three decades, with desirable detection properties. In a typical pixelated configuration, CZT detectors have electrodes on opposite ends. For advanced event reconstruction algorithms at the sub-pixel level, detailed characterization of the RTSD is required in three-dimensional (3D) space. However, 3D characterization of material defects and charge transport properties in the sub-pixel regime is a labor-intensive process requiring skilled manpower and novel experimental setups. Presently, state-of-the-art characterization is done over the bulk of the RTSD, assuming homogeneous properties. In this paper, we propose a novel physics-based machine learning (PBML) model to characterize the RTSD over a discretized sub-pixelated 3D volume. Our approach is the first to characterize a full 3D charge transport model of the RTSD. In this work, we first discretize the RTSD between the pixelated electrodes spatially into three dimensions: x, y, and z. The resulting discretizations are termed voxels in 3D space. In each voxel, the different physics-based charge transport properties, such as drift, trapping, detrapping and recombination of charges, are modeled as trainable model weights. The drift of the charges considers second-order non-linear motion, which is observed in practice with RTSDs. With electron-hole pair injections as input to the PBML model, and the signals at the electrodes and the free and trapped charges (electrons and holes) as outputs, the PBML model determines the trainable weights by backpropagating the loss function. The trained weights of the model represent a one-to-one relation to the actual physical charge transport properties in a voxelized detector.
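The central idea, physical transport parameters serving as the trainable weights of a differentiable model, can be shown in miniature with a single trapping lifetime tau fitted by gradient descent. The model s(t) = exp(-t/tau) and all constants are illustrative, not the paper's detector model:

```python
import numpy as np

def fit_trapping_time(t, signal, tau0=1.0, lr=0.05, steps=2000):
    """Toy physics-based ML: a charge-trapping lifetime tau is the single
    trainable weight of the model s(t) = exp(-t / tau), fitted by gradient
    descent on squared error. The paper trains many such per-voxel
    transport parameters (drift, trapping, detrapping, recombination)."""
    tau = tau0
    for _ in range(steps):
        pred = np.exp(-t / tau)
        # d/d tau of exp(-t/tau) is exp(-t/tau) * t / tau^2
        grad = np.sum(2.0 * (pred - signal) * pred * t / tau**2)
        tau -= lr * grad
    return tau
```

The same backpropagation machinery scales this from one scalar to a voxelized grid of transport parameters, which is what makes the approach attractive compared with bulk characterization.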

13.
Comput Biol Med ; 176: 108557, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38728995

ABSTRACT

BACKGROUND: Heart failure (HF), a global health challenge, requires innovative diagnostic and management approaches. The rapid evolution of deep learning (DL) in healthcare necessitates a comprehensive review to evaluate these developments and their potential to enhance HF evaluation, aligning clinical practices with technological advancements. OBJECTIVE: This review aims to systematically explore the contributions of DL technologies in the assessment of HF, focusing on their potential to improve diagnostic accuracy, personalize treatment strategies, and address the impact of comorbidities. METHODS: A thorough literature search was conducted across four major electronic databases: PubMed, Scopus, Web of Science and IEEE Xplore, yielding 137 articles that were subsequently categorized into five primary application areas: cardiovascular disease (CVD) classification, HF detection, image analysis, risk assessment, and other clinical analyses. The selection criteria focused on studies utilizing DL algorithms for HF assessment, not limited to HF detection but extending to any attempt in analyzing and interpreting HF-related data. RESULTS: The analysis revealed a notable emphasis on CVD classification and HF detection, with DL algorithms showing significant promise in distinguishing between affected individuals and healthy subjects. Furthermore, the review highlights DL's capacity to identify underlying cardiomyopathies and other comorbidities, underscoring its utility in refining diagnostic processes and tailoring treatment plans to individual patient needs. CONCLUSIONS: This review establishes DL as a key innovation in HF management, highlighting its role in advancing diagnostic accuracy and personalized care. The insights provided advocate for the integration of DL in clinical settings and suggest directions for future research to enhance patient outcomes in HF care.


Subject(s)
Deep Learning , Heart Failure , Humans , Heart Failure/diagnosis
14.
PLoS One ; 19(3): e0299528, 2024.
Article in English | MEDLINE | ID: mdl-38466739

ABSTRACT

BACKGROUND: Rates of depression and addiction have risen drastically over the past decade, but the lack of integrative techniques remains a barrier to accurate diagnoses of these mental illnesses. Changes in reward/aversion behavior and corresponding brain structures have been identified in those with major depressive disorder (MDD) and cocaine-dependence polysubstance abuse disorder (CD). Assessment of statistical interactions between computational behavior and brain structure may quantitatively segregate MDD and CD. METHODS: Here, 111 participants [40 controls (CTRL), 25 MDD, 46 CD] underwent structural brain MRI and completed an operant keypress task to produce computational judgment metrics. Three analyses were performed: (1) linear regression to evaluate groupwise (CTRL v. MDD v. CD) differences in structure-behavior associations, (2) qualitative and quantitative heatmap assessment of structure-behavior association patterns, and (3) the k-nearest neighbor machine learning approach using brain structure and keypress variable inputs to discriminate groups. RESULTS: This study yielded three primary findings. First, CTRL, MDD, and CD participants had distinct structure-behavior linear relationships, with only 7.8% of associations overlapping between any two groups. Second, the three groups had statistically distinct slopes and qualitatively distinct association patterns. Third, a machine learning approach could discriminate between CTRL and CD, but not MDD participants. CONCLUSIONS: These findings demonstrate that variable interactions between computational behavior and brain structure, and the patterns of these interactions, segregate MDD and CD. This work raises the hypothesis that analysis of interactions between operant tasks and structural neuroimaging might aid in the objective classification of MDD, CD and other mental health conditions.
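The third analysis names k-nearest neighbors; a minimal version of the classifier (the features, k, and distance metric here are illustrative, not the study's configuration):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Minimal k-nearest-neighbor classifier of the kind used to
    discriminate groups from brain-structure and keypress features:
    find the k closest training points and take a majority vote."""
    d = np.linalg.norm(X_train - x, axis=1)          # Euclidean distances
    nearest = y_train[np.argsort(d)[:k]]             # labels of k closest
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]                 # majority vote
```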


Subject(s)
Depressive Disorder, Major , Substance-Related Disorders , Humans , Depressive Disorder, Major/diagnostic imaging , Brain/diagnostic imaging , Magnetic Resonance Imaging , Substance-Related Disorders/psychology
15.
Comput Med Imaging Graph ; 112: 102327, 2024 03.
Article in English | MEDLINE | ID: mdl-38194768

ABSTRACT

Automated semantic segmentation of histopathological images is an essential task in Computational Pathology (CPATH). The main limitation of Deep Learning (DL) in addressing this task is the scarcity of expert annotations. Crowdsourcing (CR) has emerged as a promising solution to reduce the individual (expert) annotation cost by distributing the labeling effort among a group of (non-expert) annotators. Extracting knowledge in this scenario is challenging, as it involves noisy annotations. Jointly learning the underlying (expert) segmentation and the annotators' expertise is currently a commonly used approach. Unfortunately, this approach is frequently carried out by learning a different neural network for each annotator, which scales poorly as the number of annotators grows. For this reason, this strategy cannot be easily applied to real-world CPATH segmentation. This paper proposes a new family of methods for CR segmentation of histopathological images. Our approach consists of two coupled networks: a segmentation network (for learning the expert segmentation) and an annotator network (for learning the annotators' expertise). We propose to estimate the annotators' behavior with only one network that receives the annotator ID as input, achieving scalability in the number of annotators. Our family is composed of three different models for the annotator network. Within this family, we propose an annotator-network model that is novel in the CR segmentation literature in that it considers the global features of the image. We validate our methods on a real-world dataset of Triple Negative Breast Cancer images labeled by several medical students. Our new CR modeling achieves a Dice coefficient of 0.7827, outperforming the well-known STAPLE (0.7039) and being competitive with the supervised method with expert labels (0.7723). The code is available at https://github.com/wizmik12/CRowd_Seg.
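The Dice coefficient used to compare the models is the standard overlap metric for binary masks:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks: twice the intersection
    divided by the sum of the two mask sizes (1.0 = perfect overlap)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return float(2.0 * inter / (pred.sum() + target.sum() + eps))
```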


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans
16.
Med Image Anal ; 95: 103162, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38593644

ABSTRACT

Active Learning (AL) has the potential to solve a major problem of digital pathology: the efficient acquisition of labeled data for machine learning algorithms. However, existing AL methods often struggle in realistic settings with artifacts, ambiguities, and class imbalances, as commonly seen in the medical field. The lack of precise uncertainty estimations leads to the acquisition of images with a low informative value. To address these challenges, we propose Focused Active Learning (FocAL), which combines a Bayesian Neural Network with Out-of-Distribution detection to estimate different uncertainties for the acquisition function. Specifically, the weighted epistemic uncertainty accounts for the class imbalance, aleatoric uncertainty for ambiguous images, and an OoD score for artifacts. We perform extensive experiments to validate our method on MNIST and the real-world Panda dataset for the classification of prostate cancer. The results confirm that other AL methods are 'distracted' by ambiguities and artifacts which harm the performance. FocAL effectively focuses on the most informative images, avoiding ambiguities and artifacts during acquisition. For both experiments, FocAL outperforms existing AL approaches, reaching a Cohen's kappa of 0.764 with only 0.69% of the labeled Panda data.
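Cohen's kappa, the evaluation metric here, corrects raw agreement for the agreement expected by chance. This sketch is the unweighted form; prostate-grading work often reports a quadratically weighted variant, and the abstract does not say which was used:

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) scaled by
    (1 - chance agreement), where chance agreement comes from the two
    raters' marginal label frequencies."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    po = np.mean(y_true == y_pred)                    # observed agreement
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return float((po - pe) / (1.0 - pe))
```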


Subject(s)
Prostatic Neoplasms , Humans , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/pathology , Male , Machine Learning , Bayes Theorem , Algorithms , Image Interpretation, Computer-Assisted/methods , Artifacts , Neural Networks, Computer
17.
Npj Ment Health Res ; 3(1): 29, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38890545

ABSTRACT

Anxiety, a condition characterized by intense fear and persistent worry, affects millions each year and, when severe, is distressing and functionally impairing. Numerous machine learning frameworks have been developed and tested to predict features of anxiety and anxiety traits. This study extended these approaches by using a small set of interpretable judgment variables (n = 15) and contextual variables (demographics, perceived loneliness, COVID-19 history) to (1) understand the relationships between these variables and (2) develop a framework to predict anxiety levels [derived from the State Trait Anxiety Inventory (STAI)]. This set of 15 judgment variables, including loss aversion and risk aversion, models biases in reward/aversion judgments extracted from an unsupervised, short (2-3 min) picture rating task (using the International Affective Picture System) that can be completed on a smartphone. The study cohort consisted of 3476 de-identified adult participants from across the United States who were recruited using an email survey database. Using a balanced Random Forest approach with these judgment and contextual variables, STAI-derived anxiety levels were predicted with up to 81% accuracy and 0.71 AUC ROC. Normalized Gini scores showed that the most important predictors (age, loneliness, household income, employment status) contributed a total of 29-31% of the cumulative relative importance and up to 61% was contributed by judgment variables. Mediation/moderation statistics revealed that the interactions between judgment and contextual variables appears to be important for accurately predicting anxiety levels. Median shifts in judgment variables described a behavioral profile for individuals with higher anxiety levels that was characterized by less resilience, more avoidance, and more indifference behavior. 
This study supports the hypothesis that distinct constellations of 15 interpretable judgment variables, along with contextual variables, could yield an efficient and highly scalable system for mental health assessment. These results contribute to our understanding of underlying psychological processes that are necessary to characterize what causes variance in anxiety conditions and its behaviors, which can impact treatment development and efficacy.

18.
JMIR Public Health Surveill ; 10: e47979, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38315620

ABSTRACT

BACKGROUND: Despite COVID-19 vaccine mandates, many chose to forgo vaccination, raising questions about the psychology underlying how judgment affects these choices. Research shows that reward and aversion judgments are important for vaccination choice; however, no studies have integrated such cognitive science with machine learning to predict COVID-19 vaccine uptake. OBJECTIVE: This study aims to determine the predictive power of a small but interpretable set of judgment variables using 3 machine learning algorithms to predict COVID-19 vaccine uptake and interpret what profile of judgment variables was important for prediction. METHODS: We surveyed 3476 adults across the United States in December 2021. Participants answered demographic, COVID-19 vaccine uptake (ie, whether participants were fully vaccinated), and COVID-19 precaution questions. Participants also completed a picture-rating task using images from the International Affective Picture System. Images were rated on a Likert-type scale to calibrate the degree of liking and disliking. Ratings were computationally modeled using relative preference theory to produce a set of graphs for each participant (minimum R2>0.8). In total, 15 judgment features were extracted from these graphs, 2 being analogous to risk and loss aversion from behavioral economics. These judgment variables, along with demographics, were compared between those who were fully vaccinated and those who were not. In total, 3 machine learning approaches (random forest, balanced random forest [BRF], and logistic regression) were used to test how well judgment, demographic, and COVID-19 precaution variables predicted vaccine uptake. Mediation and moderation were implemented to assess statistical mechanisms underlying successful prediction. RESULTS: Age, income, marital status, employment status, ethnicity, educational level, and sex differed by vaccine uptake (Wilcoxon rank sum and chi-square P<.001). 
Most judgment variables also differed by vaccine uptake (Wilcoxon rank sum P<.05). A similar area under the receiver operating characteristic curve (AUROC) was achieved by the 3 machine learning frameworks, although random forest and logistic regression produced specificities between 30% and 38% (vs 74.2% for BRF), indicating a lower performance in predicting unvaccinated participants. BRF achieved high precision (87.8%) and AUROC (79%) with moderate to high accuracy (70.8%) and balanced recall (69.6%) and specificity (74.2%). It should be noted that, for BRF, the negative predictive value was <50% despite good specificity. For BRF and random forest, 63% to 75% of the feature importance came from the 15 judgment variables. Furthermore, age, income, and educational level mediated relationships between judgment variables and vaccine uptake. CONCLUSIONS: The findings demonstrate the underlying importance of judgment variables for vaccine choice and uptake, suggesting that vaccine education and messaging might target varying judgment profiles to improve uptake. These methods could also be used to aid vaccine rollouts and health care preparedness by providing location-specific details (eg, identifying areas that may experience low vaccination and high hospitalization).
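The battery of metrics reported for the classifiers all derive from one binary confusion matrix (taking vaccinated as the positive class):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix
    counts: tp/fp/tn/fn = true/false positives and negatives."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "recall":      tp / (tp + fn),          # sensitivity
        "specificity": tn / (tn + fp),
        "precision":   tp / (tp + fp),          # positive predictive value
        "npv":         tn / (tn + fn),          # negative predictive value
    }
```

Precision and NPV answer different questions about the two predicted classes, which is how a classifier can post high precision yet a negative predictive value below 50% on imbalanced data, as reported for BRF above.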


Subject(s)
COVID-19 Vaccines , COVID-19 , Adult , Humans , Judgment , Cross-Sectional Studies , COVID-19/epidemiology , COVID-19/prevention & control , Vaccination , Cognitive Science , Ethnicity
19.
Heliyon ; 10(7): e28539, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38596055

ABSTRACT

Left atrial (LA) fibrosis plays a vital role as a mediator in the progression of atrial fibrillation. 3D late gadolinium-enhancement (LGE) MRI has been proven effective in identifying LA fibrosis. Image analysis of 3D LA LGE involves manual segmentation of the LA wall, which is both time-consuming and challenging. Automated segmentation poses challenges owing to the diverse intensities in data from various vendors, the limited contrast between the LA and surrounding tissues, and the intricate anatomical structure of the LA. Approaches relying on 3D networks are computationally intensive because both the 3D LGE MRI volumes and the networks are large. To mitigate this, most existing methods adopt a two-stage approach: first identifying the LA center on a scaled-down version of the MRIs, then cropping the full-resolution MRIs around the LA center for final segmentation. We propose a lightweight transformer-based 3D architecture, Usformer, designed to precisely segment the LA volume in a single stage, eliminating the error propagation associated with suboptimal two-stage training. Its transposed attention captures global context in large 3D volumes without significant computational cost. Usformer outperforms state-of-the-art supervised learning methods in both accuracy and speed. First, with the smallest Hausdorff Distance (HD) and Average Symmetric Surface Distance (ASSD), it achieved Dice scores of 93.1% and 92.0% on the 2018 Atrial Segmentation Challenge and our local institutional dataset, respectively. Second, the parameter count and computational complexity are reduced by factors of 2.8 and 3.8, respectively. Moreover, Usformer does not require a large training set: with only 16 labeled MRI scans, it achieves a 92.1% Dice score on the challenge dataset.
The proposed Usformer delineates the boundaries of the LA wall relatively accurately, which may assist in the clinical translation of LA LGE for planning catheter ablation of atrial fibrillation.
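The transposed attention the abstract credits for Usformer's efficiency can be sketched in a few lines: the attention map is computed across channels rather than across voxel tokens, so its size is C x C instead of N x N and no longer grows quadratically with volume size. This NumPy sketch illustrates the general idea only; it is not the Usformer implementation, and all names, shapes, and the scaling choice are assumptions:

```python
# Hypothetical sketch of channel-wise ("transposed") attention.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transposed_attention(x, wq, wk, wv):
    """x: (N, C) flattened voxel tokens; wq/wk/wv: (C, C) projection matrices."""
    q, k, v = x @ wq, x @ wk, x @ wv            # each (N, C)
    # Attention map over channels: shape (C, C), versus the (N, N) map of
    # standard self-attention -- cheap even when N is a full 3D volume.
    attn = softmax((q.T @ k) / np.sqrt(q.shape[0]), axis=-1)
    return v @ attn.T                            # back to (N, C)

rng = np.random.default_rng(0)
N, C = 4096, 32                                  # e.g. 16x16x16 voxels, 32 channels
x = rng.standard_normal((N, C))
w = [rng.standard_normal((C, C)) * 0.1 for _ in range(3)]
out = transposed_attention(x, *w)
print(out.shape)  # (4096, 32)
```

The key point is the memory footprint: the attention map here holds C*C = 1024 entries regardless of N, whereas token-wise attention over the same input would need N*N = 16.7 million.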

20.
Article in English | MEDLINE | ID: mdl-38885105

ABSTRACT

Cough is an important symptom in children with acute and chronic respiratory disease. Daily cough is common in cystic fibrosis (CF), and increased cough is a symptom of pulmonary exacerbation. To date, cough assessment has been primarily subjective in clinical practice and research. Attempts to develop objective, automatic cough-counting tools have faced reliability issues in noisy environments and practical barriers limiting long-term use. This single-center pilot study evaluated the usability, acceptability, and performance of a mechanoacoustic sensor (MAS), previously used for cough classification in adults, in 36 children with CF over brief and multi-day periods in four cohorts. Both children at baseline health and children with symptoms of pulmonary exacerbation were included. We trained, validated, and deployed custom deep learning algorithms to detect coughs and distinguish them from other vocalizations or artifacts, with an overall area under the receiver operating characteristic curve (AUROC) of 0.96 and average precision (AP) of 0.93. Child and parent feedback led to a redesign of the MAS toward a smaller, more discreet device acceptable for daily use in children. Additional improvements optimized power efficiency and data management. We demonstrate the MAS's ability to objectively measure cough and other physiologic signals across clinic, hospital, and home settings, aided in particular by an AUROC of 0.97 and AP of 0.96 for motion-artifact rejection. Examples of correlations between cough frequency, physiologic parameters, participant-reported outcomes, and clinical measurements are presented for individual patients. The MAS is a promising tool for objective longitudinal evaluation of cough in children with CF.
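The two detection metrics reported above, AUROC and average precision, can be computed from scratch in a few lines. This is a toy illustration of the metrics themselves on made-up labels and scores, not the study's evaluation code:

```python
# Illustrative from-scratch AUROC and AP on hypothetical detector scores.

def auroc(labels, scores):
    """Probability that a random positive is scored above a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(labels, scores):
    """Precision averaged over the ranks at which each positive is retrieved."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            tp += 1
            ap += tp / rank
    return ap / sum(labels)

y = [1, 1, 0, 1, 0, 0]          # toy ground truth: cough vs. not-cough
s = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]  # toy detector confidence scores
print(auroc(y, s), average_precision(y, s))
```

AUROC is threshold-free and insensitive to class balance, while AP weights performance toward the top-ranked detections, which is why both are commonly reported together for event-detection tasks like cough counting.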
