1.
Rev Invest Clin ; 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38359843

ABSTRACT

Background: The pan-immuno-inflammation value (PIV) is a new and comprehensive index that reflects both the immune response and systemic inflammation in the body. Objective: The aim of this study was to investigate the prognostic relevance of the PIV in predicting in-hospital mortality in patients with acute pulmonary embolism (PE) and to compare it with the well-known PE severity index (PESI), a risk score commonly used for short-term mortality prediction in such patients. Methods: In total, 373 acute PE patients diagnosed with contrast-enhanced computed tomography were included in the study. Each patient underwent a detailed cardiac evaluation, and the PESI and PIV were calculated. Results: In total, 60 patients died during their hospital stay. Multivariable logistic regression analysis revealed that baseline heart rate, N-terminal pro-B-type natriuretic peptide, lactate dehydrogenase, PIV, and PESI were independent risk factors for in-hospital mortality in acute PE patients. The PIV was non-inferior to the PESI in predicting survival status in patients with acute PE. Conclusion: We found that the PIV was a statistically significant predictor of in-hospital mortality in acute PE patients and was non-inferior to the PESI.
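The PIV discussed above is derived from a routine complete blood count. A minimal sketch, assuming the commonly published definition (neutrophils × platelets × monocytes / lymphocytes; the formula and the example counts below are not taken from this abstract):

```python
def pan_immune_inflammation_value(neutrophils, platelets, monocytes, lymphocytes):
    """Pan-immuno-inflammation value (PIV).

    Assumed definition: (neutrophils * platelets * monocytes) / lymphocytes,
    with all counts in 10^3 cells/uL.
    """
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils * platelets * monocytes / lymphocytes

# Illustrative (hypothetical) counts in 10^3 cells/uL:
piv = pan_immune_inflammation_value(5.0, 250.0, 0.6, 1.5)
print(piv)  # 500.0
```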

2.
Acad Radiol ; 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38262813

ABSTRACT

RATIONALE AND OBJECTIVES: Efficiently detecting and characterizing metastatic bone lesions on staging CT is crucial for prostate cancer (PCa) care. However, it demands significant expert time and additional imaging such as PET/CT. We aimed to develop an ensemble of two automated deep learning AI models for 1) bone lesion detection and segmentation and 2) benign vs. metastatic lesion classification on staging CTs, and to compare its performance with radiologists. MATERIALS AND METHODS: This retrospective study developed two AI models using 297 staging CT scans (81 metastatic) with 4601 benign and 1911 metastatic lesions in PCa patients. Metastases were validated by follow-up scans, bone biopsy, or PET/CT. The segmentation AI (3DAISeg) was developed using lesion contours delineated by a radiologist. 3DAISeg performance was evaluated with the Dice similarity coefficient, and the classification AI (3DAIClass) was assessed on both AI and radiologist contours with F1-score and accuracy. Training/validation/testing data partitions of 70:15:15 were used. A multi-reader study was performed with two junior and two senior radiologists on a subset of the testing dataset (n = 36). RESULTS: In 45 unseen staging CT scans (12 metastatic PCa) with 669 benign and 364 metastatic lesions, 3DAISeg detected 73.1% of metastatic (266/364) and 72.4% of benign lesions (484/669). Each scan averaged 12 extra segmentations (range: 1-31). All metastatic scans had at least one detected metastatic lesion, achieving 100% patient-level detection. The mean Dice score for 3DAISeg was 0.53 (median: 0.59, range: 0-0.87). The F1-score for 3DAIClass was 94.8% (radiologist contours) and 92.4% (3DAISeg contours), with a median false-positive count of 0 (range: 0-3). Using radiologist contours, 3DAIClass had PPV and NPV rates comparable to junior and senior radiologists: PPV (semi-automated AI 40.0% vs. juniors 32.0% vs. seniors 50.0%) and NPV (AI 96.2% vs. juniors 95.7% vs. seniors 91.9%). When using 3DAISeg contours, 3DAIClass mimicked junior radiologists in PPV (pure AI 20.0% vs. juniors 32.0% vs. seniors 50.0%) but surpassed seniors in NPV (pure AI 93.8% vs. juniors 95.7% vs. seniors 91.9%). CONCLUSION: Our lesion detection and classification AI model performs on par with junior and senior radiologists in discerning benign and metastatic lesions on staging CTs obtained for PCa.
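The Dice similarity coefficient used above to evaluate 3DAISeg can be sketched for binary masks as follows (a generic definition, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Two toy 2x3 masks overlapping in 2 of 3 foreground pixels each:
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 3))  # 2*2/(3+3) ≈ 0.667
```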

3.
IEEE J Biomed Health Inform ; 28(3): 1273-1284, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38051612

ABSTRACT

Monitoring of prevalent airborne diseases such as COVID-19 characteristically involves respiratory assessments. While auscultation is a mainstream method for preliminary screening of disease symptoms, its utility is hampered by the need for dedicated hospital visits. Remote monitoring based on recordings of respiratory sounds on portable devices is a promising alternative, which can assist in early assessment of COVID-19 that primarily affects the lower respiratory tract. In this study, we introduce a novel deep learning approach to distinguish patients with COVID-19 from healthy controls given audio recordings of cough or breathing sounds. The proposed approach leverages a novel hierarchical spectrogram transformer (HST) on spectrogram representations of respiratory sounds. HST embodies self-attention mechanisms over local windows in spectrograms, and window size is progressively grown over model stages to capture local to global context. HST is compared against state-of-the-art conventional and deep-learning baselines. Demonstrations on crowd-sourced multi-national datasets indicate that HST outperforms competing methods, achieving over 90% area under the receiver operating characteristic curve (AUC) in detecting COVID-19 cases.
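The AUC metric reported above can be computed directly from classifier scores via the rank-sum (Mann-Whitney) identity; a generic sketch with hypothetical detector scores:

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """AUC = P(score_pos > score_neg); ties count as 0.5."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    diff = pos[:, None] - neg[None, :]          # all positive/negative pairs
    wins = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical scores for COVID-positive vs. healthy recordings:
print(roc_auc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2]))  # 8/9 ≈ 0.889
```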


Subject(s)
COVID-19 , Respiratory Sounds , Humans , Respiratory Sounds/diagnosis , COVID-19/diagnosis , Auscultation , Cough , Electric Power Supplies
4.
Article in English | MEDLINE | ID: mdl-38082949

ABSTRACT

Accurate segmentation of organs-at-risk (OARs) is a precursor to optimizing radiation therapy planning. Existing deep learning-based multi-scale fusion architectures have demonstrated a tremendous capacity for 2D medical image segmentation. The key to their success is aggregating global context and maintaining high-resolution representations. However, when translated to 3D segmentation problems, existing multi-scale fusion architectures may underperform due to their heavy computational overhead and substantial data requirements. To address this issue, we propose a new OAR segmentation framework, called OARFocalFuseNet, which fuses multi-scale features and employs focal modulation to capture global-local context across multiple scales. Each resolution stream is enriched with features from different resolution scales, and multi-scale information is aggregated to model diverse contextual ranges, further boosting the feature representations. Comprehensive comparisons in our experimental setup on OAR segmentation as well as multi-organ segmentation show that the proposed OARFocalFuseNet outperforms recent state-of-the-art methods on the publicly available OpenKBP dataset and the Synapse multi-organ segmentation dataset. Both of the proposed methods (3D-MSF and OARFocalFuseNet) showed promising performance in terms of standard evaluation metrics. Our best-performing method (OARFocalFuseNet) obtained a Dice coefficient of 0.7995 and a Hausdorff distance of 5.1435 on the OpenKBP dataset, and a Dice coefficient of 0.8137 on the Synapse multi-organ segmentation dataset. Our code is available at https://github.com/NoviceMAn-prog/OARFocalFuse.
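The Hausdorff distance reported above measures the worst-case disagreement between two boundaries, treated as point sets. A plain-numpy sketch (a generic definition, not the authors' code):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between point sets of shape (N, d) and (M, d)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Pairwise Euclidean distances, shape (N, M).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Max over each set of the distance to the nearest point in the other set.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = [(0.0, 0.0), (1.0, 0.0)]
b = [(0.0, 0.0), (4.0, 0.0)]
print(hausdorff_distance(a, b))  # 3.0
```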


Subject(s)
Organs at Risk , Tomography, X-Ray Computed , Tomography, X-Ray Computed/methods , Radiotherapy Planning, Computer-Assisted/methods
5.
Article in English | MEDLINE | ID: mdl-38083589

ABSTRACT

Colorectal cancer (CRC) is one of the most common causes of cancer and cancer-related mortality worldwide. Performing colon cancer screening in a timely fashion is the key to early detection. Colonoscopy is the primary modality used to diagnose colon cancer, yet the miss rate of polyps, adenomas, and advanced adenomas remains significantly high. Early detection of polyps at the precancerous stage can help reduce the mortality rate and the economic burden associated with colorectal cancer. A deep learning-based computer-aided diagnosis (CADx) system may help gastroenterologists identify polyps that might otherwise be missed, thereby improving the polyp detection rate. Additionally, a CADx system could prove to be a cost-effective means of improving long-term colorectal cancer prevention. In this study, we propose a deep learning-based architecture for automatic polyp segmentation called Transformer ResU-Net (TransResU-Net). The proposed architecture is built upon residual blocks with ResNet-50 as the backbone and takes advantage of the transformer self-attention mechanism as well as dilated convolutions. Our experimental results on two publicly available polyp segmentation benchmark datasets show that TransResU-Net obtains a highly promising Dice score at real-time speed. Given its high efficacy on our performance metrics, we conclude that TransResU-Net could be a strong benchmark for building a real-time polyp detection system for the early diagnosis, treatment, and prevention of colorectal cancer. The source code of TransResU-Net is publicly available at https://github.com/nikhilroxtomar/TransResUNet.


Subject(s)
Adenoma , Colonic Neoplasms , Colonic Polyps , Colorectal Neoplasms , Humans , Colorectal Neoplasms/diagnosis , Early Detection of Cancer , Colonic Polyps/diagnostic imaging , Colonic Neoplasms/diagnostic imaging , Adenoma/diagnostic imaging
6.
ArXiv ; 2023 Oct 02.
Article in English | MEDLINE | ID: mdl-38106459

ABSTRACT

Pediatric brain and spinal cancers remain the leading cause of cancer-related death in children. However, advancements in clinical decision support for pediatric neuro-oncology that utilize the wealth of radiology imaging data collected through standard care have significantly lagged behind other domains. Such data are ripe for use with predictive analytics such as artificial intelligence (AI) methods, which require large datasets. To address this unmet need, we provide a multi-institutional, large-scale pediatric dataset of 23,101 multi-parametric MRI exams acquired through routine care for 1,526 brain tumor patients, as part of the Children's Brain Tumor Network. This includes longitudinal MRIs across various cancer diagnoses, with associated patient-level clinical information, digital pathology slides, as well as tissue genotype and omics data. To facilitate downstream analysis, treatment-naïve images for 370 subjects were processed and released through the NCI Childhood Cancer Data Initiative via the Cancer Data Service. Through ongoing efforts to continuously build these imaging repositories, we aim to accelerate discovery and translational AI models with real-world data, and ultimately to empower precision medicine for children.

7.
NPJ Digit Med ; 6(1): 220, 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38012349

ABSTRACT

Machine learning and deep learning are two subsets of artificial intelligence that involve teaching computers to learn and make decisions from data. Most recent developments in artificial intelligence come from deep learning, which has proven revolutionary in almost all fields, from computer vision to the health sciences. Deep learning has significantly changed conventional clinical practice in medicine. Although some subfields of medicine, such as pediatrics, have been relatively slow to receive the critical benefits of deep learning, related research in pediatrics has started to accumulate to a significant level. Hence, in this paper, we review recently developed machine learning and deep learning-based solutions for neonatology applications. Following the PRISMA 2020 guidelines, we systematically evaluate the roles of both classical machine learning and deep learning in neonatology applications, describe the methodologies, including algorithmic developments, and outline the remaining challenges in the assessment of neonatal diseases. To date, the primary areas of focus in neonatology regarding AI applications have included survival analysis, neuroimaging, analysis of vital parameters and biosignals, and diagnosis of retinopathy of prematurity. We categorically summarize 106 research articles from 1996 to 2022 and discuss their strengths and limitations. Finally, we discuss possible directions for new AI models and the future of neonatology with the rising power of AI, suggesting roadmaps for integrating AI into neonatal intensive care units.

8.
J Infect Dis ; 228(Suppl 4): S322-S336, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37788501

ABSTRACT

The mass production of the graphics processing unit and the coronavirus disease 2019 (COVID-19) pandemic have provided the means and the motivation, respectively, for rapid developments in artificial intelligence (AI) and medical imaging techniques. This has led to new opportunities to improve patient care but also new challenges that must be overcome before these techniques are put into practice. In particular, early AI models reported high performances but failed to perform as well on new data. However, these mistakes motivated further innovation focused on developing models that were not only accurate but also stable and generalizable to new data. The recent developments in AI in response to the COVID-19 pandemic will reap future dividends by facilitating, expediting, and informing other medical AI applications and educating the broad academic audience on the topic. Furthermore, AI research on imaging animal models of infectious diseases offers a unique problem space that can fill in evidence gaps that exist in clinical infectious disease research. Here, we aim to provide a focused assessment of the AI techniques leveraged in the infectious disease imaging research space, highlight the unique challenges, and discuss burgeoning solutions.


Subject(s)
COVID-19 , Communicable Diseases , Humans , Artificial Intelligence , Pandemics , Diagnostic Imaging/methods , Communicable Diseases/diagnostic imaging
9.
Front Radiol ; 3: 1175473, 2023.
Article in English | MEDLINE | ID: mdl-37810757

ABSTRACT

Purpose: The goal of this work is to identify the best optimizers for deep learning in the context of medical image segmentation and to provide guidance on how to design segmentation networks with effective optimization strategies. Approach: Most successful deep learning networks are trained using two types of stochastic gradient descent (SGD) algorithms: adaptive learning schemes and accelerated schemes. Adaptive learning helps with fast convergence by starting with a larger learning rate (LR) and gradually decreasing it. Among the accelerated schemes, momentum optimizers are particularly effective at quickly optimizing neural networks. By revealing the potential interplay between these two types of algorithms (LR and momentum optimizers, or momentum rate (MR) for short), we explore the two variants of SGD algorithms in a single setting. We suggest using cyclic learning as the base optimizer and integrating optimal values of the learning rate and momentum rate. The new optimization function proposed in this work is based on the Nesterov accelerated gradient optimizer, which is computationally more efficient and has better generalization capabilities than other adaptive optimizers. Results: We investigated the relationship between LR and MR on the important problem of medical image segmentation of cardiac structures from MRI and CT scans. We conducted experiments using the cardiac imaging dataset from the ACDC challenge of MICCAI 2017, and four different architectures were shown to be successful for cardiac image segmentation problems. Our comprehensive evaluations demonstrate that the proposed optimizer achieved better results (over a 2% improvement in the Dice metric) than other optimizers in the deep learning literature, with similar or lower computational cost, in both single- and multi-object segmentation settings. Conclusions: We hypothesized that combining accelerated and adaptive optimization methods can have a drastic effect on medical image segmentation performance. To this end, we proposed a new cyclic optimization method (Cyclic Learning/Momentum Rate) to address the efficiency and accuracy problems in deep learning-based medical image segmentation. The proposed strategy yielded better generalization than adaptive optimizers.
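A triangular cyclic schedule of the kind discussed above can be applied to both the learning rate and the momentum rate; a minimal sketch (the ranges and period below are illustrative, not the paper's values, and momentum is cycled opposite to the learning rate, a common pairing):

```python
def cyclic_value(step, lo, hi, period):
    """Triangular cyclic schedule: ramps linearly lo -> hi -> lo over one period."""
    half = period / 2.0
    phase = step % period
    frac = phase / half if phase < half else (period - phase) / half
    return lo + (hi - lo) * frac

# Illustrative ranges: LR cycles upward while momentum cycles downward.
for step in (0, 25, 50, 75):
    lr = cyclic_value(step, 1e-4, 1e-2, period=100)
    mr = cyclic_value(step, 0.95, 0.85, period=100)  # lo > hi inverts the cycle
    print(step, lr, mr)
```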

10.
Curr Opin Gastroenterol ; 39(5): 436-447, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37523001

ABSTRACT

PURPOSE OF REVIEW: Early and accurate diagnosis of pancreatic cancer is crucial for improving patient outcomes, and artificial intelligence (AI) algorithms have the potential to play a vital role in computer-aided diagnosis of pancreatic cancer. In this review, we aim to provide the latest and relevant advances in AI, specifically deep learning (DL) and radiomics approaches, for pancreatic cancer diagnosis using cross-sectional imaging examinations such as computed tomography (CT) and magnetic resonance imaging (MRI). RECENT FINDINGS: This review highlights the recent developments in DL techniques applied to medical imaging, including convolutional neural networks (CNNs), transformer-based models, and novel deep learning architectures that focus on multitype pancreatic lesions, multiorgan and multitumor segmentation, as well as incorporating auxiliary information. We also discuss advancements in radiomics, such as improved imaging feature extraction, optimized machine learning classifiers and integration with clinical data. Furthermore, we explore implementing AI-based clinical decision support systems for pancreatic cancer diagnosis using medical imaging in practical settings. SUMMARY: Deep learning and radiomics with medical imaging have demonstrated strong potential to improve diagnostic accuracy of pancreatic cancer, facilitate personalized treatment planning, and identify prognostic and predictive biomarkers. However, challenges remain in translating research findings into clinical practice. More studies are required focusing on refining these methods, addressing significant limitations, and developing integrative approaches for data analysis to further advance the field of pancreatic cancer diagnosis.


Subject(s)
Deep Learning , Pancreatic Neoplasms , Humans , Artificial Intelligence , Pancreas , Pancreatic Neoplasms/diagnostic imaging , Tomography, X-Ray Computed
11.
J Med Imaging (Bellingham) ; 10(2): 024002, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36891503

ABSTRACT

Purpose: We perform anatomical landmarking for craniomaxillofacial (CMF) bones without explicitly segmenting them. To this end, we propose a simple yet efficient deep network architecture, called the relational reasoning network (RRN), to accurately learn the local and global relations among landmarks in CMF bones, specifically the mandible, maxilla, and nasal bones. Approach: The proposed RRN works in an end-to-end manner, utilizing learned relations of the landmarks based on dense-block units. Given a few landmarks as input, RRN treats the landmarking process as a data imputation problem in which the predicted landmarks are considered missing. Results: We applied RRN to cone-beam computed tomography scans obtained from 250 patients. With a fourfold cross-validation technique, we obtained an average root mean squared error of < 2 mm per landmark. The RRN revealed unique relationships among the landmarks that help in inferring the informativeness of the landmark points. The proposed system identifies missing landmark locations accurately even when severe pathology or deformations are present in the bones. Conclusions: Accurately identifying anatomical landmarks is a crucial step in deformation analysis and surgical planning for CMF surgeries. Achieving this goal without explicit bone segmentation addresses a major limitation of segmentation-based approaches, where segmentation failure (as is often the case in bones with severe pathology or deformation) can easily lead to incorrect landmarking. To the best of our knowledge, this is the first algorithm of its kind to find anatomical relations of objects using deep learning.
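The per-landmark error metric above (root mean squared error of Euclidean landmark distances, in mm) can be sketched generically as:

```python
import numpy as np

def landmark_rmse(pred, truth):
    """RMSE of Euclidean landmark errors; pred/truth have shape (N, 3) in mm."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    errors = np.linalg.norm(pred - truth, axis=1)  # per-landmark distance
    return float(np.sqrt(np.mean(errors ** 2)))

# Two toy landmarks, off by 1 mm and 2 mm along z:
pred = [[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]]
truth = [[0.0, 0.0, 1.0], [1.0, 1.0, 2.0]]
print(landmark_rmse(pred, truth))  # sqrt((1 + 4) / 2) ≈ 1.581
```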

12.
Neuroimaging Clin N Am ; 33(2): 279-297, 2023 May.
Article in English | MEDLINE | ID: mdl-36965946

ABSTRACT

Advanced imaging techniques are needed to assist in providing a prognosis for patients with traumatic brain injury (TBI), particularly mild TBI (mTBI). Diffusion tensor imaging (DTI) is one promising advanced imaging technique, but has shown variable results in patients with TBI and is not without limitations, especially when considering individual patients. Efforts to resolve these limitations are being explored and include developing advanced diffusion techniques, creating a normative database, improving study design, and testing machine learning algorithms. This article will review the fundamentals of DTI, providing an overview of the current state of its utility in evaluating and providing prognosis in patients with TBI.


Subject(s)
Brain Concussion , Brain Injuries, Traumatic , Humans , Diffusion Tensor Imaging/methods , Brain Injuries, Traumatic/diagnostic imaging , Diffusion Magnetic Resonance Imaging , Prognosis , Brain/diagnostic imaging
13.
Med Biol Eng Comput ; 61(1): 285-295, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36414816

ABSTRACT

Renal scintigraphy is one technique for obtaining unique and reliable information in medicine, and segmentation of the kidneys is a key step in quantitative renal scintigraphy. Here, an automatic segmentation framework is proposed for computer-aided renal scintigraphy procedures. To extract the kidney boundary in dynamic renal scintigraphic images, a multi-step approach is proposed, featuring two key steps: localization and segmentation. First, the ROI of each kidney is estimated automatically using Otsu's thresholding, anatomical constraints, and integral projection. The ROIs obtained for the kidneys are then used as initial contours to produce the final kidney contours via geometric active contours. For this segmentation step, an improved variational level set is utilized through the Mumford-Shah formulation. Thirty data sets acquired with an e.cam gamma camera system (SIEMENS) were used to assess the proposed method, with performance evaluated against manually outlined borders using several measures. The proposed segmentation method successfully extracted the kidney boundary in renal scintigraphic images, achieving a sensitivity of 95.15% and a specificity of 95.33%; the area under the curve (AUC) in the ROC analysis was 0.974. The proposed technique successfully segmented the renal contour in dynamic renal scintigraphy, with correct kidney segmentation across all data sets. In addition, the technique was successful on noisy and low-resolution images and in challenging cases with close interfering activities, such as liver and spleen activity.
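The Otsu thresholding step used above for kidney ROI localization picks the intensity threshold that maximizes between-class variance of the image histogram; a plain-numpy sketch (a generic implementation, not the authors' code):

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(np.ravel(image), bins=bins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(hist)                 # pixels at or below each bin
    w1 = w0[-1] - w0                     # pixels above each bin
    m = np.cumsum(hist * centers)
    mu0 = m / np.where(w0 > 0, w0, 1)    # class means (guarding empty classes)
    mu1 = (m[-1] - m) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[int(np.argmax(between))]

# Two well-separated intensity clusters; the threshold should fall between them.
img = np.concatenate([np.full(100, 10.0), np.full(100, 200.0)])
t = otsu_threshold(img)
print(10 < t < 200)  # True
```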


Subject(s)
Algorithms , Kidney , Kidney/diagnostic imaging , Abdomen , Liver , Computers , Image Processing, Computer-Assisted/methods
14.
Mach Learn Med Imaging ; 14349: 134-143, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38274402

ABSTRACT

Intraductal papillary mucinous neoplasm (IPMN) cysts are pre-malignant pancreas lesions that can progress into pancreatic cancer. Therefore, detecting them and stratifying their risk level is of the utmost importance for effective treatment planning and disease control. However, this is a highly challenging task because of the diverse and irregular shape, texture, and size of IPMN cysts as well as of the pancreas itself. In this study, we propose a novel computer-aided diagnosis pipeline for IPMN risk classification from multi-contrast MRI scans. Our analysis framework includes an efficient volumetric self-adapting segmentation strategy for pancreas delineation, followed by a newly designed deep learning-based classification scheme combined with a radiomics-based predictive approach. We test our decision-fusion model in a series of rigorous experiments on multi-center data sets comprising 246 multi-contrast MRI scans from five centers and obtain performance superior to the state of the art (SOTA) in this field. Our ablation studies demonstrate the significance of both the radiomics and deep learning modules for achieving the new SOTA performance compared to international guidelines and published studies (81.9% vs. 61.3% accuracy). Our findings have important implications for clinical decision-making. The code is available upon publication.

15.
IEEE Int Conf Comput Vis Workshops ; 2023: 2646-2655, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38298808

ABSTRACT

Accurate medical image segmentation is of utmost importance for enabling automated clinical decision procedures. However, prevailing supervised deep learning approaches for medical image segmentation encounter significant challenges due to their heavy dependence on extensive labeled training data. To tackle this issue, we propose a novel self-supervised algorithm, S3-Net, which integrates a robust framework based on the proposed Inception Large Kernel Attention (I-LKA) modules. This architectural enhancement makes it possible to comprehensively capture contextual information while preserving local intricacies, enabling precise semantic segmentation. Furthermore, since lesions in medical images often exhibit deformations, we leverage deformable convolution as an integral component to effectively capture and delineate lesion deformations for superior object boundary definition. Additionally, our self-supervised strategy emphasizes the acquisition of invariance to affine transformations, which are commonly encountered in medical scenarios. This emphasis on robustness to geometric distortions significantly enhances the model's ability to accurately model and handle such distortions. To enforce spatial consistency and promote the grouping of spatially connected image pixels with similar feature representations, we introduce a spatial consistency loss term. This aids the network in effectively capturing the relationships among neighboring pixels and enhances the overall segmentation quality. The S3-Net approach iteratively learns pixel-level feature representations for image content clustering in an end-to-end manner. Our experimental results on skin lesion and lung organ segmentation tasks show the superior performance of our method compared to SOTA approaches.

16.
Med Image Comput Comput Assist Interv ; 14222: 736-746, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38299070

ABSTRACT

Vision Transformer (ViT) models have demonstrated breakthroughs in a wide range of computer vision tasks. However, compared to convolutional neural network (CNN) models, ViT models have been observed to struggle to capture the high-frequency components of images, which can limit their ability to detect local textures and edge information. Because abnormalities in human tissue, such as tumors and lesions, can vary greatly in structure, texture, and shape, high-frequency information such as texture is crucial for effective semantic segmentation. To address this limitation of ViT models, we propose a new technique, Laplacian-Former, that enhances the self-attention map by adaptively re-calibrating the frequency information in a Laplacian pyramid. More specifically, the proposed method utilizes a dual attention mechanism combining efficient attention and frequency attention: the efficient attention mechanism reduces the complexity of self-attention to linear while producing the same output, selectively intensifying the contribution of shape and texture features. Furthermore, we introduce a novel efficient enhancement multi-scale bridge that effectively transfers spatial information from the encoder to the decoder while preserving the fundamental features. We demonstrate the efficacy of Laplacian-Former on multi-organ and skin lesion segmentation tasks, with Dice score improvements of +1.87% and +0.76%, respectively, over SOTA approaches. Our implementation is publicly available on GitHub.

17.
Sensors (Basel) ; 22(23)2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36502261

ABSTRACT

Condition assessment of civil engineering structures has been an active research area due to growing concerns over the safety of aged as well as new civil structures. Utilization of emerging immersive visualization technologies such as Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in the architectural, engineering, and construction (AEC) industry has demonstrated that these visualization tools can be paradigm-shifting. Extended Reality (XR), an umbrella term for VR, AR, and MR technologies, has found many diverse use cases in the AEC industry. Despite this exciting trend, there is no review study on the usage of XR technologies for the condition assessment of civil structures. Thus, the present paper aims to fill this gap by presenting a literature review encompassing the utilization of XR technologies for the condition assessment of civil structures. This study aims to provide essential information and guidelines for practitioners and researchers on using XR technologies to maintain the integrity and safety of civil structures.


Subject(s)
Augmented Reality , Virtual Reality , Engineering , Technology
18.
Pancreas ; 51(6): 586-592, 2022 07 01.
Article in English | MEDLINE | ID: mdl-36206463

ABSTRACT

This core component of the Diabetes RElated to Acute pancreatitis and its Mechanisms (DREAM) study will examine the hypothesis that advanced magnetic resonance imaging (MRI) techniques can reflect underlying pathophysiologic changes and provide imaging biomarkers that predict diabetes mellitus (DM) after acute pancreatitis (AP). A subset of participants in the DREAM study will enroll and undergo serial MRI examinations using a specific research protocol. The aim of the study is to differentiate at-risk individuals from those who remain euglycemic by identifying parenchymal features after AP. Performing longitudinal MRI will enable us to observe and understand the natural history of post-AP DM. We will compare MRI parameters obtained by interrogating tissue properties in euglycemic, prediabetic, and incident diabetes subjects and correlate them with metabolic, genetic, and immunological phenotypes. Differentiating imaging parameters will be combined to develop a quantitative composite risk score, which will potentially have the ability to monitor the risk of DM in clinical practice or trials. We will use artificial intelligence, specifically deep learning, algorithms to optimize the predictive ability of MRI. In addition to the research MRI, the DREAM study will also correlate clinical computed tomography and MRI scans with DM development.


Subject(s)
Diabetes Mellitus, Type 1 , Pancreatitis , Acute Disease , Artificial Intelligence , Biomarkers , Diabetes Mellitus, Type 1/complications , Diabetes Mellitus, Type 1/diagnosis , Humans , Magnetic Resonance Imaging/methods , Pancreatitis/diagnostic imaging , Pancreatitis/etiology
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 5030-5034, 2022 07.
Article in English | MEDLINE | ID: mdl-36086321

ABSTRACT

In our comprehensive experiments and evaluations, we show that it is possible to generate multiple contrasts (even all of them synthetically) and to use the synthetically generated images to train an image segmentation engine. We show promising segmentation results, tested on real multi-contrast MRI scans, when delineating muscle, fat, bone, and bone marrow, all trained on synthetic images. Based on synthetic image training, our segmentation results were as high as 93.91%, 94.11%, 91.63%, and 95.33% for muscle, fat, bone, and bone marrow delineation, respectively. These results were not significantly different from those obtained when real images were used for segmentation training: 94.68%, 94.67%, 95.91%, and 96.82%, respectively. Clinical relevance: Synthetically generated images could potentially be used for large-scale training of deep networks for segmentation purposes, and the small-data-set problem of many clinical imaging applications can potentially be addressed with the proposed algorithm.


Subject(s)
Algorithms , Magnetic Resonance Imaging , Magnetic Resonance Imaging/methods , Records
20.
medRxiv ; 2022 Dec 22.
Article in English | MEDLINE | ID: mdl-36172131

ABSTRACT

The success of artificial intelligence in clinical environments relies upon the diversity and availability of training data. In some cases, social media data may be used to counterbalance the limited amount of accessible, well-curated clinical data, but this possibility remains largely unexplored. In this study, we mined YouTube to collect voice data from individuals with self-declared positive COVID-19 tests during time periods in which Omicron was the predominant variant, while also sampling non-Omicron COVID-19 variants, other upper respiratory infections (URI), and healthy subjects. The resulting dataset was used to train a DenseNet model to detect the Omicron variant from voice changes. Our model achieved 0.85/0.80 specificity/sensitivity in separating Omicron samples from healthy samples and 0.76/0.70 specificity/sensitivity in separating Omicron samples from symptomatic non-COVID samples. In comparison with past studies, which used scripted voice samples, we show that leveraging the intra-sample variance inherent to unscripted speech enhances generalization. Our work introduces novel design paradigms for audio-based diagnostic tools and establishes the potential of social media data to train digital diagnostic models suitable for real-world deployment.
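The specificity/sensitivity pairs reported above follow the standard confusion-matrix definitions; a generic sketch with hypothetical labels (1 = Omicron-positive, 0 = healthy):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) for 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical ground truth and predictions:
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0, 0, 0, 1],
                                     [1, 1, 0, 0, 0, 1, 0, 1])
print(sens, spec)  # 0.75 0.75
```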
