Results 1-20 of 38
1.
J Med Imaging (Bellingham) ; 10(6): 066501, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38074629

ABSTRACT

Purpose: Previous studies have demonstrated that three-dimensional (3D) volumetric renderings of magnetic resonance imaging (MRI) brain data can be used to identify patients using facial recognition. We have shown that facial features can be identified on simulation computed tomography (CT) images for radiation oncology and mapped to face images from a database. We aim to determine whether CT images can be anonymized using anonymization software that was designed for T1-weighted MRI data. Approach: Our study examines (1) the ability of off-the-shelf anonymization algorithms to anonymize CT data and (2) the ability of facial recognition algorithms to match faces from the anonymized data to a database of facial images. We generated 3D renderings from 57 head CT scans from The Cancer Imaging Archive database. Data were anonymized using AFNI (deface, reface, and 3Dskullstrip) and FSL's BET. Anonymized data were compared to the original renderings and passed through facial recognition algorithms (VGG-Face, FaceNet, DLib, and SFace) using a facial database (Labeled Faces in the Wild) to determine what matches could be found. Results: All modules were able to process CT data, and AFNI's 3Dskullstrip and FSL's BET consistently showed lower reidentification rates than the original renderings. Conclusions: These results highlight the potential of anonymization algorithms as a clinical standard for deidentifying brain CT data. Our study demonstrates the importance of continued vigilance for patient privacy in publicly shared datasets and of continued evaluation of anonymization methods for CT data.
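The matching step described above can be reproduced at small scale with an off-the-shelf toolkit. A minimal sketch using the open-source deepface library, which implements the VGG-Face, FaceNet, Dlib, and SFace models named in the abstract; the file and folder names are hypothetical placeholders, not the study's actual data:

    from deepface import DeepFace  # pip install deepface

    # Hypothetical inputs: a 2D capture of a CT surface rendering and a
    # folder of face photographs (e.g., Labeled Faces in the Wild).
    matches = DeepFace.find(
        img_path="ct_rendering.png",
        db_path="lfw_faces/",
        model_name="VGG-Face",    # the study also used FaceNet, Dlib, and SFace
        enforce_detection=False,  # defaced renderings may lack a detectable face
    )
    print(matches[0].head())      # candidate identities ranked by embedding distance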

2.
J Med Imaging (Bellingham) ; 10(Suppl 1): S11920, 2023 Feb.
Article in English | MEDLINE | ID: mdl-37234196

ABSTRACT

Purpose: We investigate how texture information may contribute to the response of a blur measure (BM), with motivation rooted in mammography. This is vital because the interpretation of a BM is typically not evaluated with respect to the texture present in an image. We are particularly concerned with lower scales of blur (≤1 mm), as such blur is least likely to be detected but can still have a detrimental effect on the detectability of microcalcifications. Approach: Three sets of linear models, in which BM response was modeled as a linear combination of texture information determined by texture measures (TMs), were constructed from three datasets of equal-blur-level images: one of computer-generated, mammogram-like clustered lumpy background (CLB) images and two derived from the Brodatz texture images. The linear models were refined by removing those TMs that were not significantly non-zero across all three datasets for each BM. We used five levels of Gaussian blur to blur the CLB images and assessed the ability of the BMs and TMs to separate the images by blur level. Results: Many TMs that appeared frequently in the reduced linear models mimicked the structure of the BMs that they modeled. Surprisingly, while none of the BMs could separate the CLB images across all levels of blur, a group of TMs could. These TMs occurred infrequently in the reduced linear models, meaning that they rely on information different from that used by the BMs. Conclusion: These results confirm our hypothesis that BMs can be influenced by texture information in an image. That a subset of TMs performed better than all BMs on the blur classification problem with the CLB images further shows that conventional BMs may not be the optimal tool for blur classification in mammogram images.
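The refinement step, in which TMs whose coefficients are not significantly non-zero are dropped from the linear model of BM response, can be sketched with ordinary least squares. This is an illustrative sketch on randomly generated placeholder data, not the paper's actual features:

    import numpy as np
    import statsmodels.api as sm

    # Placeholder data: rows are equal-blur-level images, columns are TM responses.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))                    # 5 stand-in TMs
    y = X @ np.array([0.8, 0.0, 0.3, 0.0, 0.1]) \
        + rng.normal(0, 0.1, 100)                    # stand-in BM response

    # Model the BM as a linear combination of TMs, then keep only the
    # TMs whose coefficients are significantly non-zero.
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    keep = np.where(fit.pvalues[1:] < 0.05)[0]
    print("TMs retained in the reduced model:", keep)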

3.
J Med Imaging (Bellingham) ; 10(1): 014005, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36820234

ABSTRACT

Purpose: Segmenting medical images accurately and reliably is important for disease diagnosis and treatment. It is a challenging task because of the wide variety of object sizes, shapes, and scanning modalities. Recently, many convolutional neural networks have been designed for segmentation tasks and have achieved great success. Few studies, however, have fully considered the sizes of objects; thus, most demonstrate poor performance for small object segmentation. This can have a significant impact on the early detection of diseases. Approach: We propose a context axial reverse attention network (CaraNet) to improve segmentation performance on small objects compared with several recent state-of-the-art models. CaraNet applies axial reverse attention and channel-wise feature pyramid modules to extract feature information about small medical objects. We evaluate our model using six different metrics. Results: We test CaraNet on segmentation datasets for brain tumor (BraTS 2018) and polyp (Kvasir-SEG, CVC-ColonDB, CVC-ClinicDB, CVC-300, and ETIS-LaribPolypDB). CaraNet achieves the top-rank mean Dice segmentation accuracy, and the results show a distinct advantage of CaraNet in the segmentation of small medical objects. Conclusions: We propose CaraNet to segment small medical objects; it outperforms state-of-the-art methods.
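The mean Dice accuracy reported above is a standard overlap measure; a minimal reference implementation for binary masks (our own sketch, not taken from the CaraNet code):

    import numpy as np

    def dice(pred, target, eps=1e-7):
        """Dice coefficient between two binary segmentation masks."""
        inter = np.logical_and(pred, target).sum()
        return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

    # Toy usage with random binary masks.
    rng = np.random.default_rng(0)
    print(dice(rng.random((128, 128)) > 0.5, rng.random((128, 128)) > 0.5))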

4.
Comput Biol Med ; 154: 106579, 2023 03.
Article in English | MEDLINE | ID: mdl-36706569

ABSTRACT

Deep learning techniques are proving instrumental in identifying, classifying, and quantifying patterns in medical images. Segmentation is one of the important applications in medical image analysis, and the U-Net has become the predominant deep-learning approach to medical image segmentation tasks. Existing U-Net-based models have limitations in several respects, however, including: the requirement for millions of parameters, which consumes considerable computational resources and memory; the lack of global information; and incomplete segmentation in difficult cases. To remove some of those limitations, we built on our previous work and applied two modifications to improve the U-Net model: 1) we designed and added a dilated channel-wise CNN module, and 2) we simplified the U-shape network. We then proposed a novel lightweight architecture, the Channel-wise Feature Pyramid Network for Medicine (CFPNet-M). To evaluate our method, we selected five datasets from different imaging modalities: thermography, electron microscopy, endoscopy, dermoscopy, and digital retinal images. We compared its performance with several models of varying complexity, using the Tanimoto similarity instead of the Jaccard index for gray-level image comparisons. CFPNet-M achieves segmentation results on all five medical datasets that are comparable to existing methods, yet requires only 8.8 MB of memory and just 0.65 million parameters, about 2% of the parameter count of U-Net. Unlike other deep-learning segmentation methods, this new approach is suitable for real-time application: its inference speed can reach 80 frames per second when implemented on a single RTX 2070Ti GPU with an input image size of 256 × 192 pixels.
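The Tanimoto similarity used here in place of the Jaccard index extends naturally to gray-level images; a minimal sketch of the standard continuous Tanimoto coefficient (our own formulation, not the paper's code):

    import numpy as np

    def tanimoto(a, b):
        """Continuous Tanimoto similarity between two gray-level images.
        For binary masks this reduces exactly to the Jaccard index."""
        a, b = a.ravel().astype(float), b.ravel().astype(float)
        dot = np.dot(a, b)
        return dot / (np.dot(a, a) + np.dot(b, b) - dot)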


Subjects
Medicine, Thermography, Computer-Assisted Image Processing
5.
Biomed Signal Process Control ; 79: 104250, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36188130

ABSTRACT

Automatic segmentation of infected regions in computed tomography (CT) images is necessary for the initial diagnosis of COVID-19. Deep-learning-based methods have the potential to automate this task but require a large amount of data with pixel-level annotations. Training a deep network with annotated lung cancer CT images, which are easier to obtain, can alleviate this problem to some extent. However, this approach may suffer reduced performance when applied to unseen COVID-19 images during the testing phase, owing to differences in image intensity and object region distribution between the training and test sets. In this paper, we propose a novel unsupervised method for COVID-19 infection segmentation that aims to learn domain-invariant features from lung cancer and COVID-19 images, improving the generalization ability of the segmentation network to COVID-19 CT images. First, to address the intensity difference, we propose a novel data augmentation module based on the Fourier transform, which transfers the annotated lung cancer data into the style of COVID-19 images. Second, to reduce the distribution difference, we design a teacher-student network to learn rotation-invariant features for segmentation. The experiments demonstrate that, even without access to the annotations of the COVID-19 CT images during the training phase, the proposed network achieves state-of-the-art segmentation performance on COVID-19 infection.
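The Fourier-based augmentation module is described only at a high level; a common way to realize such style transfer (in the spirit of Fourier domain adaptation) swaps the low-frequency amplitude spectrum while preserving phase. A sketch under that assumption, with beta a hypothetical band-size parameter:

    import numpy as np

    def fourier_style_transfer(source, target, beta=0.01):
        """Give `source` (annotated lung cancer slice) the low-frequency
        amplitude spectrum of `target` (COVID-19 slice), keeping the
        source phase, i.e., its anatomical content."""
        fs = np.fft.fftshift(np.fft.fft2(source))
        ft = np.fft.fftshift(np.fft.fft2(target))
        amp, pha = np.abs(fs), np.angle(fs)
        h, w = source.shape
        b = int(min(h, w) * beta)          # half-size of the swapped square
        cy, cx = h // 2, w // 2
        amp[cy - b:cy + b + 1, cx - b:cx + b + 1] = \
            np.abs(ft)[cy - b:cy + b + 1, cx - b:cx + b + 1]
        return np.real(np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * pha))))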

6.
J Med Imaging (Bellingham) ; 9(Suppl 1): S12209, 2022 Feb.
Article in English | MEDLINE | ID: mdl-36034746

ABSTRACT

Image processing has contributed greatly to the clinical applications of medical imaging. Many of the major developments have been stimulated by and reported at the Image Processing (IP) conference held annually as part of the SPIE Medical Imaging meeting. The evolution, focus, and impact of the IP conference are reviewed.

7.
Gynecol Oncol ; 166(1): 165-172, 2022 07.
Article in English | MEDLINE | ID: mdl-35491268

ABSTRACT

OBJECTIVE: To assess trends in guideline-adherent chemoradiation therapy (GA-CRT) for locally advanced cervical cancer relative to Patient Protection and Affordable Care Act (ACA) implementation. METHODS: National Cancer Database patients treated with chemoradiation for locally advanced cervical cancer (FIGO 2018 Stage IB3-IVA) from 2004 to 2016 were included. GA-CRT was defined according to NCCN guidelines as: 1) delivery of external beam radiation, 2) brachytherapy, 3) chemotherapy, and 4) no radical hysterectomy. Logistic regression was used to determine trends in GA-CRT relative to the ACA, and survival was estimated using Kaplan-Meier analysis. RESULTS: 37,772 patients met inclusion criteria (pre-ACA: 16,169; post-ACA: 21,673). A total of 33,116 patients had squamous cell carcinoma and 4626 patients had other histologies. Forty-five percent of patients had lymph node-positive disease. In total, 14.6% of patients had Stage I disease, 41.8% Stage II, 36.4% Stage III, and 7.9% Stage IVA. On multivariable analysis, Medicare insurance (OR 0.91; 95% CI: 0.84-0.99 compared to commercial insurance), non-squamous histology (OR 0.83; 95% CI: 0.77-0.89 for adenocarcinoma), and increasing Charlson-Deyo score were associated with decreased odds of receiving GA care. Increasing T-stage was associated with greater receipt of GA-CRT. The percentage of the population receiving guideline-adherent care increased post-ACA (pre-ACA 28%; post-ACA 34%; p < 0.001). Adherence to treatment guidelines was associated with a 15% higher 2-year survival (GA 76%; not GA 61%; p < 0.001), and higher 2-year survival was seen in the post-ACA cohort (pre-ACA 62%; post-ACA 69%; p < 0.001). CONCLUSIONS: Implementation of the ACA was associated with improved GA-CRT and survival in patients with locally advanced cervical cancer.
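Kaplan-Meier estimates of the kind reported here can be reproduced with the lifelines library; a minimal sketch with invented toy data (column names and values are illustrative only):

    import pandas as pd
    from lifelines import KaplanMeierFitter  # pip install lifelines

    # Invented toy data: follow-up in months, death indicator, and ACA era.
    df = pd.DataFrame({"months": [24, 10, 36, 5, 30, 18],
                       "death": [0, 1, 0, 1, 0, 1],
                       "era": ["pre", "pre", "pre", "post", "post", "post"]})
    kmf = KaplanMeierFitter()
    for era, grp in df.groupby("era"):
        kmf.fit(grp["months"], event_observed=grp["death"], label=era)
        print(era, float(kmf.predict(24)))  # estimated 2-year survival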


Subjects
Squamous Cell Carcinoma, Uterine Cervical Neoplasms, Aged, Chemoradiotherapy, Female, Humans, Medicare, Patient Protection and Affordable Care Act, United States/epidemiology, Uterine Cervical Neoplasms/pathology
8.
Epilepsia ; 63(3): 629-640, 2022 03.
Article in English | MEDLINE | ID: mdl-34984672

ABSTRACT

OBJECTIVE: This study was undertaken to identify shared functional network characteristics among focal epilepsies of different etiologies, to distinguish epilepsy patients from controls, and to lateralize seizure focus using functional connectivity (FC) measures derived from resting state functional magnetic resonance imaging (MRI). METHODS: Data were taken from 103 adult and 65 pediatric focal epilepsy patients (with or without lesion on MRI) and 109 controls across four epilepsy centers. We used three whole-brain FC measures: parcelwise connectivity matrix, mean FC, and degree of FC. We trained support vector machine models with fivefold cross-validation (1) to distinguish patients from controls and (2) to lateralize the hemisphere of seizure onset in patients. We reported the regions and connections with the highest importance from each model as the common FC differences between the compared groups. RESULTS: FC measures related to the default mode and limbic networks had higher importance than other networks for distinguishing epilepsy patients from controls. In lateralization models, regions related to the somatosensory, visual, default mode, and basal ganglia networks showed higher importance. The epilepsy-versus-control classification model trained using a 400-parcel connectivity matrix achieved a median testing accuracy of 75.6% (median area under the curve [AUC] = .83) in repeated independent testing. Lateralization accuracy using the 400-parcel connectivity matrix reached a median accuracy of 64.0% (median AUC = .69). SIGNIFICANCE: Machine learning models revealed common FC alterations in a heterogeneous group of patients with focal epilepsies. The distribution of the most altered regions supports the hypothesis that shared functional alteration exists beyond the seizure onset zone and its epileptic network. We showed that FC measures can distinguish patients from controls and further lateralize focal epilepsies. Future studies are needed to confirm these findings using larger numbers of epilepsy patients.
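The classification pipeline (a linear SVM with fivefold stratified cross-validation on vectorized FC matrices) can be sketched with scikit-learn; the feature matrix below is random placeholder data with a reduced feature dimension:

    import numpy as np
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    # Placeholder features: in the study each row would be the vectorized
    # lower triangle of a subject's 400-parcel FC matrix (79,800 values);
    # a reduced dimension is used here to keep the sketch light.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(277, 1000))      # 168 patients + 109 controls
    y = np.r_[np.ones(168), np.zeros(109)]

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    aucs = cross_val_score(SVC(kernel="linear"), X, y, cv=cv, scoring="roc_auc")
    print(aucs.mean())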


Subjects
Partial Epilepsies, Adult, Brain/diagnostic imaging, Brain Mapping, Child, Partial Epilepsies/diagnostic imaging, Humans, Machine Learning, Magnetic Resonance Imaging/methods, Seizures
9.
Pract Radiat Oncol ; 12(2): 120-124, 2022.
Article in English | MEDLINE | ID: mdl-34649005

ABSTRACT

Previous studies have demonstrated that patients can be identified from 3-dimensional (3D) reconstructions of computed tomography (CT) or magnetic resonance imaging data of the brain or head and neck. This presents a privacy and security concern for scan data released to public data sets. It is unknown whether the thermoplastic immobilization masks used for treatment planning in radiation therapy are sufficient to prevent facial recognition. Our study sought to evaluate whether patients wearing an immobilization mask could be identified on 3D reconstructions of scan data. We reconstructed 3D images from simulation CT (SIM-CT) scans of 35 patients and compared these to original patient photographs to test whether the thermoplastic mask obfuscated facial features. Four facial recognition algorithms and a blinded human reviewer (a radiation oncologist) were evaluated for their ability to match 3D reconstructions of patients' scans to patient images. The matching procedure was repeated against an expanded testing data set of the 35 patient photographs plus 13,233 facial photographs from the "Labeled Faces in the Wild" data set (13,268 photographs in total). Facial recognition algorithms were able to match a maximum of 83% (range, 60%-83%) of patients to the corresponding images; the blinded radiation oncologist correctly matched 80%. Ethnicity and facial hair were the most common reasons for patient mismatch. In the expanded testing data set, algorithms were also able to match a maximum of 83% (range, 57%-83%) of patients. Most patients could be identified through computer algorithm or human review even under a SIM-CT mask. These results suggest a potential privacy and security concern when SIM-CT data are released to publicly available data sets.
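The 3D reconstruction step, rendering a surface from a CT volume, is commonly done with a marching cubes isosurface; a minimal sketch on a stand-in volume (the threshold value is an assumption, not taken from the study):

    import numpy as np
    from skimage import measure

    # Stand-in CT volume; a real pipeline would threshold near the
    # skin/air boundary (the level below is an arbitrary assumption).
    ct = np.random.default_rng(0).normal(size=(64, 64, 64))
    verts, faces, normals, values = measure.marching_cubes(ct, level=0.5)
    print(verts.shape, faces.shape)  # surface mesh to render and photograph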


Subjects
Privacy, X-Ray Computed Tomography, Algorithms, Head, Humans, Three-Dimensional Imaging/methods, Immobilization/methods, Neck, X-Ray Computed Tomography/methods
10.
Appl Intell (Dordr) ; 52(6): 6340-6353, 2022.
Article in English | MEDLINE | ID: mdl-34764618

ABSTRACT

Automatic segmentation of infection areas in computed tomography (CT) images has proven to be an effective diagnostic approach for COVID-19. However, due to the limited number of pixel-level annotated medical images, accurate segmentation remains a major challenge. In this paper, we propose an unsupervised domain adaptation based segmentation network to improve the segmentation performance on infection areas in COVID-19 CT images. In particular, we propose to utilize synthetic data and a limited number of unlabeled real COVID-19 CT images to jointly train the segmentation network. Furthermore, we develop a novel domain adaptation module, which is used to align the two domains and effectively improve the segmentation network's generalization capability to the real domain. In addition, we propose an unsupervised adversarial training scheme, which encourages the segmentation network to learn domain-invariant features, so that robust features can be used for segmentation. Experimental results demonstrate that our method achieves state-of-the-art segmentation performance on COVID-19 CT images.
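Adversarial domain alignment of this kind is often built on a gradient reversal layer feeding a domain discriminator; the paper does not publish its module, so the following is a generic PyTorch sketch of that pattern, not the authors' implementation:

    import torch
    from torch import nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; negated, scaled gradient backward."""
        @staticmethod
        def forward(ctx, x, lamb):
            ctx.lamb = lamb
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_out):
            return -ctx.lamb * grad_out, None

    feats = torch.randn(8, 64, requires_grad=True)  # stand-in encoder features
    domain_head = nn.Linear(64, 2)                  # synthetic vs. real classifier
    logits = domain_head(GradReverse.apply(feats, 1.0))
    # Training the domain head through the reversal pushes the encoder
    # toward features the discriminator cannot tell apart (domain-invariant).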

11.
Brachytherapy ; 20(5): 1053-1061, 2021.
Article in English | MEDLINE | ID: mdl-34088594

ABSTRACT

PURPOSE: To provide an assessment of the safety of high-dose-rate afterloading brachytherapy (HDR-BT) based on adverse events reported to OpenFDA, an open-access database maintained by the United States Food and Drug Administration (FDA). METHODS: OpenFDA was queried for HDR-BT events between 1993 and 2019. A brachytherapist categorized adverse events (AEs) by disease site, applicator, manufacturer, event type, dosimetry impact, and outcome. Important findings are summarized. RESULTS: 372 AEs were reported between 1993 and 2019, with a downward trend after 2014. Nearly half of AEs (48.9%) were caused by a device malfunction, and 27.4% resulted in patient injury. Breast (49.2%) and gynecologic (23.7%) were the most common disease sites of AEs. Applicator breaks caused the majority of AEs (64.2%), and breast balloon implants were the most common applicator to malfunction (38.7%). User error contributed to only 16.7% of events, and 11.0% of events required repair of the afterloader. There were no reported staff injuries or patient deaths from an AE; however, 24.7% of patients received an incorrect radiation dose as a result, 16.4% required additional procedures to rectify the AE, and 3.0% of events resulted in unintended radiation to staff. CONCLUSION: The OpenFDA database has shown a decreasing trend in AEs for HDR-BT since 2014. Most AEs are not caused by user error and do not cause patient injury or incorrect radiation dose. Investigation into methods to prevent failures and to improve applicators such as the breast balloon could improve safety. These results support the continued use of HDR-BT as a safe treatment modality for cancer.
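The openFDA device adverse-event endpoint used for such queries is publicly accessible over HTTP; a minimal sketch (the endpoint is real, but the search term below is illustrative, not the study's exact query):

    import requests

    url = "https://api.fda.gov/device/event.json"
    params = {"search": 'device.generic_name:"brachytherapy"', "limit": 10}
    events = requests.get(url, params=params, timeout=30).json()["results"]
    for e in events:
        print(e.get("date_received"), e.get("event_type"))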


Subjects
Brachytherapy, Brachytherapy/methods, Humans, Radiometry, Radiotherapy Dosage, United States/epidemiology, United States Food and Drug Administration
12.
Abdom Radiol (NY) ; 46(9): 4266-4277, 2021 09.
Article in English | MEDLINE | ID: mdl-33813624

ABSTRACT

OBJECTIVE: To predict the histologic grade and type of small papillary renal cell carcinomas (pRCCs) using texture analysis and machine learning algorithms. METHODS: This was a retrospective HIPAA-compliant study. Twenty-four noncontrast (NC), 22 corticomedullary (CM) phase, and 24 nephrographic (NG) phase CTs of small (< 4 cm) surgically resected pRCCs were identified. Surgical pathology classified the tumors as low- or high-Fuhrman histologic grade and type 1 or 2. The axial image with the largest cross-sectional tumor area was exported and segmented. Six histogram and 31 texture (20 gray-level co-occurrence and 11 gray-level run-length) features were calculated for each tumor in each phase. Feature values in low- versus high-grade and type 1 versus type 2 pRCCs were compared. Area under the receiver operating characteristic curve (AUC) was calculated for each feature to assess prediction of histologic grade and type of pRCCs in each phase. Histogram, texture, and combined histogram and texture feature sets were used to train and test three classification algorithms (support vector machine (SVM), random forest, and histogram-based gradient boosting decision tree (HGBDT)) with stratified shuffle splits and threefold cross-validation; AUCs were calculated for each algorithm in each phase to assess prediction of histologic grade and type of pRCCs. RESULTS: Individual histogram and texture features did not show statistically significant differences between low- and high-grade or type 1 and type 2 pRCCs in any phase, and individual features had low predictive power for tumor grade or type in all phases (AUC < 0.70). HGBDT was highly accurate at predicting pRCC histologic grade and type using histogram, texture, or combined histogram and texture feature data from the CM phase (AUCs = 0.97-1.0). All algorithms had their highest AUCs using CM phase feature data sets; AUCs decreased using feature sets from the NC or NG phases. CONCLUSIONS: The histologic grade and type of small pRCCs can be predicted with classification algorithms using CM histogram and texture features, which outperform NC and NG phase image data. Accurate prediction of pRCC histologic grade and type may further guide management of patients with small (< 4 cm) pRCCs being considered for active surveillance.
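The gray-level co-occurrence features and the HGBDT classifier have standard open-source counterparts; a sketch on a random stand-in ROI using scikit-image and scikit-learn (the ROI and the feature subset are placeholders for illustration):

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.ensemble import HistGradientBoostingClassifier

    # Stand-in segmented tumor ROI (8-bit grayscale).
    roi = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    feats = [graycoprops(glcm, p).mean()
             for p in ("contrast", "homogeneity", "energy", "correlation")]

    clf = HistGradientBoostingClassifier()  # scikit-learn's HGBDT
    # clf.fit(feature_matrix, grades) would follow once features are
    # computed per tumor and phase.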


Subjects
Renal Cell Carcinoma, Kidney Neoplasms, Renal Cell Carcinoma/diagnostic imaging, Cross-Sectional Studies, Feasibility Studies, Humans, Kidney Neoplasms/diagnostic imaging, Neural Networks (Computer), Retrospective Studies, X-Ray Computed Tomography
13.
IEEE J Biomed Health Inform ; 25(2): 441-452, 2021 02.
Article in English | MEDLINE | ID: mdl-33275588

ABSTRACT

Coronavirus disease 2019 (COVID-19) is an ongoing global pandemic that has spread rapidly since December 2019. Real-time reverse transcription polymerase chain reaction (rRT-PCR) and chest computed tomography (CT) imaging both play an important role in COVID-19 diagnosis. Chest CT imaging offers the benefits of quick reporting, low cost, and high sensitivity for the detection of pulmonary infection. Recently, deep-learning-based computer vision methods have demonstrated great promise for use in medical imaging applications, including X-ray, magnetic resonance, and CT imaging. However, training a deep-learning model requires large volumes of data, and medical staff face a high risk when collecting COVID-19 CT data due to the high infectivity of the disease. Another issue is the lack of experts available for data labeling. To meet the data requirements for COVID-19 CT imaging, we propose a CT image synthesis approach based on a conditional generative adversarial network that can effectively generate high-quality, realistic COVID-19 CT images for use in deep-learning-based medical imaging tasks. Experimental results show that the proposed method outperforms other state-of-the-art image synthesis methods on the generated COVID-19 CT images and shows promise for various machine learning applications, including semantic segmentation and classification.
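The abstract does not give the generator architecture; the following is a generic minimal conditional generator of the kind used in conditional GANs, with invented layer sizes, for illustration only:

    import torch
    from torch import nn

    class CondGenerator(nn.Module):
        """Noise + class label -> 64 x 64 image; sizes are illustrative."""
        def __init__(self, z_dim=100, n_classes=2):
            super().__init__()
            self.embed = nn.Embedding(n_classes, n_classes)
            self.net = nn.Sequential(
                nn.Linear(z_dim + n_classes, 256), nn.ReLU(),
                nn.Linear(256, 64 * 64), nn.Tanh(),
            )

        def forward(self, z, labels):
            x = torch.cat([z, self.embed(labels)], dim=1)
            return self.net(x).view(-1, 1, 64, 64)

    g = CondGenerator()
    fake = g(torch.randn(4, 100), torch.tensor([0, 1, 0, 1]))  # 4 synthetic slices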


Subjects
COVID-19/diagnostic imaging, Deep Learning, X-Ray Computed Tomography, Humans, Lung/diagnostic imaging, Thoracic Radiography, SARS-CoV-2
14.
Brain Sci ; 10(7)2020 Jul 17.
Article in English | MEDLINE | ID: mdl-32708912

ABSTRACT

BACKGROUND: Gulf War Illness (GWI) and Chronic Fatigue Syndrome (CFS) are two debilitating disorders that share similar symptoms of chronic pain, fatigue, and exertional exhaustion after exercise. Many physicians continue to believe that both are psychosomatic disorders, and to date no underlying etiology has been discovered. As such, uncovering objective biomarkers is important to lend credibility to criteria for diagnosis and to help differentiate the two disorders. METHODS: We assessed cognitive differences in 80 subjects with GWI and 38 with CFS by comparing corresponding fMRI scans during 2-back working memory tasks before and after exercise, modeling brain activation during normal activity and after exertional exhaustion, respectively. Activated voxels were counted within regions of the Automated Anatomical Labeling (AAL) atlas and used in an "ensemble" series of machine learning algorithms to assess whether a multi-regional pattern of differences in the fMRI scans could be detected. RESULTS: K-Nearest Neighbor (70%/81%), Linear Support Vector Machine (SVM) (70%/77%), Decision Tree (82%/82%), Random Forest (77%/78%), AdaBoost (69%/81%), Naïve Bayes (74%/78%), Quadratic Discriminant Analysis (QDA) (73%/75%), Logistic Regression (82%/82%), and Neural Net (76%/77%) models were able to differentiate CFS from GWI before and after exercise, with an average prediction accuracy across all models of 75% before exercise and 79% after. An iterative feature selection and removal process based on Recursive Feature Elimination (RFE) and Random Forest importance selected 30 regions before exercise and 33 regions after exercise that differentiated CFS from GWI across all models. This produced best single-model accuracies of 82% before and after exercise (Logistic Regression or Decision Tree) and 100% before and after exercise when regions were selected by any six or more models. Differential activation on both days included the right anterior insula, left putamen, and bilateral orbital frontal, ventrolateral prefrontal cortex, superior, inferior, and precuneus (medial) parietal, and lateral temporal regions. Day 2 additionally included the cerebellum, left supplementary motor area, and bilateral pre- and post-central gyri. Changes between days included the right Rolandic operculum switching to the left on Day 2, and the bilateral midcingulum switching to the left anterior cingulum. CONCLUSION: We conclude that CFS and GWI are significantly differentiable using a pattern of fMRI activity based on an ensemble machine learning model.
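The RFE-based feature selection over per-region voxel counts can be sketched with scikit-learn; the data below are synthetic placeholders matching only the stated sample and region counts (118 subjects, 116 AAL regions):

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import RFE
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in: 118 subjects (80 GWI + 38 CFS), 116 AAL regions.
    X, y = make_classification(n_samples=118, n_features=116, random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

    rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=30)
    rfe.fit(Xtr, ytr)
    print(rfe.score(Xte, yte))  # accuracy on held-out subjects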

15.
Brain Sci ; 10(5)2020 May 25.
Article in English | MEDLINE | ID: mdl-32466139

ABSTRACT

Gulf War Illness (GWI) is a debilitating condition characterized by dysfunction of cognition, pain, fatigue, sleep, and diverse somatic symptoms with no known underlying pathology. As such, uncovering objective biomarkers, such as differential regions of activity within a functional magnetic resonance imaging (fMRI) scan, is important to enhance the validity of the criteria for diagnosis. Symptoms are exacerbated by mild activity, and exertional exhaustion is a key complaint amongst sufferers. We modeled this exertional exhaustion by having GWI (n = 80) and sedentary control (n = 31) subjects perform submaximal exercise stress tests on two consecutive days. Cognitive differences were assessed by comparing fMRI scans performed during 2-back working memory tasks before and after the exercise. Machine learning algorithms were used to identify differences in brain activation patterns between the two groups on Day 1 (before exercise) and Day 2 (after exercise). The numbers of voxels with t > 3.17 (corresponding to p < 0.001 uncorrected) were determined for brain regions defined by the Automated Anatomical Labeling (AAL) atlas. Data were divided 70:30 into training and test sets. Recursive feature selection identified 29 regions of interest (ROIs) that significantly distinguished GWI from control on Day 1 and 28 ROIs on Day 2. Ten regions were present in both models, including the right anterior insula, orbital frontal cortex, thalamus, bilateral temporal poles, left supramarginal gyrus, and cerebellar Crus 1. The models had 70% accuracy before exercise on Day 1 and 85% accuracy after exercise on Day 2, indicating that the logistic regression model significantly differentiated subjects with GWI from the sedentary control group. Exercise caused changes in these patterns that may reflect the cognitive differences caused by exertional exhaustion. A second set of predictive models was able to classify the previously identified GWI exercise subgroups START, STOPP, and POTS on both Day 1 and Day 2, with 67% and 69% accuracy, respectively. This study was the first of its kind to differentiate GWI and the three sub-phenotypes START, STOPP, and POTS from a sedentary control using a logistic regression estimation method.

16.
Med Phys ; 47(4): 1786-1795, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32017120

ABSTRACT

PURPOSE: To use machine-learning algorithms and blur measure (BM) operators to automatically detect motion blur in mammograms. Motion blur has been reported to reduce lesion detection performance and mask small abnormalities, resulting in failure to detect them until they reach more advanced stages. Automatic detection of blur could support the clinical decision-making process during the mammography exam by allowing an immediate retake, thereby preventing unnecessary expense, time, and patient anxiety. METHODS: Blur was simulated mathematically to mimic the real blur seen in clinical practice. The blur point-spread-function (PSF) mask is generated by distributing the intensity of an image pixel moving under random motion within the range of the blur effect (the maximum amount of tissue motion allowed). The random motion trajectory vector is generated on a super-sampled image frame to accommodate smaller substeps; the vector is then sampled on a regular pixel grid using subpixel linear interpolation to generate the blur PSF mask. The randomly generated motion trajectory is constrained by several factors; the effects of variations in tissue elasticity, imaging exposure time, and size of the blur effect (motion boundary in millimeters) were examined. The blur mask is convolved with a mammogram to create blur. Five motion blur magnitudes (0.1, 0.25, 0.5, 1.0, and 1.5 mm) were simulated on 244 and 434 mammograms from the INbreast and DDSM databases, respectively. Blur was quantified using nine BM operators for each mammogram at each blur level. The mammograms were assigned to training (70%) and testing (30%) datasets to train three machine-learning classifiers (Ensemble Bagged Trees, fine Gaussian SVM, and weighted KNN) to distinguish the five levels of blurred mammograms from unblurred mammograms using six-way classification. RESULTS: For the INbreast mammograms, the average classification accuracies were 87.7%, 85.7%, and 85.7% for Ensemble Bagged Trees, fine Gaussian SVM, and weighted KNN, respectively; for DDSM, the corresponding accuracies were 93.5%, 93.6%, and 92.7%. CONCLUSIONS: Preliminary results show the potential to detect simulated blur automatically using these methods. Although limited work has been done to quantify the effects of motion blur on radiologists' performance, there is evidence that motion blur might not be detected visually by a human observer and could negatively affect radiologists' lesion detection performance. To date, no other study has investigated the ability of machine-learning classifiers and BM operators to detect motion blur in mammograms.
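A simplified version of the random-trajectory PSF construction follows (nearest-pixel accumulation instead of the paper's subpixel linear interpolation, with invented parameter values):

    import numpy as np
    from scipy.signal import fftconvolve

    def motion_psf(size=32, steps=200, extent_px=8.0, rng=None):
        """Accumulate a random-walk trajectory into a normalized PSF mask
        (nearest-pixel accumulation; the paper uses subpixel interpolation)."""
        rng = rng or np.random.default_rng(0)
        pos = np.full(2, size / 2.0)
        psf = np.zeros((size, size))
        for _ in range(steps):
            pos += rng.normal(scale=extent_px / steps, size=2)  # sub-steps
            i, j = np.clip(pos, 0, size - 1).astype(int)
            psf[i, j] += 1.0
        return psf / psf.sum()

    mammogram = np.random.default_rng(1).random((256, 256))  # stand-in image
    blurred = fftconvolve(mammogram, motion_psf(), mode="same")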


Subjects
Computer-Assisted Image Processing/methods, Mammography, Motion, Automation, Machine Learning, Signal Transduction
17.
Abdom Radiol (NY) ; 45(3): 789-798, 2020 03.
Article in English | MEDLINE | ID: mdl-31822969

ABSTRACT

PURPOSE: To predict the histologic grade of small clear cell renal cell carcinomas (ccRCCs) using texture analysis and machine learning algorithms. METHODS: Fifty-two noncontrast (NC), 26 corticomedullary (CM) phase, and 35 nephrographic (NG) phase CTs of small (< 4 cm) surgically resected ccRCCs were retrospectively identified. Surgical pathology classified the tumors as low- or high-Fuhrman histologic grade. The axial image with the largest cross-sectional tumor area was exported and segmented. Six histogram and 31 texture (gray-level co-occurrence (GLC) and gray-level run-length (GLRL)) features were calculated for each tumor in each phase. t tests compared feature values in low- and high-grade ccRCCs, with a Benjamini-Hochberg false discovery rate of 10%. Area under the receiver operating characteristic curve (AUC) was calculated for each feature to assess prediction of low- versus high-grade ccRCCs in each phase. Histogram, texture, and combined histogram and texture data sets were used to train and test four algorithms (k-nearest neighbor (KNN), support vector machine (SVM), random forests, and decision tree) with tenfold cross-validation; AUCs were calculated for each algorithm in each phase to assess prediction of low- versus high-grade ccRCCs. RESULTS: Zero, 23, and zero features in the NC, CM, and NG phases, respectively, had statistically significant differences between low- and high-grade ccRCCs. CM histogram skewness and GLRL short run emphasis had the highest AUCs (0.82) for predicting histologic grade. All four algorithms had their highest AUCs (0.97) predicting histologic grade using CM histogram features, and the algorithms' AUCs decreased using histogram or texture features from the NC or NG phases. CONCLUSION: The histologic grade of small ccRCCs can be accurately predicted with machine learning algorithms using CM histogram features, which outperform NC and NG phase image data.
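The Benjamini-Hochberg procedure at a 10% false discovery rate is available in statsmodels; a sketch with random placeholder feature values (37 features, matching the 6 histogram plus 31 texture features; group sizes invented):

    import numpy as np
    from scipy.stats import ttest_ind
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    low = rng.normal(size=(30, 37))   # placeholder: low-grade tumors x 37 features
    high = rng.normal(size=(22, 37))  # placeholder: high-grade tumors

    p = np.array([ttest_ind(low[:, k], high[:, k]).pvalue for k in range(37)])
    reject, p_adj, _, _ = multipletests(p, alpha=0.10, method="fdr_bh")
    print("features significant at 10% FDR:", np.where(reject)[0])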


Subjects
Renal Cell Carcinoma/diagnostic imaging, Renal Cell Carcinoma/pathology, Kidney Neoplasms/diagnostic imaging, Kidney Neoplasms/pathology, Machine Learning, Computer-Assisted Radiographic Image Interpretation/methods, X-Ray Computed Tomography, Female, Humans, Male, Middle Aged, Neoplasm Grading, Pilot Projects, Retrospective Studies
18.
J Med Imaging (Bellingham) ; 6(3): 031411, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30915386

ABSTRACT

The convolutional neural network (CNN) is a promising technique to detect breast cancer based on mammograms. Training a CNN from scratch, however, requires a large amount of labeled data. Such a requirement is usually infeasible for some kinds of medical image data, such as mammographic tumor images. Because improving the performance of a CNN classifier requires more training data, creating new training images (image augmentation) is one solution to this problem. We applied a generative adversarial network (GAN) to generate synthetic mammographic images from the Digital Database for Screening Mammography (DDSM). From the DDSM, we cropped two sets of regions of interest (ROIs): normal and abnormal (cancer/tumor). These ROIs were used to train the GAN, which then generated synthetic images. For comparison with affine transformation augmentation methods such as rotation, shifting, and scaling, we used six groups of ROIs: three simple groups (affine-augmented, GAN-synthetic, and real/original) and three mixture groups combining any two of the simple groups. Each group was used to train a CNN classifier from scratch, and real ROIs that were not used in training were used to validate the classification outcomes. Our results show that, for classifying normal versus abnormal ROIs from DDSM, adding GAN-generated ROIs to the training data can help the classifier prevent overfitting, and in validation accuracy the GAN performs about 3.6% better than affine transformations for image augmentation. GAN could therefore be an ideal augmentation approach. However, images augmented by GAN or affine transformation cannot substitute for real images when training CNN classifiers, because the absence of real images in the training set causes overfitting.
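The affine-transformation baseline (rotation, shifting, scaling) can be sketched with torchvision; the parameter ranges below are invented for illustration, not the paper's settings:

    from PIL import Image
    from torchvision import transforms

    affine = transforms.RandomAffine(degrees=15, translate=(0.1, 0.1),
                                     scale=(0.9, 1.1))
    roi = Image.new("L", (128, 128))             # stand-in for a DDSM ROI
    augmented = [affine(roi) for _ in range(5)]  # five affine-augmented copies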

19.
Stereotact Funct Neurosurg ; 97(5-6): 313-318, 2019.
Article in English | MEDLINE | ID: mdl-31910428

ABSTRACT

BACKGROUND: Brain stimulation is utilized to treat a variety of neurological disorders. Clinical brain stimulation technologies currently use charge-balanced pulse stimulation, but the brain may respond better to other stimulation waveforms. This study was designed to evaluate the motor threshold of the brain to stimulation with various waveforms. METHODS: Three stimulation waveforms were applied in rats with surgically implanted brain electrodes: pulses, square waves, and a random waveform. The peak-to-peak stimulation voltage was increased stepwise until motor signs were elicited. RESULTS: The random waveform had the highest motor threshold of the waveforms tested. Random waveform stimulation reached the maximum voltage without motor side effects when stimulating through both 1 and 8 electrodes. In contrast, the stimulation thresholds for motor side effects of the other two waveforms were on average less than half of the maximum voltage, and lower for stimulation through 8 electrodes than through 1 electrode (p < 0.0005). CONCLUSION: The random waveform was better tolerated than the other waveforms and may allow the use of higher stimulation voltages without side effects.
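The three waveform families can be synthesized numerically; the sampling rate, frequency, and phase widths below are assumptions for illustration, not the study's stimulation parameters:

    import numpy as np

    fs, dur = 10_000, 0.1                 # sampling rate (Hz) and duration (s)
    t = np.arange(0, dur, 1 / fs)

    pulse = np.zeros_like(t)              # biphasic, charge-balanced pulse train
    pulse[(t % 0.01) < 0.0002] = 1.0      # 0.2 ms positive phase every 10 ms
    pulse[((t - 0.0002) % 0.01) < 0.0002] = -1.0   # matching negative phase
    square = np.sign(np.sin(2 * np.pi * 100 * t))  # 100 Hz square wave
    random_wave = np.random.default_rng(0).uniform(-1.0, 1.0, t.size)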


Subjects
Implanted Electrodes, Frontal Lobe/physiology, Animal Models, Motor Activity/physiology, Sensory Thresholds/physiology, Animals, Electric Stimulation/instrumentation, Electric Stimulation/methods, Humans, Male, Theoretical Models, Rats, Sprague-Dawley Rats
20.
Biomed Opt Express ; 9(5): 2189-2204, 2018 May 01.
Article in English | MEDLINE | ID: mdl-29760980

ABSTRACT

In vivo autofluorescence hyperspectral imaging of moving objects can be challenging due to motion artifacts and the limited number of acquired photons. To address both limitations, we selectively reduced the number of spectral bands while maintaining accurate target identification. Several downsampling approaches were applied to data obtained from the atrial tissue of adult pigs with sites of radiofrequency ablation lesions. Standard image quality metrics, including the mean square error, the peak signal-to-noise ratio, the structural similarity index map, and an accuracy index of lesion component images, were used to quantify the effects of spectral binning, an increased spectral distance between individual bands, and random combinations of spectral bands. The results point to several quantitative strategies for deriving combinations of a small number of spectral bands that can successfully detect target tissue. Insights from our studies can be applied to a wide range of applications.
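The named image quality metrics are available in scikit-image; a sketch on a random stand-in hyperspectral cube (comparing band-integrated images is a simplification for illustration, not the paper's component-image comparison):

    import numpy as np
    from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                                 structural_similarity)

    cube = np.random.default_rng(0).random((64, 64, 31))  # stand-in cube
    down = cube[:, :, ::4]                 # keep every 4th spectral band

    ref = cube.sum(axis=2)                 # band-integrated reference image
    test = down.sum(axis=2) * 4.0          # rescaled downsampled counterpart
    drange = ref.max() - ref.min()

    print(mean_squared_error(ref, test))
    print(peak_signal_noise_ratio(ref, test, data_range=drange))
    print(structural_similarity(ref, test, data_range=drange))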
