Results 1 - 20 of 37
1.
Clin Pharmacol Ther ; 115(4): 745-757, 2024 04.
Article in English | MEDLINE | ID: mdl-37965805

ABSTRACT

In 2020, Novartis Pharmaceuticals Corporation and the U.S. Food and Drug Administration (FDA) started a 4-year scientific collaboration to approach complex new data modalities and advanced analytics. The scientific question, under a Research Collaboration Agreement, was to find novel radio-genomics-based prognostic and predictive factors for HR+/HER2- metastatic breast cancer. This collaboration has provided valuable insights that can help future scientific projects succeed, particularly those using artificial intelligence and machine learning. This tutorial aims to provide tangible guidelines for a multi-omics project that includes multidisciplinary expert teams spanning different institutions. We cover key ideas, such as "maintaining effective communication" and "following good data science practices," followed by the four steps of exploratory projects, namely (1) plan, (2) design, (3) develop, and (4) disseminate. We break each step into smaller concepts with strategies for implementation and provide illustrations from our collaboration to give readers actionable guidance.


Subject(s)
Artificial Intelligence; Multiomics; Humans; Machine Learning; Genomics
2.
BMC Res Notes ; 16(1): 185, 2023 Aug 24.
Article in English | MEDLINE | ID: mdl-37620937

ABSTRACT

OBJECTIVE: Scar tissue is an identified cause of malignant ventricular arrhythmias in patients with myocardial infarction, which can ultimately lead to cardiac death. We aimed to evaluate the left ventricular endocardial scar tissue pattern using Radon descriptor-based machine learning. We performed automated left ventricle (LV) segmentation to find the LV endocardial wall, applied morphological operations, and marked the region of scar tissue on the endocardial wall of the LV. Following a Radon descriptor-based machine learning approach, patches from cardiac computed tomography (CT) images of 17 patients were categorized into "endocardial scar tissue" and "normal tissue" groups. Ten feature vectors were extracted from the patches using Radon descriptors and fed into a traditional machine learning model. RESULTS: The decision tree showed the best performance, with 98.07% accuracy. This study is the first attempt to provide a Radon transform-based machine learning method to distinguish between "endocardial scar tissue" and "normal tissue" patterns. Our proposed method could potentially be used in advanced interventions.
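The patch-classification pipeline described above can be sketched in a few lines. This is a toy illustration on synthetic patches, not the study's code: the projection-based descriptor below is a simplified stand-in for the full Radon transform, and the "scar" pattern is an assumed bright band.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def radon_descriptor(patch):
    # Crude Radon-style descriptor: line-integral projections at 0 and 90
    # degrees (column and row sums), each summarized by mean and spread.
    feats = []
    for proj in (patch.sum(axis=0), patch.sum(axis=1)):
        feats.extend([proj.mean(), proj.std()])
    return np.array(feats)

def make_patch(scar):
    patch = rng.normal(0.5, 0.05, (16, 16))
    if scar:                          # "scar": a bright band across the patch
        patch[6:10, :] += 0.4
    return patch

labels = np.array([0, 1] * 50)
X = np.vstack([radon_descriptor(make_patch(bool(y))) for y in labels])
Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3,
                                      random_state=0, stratify=labels)
clf = DecisionTreeClassifier(random_state=0).fit(Xtr, ytr)
print(f"held-out accuracy: {clf.score(Xte, yte):.2f}")
```

The descriptor turns each 2-D patch into a short, orientation-summarizing feature vector, which is what lets a shallow tree separate the two tissue groups.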


Subject(s)
Heart Ventricles; Radon; Humans; Heart Ventricles/diagnostic imaging; Cicatrix/diagnostic imaging; Heart; Machine Learning
3.
J Imaging ; 9(2)2023 Feb 01.
Article in English | MEDLINE | ID: mdl-36826952

ABSTRACT

The present study explores the efficacy of machine learning and artificial neural networks in age assessment using the root length of the second and third molar teeth. A dataset of 1000 panoramic radiographs of subjects aged 12 to 25 years with intact second and third molars was archived. The lengths of the mesial and distal roots were measured using ImageJ software. The dataset was classified in three ways based on the age distribution: 2-class, 3-class, and 5-class. We used Support Vector Machine (SVM), Random Forest (RF), and Logistic Regression models to train, test, and analyze the root length measurements. The mesial root of the right third molar was a good predictor of age. The SVM achieved accuracies of 86.4% (2-class), 66% (3-class), and 42.8% (5-class), while the RF showed the highest 5-class accuracy at 47.6%. Overall, the present study demonstrated that the deep learning model (a fully connected network) performed better than the machine learning models and that the mesial root length of the right third molar was a good predictor of age. Additionally, a combination of different root lengths could be informative when building a machine learning model.
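The model comparison described above can be sketched as follows. The root-length data here are synthetic, and the linear age-to-length relation, noise levels, and 18-year class boundary are illustrative assumptions, not the study's measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 400
age = rng.uniform(12, 25, n)
# Hypothetical near-linear relation of root length (mm) to age, plus noise
mesial = 0.6 * age + rng.normal(0, 1.5, n)
distal = 0.5 * age + rng.normal(0, 1.5, n)
X = np.column_stack([mesial, distal])
y = (age >= 18).astype(int)              # a 2-class split at 18 years

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                      random_state=1, stratify=y)
accs = {}
for name, model in [("SVM", SVC()),
                    ("RF", RandomForestClassifier(random_state=1)),
                    ("LogReg", LogisticRegression())]:
    accs[name] = model.fit(Xtr, ytr).score(Xte, yte)
print({k: round(v, 2) for k, v in accs.items()})
```

Each classifier sees the same two root-length features, so differences in held-out accuracy reflect the models rather than the inputs.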

4.
J Clin Med ; 11(23)2022 Nov 28.
Article in English | MEDLINE | ID: mdl-36498594

ABSTRACT

BACKGROUND: Wearable device technology has recently become substantially involved in the healthcare industry. India is the world's third largest market for wearable devices and is projected to expand at a compound annual growth rate of ~26.33%. However, there is a paucity of literature analyzing the factors that determine the acceptance of wearable healthcare device technology in low- and middle-income countries. METHODS: This cross-sectional, web-based survey analyzes the perceptions affecting the adoption and usage of wearable devices among the Indian population aged 16 years and above. RESULTS: A total of 495 responses were obtained. In all, 50.3% of respondents were aged between 25 and 50 years, and 51.3% belonged to the lower-income group. While 62.2% of the participants reported using wearable devices for managing their health, 29.3% were using them daily. Technology and task fitness (TTF) showed a significant positive correlation with connectivity (r = 0.716), health care (r = 0.780), communication (r = 0.637), infotainment (r = 0.598), perceived usefulness (PU) (r = 0.792), and perceived ease of use (PEOU) (r = 0.800). Behavioral intention (BI) to use wearable devices correlated positively with PEOU (r = 0.644) and PU (r = 0.711). All studied factors affecting the use of wearable devices had higher mean scores among participants who were already using them. Male respondents had significantly higher mean scores for BI (p = 0.034) and PEOU (p = 0.009). Respondents older than 25 years had higher mean scores for BI (p = 0.027) and infotainment (p = 0.032). CONCLUSIONS: This study identified factors significantly correlated with the adoption and acceptance of wearable devices for healthcare management in the Indian context.
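The correlation coefficients reported above are plain Pearson r values between construct scores. A minimal sketch on synthetic scores (the means, effect sizes, and construct relations are assumptions chosen only to yield r values near those reported):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 495                                   # survey size from the abstract
# Hypothetical Likert-style construct scores; PEOU/PU built to co-vary with TTF
ttf  = rng.normal(3.5, 0.6, n)
peou = 0.8 * ttf + rng.normal(0, 0.35, n)
pu   = 0.8 * ttf + rng.normal(0, 0.40, n)

def pearson_r(a, b):
    # Pearson correlation: covariance of centered scores over the
    # product of their standard deviations
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

print("TTF vs PEOU r =", round(pearson_r(ttf, peou), 3))
print("TTF vs PU   r =", round(pearson_r(ttf, pu), 3))
```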

5.
BMC Res Notes ; 15(1): 299, 2022 Sep 15.
Article in English | MEDLINE | ID: mdl-36109768

ABSTRACT

OBJECTIVE: Atrial fibrillation (A-fib) is an abnormal heartbeat condition in which the heart races and beats in an uncontrollable way. The presence of increased epicardial fat/fatty tissue in the atrium can lead to A-fib. Persistent homology, using topological features, can recapitulate enormous amounts of spatially complicated medical data into a visual code to distinguish the pattern of epicardial fat tissue from non-fat tissue. Our aim was to evaluate the topological pattern of left atrium epicardial fat tissue versus non-fat tissue. RESULTS: A topological data analysis approach was applied to study the imaging pattern of left atrium epicardial fat tissue and non-fat tissue patches. Patches from cardiac CT images of eight patients were categorized into "left atrium epicardial fat tissue" and "non-fat tissue" groups. The features that distinguish the two groups were extracted using persistent homology (PH). Our results show that the proposed approach can discriminate between left atrium epicardial fat tissue and non-fat tissue. Specifically, the range of Betti numbers in the epicardial fat tissue is smaller (0-30) than in the non-fat tissue (0-100), indicating that non-fat tissue has a richer topology.
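The 0th Betti number used above simply counts connected components. A minimal pure-NumPy sketch on a toy binarized patch, using union-find over 4-connected pixels (the study's persistent-homology filtration tracks how this count evolves over all thresholds; here a single threshold is shown):

```python
import numpy as np

def betti0(patch, threshold):
    """Count connected components (Betti-0) of pixels above `threshold`,
    using 4-connectivity and a simple union-find."""
    h, w = patch.shape
    mask = patch > threshold
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                parent[(i, j)] = (i, j)
                # merge with already-visited neighbors above and to the left
                if i > 0 and mask[i - 1, j]:
                    union((i, j), (i - 1, j))
                if j > 0 and mask[i, j - 1]:
                    union((i, j), (i, j - 1))
    return len({find(p) for p in parent})

# Two bright blobs on a dark background -> Betti-0 of 2 at this threshold
patch = np.zeros((12, 12))
patch[2:4, 2:4] = 1.0
patch[8:10, 8:10] = 1.0
print(betti0(patch, 0.5))   # 2
```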


Subject(s)
Atrial Fibrillation; Pericardium; Adipose Tissue/diagnostic imaging; Atrial Fibrillation/diagnostic imaging; Heart Atria/diagnostic imaging; Humans; Pericardium/diagnostic imaging
6.
Turk J Urol ; 48(4): 262-267, 2022 07.
Article in English | MEDLINE | ID: mdl-35913441

ABSTRACT

Artificial intelligence is used to predict clinical outcomes before minimally invasive treatments for benign prostatic hyperplasia, addressing the insufficient reliability of multiple assessment parameters such as flow rates and symptom scores. Various models of artificial intelligence and their contemporary applications in benign prostatic hyperplasia are reviewed and discussed. A search strategy was adapted to identify the literature on applications of artificial intelligence, using a dedicated search string with the following keywords: "Machine Learning," "Artificial Intelligence," AND "Benign Prostate Enlargement" OR "BPH" OR "Benign Prostatic Hyperplasia"; the retrieved studies were included and categorized. Review articles, editorial comments, and non-urologic studies were excluded. In the present review, 1600 patients were included from 4 studies that used different classifiers, such as fuzzy systems, computer-based vision systems, and clinical data mining, to study the applications of artificial intelligence in diagnosis and severity prediction and to determine the clinical factors responsible for treatment response in benign prostatic hyperplasia. The accuracy of fuzzy systems in correctly diagnosing benign prostatic hyperplasia was 90%, while that of a computer-based vision system was 96.3%. Data mining achieved a sensitivity of 70% and a specificity of 50% in correctly predicting the clinical response to medical treatment of benign prostatic hyperplasia. Artificial intelligence is gaining traction in urology, with the potential to improve diagnostics and patient care. The results of artificial intelligence-based applications in benign prostatic hyperplasia are promising but lack generalizability. In the future, however, we may see a shift in the clinical paradigm as artificial intelligence applications find their place in the guidelines and reshape the decision-making process.

7.
Environ Sci Pollut Res Int ; 29(58): 88302-88317, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35829883

ABSTRACT

Soil invertebrates serve as an outstanding biological indicator of the terrestrial ecosystem and overall soil quality, given their high sensitivity compared with other indicators of soil quality. In this study, the available soil ecotoxicity data (pEC50) for the soil invertebrate Folsomia candida (common name: springtail) (n = 45) were collated from the ECOTOX database (cfpub.epa.gov/ecotox) and subjected to QSAR analysis using 2D descriptors. Four partial least squares (PLS) models were built based on features selected through a genetic algorithm followed by best subset selection. These four models were then used as inputs for Intelligent Consensus Predictor version 1.2 (PLS version) to obtain the final consensus predictions, using the best selection of predictions (compound-wise) from the four "qualified" individual models. Both the internal and external validation metrics of the consensus predictions are well-balanced and within the acceptable range per the OECD criteria. The consensus model was found to be better than previously developed models for this endpoint. Predictions were also made using a chemical read-across approach, which showed even better external validation metrics than the consensus predictions. From the features selected in the QSAR models, it was found that molecular weight and the presence of a dithiophosphate group, electron-donor groups, and polyhalogen substitutions have a significant impact on soil ecotoxicity. Soil ecotoxicological risk assessment of organic chemicals can therefore be prioritized by these features. The models, developed from structurally diverse organic compounds, can be applied to any new query compound for data-gap filling.


Subject(s)
Arthropods; Soil Pollutants; Animals; Soil; Ecosystem; Consensus; Ecotoxicology; Soil Pollutants/toxicity; Organic Compounds; Quantitative Structure-Activity Relationship
8.
Front Surg ; 9: 862322, 2022.
Article in English | MEDLINE | ID: mdl-35360424

ABSTRACT

The legal and ethical issues that Artificial Intelligence (AI) poses for society include privacy and surveillance, bias and discrimination, and, perhaps most philosophically challenging, the role of human judgment. Concerns have arisen that newer digital technologies may become a new source of inaccuracy and data breaches. In healthcare, mistakes in a procedure or protocol can have devastating consequences for the patient who is the victim of the error. This is crucial to remember, because patients come into contact with physicians at the moments in their lives when they are most vulnerable. Currently, there are no well-defined regulations in place to address the legal and ethical issues that may arise from the use of artificial intelligence in healthcare settings. This review addresses these pertinent issues, highlighting the need for algorithmic transparency, privacy, protection of all the beneficiaries involved, and cybersecurity of associated vulnerabilities.

9.
Ir J Med Sci ; 191(4): 1473-1483, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34398394

ABSTRACT

Data science is an interdisciplinary field that extracts knowledge and insights from structured and unstructured data using scientific methods, data mining techniques, machine learning algorithms, and big data. The healthcare industry generates large datasets of useful information on patient demography, treatment plans, results of medical examinations, insurance, and more. Data collected from Internet of Things (IoT) devices also attract the attention of data scientists. Data science helps to process, manage, analyze, and assimilate the large quantities of fragmented, structured, and unstructured data created by healthcare systems; these data require effective management and analysis to yield factual results. The processes of data cleansing, data mining, data preparation, and data analysis used in healthcare applications are reviewed and discussed in this article. The article provides insight into the status and prospects of big data analytics in healthcare, highlights the advantages, describes the frameworks and techniques used, outlines current challenges, and discusses viable solutions. Data science and big data analytics can provide practical insights and aid strategic decision-making for health systems, helping to build a comprehensive view of patients, consumers, and clinicians. Data-driven decision-making opens up new possibilities for boosting healthcare quality.


Subject(s)
Big Data; Data Science; Data Mining/methods; Delivery of Health Care; Humans; Machine Learning
10.
J Imaging ; 9(1)2022 Dec 31.
Article in English | MEDLINE | ID: mdl-36662108

ABSTRACT

BACKGROUND AND OBJECTIVES: Brain Tumor Fusion-based Segments and Classification-Non-enhancing tumor (BTFSC-Net) is a hybrid system for classifying brain tumors that combines medical image fusion, segmentation, feature extraction, and classification procedures. MATERIALS AND METHODS: To reduce noise in the medical images, a hybrid probabilistic Wiener filter (HPWF) is first applied as a preprocessing step. Then, a fusion network based on deep learning convolutional neural networks (DLCNN) is developed to combine robust edge analysis (REA) properties in magnetic resonance imaging (MRI) and computed tomography (CT) medical images; REA is used to detect the slopes and borders of the brain images. To separate the diseased region from the color image, hybrid fuzzy c-means integrated k-means (HFCMIK) clustering is then applied. To extract hybrid features from the fused image, low-level features based on the redundant discrete wavelet transform (RDWT), empirical color features, and texture characteristics based on the gray-level co-occurrence matrix (GLCM) are used. Finally, a deep learning probabilistic neural network (DLPNN) is deployed to distinguish between benign and malignant tumors. RESULTS: According to the findings, the proposed BTFSC-Net model performed better than traditional preprocessing, fusion, segmentation, and classification techniques, reaching 99.21% segmentation accuracy and 99.46% classification accuracy. CONCLUSIONS: Earlier approaches have not performed as well as the presented method for image fusion, segmentation, feature extraction, and brain tumor classification. These results illustrate that the designed approach performed more effectively, with better accuracy in quantitative evaluation as well as better visual performance.

11.
Ther Adv Urol ; 13: 17562872211044880, 2021.
Article in English | MEDLINE | ID: mdl-34567272

ABSTRACT

Over the years, many clinical and engineering methods have been adapted for testing and screening for the presence of disease. The most commonly used imaging methods for diagnosis and analysis are computed tomography (CT) and X-ray. Manual interpretation of these images is the current gold standard but is subject to human error, tedious, and time-consuming. To improve efficiency and productivity, incorporating machine learning (ML) and deep learning (DL) algorithms could expedite the process. This article reviews the role of artificial intelligence (AI) and its contribution to data science, as well as various learning algorithms, in radiology. We analyze and explore the potential applications of AI in image interpretation and radiological advances. Furthermore, we discuss the usage and methodology of these concepts, their future in radiology, and their limitations and challenges.

12.
IEEE Access ; 9: 72970-72979, 2021.
Article in English | MEDLINE | ID: mdl-34178559

ABSTRACT

A number of recent papers have presented experimental evidence suggesting that it is possible to build highly accurate deep neural network models to detect COVID-19 from chest X-ray images. In this paper, we show that good generalization to unseen sources has not been achieved. Experiments with richer data sets than previously used show that models have high accuracy on seen sources but poor accuracy on unseen sources. The reason for the disparity is that a convolutional neural network model, which learns its own features, can focus on, for example, differences between X-ray machines or in positioning within the machines. Any feature that a person would clearly rule out is called a confounding feature. Some of the models were trained on COVID-19 image data taken from publications, which may differ from raw images. Some data sets were of pediatric pneumonia cases, whereas COVID-19 chest X-rays are almost exclusively from adults, so lung size becomes a spurious feature that can be exploited. In this work, we eliminated many confounding features by working with data as close to raw as possible. Still, deep learned models may leverage source-specific confounders to differentiate COVID-19 from pneumonia, preventing generalization to new data sources (i.e., external sites). Our models achieved an AUC of 1.00 on seen data sources but, in the worst case, only 0.38 on unseen ones. This indicates that such models need further assessment and development before they can be broadly clinically deployed. An example of fine-tuning to improve performance at a new site is given.
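The seen-versus-unseen-source gap described above is easy to reproduce on synthetic data in which a scanner-specific intensity offset is confounded with the label. All numbers below (site counts, confounding rate, signal strengths) are illustrative assumptions, and a plain logistic regression stands in for the CNN.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 600
y = rng.integers(0, 2, n)                   # 0 = pneumonia, 1 = COVID-19 (toy labels)
# Confounded acquisition: 90% of each class comes from its "own" site
source = np.where(rng.random(n) < 0.9, y, 1 - y)
X = np.column_stack([
    source + rng.normal(0, 0.1, n),         # scanner-specific offset (confounder)
    0.3 * y + rng.normal(0, 1.0, n),        # weak genuine disease signal
])

# Random split: the model rides the confounder and looks excellent
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=4, stratify=y)
acc_random = LogisticRegression().fit(Xtr, ytr).score(Xte, yte)

# Leave-one-source-out: train on site 0 only, test on unseen site 1
tr, te = source == 0, source == 1
acc_unseen = LogisticRegression().fit(X[tr], y[tr]).score(X[te], y[te])
print(f"random split: {acc_random:.2f}, unseen source: {acc_unseen:.2f}")
```

The point is the evaluation protocol: only the source-held-out split exposes that the model's apparent skill comes from the scanner offset rather than the disease signal.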

13.
JAMIA Open ; 4(1): ooab004, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33796821

ABSTRACT

OBJECTIVES: The objectives of this study were to construct the high definition phenotype (HDP), a novel time-series data structure composed of both primary and derived parameters, from heterogeneous clinical sources, and to determine whether predictive models can use the HDP in the neonatal intensive care unit (NICU) to improve neonatal mortality prediction in clinical settings. MATERIALS AND METHODS: A total of 49 primary data parameters were collected from July 2018 to May 2020 from eight level-III NICUs. Of 1546 patients, 757 were found to contain sufficient fixed, intermittent, and continuous data to create HDPs. Two predictive models utilizing the HDP, a logistic regression model (LRM) and a deep learning long short-term memory (LSTM) model, were constructed to predict neonatal mortality at multiple time points during patient hospitalization. The results were compared with previous illness severity scores, including SNAPPE, SNAPPE-II, CRIB, and CRIB-II. RESULTS: An HDP matrix, including 12 221 536 minutes of patient stay in the NICU, was constructed. Both the LRM and the LSTM model performed better than existing neonatal illness severity scores in predicting mortality, as measured by the area under the receiver operating characteristic curve (AUC). An ablation study showed that utilizing continuous parameters alone results in an AUC of >80% for both the LRM and the LSTM, while combining fixed, intermittent, and continuous parameters in the HDP yields AUCs >85%. The probability-of-mortality predictive score had recall and precision of 0.88 and 0.77 for the LRM, and 0.97 and 0.85 for the LSTM. CONCLUSIONS AND RELEVANCE: The HDP data structure supports multiple analytic techniques, including the statistical LRM approach and the machine learning LSTM approach used in this study. LRM and LSTM predictive models of neonatal mortality utilizing the HDP performed better than existing neonatal illness severity scores. Further research is necessary to create HDP-based clinical decision tools to detect the early onset of neonatal morbidities.
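The ablation idea above (continuous parameters alone versus the full fixed + intermittent + continuous HDP) can be sketched with the logistic-regression branch on synthetic data. The LSTM branch is omitted, and every parameter below (signal strengths, mortality threshold) is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 757                                    # patients with a complete HDP
risk = rng.normal(0, 1, n)                 # latent illness severity
fixed = 0.8 * risk + rng.normal(0, 1, n)         # e.g. a birth-weight z-score
intermittent = 0.8 * risk + rng.normal(0, 1, n)  # e.g. a lab-value summary
continuous = 1.0 * risk + rng.normal(0, 1, n)    # e.g. a vital-sign trend
y = (risk + rng.normal(0, 0.8, n) > 1.2).astype(int)   # mortality label

def auc_of(*features):
    # Fit an LRM on the given parameter groups and report held-out AUC
    X = np.column_stack(features)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=5, stratify=y)
    p = LogisticRegression().fit(Xtr, ytr).predict_proba(Xte)[:, 1]
    return roc_auc_score(yte, p)

auc_cont = auc_of(continuous)
auc_all = auc_of(fixed, intermittent, continuous)
print(f"continuous only: {auc_cont:.2f}, full HDP: {auc_all:.2f}")
```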

14.
J Clin Med ; 10(9)2021 Apr 26.
Article in English | MEDLINE | ID: mdl-33925767

ABSTRACT

Recent advances in artificial intelligence (AI) have had a significant impact on the healthcare industry. In urology, AI has been widely adopted to deal with numerous disorders, irrespective of their severity, extending from conditions such as benign prostatic hyperplasia to critical illnesses such as urothelial and prostate cancer. In this article, we discuss how artificial intelligence algorithms and techniques are equipped in the field of urology to detect, treat, and estimate the outcomes of urological diseases. Furthermore, we explain the advantages of using AI over existing traditional methods.

15.
J Endourol ; 35(9): 1307-1313, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33691473

ABSTRACT

Objective: To develop a decision support system (DSS) for predicting the postoperative outcome of a kidney stone treatment procedure, particularly percutaneous nephrolithotomy (PCNL), to serve as a promising tool for counseling before an operation. Materials and Methods: The overall procedure comprised data collection and prediction model development. Pre- and postoperative variables of 100 patients with staghorn calculi who underwent PCNL were collected. For the feature vector, variables and categories including patient history variables, kidney stone parameters, and laboratory data were considered. The prediction model was developed using machine learning techniques, including dimensionality reduction and supervised classification, with a multiple-classifier scheme used for prediction. The derived DSS was evaluated using leave-one-patient-out cross-validation on the data set. Results: The system provided favorable accuracy (81%) in predicting the outcome of a treatment procedure. Performance in predicting the stone-free rate was 67% with Minimum Redundancy Maximum Relevance (MRMR) feature selection extracting the top 3 features and Random Forest (RF), 63% with MRMR extracting the top 5 features and RF, and 62% with MRMR extracting the top 10 features and a Decision Tree. Statistical significance was assessed using the standard error between the best areas under the curve (AUCs) obtained from Linear Discriminant Analysis (LDA) and MRMR: at the p = 0.05 level, the result of the LDA approach (0.81 AUC) differed significantly (p = 0.027, z = 2.21) from that of MRMR (0.64 AUC). Conclusion: The promising results of the developed DSS could assist urologists in providing counseling, predicting a surgical outcome, and ultimately choosing an appropriate surgical treatment for removing kidney stones.
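A key detail of the evaluation above is that feature selection must be repeated inside every leave-one-patient-out fold so the held-out patient never influences which features are chosen. A sketch on synthetic data, with a univariate selector standing in for MRMR (which is not in scikit-learn); the variable counts and outcome model are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
n, p = 100, 12                 # 100 PCNL patients, 12 pre-operative variables
X = rng.normal(size=(n, p))
# Hypothetical outcome: stone-free status driven by 3 of the 12 variables
y = ((X[:, 0] + 0.8 * X[:, 3] - 0.6 * X[:, 7]
      + rng.normal(0, 0.8, n)) > 0).astype(int)

# Feature selection sits INSIDE the pipeline, so every leave-one-patient-out
# fold re-selects its top-k features on the training fold only (no leakage).
pipe = make_pipeline(SelectKBest(f_classif, k=3),
                     RandomForestClassifier(n_estimators=100, random_state=6))
correct = 0
for tr, te in LeaveOneOut().split(X):
    pipe.fit(X[tr], y[tr])
    correct += int(pipe.predict(X[te])[0] == y[te][0])
print(f"LOPO-CV accuracy: {correct / n:.2f}")
```

Selecting features once on the full data set before cross-validating would leak the held-out patient into the selection step and inflate the reported accuracy.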


Subject(s)
Kidney Calculi; Nephrolithotomy, Percutaneous; Staghorn Calculi; Artificial Intelligence; Humans; Kidney Calculi/surgery; Treatment Outcome
16.
J Indian Prosthodont Soc ; 20(2): 162-170, 2020.
Article in English | MEDLINE | ID: mdl-32655220

ABSTRACT

AIM: The aim of this study was to systematically examine the published data on the cost and cost-effectiveness of mandibular two-implant-retained overdentures compared with other removable prosthodontic treatment options for the edentulous mandible. SETTINGS AND DESIGN: This is a systematic review that analyzes the available data from prospective and retrospective studies and randomized clinical trials to determine the costs and cost-effectiveness of different removable treatment modalities for the completely edentulous mandible. The study protocol followed the PRISMA guidelines. MATERIALS AND METHODS: The search was limited to English-language literature and included an electronic search through PubMed Central and the Cochrane Central Register of Controlled Trials, complemented by hand-searching. All clinical trials published up to August 2019 were included (without any starting limit). Two independent investigators extracted the data and assessed the studies. STATISTICAL ANALYSIS USED: No meta-analysis was conducted because of the high heterogeneity of the data. RESULTS: Out of the initial 509 records, only nine studies were included, and the risks of bias of the individual studies were assessed. Six studies presented data on cost and cost analysis only; the remaining three articles provided data on cost-effectiveness. The overall costs of implant overdentures were higher than those of conventional complete dentures; however, implant overdentures were more cost-effective. Single-implant overdentures were less expensive than two-implant overdentures. Overdentures supported by two or four mini-implants were also reported to be more cost-effective than conventional two-implant-supported overdentures. CONCLUSIONS: Two-implant-retained overdentures are more expensive but more cost-effective than conventional complete dentures. Two- or four-mini-implant-retained overdentures are less expensive than two-implant-retained overdentures, but long-term data on the aftercare cost and survival rate of mini-implants are lacking. Single-implant overdentures are also less expensive than two-implant-retained overdentures. The differences in aftercare costs between different attachment systems for implant overdentures were not significant. Further studies on the comparative cost-effectiveness of different types of implant overdentures are needed.

17.
Comput Biol Med ; 122: 103882, 2020 07.
Article in English | MEDLINE | ID: mdl-32658721

ABSTRACT

Convolutional neural networks (CNNs) have been utilized to distinguish between benign lung nodules and those that will become malignant. The objective of this study was to use an ensemble of CNNs to predict which baseline nodules would be diagnosed as lung cancer at a second follow-up screening more than one year later. Low-dose helical computed tomography images and data from the National Lung Screening Trial (NLST) were utilized. The malignant nodules and nodule-positive controls were divided into training and test cohorts, and T0 nodules were used to predict lung cancer incidence at T1 or T2. To increase the sample size, image augmentation was performed using rotations, flipping, and elastic deformation. Three CNN architectures were designed for malignancy prediction, and each architecture was trained using seven different seeds to create the initial weights. This introduced variability across the CNN models, which were combined to generate a robust, more accurate ensemble model. Augmenting images using only rotation and flipping and training with images from T0 yielded the best accuracy for predicting lung cancer incidence at T2 in a separate test cohort (accuracy = 90.29%; AUC = 0.96), based on an ensemble of 21 models. Images augmented by rotation and flipping enabled effective learning by increasing the relatively small sample size. Ensemble learning with deep neural networks is a compelling approach that accurately predicted lung cancer incidence at the second screening after the baseline scan, mostly 2 years later.
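The rotation/flipping augmentation described above amounts to generating the eight dihedral variants of each image patch (elastic deformation, which the study also tried, is omitted here). A minimal sketch:

```python
import numpy as np

def augment(image):
    """Generate the 8 dihedral variants of a 2-D image patch:
    4 right-angle rotations, each with and without a horizontal flip."""
    variants = []
    for k in range(4):
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))
    return variants

patch = np.arange(9).reshape(3, 3)      # an asymmetric toy patch
aug = augment(patch)
print(len(aug))   # 8

# Every variant keeps the same pixel values, merely rearranged
assert all(sorted(v.ravel()) == sorted(patch.ravel()) for v in aug)
```

For an asymmetric patch all eight variants are distinct, so the effective training-set size grows eightfold without inventing any new pixel content.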


Subject(s)
Lung Neoplasms; Tomography, X-Ray Computed; Cohort Studies; Humans; Lung; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer
18.
Tomography ; 6(2): 209-215, 2020 06.
Article in English | MEDLINE | ID: mdl-32548298

ABSTRACT

Noninvasive diagnosis of lung cancer in its early stages is one task where radiomics can help. Clinical practice shows that the size of a nodule has high predictive power for malignancy. Convolutional neural networks (CNNs) have become widely used in medical image analysis. We study the ability of a CNN to capture nodule size in computed tomography images after the images are resized for CNN input. For our experiments, we used the National Lung Screening Trial data set. Nodules were labeled into 2 categories (small/large) based on their original size. After all extracted patches were resampled into 100 × 100-pixel images, a CNN was able to classify test nodules into small- and large-size groups with high accuracy. To show the generality of our finding, we repeated the size classification experiments on the Common Objects in Context (COCO) data set, selecting 3 categories of images: bears, cats, and dogs. For all 3 categories, a 5 × 2-fold cross-validation was performed to classify objects into small and large classes. The average area under the receiver operating characteristic curve is 0.954, 0.952, and 0.979 for the bear, cat, and dog categories, respectively. Thus, rescaling camera images also leaves a CNN able to recover the size of an object. The source code for the COCO experiments is publicly available on GitHub (https://github.com/VisionAI-USF/COCO_Size_Decoding/).


Subject(s)
Lung Neoplasms; Multiple Pulmonary Nodules; Animals; Cats; Dogs; Humans; Lung Neoplasms/diagnostic imaging; Multiple Pulmonary Nodules/diagnostic imaging; Neural Networks, Computer; Randomized Controlled Trials as Topic; Tomography, X-Ray Computed; Ursidae
19.
Tomography ; 6(2): 250-260, 2020 06.
Article in English | MEDLINE | ID: mdl-32548303

ABSTRACT

Image acquisition parameters for computed tomography scans such as slice thickness and field of view may vary depending on tumor size and site. Recent studies have shown that some radiomics features were dependent on voxel size (= pixel size × slice thickness), and with proper normalization, this voxel size dependency could be reduced. Deep features from a convolutional neural network (CNN) have shown great promise in characterizing cancers. However, how do these deep features vary with changes in imaging acquisition parameters? To analyze the variability of deep features, a physical radiomics phantom with 10 different material cartridges was scanned on 8 different scanners. We assessed scans from 3 different cartridges (rubber, dense cork, and normal cork). Deep features from the penultimate layer of the CNN before (pre-rectified linear unit) and after (post-rectified linear unit) applying the rectified linear unit activation function were extracted from a pre-trained CNN using transfer learning. We studied both the interscanner and intrascanner dependency of deep features and also the deep features' dependency over the 3 cartridges. We found some deep features were dependent on pixel size and that, with appropriate normalization, this dependency could be reduced. False discovery rate was applied for multiple comparisons, to mitigate potentially optimistic results. We also used stable deep features for prognostic analysis on 1 non-small cell lung cancer data set.


Subject(s)
Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; Tomography, X-Ray Computed; Carcinoma, Non-Small-Cell Lung/diagnostic imaging; Humans; Neural Networks, Computer; Phantoms, Imaging
20.
J Med Imaging (Bellingham) ; 7(2): 024502, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32280729

ABSTRACT

Purpose: Due to the high incidence and mortality rates of lung cancer worldwide, early detection of precancerous lesions is essential. Low-dose computed tomography is a commonly used technique for screening, diagnosis, and prognosis of non-small-cell lung cancer. Recently, convolutional neural networks (CNNs) have shown great potential in lung nodule classification. Clinical information (family history, gender, and smoking history) together with nodule size provides information about lung cancer risk; large nodules carry greater risk than small nodules. Approach: A subset of cases from the National Lung Screening Trial was chosen as the dataset for our study. We divided the nodules into large and small groups based on different clinical guideline thresholds and analyzed each group individually; similarly, we grouped and analyzed the clinical features. CNNs were designed and trained over each of these groups individually. To our knowledge, this is the first study to incorporate nodule size and clinical features for classification using CNNs. We further built a hybrid model, ensembling the CNN models of clinical and size information, to enhance malignancy prediction. Results: We obtained 0.9 AUC and 83.12% accuracy, a significant improvement over our previous best results. Conclusions: We found that dividing the nodules by size and clinical information when building predictive models improved malignancy prediction. Our analysis also showed that appropriately integrating clinical information and size groups could further improve risk prediction.
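The size-grouped modeling described above can be sketched as follows. The synthetic data, the 8 mm guideline threshold, and the logistic models standing in for the CNNs are all illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 500
diameter = rng.uniform(4, 30, n)                 # nodule size in mm
texture = rng.normal(0, 1, n)                    # stand-in image feature
smoker = rng.integers(0, 2, n)                   # clinical covariate
logit = 0.15 * (diameter - 10) + 0.8 * texture + 0.7 * smoker - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([diameter, texture, smoker])
tr = rng.random(n) < 0.7                         # simple train/test split

# One model per size group (an 8 mm guideline-style threshold), each seeing
# both image and clinical features
small, large = diameter < 8, diameter >= 8
models = {}
for name, mask in [("small", small), ("large", large)]:
    models[name] = LogisticRegression().fit(X[tr & mask], y[tr & mask])

# Hybrid prediction: route each nodule to its own size group's model
proba = np.empty(n)
for name, mask in [("small", small), ("large", large)]:
    proba[mask] = models[name].predict_proba(X[mask])[:, 1]
print("hybrid AUC:", round(roc_auc_score(y[~tr], proba[~tr]), 3))
```

Fitting a separate model per size group lets each one learn group-specific feature weights instead of forcing a single decision boundary across all nodule sizes.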
