Results 1 - 16 of 16
1.
J Imaging Inform Med ; 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39037669

ABSTRACT

Adenomatous polyps, common premalignant lesions, are often classified into villous adenoma (VA) and tubular adenoma (TA). VA has a higher risk of malignancy, whereas TA typically grows slowly and has a lower likelihood of cancerous transformation. Accurate classification is essential for tailored treatment. In this study, we develop a deep learning-based approach for the localization and classification of adenomatous polyps using endoscopic images. Specifically, a pre-trained EGE-UNet is first adopted to extract regions of interest from the original images. Multi-level feature maps are then extracted by the feature extraction pipeline (FEP). The deep-level features are fed into the Pyramid Pooling Module (PPM) to capture global contextual information, and the squeeze body edge (SBE) module is then used to decouple the body and edge parts of the features, enabling separate analysis of their distinct characteristics. The Group Aggregation Bridge (GAB) and Boundary Enhancement Module (BEM) are then applied to enhance the body features and edge features, respectively, emphasizing their structural and morphological characteristics. By combining the features of the body and edge parts, the final output is obtained. Experiments show that the proposed method achieved promising results on two private datasets. For adenoma vs. non-adenoma classification, it achieved an mIoU of 91.41%, an mPA of 96.33%, an mHD of 11.63, and an mASD of 2.33. For adenoma subclassification (non-adenomas vs. villous adenomas vs. tubular adenomas), it achieved an mIoU of 91.21%, an mPA of 94.83%, an mHD of 13.75, and an mASD of 2.56. These results demonstrate the potential of our approach for precise adenomatous polyp classification.
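The pipeline above splits each feature map into a body part and an edge part, refines them separately, and fuses them for the final output. The minimal PyTorch sketch below illustrates that body/edge decoupling-and-fusion idea only; the module name, the smoothing-based split, and all layer sizes are our own assumptions, not the authors' implementation.

```python
# Minimal sketch of body/edge decoupling and fusion, assuming a smoothing-based
# split; this is illustrative, not the paper's SBE/GAB/BEM modules.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BodyEdgeDecoupler(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Separate refinement branches for the body (structure) and edge (morphology) parts.
        self.body_branch = nn.Conv2d(channels, channels, 3, padding=1)
        self.edge_branch = nn.Conv2d(channels, channels, 3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # A low-resolution smoothed copy approximates the "body" component;
        # the residual keeps the high-frequency "edge" detail.
        body = F.interpolate(
            F.adaptive_avg_pool2d(feat, feat.shape[-1] // 4),
            size=feat.shape[-2:], mode="bilinear", align_corners=False)
        edge = feat - body
        body = self.body_branch(body)
        edge = self.edge_branch(edge)
        return self.fuse(torch.cat([body, edge], dim=1))

x = torch.randn(1, 64, 128, 128)
print(BodyEdgeDecoupler(64)(x).shape)  # torch.Size([1, 64, 128, 128])
```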

2.
Pancreatology ; 24(3): 404-423, 2024 May.
Article in English | MEDLINE | ID: mdl-38342661

ABSTRACT

Pancreatic cancer is a digestive tract cancer with a high mortality rate. Despite the wide range of available treatments and improvements in surgery, chemotherapy, and radiation therapy, the five-year prognosis for individuals diagnosed with pancreatic cancer remains poor. Whether immunotherapy can be used to treat pancreatic cancer is still an open research question. The goals of our research were to characterize the tumor microenvironment of pancreatic cancer, to find a useful biomarker for assessing patient prognosis, and to investigate its biological relevance. In this paper, machine learning methods such as random forest were fused with weighted gene co-expression networks to screen hub immune-related genes (hub-IRGs), and a LASSO regression model was used for further selection. This yielded eight hub-IRGs. Based on the hub-IRGs, we created a prognostic risk prediction model for PAAD that can stratify patients accurately and produce a prognostic risk score (IRG_Score) for each patient. In the raw dataset and the validation dataset, the five-year area under the curve (AUC) for this model was 0.9 and 0.7, respectively. Shapley additive explanations (SHAP) were used to portray the importance of the factors influencing prognostic risk prediction from a machine learning perspective and to identify the most influential genes (or clinical factors). The five most important factors were TRIM67, CORT, PSPN, SCAMP5, and RFXAP, all of which are genes. In summary, the eight hub-IRGs had accurate risk prediction performance and biological significance, which was validated in other cancers. The SHAP results helped clarify the molecular mechanism of pancreatic cancer.
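A prognostic risk score such as IRG_Score is, in essence, a Cox-model linear predictor: a weighted sum of hub-gene expression values used to split patients into risk groups. The short numpy sketch below illustrates that construction; only the gene names come from the abstract, while the coefficient values and expression matrix are synthetic placeholders.

```python
# Illustrative construction of a Cox-style risk score; coefficients and
# expression data are made-up placeholders, not the study's fitted model.
import numpy as np

hub_genes = ["TRIM67", "CORT", "PSPN", "SCAMP5", "RFXAP"]
coefficients = np.array([0.42, -0.31, 0.18, 0.27, -0.12])   # hypothetical Cox betas
expression = np.random.default_rng(0).normal(size=(100, len(hub_genes)))  # 100 patients

# Risk score = linear predictor of the Cox model (sum of beta_i * expression_i).
irg_score = expression @ coefficients

# Stratify patients into high- and low-risk groups at the median score.
high_risk = irg_score > np.median(irg_score)
print(f"high-risk patients: {high_risk.sum()}, low-risk patients: {(~high_risk).sum()}")
```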


Subject(s)
Pancreatic Neoplasms; Humans; Area Under Curve; Gene Regulatory Networks; Immunotherapy; Machine Learning; Tumor Microenvironment; Membrane Proteins
3.
J Imaging Inform Med ; 37(2): 688-705, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38343260

ABSTRACT

Anterior cruciate ligament (ACL) tears are prevalent orthopedic sports injuries and are difficult to precisely classify. Previous works have demonstrated the ability of deep learning (DL) to provide support for clinicians in ACL tear classification scenarios, but it requires a large quantity of labeled samples and incurs a high computational expense. This study aims to overcome the challenges brought by small and imbalanced data and achieve fast and accurate ACL tear classification based on magnetic resonance imaging (MRI) of the knee. We propose a lightweight attentive graph neural network (GNN) with a conditional random field (CRF), named the ACGNN, to classify ACL ruptures in knee MR images. A metric-based meta-learning strategy is introduced to conduct independent testing through multiple node classification tasks. We design a lightweight feature embedding network using a feature-based knowledge distillation method to extract features from the given images. Then, GNN layers are used to find the dependencies between samples and complete the classification process. The CRF is incorporated into each GNN layer to refine the affinities. To mitigate oversmoothing and overfitting issues, we apply self-boosting attention, node attention, and memory attention for graph initialization, node updating, and correlation across graph layers, respectively. Experiments demonstrated that our model provided excellent performance on both oblique coronal data and sagittal data, with accuracies of 92.94% and 91.92%, respectively. Notably, our proposed method exhibited comparable performance to that of orthopedic surgeons during an internal clinical validation. This work shows the potential of our method to advance ACL diagnosis and to facilitate the development of computer-aided diagnosis methods for use in clinical practice.
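The ACGNN treats each MRI embedding as a graph node and infers dependencies between samples from their pairwise affinities. The PyTorch sketch below shows a generic similarity-weighted message-passing layer of that flavour; it is not the authors' GNN/CRF formulation, and the attention and meta-learning components are omitted.

```python
# Generic similarity-based message passing over sample embeddings; a sketch of
# the idea, not the ACGNN architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityGNNLayer(nn.Module):
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)
        self.classify = nn.Linear(dim, num_classes)

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # Affinity matrix from cosine similarity between sample embeddings.
        normed = F.normalize(node_feats, dim=-1)
        affinity = F.softmax(normed @ normed.t(), dim=-1)
        # Aggregate neighbour information and update each node.
        aggregated = affinity @ node_feats
        updated = F.relu(self.update(torch.cat([node_feats, aggregated], dim=-1)))
        return self.classify(updated)

embeddings = torch.randn(16, 128)          # 16 knee-MRI embeddings of dimension 128
logits = SimilarityGNNLayer(128, 2)(embeddings)
print(logits.shape)                        # torch.Size([16, 2]) -> torn vs. intact
```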

4.
J Med Internet Res ; 25: e44795, 2023 11 06.
Article in English | MEDLINE | ID: mdl-37856760

ABSTRACT

Lockdowns and border closures due to COVID-19 imposed mental, social, and financial hardships on many societies. Living with the virus and resuming normal life are increasingly being advocated due to decreasing virus severity and widespread vaccine coverage. However, current trends indicate a continued absence of effective contingency plans to stop the next, more virulent variant of the pandemic. The COVID-19-related mask waste crisis has also caused serious environmental problems and contributed to virus spread. It is timely and important to consider how to precisely implement surveillance for the dynamic clearance of COVID-19 and how to efficiently manage discarded masks to minimize disease transmission and environmental hazards. In this viewpoint, we sought to address this issue by proposing an appropriate strategy for intelligent surveillance of infected cases and centralized management of mask waste. Such an intelligent strategy against COVID-19, consisting of wearable mask sample collectors (masklect) and voiceprints and based on the STRONG (Spatiotemporal Reporting Over Network and GPS) strategy, could enable the resumption of social activities and economic recovery and sustainably ensure a safe public health environment.


Subject(s)
COVID-19; SARS-CoV-2; Humans; Masks; COVID-19/epidemiology; COVID-19/prevention & control; Public Health
5.
J Mol Med (Berl) ; 101(10): 1267-1287, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37653150

ABSTRACT

We aimed to develop an endoplasmic reticulum (ER) stress-related risk signature to predict the prognosis of melanoma and to elucidate the immune characteristics and benefit of immunotherapy in ER-related risk score-defined subgroups of melanoma based on a machine learning algorithm. Based on The Cancer Genome Atlas (TCGA) melanoma dataset (n = 471) and the GTEx database (n = 813), 365 differentially expressed ER-associated genes were selected using the univariate Cox model and the LASSO-penalized Cox model. Ten genes impacting overall survival (OS) were identified to construct an ER-related signature using the multivariate Cox regression method and validated with a Gene Expression Omnibus (GEO) dataset. Thereafter, the immune features, CNV, methylation, drug sensitivity, and the clinical benefit of anticancer immune checkpoint inhibitor (ICI) therapy in the risk score subgroups were analyzed. We further validated the gene signature using pan-cancer analysis by comparing it to other tumor types. The ER-related risk score was constructed based on the ARNTL, AGO1, TXN, SORL1, CHD7, EGFR, KIT, HLA-DRB1, KCNA2, and EDNRB genes. Patients in the high ER stress-related risk score group had poorer OS than patients in the low-risk score group, consistent with the results in the GEO cohort. The combined results suggested that a high ER stress-related risk score was associated with cell adhesion, gamma phagocytosis, cation transport, cell surface cell adhesion, KRAS signalling, CD4 T cells, M1 macrophages, naive B cells, natural killer (NK) cells, and eosinophils, and that high-risk patients benefitted less from ICI therapy. Based on the expression patterns of ER stress-related genes, we created an appropriate predictive model, which can also help distinguish the immune characteristics, CNV, methylation, and the clinical benefit of ICI therapy. KEY MESSAGES: Melanoma is a cutaneous tumor with a high degree of malignancy, the highest fatality rate among skin cancers, and an extremely poor prognosis. Model usefulness should be considered when using models that contain more features. We constructed the ER stress-associated signature using the TCGA and GEO databases based on a machine learning algorithm. The ER stress-associated signature has an excellent ability to predict prognosis in melanoma.
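The signature above is built by penalized Cox regression on candidate genes and summarized as a per-patient risk score. As a hedged illustration of that step, the sketch below fits a LASSO-penalized Cox model with the lifelines library on synthetic data; the gene subset is taken from the abstract, but the survival times, events, and hyperparameters are placeholders rather than the study's settings.

```python
# Sketch of a LASSO-penalized Cox fit on synthetic survival data; the fitted
# model's linear predictor plays the role of the ER-stress-related risk score.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(42)
genes = ["ARNTL", "AGO1", "TXN", "SORL1", "EDNRB"]        # subset of the signature genes
df = pd.DataFrame(rng.normal(size=(200, len(genes))), columns=genes)
df["time"] = rng.exponential(scale=36, size=200)          # months of follow-up (synthetic)
df["event"] = rng.integers(0, 2, size=200)                # 1 = death observed (synthetic)

# l1_ratio=1.0 makes the penalty pure LASSO, shrinking uninformative genes toward 0.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="event")

# The partial hazard (exp of the linear predictor) serves as the per-patient risk score.
risk_score = cph.predict_partial_hazard(df[genes])
print(risk_score.head())
```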

6.
Multimed Tools Appl ; : 1-21, 2023 May 04.
Article in English | MEDLINE | ID: mdl-37362730

ABSTRACT

Chronic suppurative otitis media (CSOM) and middle ear cholesteatoma (MEC) are the two most common chronic middle ear diseases (MEDs) in clinical practice. Accurate differential diagnosis between these two diseases is of high clinical importance given the differences in their etiologies, lesion manifestations, and treatments. High-resolution computed tomography (CT) of the temporal bone presents a clear view of the auditory structures and is currently regarded as the first-line diagnostic imaging modality for MED. In this paper, we first used a region-of-interest (ROI) network to find the area of the middle ear in the entire temporal bone CT image and segment it to a size of 100 × 100 pixels. Then, we used a structure-constrained deep feature fusion algorithm to extract characteristic features of the middle ear for three groups: CSOM, MEC, and normal patches. To fuse structure information, we introduced a graph isomorphism network that builds a feature vector from neighbourhoods and the coordinate distances between vertices. Finally, we constructed a classifier named the "otitis media, cholesteatoma and normal identification classifier" (OMCNIC). The experimental results achieved by the graph isomorphism network revealed a 96.36% accuracy in all CSOM and MEC classifications. The experimental results indicate that our structure-constrained deep feature fusion algorithm can quickly and effectively classify CSOM and MEC. It will help otologists select the most appropriate treatment and reduce complications.
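The graph isomorphism network (GIN) mentioned above aggregates neighbourhood features over a graph whose edges come from coordinate distances between patches. The self-contained PyTorch sketch below shows the standard GIN update on a toy distance-based graph; the distance threshold, feature sizes, and edge construction are simplifications, not the paper's exact scheme.

```python
# Standard GIN update h_v' = MLP((1 + eps) * h_v + sum of neighbour features),
# applied to a toy graph built from patch-centre distances.
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    def __init__(self, dim: int, eps: float = 0.0):
        super().__init__()
        self.eps = nn.Parameter(torch.tensor(eps))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj @ x sums the features of each node's neighbours.
        return self.mlp((1 + self.eps) * x + adj @ x)

# Toy graph: 5 patch embeddings; edges connect patches whose centres are within 30 pixels.
coords = torch.tensor([[10., 10.], [15., 12.], [80., 80.], [82., 85.], [50., 50.]])
adj = (torch.cdist(coords, coords) < 30).float() - torch.eye(5)   # drop self-loops
feats = torch.randn(5, 32)
print(GINLayer(32)(feats, adj).shape)   # torch.Size([5, 32])
```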

7.
Zhong Nan Da Xue Xue Bao Yi Xue Ban ; 48(3): 463-471, 2023 Mar 28.
Article in English, Chinese | MEDLINE | ID: mdl-37164930

ABSTRACT

With the optimization of deep learning algorithms and the accumulation of medical big data, deep learning technology has been widely applied in research across various fields of otology in recent years. At present, research on deep learning in otology combines a variety of data such as endoscopy, temporal bone images, audiograms, and intraoperative images, and involves the diagnosis of otologic diseases (including auricular malformations, external auditory canal diseases, middle ear diseases, and inner ear diseases), treatment (guiding medication and surgical planning), and prognosis prediction (involving hearing regression and speech learning). According to the type of data and the purpose of the study (disease diagnosis, treatment, or prognosis), different neural network models can be chosen to take advantage of their algorithms, and deep learning can be a good aid in treating otologic diseases. Deep learning has good application prospects in the clinical diagnosis and treatment of otologic diseases and can play a role in promoting the combination of deep learning and intelligent medicine.


Subject(s)
Deep Learning; Ear Diseases; Otolaryngology; Humans; Ear Diseases/diagnosis; Ear Diseases/therapy; Neural Networks, Computer; Algorithms
8.
Appl Intell (Dordr) ; 53(7): 7614-7633, 2023.
Article in English | MEDLINE | ID: mdl-35919632

ABSTRACT

Acne vulgaris, the most common skin disease, can have substantial economic and psychological impacts on the people it affects, and its accurate grading plays a crucial role in the treatment of patients. In this paper, we first propose an acne grading criterion that considers lesion classifications and a metric for producing accurate severity ratings. Because acne lesions of comparable severities have similar appearances and are difficult to count, severity assessment is a challenging task. We cropped several lesion patches from facial skin images and then processed them with a lightweight acne regular network (Acne-RegNet). Acne-RegNet was built by using a median filter and histogram equalization to improve image quality, a channel attention mechanism to boost the representational power of the network, a region-based focal loss to handle classification imbalances, and model pruning and feature-based knowledge distillation to reduce the model size. After the application of Acne-RegNet, the severity score is calculated, and the acne grading is further optimized using patient metadata. The entire acne assessment procedure was deployed to a mobile device, and a phone app was designed. Compared with state-of-the-art lightweight models, the proposed Acne-RegNet significantly improves the accuracy of lesion classification. The acne app demonstrated promising results in severity assessment (accuracy: 94.56%) and showed dermatologist-level diagnosis on the internal clinical dataset. The proposed acne app could be a useful adjunct for assessing acne severity in clinical practice, and it enables anyone with a smartphone to immediately assess acne, anywhere and anytime.
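Two of the image-quality steps named above, median filtering and histogram equalization, are easy to illustrate. The OpenCV sketch below applies both to a facial patch; the kernel size and the choice to equalize only the luminance channel of a YCrCb conversion are our assumptions, not Acne-RegNet's exact preprocessing.

```python
# Median filter + histogram equalization as a hedged stand-in for the paper's
# image-quality step; parameters are illustrative.
import cv2
import numpy as np

def preprocess_skin_patch(bgr_patch: np.ndarray) -> np.ndarray:
    # Median filter removes salt-and-pepper style noise while keeping edges.
    denoised = cv2.medianBlur(bgr_patch, 5)
    # Equalize only the luminance channel so the skin colour is preserved.
    ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

patch = (np.random.rand(224, 224, 3) * 255).astype(np.uint8)  # stand-in facial patch
print(preprocess_skin_patch(patch).shape)                      # (224, 224, 3)
```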

9.
Zhong Nan Da Xue Xue Bao Yi Xue Ban ; 47(8): 1037-1048, 2022 Aug 28.
Article in English, Chinese | MEDLINE | ID: mdl-36097771

ABSTRACT

OBJECTIVES: Chronic suppurative otitis media (CSOM) and middle ear cholesteatoma (MEC) are the 2 most common chronic middle ear diseases. In the process of diagnosis and treatment, the 2 diseases are prone to misdiagnosis and missed diagnosis due to their similar clinical manifestations. High-resolution computed tomography (HRCT) can clearly display the fine anatomical structure of the temporal bone, accurately reflect middle ear lesions and their extent, and has advantages in the differential diagnosis of chronic middle ear diseases. This study aims to develop a deep learning model for automatic information extraction and classification diagnosis of chronic middle ear diseases based on temporal bone HRCT image data, to improve the efficiency of classification and diagnosis of chronic middle ear diseases in clinical practice and to reduce the occurrence of missed diagnosis and misdiagnosis. METHODS: The clinical records and temporal bone HRCT imaging data of patients with chronic middle ear diseases hospitalized in the Department of Otorhinolaryngology, Xiangya Hospital from January 2018 to October 2020 were retrospectively collected. The patients' medical records were independently reviewed by 2 experienced otorhinolaryngologists, and the final diagnosis was reached by consensus. A total of 499 patients (998 ears) were enrolled in this study. The 998 ears were divided into 3 groups: an MEC group (108 ears), a CSOM group (622 ears), and a normal group (268 ears). Gaussian noise with different variances was used to augment the samples of the dataset to offset the imbalance in the number of samples between groups. The sample size of the augmented experimental dataset was 1806 ears. In the study, 75% (1355) of the samples were randomly selected for training, 10% (180) for validation, and the remaining 15% (271) for testing and evaluating the model performance. The overall design of the model was a serial structure, and 3 deep learning models with different functions were set up. The first model was a regional recommendation network algorithm, which located the middle ear image within the whole HRCT image and then cropped and saved it. The second model was an image comparison convolutional neural network (CNN) based on a twin network structure, which searched the cropped images for those matching the key layers of the HRCT images and constructed 3D data blocks. The third model was based on 3D-CNN operations and was used for the final classification and diagnosis of the constructed 3D data blocks, giving the final prediction probability. RESULTS: The special-level search network based on the twin network structure showed an average AUC of 0.939 on 10 special levels. The overall accuracy of the classification network based on the 3D-CNN was 96.5%, the overall recall rate was 96.4%, and the average AUC over the 3 classes was 0.983. The recall rates for CSOM cases and MEC cases were 93.7% and 97.4%, respectively. In the subsequent comparison experiments, the average accuracy of several classical CNNs was 79.3%, and the average recall rate was 87.6%. The precision rate and the recall rate of the deep learning network constructed in this study were about 17.2% and 8.8% higher, respectively, than those of the common CNNs.
CONCLUSIONS: The deep learning network model proposed in this study can automatically extract 3D data blocks containing middle ear features from the temporal bone HRCT image data of patients, which reduces the overall size of the data while preserving the relationships between corresponding images, and it further uses a 3D-CNN for the classification and diagnosis of CSOM and MEC. The design of this model fits the continuous nature of HRCT data well, and the experimental results show high precision and adaptability, better than current common CNN methods.
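As a rough illustration of the third-stage classifier described above, the following self-contained PyTorch sketch defines a small 3D CNN that maps a stacked block of cropped HRCT slices to the three diagnostic classes. The depth, spatial size, channel counts, and layer layout are illustrative assumptions, not the study's architecture.

```python
# Toy 3D CNN over a block of consecutive HRCT slices; shapes are illustrative.
import torch
import torch.nn as nn

classifier_3d = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, 3),                       # 3 classes: normal, CSOM, MEC
)

block = torch.randn(2, 1, 10, 100, 100)     # batch of 2 blocks of 10 cropped slices
print(classifier_3d(block).shape)           # torch.Size([2, 3])
```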


Subject(s)
Ear Diseases; Neural Networks, Computer; Algorithms; Humans; Retrospective Studies; Tomography, X-Ray Computed/methods
10.
J Ambient Intell Humaniz Comput ; : 1-17, 2022 Jul 05.
Article in English | MEDLINE | ID: mdl-35813275

ABSTRACT

To address the difficulty of obtaining a complete Bayesian network (BN) structure directly through search-and-score algorithms, the authors incorporated expert judgment and historical data to construct an interpretive structural model with an ISM-K2 algorithm for evaluating vaccination effectiveness (VE). By analyzing the influenza vaccine data provided by the Hunan Provincial Center for Disease Control and Prevention, risk factors influencing VE in each link of the "Transportation-Storage-Distribution-Inoculation" process were systematically investigated. Subsequently, an evaluation index system for VE and an ISM-K2 BN model were developed. The findings include: (1) the comprehensive quality of the staff handling vaccines has a significant impact on VE; (2) predictive inference and diagnostic reasoning through the ISM-K2 BN model are stable, effective, and highly interpretable, and consequently the post-production supervision of vaccines is enhanced. The study provides a theoretical basis for evaluating VE and a scientific tool for tracing responsibility for adverse events involving ineffective vaccines, and it is worth promoting for improving VE and reducing the transmission rate of infectious diseases.

11.
Int J Comput Assist Radiol Surg ; 17(3): 579-587, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34845590

ABSTRACT

PURPOSE: Fully automated abdominal adipose tissue segmentation from computed tomography (CT) scans plays an important role in biomedical diagnosis and prognosis. However, to identify and segment subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) in the abdominal region, the traditional routine process used in clinical practice is unattractive, expensive, time-consuming, and prone to false segmentation. To address this challenge, this paper introduces and develops EFNet, an effective global-anatomy-level convolutional neural network (ConvNet) for automated segmentation of abdominal adipose tissue from CT scans, designed to accommodate multistage semantic segmentation and the highly similar intensity characteristics of the two classes (VAT and SAT) in the abdominal region. METHODS: EFNet consists of three pathways: (1) the first pathway is the max unpooling operator, which was used to reduce computational consumption; (2) the second pathway is concatenation, which was applied to recover the shape of the segmentation results; (3) the third pathway is anatomy pyramid pooling, which was adopted to obtain fine-grained features. The usable anatomical information was encoded in the output of EFNet and allowed for control of the density of the fine-grained features. RESULTS: We formulated the learning process of EFNet in an end-to-end manner, in which the representation features can be jointly learned through a mixed feature fusion layer. We extensively evaluated our model on different datasets and compared it to existing deep learning networks. Our proposed model, EFNet, outperformed other state-of-the-art models on the segmentation results and demonstrated strong performance for abdominal adipose tissue segmentation. CONCLUSION: EFNet is extremely fast, with remarkable performance for fully automated segmentation of VAT and SAT in abdominal adipose tissue from CT scans. The proposed method demonstrates a strong ability for automated detection and segmentation of abdominal adipose tissue in clinical practice.
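The anatomy pyramid pooling pathway described above is a multi-scale pooling scheme. The PyTorch sketch below shows a generic pyramid-pooling block of that kind, pooling at several bin sizes, projecting, upsampling, and concatenating with the input; the bin sizes and channel counts are assumptions rather than EFNet's actual configuration.

```python
# Generic pyramid pooling: multi-scale average pooling, 1x1 projection,
# upsampling, and concatenation with the input feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPooling(nn.Module):
    def __init__(self, in_ch: int, bins=(1, 2, 4, 8)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(b), nn.Conv2d(in_ch, in_ch // len(bins), 1))
            for b in bins])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        pooled = [F.interpolate(stage(x), size=(h, w), mode="bilinear", align_corners=False)
                  for stage in self.stages]
        return torch.cat([x, *pooled], dim=1)   # original features + multi-scale context

x = torch.randn(1, 64, 32, 32)
print(PyramidPooling(64)(x).shape)              # torch.Size([1, 128, 32, 32])
```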


Subject(s)
Deep Learning; Abdominal Fat/diagnostic imaging; Humans; Neural Networks, Computer; Subcutaneous Fat; Tomography, X-Ray Computed
12.
Comput Methods Programs Biomed ; 207: 106212, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34126411

ABSTRACT

BACKGROUND AND OBJECTIVE: Esophageal high-resolution manometry (HRM) is widely performed to evaluate manometric features in patients for diagnosing normal esophageal motility and motility disorders. Clinicians commonly assess esophageal motility function using a scheme termed the Chicago classification, which is difficult, time-consuming, and inefficient with large amounts of data. METHODS: Deep learning is a promising approach for diagnosing disorders and has various attractive advantages. In this study, we effectively trace esophageal motility function with HRM by using a deep learning computational model, namely EMD-DL, which leverages three-dimensional convolution (Conv3D) and bidirectional convolutional long short-term memory (BiConvLSTM) models. More specifically, to fully exploit wet swallowing information, we establish an efficient swallowing representation method by localizing manometric features and regressing swallowing boxes from HRM. Then, EMD-DL learns how to identify major motility disorders, minor motility disorders, and normal motility. To the best of our knowledge, this is the first attempt to use Conv3D and BiConvLSTM to predict esophageal motility function from esophageal HRM. RESULTS: Test experiments on HRM datasets demonstrated that the overall accuracy of the proposed EMD-DL model is 91.32%, with 90.5% sensitivity and 95.87% specificity. By leveraging information across swallowing motor cycles, our model can rapidly recognize esophageal motility function better than a gastroenterologist and lays the foundation for accurately diagnosing esophageal motility disorders in real time. CONCLUSIONS: This approach opens new avenues for detecting and identifying esophageal motility function, thereby facilitating more efficient computer-aided diagnosis in clinical practice.
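The model above combines Conv3D feature extraction over individual swallows with a bidirectional recurrent model that links swallows within a study. The PyTorch sketch below captures that two-part idea, but it substitutes a plain BiLSTM for the paper's BiConvLSTM, and all tensor shapes are illustrative assumptions.

```python
# Conv3D per-swallow encoder followed by a bidirectional LSTM over the swallow
# sequence; a simplified stand-in for the Conv3D + BiConvLSTM design.
import torch
import torch.nn as nn

class SwallowSequenceClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.bilstm = nn.LSTM(8, 32, batch_first=True, bidirectional=True)
        self.head = nn.Linear(64, num_classes)   # normal / minor / major disorders

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        b, n = clips.shape[:2]                   # (batch, swallows, 1, T, H, W)
        feats = self.encoder(clips.flatten(0, 1)).view(b, n, -1)
        seq, _ = self.bilstm(feats)
        return self.head(seq.mean(dim=1))        # pool over the swallow sequence

study = torch.randn(2, 10, 1, 16, 36, 50)        # 2 studies x 10 wet swallows each
print(SwallowSequenceClassifier()(study).shape)  # torch.Size([2, 3])
```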


Subject(s)
Deep Learning; Esophageal Motility Disorders; Deglutition; Diagnosis, Computer-Assisted; Esophageal Motility Disorders/diagnosis; Humans; Manometry
13.
J Digit Imaging ; 34(2): 337-350, 2021 04.
Article in English | MEDLINE | ID: mdl-33634415

ABSTRACT

Jaundice occurs as a symptom of various diseases, such as hepatitis, liver cancer, and diseases of the gallbladder or pancreas. Therefore, clinical measurement with special equipment is a common method used to identify the total serum bilirubin level in patients. Fully automated multi-class recognition of jaundice involves two key issues: (1) multi-class recognition of jaundice is critically more difficult than the binary-class setting, and (2) it must cope with extensive inter-individual variability in high-resolution photos of subjects, strong similarity between healthy controls and occult jaundice, and broadly inhomogeneous color distributions. We introduce a novel approach for multi-class recognition of jaundice that distinguishes occult jaundice, obvious jaundice, and healthy controls. First, a region annotation network is developed and trained to propose eye candidates. Subsequently, an efficient jaundice recognizer is proposed to learn similarities, context, localization features, and globalization characteristics from photos of subjects. Finally, both networks are unified by using a shared convolutional layer. Evaluation of the structured model in a comparative study resulted in a significant performance boost (mean categorical accuracy of 91.38%) over the independent human observer. Our model exceeded the state-of-the-art convolutional neural network (96.85% and 90.06% on the training and validation subsets, respectively) and showed a remarkable mean categorical accuracy of 95.33% on the testing subset. The proposed network performs better than physicians. This work demonstrates the strength of our proposal and helps bring an efficient tool for multi-class recognition of jaundice into clinical practice.


Subject(s)
Jaundice; Neural Networks, Computer; Humans
14.
Pattern Recognit ; 110: 107613, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32868956

ABSTRACT

The COVID-19 outbreak continues to threaten the health and lives of people worldwide. It is an immediate priority to develop and test a computer-aided detection (CAD) scheme based on deep learning (DL) to automatically localize and differentiate COVID-19 from community-acquired pneumonia (CAP) on chest X-rays. Therefore, this study aims to develop and test an efficient and accurate deep learning scheme that assists radiologists in automatically recognizing and localizing COVID-19. A retrospective chest X-ray image dataset was collected from open image data and Xiangya Hospital and was divided into a training group and a testing group. The proposed CAD framework is composed of two steps with DLs: the Discrimination-DL and the Localization-DL. The first DL was developed to extract lung features from chest X-ray radiographs for COVID-19 discrimination and was trained using 3548 chest X-ray radiographs. The second DL was trained with 406-pixel patches and applied to the recognized X-ray radiographs to localize the findings and assign them to the left lung, right lung, or both lungs (bipulmonary). X-ray radiographs of CAP and healthy controls were enrolled to evaluate the robustness of the model. Compared to the radiologists' discrimination and localization results, the accuracy of COVID-19 discrimination using the Discrimination-DL was 98.71%, while the accuracy of localization using the Localization-DL was 93.03%. This work demonstrates the feasibility of using a novel deep learning-based CAD scheme to efficiently and accurately distinguish COVID-19 from CAP and detect its localization with high accuracy and agreement with radiologists.
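The localization step above maps patch-level predictions to a per-study label of left lung, right lung, or bipulmonary involvement. The toy Python sketch below shows one simple way such an aggregation could work; the thresholds, data format, and decision rule are our own assumptions, not the paper's Localization-DL logic.

```python
# Toy aggregation of patch-level predictions into a study-level localization label.
def assign_localization(patch_results, min_positive=2):
    """patch_results: list of (side, covid_probability) pairs, one per image patch."""
    left = sum(p > 0.5 for side, p in patch_results if side == "left")
    right = sum(p > 0.5 for side, p in patch_results if side == "right")
    if left >= min_positive and right >= min_positive:
        return "bipulmonary"
    if left >= min_positive:
        return "left lung"
    if right >= min_positive:
        return "right lung"
    return "no focal involvement detected"

patches = [("left", 0.91), ("left", 0.77), ("right", 0.12), ("right", 0.35)]
print(assign_localization(patches))   # left lung
```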

15.
Ann Biomed Eng ; 48(1): 312-328, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31451989

ABSTRACT

One major role of an accurate assessment of abdominal adipose tissue distribution is to predict disease risk. This paper proposes a novel and effective three-level convolutional neural network (CNN) approach to automate the selection of abdominal computed tomography (CT) images from large-scale CT scans and to automatically quantify visceral and subcutaneous adipose tissue. First, the proposed framework employs a support vector machine (SVM) classifier with configured parameters to cluster abdominal CT images from screening patients. Second, a pyramid dilation network (DilaLab) is designed based on a CNN to address the complex distribution of visceral adipose tissue and the problem of non-abdominal internal adipose tissue in biomedical image segmentation. Finally, since the trained DilaLab implicitly encodes fat-related learning, the transferred DilaLab learning and a simple decoder constitute a new network (DilaLabPlus) for quantifying subcutaneous adipose tissue. The networks were trained not only with all available CT images but also with a limited number of CT scans, such as 70 samples including a 10% validation subset, and all networks yielded precise results. The configured SVM classifier achieves a promising mean accuracy of 99.83%, while DilaLabPlus achieves a remarkable performance with an average accuracy of 98.08 ± 0.84% (standard deviation) and a false-positive rate of 0.7 ± 0.8%. DilaLab yields an average accuracy of 97.82 ± 1.34% and a false-positive rate of 1.23 ± 1.33%. This study demonstrates considerable improvement in the feasibility and reliability of fully automated recognition of abdominal CT slices and segmentation of the selected abdominal CT slices into subcutaneous and visceral adipose tissue, with high agreement with a manually annotated biomarker.
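The first level of the framework above is an SVM that selects abdominal slices before any segmentation is attempted. The scikit-learn sketch below illustrates that slice-selection step on synthetic histogram features; the feature choice, labels, and hyperparameters are placeholders rather than the paper's configured classifier.

```python
# SVM-based slice selection on synthetic per-slice features; a hedged stand-in
# for the configured SVM classifier in the first stage of the pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
features = rng.normal(size=(300, 64))           # one 64-bin intensity histogram per CT slice
labels = rng.integers(0, 2, size=300)           # 1 = abdominal slice (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.2, random_state=0)
svm = SVC(kernel="rbf", C=10.0, gamma="scale")  # hyperparameters are guesses, not the paper's
svm.fit(X_tr, y_tr)
print(f"held-out accuracy: {svm.score(X_te, y_te):.2f}")
```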


Subject(s)
Adipose Tissue/diagnostic imaging; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Subcutaneous Tissue/diagnostic imaging; Tomography, X-Ray Computed; Abdomen/diagnostic imaging; Humans; Support Vector Machine
16.
IEEE Trans Neural Netw ; 21(9): 1517-23, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20693108

ABSTRACT

It is well known that single hidden layer feedforward networks with radial basis function (RBF) kernels are universal approximators when all the parameters of the networks are obtained through suitable algorithms. However, as observed in most neural network implementations, tuning all the parameters of the network may make learning complicated and unstable and lead to poor generalization and overtraining. Unlike conventional neural network theories, this brief gives a constructive proof of the fact that a decay RBF neural network with n+1 hidden neurons can interpolate n+1 multivariate samples with zero error. We then prove that the given decay RBFs can uniformly approximate any continuous multivariate function with arbitrary precision without training. Faster convergence and better generalization performance than the conventional RBF algorithm, the BP algorithm, extreme learning machines, and support vector machines are shown by means of two numerical experiments.
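The zero-error interpolation claim above can be checked numerically: with one hidden neuron centred at each sample, the output weights solve a linear system and the network reproduces the targets exactly. The numpy sketch below demonstrates this with a Gaussian kernel, which is a stand-in; the paper's decay RBF construction and its training-free approximation argument differ in detail.

```python
# Exact interpolation by an RBF network with one centre per sample: solve G w = y.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(11, 2))            # n + 1 = 11 multivariate samples
y = np.sin(X[:, 0]) + np.cos(X[:, 1])           # arbitrary target values

def rbf_matrix(A, B, width=1.0):
    # Gaussian RBF activations between every pair of points in A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

G = rbf_matrix(X, X)                            # hidden-layer activations at the samples
weights = np.linalg.solve(G, y)                 # output weights: exact interpolation

residual = np.abs(rbf_matrix(X, X) @ weights - y).max()
print(f"max interpolation error: {residual:.2e}")   # close to machine precision
```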


Subject(s)
Algorithms; Artificial Intelligence; Computer Simulation/standards; Multivariate Analysis; Neural Networks, Computer; Mathematical Computing