Results 1 - 18 of 18
1.
Sensors (Basel) ; 23(19)2023 Sep 23.
Article in English | MEDLINE | ID: mdl-37836874

ABSTRACT

The Internet of Things (IoT) has significantly benefited several businesses, but the volume and complexity of IoT systems have introduced new security issues. Intrusion detection systems (IDSs) help maintain the security posture of IoT devices and defend them against intrusions. IoT systems have recently made wide use of machine learning (ML) techniques for IDSs. The primary deficiencies of existing IoT security frameworks are their inadequate intrusion detection capabilities, significant latency, and prolonged processing time, leading to undesirable delays. To address these issues, this work proposes a novel range-optimized attention convolutional scattered technique (ROAST-IoT) to protect IoT networks from modern threats and intrusions. The system uses the scattered range feature selection (SRFS) model to choose the most crucial and trustworthy properties from the supplied intrusion data. The attention-based convolutional feed-forward network (ACFN) technique is then used to recognize the intrusion class. In addition, the loss function is estimated using the modified dingo optimization (MDO) algorithm to ensure the maximum accuracy of the classifier. To evaluate and compare the performance of the proposed ROAST-IoT system, we used popular intrusion datasets such as ToN-IoT, IoT-23, UNSW-NB 15, and Edge-IIoT. The analysis of the results shows that the proposed ROAST technique outperformed all existing cutting-edge intrusion detection systems, with an accuracy of 99.15% on the IoT-23 dataset, 99.78% on the ToN-IoT dataset, 99.88% on the UNSW-NB 15 dataset, and 99.45% on the Edge-IIoT dataset. On average, the ROAST-IoT system achieved a high AUC-ROC of 0.998, demonstrating its capacity to distinguish between legitimate data and attack traffic. These results indicate that the ROAST-IoT algorithm effectively and reliably detects intrusions and protects IoT systems against cyberattacks.
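As a rough illustration of the attention-over-features idea described in this abstract (not the published ROAST-IoT code; the layer sizes, the softmax feature attention, and the 40-feature/5-class setup are assumptions), a minimal PyTorch sketch might look like this:

```python
# Illustrative sketch only: a minimal attention-weighted feed-forward classifier
# for tabular intrusion features. It is NOT the published ROAST-IoT implementation.
import torch
import torch.nn as nn

class AttentionFFClassifier(nn.Module):
    def __init__(self, n_features: int, n_classes: int, hidden: int = 64):
        super().__init__()
        self.attn = nn.Linear(n_features, n_features)   # one attention score per feature
        self.body = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        weights = torch.softmax(self.attn(x), dim=-1)    # feature-attention weights
        return self.body(x * weights)                    # re-weighted features -> class logits

model = AttentionFFClassifier(n_features=40, n_classes=5)
logits = model(torch.randn(8, 40))                       # batch of 8 dummy flow records
print(logits.shape)                                      # torch.Size([8, 5])
```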

2.
IT Prof ; 23(4): 57-62, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-35582211

ABSTRACT

The novel coronavirus named COVID-19 has quickly spread among humans worldwide, and the situation remains hazardous to the health system. The existence of this virus in the human body is identified through sputum or blood samples. Furthermore, computed tomography (CT) or X-ray imaging has become a significant tool for quick diagnosis. Thus, it is essential to develop an online, real-time computer-aided diagnosis (CAD) approach to support physicians and avoid further spreading of the disease. In this research, a convolutional neural network (CNN)-based residual neural network (ResNet50) has been employed to detect COVID-19 from chest X-ray images and achieved 98% accuracy. The proposed CAD system receives X-ray images from remote hospitals and healthcare centers and performs the diagnostic process. Furthermore, the proposed CAD system uses advanced load-balancing and resilience features to achieve fault tolerance with zero delay and to detect more infected cases during this pandemic.
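A minimal sketch of the kind of ResNet50 transfer-learning setup the abstract describes, assuming a frozen ImageNet backbone and a two-class head; the class count and hyperparameters are illustrative, not the authors' configuration:

```python
# Hedged sketch of ResNet50 transfer learning for chest X-ray classification;
# the two-class head and training hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():                     # freeze the pretrained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)    # COVID-19 vs. normal (assumed binary head)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)                  # dummy batch standing in for X-ray images
y = torch.tensor([0, 1, 0, 1])
loss = criterion(model(x), y)                    # forward pass and loss on the new head
loss.backward()
optimizer.step()
```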

3.
J Cardiothorac Vasc Anesth ; 31(1): 37-44, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27554234

ABSTRACT

OBJECTIVE: To assess the accuracy and applicability of a novel system, not requiring calibration, for continuous lactate monitoring with intravascular microdialysis in high-risk cardiac surgery. DESIGN: Single-center prospective observational study. SETTING: City Hospital #1 of Arkhangelsk, Russian Federation. PARTICIPANTS: Twenty-one adult patients undergoing elective complex repair or replacement of two or more valves or combined valve and coronary artery cardiac surgery. INTERVENTIONS: After induction of anesthesia, in all patients a dedicated triple-lumen catheter functioning as a regular central venous catheter with an integrated microdialysis function was inserted via the right jugular vein for continuous lactate monitoring using the intravascular microdialysis system. MEASUREMENTS AND MAIN RESULTS: Lactate values displayed by the microdialysis system were compared with the reference arterial blood gas (ABG) values. In total, 432 paired microdialysis-ABG lactate samples were obtained. After surgery, the concentration of lactate increased significantly, peaking at 8 hours (p < 0.05). The lactate clearance within 8 hours after peak concentration was 50% (39%-63%). There was a significant correlation between the continuous microdialysis and ABG lactate values (rho = 0.92, p < 0.0001). Bland-Altman analysis showed a bias (mean difference) ± limits of agreement (±1.96 SD) of 0.09 ± 1.1 mmol/L. In patients with postoperative complications, peak lactate concentration was significantly higher compared with those without complications: 6.75 (4.43-7.75) mmol/L versus 4.20 (3.95-4.87) mmol/L (p = 0.002). CONCLUSIONS: Lactate concentration increased significantly after high-risk cardiac surgery. The intravascular microdialysis technique for lactate measurement provided acceptable accuracy and can be used for continuous blood lactate monitoring in cardiac surgery.
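For readers unfamiliar with the Bland-Altman statistics quoted above (bias and ±1.96 SD limits of agreement), a minimal computation on invented lactate pairs could look like this; the numbers are not the study's data:

```python
# Minimal Bland-Altman computation: bias and +/-1.96 SD limits of agreement.
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    diff = a - b
    bias = diff.mean()                     # mean difference between the two methods
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

microdialysis = np.array([1.2, 2.5, 4.1, 6.8, 3.3])   # hypothetical lactate values, mmol/L
abg           = np.array([1.1, 2.7, 4.0, 6.5, 3.6])
bias, lo, hi = bland_altman(microdialysis, abg)
print(f"bias {bias:.2f} mmol/L, limits of agreement [{lo:.2f}, {hi:.2f}] mmol/L")
```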


Subject(s)
Cardiac Surgical Procedures/methods, Lactic Acid/blood, Monitoring, Intraoperative/methods, Aged, Cardiac Surgical Procedures/adverse effects, Catheterization, Central Venous/methods, Female, Humans, Hyperlactatemia/diagnosis, Hyperlactatemia/etiology, Male, Microdialysis/methods, Middle Aged, Monitoring, Physiologic/methods, Postoperative Care/methods, Prospective Studies
4.
J Clin Monit Comput ; 31(2): 361-370, 2017 Apr.
Article in English | MEDLINE | ID: mdl-26951494

ABSTRACT

To evaluate the accuracy of estimated continuous cardiac output (esCCO) based on pulse wave transit time in comparison with cardiac output (CO) assessed by transpulmonary thermodilution (TPTD) in off-pump coronary artery bypass grafting (OPCAB). We calibrated the esCCO system with non-invasive (Part 1) and invasive (Part 2) blood pressure and compared it with TPTD measurements. We performed parallel measurements of CO with both techniques and assessed the accuracy and precision of individual CO values and the agreement of trends of changes perioperatively (Part 1) and postoperatively (Part 2). A Bland-Altman analysis revealed a bias between non-invasive esCCO and TPTD of 0.9 L/min and limits of agreement of ±2.8 L/min. Intraoperative bias was 1.2 L/min with limits of agreement of ±2.9 L/min and a percentage error (PE) of 64%. Postoperatively, the bias was 0.4 L/min, with limits of agreement of ±2.3 L/min and a PE of 41%. A Bland-Altman analysis of invasive esCCO and TPTD after OPCAB found a bias of 0.3 L/min with limits of agreement of ±2.1 L/min and a PE of 40%. A 4-quadrant plot analysis of non-invasive esCCO versus TPTD revealed overall, intraoperative, and postoperative concordance rates of 76%, 65%, and 89%, respectively. The analysis of the trending ability of invasive esCCO after OPCAB revealed a concordance rate of 73%. During OPCAB, esCCO demonstrated poor accuracy, precision, and trending ability compared to TPTD. Postoperatively, non-invasive esCCO showed better agreement with TPTD. However, invasive calibration of esCCO did not improve the accuracy, precision, or trending ability of the method.
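The percentage error and 4-quadrant concordance rate reported above can be computed as sketched below; the CO values are invented, and the 0.5 L/min exclusion zone is a common convention assumed here, not stated in the abstract:

```python
# Sketch of two agreement metrics used for cardiac output comparisons:
# percentage error (1.96 x SD of differences / mean CO) and a 4-quadrant concordance rate.
import numpy as np

def percentage_error(test: np.ndarray, ref: np.ndarray) -> float:
    sd_diff = np.std(test - ref, ddof=1)
    return 100 * 1.96 * sd_diff / np.mean((test + ref) / 2)

def concordance_rate(d_test: np.ndarray, d_ref: np.ndarray, excl: float = 0.5) -> float:
    keep = (np.abs(d_test) > excl) | (np.abs(d_ref) > excl)   # drop small changes (exclusion zone)
    return 100 * np.mean(np.sign(d_test[keep]) == np.sign(d_ref[keep]))

escco = np.array([4.0, 4.6, 5.1, 4.2, 5.8])    # hypothetical CO series, L/min
tptd  = np.array([3.6, 4.9, 4.8, 4.5, 5.2])
print(percentage_error(escco, tptd))            # PE in %
print(concordance_rate(np.diff(escco), np.diff(tptd)))   # trending agreement in %
```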


Subject(s)
Cardiac Output/physiology, Coronary Artery Bypass, Off-Pump, Monitoring, Intraoperative/methods, Monitoring, Physiologic/methods, Thermodilution/methods, Aged, Anesthesia, Blood Pressure, Blood Pressure Determination, Calibration, Female, Humans, Male, Middle Aged, Monitoring, Intraoperative/instrumentation, Pulse Wave Analysis, Reproducibility of Results, Time Factors, Vascular Resistance
5.
ScientificWorldJournal ; 2014: 672630, 2014.
Article in English | MEDLINE | ID: mdl-24967437

ABSTRACT

Face recognition has gained importance in today's technological world, and face recognition applications are attaining much more significance. Most of the existing work used frontal face images to classify face images. However, these techniques fail when applied to real-world face images. The proposed technique effectively extracts the prominent facial features. Many of the extracted features are redundant and do not contribute to representing the face. In order to eliminate those redundant features, a computationally efficient algorithm is used to select the more discriminative face features. The extracted features are then passed to the classification step. In the classification step, different classifiers are combined in an ensemble to enhance the recognition accuracy, as a single classifier is unable to achieve high accuracy. Experiments are performed on standard face database images, and the results are compared with existing techniques.
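A hedged sketch of the ensemble-of-classifiers step described above, using scikit-learn; the particular base classifiers and the stand-in feature set are assumptions, not the paper's pipeline:

```python
# Illustrative ensemble of classifiers with soft voting; the digits dataset stands in
# for extracted face features, and the chosen base models are assumptions.
from sklearn.datasets import load_digits
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)              # stand-in for selected face features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier())],
    voting="soft",                               # average predicted class probabilities
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```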


Subject(s)
Artificial Intelligence, Face, Pattern Recognition, Automated, Facial Expression, Female, Humans, Male, Reproducibility of Results
6.
Crit Care ; 17(5): R191, 2013 Sep 08.
Article in English | MEDLINE | ID: mdl-24010849

ABSTRACT

INTRODUCTION: Several single-center studies and meta-analyses have shown that perioperative goal-directed therapy may significantly improve outcomes in general surgical patients. We hypothesized that using a treatment algorithm based on pulse pressure variation, cardiac index trending by radial artery pulse contour analysis, and mean arterial pressure in a study group (SG) would result in reduced complications, reduced length of hospital stay, and quicker return of bowel movement postoperatively in abdominal surgical patients, when compared to a control group (CG). METHODS: 160 patients undergoing elective major abdominal surgery were randomized to the SG (79 patients) or the CG (81 patients). In the SG, hemodynamic therapy was guided by pulse pressure variation, cardiac index trending, and mean arterial pressure. In the CG, hemodynamic therapy was performed at the discretion of the treating anesthesiologist. Outcome data were recorded up to 28 days postoperatively. RESULTS: The total number of complications was significantly lower in the SG (SG: 52 vs. CG: 72 complications, p = 0.038). In particular, infection complications were significantly reduced (SG: 13 vs. CG: 26 complications, p = 0.023). There were no significant differences between the two groups for return of bowel movement (SG: 3 vs. CG: 2 days postoperatively, p = 0.316), duration of post-anesthesia care unit stay (SG: 180 vs. CG: 180 minutes, p = 0.516), or length of hospital stay (SG: 11 vs. CG: 10 days, p = 0.929). CONCLUSIONS: This multi-center study demonstrates that hemodynamic goal-directed therapy using pulse pressure variation, cardiac index trending, and mean arterial pressure as the key parameters leads to a decrease in postoperative complications in patients undergoing major abdominal surgery. TRIAL REGISTRATION: ClinicalTrials.gov, NCT01401283.
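A toy decision helper in the spirit of such a PPV/CI/MAP-based algorithm is sketched below; the thresholds are commonly cited example values, not the trial's actual protocol:

```python
# Illustrative goal-directed-therapy helper. The thresholds (PPV > 10%,
# CI < 2.5 L/min/m2, MAP < 65 mmHg) are assumed example values, not the study algorithm.
def gdt_suggestion(ppv_percent: float, ci: float, map_mmhg: float) -> str:
    if ppv_percent > 10:
        return "fluid bolus (patient likely preload-responsive)"
    if ci < 2.5:
        return "consider inotrope (low cardiac index despite adequate preload)"
    if map_mmhg < 65:
        return "consider vasopressor (low MAP with adequate cardiac index)"
    return "targets met: continue monitoring"

print(gdt_suggestion(ppv_percent=14, ci=2.8, map_mmhg=70))   # -> fluid bolus ...
```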


Subject(s)
Blood Pressure/physiology, Elective Surgical Procedures/adverse effects, Hemodynamics/physiology, Monitoring, Intraoperative/methods, Patient Care Planning, Perioperative Care/methods, Postoperative Complications/prevention & control, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Postoperative Complications/diagnosis, Postoperative Complications/etiology, Prospective Studies, Radial Artery/physiology
7.
Diagnostics (Basel) ; 13(15)2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37568946

ABSTRACT

Computed tomography (CT) scans and radiographic images are used to aid early diagnosis and to detect normal and abnormal lung function in the human chest. However, lungs infected with coronavirus disease 2019 (COVID-19) were diagnosed more accurately from CT scan data than from a swab test. This study uses human chest radiography images to identify and categorize normal lungs, lung opacities, COVID-19-infected lungs, and viral pneumonia (often called pneumonia). In the past, several CAD systems based on image processing and ML/DL techniques have been developed. However, those CAD systems did not provide a general solution, required extensive hyper-parameter tuning, and were computationally inefficient when processing huge datasets. Moreover, the DL models involved high computational complexity and memory cost, and the complexity of the experimental image backgrounds made it difficult to train an efficient model. To address these issues, in this research we improved the Inception module to recognize and detect four classes of chest X-ray by substituting the original convolutions with an architecture based on modified Xception (m-Xception). In addition, the model incorporates depth-separable convolution layers within the convolution layer, interlinked by linear residuals. The model's training utilized a two-stage transfer learning process to produce an effective model. Finally, we used the XgBoost classifier to recognize multiple classes of chest X-rays. To evaluate the m-Xception model, the 1095-image dataset was expanded using a data augmentation technique into 48,000 X-ray images, including 12,000 normal, 12,000 pneumonia, 12,000 COVID-19, and 12,000 lung opacity images, thereby balancing the classes. Using public datasets with three distinct train-test divisions (80-20%, 70-30%, and 60-40%) to evaluate our work, we attained an average of 96.5% accuracy, 96% F1 score, 96% recall, and 96% precision. A comparative analysis demonstrates that the m-Xception method outperforms comparable existing methods. The results of the experiments indicate that the proposed approach can assist radiologists in better diagnosing different lung diseases.
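A minimal sketch of an Xception-style block with depthwise separable convolutions and a linear residual, the building block the abstract refers to; channel sizes and layer counts are assumptions, and this is not the authors' m-Xception network:

```python
# Rough sketch of an Xception-style block: depthwise separable convolutions linked
# by an identity (linear) residual. Sizes are assumed for illustration.
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False)
        self.pointwise = nn.Conv2d(c_in, c_out, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class XceptionBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = SeparableConv2d(channels, channels)
        self.conv2 = SeparableConv2d(channels, channels)
        self.bn1, self.bn2 = nn.BatchNorm2d(channels), nn.BatchNorm2d(channels)
        self.act = nn.ReLU()

    def forward(self, x):
        out = self.bn1(self.act(self.conv1(x)))
        out = self.bn2(self.act(self.conv2(out)))
        return out + x                              # linear residual connection

print(XceptionBlock(64)(torch.randn(1, 64, 56, 56)).shape)   # torch.Size([1, 64, 56, 56])
```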

8.
Diagnostics (Basel) ; 13(1)2023 Jan 03.
Article in English | MEDLINE | ID: mdl-36611459

ABSTRACT

Mental deterioration or Alzheimer's (ALZ) disease is progressive and causes both physical and mental dependency. There is a need for a computer-aided diagnosis (CAD) system that can help doctors make an immediate decision. (1) Background: Currently, CAD systems are developed based on hand-crafted features, machine learning (ML), and deep learning (DL) techniques. Those CAD systems frequently require domain-expert knowledge and massive datasets to extract deep features or train models, which causes problems with class imbalance and overfitting. Additionally, radiologists still rely on manual approaches due to the lack of available datasets and the need to train models with cost-effective computation. Existing works focus on performance improvement while neglecting the problems of limited datasets, high computational complexity, and the unavailability of lightweight and efficient feature descriptors. (2) Methods: To address these issues, a new approach, CAD-ALZ, is developed by extracting deep features through a ConvMixer layer with a blockwise fine-tuning strategy on a very small original dataset. First, we apply data augmentation to the images to increase the size of the dataset. In this study, a blockwise fine-tuning strategy is employed on the ConvMixer model to detect robust features. Afterwards, a random forest (RF) is used to classify ALZ disease stages. (3) Results: The proposed CAD-ALZ model obtained significant results on six evaluation metrics: the F1-score, Kappa, accuracy, precision, sensitivity, and specificity. The CAD-ALZ model performed with a sensitivity of 99.69% and an F1-score of 99.61%. (4) Conclusions: The suggested CAD-ALZ approach is a potential technique for clinical use, offering computational efficiency compared to state-of-the-art approaches. The CAD-ALZ model code is freely available on GitHub for the scientific community.
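A rough sketch of the ConvMixer-feature-plus-random-forest idea, assuming a single ConvMixer layer and dummy inputs; it is not the released CAD-ALZ model:

```python
# Minimal ConvMixer-style layer (depthwise conv with a residual, then pointwise conv),
# followed by a random forest on pooled features. Dimensions and data are dummies.
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class ConvMixerLayer(nn.Module):
    def __init__(self, dim, kernel_size=9):
        super().__init__()
        self.depthwise = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, groups=dim, padding=kernel_size // 2),
            nn.GELU(), nn.BatchNorm2d(dim))
        self.pointwise = nn.Sequential(nn.Conv2d(dim, dim, 1), nn.GELU(), nn.BatchNorm2d(dim))

    def forward(self, x):
        x = x + self.depthwise(x)                  # residual over the depthwise mixing
        return self.pointwise(x)

extractor = nn.Sequential(ConvMixerLayer(32), nn.AdaptiveAvgPool2d(1), nn.Flatten())
with torch.no_grad():
    feats = extractor(torch.randn(20, 32, 64, 64)).numpy()   # 20 dummy image patches
labels = [0, 1] * 10
clf = RandomForestClassifier().fit(feats, labels)            # RF stage classifier
print(clf.predict(feats[:3]))
```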

9.
Diagnostics (Basel) ; 13(18)2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37761291

ABSTRACT

Convolutional neural network (CNN) models have been extensively applied to skin lesion segmentation due to their information discrimination capabilities. However, CNNs struggle to capture the connection between long-range contexts when extracting deep semantic features from lesion images, resulting in a semantic gap that causes segmentation distortion in skin lesions. Therefore, detecting the presence of differential structures such as pigment networks, globules, streaks, negative networks, and milia-like cysts becomes difficult. To resolve these issues, we have proposed an approach based on semantic segmentation (Dermo-Seg) to detect differential structures of lesions using a UNet model with a transfer-learning-based ResNet-50 architecture and a hybrid loss function. The Dermo-Seg model uses the ResNet-50 backbone architecture as the encoder in the UNet model. We have applied a combination of focal Tversky loss and IOU loss functions to handle the dataset's highly imbalanced class ratio. The obtained results prove that the intended model performs well compared to the existing models. The dataset was acquired from various sources, such as ISIC18, ISBI17, and HAM10000, to evaluate the Dermo-Seg model. We have dealt with the data imbalance present within each class at the pixel level using our hybrid loss function. The proposed model achieves a mean IOU score of 0.53 for streaks, 0.67 for pigment networks, 0.66 for globules, 0.58 for negative networks, and 0.53 for milia-like cysts. Overall, the Dermo-Seg model is efficient in detecting different skin lesion structures and achieved 96.4% on the IOU index. Our Dermo-Seg system improves the IOU index compared to the most recent networks.
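The hybrid loss described above (focal Tversky combined with an IoU term) can be sketched as follows; the alpha/beta/gamma values and the equal weighting are assumptions, not the paper's settings:

```python
# Sketch of a hybrid segmentation loss: focal Tversky loss plus a soft IoU loss.
# Hyperparameters (alpha, beta, gamma, 50/50 mix) are assumed for illustration.
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    tp = (pred * target).sum()
    fp = (pred * (1 - target)).sum()
    fn = ((1 - pred) * target).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma

def soft_iou_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    union = pred.sum() + target.sum() - inter
    return 1 - (inter + eps) / (union + eps)

def hybrid_loss(pred, target):
    return 0.5 * focal_tversky_loss(pred, target) + 0.5 * soft_iou_loss(pred, target)

pred = torch.sigmoid(torch.randn(2, 1, 128, 128))     # dummy predicted masks
target = (torch.rand(2, 1, 128, 128) > 0.5).float()   # dummy ground-truth masks
print(hybrid_loss(pred, target).item())
```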

10.
Healthcare (Basel) ; 11(6)2023 Mar 13.
Article in English | MEDLINE | ID: mdl-36981494

ABSTRACT

In recent years, a lot of attention has been paid to using radiology imaging to automatically find COVID-19. (1) Background: There are now a number of computer-aided diagnostic schemes that help radiologists and doctors perform diagnostic COVID-19 tests quickly, accurately, and consistently. (2) Methods: Using chest X-ray images, this study proposes a cutting-edge scheme for the automatic recognition of COVID-19 and pneumonia. First, a pre-processing method based on a Gaussian filter and a logarithmic operator is applied to input chest X-ray (CXR) images to improve poor-quality images by enhancing the contrast, reducing the noise, and smoothing the image. Second, robust features are extracted from each enhanced chest X-ray image using a convolutional neural network (CNN) transformer and an optimal collection of grey-level co-occurrence matrices (GLCM) that contain features such as contrast, correlation, entropy, and energy. Finally, based on the extracted features, a random forest machine learning classifier is used to classify images into three classes: COVID-19, pneumonia, or normal. The predicted output from the model is combined with Gradient-weighted Class Activation Mapping (Grad-CAM) visualisation for diagnosis. (3) Results: Our work is evaluated using public datasets with three different train-test splits (70-30%, 80-20%, and 90-10%) and achieved an average accuracy, F1 score, recall, and precision of 97%, 96%, 96%, and 96%, respectively. A comparative study shows that our proposed method outperforms existing and similar work. The proposed approach can be utilised to screen COVID-19-infected patients effectively. (4) Conclusions: For performance evaluation, metrics such as accuracy, sensitivity, and F1-measure are calculated. The performance of the proposed method is better than that of the existing methodologies, and it can thus be used for the effective diagnosis of the disease.
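A minimal sketch of the hand-crafted branch of this pipeline: Gaussian smoothing, a logarithmic operator, and GLCM texture features (contrast, correlation, energy, homogeneity). Parameter values are illustrative assumptions:

```python
# Hedged sketch of Gaussian + log preprocessing followed by GLCM texture features;
# sigma, GLCM distances/angles, and the property set are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from scipy.ndimage import gaussian_filter

def glcm_features(image_u8: np.ndarray) -> np.ndarray:
    smoothed = gaussian_filter(image_u8.astype(float), sigma=1.0)   # Gaussian filter
    log_img = np.log1p(smoothed)                                    # logarithmic operator
    scaled = (255 * log_img / log_img.max()).astype(np.uint8)
    glcm = graycomatrix(scaled, distances=[1], angles=[0], levels=256, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "correlation", "energy", "homogeneity")])

img = (np.random.rand(128, 128) * 255).astype(np.uint8)             # stand-in for a CXR
print(glcm_features(img))                                           # feature vector for the RF
```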

11.
Diagnostics (Basel) ; 13(20)2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37891986

ABSTRACT

It is difficult for clinicians or less-experienced ophthalmologists to detect early eye-related diseases. Manual eye disease diagnosis is labor-intensive, prone to mistakes, and challenging because of the variety of ocular diseases such as glaucoma (GA), diabetic retinopathy (DR), and cataract (CT), in addition to normal (NL) cases. An automated ocular disease detection system with computer-aided diagnosis (CAD) tools is required to recognize eye-related diseases. Nowadays, deep learning (DL) algorithms enhance the classification results of retinograph images. To address these issues, we developed an intelligent detection system based on retinal fundus images. To create this system, we used the ODIR and RFMiD datasets, which include various retinographs of distinct fundus classes, together with cutting-edge image classification algorithms such as ensemble-based transfer learning. In this paper, we suggest a three-step hybrid ensemble model that combines a classifier, a feature extractor, and a feature selector. The original image features are first extracted using a pre-trained AlexNet model with an enhanced structure. The improved AlexNet (iAlexNet) architecture with attention and dense layers offers enhanced feature extraction, task adaptability, interpretability, and potential accuracy benefits compared to other transfer learning architectures, making it particularly suited for tasks like retinograph classification. The extracted features are then selected using the ReliefF method, and the most crucial elements are chosen to minimize the feature dimension. Finally, an XgBoost classifier provides classification outcomes based on the selected features. These classifications represent different ocular illnesses. We utilized data augmentation techniques to control class imbalance issues. The deep-ocular model, based mainly on the AlexNet-ReliefF-XgBoost pipeline, achieves an accuracy of 95.13%. The results indicate that the proposed ensemble model can assist ophthalmologists in making early decisions for the diagnosis and screening of eye-related diseases.
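A compact sketch of the feature-selection-plus-boosting stage: a simplified ReliefF-style scorer followed by a gradient-boosting classifier (scikit-learn's GradientBoostingClassifier standing in for XgBoost so the sketch has no extra dependency); feature counts and data are dummies:

```python
# Simplified ReliefF-style feature scoring (single nearest hit/miss per sample)
# followed by a boosting classifier; not the paper's exact selection or model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def relief_scores(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    X = (X - X.min(0)) / (np.ptp(X, 0) + 1e-12)        # scale features to [0, 1]
    w = np.zeros(X.shape[1])
    for i in range(len(X)):
        d = np.abs(X - X[i]).sum(1)
        d[i] = np.inf
        hit = np.argmin(np.where(y == y[i], d, np.inf))    # nearest same-class sample
        miss = np.argmin(np.where(y != y[i], d, np.inf))   # nearest other-class sample
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / len(X)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(60, 20)), rng.integers(0, 2, 60)   # dummy deep features / labels
top = np.argsort(relief_scores(X, y))[::-1][:8]            # keep 8 highest-scoring features
clf = GradientBoostingClassifier().fit(X[:, top], y)
print(clf.score(X[:, top], y))
```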

12.
Diagnostics (Basel) ; 13(8)2023 Apr 17.
Article in English | MEDLINE | ID: mdl-37189539

ABSTRACT

Hypertensive retinopathy (HR) is a serious eye disease that causes changes in the retinal arteries, mainly due to high blood pressure. Cotton wool patches, retinal bleeding, and retinal artery constriction are characteristic lesions of HR. An ophthalmologist often diagnoses eye-related diseases by analyzing fundus images to identify the stages and symptoms of HR. Early detection of HR can significantly decrease the likelihood of vision loss. In the past, a few computer-aided diagnosis (CADx) systems were developed to automatically detect HR eye-related disease using machine learning (ML) and deep learning (DL) techniques. Compared to ML methods, the DL techniques used in these CADx systems require hyperparameter tuning, domain-expert knowledge, a huge training dataset, and a high learning rate. Those CADx systems have proven good at automating the extraction of complex features, but they cause problems with class imbalance and overfitting. State-of-the-art efforts depend on performance enhancement while ignoring the issues of the small HR dataset, the high level of computational complexity, and the lack of lightweight feature descriptors. In this study, a pretrained transfer learning (TL)-based MobileNet architecture is developed by integrating dense blocks to optimize the network for the diagnosis of HR eye-related disease. We developed a lightweight HR-related eye disease diagnosis system, known as Mobile-HR, by integrating a pretrained model and dense blocks. To increase the size of the training and test datasets, we applied a data augmentation technique. The outcomes of the experiments show that the suggested approach outperformed existing approaches in many cases. The Mobile-HR system achieved an accuracy of 99% and an F1 score of 0.99 on different datasets. The results were verified by an expert ophthalmologist. These results indicate that the Mobile-HR CADx model produces positive outcomes and outperforms state-of-the-art HR systems in terms of accuracy.

13.
Diagnostics (Basel) ; 13(10)2023 May 11.
Article in English | MEDLINE | ID: mdl-37238188

ABSTRACT

Ovarian cancer ranks as the fifth leading cause of cancer-related mortality in women. Late-stage diagnosis (stages III and IV) is a major challenge due to the often vague and inconsistent initial symptoms. Current diagnostic methods, such as biomarkers, biopsy, and imaging tests, face limitations, including subjectivity, inter-observer variability, and extended testing times. This study proposes a novel convolutional neural network (CNN) algorithm for predicting and diagnosing ovarian cancer, addressing these limitations. In this paper, the CNN was trained on a histopathological image dataset that was divided into training and validation subsets and augmented before training. The model achieved a remarkable accuracy of 94%, with 95.12% of cancerous cases correctly identified and 93.02% of healthy cells accurately classified. The significance of this study lies in overcoming the challenges associated with human expert examination, such as higher misclassification rates, inter-observer variability, and extended analysis times. This study presents a more accurate, efficient, and reliable approach to predicting and diagnosing ovarian cancer. Future research should explore recent advances in this field to further enhance the effectiveness of the proposed method.
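One confusion matrix consistent with the reported class-wise rates is worked out below; the actual counts are not given in the abstract, so these values are back-calculated purely for illustration:

```python
# Hypothetical confusion matrix whose class-wise rates match the reported 95.12% / 93.02%;
# the counts themselves are assumptions, not the study's data.
import numpy as np

cm = np.array([[195, 10],      # rows: true [cancerous, healthy]
               [ 15, 200]])    # cols: predicted [cancerous, healthy]
sensitivity = cm[0, 0] / cm[0].sum()   # cancerous cases correctly identified
specificity = cm[1, 1] / cm[1].sum()   # healthy cells correctly classified
accuracy = np.trace(cm) / cm.sum()
print(f"sensitivity {sensitivity:.2%}, specificity {specificity:.2%}, accuracy {accuracy:.2%}")
# -> sensitivity 95.12%, specificity 93.02%, accuracy 94.05%
```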

14.
Diagnostics (Basel) ; 12(12)2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36553116

ABSTRACT

Cardiovascular disorders (CVDs) are the major cause of death worldwide. For proper diagnosis of CVD, an inexpensive solution based on phonocardiogram (PCG) signals is proposed. (1) Background: Currently, a few deep learning (DL)-based CVD systems have been developed to recognize different stages of CVD. However, the accuracy of these systems is not up to the mark, and the methods require high computational power and huge training datasets. (2) Methods: To address these issues, we developed a novel attention-based technique (CVT-Trans) on a convolutional vision transformer to recognize and categorize PCG signals into five classes. The continuous wavelet transform-based spectrogram (CWTS) strategy was used to extract representative features from PCG data. Following that, a new CVT-Trans architecture was created to categorize the CWTS signals into five groups. (3) Results: Our investigation indicated that the CVT-Trans system had an overall average accuracy (ACC) of 100%, sensitivity (SE) of 99.00%, specificity (SP) of 99.5%, and F1-score of 98%, based on 10-fold cross-validation. (4) Conclusions: The CVT-Trans technique outperformed many state-of-the-art methods. The robustness of the constructed model was confirmed by 10-fold cross-validation. Cardiologists can use this CVT-Trans system to help patients with the diagnosis of heart valve problems.
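A sketch of the CWT-spectrogram (scalogram) step applied to a toy PCG-like signal, using PyWavelets; the wavelet choice, scale range, and sampling rate are assumptions:

```python
# Illustrative continuous-wavelet-transform scalogram of a synthetic PCG-like signal;
# the Morlet wavelet, scale range, and 2 kHz sampling rate are assumed values.
import numpy as np
import pywt

fs = 2000                                            # assumed PCG sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
pcg = np.sin(2 * np.pi * 40 * t) + 0.3 * np.random.randn(t.size)   # toy heart-sound-like signal

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(pcg, scales, "morl", sampling_period=1 / fs)
scalogram = np.abs(coeffs)                           # 2-D time-frequency image fed to the classifier
print(scalogram.shape)                               # (127, 2000)
```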

15.
Curr Med Imaging ; 17(6): 686-694, 2021.
Article in English | MEDLINE | ID: mdl-33334293

ABSTRACT

Abnormal behaviors of tumors pose a risk to human survival. Thus, the detection of cancers at their initial stage is beneficial for patients and lowers the mortality rate. However, this can be difficult due to various factors related to imaging modalities, such as complex background, low contrast, brightness issues, poorly defined borders and the shape of the affected area. Recently, computer-aided diagnosis (CAD) models have been used to accurately diagnose tumors in different parts of the human body, especially breast, brain, lung, liver, skin and colon cancers. These cancers are diagnosed using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), colonoscopy, mammography, dermoscopy and histopathology. The aim of this review was to investigate existing approaches for the diagnosis of breast, brain, lung, liver, skin and colon tumors. The review focuses on decision-making systems, including handcrafted features and deep learning architectures for tumor detection.


Subject(s)
Machine Learning, Neoplasms, Diagnosis, Computer-Assisted, Humans, Mammography, Neoplasms/diagnosis, Radiopharmaceuticals
16.
Microsc Res Tech ; 84(9): 2186-2194, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33908111

ABSTRACT

Females make up approximately half of the total population worldwide, and many of them are affected by breast cancer (BC). Computer-aided diagnosis (CAD) frameworks can help radiologists find breast density (BD), which further helps in detecting BC precisely. This research detects BD automatically using mammogram images on Internet of Medical Things (IoMT)-supported devices. Two pretrained deep convolutional neural network models, DenseNet201 and ResNet50, were applied through a transfer learning approach. A total of 322 mammogram images containing 106 fatty, 112 dense, and 104 glandular cases were obtained from the Mammographic Image Analysis Society (MIAS) dataset. Pruning irrelevant regions and enhancing target regions are performed during preprocessing. The overall classification accuracy for the BD task reached 90.47% with the DenseNet201 model. Such a framework is beneficial in identifying BD more rapidly to assist radiologists and patients without delay.


Subject(s)
Breast Neoplasms, Deep Learning, Breast Neoplasms/diagnostic imaging, Female, Humans, Internet, Mammography, Neural Networks, Computer
17.
World J Crit Care Med ; 2(2): 9-16, 2013 May 04.
Article in English | MEDLINE | ID: mdl-24701411

ABSTRACT

AIM: To determine the influence of intra-abdominal pressure (IAP) on respiratory function after surgical repair of ventral hernia and to compare two different methods of IAP measurement during the perioperative period. METHODS: Thirty adult patients after elective repair of ventral hernia were enrolled into this prospective study. IAP monitoring was performed via both a balloon-tipped nasogastric probe [intragastric pressure (IGP), CiMON, Pulsion Medical Systems, Munich, Germany] and a urinary catheter [intrabladder pressure (IBP), UnoMeter Abdo-Pressure Kit, UnoMedical, Denmark] at five consecutive stages: (1) after tracheal intubation (AI); (2) after ventral hernia repair; (3) at the end of surgery; (4) during spontaneous breathing trial through the endotracheal tube; and (5) at 1 h after tracheal extubation. The patients were in the complete supine position during all study stages. RESULTS: The IAP (measured via both techniques) increased on average by 12% during surgery compared to AI (P < 0.02) and by 43% during spontaneous breathing through the endotracheal tube (P < 0.01). In parallel, the gradient between PaCO2 and EtCO2 [P(a-et)CO2] rose significantly, reaching a maximum during the spontaneous breathing trial. The PaO2/FiO2 decreased by 30% one hour after tracheal extubation (P = 0.02). The dynamic compliance of the respiratory system decreased intraoperatively by 15%-20% (P < 0.025). At all stages, we observed a significant correlation between IGP and IBP (r = 0.65-0.81, P < 0.01) with a mean bias varying from -0.19 mmHg (2SD 7.25 mmHg) to -1.06 mmHg (2SD 8.04 mmHg) depending on the study stage. Taking all paired measurements together (n = 133), the median IGP was 8.0 (5.5-11.0) mmHg and the median IBP was 8.8 (5.8-13.1) mmHg. The overall r² value (n = 30) was 0.76 (P < 0.0001). Bland and Altman analysis showed an overall bias for the mean values per patient of 0.6 mmHg (2SD 4.2 mmHg) with a percentage error of 45.6%. Looking at changes in IAP between the different study stages, we found an excellent concordance coefficient of 94.9% comparing ΔIBP and ΔIGP (n = 117). CONCLUSION: During ventral hernia repair, the rise in IAP is accompanied by changes in P(a-et)CO2 and the PaO2/FiO2 ratio. Estimation of IAP via IGP or IBP demonstrated excellent concordance.

18.
Comput Methods Programs Biomed ; 108(3): 1062-9, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22940136

ABSTRACT

Recent advances in the field of image processing have shown that the level of noise highly affects the quality and accuracy of classification when working with mammographic images. In this paper, we propose a method that consists of two major modules: noise detection and noise filtering. For detection, a neural network is used, which effectively detects noise in highly corrupted images. Pixel values of the window and some other measures are used as features for training the neural network. For noise removal, three filters are used, and the weighted average of these three filters is assigned to the noisy pixels. The proposed technique has been tested on salt-and-pepper and quantum noise present in mammogram images. The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) are used to compare the proposed technique with different existing techniques. Experiments show that the proposed technique produces better results compared to existing methods.
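A minimal sketch of the PSNR/SSIM evaluation described above, with a Gaussian blur standing in for the proposed filtering; the data and filter choice are illustrative only:

```python
# Illustrative PSNR/SSIM comparison of a noisy image vs. a denoised one;
# the Gaussian blur is a stand-in for the paper's weighted-filter approach.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
clean = rng.random((256, 256))                                   # stand-in for a clean mammogram
noisy = np.clip(clean + rng.normal(0, 0.1, clean.shape), 0, 1)   # additive noise
denoised = gaussian_filter(noisy, sigma=1.0)                     # placeholder denoising step

for name, img in [("noisy", noisy), ("denoised", denoised)]:
    psnr = peak_signal_noise_ratio(clean, img, data_range=1.0)
    ssim = structural_similarity(clean, img, data_range=1.0)
    print(f"{name}: PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```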


Subject(s)
Breast Neoplasms/diagnostic imaging, Mammography, Quantum Theory, Female, Humans, Neural Networks, Computer, Poisson Distribution