Results 1 - 20 of 60
1.
Diagnostics (Basel) ; 14(17)2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39272662

ABSTRACT

This multicenter retrospective study evaluated the diagnostic performance of a deep learning (DL)-based application for detecting, classifying, and highlighting suspected aortic dissections (ADs) on chest and thoraco-abdominal CT angiography (CTA) scans. CTA scans from over 200 U.S. and European cities, acquired on 52 scanner models from six manufacturers, were retrospectively collected and processed by the CINA-CHEST (AD) device (Avicenna.AI, La Ciotat, France). The diagnostic performance of the device was compared with the ground truth established by the majority agreement of three U.S. board-certified radiologists. Furthermore, the DL algorithm's time to notification was evaluated to demonstrate clinical effectiveness. The study included 1303 CTAs (mean age 58.8 ± 16.4 years, 46.7% male, 10.5% positive). The device demonstrated a sensitivity of 94.2% [95% CI: 88.8-97.5%] and a specificity of 97.3% [95% CI: 96.2-98.1%]. The application classified positive cases by AD type with an accuracy of 99.5% [95% CI: 98.9-99.8%] for type A and 97.5% [95% CI: 96.4-98.3%] for type B, and it did not miss any type A cases. The device flagged 32 cases incorrectly, primarily due to acquisition artefacts and aortic pathologies mimicking AD. The mean time to process a scan and notify users of a potential AD case was 27.9 ± 8.7 s. This deep learning-based application demonstrated strong performance in detecting and classifying aortic dissection cases, potentially enabling faster triage of these urgent cases in clinical settings.
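
As a rough illustration of how such sensitivity/specificity confidence intervals are computed, the sketch below recomputes them with Wilson intervals. The confusion-matrix counts are approximations back-derived from the reported percentages, not the study's raw data.

```python
# Hedged sketch: sensitivity/specificity with Wilson 95% CIs.
# Counts are back-derived from the abstract's percentages (illustrative only):
# ~137 positives of 1303 scans, sensitivity ~94.2%, 32 false positives.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 129, 8       # approximate positive-case split
tn, fp = 1134, 32     # approximate negative-case split

sens = tp / (tp + fn)
spec = tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")
print(f"sensitivity {sens:.3f} CI {sens_ci}, specificity {spec:.3f} CI {spec_ci}")
```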

2.
Digit Health ; 10: 20552076241269536, 2024.
Article in English | MEDLINE | ID: mdl-39108255

ABSTRACT

Objective: Poor conditions in the intraoral environment often lead to low-quality photos and videos, hindering further clinical diagnosis. To restore these digital records, this study proposes a real-time interactive restoration system built on the Segment Anything Model (SAM). Methods: Intraoral digital videos, obtained from the vident-lab dataset through an intraoral camera, serve as the input to the interactive restoration system. The initial phase employs an interactive segmentation module leveraging SAM. Subsequently, a real-time intraframe restoration module and a video enhancement module are applied. A series of ablation studies was systematically conducted to justify the design of the interactive restoration system. Our quantitative evaluation criteria comprise restoration quality, segmentation accuracy, and processing speed. Furthermore, the clinical applicability of the processed videos was evaluated by experts. Results: Extensive experiments demonstrated its segmentation performance, with a mean intersection-over-union of 0.977. On video restoration, it delivers reliable performance, with a peak signal-to-noise ratio of 37.09 and a structural similarity index measure of 0.961. More visualization results are shown on the https://yogurtsam.github.io/iveproject page. Conclusion: The interactive restoration system demonstrates its potential to serve patients and dentists with reliable and controllable intraoral video restoration.
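
For readers unfamiliar with SAM's promptable interface, the sketch below shows the click-to-mask step such a segmentation module builds on; the checkpoint path, image file, and click coordinates are placeholders, and the paper's restoration and enhancement modules are not reproduced here.

```python
# Hedged sketch of interactive segmentation with Meta's Segment Anything Model.
import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

frame = cv2.cvtColor(cv2.imread("intraoral_frame.png"), cv2.COLOR_BGR2RGB)  # placeholder file
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")               # placeholder checkpoint
predictor = SamPredictor(sam)
predictor.set_image(frame)

# One positive click from the clinician selects the region of interest.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),   # placeholder click position
    point_labels=np.array([1]),            # 1 = foreground click
    multimask_output=False,
)
mask = masks[0]                            # boolean HxW mask for the chosen region
```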

3.
J Biophotonics ; 17(9): e202400105, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38955359

ABSTRACT

Nail fold capillaroscopy is an important means of monitoring human health. Panoramic nail fold images improve the efficiency and accuracy of examinations. However, the acquisition of panoramic nail fold images is seldom studied, and such images yield few matching feature points when conventional image stitching is applied. Therefore, this paper presents a method for panoramic nail fold image stitching based on vascular contour enhancement, which first addresses the shortage of matching feature points by pre-processing the images with contrast-limited adaptive histogram equalization (CLAHE), bilateral filtering (BF), and sharpening algorithms. The panoramic images of the nail fold blood vessels are then successfully stitched using the speeded-up robust features (SURF), fast library for approximate nearest neighbors (FLANN), and random sample consensus (RANSAC) algorithms. The experimental results show that the panoramic image stitched by this paper's algorithm has a field-of-view width of 7.43 mm, which improves the efficiency and accuracy of diagnosis.
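
A minimal sketch of the described chain, assuming OpenCV with the contrib modules (SURF is patented and lives in cv2.xfeatures2d); the file names and thresholds are placeholders, not the paper's tuned values.

```python
# Hedged sketch: CLAHE + bilateral filter + sharpening, then SURF/FLANN/RANSAC stitching.
import cv2
import numpy as np

def enhance(gray):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    g = clahe.apply(gray)
    g = cv2.bilateralFilter(g, 9, 75, 75)
    blur = cv2.GaussianBlur(g, (0, 0), 3)
    return cv2.addWeighted(g, 1.5, blur, -0.5, 0)   # unsharp-mask sharpening

imgL = cv2.imread("nailfold_left.png", cv2.IMREAD_GRAYSCALE)    # placeholder files
imgR = cv2.imread("nailfold_right.png", cv2.IMREAD_GRAYSCALE)
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kpL, desL = surf.detectAndCompute(enhance(imgL), None)
kpR, desR = surf.detectAndCompute(enhance(imgR), None)

flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
good = [m for m, n in flann.knnMatch(desL, desR, k=2) if m.distance < 0.7 * n.distance]

src = np.float32([kpR[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kpL[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)            # RANSAC inlier fit
pano = cv2.warpPerspective(imgR, H, (imgL.shape[1] + imgR.shape[1], imgL.shape[0]))
pano[:, :imgL.shape[1]] = imgL                                  # paste left image onto canvas
```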


Subject(s)
Algorithms; Capillaries; Image Processing, Computer-Assisted; Nails; Humans; Image Processing, Computer-Assisted/methods; Capillaries/diagnostic imaging; Nails/diagnostic imaging; Nails/blood supply
4.
Sensors (Basel) ; 24(13)2024 Jun 27.
Article in English | MEDLINE | ID: mdl-39000974

ABSTRACT

Partially automated robotic systems, such as camera holders, represent a pivotal step towards enhancing efficiency and precision in surgical procedures. This paper therefore introduces an approach for real-time tool localization in laparoscopic surgery using convolutional neural networks. The proposed model, based on two Hourglass modules in series, can localize up to two surgical tools simultaneously. This study utilized three datasets: the ITAP dataset, alongside two publicly available datasets, namely Atlas Dione and EndoVis Challenge. Three variations of the Hourglass-based models were proposed, with the best model achieving high accuracy (92.86%) and frame rates (27.64 FPS) suitable for integration into robotic systems. An evaluation on an independent test set yielded slightly lower accuracy, indicating limited generalizability. The model was further analyzed using the Grad-CAM technique to gain insights into its functionality. Overall, this work presents a promising solution for automating aspects of laparoscopic surgery, potentially enhancing surgical efficiency by reducing the need for manual endoscope manipulation.
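
As a rough sketch of the architecture family named above (not the authors' exact configuration), a recursive Hourglass block in PyTorch pools, recurses, upsamples, and adds a skip connection; stacking two blocks with a two-channel head yields one heatmap per tool.

```python
# Hedged sketch: two stacked hourglass modules emitting per-tool heatmaps.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c):
    return nn.Sequential(nn.Conv2d(c, c, 3, padding=1), nn.BatchNorm2d(c), nn.ReLU(inplace=True))

class Hourglass(nn.Module):
    """Recursive hourglass: pool, recurse, upsample, then add the skip branch."""
    def __init__(self, depth, channels):
        super().__init__()
        self.skip = conv_block(channels)
        self.down = conv_block(channels)
        self.inner = Hourglass(depth - 1, channels) if depth > 1 else conv_block(channels)
        self.up = conv_block(channels)

    def forward(self, x):
        skip = self.skip(x)
        y = self.down(F.max_pool2d(x, 2))
        y = self.up(self.inner(y))
        return F.interpolate(y, scale_factor=2, mode="nearest") + skip

model = nn.Sequential(
    nn.Conv2d(3, 64, 7, padding=3),
    Hourglass(4, 64),
    Hourglass(4, 64),
    nn.Conv2d(64, 2, 1),                            # one heatmap per surgical tool
)
heatmaps = model(torch.randn(1, 3, 256, 256))       # -> (1, 2, 256, 256)
```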


Subject(s)
Laparoscopy; Neural Networks, Computer; Laparoscopy/methods; Humans; Robotic Surgical Procedures/methods; Algorithms
5.
J Biomed Opt ; 29(3): 036001, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38434772

ABSTRACT

Significance: In recent years, we and others have developed non-destructive methods to obtain three-dimensional (3D) pathology datasets of clinical biopsies and surgical specimens. For prostate cancer risk stratification (prognostication), standard-of-care Gleason grading is based on examining the morphology of prostate glands in thin 2D sections. This motivates us to perform 3D segmentation of prostate glands in our 3D pathology datasets for the purposes of computational analysis of 3D glandular features that could offer improved prognostic performance. Aim: To facilitate prostate cancer risk assessment, we developed a computationally efficient and accurate deep learning (DL) model for 3D gland segmentation based on open-top light-sheet microscopy datasets of human prostate biopsies stained with a fluorescent analog of hematoxylin and eosin (H&E). Approach: For 3D gland segmentation based on our H&E-analog 3D pathology datasets, we previously developed a hybrid deep learning and computer vision-based pipeline, called image translation-assisted segmentation in 3D (ITAS3D), which required a complex two-stage procedure and tedious manual optimization of parameters. To simplify this procedure, we use the 3D gland-segmentation masks previously generated by ITAS3D as training datasets for a direct end-to-end deep learning-based segmentation model, nnU-Net. The inputs to this model are 3D pathology datasets of prostate biopsies rapidly stained with an inexpensive fluorescent analog of H&E, and the outputs are 3D semantic segmentation masks of the gland epithelium, gland lumen, and surrounding stromal compartments within the tissue. Results: nnU-Net demonstrates remarkable accuracy in 3D gland segmentation even with limited training data. Moreover, compared with the previous ITAS3D pipeline, nnU-Net operation is simpler and faster, and it can maintain good accuracy even with lower-resolution inputs. Conclusions: Our trained DL-based 3D segmentation model will facilitate future studies to demonstrate the value of computational 3D pathology for guiding critical treatment decisions for patients with prostate cancer.
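
To make the segmentation evaluation concrete, here is a minimal per-class Dice computation over 3D label volumes of the kind described (epithelium, lumen, stroma); the label encoding is an assumption, and random volumes stand in for real data.

```python
# Hedged sketch: per-class Dice for 3D semantic masks.
# Assumed labels: 0 = background, 1 = epithelium, 2 = lumen, 3 = stroma.
import numpy as np

def dice_per_class(pred, gt, num_classes=4):
    scores = {}
    for c in range(1, num_classes):
        p, g = pred == c, gt == c
        denom = p.sum() + g.sum()
        scores[c] = 2.0 * np.logical_and(p, g).sum() / denom if denom else np.nan
    return scores

pred = np.random.randint(0, 4, (64, 128, 128))   # stand-in volumes, not real data
gt = np.random.randint(0, 4, (64, 128, 128))
print(dice_per_class(pred, gt))
```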


Subject(s)
Prostate; Prostatic Neoplasms; Male; Humans; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging; Biopsy; Coloring Agents; Eosine Yellowish-(YS)
6.
Diagnosis (Berl) ; 11(3): 283-294, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38487874

ABSTRACT

OBJECTIVES: Early skin cancer diagnosis can save lives; however, traditional methods rely on expert knowledge and can be time-consuming. This calls for automated systems using machine learning and deep learning. However, existing datasets often focus on flat skin surfaces, neglecting more complex cases on organs or with nearby lesions. METHODS: This work addresses this gap by proposing a skin cancer diagnosis methodology using a dataset named ASAN that covers diverse skin cancer cases but suffers from noisy features. To overcome the noisy-feature problem, a segmentation dataset named SASAN is introduced, focusing on Region of Interest (ROI) extraction-based classification. This allows models to concentrate on critical areas within the images while avoiding learning from the noisy features. RESULTS: Various deep learning segmentation models such as UNet, LinkNet, PSPNet, and FPN were trained on the SASAN dataset to perform segmentation-based ROI extraction. Classification was then performed using the dataset with and without ROI extraction. The results demonstrate that ROI extraction significantly improves the classification performance of these models, implying that SASAN is effective for evaluating performance metrics on complex skin cancer cases. CONCLUSIONS: This study highlights the importance of expanding datasets to include challenging scenarios and developing better segmentation methods to enhance automated skin cancer diagnosis. The SASAN dataset serves as a valuable tool for researchers aiming to improve such systems and ultimately contribute to better diagnostic outcomes.
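
A minimal sketch of ROI-extraction-based classification as described: the predicted mask suppresses background pixels and bounds the crop fed to the classifier. `seg_model` and `clf_model` are hypothetical placeholders for the trained networks.

```python
# Hedged sketch: mask-then-crop ROI extraction before classification.
import numpy as np

def extract_roi(image, mask, pad=8):
    """image: HxWx3 array; mask: HxW binary array from a segmentation model."""
    ys, xs = np.where(mask > 0)
    if ys.size == 0:
        return image                                  # fall back to the full image
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad, image.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad, image.shape[1])
    roi = image * mask[..., None]                     # suppress noisy background
    return roi[y0:y1, x0:x1]

# Hypothetical usage: mask = seg_model(image) > 0.5; label = clf_model(extract_roi(image, mask))
```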


Subject(s)
Deep Learning; Skin Neoplasms; Humans; Skin Neoplasms/pathology; Skin Neoplasms/classification; Skin Neoplasms/diagnostic imaging; Biopsy; Skin/pathology; Skin/diagnostic imaging; Image Processing, Computer-Assisted/methods; Machine Learning; Algorithms
7.
Auris Nasus Larynx ; 51(3): 460-464, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38520978

ABSTRACT

OBJECTIVE: While subjective methods like the Yanagihara system and the House-Brackmann system are standard in evaluating facial paralysis, they are limited by intra- and inter-observer variability. Meanwhile, quantitative objective methods such as electroneurography and electromyography are time-consuming. Our aim was to introduce a swift, objective, and quantitative method for evaluating facial movements. METHODS: We developed an application software (app) that utilizes the facial recognition functionality of the iPhone (Apple Inc., Cupertino, USA) for facial movement evaluation. This app leverages the phone's front camera, infrared radiation, and infrared camera to provide detailed three-dimensional facial topology. It quantitatively compares left and right facial movements by region and displays the movement ratio of the affected side to the opposite side. Evaluations using the app were conducted on both normal subjects and subjects with facial palsy and were compared with conventional methods. RESULTS: Our app provided an intuitive user experience, completing evaluations in under a minute, and thus proving practical for regular use. Its evaluation scores correlated highly with the Yanagihara system, the House-Brackmann system, and electromyography. Furthermore, the app outperformed conventional methods in assessing detailed facial movements. CONCLUSION: Our novel iPhone app offers a valuable tool for the comprehensive and efficient evaluation of facial palsy.
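
The side-to-side ratio the app reports can be illustrated with a short sketch; the landmark count, left/right split, and data below are illustrative stand-ins, not the app's actual facial-topology format.

```python
# Hedged sketch: per-region movement ratio from 3D facial landmarks at rest vs. peak movement.
import numpy as np

def movement_ratio(rest, peak, left_idx, right_idx, affected="left"):
    disp = np.linalg.norm(peak - rest, axis=1)        # per-landmark displacement
    left, right = disp[left_idx].mean(), disp[right_idx].mean()
    return 100.0 * (left / right if affected == "left" else right / left)

rng = np.random.default_rng(0)
rest = rng.random((468, 3))                           # illustrative landmark count
peak = rest + rng.random((468, 3)) * 0.01
left_idx, right_idx = np.arange(0, 234), np.arange(234, 468)   # illustrative split
print(f"{movement_ratio(rest, peak, left_idx, right_idx):.1f}% of the healthy side")
```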


Subject(s)
Automated Facial Recognition; Facial Nerve Diseases; Mobile Applications; Paralysis; Mobile Applications/standards; Facial Nerve Diseases/diagnosis; Paralysis/diagnosis; Automated Facial Recognition/instrumentation; Time Factors; Reproducibility of Results; Humans
8.
Heliyon ; 10(2): e24403, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38304780

ABSTRACT

The HT-29 cell line, derived from human colon cancer, is valuable for biological and cancer research applications. Early detection is crucial for improving the chances of survival, and researchers are introducing new techniques for accurate cancer diagnosis. This study introduces an efficient deep learning-based method for detecting and counting colorectal cancer cells (HT-29). The colorectal cancer cell line was procured from a commercial supplier. The cancer cells were then cultured, and a transwell experiment was conducted in the lab to collect a dataset of colorectal cancer cell images via fluorescence microscopy. Of the 566 images, 80% were allocated to the training set and the remaining 20% to the testing set. HT-29 cell detection and counting in medical images is performed by integrating YOLOv2 with the ResNet-50 and ResNet-18 architectures. The accuracy achieved with ResNet-18 is 98.70% and with ResNet-50 is 96.66%. The study achieves its primary objective of detecting and quantifying congested and overlapping colorectal cancer cells within the images. This work constitutes a significant step in overlapping cancer cell detection and counting, opening new avenues for research and clinical applications. Researchers can extend the study by exploring variations in ResNet and YOLO architectures to optimize object detection performance. Further investigation into real-time deployment strategies will enhance the practical applicability of these models.
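
The count such a detector reports is simply the number of boxes that survive confidence filtering and non-maximum suppression; a minimal sketch with placeholder boxes, using OpenCV's NMS helper, follows.

```python
# Hedged sketch: counting detected cells after NMS; boxes and thresholds are placeholders.
import cv2

boxes = [[30, 40, 22, 22], [35, 44, 20, 20], [120, 80, 24, 24]]  # (x, y, w, h) per detection
scores = [0.91, 0.88, 0.95]
keep = cv2.dnn.NMSBoxes(boxes, scores, 0.5, 0.4)  # score threshold 0.5, IoU threshold 0.4
print(f"cell count: {len(keep)}")                 # heavily overlapping boxes are merged
```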

9.
J Imaging ; 10(1)2024 Jan 20.
Article in English | MEDLINE | ID: mdl-38276320

ABSTRACT

Endoscopies are helpful for examining internal organs, including the gastrointestinal tract. The endoscope consists of a flexible tube to which a camera and light source are attached. The diagnostic process heavily depends on the quality of the endoscopic images, so their visual quality has a significant effect on patient care, medical decision-making, and the efficiency of endoscopic treatments. In this study, we propose an endoscopic image enhancement technique based on image fusion. Our method aims to improve the visual quality of endoscopic images by first generating, from the single input image, multiple sub-images that are complementary to one another in terms of local and global contrast. Each sub-image is then subjected to a novel wavelet transform and guided-filter-based decomposition technique. To generate the final improved image, appropriate fusion rules are applied at the end. A set of upper gastrointestinal tract endoscopic images was used to confirm the efficacy of our strategy. Both qualitative and quantitative analyses show that the proposed framework performs better than some state-of-the-art algorithms.
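
A minimal sketch of one plausible wavelet-plus-guided-filter fusion of two complementary sub-images (requires PyWavelets and opencv-contrib-python); the mean/max-abs fusion rules here are common defaults, not the paper's exact rules.

```python
# Hedged sketch: guided-filter smoothing, one-level wavelet decomposition, coefficient fusion.
import cv2
import numpy as np
import pywt

def fuse(a, b):
    a = cv2.ximgproc.guidedFilter(a, a, 8, 1e-2)      # edge-aware smoothing of each sub-image
    b = cv2.ximgproc.guidedFilter(b, b, 8, 1e-2)
    (cA, dA), (cB, dB) = pywt.dwt2(a, "db2"), pywt.dwt2(b, "db2")
    cF = (cA + cB) / 2                                # mean rule for the approximation band
    dF = tuple(np.where(np.abs(x) > np.abs(y), x, y)  # max-abs rule for detail bands
               for x, y in zip(dA, dB))
    return pywt.idwt2((cF, dF), "db2")

dark = np.random.rand(256, 256).astype(np.float32)    # stand-ins for the contrast sub-images
bright = np.random.rand(256, 256).astype(np.float32)
fused = fuse(dark, bright)
```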

10.
NMR Biomed ; 37(2): e5054, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37794648

ABSTRACT

The aim of the current study was to compare the performance of fully automated software with human expert interpretation of single-voxel proton magnetic resonance spectroscopy (1H-MRS) spectra in the assessment of breast lesions. Breast magnetic resonance imaging (MRI) (including contrast-enhanced T1-weighted, T2-weighted, and diffusion-weighted imaging) and 1H-MRS images of 74 consecutive patients were acquired on a 3-T positron emission tomography-MRI scanner, then automatically imported into and analyzed by SpecTec-ULR 1.1 software (LifeTec Solutions GmbH). All ensuing 117 spectra were additionally and independently analyzed and interpreted by two blinded radiologists. Histopathology or at least 24 months of imaging follow-up served as the reference standard. Nonparametric Spearman's correlation coefficients for all measured parameters (signal-to-noise ratio [SNR] and integral of total choline [tCho]), Passing-Bablok regression, and receiver operating characteristic analysis were calculated to assess diagnostic performance and to compare automated with manual reading. Based on 117 spectra from 74 patients, the areas under the curve for the tCho SNR and integral ranged from 0.768 to 0.814 and from 0.721 to 0.784, respectively, for distinguishing benign from malignant tissue. Neither method displayed significant differences between measurements (automated vs. human expert readers, p > 0.05), in line with the results of the univariate Spearman's rank correlation coefficients and the Passing-Bablok regression analysis. This pilot study demonstrates that 1H-MRS data from breast MRI can be automatically exported and interpreted by SpecTec-ULR 1.1 software, with diagnostic performance not inferior to that of human expert readers.
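
The agreement and discrimination analyses map to a few library calls; below is a hedged sketch with synthetic stand-ins for the 117 spectra (Passing-Bablok regression has no standard SciPy implementation and is omitted).

```python
# Hedged sketch: Spearman agreement between automated and expert tCho readings,
# plus AUC against reference-standard labels; all arrays are synthetic.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
auto_tcho = rng.gamma(2.0, 1.0, 117)                  # automated integral readings
expert_tcho = auto_tcho + rng.normal(0, 0.2, 117)     # expert readings, correlated
malignant = (auto_tcho + rng.normal(0, 1.0, 117)) > 2.5

rho, p = spearmanr(auto_tcho, expert_tcho)
auc = roc_auc_score(malignant, auto_tcho)
print(f"Spearman rho={rho:.3f} (p={p:.3g}), AUC={auc:.3f}")
```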


Subject(s)
Breast Neoplasms; Choline; Humans; Female; Proton Magnetic Resonance Spectroscopy; Choline/analysis; Pilot Projects; Sensitivity and Specificity; Breast/diagnostic imaging; Magnetic Resonance Imaging/methods; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology
11.
Digit Health ; 9: 20552076231216549, 2023.
Article in English | MEDLINE | ID: mdl-38033522

ABSTRACT

Introduction: This study was undertaken to explore the potential of AI in enhancing diagnostic accuracy and efficiency in identifying hip fractures on X-ray radiographs. We trained three distinct deep learning models and aggregated their outcomes by majority voting, aiming to yield the most reliable and precise diagnoses of hip fractures from X-ray radiographs. Methods: The study analyzed 10,849 AP pelvis X-rays obtained from five hospitals affiliated with Baskent University. Two expert orthopedic surgeons initially labeled 2,291 radiographs as fractures and 8,558 as non-fractures. The algorithm was trained on 6,943 (64%) radiographs, validated on 1,736 (16%), and tested on 2,170 (20%), ensuring an even distribution of fracture presence, age, and gender. We employed three advanced deep learning architectures, Xception (Model A), EfficientNet (Model B), and NfNet (Model C), with a final decision aggregated through a majority voting technique (Model D). Results: The models achieved the following metrics:
Model A: F1 score 0.895, accuracy 0.956, specificity 0.973, sensitivity 0.893.
Model B: F1 score 0.900, accuracy 0.960, specificity 0.991, sensitivity 0.845.
Model C: F1 score 0.919, accuracy 0.966, specificity 0.984, sensitivity 0.899.
Model D: F1 score 0.929, accuracy 0.971, specificity 0.991, sensitivity 0.897.
Model D (majority voting) achieved the best results in terms of F1 score, accuracy, and specificity. Conclusions: Our study demonstrates that results obtained by aggregating the decisions of multiple models through voting, rather than relying solely on a single algorithm, are more consistent. Despite this theoretical success, the practical application of these algorithms will be difficult due to ethical, legal, and confidentiality issues. Developing successful algorithms and methodologies should not be viewed as the ultimate goal; it is important to understand how these algorithms will be used in real-life situations. Feedback from clinical practice will help achieve more consistent results.
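
Model D's aggregation rule is simply a two-of-three majority over the per-model binary predictions; a minimal sketch with placeholder predictions:

```python
# Hedged sketch: majority voting over three binary fracture predictions.
import numpy as np

preds_a = np.array([1, 0, 1, 1])    # Xception (placeholder outputs)
preds_b = np.array([1, 0, 0, 1])    # EfficientNet
preds_c = np.array([0, 0, 1, 1])    # NfNet
majority = (preds_a + preds_b + preds_c) >= 2
print(majority.astype(int))         # -> [1 0 1 1]
```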

12.
Digit Health ; 9: 20552076231211547, 2023.
Article in English | MEDLINE | ID: mdl-38025115

ABSTRACT

Objective: Endotracheal intubation (ETI) is critical to secure the airway in emergent situations. Although artificial intelligence algorithms are frequently used to analyze medical images, their application to evaluating intraoral structures based on images captured during emergent ETI remains limited. The aim of this study is to develop an artificial intelligence model for segmenting structures in the oral cavity using video laryngoscope (VL) images. Methods: From 54 VL videos, clinicians manually labeled images that include motion blur, foggy vision, blood, mucus, and vomitus. Anatomical structures of interest included the tongue, epiglottis, vocal cord, and corniculate cartilage. EfficientNet-B5 with DeepLabv3+, EfficientNet-B5 with U-Net, and a configured Mask R-CNN (region-based convolutional neural network) were used; EfficientNet-B5 was pretrained on ImageNet. The Dice similarity coefficient (DSC) was used to measure the segmentation performance of each model. Accuracy, recall, specificity, and F1 score were used to evaluate each model's ability to target the structures, based on the intersection over union between the ground truth and the prediction mask. Results: The DSCs for the tongue, epiglottis, vocal cord, and corniculate cartilage were 0.3351/0.7675/0.766/0.6539 for EfficientNet-B5 with DeepLabv3+, 0.0/0.7581/0.7395/0.6906 for EfficientNet-B5 with U-Net, and 0.1167/0.7677/0.7207/0.57 for the configured Mask R-CNN. The processing speeds of the three models were 3, 24, and 32 frames per second, respectively. Conclusions: The algorithm developed in this study can assist medical providers performing ETI in emergent situations.
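
As a sketch of one of the three segmenters, the pairing of an EfficientNet-B5 encoder with DeepLabv3+ can be built with segmentation_models_pytorch; the library choice and the class count (four structures plus background) are assumptions, since the paper does not name its implementation.

```python
# Hedged sketch: EfficientNet-B5 + DeepLabv3+ segmenter via segmentation_models_pytorch.
import torch
import segmentation_models_pytorch as smp

model = smp.DeepLabV3Plus(
    encoder_name="efficientnet-b5",
    encoder_weights="imagenet",      # ImageNet pretraining, as in the study
    classes=5,                       # assumed: 4 structures + background
)
logits = model(torch.randn(1, 3, 512, 512))   # -> (1, 5, 512, 512)
```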

13.
Diagnostics (Basel) ; 13(18)2023 Sep 13.
Article in English | MEDLINE | ID: mdl-37761294

ABSTRACT

Fractures affect nearly 9.45% of the South Korean population, with radiography being the primary diagnostic tool. This research employs a machine-learning methodology that integrates HyperColumn techniques with the convolutional block attention module (CBAM) to enhance fracture detection in X-ray radiographs. When the EfficientNet-B0 and DenseNet169 models are bolstered by the HyperColumn and the CBAM, distinct improvements in fracture-site prediction emerge: both DenseNet169 and EfficientNet-B0 showed noteworthy accuracy improvements, of approximately 0.69% and 0.70%, respectively. The HyperColumn-CBAM-DenseNet169 model particularly stood out, registering an uplift in the AUC score from 0.8778 to 0.9145. The incorporation of Grad-CAM technology refined the heatmap's focus, achieving alignment with expert-recognized fracture sites and alleviating the deep-learning challenge of heavy reliance on bounding-box annotations. This approach signifies potential strides in streamlining training processes and augmenting diagnostic precision in fracture detection.
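
For reference, a standard CBAM block (channel attention followed by spatial attention) in PyTorch; the hyperparameters follow the original CBAM paper and are not necessarily this study's configuration.

```python
# Hedged sketch: a conventional CBAM block as attached to CNN feature maps.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        self.mlp = nn.Sequential(                      # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))             # channel attention: avg + max pooled
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))      # spatial attention

feats = CBAM(256)(torch.randn(1, 256, 32, 32))         # output shape unchanged
```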

14.
J Digit Imaging ; 36(6): 2441-2460, 2023 12.
Article in English | MEDLINE | ID: mdl-37537514

ABSTRACT

Detecting neurological abnormalities such as brain tumors and Alzheimer's disease (AD) using magnetic resonance imaging (MRI) is an important research topic in the literature. Numerous machine learning models have been used to detect brain abnormalities accurately. This study addresses the problem of detecting neurological abnormalities in MRI, motivated by the need for accurate and efficient methods to assist neurologists in diagnosing these disorders. Many deep learning techniques have been applied to MRI to develop accurate brain abnormality detection models, but these networks have high time complexity. Hence, a novel hand-modeled feature-based learning network is presented to reduce the time complexity and obtain high classification performance. The proposed model uses a new feature generation architecture named pyramid and fixed-size patch (PFP). The main aim of the proposed PFP structure is to attain high classification performance using essential feature extractors with both multilevel and local features. The PFP feature extractor generates low- and high-level features using a handcrafted extractor; to obtain highly discriminative features, we used histograms of oriented gradients (HOG), hence the name PFP-HOG. Furthermore, iterative Chi2 (IChi2) is utilized to choose the clinically significant features. Finally, k-nearest neighbors (kNN) with tenfold cross-validation is used for automated classification. Four MRI neurological databases (AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged dataset) were utilized to develop our model. The PFP-HOG and IChi2-based model attained classification accuracies of 100%, 94.98%, 98.19%, and 97.80% on the AD dataset, brain tumor dataset 1, brain tumor dataset 2, and merged brain MRI dataset, respectively. These findings not only provide accurate and robust classification of various neurological disorders using MRI but also hold the potential to assist neurologists in validating manual MRI brain abnormality screening.
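
The handcrafted pipeline's flavor can be sketched with scikit-image and scikit-learn: HOG features, chi-squared feature ranking (plain chi2 here, standing in for the paper's iterative IChi2), and tenfold kNN on synthetic stand-in slices.

```python
# Hedged sketch: HOG features -> chi2 selection -> 10-fold kNN; data is synthetic.
import numpy as np
from skimage.feature import hog
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
images = rng.random((60, 64, 64))                     # stand-in MRI slices
y = rng.integers(0, 2, 60)                            # stand-in binary labels
X = np.array([hog(im, pixels_per_cell=(8, 8)) for im in images])

clf = make_pipeline(SelectKBest(chi2, k=200),         # HOG values are non-negative, so chi2 applies
                    KNeighborsClassifier(n_neighbors=1))
print(cross_val_score(clf, X, y, cv=10).mean())
```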


Subject(s)
Alzheimer Disease; Brain Neoplasms; Humans; Magnetic Resonance Imaging/methods; Neuroimaging; Brain/diagnostic imaging; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Machine Learning; Alzheimer Disease/diagnostic imaging
15.
Diagnostics (Basel) ; 13(3)2023 Jan 25.
Article in English | MEDLINE | ID: mdl-36766537

ABSTRACT

In recent years, the number of studies on the automatic diagnosis of biomedical diseases has increased. Many of these studies have used Deep Learning, which gives extremely good results but requires a vast amount of data and heavy computation; on insufficient hardware, training and inference take considerable time. Machine Learning, on the other hand, is faster than Deep Learning and requires far less computation, but it does not provide accuracy as high as Deep Learning. Therefore, our goal is to develop a hybrid system that provides a high accuracy value while requiring a smaller computing load and less time to diagnose biomedical diseases such as the retinal diseases we chose for this study. For this purpose, retinal layer extraction was first conducted through image preprocessing. Then, traditional feature extractors were combined with pre-trained Deep Learning feature extractors. To select the best features, we used the Firefly algorithm. In the end, multiple binary classifications were conducted instead of multiclass classification with Machine Learning classifiers. Two public datasets were used in this study; the method achieved a mean accuracy of 0.957 on the first and 0.954 on the second.
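
The "multiple binary classifications instead of multiclass" step corresponds to a one-vs-rest wrapper; the sketch below uses synthetic stand-ins for the selected hybrid feature vectors, not the study's data or classifier settings.

```python
# Hedged sketch: one binary classifier per retinal-disease class via one-vs-rest.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 128))       # stand-in for Firefly-selected hybrid features
y = rng.integers(0, 4, 300)           # stand-in for four retinal-disease classes

ovr = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)   # one binary SVM per class
print(ovr.predict(X[:5]))
```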

16.
Comput Methods Programs Biomed ; 225: 107089, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36058063

ABSTRACT

BACKGROUND AND OBJECTIVE: Cardiotocography, commonly called CTG, has become an indispensable auxiliary examination in obstetrics. Generally, CTG is provided in the form of a report, so the fetal heart rate and uterine contraction signals have to be extracted from the CTG images. However, most studies have focused on reading data from a single curve, and the influence of complex backgrounds has usually not been considered. METHODS: An efficient signal extraction method was proposed for binary CTG images with complex backgrounds. Firstly, the images' background grids and symbol noise were removed by templates. Then a morphological method was used to fill breakpoints in the curves. Moreover, a projection map was utilized to localize the area and the starting and ending positions of the curves. Subsequently, the curve data were extracted by column scanning. Finally, the amplitude of the extracted signal was calibrated. RESULTS: This study tested 552 CTG images simulated using the CTU-UHB database. The correlation coefficients between the extracted and original signals were 0.9991 ± 0.0030 for fetal heart rate and 0.9904 ± 0.0208 for uterine contraction; the mean absolute errors of fetal heart rate and uterine contraction were 2.4658 ± 1.8446 and 1.8025 ± 0.6155, and the root mean square errors were 4.2930 ± 2.9771 and 2.5214 ± 0.9640, respectively. After validation on 293 authentic clinical CTG images, the extracted signals were remarkably similar to the original counterparts, and no significant differences were observed. CONCLUSIONS: The proposed method can effectively extract the fetal heart rate and uterine contraction signals from binary CTG images with complex backgrounds.
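
Two of the listed steps, breakpoint filling by morphological closing and column scanning, reduce to a few OpenCV/NumPy calls; the file name is a placeholder, and the template-based grid removal and amplitude calibration steps are omitted.

```python
# Hedged sketch: close small curve breakpoints, then read one value per column.
import cv2
import numpy as np

binary = cv2.imread("ctg_curve.png", cv2.IMREAD_GRAYSCALE)  # placeholder: white curve on black
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # fill breakpoints in the curve

signal = np.full(closed.shape[1], np.nan)
for x in range(closed.shape[1]):                            # column scanning
    ys = np.flatnonzero(closed[:, x] > 0)
    if ys.size:
        signal[x] = ys.mean()        # curve height per column; calibration would follow
```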


Subject(s)
Cardiotocography; Obstetrics; Cardiotocography/methods; Databases, Factual; Female; Heart Rate, Fetal/physiology; Humans; Pregnancy; Uterine Contraction
17.
J Imaging ; 8(9)2022 Sep 16.
Article in English | MEDLINE | ID: mdl-36135416

ABSTRACT

We study the performance of CLAIRE (a diffeomorphic multi-node, multi-GPU image-registration algorithm and software package) in large-scale biomedical imaging applications with billions of voxels. At such resolutions, most existing software packages for diffeomorphic image registration are prohibitively expensive. As a result, practitioners first significantly downsample the original images and then register them using existing tools. Our main contribution is an extensive analysis of the impact of downsampling on registration performance. We study this impact by comparing full-resolution registrations obtained with CLAIRE to lower-resolution registrations for synthetic and real-world imaging datasets. Our results suggest that registration at full resolution can yield superior registration quality, but not always. For example, downsampling a synthetic image from 1024³ to 256³ decreases the Dice coefficient from 92% to 79%. However, the differences are less pronounced for noisy or low-contrast high-resolution images. CLAIRE allows us not only to register images of clinically relevant size in a few seconds but also to register images at unprecedented resolution in reasonable time. The highest resolution considered is a CLARITY image of size 2816 × 3016 × 1162 voxels. To the best of our knowledge, this is the first study of image registration quality at such resolutions.

18.
Medicina (Kaunas) ; 58(8)2022 Aug 12.
Article in English | MEDLINE | ID: mdl-36013557

ABSTRACT

Background and Objectives: Clinical diagnosis has become very significant in today's health system. Brain cancer, one of the most serious diseases and a major cause of cancer mortality, is a key research topic in the field of medical imaging. The examination and prognosis of brain tumors can be improved by an early and precise diagnosis based on magnetic resonance imaging. For computer-aided diagnosis methods to assist radiologists in the proper detection of brain tumors, the tumors in medical imagery must be detected, segmented, and classified. Manual brain tumor detection is a monotonous and error-prone procedure for radiologists; hence, it is very important to implement an automated method. Accordingly, a precise brain tumor detection and classification method is presented. Materials and Methods: The proposed method has five steps. In the first step, linear contrast stretching is used to determine the edges in the source image. In the second step, a custom 17-layer deep neural network architecture is developed for the segmentation of brain tumors. In the third step, a modified MobileNetV2 architecture is used for feature extraction and is trained using transfer learning. In the fourth step, an entropy-based controlled method is used along with a multiclass support vector machine (M-SVM) to select the best features. In the final step, the M-SVM is used for brain tumor classification, identifying meningioma, glioma, and pituitary images. Results: The proposed method was demonstrated on the BraTS 2018 and Figshare datasets. The experimental study shows that the proposed brain tumor detection and classification method outperforms other methods both visually and quantitatively, obtaining accuracies of 97.47% and 98.92%, respectively. Finally, we adopt an eXplainable Artificial Intelligence (XAI) method to explain the results. Conclusions: Our proposed approach for brain tumor detection and classification outperformed prior methods, both visually and in quantitative evaluation, with improved accuracy.
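
Steps three and five of the pipeline can be sketched with a stock torchvision MobileNetV2 feeding a multiclass SVM; the custom 17-layer segmenter, contrast stretching, and entropy-based selection are omitted, and the stock backbone stands in for the paper's modified one.

```python
# Hedged sketch: MobileNetV2 (ImageNet weights) as a feature extractor for an SVM.
import torch
from sklearn.svm import SVC
from torchvision.models import MobileNet_V2_Weights, mobilenet_v2

backbone = mobilenet_v2(weights=MobileNet_V2_Weights.IMAGENET1K_V1).features.eval()

def deep_features(batch):                      # batch: (N, 3, 224, 224) tensors
    with torch.no_grad():
        fmap = backbone(batch)                 # (N, 1280, 7, 7)
        return fmap.mean(dim=(2, 3)).numpy()   # global average pooling -> (N, 1280)

X = deep_features(torch.randn(8, 3, 224, 224))   # stand-ins for segmented MRI crops
y = [0, 1, 2, 0, 1, 2, 0, 1]                     # meningioma / glioma / pituitary
svm = SVC(kernel="linear").fit(X, y)             # multiclass SVM on the deep features
```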


Subject(s)
Brain Neoplasms; Support Vector Machine; Artificial Intelligence; Brain Neoplasms/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Neural Networks, Computer
19.
Comput Biol Chem ; 100: 107731, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35907293

ABSTRACT

Chromosome karyotyping analysis is a vital cytogenetics technique for diagnosing genetic and congenital malformations and for analyzing gestational and implantation failures. Chromosome classification, an essential stage in karyotype analysis, is a highly time-consuming, tedious, and error-prone task that requires a large amount of manual work by experienced cytogenetics experts. Many deep learning-based methods have been proposed to address the chromosome classification issue. However, two challenges remain in current chromosome classification methods. First, most existing methods were developed on different private datasets, making them difficult to compare with each other on the same basis. Second, because most existing methods omit the details needed to reproduce them, they are difficult to apply widely in clinical chromosome classification. To address these challenges, this work builds and publishes a massive clinical dataset that enables benchmarking and the building of chromosome classification baselines suitable for different scenarios. The dataset consists of 126,453 privacy-preserving G-band chromosome instances from 2763 karyotypes of 408 individuals. To the best of our knowledge, this is the first work to collect, annotate, and release a publicly available clinical chromosome classification dataset comprising over 120,000 instances. Meanwhile, the experimental results show that the proposed dataset can boost the performance of existing chromosome classification models to varying degrees, with the highest accuracy improvement being 5.39 percentage points. Moreover, the best baseline, with 99.33% accuracy, reports state-of-the-art classification performance. The clinical dataset and state-of-the-art baselines can be found at https://github.com/CloudDataLab/BenchmarkForChromosomeClassification.


Subject(s)
Algorithms; Benchmarking; Chromosomes/genetics; Humans
20.
Sensors (Basel) ; 22(3)2022 Feb 08.
Article in English | MEDLINE | ID: mdl-35162030

ABSTRACT

Hospitals, especially their emergency services, receive a high number of wrist fracture cases. For correct diagnosis and proper treatment, physicians must review images obtained from various medical equipment, along with the patient's medical records and physical examination. The aim of this study is to perform fracture detection using deep learning on wrist X-ray images to support physicians in diagnosing these fractures, particularly in emergency services. Using the SABL, RegNet, RetinaNet, PAA, Libra R-CNN, FSAF, Faster R-CNN, Dynamic R-CNN, and DCN deep-learning-based object detection models with various backbones, 20 different fracture detection procedures were performed on Gazi University Hospital's dataset of wrist X-ray images. To further improve these procedures, five different ensemble models were developed and then combined to form a unique detection model, 'wrist fracture detection-combo (WFD-C)'. Of the 26 different fracture detection models, the WFD-C model obtained the highest detection result, with 0.8639 average precision (AP50). Huawei Turkey R&D Center supports this study within the scope of the ongoing cooperation project coded 071813 between Gazi University, Huawei, and Medskor.
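
One common way to combine several detectors' boxes into a single model is weighted boxes fusion, sketched below with the ensemble-boxes package; the paper does not state its exact ensembling rule, so this is an assumed stand-in (coordinates normalized to [0, 1]).

```python
# Hedged sketch: fusing two detectors' boxes with weighted boxes fusion.
from ensemble_boxes import weighted_boxes_fusion

boxes = [[[0.10, 0.20, 0.30, 0.40]],          # model 1: one fracture candidate (x1, y1, x2, y2)
         [[0.12, 0.21, 0.31, 0.42]]]          # model 2: overlapping candidate
scores = [[0.90], [0.80]]
labels = [[0], [0]]                            # single "fracture" class

fused_boxes, fused_scores, fused_labels = weighted_boxes_fusion(
    boxes, scores, labels, iou_thr=0.55, skip_box_thr=0.1)
print(fused_boxes, fused_scores)               # overlapping candidates merged into one box
```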


Subject(s)
Deep Learning; Humans; Radiography; Wrist/diagnostic imaging; Wrist Joint; X-Rays