Results 1 - 20 of 993
1.
Ultrasound Med Biol ; 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39244483

ABSTRACT

OBJECTIVE: As metabolic dysfunction-associated steatotic liver disease (MASLD) becomes more prevalent worldwide, it is imperative to create more accurate technologies that make it easy to assess the liver in a point-of-care setting. The aim of this study is to test the performance of a new software tool implemented in Velacur (Sonic Incytes), a liver stiffness and ultrasound attenuation measurement device, on patients with MASLD. This tool employs a deep learning-based method to detect and segment shear waves in the liver tissue for subsequent analysis to improve tissue characterization for patient diagnosis. METHODS: This new tool consists of a deep learning-based algorithm, which was trained on 15,045 expert-segmented images from 103 patients, using a U-Net architecture. The algorithm was then tested on 4429 images from 36 volunteers and patients with MASLD. Test subjects were scanned at different clinics by different Velacur operators. Evaluation was performed both on individual images (image-based) and averaged across all images collected from a patient (patient-based). Ground truth was defined by expert segmentation of the shear waves within each image. For evaluation, sensitivity and specificity for correct wave detection in the image were calculated. For those images containing waves, the Dice coefficient was calculated. A prototype of the software tool was also implemented on Velacur and assessed by operators in real-world settings. RESULTS: The wave detection algorithm had a sensitivity of 81% and a specificity of 84%, with Dice coefficients of 0.74 and 0.75 for image-based and patient-based averages, respectively. The implementation of this software tool as an overlay on the B-mode ultrasound improved the quality of exams collected by operators. CONCLUSION: The shear wave algorithm performed well on a test set of volunteers and patients with metabolic dysfunction-associated steatotic liver disease.
The addition of this software tool, implemented on the Velacur system, improved the quality of liver assessments performed in a real-world, point-of-care setting.
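The sensitivity, specificity, and Dice coefficient reported above can all be computed from paired binary wave masks; a minimal illustrative sketch (not the Velacur implementation):

```python
def confusion_counts(pred, truth):
    """Count TP/FP/TN/FN over paired binary labels (0/1 sequences)."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    return tp, fp, tn, fn

def sensitivity(tp, fn):
    """Fraction of true waves that were detected."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of wave-free images correctly rejected."""
    return tn / (tn + fp)

def dice(pred, truth):
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))
```

For example, `dice([1, 1, 0, 0], [1, 0, 1, 0])` yields 0.5: one overlapping pixel against four mask pixels in total.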

2.
Bioinform Biol Insights ; 18: 11779322241272387, 2024.
Article in English | MEDLINE | ID: mdl-39246684

ABSTRACT

Objectives: This article focuses on the detection of cells in low-contrast brightfield microscopy images; in our case, chronic lymphocytic leukaemia cells. The automatic detection of cells from brightfield time-lapse microscopic images brings new opportunities in cell morphology and migration studies; to achieve the desired results, it is advisable to use state-of-the-art image segmentation methods that not only detect the cell but also delineate its boundaries with the highest possible accuracy, thus defining its shape and dimensions. Methods: We compared eight state-of-the-art neural network architectures with different backbone encoders for image data segmentation, namely U-net, U-net++, the Pyramid Attention Network, the Multi-Attention Network, LinkNet, the Feature Pyramid Network, DeepLabV3, and DeepLabV3+. The training process involved training each of these networks for 1000 epochs using the PyTorch and PyTorch Lightning libraries. For instance segmentation, the watershed algorithm and three-class image semantic segmentation were used. We also used StarDist, a deep learning-based tool for the detection of objects with star-convex shapes. Results: The optimal combination for semantic segmentation was the U-net++ architecture with a ResNeSt-269 backbone, with an intersection-over-union score of 0.8902 on the data set. For the cell characteristics examined (area, circularity, solidity, perimeter, radius, and shape index), the differences in mean value between the chronic lymphocytic leukaemia cell segmentation approaches were statistically significant (Mann-Whitney U test, P < .0001). Conclusion: We found that, overall, the algorithms demonstrate equal agreement with ground truth, but the comparison shows that the different approaches prefer different morphological features of the cells.
Consequently, choosing the most suitable method for instance-based cell segmentation depends on the particular application, namely, the specific cellular traits being investigated.
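Two of the cell characteristics compared above, circularity and shape index, reduce to simple functions of area and perimeter; a minimal sketch using common formula conventions (not taken from the article):

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: equals 1.0 for a perfect circle,
    and decreases as the outline becomes more irregular."""
    return 4 * math.pi * area / perimeter ** 2

def shape_index(area, perimeter):
    """Perimeter relative to that of a circle with equal area."""
    return perimeter / (2 * math.sqrt(math.pi * area))
```

For a unit circle (area pi, perimeter 2*pi) both measures evaluate to 1.0, which is a convenient sanity check for any segmentation pipeline computing them.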

3.
Technol Health Care ; 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39240595

ABSTRACT

BACKGROUND: Liver cancer poses a significant health challenge due to its high incidence rates and complexities in detection and treatment. Accurate segmentation of liver tumors using medical imaging plays a crucial role in early diagnosis and treatment planning. OBJECTIVE: This study proposes a novel approach combining U-Net and ResNet architectures with the Adam optimizer and sigmoid activation function. The method leverages ResNet's deep residual learning to address training issues in deep neural networks. At the same time, U-Net's structure facilitates capturing local and global contextual information essential for precise tumor characterization. The model aims to enhance segmentation accuracy by effectively capturing intricate tumor features and contextual details by integrating these architectures. The Adam optimizer expedites model convergence by dynamically adjusting the learning rate based on gradient statistics during training. METHODS: To validate the effectiveness of the proposed approach, segmentation experiments are conducted on a diverse dataset comprising 130 CT scans of liver cancers. Furthermore, a state-of-the-art fusion strategy is introduced, combining the robust feature learning capabilities of the UNet-ResNet classifier with Snake-based Level Set Segmentation. RESULTS: Experimental results demonstrate impressive performance metrics, including an accuracy of 0.98 and a minimal loss of 0.10, underscoring the efficacy of the proposed methodology in liver cancer segmentation. CONCLUSION: This fusion approach effectively delineates complex and diffuse tumor shapes, significantly reducing errors.

4.
Heliyon ; 10(16): e35933, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39258194

ABSTRACT

The growing interest in Subseasonal to Seasonal (S2S) prediction data across different industries underscores its potential use in comprehending weather patterns, extreme conditions, and important sectors such as agriculture and energy management. However, concerns about its accuracy have been raised, and enhancing the precision of rainfall predictions remains challenging in S2S forecasts. This study enhanced S2S prediction skill for precipitation amount and occurrence over the East Asian region by employing deep learning-based post-processing techniques. We utilized a modified U-Net architecture that wraps all of its convolutional layers with TimeDistributed layers as the deep learning model. For the training datasets, the precipitation prediction data of six S2S climate models and their multi-model ensemble (MME) were constructed, and the daily precipitation occurrence was obtained from three threshold values: 0 % of the daily precipitation for no-rain events, <33 % for light rain, and >67 % for heavy rain. Based on the precipitation amount prediction skills of the six climate models, deep learning-based post-processing outperformed post-processing using multiple linear regression (MLR) at lead times of weeks 2-4. The prediction accuracy of precipitation occurrence with MLR-based post-processing did not significantly improve, whereas deep learning-based post-processing enhanced the prediction accuracy across all lead times, demonstrating superiority over MLR. We thus enhanced the prediction accuracy of forecasts of the amount and occurrence of precipitation in individual climate models using deep learning-based post-processing.
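The three-class occurrence labelling described above can be sketched as follows. This is one plausible reading of the thresholds (zero precipitation for no-rain, the 67th percentile of a climatological sample separating light from heavy); the abstract does not specify how the 33-67 % band is handled, so here it is folded into the light-rain class:

```python
def percentile(sorted_vals, q):
    """Linear-interpolation percentile (q in [0, 100]) of a sorted sample."""
    idx = q / 100 * (len(sorted_vals) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1 - frac) + sorted_vals[hi] * frac

def rain_class(amount, climatology):
    """0 = no rain, 1 = light rain, 2 = heavy rain (above 67th percentile).

    `climatology` is a sample of daily precipitation amounts used to
    estimate the percentile threshold (assumed setup, not the paper's).
    """
    if amount <= 0:
        return 0
    heavy_cut = percentile(sorted(climatology), 67)
    return 2 if amount > heavy_cut else 1
```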

5.
J Imaging Inform Med ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227537

ABSTRACT

Thermography is a non-invasive and non-contact method for detecting cancer in its initial stages by examining the temperature variation between both breasts. Preprocessing methods such as resizing, ROI (region of interest) segmentation, and augmentation are frequently used to enhance the accuracy of breast thermogram analysis. In this study, we propose a modified U-Net architecture (DTCWAU-Net) that uses the dual-tree complex wavelet transform (DTCWT) and an attention gate for breast thermal image segmentation of frontal and lateral view thermograms, aiming to outline the ROI for potential tumor detection. The proposed approach achieved an average Dice coefficient of 93.03% and a sensitivity of 94.82%, showcasing its potential for accurate breast thermogram segmentation. Classification of breast thermograms into healthy or cancerous categories was carried out by extracting texture- and histogram-based features and deep features from segmented thermograms. Feature selection was performed using Neighborhood Component Analysis (NCA), followed by the application of machine learning classifiers. Compared with other state-of-the-art approaches for detecting breast cancer from thermograms, the proposed methodology showed a higher accuracy of 99.90% for VGG16 deep features with NCA and a Random Forest classifier. Simulation results show that the proposed method can be used in breast cancer screening, facilitating early detection and enhancing treatment outcomes.

6.
Skeletal Radiol ; 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39230576

ABSTRACT

OBJECTIVE: A fully automated laminar cartilage composition (MRI-based T2) analysis method was technically and clinically validated by comparing radiographically normal knees with (CL-JSN) and without contra-lateral joint space narrowing or other signs of radiographic osteoarthritis (OA, CL-noROA). MATERIALS AND METHODS: 2D U-Nets were trained from manually segmented femorotibial cartilages (n = 72) from all 7 echoes (AllE), or from the 1st echo only (1stE), of multi-echo spin-echo (MESE) MRIs acquired by the Osteoarthritis Initiative (OAI). Because of its greater accuracy, only the AllE U-Net was then applied to knees from the OAI healthy reference cohort (n = 10), CL-JSN (n = 39), and (1:1) matched CL-noROA knees (n = 39) that all had manual expert segmentation, and to 982 non-matched CL-noROA knees without expert segmentation. RESULTS: The agreement (Dice similarity coefficient) between automated vs. manual expert cartilage segmentation was between 0.82 ± 0.05/0.79 ± 0.06 (AllE/1stE) and 0.88 ± 0.03/0.88 ± 0.03 (AllE/1stE) across femorotibial cartilage plates. The deviation between automated vs. manually derived laminar T2 reached up to -2.2 ± 2.6 ms / +4.1 ± 10.2 ms (AllE/1stE). The AllE U-Net showed a similar sensitivity to cross-sectional laminar T2 differences between CL-JSN and CL-noROA knees in the matched (Cohen's D ≤ 0.54) and the non-matched (D ≤ 0.54) comparison as the matched manual analyses (D ≤ 0.48). Longitudinally, the AllE U-Net also showed a similar sensitivity to CL-JSN vs. CL-noROA differences in the matched (D ≤ 0.51) and the non-matched (D ≤ 0.43) comparison as matched manual analyses (D ≤ 0.41). CONCLUSION: The fully automated T2 analysis showed high agreement, acceptable accuracy, and similar sensitivity to cross-sectional and longitudinal laminar T2 differences in an early OA model, compared with manual expert analysis. TRIAL REGISTRATION: Clinicaltrials.gov identification: NCT00080171.
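The sensitivity comparisons above are expressed as Cohen's D, the standardized mean difference between two groups; with a pooled standard deviation it can be computed as:

```python
import math

def cohens_d(a, b):
    """Cohen's d: difference of group means divided by the
    pooled sample standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled
```

By a common rule of thumb, the D ≤ 0.54 values reported above correspond to small-to-medium effect sizes.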

7.
Comput Biol Med ; 181: 109036, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39213706

ABSTRACT

The rat sciatic nerve model is commonly used to test novel therapies for nerve injury repair. The static sciatic index (SSI) is a useful metric for quantifying functional recovery, and involves comparing an operated paw versus a control paw using a weighted ratio between the toe spread and the intermediate toe spread. To calculate it, rats are placed in a transparent box, photos are taken from underneath, and the toe distances are measured manually. This is labour intensive and subject to human error due to the challenge of consistently taking photos, identifying digits, and making manual measurements. Although several commercial kits have been developed to address this challenge, they have seen little dissemination due to cost. Here we develop a novel algorithm for automatic measurement of SSI metrics from video data using cascaded U-Nets. The algorithm consists of three U-Nets: one to segment the hind paws and two for the two pairs of digits that feed into the SSI calculation. A training intersection over union of 60 % and 80 % was achieved for the hind paws and for both digit segmentation U-Nets, respectively. The algorithm was tested against video data from three separate experiments. Compared to manual measurements, the algorithm provides the same profile of recovery for every experiment but with a tighter standard deviation in the SSI measure. Through the open-source release of this algorithm, we aim to provide an inexpensive tool to more reliably quantify functional recovery metrics for the nerve repair research community.
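The SSI itself is a weighted ratio of toe-spread factors between the experimental (operated) and normal (control) paws; a sketch using the commonly cited Bervar coefficients (the abstract does not state which coefficients the authors used, so treat these as assumptions):

```python
def ssi(ets, nts, eit, nit):
    """Static sciatic index from toe-spread measurements.

    ets/nts: toe spread (digits 1-5) of the experimental/normal paw
    eit/nit: intermediate toe spread (digits 2-4) of each paw
    Coefficients follow the Bervar-style formula (assumed here);
    values near 0 indicate normal function, near -100 severe loss.
    """
    tsf = (ets - nts) / nts      # toe spread factor
    itsf = (eit - nit) / nit     # intermediate toe spread factor
    return 108.44 * tsf + 31.85 * itsf - 5.49

# Identical measurements on both paws give a near-zero index:
# ssi(10, 10, 5, 5) == -5.49
```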


Subject(s)
Algorithms , Disease Models, Animal , Peripheral Nerve Injuries , Animals , Rats , Peripheral Nerve Injuries/physiopathology , Computer Simulation , Sciatic Nerve/physiology , Sciatic Nerve/injuries , Rats, Sprague-Dawley
8.
J Imaging Inform Med ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103563

ABSTRACT

Obstructive sleep apnea is characterized by a decrease or cessation of breathing due to repetitive closure of the upper airway during sleep, leading to a decrease in blood oxygen saturation. In this study, employing a U-Net model, we utilized drug-induced sleep endoscopy images to segment the major causes of airway obstruction, including the epiglottis, oropharynx lateral walls, and tongue base. The evaluation metrics included sensitivity, specificity, accuracy, and Dice score, with airway sensitivity at 0.93 (± 0.06), specificity at 0.96 (± 0.01), accuracy at 0.95 (± 0.01), and Dice score at 0.84 (± 0.03), indicating overall high performance. The results indicate the potential for artificial intelligence (AI)-driven automatic interpretation of sleep disorder diagnosis, with implications for standardizing medical procedures and improving healthcare services. The study suggests that advancements in AI technology hold promise for enhancing diagnostic accuracy and treatment efficacy in sleep and respiratory disorders, fostering competitiveness in the medical AI market.

9.
J Imaging Inform Med ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103564

ABSTRACT

Retinal vessel segmentation is crucial for the diagnosis of ophthalmic and cardiovascular diseases. However, retinal vessels are densely and irregularly distributed, with many capillaries blending into the background, and exhibit low contrast. Moreover, encoder-decoder-based networks for retinal vessel segmentation suffer from irreversible loss of detailed features due to repeated encoding and decoding, leading to incorrect segmentation of the vessels. Meanwhile, single-dimensional attention mechanisms possess limitations, neglecting the importance of multidimensional features. To solve these issues, in this paper we propose a detail-enhanced attention feature fusion network (DEAF-Net) for retinal vessel segmentation. First, the detail-enhanced residual block (DERB) module is proposed to strengthen the capacity for detailed representation, ensuring that intricate features are efficiently maintained during the segmentation of delicate vessels. Second, the multidimensional collaborative attention encoder (MCAE) module is proposed to optimize the extraction of multidimensional information. Then, the dynamic decoder (DYD) module is introduced to preserve spatial information during the decoding process and reduce the information loss caused by upsampling operations. Finally, the proposed detail-enhanced feature fusion (DEFF) module, composed of the DERB, MCAE, and DYD modules, fuses feature maps from both encoding and decoding and achieves effective aggregation of multi-scale contextual information. Experiments conducted on the DRIVE, CHASEDB1, and STARE datasets achieve sensitivities of 0.8305, 0.8784, and 0.8654, and AUCs of 0.9886, 0.9913, and 0.9911, respectively, demonstrating the performance of our proposed network, particularly in the segmentation of fine retinal vessels.
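The AUC values quoted above can be computed without any ML framework via the rank (Mann-Whitney) formulation: the probability that a randomly chosen vessel pixel scores higher than a randomly chosen background pixel. A naive pairwise sketch, fine for small examples:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie),
    estimated by comparing every positive/negative score pair."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))
```

Production code would use a rank-based O(n log n) version, but the pairwise form makes the definition explicit.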

10.
J Imaging Inform Med ; 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39117939

ABSTRACT

We propose a deep learning framework, "SpineCurve-net", for automated measurement of 3D Cobb angles from computed tomography (CT) images of presurgical scoliosis patients. A total of 116 scoliosis patients were analyzed, divided into a training set of 89 patients (average age 32.4 ± 24.5 years) and a validation set of 27 patients (average age 17.3 ± 5.8 years). Vertebral identification and curve fitting were achieved through U-net and NURBS-net, resulting in a Non-Uniform Rational B-Spline (NURBS) curve of the spine. The 3D Cobb angles were measured in two ways: the predicted 3D Cobb angle (PRED-3D-CA), which is the maximum value in the smoothed angle map derived from the NURBS curve, and the 2D mapping Cobb angle (MAP-2D-CA), which is the maximal angle formed by the tangent vectors along the projected 2D spinal curve. The model segmented spinal masks effectively, capturing easily missed vertebral bodies. Spoke kernel filtering distinguished vertebral regions, centralizing spinal curves. The SpineCurve-net Cobb angle measurements (PRED-3D-CA and MAP-2D-CA) correlated strongly with the surgeons' annotated Cobb angle (ground truth, GT) based on 2D radiographs, with high Pearson correlation coefficients of 0.983 and 0.934, respectively. This paper proposes an automated technique for calculating the 3D Cobb angle in preoperative scoliosis patients, yielding results that are highly correlated with traditional 2D Cobb angle measurements. Given its capacity to accurately represent the three-dimensional nature of spinal deformities, this method shows potential in aiding physicians to develop more precise surgical strategies in upcoming cases.
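The MAP-2D-CA, described above as the maximal angle formed by tangent vectors along the projected 2D spinal curve, can be sketched on a polyline; the NURBS fitting and smoothing steps of the paper are omitted here, and the input is assumed to be an ordered list of projected curve points:

```python
import math

def map_2d_cobb(points):
    """Maximal angle (degrees) between tangent vectors of a 2D polyline."""
    tangents = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy)
        tangents.append((dx / norm, dy / norm))
    best = 0.0
    for i in range(len(tangents)):
        for j in range(i + 1, len(tangents)):
            dot = tangents[i][0] * tangents[j][0] + tangents[i][1] * tangents[j][1]
            dot = max(-1.0, min(1.0, dot))  # guard acos domain
            best = max(best, math.degrees(math.acos(dot)))
    return best
```

A curve that turns through a right angle, e.g. `[(0, 0), (0, 1), (1, 1)]`, yields 90 degrees.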

11.
Skin Res Technol ; 30(8): e13783, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39113617

ABSTRACT

BACKGROUND: In recent years, the increasing prevalence of skin cancers, particularly malignant melanoma, has become a major concern for public health. The development of accurate automated segmentation techniques for skin lesions holds immense potential in alleviating the burden on medical professionals and is of substantial clinical importance for the early identification and intervention of skin cancer. Nevertheless, the irregular shape, uneven color, and noise interference of skin lesions present significant challenges to precise segmentation. Therefore, it is crucial to develop a high-precision and intelligent skin lesion segmentation framework for clinical treatment. METHODS: A precision-driven segmentation model for skin cancer images is proposed based on the Transformer U-Net, called BiADATU-Net, which integrates the deformable attention Transformer and bidirectional attention blocks into the U-Net. The encoder part utilizes a deformable attention Transformer with a dual attention block, allowing adaptive learning of global and local features. The decoder part incorporates specifically tailored scSE attention modules within skip connection layers to capture image-specific context information for strong feature fusion. Additionally, deformable convolution is aggregated into two different attention blocks to learn irregular lesion features for high-precision prediction. RESULTS: A series of experiments were conducted on four skin cancer image datasets (i.e., ISIC2016, ISIC2017, ISIC2018, and PH2). The findings show that our model exhibits satisfactory segmentation performance, achieving an accuracy rate of over 96% on all four datasets. CONCLUSION: Our experimental results validate that the proposed BiADATU-Net achieves competitive performance compared to some state-of-the-art methods. It is promising and valuable in the field of skin lesion segmentation.


Subject(s)
Melanoma , Skin Neoplasms , Humans , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Melanoma/diagnostic imaging , Melanoma/pathology , Algorithms , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Dermoscopy/methods , Deep Learning
12.
Front Physiol ; 15: 1412985, 2024.
Article in English | MEDLINE | ID: mdl-39156824

ABSTRACT

In recent years, semantic segmentation with deep learning has been widely applied in medical image segmentation, leading to the development of numerous models. Convolutional Neural Networks (CNNs) have achieved milestone results in medical image analysis. In particular, deep neural networks based on U-shaped architectures and skip connections have been extensively employed in various medical image tasks. U-Net, characterized by its encoder-decoder architecture, pioneering skip connections, and multi-scale features, has served as a fundamental network architecture for many modifications. However, U-Net cannot fully utilize all the information from the encoder layers in the decoder layers. U-Net++ connects intermediate features of different dimensions through nested and dense skip connections, but it only partially alleviates this disadvantage while greatly increasing the model parameters. In this paper, a novel BFNet is proposed that utilizes all feature maps from the encoder at every layer of the decoder and reconnects them with the current layer of the encoder. This allows the decoder to better learn the positional information of segmentation targets and improves learning of boundary information and abstract semantics in the current encoder layer. Our proposed method yields a significant improvement in accuracy of 1.4 percentage points. Besides enhancing accuracy, our proposed BFNet also reduces network parameters. All the advantages claimed are demonstrated on our dataset. We also discuss how different loss functions influence this model and some possible improvements.

13.
bioRxiv ; 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39149331

ABSTRACT

Human pluripotent stem cell (hPSC)-derived cardiac organoids are the most recent three-dimensional tissue structures that mimic the structure and functionality of the human heart, and they play a pivotal role in modeling heart development and disease. hPSC-derived cardiac organoids are commonly characterized by brightfield microscopic imaging for tracking daily organoid differentiation and morphology formation. Although brightfield microscopy provides essential information about hPSC-derived cardiac organoids, such as morphology, size, and general structure, it does not extend our understanding of cardiac organoids to cell type-specific distribution and structure. Instead, fluorescence microscopic imaging is required to identify the specific cardiovascular cell types in hPSC-derived cardiac organoids, either by fluorescence immunostaining of fixed organoid samples or by fluorescence reporter imaging of live organoids. Both approaches require extra experimental steps and techniques and do not provide general information on hPSC-derived cardiac organoids from different batches of differentiation and characterization, which limits their biomedical applications. This research addresses this limitation by proposing a comprehensive workflow for colorizing phase contrast images of cardiac organoids from brightfield microscopic imaging using conditional Generative Adversarial Networks (GANs) to provide cardiovascular cell type-specific information in hPSC-derived cardiac organoids. By infusing these phase contrast images with accurate fluorescence colorization, our approach aims to unlock the hidden wealth of cell type, structure, and further quantifications of fluorescence intensity and area for better characterizing hPSC-derived cardiac organoids.

14.
Br J Radiol ; 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39141433

ABSTRACT

OBJECTIVES: This study aims to develop an automated approach for estimating the vertical rotation of the thorax, which can be used to assess the technical adequacy of chest X-ray radiographs (CXRs). METHODS: A total of 800 chest radiographs were used to train and establish segmentation networks for outlining the lungs and spine regions in chest X-ray images. By measuring the widths of the left and right lungs between the centerline of the segmented spine and the lateral borders of the segmented lungs, the quantification of thoracic vertical rotation was achieved. Additionally, a life-size, full-body anthropomorphic phantom was employed to collect chest radiographic images at various specified rotation angles for assessing the accuracy of the proposed approach. RESULTS: The deep learning networks effectively segmented the anatomical structures of the lungs and spine. The proposed approach demonstrated a mean estimation error of less than 2° for thoracic rotation, surpassing existing techniques. CONCLUSIONS: The proposed approach offers a robust assessment of thoracic rotation and presents new possibilities for automated image quality control in chest X-ray examinations. ADVANCES IN KNOWLEDGE: This study presents a novel deep learning-based approach for the automated estimation of vertical thoracic rotation in chest X-ray radiographs. The proposed method enables a quantitative assessment of the technical adequacy of CXR examinations and opens up new possibilities for automated screening and quality control of radiographs.
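The width-based rotation measure described above can be reduced to a normalized left-right asymmetry index; a hedged sketch only, since the mapping from lung widths to degrees used in the paper (presumably calibrated on the phantom scans) is not given in the abstract:

```python
def rotation_index(width_left, width_right):
    """Normalized lung-width asymmetry about the spine centerline.

    Returns 0.0 for a perfectly non-rotated thorax; the sign indicates
    the rotation direction. Converting this index to degrees would
    require a calibration curve (e.g. from phantom acquisitions at
    known angles), which is an assumption outside this sketch.
    """
    return (width_right - width_left) / (width_right + width_left)
```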

15.
Sensors (Basel) ; 24(15)2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39123824

ABSTRACT

In this work, we investigate the impact of annotation quality and domain expertise on the performance of Convolutional Neural Networks (CNNs) for semantic segmentation of wear on titanium nitride (TiN) and titanium carbonitride (TiCN) coated end mills. Using an innovative measurement system and customized CNN architecture, we found that domain expertise significantly affects model performance. Annotator 1 achieved maximum mIoU scores of 0.8153 for abnormal wear and 0.7120 for normal wear on TiN datasets, whereas Annotator 3 with the lowest expertise achieved significantly lower scores. Sensitivity to annotation inconsistencies and model hyperparameters were examined, revealing that models for TiCN datasets showed a higher coefficient of variation (CV) of 16.32% compared to 8.6% for TiN due to the subtle wear characteristics, highlighting the need for optimized annotation policies and high-quality images to improve wear segmentation.
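The coefficient of variation quoted above (16.32 % for TiCN vs. 8.6 % for TiN) is simply the sample standard deviation expressed relative to the mean:

```python
import math

def coefficient_of_variation(values):
    """CV in percent: sample standard deviation divided by the mean."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100.0 * sd / mean
```

A higher CV across training runs, as reported for the TiCN models, indicates less stable segmentation performance.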

16.
Bioengineering (Basel) ; 11(8)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39199717

ABSTRACT

Accurate and efficient segmentation of coronary arteries from CTA images is crucial for diagnosing and treating cardiovascular diseases. This study proposes a structured approach that combines vesselness enhancement, heart region of interest (ROI) extraction, and the ResUNet deep learning method to accurately and efficiently extract coronary artery vessels. Vesselness enhancement and heart ROI extraction significantly improve the accuracy and efficiency of the segmentation process, while ResUNet enables the model to capture both local and global features. The proposed method outperformed other state-of-the-art methods, achieving a Dice similarity coefficient (DSC) of 0.867, a Recall of 0.881, and a Precision of 0.892. The exceptional results for segmenting coronary arteries from CTA images demonstrate the potential of this method to significantly contribute to accurate diagnosis and effective treatment of cardiovascular diseases.

17.
J Imaging ; 10(8)2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39194991

ABSTRACT

Liver segmentation technologies play vital roles in clinical diagnosis, disease monitoring, and surgical planning due to the complex anatomical structure and physiological functions of the liver. This paper provides a comprehensive review of the developments, challenges, and future directions in liver segmentation technology. We systematically analyzed high-quality research published between 2014 and 2024, focusing on liver segmentation methods, public datasets, and evaluation metrics. This review highlights the transition from manual to semi-automatic and fully automatic segmentation methods, describes the capabilities and limitations of available technologies, and provides future outlooks.

18.
Diagnostics (Basel) ; 14(16)2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39202266

ABSTRACT

Post-mortem (PM) imaging has potential for identifying individuals by comparing ante-mortem (AM) and PM images. Radiographic images of bones contain significant information for personal identification. However, PM images are affected by soft tissue decomposition; therefore, it is desirable to extract only images of bones that change little over time. This study evaluated the effectiveness of U-Net for bone image extraction from two-dimensional (2D) X-ray images. Two types of pseudo 2D X-ray images were created from the PM computed tomography (CT) volumetric data using ray-summation processing for training U-Net. One was a projection of all body tissues, and the other was a projection of only bones. The performance of the U-Net for bone extraction was evaluated using Intersection over Union, Dice coefficient, and the area under the receiver operating characteristic curve. Additionally, AM chest radiographs were used to evaluate its performance with real 2D images. Our results indicated that bones could be extracted visually and accurately from both AM and PM images using U-Net. The extracted bone images could provide useful information for personal identification in forensic pathology.
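The ray-summation processing mentioned above, used to create pseudo 2D X-ray images from CT volumetric data, amounts to summing voxel values along one axis; a minimal sketch on nested lists (a real pipeline would operate on arrays and rescale the result):

```python
def ray_summation(volume):
    """Project a CT volume, given as nested lists indexed (z, y, x),
    along the depth (z) axis by summing values per (y, x) position,
    yielding a pseudo 2D radiograph."""
    depth = len(volume)
    rows = len(volume[0])
    cols = len(volume[0][0])
    return [[sum(volume[z][y][x] for z in range(depth)) for x in range(cols)]
            for y in range(rows)]
```

Running the same projection on a bones-only volume (all soft-tissue voxels zeroed) produces the second training target described in the abstract.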

19.
Biomed Phys Eng Express ; 10(5)2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39094595

ABSTRACT

Dynamic 2-[18F] fluoro-2-deoxy-D-glucose positron emission tomography (dFDG-PET) for human brain imaging has considerable clinical potential, yet its utilization remains limited. A key challenge in the quantitative analysis of dFDG-PET is characterizing a patient-specific blood input function, traditionally reliant on invasive arterial blood sampling. This research introduces a novel approach employing non-invasive deep learning model-based computations from the internal carotid arteries (ICA) with partial volume (PV) corrections, thereby eliminating the need for invasive arterial sampling. We present an end-to-end pipeline incorporating a 3D U-Net based ICA-net for ICA segmentation, alongside a Recurrent Neural Network (RNN) based MCIF-net for the derivation of a model-corrected blood input function (MCIF) with PV corrections. The developed 3D U-Net and RNN were trained and validated using a 5-fold cross-validation approach on 50 human brain FDG PET scans. The ICA-net achieved an average Dice score of 82.18% and an Intersection over Union of 68.54% across all tested scans. Furthermore, the MCIF-net exhibited a minimal root mean squared error of 0.0052. The application of this pipeline to ground truth data for dFDG-PET brain scans resulted in the precise localization of seizure onset regions, which contributed to a successful clinical outcome, with the patient achieving a seizure-free state after treatment. These results underscore the efficacy of the ICA-net and MCIF-net deep learning pipeline in learning the ICA structure's distribution and automating MCIF computation with PV corrections. This advancement marks a significant leap in non-invasive neuroimaging.
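The root mean squared error reported for MCIF-net (0.0052) is computed from paired predicted and reference input-function samples:

```python
import math

def rmse(pred, truth):
    """Root mean squared error between paired numeric sequences."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred))
```

Unlike Dice or IoU, which score spatial overlap of the ICA segmentation, RMSE scores the agreement of the derived blood input curve with its reference.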


Subject(s)
Brain , Deep Learning , Fluorodeoxyglucose F18 , Positron-Emission Tomography , Humans , Positron-Emission Tomography/methods , Brain/diagnostic imaging , Brain/blood supply , Image Processing, Computer-Assisted/methods , Brain Mapping/methods , Neural Networks, Computer , Carotid Artery, Internal/diagnostic imaging , Male , Algorithms , Female , Radiopharmaceuticals
20.
Comput Biol Med ; 180: 108927, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39096608

ABSTRACT

Rare genetic diseases are difficult to diagnose, and this translates into a diagnostic odyssey for patients. This is particularly true for the more than 900 rare diseases that include orodental developmental anomalies such as missing teeth. However, if left untreated, their symptoms can become significant and disabling for the patient. Early detection and rapid management are therefore essential in this context. The i-Dent project aims to supply a pre-diagnostic tool to detect rare diseases with tooth agenesis of varying severity and pattern. To identify missing teeth, image segmentation models (Mask R-CNN, U-Net) were trained for the automatic detection of teeth on patients' panoramic dental X-rays. Teeth segmentation enables the identification of teeth that are present or missing within the mouth. Furthermore, a dental age assessment is conducted to verify whether the absence of teeth is an anomaly or a characteristic of the patient's age. Due to the small size of our dataset, we developed a new dental age assessment technique based on the tooth eruption rate. Information about missing teeth is then used by a final algorithm, based on the agenesis probabilities, to propose a pre-diagnosis of a rare disease. The results obtained by our system in detecting three genes (PAX9, WNT10A, and EDA) are very promising, providing a pre-diagnosis with an average accuracy of 72 %.
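The dental-age check described above, deciding whether an absent tooth is an anomaly or simply not yet erupted at the patient's age, can be sketched as a filter over expected eruption ages. The ages below are hypothetical placeholders, not the eruption-rate table developed in the paper:

```python
# Hypothetical eruption ages (years) for a few permanent teeth;
# a real pre-diagnostic tool would use per-tooth population tables.
ERUPTION_AGE = {"central_incisor": 7, "first_molar": 6, "second_molar": 12}

def anomalous_missing(missing_teeth, patient_age):
    """Keep only missing teeth already expected at the patient's age;
    absences of not-yet-erupted teeth are treated as normal development."""
    return [t for t in missing_teeth
            if ERUPTION_AGE.get(t, 0) <= patient_age]
```

For an 8-year-old, a missing second molar would be ignored (expected around age 12), while a missing first molar would count toward the agenesis pattern.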


Subject(s)
Rare Diseases , Humans , Rare Diseases/genetics , Rare Diseases/diagnostic imaging , Child , Male , Female , Radiography, Panoramic , Adolescent