1 - 20 of 145,049
1.
J Biomed Opt ; 29(Suppl 2): S22702, 2025 Dec.
Article En | MEDLINE | ID: mdl-38434231

Significance: Advancements in label-free microscopy could provide real-time, non-invasive imaging with unique sources of contrast and automated standardized analysis to characterize heterogeneous and dynamic biological processes. These tools would overcome challenges with widely used methods that are destructive (e.g., histology, flow cytometry) or lack cellular resolution (e.g., plate-based assays, whole animal bioluminescence imaging). Aim: This perspective aims to (1) justify the need for label-free microscopy to track heterogeneous cellular functions over time and space within unperturbed systems and (2) recommend improvements regarding instrumentation, image analysis, and image interpretation to address these needs. Approach: Three key research areas (cancer research, autoimmune disease, and tissue and cell engineering) are considered to support the need for label-free microscopy to characterize heterogeneity and dynamics within biological systems. Based on the strengths (e.g., multiple sources of molecular contrast, non-invasive monitoring) and weaknesses (e.g., imaging depth, image interpretation) of several label-free microscopy modalities, improvements for future imaging systems are recommended. Conclusion: Improvements in instrumentation including strategies that increase resolution and imaging speed, standardization and centralization of image analysis tools, and robust data validation and interpretation will expand the applications of label-free microscopy to study heterogeneous and dynamic biological systems.


Histological Techniques , Microscopy , Animals , Flow Cytometry , Image Processing, Computer-Assisted
2.
J Vis Exp ; (207)2024 May 17.
Article En | MEDLINE | ID: mdl-38829110

PyDesigner is a Python-based software package implementing the original Diffusion parameter EStImation with Gibbs and NoisE Removal (DESIGNER) pipeline (Dv1) for dMRI preprocessing and tensor estimation. The software is openly provided for non-commercial research and may not be used for clinical care. PyDesigner combines tools from FSL and MRtrix3 to perform denoising, Gibbs ringing correction, eddy current and motion correction, brain masking, image smoothing, and Rician bias correction to optimize the estimation of multiple diffusion measures. It runs across platforms on Windows, Mac, and Linux and accurately derives commonly used metrics from DKI, DTI, WMTI, FBI, and FBWM datasets, as well as tractography ODFs and .fib files. It is also file-format agnostic, accepting inputs in .nii, .nii.gz, .mif, and DICOM formats. User-friendly and easy to install, the software also outputs quality-control metrics illustrating signal-to-noise ratio graphs, outlier voxels, and head motion to evaluate data integrity. Additionally, the pipeline supports multiple echo-time dataset processing and features pipeline customization, allowing the user to specify which processes are employed and which outputs are produced to meet a variety of user needs.


Diffusion Magnetic Resonance Imaging , Software , Humans , Diffusion Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods , Brain/diagnostic imaging
3.
Radiat Oncol ; 19(1): 69, 2024 May 31.
Article En | MEDLINE | ID: mdl-38822385

BACKGROUND: Multiple artificial intelligence (AI)-based autocontouring solutions have become available, each promising high accuracy and time savings compared with manual contouring. Before implementing AI-driven autocontouring into clinical practice, three commercially available CT-based solutions were evaluated. MATERIALS AND METHODS: The following solutions were evaluated in this work: MIM-ProtégéAI+ (MIM), Radformation-AutoContour (RAD), and Siemens-DirectORGANS (SIE). Sixteen organs were identified that could be contoured by all solutions. For each organ, ten patients that had manually generated contours approved by the treating physician (AP) were identified, totaling forty-seven different patients. CT scans in the supine position were acquired using a Siemens-SOMATOMgo 64-slice helical scanner and used to generate autocontours. Physician scoring of contour accuracy was performed by at least three physicians using a five-point Likert scale. Dice similarity coefficient (DSC), Hausdorff distance (HD) and mean distance to agreement (MDA) were calculated comparing AI contours to "ground truth" AP contours. RESULTS: The average physician score ranged from 1.00, indicating that all physicians reviewed the contour as clinically acceptable with no modifications necessary, to 3.70, indicating changes are required and that the time taken to modify the structures would likely take as long or longer than manually generating the contour. When averaged across all sixteen structures, the AP contours had a physician score of 2.02, MIM 2.07, RAD 1.96 and SIE 1.99. DSC ranged from 0.37 to 0.98, with 41/48 (85.4%) contours having an average DSC ≥ 0.7. Average HD ranged from 2.9 to 43.3 mm. Average MDA ranged from 0.6 to 26.1 mm. CONCLUSIONS: The results of our comparison demonstrate that each vendor's AI contouring solution exhibited capabilities similar to those of manual contouring. 
There were a small number of cases where unusual anatomy led to poor scores with one or more of the solutions. The consistency and comparable performance of all three vendors' solutions suggest that radiation oncology centers can confidently choose any of the evaluated solutions based on individual preferences, resource availability, and compatibility with their existing clinical workflows. Although AI-based contouring may result in high-quality contours for the majority of patients, a minority of patients require manual contouring and more in-depth physician review.
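The agreement metrics reported above are standard and easy to state precisely. As a rough stdlib sketch (illustrative only, not the study's actual analysis code), the Dice similarity coefficient and the symmetric Hausdorff distance on voxelized contours are:

```python
import math

def dice_similarity(a, b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) between two voxel sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty contours agree perfectly by convention
    return 2.0 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (exhaustive search)."""
    nearest = lambda p, qs: min(math.dist(p, q) for q in qs)
    return max(max(nearest(p, b) for p in a),
               max(nearest(q, a) for q in b))

print(dice_similarity({(0, 0), (0, 1)}, {(0, 1), (1, 1)}))  # → 0.5
print(hausdorff([(0, 0)], [(3, 4)]))                        # → 5.0
```

In practice, Hausdorff distance and mean distance to agreement are computed on resampled contour surface points rather than on full voxel sets.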


Artificial Intelligence , Radiotherapy Planning, Computer-Assisted , Tomography, X-Ray Computed , Humans , Radiotherapy Planning, Computer-Assisted/methods , Organs at Risk/radiation effects , Algorithms , Image Processing, Computer-Assisted/methods
4.
Invertebr Syst ; 382024 Jun.
Article En | MEDLINE | ID: mdl-38838190

Hymenoptera is among the most diverse and abundant insect orders. Many of its species potentially play key roles as food sources, pest controllers, and pollinators. However, little is known about their diversity and biology, and ~80% of the species have not yet been described. Classical taxonomy based on morphology is a rather slow process, but DNA barcoding has already brought considerable progress in identification. Innovative methods such as image-based identification and automation can further speed up the process. We present a proof of concept for image-based recognition of a parasitic wasp family, the Diapriidae (Hymenoptera), obtained as part of the GBOL III project. These tiny (1.2-4.5 mm) wasps were photographed and identified using DNA barcoding to provide a solid ground truth for training a neural network. Taxonomic identification was carried out down to the genus level. Subsequently, three different neural network architectures were trained, evaluated, and optimised. As a result, 11 different genera of diaprids and one mixed group of 'other Hymenoptera' can be classified with an average accuracy of 96%. Additionally, the sex of the specimen can be classified automatically with an accuracy of >97%.


Neural Networks, Computer , Wasps , Animals , Wasps/genetics , Wasps/anatomy & histology , DNA Barcoding, Taxonomic , Image Processing, Computer-Assisted/methods , Female , Classification/methods , Species Specificity , Male
5.
J Nucl Med Technol ; 52(2): 168-172, 2024 Jun 05.
Article En | MEDLINE | ID: mdl-38839124

Because nuclear medicine diagnostic equipment is not installed at our educational institution, we had been unable to incorporate nuclear medicine techniques into on-campus training. Methods: We introduced a diagnostic image-processing simulator in place of nuclear medicine diagnostic equipment and used it to conduct on-campus practical training in nuclear medicine technology. We also conducted a questionnaire survey of students regarding their experience with this simulator-based training. Results: The survey results revealed that the on-campus practical training using simulators deepened students' understanding of the content they had encountered in classroom lectures. Conclusion: We successfully implemented on-campus practical training in nuclear medicine technology using a diagnostic image-processing simulator, and the questionnaire results indicate that such simulator-based training enhances students' understanding of nuclear medicine technology.


Nuclear Medicine , Nuclear Medicine/education , Surveys and Questionnaires , Humans , Image Processing, Computer-Assisted/methods
6.
Top Magn Reson Imaging ; 33(3): e0312, 2024 Jun 01.
Article En | MEDLINE | ID: mdl-38836588

BACKGROUND: Altered size of the corpus callosum (CC) has been reported in individuals with autism spectrum disorder (ASD), but few studies have investigated younger children. Moreover, knowledge about age-related changes in CC size in individuals with ASD is limited. OBJECTIVES: Our objective was to investigate the age-related size of the CC and compare it with that of age-matched healthy controls between the ages of 2 and 18 years. METHODS: Structural-weighted images were acquired in 97 male patients diagnosed with ASD; published data were used for the control group. The CC was segmented into 7 distinct subregions (rostrum, genu, rostral body, anterior midbody, posterior midbody, isthmus, and splenium) as per Witelson's technique using ITK-SNAP software. We calculated both the total length and volume of the CC as well as the length and height of its 7 subregions. CC length measures were analyzed in both continuous and categorical forms: Pearson's correlation was used for the continuous form, while the categorical forms were based on age ranges reflecting brain expansion during early postnatal years. Differences in CC measures between adjacent age groups in individuals with ASD were assessed using a Student t-test. Mean and standard deviation scores were compared between ASD and control groups using the Welch t-test. RESULTS: Age showed a moderate positive association with the total length of the CC (r = 0.43; Padj = 0.003) among individuals with ASD. Among the subregions, a positive association was observed only in the anterior midbody of the CC (r = 0.41; Padj = 0.01). No association was found between age and the height of individual subregions, or with the total volume of the CC. In comparison with healthy controls, individuals with ASD exhibited shorter lengths and heights of the genu and splenium of the CC across wide age ranges.
CONCLUSION: Overall, our results highlight a distinct abnormal developmental trajectory of CC in ASD, particularly in the genu and splenium structures, potentially reflecting underlying pathophysiological mechanisms that warrant further investigation.
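The age association reported above is a plain Pearson correlation. A minimal stdlib sketch (the subject values below are invented for illustration and are not study data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

# age (years) vs. total CC length (mm) for a handful of hypothetical subjects
ages = [2, 5, 9, 12, 16]
lengths = [55.0, 58.5, 61.0, 62.5, 65.0]
print(round(pearson_r(ages, lengths), 2))  # → 0.99
```

Significance would then come from testing r against the t distribution with n - 2 degrees of freedom, with the reported Padj reflecting multiple-comparison correction across subregions.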


Autism Spectrum Disorder , Corpus Callosum , Magnetic Resonance Imaging , Humans , Male , Corpus Callosum/diagnostic imaging , Corpus Callosum/pathology , Autism Spectrum Disorder/diagnostic imaging , Autism Spectrum Disorder/pathology , Child , Adolescent , Child, Preschool , Female , Image Processing, Computer-Assisted
7.
IEEE J Biomed Health Inform ; 28(6): 3379-3388, 2024 Jun.
Article En | MEDLINE | ID: mdl-38843069

Monitoring in-bed pose based on the Internet of Medical Things (IoMT) and ambient technology has a significant impact on many applications, including sleep-related disorders such as obstructive sleep apnea syndrome, assessment of sleep quality, and the health risk of pressure ulcers. In this research, a new multimodal in-bed pose estimation method is proposed using a deep learning framework. The Simultaneously-collected multimodal Lying Pose (SLP) dataset is used for performance evaluation, where two modalities, long-wave infrared (LWIR) and depth images, are used to train the proposed model. The main contributions of this research are the feature fusion network and the use of a generative model to generate RGB images with poses matching the other modalities (LWIR/depth). The inclusion of a generative model helps to improve the overall accuracy of the pose estimation algorithm. Moreover, the method generalizes to recovering human pose in both home and hospital settings under various cover-thickness levels. The proposed model is compared with other fusion-based models and shows improved performance, reaching 97.8% at PCKh@0.5. In addition, performance was evaluated under different cover conditions and in home and hospital environments, both of which show improvements with the proposed model.
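PCKh@0.5, the metric quoted above, counts a predicted keypoint as correct when it lies within half the head-segment length of its ground truth. A minimal sketch (illustrative, not the paper's evaluation code; the sample values are invented):

```python
import math

def pckh(pred, gt, head_sizes, alpha=0.5):
    """Fraction of keypoints within alpha * head-segment length of ground truth.

    pred, gt: per-sample lists of (x, y) keypoints; head_sizes: per-sample scalars.
    """
    correct = total = 0
    for p_kpts, g_kpts, head in zip(pred, gt, head_sizes):
        for p, g in zip(p_kpts, g_kpts):
            total += 1
            if math.dist(p, g) <= alpha * head:
                correct += 1
    return correct / total

# one hypothetical sample with head length 10 px: the threshold is 5 px
score = pckh([[(0, 0), (10, 0)]], [[(3, 4), (0, 0)]], [10.0])
print(score)  # → 0.5 (first keypoint is 5 px off and counts, second is 10 px off)
```

Normalizing by head size makes the threshold scale-invariant, which matters when subjects appear at different distances from the camera.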


Neural Networks, Computer , Posture , Humans , Posture/physiology , Deep Learning , Algorithms , Image Processing, Computer-Assisted/methods , Beds
8.
J Biomed Opt ; 29(6): 066006, 2024 Jun.
Article En | MEDLINE | ID: mdl-38846677

Significance: Photoacoustic computed tomography (PACT) is a promising non-invasive imaging technique for both life science and clinical applications. To achieve fast imaging, modern PACT systems are equipped with arrays of hundreds to thousands of ultrasound transducer (UST) elements, and the element count continues to increase. However, a large number of UST elements acquiring data in parallel generates massive data volumes, making fast image reconstruction very challenging. Although several research groups have developed GPU-accelerated methods for PACT, an explicit, feasible step-by-step description of GPU-based algorithms for various hardware platforms is still lacking. Aim: In this study, we propose a comprehensive framework for developing GPU-accelerated PACT image reconstruction, to help the research community grasp this advanced image reconstruction method. Approach: We leverage widely accessible open-source parallel computing tools, including Python multiprocessing-based parallelism, Taichi Lang for Python, CUDA, and other possible backends. We demonstrate that our framework significantly improves the performance of PACT reconstruction, enabling faster analysis and real-time applications. We also describe how to realize parallel computing on various hardware configurations, including multicore CPU, single-GPU, and multi-GPU platforms. Results: Notably, our framework achieves a speedup of ∼871× when reconstructing extremely large-scale three-dimensional PACT images on a dual-GPU platform compared with a 24-core workstation CPU. We share example code via GitHub. Conclusions: Our approach allows easy adoption and adaptation by the research community, fostering implementations of PACT in both life science and medicine.
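The reconstruction being accelerated here is, at its core, delay-and-sum backprojection, which is embarrassingly parallel across image pixels. The toy sketch below (invented 2D ring geometry and units; thread-based CPU workers stand in for the multiprocessing/Taichi/CUDA backends the framework actually targets) reconstructs a simulated point absorber:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Geometry: a ring of transducer elements around the imaging region (units: mm, µs).
C = 1.5        # speed of sound, mm/µs
N_ELEM = 128   # number of transducer elements
R = 30.0       # array radius, mm
DT = 0.05      # sampling interval, µs
NT = 1000      # samples per channel

angles = np.linspace(0.0, 2.0 * np.pi, N_ELEM, endpoint=False)
elems = np.stack([R * np.cos(angles), R * np.sin(angles)], axis=1)  # (N_ELEM, 2)

def simulate_point_source(src, sigma=0.3):
    """Synthetic channel data for one point absorber: a Gaussian pulse per element."""
    t = np.arange(NT) * DT
    d = np.linalg.norm(elems - src, axis=1)  # element-to-source distances
    return np.exp(-((t[None, :] - (d / C)[:, None]) ** 2) / (2.0 * sigma ** 2))

def backproject_rows(args):
    """Delay-and-sum over one chunk of image rows (the per-worker unit of work)."""
    sinogram, xs, ys_chunk = args
    img = np.zeros((len(ys_chunk), len(xs)))
    for j, y in enumerate(ys_chunk):
        pix = np.stack([xs, np.full_like(xs, y)], axis=1)  # pixel centres in this row
        for e in range(N_ELEM):
            d = np.linalg.norm(pix - elems[e], axis=1)
            idx = np.clip(np.round(d / C / DT).astype(int), 0, NT - 1)
            img[j] += sinogram[e, idx]  # sum each element's sample at its delay
    return img

def reconstruct(sinogram, xs, ys, workers=4):
    chunks = np.array_split(ys, workers)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(backproject_rows, [(sinogram, xs, c) for c in chunks])
    return np.vstack(list(parts))

xs = np.linspace(-20, 20, 64)
ys = np.linspace(-20, 20, 64)
sino = simulate_point_source(np.array([5.0, -3.0]))
img = reconstruct(sino, xs, ys)
iy, ix = np.unravel_index(np.argmax(img), img.shape)
print(round(float(xs[ix]), 1), round(float(ys[iy]), 1))  # peak lands near (5, -3)
```

Each worker handles an independent chunk of rows; the same decomposition is what maps one pixel (or pixel block) per thread on a GPU.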


Algorithms , Image Processing, Computer-Assisted , Phantoms, Imaging , Photoacoustic Techniques , Photoacoustic Techniques/methods , Photoacoustic Techniques/instrumentation , Image Processing, Computer-Assisted/methods , Animals , Computer Graphics , Tomography, X-Ray Computed/methods , Tomography, X-Ray Computed/instrumentation , Humans
9.
Sci Rep ; 14(1): 13082, 2024 06 07.
Article En | MEDLINE | ID: mdl-38844566

Accurate classification of tooth development stages from orthopantomograms (OPG) is crucial for dental diagnosis, treatment planning, age assessment, and forensic applications. This study aims to develop an automated method for classifying third molar development stages using OPGs. Initially, our data consisted of 3422 OPG images, each classified and curated by expert evaluators. The dataset includes images from both the Q3 (lower left) and Q4 (lower right) regions of the jaw extracted from panoramic images, resulting in a total of 6624 images for analysis. Following data collection, the methodology employs region-of-interest extraction, pre-filtering, and extensive data augmentation techniques to enhance classification accuracy. Several deep neural network architectures, including EfficientNet, EfficientNetV2, MobileNet Large, MobileNet Small, ResNet18, and ShuffleNet, were optimized for this task. Our findings indicate that EfficientNet achieved the highest classification accuracy at 83.7%, while the other architectures achieved accuracies ranging from 71.57% to 82.03%. The variation in performance across architectures highlights the influence of model complexity and task-specific features on classification accuracy. This research introduces a novel machine learning model designed to accurately estimate the development stages of lower wisdom teeth in OPG images, contributing to the fields of dental diagnostics and treatment planning.


Deep Learning , Molar, Third , Radiography, Panoramic , Molar, Third/growth & development , Molar, Third/diagnostic imaging , Humans , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Female , Male
10.
Genome Biol ; 25(1): 147, 2024 Jun 06.
Article En | MEDLINE | ID: mdl-38844966

Current clustering analysis of spatial transcriptomics data primarily relies on molecular information and fails to fully exploit the morphological features present in histology images, leading to compromised accuracy and interpretability. To overcome these limitations, we have developed a multi-stage statistical method called iIMPACT. It identifies and defines histology-based spatial domains based on AI-reconstructed histology images and spatial context of gene expression measurements, and detects domain-specific differentially expressed genes. Through multiple case studies, we demonstrate iIMPACT outperforms existing methods in accuracy and interpretability and provides insights into the cellular spatial organization and landscape of functional genes within spatial transcriptomics data.


Gene Expression Profiling , Transcriptome , Gene Expression Profiling/methods , Humans , Cluster Analysis , Image Processing, Computer-Assisted/methods
11.
Sci Rep ; 14(1): 12907, 2024 06 05.
Article En | MEDLINE | ID: mdl-38839814

Flatbed scanners are commonly used for root analysis, but typical manual segmentation methods are time-consuming and prone to errors, especially in large-scale, multi-plant studies. Furthermore, the complex nature of root structures combined with noisy backgrounds in images complicates automated analysis. Addressing these challenges, this article introduces RhizoNet, a deep learning-based workflow to semantically segment plant root scans. Utilizing a sophisticated Residual U-Net architecture, RhizoNet enhances prediction accuracy and employs a convex hull operation for delineation of the primary root component. Its main objective is to accurately segment root biomass and monitor its growth over time. RhizoNet processes color scans of plants grown in a hydroponic system known as EcoFAB, subjected to specific nutritional treatments. The root detection model using RhizoNet demonstrates strong generalization in the validation tests of all experiments despite variable treatments. The main contributions are the standardization of root segmentation and phenotyping, systematic and accelerated analysis of thousands of images, significantly aiding in the precise assessment of root growth dynamics under varying plant conditions, and offering a path toward self-driving labs.
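The convex-hull delineation step mentioned above can be illustrated with Andrew's monotone-chain algorithm, here assumed to run over 2D pixel coordinates of a predicted root mask (a generic sketch; RhizoNet's actual implementation is not reproduced):

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull for 2D points, returned counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()  # drop points that would make a right turn
            chain.append(p)
        return chain[:-1]  # the endpoint reappears in the other half

    return half(pts) + half(list(reversed(pts)))

# an interior mask pixel is excluded from the hull
print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))  # → [(0, 0), (2, 0), (2, 2), (0, 2)]
```

Restricting segmentation output to the hull of the primary root component is one way to suppress spurious detections far from the plant.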


Biomass , Plant Roots , Plant Roots/growth & development , Image Processing, Computer-Assisted/methods , Deep Learning
12.
Sci Rep ; 14(1): 12630, 2024 06 02.
Article En | MEDLINE | ID: mdl-38824210

In this study, we present the development of a fine structural human phantom designed specifically for applications in dentistry. This research focused on assessing the viability of applying medical computer vision techniques to the task of segmenting individual teeth within a phantom. Using a virtual cone-beam computed tomography (CBCT) system, we generated over 170,000 training datasets. These datasets were produced by varying the elemental densities and tooth sizes within the human phantom, as well as the X-ray spectrum, noise intensity, and projection cutoff intensity in the virtual CBCT system. A deep-learning-based tooth segmentation model was trained using the generated datasets. The results demonstrate agreement with manual contouring when the model is applied to clinical CBCT data: the Dice similarity coefficient exceeded 0.87, indicating robust performance of the segmentation model even though it was trained on virtual imaging. The present results show the practical utility of virtual imaging techniques in dentistry and highlight the potential of medical computer vision for enhancing precision and efficiency in dental imaging processes.


Cone-Beam Computed Tomography , Phantoms, Imaging , Tooth , Humans , Tooth/diagnostic imaging , Tooth/anatomy & histology , Cone-Beam Computed Tomography/methods , Dentistry/methods , Image Processing, Computer-Assisted/methods , Deep Learning
13.
Hum Brain Mapp ; 45(8): e26718, 2024 Jun 01.
Article En | MEDLINE | ID: mdl-38825985

The early stages of human development are increasingly acknowledged as pivotal in laying the groundwork for subsequent behavioral and cognitive development. Spatiotemporal (4D) brain functional atlases are important in elucidating the development of human brain functions. However, the scarcity of such atlases for early life stages stems from two primary challenges: (1) the significant noise in functional magnetic resonance imaging (fMRI) that complicates the generation of high-quality atlases for each age group, and (2) the rapid and complex changes in the early human brain that hinder the maintenance of temporal consistency in 4D atlases. This study tackles these challenges by integrating low-rank tensor learning with spectral embedding, thereby proposing a novel, data-driven 4D functional atlas generation framework based on spectral functional network learning (SFNL). This method utilizes low-rank tensor learning to capture common functional connectivity (FC) patterns across different ages, thus optimizing FCs for each age group to improve the temporal consistency of functional networks. Incorporating spectral embedding aids in mitigating potential noise in FC networks derived from fMRI data by reconstructing networks in the spectral space. Utilizing SFNL-generated functional networks enables the creation of consistent, high-quality spatiotemporal functional atlases. The framework was applied to the developing Human Connectome Project (dHCP) dataset, generating the first neonatal 4D functional atlases with fine-grained temporal and spatial resolutions. Experimental evaluations focusing on functional homogeneity, reliability, and temporal consistency demonstrated the superiority of our framework compared to existing methods for constructing 4D atlases.
Additionally, network analysis experiments, including individual identification, functional systems development, and local efficiency assessments, further corroborate the efficacy and robustness of the generated atlases. The 4D atlases and related codes will be made publicly accessible (https://github.com/zhaoyunxi/neonate-atlases).
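The spectral-embedding step can be illustrated in isolation: build a graph Laplacian from an FC-like affinity matrix and embed nodes with its low-frequency eigenvectors (a generic Laplacian-eigenmap sketch with a toy affinity matrix; SFNL itself couples this with low-rank tensor learning, which is not reproduced here):

```python
import numpy as np

def spectral_embedding(W, k=2):
    """Embed graph nodes using the k lowest nontrivial eigenvectors
    of the unnormalized graph Laplacian L = D - W."""
    d = W.sum(axis=1)
    L = np.diag(d) - W
    vals, vecs = np.linalg.eigh(L)  # eigenvalues returned in ascending order
    return vecs[:, 1:k + 1]         # skip the constant 0-eigenvalue eigenvector

# Toy affinity: two tightly connected 3-node modules joined by one weak edge.
W = np.zeros((6, 6))
for block in [(0, 1, 2), (3, 4, 5)]:
    for i in block:
        for j in block:
            if i != j:
                W[i, j] = 1.0
W[2, 3] = W[3, 2] = 0.05  # weak bridge between the modules
emb = spectral_embedding(W, k=2)
print(np.sign(emb[:, 0]))  # the first coordinate separates the two modules by sign
```

Noise suppression comes from discarding the high-frequency end of the spectrum: reconstructing networks from only the smooth eigenvectors filters out the erratic connections that fMRI noise induces.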


Atlases as Topic , Connectome , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Infant, Newborn , Connectome/methods , Male , Female , Brain/diagnostic imaging , Brain/physiology , Brain/growth & development , Infant , Image Processing, Computer-Assisted/methods , Machine Learning , Nerve Net/diagnostic imaging , Nerve Net/physiology , Nerve Net/growth & development
14.
Acta Crystallogr D Struct Biol ; 80(Pt 6): 421-438, 2024 Jun 01.
Article En | MEDLINE | ID: mdl-38829361

For cryo-electron tomography (cryo-ET) of beam-sensitive biological specimens, a planar sample geometry is typically used. As the sample is tilted, the effective thickness of the sample along the direction of the electron beam increases and the signal-to-noise ratio concomitantly decreases, limiting the transfer of information at high tilt angles. In addition, the tilt range where data can be collected is limited by a combination of various sample-environment constraints, including the limited space in the objective lens pole piece and the possible use of fixed conductive braids to cool the specimen. Consequently, most tilt series are limited to a maximum of ±70°, leading to the presence of a missing wedge in Fourier space. The acquisition of cryo-ET data without a missing wedge, for example using a cylindrical sample geometry, is hence attractive for volumetric analysis of low-symmetry structures such as organelles or vesicles, lysis events, pore formation or filaments for which the missing information cannot be compensated by averaging techniques. Irrespective of the geometry, electron-beam damage to the specimen is an issue and the first images acquired will transfer more high-resolution information than those acquired last. There is also an inherent trade-off between higher sampling in Fourier space and avoiding beam damage to the sample. Finally, the necessity of using a sufficient electron fluence to align the tilt images means that this fluence needs to be fractionated across a small number of images; therefore, the order of data acquisition is also a factor to consider. Here, an n-helix tilt scheme is described and simulated which uses overlapping and interleaved tilt series to maximize the use of a pillar geometry, allowing the entire pillar volume to be reconstructed as a single unit. 
Three related tilt schemes are also evaluated that extend the continuous and classic dose-symmetric tilt schemes for cryo-ET to pillar samples to enable the collection of isotropic information across all spatial frequencies. A fourfold dose-symmetric scheme is proposed which provides a practical compromise between uniform information transfer and complexity of data acquisition.
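The ordering logic of a grouped dose-symmetric tilt series can be sketched generically (a simplified single-series illustration; the n-helix and fourfold schemes described above interleave multiple such series around a pillar, and grouping conventions vary between implementations):

```python
def dose_symmetric_order(max_tilt, step, group=2):
    """Return tilt angles (degrees) in acquisition order: start at 0 and
    alternate sides in groups of `group` angles, so low tilts are exposed first."""
    pos = list(range(step, max_tilt + 1, step))  # +step, +2*step, ...
    neg = [-a for a in pos]                      # -step, -2*step, ...
    order, i, j, side = [0], 0, 0, 0
    while i < len(pos) or j < len(neg):
        if side == 0 and i < len(pos):
            order.extend(pos[i:i + group]); i += group
        elif j < len(neg):
            order.extend(neg[j:j + group]); j += group
        side = 1 - side
    return order

print(dose_symmetric_order(9, 3, group=2))  # → [0, 3, 6, -3, -6, 9, -9]
```

Because radiation damage accumulates with fluence, this ordering spends the earliest, least-damaged exposures on the low tilts that carry the most high-resolution information.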


Cryoelectron Microscopy , Electron Microscope Tomography , Electron Microscope Tomography/methods , Cryoelectron Microscopy/methods , Image Processing, Computer-Assisted/methods , Fourier Analysis , Signal-To-Noise Ratio
15.
PLoS One ; 19(6): e0298698, 2024.
Article En | MEDLINE | ID: mdl-38829850

With the rapid development of drone technology, aerial imagery has gradually penetrated various industries. However, variable drone speeds leave captured images shadowed, blurred, and partially obscured, and varying flight altitudes change target scales, making small targets difficult to detect and identify. To solve these problems, an improved ASG-YOLOv5 model is proposed in this paper. First, a dynamic contextual attention module uses feature scores to dynamically assign feature weights and outputs feature information along the channel dimension, improving the model's attention to small-target features and increasing the network's ability to extract contextual information. Second, a spatial gating filtering multi-directional weighted fusion module applies spatial filtering and weighted bidirectional fusion in the multi-scale fusion stage to improve the characterization of weak targets, reduce interference from redundant information, and better adapt to the detection of weak targets in UAV remote sensing imagery. Meanwhile, combining the Normalized Wasserstein Distance with the CIoU regression loss function models each regression box as a Gaussian distribution to obtain a similarity metric between boxes, which smooths the positional differences of small targets and addresses their extreme sensitivity to positional deviation, effectively improving the model's detection accuracy for small targets. The model was trained and tested on the VisDrone2021 and AI-TOD datasets, and the NWPU-RESISC dataset was used for visual detection validation.
The experimental results show that ASG-YOLOv5 detects targets in UAV remote sensing aerial images more effectively: it reaches 86 frames per second (FPS), meeting the requirement of real-time small-target detection, adapts well to weak and small targets in the aerial image datasets, and outperforms many existing target detection methods with a detection accuracy of 21.1% mAP, improvements of 2.9% and 1.4%, respectively, over the YOLOv5 model. The project is available at https://github.com/woaini-shw/asg-yolov5.git.
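The Normalized Wasserstein Distance used above models each box as a 2D Gaussian; for axis-aligned boxes the squared 2-Wasserstein distance has a closed form, and the similarity is its exponentially normalized value (a sketch of Wang et al.'s NWD formulation; the constant `c` is dataset-dependent and chosen arbitrarily here):

```python
import math

def nwd(box1, box2, c=12.8):
    """Normalized Wasserstein Distance between boxes given as (cx, cy, w, h).

    Each box is modelled as a Gaussian N((cx, cy), diag(w^2/4, h^2/4)); the
    squared 2-Wasserstein distance between two such Gaussians is the squared
    Euclidean distance between their (cx, cy, w/2, h/2) vectors.
    """
    cx1, cy1, w1, h1 = box1
    cx2, cy2, w2, h2 = box2
    w2_sq = ((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
             + ((w1 - w2) / 2) ** 2 + ((h1 - h2) / 2) ** 2)
    return math.exp(-math.sqrt(w2_sq) / c)

box = (10.0, 10.0, 4.0, 4.0)
print(nwd(box, box))                     # → 1.0 (identical boxes)
print(nwd(box, (13.0, 14.0, 4.0, 4.0)))  # decays smoothly as the boxes drift apart
```

Unlike IoU, which drops to zero the moment tiny boxes stop overlapping, NWD varies smoothly with positional offset, which is exactly what reduces the loss's sensitivity to small-target position deviation.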


Remote Sensing Technology , Unmanned Aerial Devices , Remote Sensing Technology/methods , Remote Sensing Technology/instrumentation , Algorithms , Image Processing, Computer-Assisted/methods
16.
PLoS One ; 19(6): e0304789, 2024.
Article En | MEDLINE | ID: mdl-38829858

Malaria is a deadly disease that is transmitted through mosquito bites. Microscopists use a microscope to examine thin blood smears at high magnification (1000x) to identify parasites in red blood cells (RBCs). Estimating parasitemia is essential in determining the severity of the Plasmodium falciparum infection and guiding treatment. However, this process is time-consuming, labor-intensive, and subject to variation, which can directly affect patient outcomes. In this retrospective study, we compared three methods for measuring parasitemia from a collection of anonymized thin blood smears of patients with Plasmodium falciparum obtained from the Clinical Department of Parasitology-Mycology, National Reference Center (NRC) for Malaria in Paris, France. We first analyzed the impact of the number of field images on parasitemia count using our framework, MALARIS, which features a top-classifier convolutional neural network (CNN). Additionally, we studied the variation between different microscopists using two manual techniques to demonstrate the need for a reliable and reproducible automated system. Finally, we included thin blood smear images from an additional 102 patients to compare the performance and correlation of our system with manual microscopy and flow cytometry. Our results showed strong correlations between the three methods, with a coefficient of determination between 0.87 and 0.92.
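The coefficient of determination quoted above can be computed directly; a stdlib sketch (with made-up parasitemia values for illustration, not NRC data):

```python
def r_squared(y_ref, y_est):
    """Coefficient of determination of estimates y_est against reference y_ref."""
    mean = sum(y_ref) / len(y_ref)
    ss_res = sum((r - e) ** 2 for r, e in zip(y_ref, y_est))
    ss_tot = sum((r - mean) ** 2 for r in y_ref)
    return 1.0 - ss_res / ss_tot

# hypothetical parasitemia (%) by manual microscopy vs. an automated pipeline
manual = [0.5, 1.2, 3.8, 7.5, 12.0]
auto = [0.6, 1.0, 4.1, 7.2, 11.5]
print(round(r_squared(manual, auto), 3))  # → 0.995
```

Note that R² is asymmetric in which series is treated as the reference, so method-comparison studies typically state which measurement (e.g., manual counts or flow cytometry) serves as the reference.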


Malaria, Falciparum , Microscopy , Parasitemia , Plasmodium falciparum , Humans , Plasmodium falciparum/isolation & purification , Parasitemia/diagnosis , Parasitemia/blood , Parasitemia/parasitology , Malaria, Falciparum/diagnosis , Malaria, Falciparum/blood , Malaria, Falciparum/parasitology , Retrospective Studies , Microscopy/methods , Erythrocytes/parasitology , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Flow Cytometry/methods
17.
PLoS One ; 19(6): e0300976, 2024.
Article En | MEDLINE | ID: mdl-38829868

Multi beam forward looking sonar plays an important role in underwater detection. However, due to the complex underwater environment, unclear features, and susceptibility to noise interference, most forward looking sonar systems have poor recognition performance. The research on MFLS for underwater target detection faces some challenges. Therefore, this study proposes innovative improvements to the YOLOv5 algorithm to address the above issues. On the basis of maintaining the original YOLOv5 architecture, this improved model introduces transfer learning technology to overcome the limitation of scarce sonar image data. At the same time, by incorporating the concept of coordinate convolution, the improved model can extract features with rich positional information, significantly enhancing the model's detection ability for small underwater targets. Furthermore, in order to solve the problem of feature extraction in forward looking sonar images, this study integrates attention mechanisms. This mechanism expands the receptive field of the model and optimizes the feature learning process by highlighting key details while suppressing irrelevant information. These improvements not only enhance the recognition accuracy of the model for sonar images, but also enhance its applicability and generalization performance in different underwater environments. In response to the common problem of uneven training sample quality in forward looking sonar imaging technology, this study made a key improvement to the classic YOLOv5 algorithm. By adjusting the bounding box loss function of YOLOv5, the model's over sensitivity to low-quality samples was reduced, thereby reducing the punishment on these samples. 
In a series of comparative experiments, the proposed CCW-YOLOv5 algorithm achieved a detection accuracy (mAP@0.5) of 85.3%, with a fastest inference speed of 54 FPS on the local machine, a clear improvement over existing state-of-the-art algorithms.
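The coordinate convolution mentioned above (CoordConv) works by appending normalized position channels to a feature map before convolution, so filters can learn location-aware features. Below is a minimal numpy sketch of that channel augmentation, offered as an illustration of the idea rather than the paper's actual CCW-YOLOv5 code; the function name and shapes are our own assumptions.

```python
import numpy as np

def add_coord_channels(feature_map):
    """Append normalized y/x coordinate channels to a (C, H, W) feature
    map, the core idea of CoordConv: every pixel carries its own
    position in [-1, 1], so subsequent filters can see location."""
    c, h, w = feature_map.shape
    ys = np.linspace(-1.0, 1.0, h).reshape(h, 1).repeat(w, axis=1)
    xs = np.linspace(-1.0, 1.0, w).reshape(1, w).repeat(h, axis=0)
    return np.concatenate([feature_map, ys[None], xs[None]], axis=0)

fmap = np.zeros((8, 4, 6), dtype=np.float32)   # toy 8-channel feature map
out = add_coord_channels(fmap)                 # now 10 channels
```

A convolution applied to `out` can then condition on position, which is what helps localize small targets in an otherwise low-contrast sonar image.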


Algorithms , Image Processing, Computer-Assisted/methods , Sound
18.
PLoS One ; 19(6): e0304716, 2024.
Article En | MEDLINE | ID: mdl-38829872

Optical microscopy videos enable experts to analyze the motion of several biological elements. In blood samples infected with Trypanosoma cruzi (T. cruzi) in particular, microscopy videos reveal a dynamic scene in which the parasites' motion is conspicuous. While parasites are self-propelled, cells are inert and undergo displacement only during dynamic events such as fluid flow and microscope focus adjustments. This paper analyzes the trajectories of T. cruzi and blood cells to discriminate between these elements by identifying three motion patterns: collateral, fluctuating, and pan-tilt-zoom (PTZ). We consider two approaches: (i) classification experiments to discriminate parasites from cells, and (ii) clustering experiments to identify cell motion. We propose the trajectory step dispersion (TSD) descriptor, based on standard deviation, to characterize these elements; it outperforms state-of-the-art descriptors. Our results confirm that motion is valuable for discriminating T. cruzi from blood cells. Since the parasites perform collateral motion, their trajectory steps tend toward randomness. The cells may exhibit fluctuating motion, following a homogeneous directional path, or PTZ motion, with trajectory steps confined to a restricted area. These findings may contribute to new computational tools for trajectory analysis, which can advance the study and medical diagnosis of Chagas disease.
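One plausible reading of a step-dispersion descriptor is the standard deviation of consecutive displacement lengths along a trajectory: an erratic, parasite-like path has widely dispersed step lengths, while an inert cell drifts in near-uniform steps. The sketch below illustrates only that idea; the paper's exact TSD definition is not given in this abstract, and the function name is ours.

```python
import numpy as np

def trajectory_step_dispersion(points):
    """Standard deviation of consecutive step lengths along a 2-D
    trajectory. High dispersion suggests erratic, parasite-like motion;
    near-zero dispersion suggests the uniform drift of an inert cell.
    Illustrative only: not the paper's exact TSD formula."""
    pts = np.asarray(points, dtype=float)
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return float(steps.std())

drift = [(i, 0.0) for i in range(10)]                     # uniform unit steps
erratic = [(0, 0), (3, 1), (3.5, 4), (9, 4.2), (9.1, 9)]  # uneven steps
```

Thresholding or clustering such a scalar per trajectory is one simple way to separate self-propelled elements from passively drifting ones.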


Microscopy, Video , Trypanosoma cruzi , Trypanosoma cruzi/physiology , Microscopy, Video/methods , Chagas Disease/parasitology , Humans , Image Processing, Computer-Assisted/methods
19.
Sci Rep ; 14(1): 12699, 2024 06 03.
Article En | MEDLINE | ID: mdl-38830932

Medical image segmentation has contributed significantly to affordable healthcare by enabling the automatic identification of anatomical structures and other regions of interest. Although convolutional neural networks have become prominent in medical image segmentation, they suffer from certain limitations. In this study, we present a reliable framework for segmenting pathological structures in 2D medical images. Our framework is built around a novel deep learning architecture, the deep multi-level attention dilated residual neural network (MADR-Net), designed to improve medical image segmentation performance. MADR-Net uses a U-Net encoder/decoder backbone combined with multi-level residual blocks and atrous pyramid scene parsing pooling. To improve segmentation results, channel-spatial attention blocks were added to the skip connections to capture both global and local features, and the bottleneck layer was replaced with an ASPP block. We further introduce a hybrid loss function with excellent convergence properties that enhances performance on the segmentation task. We extensively validated MADR-Net on four typical yet challenging medical image segmentation tasks: (1) left ventricle, left atrium, and myocardial wall segmentation from echocardiography images in the CAMUS dataset; (2) skin cancer segmentation from dermoscopy images in the ISIC 2017 dataset; (3) electron microscopy segmentation in the FIB-SEM dataset; and (4) fluid-attenuated inversion recovery abnormality segmentation from MR images in the LGG dataset. The proposed algorithm yielded significant results compared to state-of-the-art architectures such as U-Net, Residual U-Net, and Attention U-Net.
MADR-Net consistently outperformed the classical U-Net, with relative improvements in Dice coefficient of 5.43%, 3.43%, and 3.92% for electron microscopy, dermoscopy, and MRI, respectively. The experimental results demonstrate superior performance on single- and multi-class datasets, and MADR-Net can serve as a baseline for cross-dataset assessment and segmentation tasks.
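For context, the Dice coefficient used to report the improvements above measures overlap between a predicted mask and the ground truth, and one common hybrid segmentation loss combines a soft Dice term with binary cross-entropy. The abstract names a hybrid loss but not its exact formula, so the combination sketched below is an assumption, not MADR-Net's actual definition.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between binary prediction and ground-truth masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def hybrid_loss(probs, target, eps=1e-7):
    """(1 - soft Dice) + binary cross-entropy on predicted probabilities.
    A common hybrid form; the abstract does not give MADR-Net's exact one."""
    inter = (probs * target).sum()
    dice = (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)
    bce = -np.mean(target * np.log(probs + eps)
                   + (1.0 - target) * np.log(1.0 - probs + eps))
    return (1.0 - dice) + bce

mask = np.array([[1, 1, 0], [0, 1, 0]])   # ground truth
pred = np.array([[1, 0, 0], [0, 1, 0]])   # prediction missing one pixel
```

The Dice term directly rewards region overlap while the cross-entropy term keeps per-pixel gradients well behaved, which is one reason such combinations converge well.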


Deep Learning , Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods , Algorithms , Magnetic Resonance Imaging/methods
20.
J Robot Surg ; 18(1): 237, 2024 Jun 04.
Article En | MEDLINE | ID: mdl-38833204

A major obstacle to applying machine learning in medicine is the disparity between the distribution of the training images and the data encountered in clinics, a consequence of inconsistent acquisition techniques and large variation across the patient spectrum. The result is poor translation of trained models to the clinic, which limits their adoption in medical practice. Patient-specific trained networks could provide a potential solution. Although patient-specific approaches are usually infeasible because of the expense of on-the-fly labeling, generative adversarial networks (GANs) make this approach practical. This study proposes a patient-specific approach based on GANs: in the presented training pipeline, the user trains a patient-specific segmentation network with extremely limited data, supplemented with artificial samples produced by generative adversarial models. The approach is demonstrated on endoscopic video captured during fetoscopic laser coagulation, a procedure that treats twin-to-twin transfusion syndrome by ablating the placental blood vessels. The pipeline achieved an intersection-over-union (IoU) score of 0.60 using only 20 annotated images, whereas a standard deep learning segmentation approach required 100 images to reach the same score. Training on the same 20 annotated images without the pipeline yields an IoU of 0.30, so incorporating the pipeline corresponds to a 100% increase in performance. In summary, a GAN-based pipeline generates artificial data that supplements the real data, enabling patient-specific training of a segmentation network.
We show that artificial images generated with GANs significantly improve vessel segmentation performance and that training patient-specific models can be a viable route to bringing automated vessel segmentation into the clinic.
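The intersection-over-union scores quoted above compare a predicted mask against the ground truth; a minimal sketch of the metric follows (our own helper, not the authors' code).

```python
import numpy as np

def intersection_over_union(pred, target):
    """IoU (Jaccard index) between two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union) if union else 1.0

gt       = np.array([[1, 1, 0, 0]])
detected = np.array([[1, 0, 1, 0]])   # one hit, one miss, one false positive
iou = intersection_over_union(detected, gt)   # 1 intersecting / 3 in union
```

On this scale, the gain reported above from 0.30 to 0.60 is a relative improvement of (0.60 - 0.30) / 0.30 = 100%.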


Placenta , Humans , Pregnancy , Placenta/blood supply , Placenta/diagnostic imaging , Female , Deep Learning , Image Processing, Computer-Assisted/methods , Fetofetal Transfusion/surgery , Fetofetal Transfusion/diagnostic imaging , Machine Learning , Robotic Surgical Procedures/methods , Neural Networks, Computer
...