Results 1 - 20 of 25
1.
Sci Rep ; 14(1): 18478, 2024 08 09.
Article in English | MEDLINE | ID: mdl-39122782

ABSTRACT

Inverse problems in biomedical image analysis represent a significant frontier in disease detection, leveraging computational methodologies and mathematical modelling to unravel complex data embedded within medical images. These problems include deducing the unknown properties of biological structures or tissues from the observed imaging data, presenting a unique challenge in decoding intricate biological phenomena. Regarding disease detection, this technique has played a critical role in optimizing diagnostic efficiency by extracting meaningful insights from different imaging modalities like molecular imaging, MRI, and CT scans. Inverse problems contribute to uncovering subtle abnormalities by employing iterative optimization techniques and sophisticated algorithms, enabling precise and early disease detection. Deep learning (DL) solutions have emerged as robust mechanisms for addressing inverse problems in biomedical image analysis, especially in disease recognition. Inverse problems involve reconstructing unknown structures or parameters from observed data, and the DL model excels in learning complex representations and mappings. This study develops a DL Solution for Inverse Problems in the Advanced Biomedical Image Analysis on Disease Detection (DLSIP-ABIADD) technique. The DLSIP-ABIADD technique exploits the DL approach to solve inverse problems and detect the presence of diseases on biomedical images. To solve the inverse problem, the DLSIP-ABIADD technique uses a direct mapping approach. Bilateral filtering (BF) is used for image preprocessing. Besides, the MobileNetv2 model derives feature vectors from the input images. Moreover, the Henry gas solubility optimization (HGSO) method is applied for optimal hyperparameter selection of the MobileNetv2 model. Furthermore, a bidirectional long short-term memory (BiLSTM) model is deployed to identify diseases in medical images. 
Extensive simulations were performed to demonstrate the improved performance of the DLSIP-ABIADD technique. The experimental outcomes show that the DLSIP-ABIADD technique performs better than other models.
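The abstract names bilateral filtering (BF) as the preprocessing stage. As a hedged illustration of what that stage does — smoothing while preserving edges by combining a spatial and an intensity Gaussian — here is a minimal NumPy sketch; the radius and sigma values are illustrative choices, not parameters from the paper:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Minimal bilateral filter: each output pixel is a weighted mean of
    its neighbourhood, weighted by both spatial distance and intensity
    difference, so edges are preserved while flat regions are smoothed."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1].astype(float)
            yy, xx = np.mgrid[y0:y1, x0:x1]
            spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            rng = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * rng
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out
```

With a small `sigma_r`, pixels on the other side of a sharp intensity edge receive near-zero weight, which is why the filter denoises without blurring structure boundaries.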


Subject(s)
Algorithms; Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Tomography, X-Ray Computed/methods; Image Interpretation, Computer-Assisted/methods
2.
Sci Rep ; 14(1): 11588, 2024 05 21.
Article in English | MEDLINE | ID: mdl-38773207

ABSTRACT

Current assessment methods for diabetic foot ulcers (DFUs) lack objectivity and consistency, posing a significant risk to diabetes patients, including the potential for amputations, highlighting the urgent need for improved diagnostic tools and care standards in the field. To address this issue, the objective of this study was to develop and evaluate the Smart Diabetic Foot Ulcer Scoring System, ScoreDFUNet, which incorporates artificial intelligence (AI) and image analysis techniques, aiming to enhance the precision and consistency of diabetic foot ulcer assessment. ScoreDFUNet demonstrates precise categorization of DFU images into "ulcer," "infection," "normal," and "gangrene" areas, achieving a noteworthy accuracy rate of 95.34% on the test set, with elevated levels of precision, recall, and F1 scores. Comparative evaluations with dermatologists affirm that our algorithm consistently surpasses the performance of junior and mid-level dermatologists, closely matching the assessments of senior dermatologists, and rigorous analyses including Bland-Altman plots and significance testing validate the robustness and reliability of our algorithm. This innovative AI system presents a valuable tool for healthcare professionals and can significantly improve the care standards in the field of diabetic foot ulcer assessment.
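The reported accuracy, precision, recall, and F1 scores can all be derived from predicted versus reference labels over the four categories. A minimal sketch — the category names follow the abstract, but the sample data in the usage is invented for illustration:

```python
import numpy as np

def per_class_metrics(y_true, y_pred, classes):
    """Overall accuracy plus per-class (precision, recall, F1) computed
    from parallel lists of reference and predicted labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = float((y_true == y_pred).mean())
    scores = {}
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[c] = (prec, rec, f1)
    return acc, scores
```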


Subject(s)
Algorithms; Artificial Intelligence; Diabetic Foot; Diabetic Foot/diagnosis; Diabetic Foot/pathology; Humans; Reproducibility of Results; Image Processing, Computer-Assisted/methods; Severity of Illness Index
3.
Comput Struct Biotechnol J ; 21: 3696-3704, 2023.
Article in English | MEDLINE | ID: mdl-37560127

ABSTRACT

The assessment of muscle condition is of great importance in various research areas. In particular, evaluating the degree of intramuscular fat (IMF) in tissue sections is a challenging task, which today is still mostly performed qualitatively or quantitatively by a highly subjective and error-prone manual analysis. Here we make automated IMF analysis possible in a way that (i) minimizes subjectivity, (ii) provides accurate and quantitative results quickly, and (iii) is cost-effective, using standard hematoxylin and eosin (H&E) stained tissue sections. To address all these needs in a deep learning approach, we utilized the convolutional encoder-decoder network SegNet to train the specialized network IMFSegNet, which accurately quantifies the spatial distribution of IMF in histological sections. Our fully automated analysis was validated on 17 H&E-stained muscle sections from individual sheep and compared to various state-of-the-art approaches. Not only does IMFSegNet outperform all other approaches, but this neural network also provides fully automated and highly accurate results utilizing the most cost-effective procedures of sample preparation and imaging. Furthermore, we shed light on the opacity of black-box approaches such as neural networks by applying an explainable artificial intelligence technique, clarifying that the success of IMFSegNet actually lies in identifying the hard-to-detect IMF structures. Embedded in our open-source visual programming language JIPipe, which does not require programming skills, IMFSegNet can be expected to advance muscle condition assessment in basic research across multiple areas as well as in research fields focusing on translational clinical applications.

4.
Front Bioinform ; 3: 1194993, 2023.
Article in English | MEDLINE | ID: mdl-37484865

ABSTRACT

Artificial Intelligence (AI) has achieved remarkable success in image generation, image analysis, and language modeling, making data-driven techniques increasingly relevant in practical real-world applications, promising enhanced creativity and efficiency for human users. However, the deployment of AI in high-stakes domains such as infrastructure and healthcare still raises concerns regarding algorithm accountability and safety. The emerging field of explainable AI (XAI) has made significant strides in developing interfaces that enable humans to comprehend the decisions made by data-driven models. Among these approaches, concept-based explainability stands out due to its ability to align explanations with high-level concepts familiar to users. Nonetheless, early research in adversarial machine learning has unveiled that exposing model explanations can render victim models more susceptible to attacks. This is the first study to investigate and compare the impact of concept-based explanations on the privacy of Deep Learning based AI models in the context of biomedical image analysis. An extensive privacy benchmark is conducted on three different state-of-the-art model architectures (ResNet50, NFNet, ConvNeXt) trained on two biomedical (ISIC and EyePACS) and one synthetic dataset (SCDB). The success of membership inference attacks while exposing varying degrees of attribution-based and concept-based explanations is systematically compared. The findings indicate that, in theory, concept-based explanations can potentially increase the vulnerability of a private AI system by up to 16% compared to attributions in the baseline setting. However, it is demonstrated that, in more realistic attack scenarios, the threat posed by explanations is negligible in practice. Furthermore, actionable recommendations are provided to ensure the safe deployment of concept-based XAI systems. 
In addition, the impact of differential privacy (DP) on the quality of concept-based explanations is explored, revealing that while DP negatively influences explanation quality, it has a beneficial effect on the models' privacy.
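The membership inference attacks benchmarked above exploit the tendency of models to be more confident on their training members than on unseen data. A toy, NumPy-only illustration of the simplest confidence-thresholding variant — the threshold and confidence values are invented for illustration, not taken from the study:

```python
import numpy as np

def confidence_attack(member_conf, nonmember_conf, threshold=0.9):
    """Toy membership-inference attack: predict 'member' whenever the
    model's top-class confidence exceeds a threshold, and return the
    attack's accuracy over the combined member/non-member pool."""
    conf = np.concatenate([member_conf, nonmember_conf])
    is_member = np.concatenate(
        [np.ones(len(member_conf)), np.zeros(len(nonmember_conf))])
    guess = (conf > threshold).astype(float)
    return float((guess == is_member).mean())
```

An attack accuracy near 0.5 means the model leaks little membership information; values well above 0.5 indicate a privacy risk of the kind the benchmark quantifies.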

5.
Med Image Anal ; 86: 102765, 2023 05.
Article in English | MEDLINE | ID: mdl-36965252

ABSTRACT

Challenges have become the state-of-the-art approach to benchmark image analysis algorithms in a comparative manner. While the validation on identical data sets was a great step forward, results analysis is often restricted to pure ranking tables, leaving relevant questions unanswered. Specifically, little effort has been put into the systematic investigation on what characterizes images in which state-of-the-art algorithms fail. To address this gap in the literature, we (1) present a statistical framework for learning from challenges and (2) instantiate it for the specific task of instrument instance segmentation in laparoscopic videos. Our framework relies on the semantic meta data annotation of images, which serves as foundation for a General Linear Mixed Models (GLMM) analysis. Based on 51,542 meta data annotations performed on 2,728 images, we applied our approach to the results of the Robust Medical Instrument Segmentation Challenge (ROBUST-MIS) challenge 2019 and revealed underexposure, motion and occlusion of instruments as well as the presence of smoke or other objects in the background as major sources of algorithm failure. Our subsequent method development, tailored to the specific remaining issues, yielded a deep learning model with state-of-the-art overall performance and specific strengths in the processing of images in which previous methods tended to fail. Due to the objectivity and generic applicability of our approach, it could become a valuable tool for validation in the field of medical image analysis and beyond.


Subject(s)
Algorithms; Laparoscopy; Humans; Image Processing, Computer-Assisted/methods
6.
Data Brief ; 46: 108769, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36506804

ABSTRACT

Automated detection of cell nuclei in fluorescence microscopy images is a key task in bioimage analysis. It is essential for most types of microscopy-based high-throughput drug and genomic screening and is often required in smaller scale experiments as well. To develop and evaluate algorithms and neural networks that perform instance or semantic segmentation for detecting nuclei, high-quality annotated data is essential. Here we present a benchmarking dataset of fluorescence microscopy images with Hoechst 33342-stained nuclei together with annotations of nuclei, nuclear fragments and micronuclei. Images were randomly selected from an RNA interference screen with a modified U2OS osteosarcoma cell line, acquired on a Thermo Fisher CX7 high-content imaging system at 20x magnification. Labelling was performed by a single annotator and reviewed by a biomedical expert. The dataset, called Aitslab-bioimaging1, contains 50 images showing over 2000 labelled nuclear objects in total, which is sufficiently large to train well-performing neural networks for instance or semantic segmentation. The dataset is split into training, development and test sets for user convenience.
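The abstract states that the dataset is split into training, development and test sets but does not give the ratios; the 70/15/15 split below is an assumed example of how such a partition can be made reproducibly with a fixed seed:

```python
import random

def split_dataset(items, fractions=(0.7, 0.15, 0.15), seed=0):
    """Shuffle a list of items deterministically and split it into
    train/dev/test partitions according to the given fractions."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    items = list(items)
    random.Random(seed).shuffle(items)  # seeded shuffle for reproducibility
    n = len(items)
    n_train = int(fractions[0] * n)
    n_dev = int(fractions[1] * n)
    return (items[:n_train],
            items[n_train:n_train + n_dev],
            items[n_train + n_dev:])
```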

7.
Med Image Anal ; 80: 102500, 2022 08.
Article in English | MEDLINE | ID: mdl-35667329

ABSTRACT

Exploiting well-labeled training sets has led deep learning models to astonishing results for counting biological structures in microscopy images. However, dealing with weak multi-rater annotations, i.e., when multiple human raters disagree due to non-trivial patterns, remains a relatively unexplored problem. More reliable labels can be obtained by aggregating and averaging the decisions given by several raters to the same data. Still, the scale of the counting task and the limited budget for labeling prohibit this. As a result, making the most with small quantities of multi-rater data is crucial. To this end, we propose a two-stage counting strategy in a weakly labeled data scenario. First, we detect and count the biological structures; then, in the second step, we refine the predictions, increasing the correlation between the scores assigned to the samples and the raters' agreement on the annotations. We assess our methodology on a novel dataset comprising fluorescence microscopy images of mice brains containing extracellular matrix aggregates named perineuronal nets. We demonstrate that we significantly enhance counting performance, improving confidence calibration by taking advantage of the redundant information characterizing the small sets of available multi-rater data.


Subject(s)
Uncertainty; Animals; Humans; Mice
8.
Biosystems ; 211: 104557, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34634444

ABSTRACT

Cell segmentation is a major bottleneck in extracting quantitative single-cell information from microscopy data. The challenge is exacerbated in the setting of microstructured environments. While deep learning approaches have proven useful for general cell segmentation tasks, previously available segmentation tools for the yeast-microstructure setting rely on traditional machine learning approaches. Here we present convolutional neural networks trained for multiclass segmentation of individual yeast cells and for discerning these from cell-similar microstructures. A U-Net-based semantic segmentation approach, as well as a direct instance segmentation approach with a Mask R-CNN, are demonstrated. We give an overview of the datasets recorded for training, validating and testing the networks, as well as a typical use case. We showcase the methods' contribution to segmenting yeast in microstructured environments with a typical systems or synthetic biology application. The models achieve robust segmentation results, outperforming the previous state of the art in both accuracy and speed. The combination of fast and accurate segmentation is not only beneficial for a posteriori data processing, it also makes online monitoring of thousands of trapped cells or closed-loop optimal experimental design feasible from an image processing perspective. Code and data samples are available at https://git.rwth-aachen.de/bcs/projects/tp/multiclass-yeast-seg.
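Segmentation accuracy claims of this kind are typically scored with intersection-over-union (IoU) between predicted and reference masks; the abstract does not name its exact metric, so the following is a generic illustration of that standard score:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union between two boolean segmentation masks."""
    mask_a = np.asarray(mask_a, bool)
    mask_b = np.asarray(mask_b, bool)
    union = np.logical_or(mask_a, mask_b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(mask_a, mask_b).sum() / union)
```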


Subject(s)
Deep Learning; Saccharomyces cerevisiae/cytology; Microscopy; Neural Networks, Computer
9.
Diagnostics (Basel) ; 11(11)2021 Oct 27.
Article in English | MEDLINE | ID: mdl-34829341

ABSTRACT

We propose a new framework, PlasmodiumVF-Net, to analyze thick smear microscopy images for a malaria diagnosis on both image and patient-level. Our framework detects whether a patient is infected, and in case of a malarial infection, reports whether the patient is infected by Plasmodium falciparum or Plasmodium vivax. PlasmodiumVF-Net first detects candidates for Plasmodium parasites using a Mask Regional-Convolutional Neural Network (Mask R-CNN), filters out false positives using a ResNet50 classifier, and then follows a new approach to recognize parasite species based on a score obtained from the number of detected patches and their aggregated probabilities for all of the patient images. Reporting a patient-level decision is highly challenging, and therefore reported less often in the literature, due to the small size of detected parasites, the similarity to staining artifacts, the similarity of species in different development stages, and illumination or color variations on patient-level. We use a manually annotated dataset consisting of 350 patients, with about 6000 images, which we make publicly available together with this manuscript. Our framework achieves an overall accuracy above 90% on image and patient-level.
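The patient-level decision described above combines the number of detected patches with their aggregated probabilities across all of a patient's images. A simplified, hypothetical version of such an aggregation rule — the paper's actual scoring and thresholds are not given in the abstract, so `min_patches` and `prob_threshold` are illustrative:

```python
def patient_decision(patch_probs, min_patches=3, prob_threshold=0.5):
    """Aggregate per-patch parasite probabilities from all of a patient's
    images into a patient-level call: require a minimum count of confident
    detections, then report their mean probability as the score."""
    positives = [p for p in patch_probs if p > prob_threshold]
    if len(positives) < min_patches:
        return "uninfected", 0.0
    score = sum(positives) / len(positives)  # aggregated confidence
    return "infected", score
```

A count-plus-probability rule of this shape is robust to isolated false positives such as staining artifacts, which is the motivation the abstract gives for aggregating over all patient images.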

10.
Eur Urol Focus ; 7(4): 710-712, 2021 07.
Article in English | MEDLINE | ID: mdl-34120881

ABSTRACT

With the impact of artificial intelligence (AI) algorithms on medical research on the rise, the importance of competitions for comparative validation of algorithms, so-called challenges, has been steadily increasing, to the point at which challenges can be considered major drivers of research, particularly in the biomedical image analysis domain. Given their importance, high quality, transparency, and interpretability of challenges are essential for good scientific practice and meaningful validation of AI algorithms, for instance towards clinical translation. This mini-review presents several issues related to the design, execution, and interpretation of challenges in the biomedical domain and provides best-practice recommendations. PATIENT SUMMARY: This paper presents recommendations on how to reliably compare the usefulness of new artificial intelligence methods for the analysis of medical images.


Subject(s)
Artificial Intelligence; Biomedical Research; Algorithms; Humans
11.
BMC Oral Health ; 21(1): 185, 2021 04 12.
Article in English | MEDLINE | ID: mdl-33845806

ABSTRACT

High-resolution micro-computed tomography is a powerful tool to analyze and visualize the internal morphology of human permanent teeth. It is increasingly used for investigation of epidemiological questions to provide the dentist with the necessary information required for successful endodontic treatment. The aim of the present paper was to propose an image processing method to automate parts of the work needed to fully describe the internal morphology of human permanent teeth. One hundred and four human teeth were scanned on a high-resolution micro-CT scanner using an automatic specimen changer. Python code in a Jupyter notebook was used to verify and process the scans, prepare the datasets for description of the internal morphology and to measure the apical region of the tooth. The presented method offers an easy, non-destructive, rapid and efficient approach to scan, check and preview tomographic datasets of a large number of teeth. It is a helpful tool for the detailed description and characterization of the internal morphology of human permanent teeth using automated segmentation by means of micro-CT with full reproducibility and high standardization.


Subject(s)
Dental Pulp Cavity; Dentition, Permanent; Humans; Image Processing, Computer-Assisted; Reproducibility of Results; Tooth Root; X-Ray Microtomography
12.
Article in English | MEDLINE | ID: mdl-36589620

ABSTRACT

This paper studies why pathologists can misdiagnose diagnostically challenging breast biopsy cases, using a data set of 240 whole slide images (WSIs). Three experienced pathologists agreed on a consensus reference ground-truth diagnosis for each slide and also a consensus region of interest (ROI) from which the diagnosis could best be made. A study group of 87 other pathologists then diagnosed test sets (60 slides each) and marked their own regions of interest. Diagnoses and ROIs were categorized such that if on a given slide, their ROI differed from the consensus ROI and their diagnosis was incorrect, that ROI was called a distractor. We used the HATNet transformer-based deep learning classifier to evaluate the visual similarities and differences between the true (consensus) ROIs and the distractors. Results showed high accuracy for both the similarity and difference networks, showcasing the challenging nature of feature classification with breast biopsy images. This study is important in the potential use of its results for teaching pathologists how to diagnose breast biopsy slides.

13.
Front Bioeng Biotechnol ; 8: 558880, 2020.
Article in English | MEDLINE | ID: mdl-33117778

ABSTRACT

Various pre-trained deep learning models for the segmentation of bioimages have been made available as developer-to-end-user solutions. They are optimized for ease of use and usually require neither knowledge of machine learning nor coding skills. However, individually testing these tools is tedious and success is uncertain. Here, we present the Open Segmentation Framework (OpSeF), a Python framework for deep learning-based instance segmentation. OpSeF aims at facilitating the collaboration of biomedical users with experienced image analysts. It builds on the analysts' knowledge in Python, machine learning, and workflow design to solve complex analysis tasks at any scale in a reproducible, well-documented way. OpSeF defines standard inputs and outputs, thereby facilitating modular workflow design and interoperability with other software. Users play an important role in problem definition, quality control, and manual refinement of results. OpSeF semi-automates preprocessing, convolutional neural network (CNN)-based segmentation in 2D or 3D, and postprocessing. It facilitates benchmarking of multiple models in parallel. OpSeF streamlines the optimization of parameters for pre- and postprocessing such that an available model may frequently be used without retraining. Even if sufficiently good results are not achievable with this approach, intermediate results can inform the analysts in the selection of the most promising CNN architecture, in which the biomedical user might then invest the effort of manually labeling training data. We provide Jupyter notebooks that document sample workflows based on various image collections. Analysts may find these notebooks useful to illustrate common segmentation challenges, as they prepare the advanced user for gradually taking over some of their tasks and completing their projects independently.
The notebooks may also be used to explore the analysis options available within OpSeF in an interactive way and to document and share final workflows. Currently, three mechanistically distinct CNN-based segmentation methods have been integrated within OpSeF: the U-Net implementation used in CellProfiler 3.0, StarDist, and Cellpose. The addition of new networks requires little coding; the addition of new models requires none. Thus, OpSeF might soon become an interactive model repository, in which pre-trained models might be shared, evaluated, and reused with ease.

14.
Med Image Anal ; 66: 101796, 2020 12.
Article in English | MEDLINE | ID: mdl-32911207

ABSTRACT

The number of biomedical image analysis challenges organized per year is steadily increasing. These international competitions have the purpose of benchmarking algorithms on common data sets, typically to identify the best method for a given problem. Recent research, however, revealed that common practice related to challenge reporting does not allow for adequate interpretation and reproducibility of results. To address the discrepancy between the impact of challenges and their quality (control), the Biomedical Image Analysis ChallengeS (BIAS) initiative developed a set of recommendations for the reporting of challenges. The BIAS statement aims to improve the transparency of the reporting of a biomedical image analysis challenge regardless of field of application, image modality or task category assessed. This article describes how the BIAS statement was developed and presents a checklist which authors of biomedical image analysis challenges are encouraged to include when submitting a challenge paper for review. The purpose of the checklist is to standardize and facilitate the review process and raise interpretability and reproducibility of challenge results by making relevant information explicit.


Subject(s)
Biomedical Research; Checklist; Humans; Prognosis; Reproducibility of Results
15.
Proc IEEE Int Symp Biomed Imaging ; 2020: 758-762, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32802270

ABSTRACT

3D organ contouring is an essential step in radiation therapy treatment planning for organ dose estimation as well as for optimizing plans to reduce organs-at-risk doses. Manual contouring is time-consuming, and its inter-clinician variability adversely affects outcome studies. These organs also vary dramatically in size, with up to two orders of magnitude difference in volume. In this paper, we present BrainSegNet, a novel 3D fully convolutional neural network (FCNN) based approach for automatic segmentation of brain organs. BrainSegNet takes a multiple-resolution-paths approach and uses a weighted loss function to solve the major challenge of the large variability in organ sizes. We evaluated our approach with a dataset of 46 brain CT image volumes with corresponding expert organ contours as reference. Compared with LiviaNet and V-Net, BrainSegNet has superior performance in segmenting tiny or thin organs, such as the chiasm, optic nerves, and cochlea, and outperforms these methods in segmenting large organs as well. BrainSegNet can reduce the manual contouring time of a volume from an hour to less than two minutes, and holds high potential to improve the efficiency of radiation therapy workflows.
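A weighted loss of the kind described can counteract the two-orders-of-magnitude volume differences by weighting each class inversely to its organ size. BrainSegNet's exact loss formulation is not given in the abstract; the sketch below is a generic inverse-volume-weighted cross-entropy for illustration:

```python
import numpy as np

def weighted_cross_entropy(probs, labels, organ_voxels):
    """Cross-entropy over voxels where each class is weighted by the
    inverse of its organ volume, so tiny organs (e.g. chiasm, optic
    nerves) are not drowned out by large ones.
    probs: (n_voxels, n_classes) predicted probabilities.
    labels: (n_voxels,) integer class per voxel.
    organ_voxels: per-class organ volume in voxels."""
    weights = 1.0 / np.asarray(organ_voxels, dtype=float)
    weights /= weights.sum()  # normalize class weights to sum to 1
    labels = np.asarray(labels)
    p = np.clip(probs[np.arange(len(labels)), labels], 1e-12, 1.0)
    return float(-(weights[labels] * np.log(p)).mean())
```

Under this weighting, a misprediction on a tiny organ costs far more than the same misprediction on a large organ, which is the intended corrective effect.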

16.
Neuroinformatics ; 18(3): 479-508, 2020 06.
Article in English | MEDLINE | ID: mdl-32107735

ABSTRACT

Neuron shape and connectivity affect function. Modern imaging methods have proven successful at extracting morphological information. One potential path to achieve analysis of this morphology is through graph theory. Encoding by graphs enables the use of high-throughput informatic methods to extract and infer brain function. However, the application of graph-theoretic methods to neuronal morphology comes with certain challenges in terms of complex subgraph matching and the difficulty of computing intermediate shapes between two imaged temporal samples. Here we report a novel, efficacious graph-theoretic method that rises to these challenges. The morphology of a neuron, which consists of its overall size, global shape, local branch patterns, and cell-specific biophysical properties, can vary significantly with the cell's identity, location, as well as developmental and physiological state. Various algorithms have been developed to customize shape-based statistical and graph-related features for quantitative analysis of neuromorphology, followed by the classification of neuron cell types using the features. Unlike the classical feature-extraction-based methods from imaged or 3D-reconstructed neurons, we propose a model based on the rooted path decomposition from the soma to the dendrites of a neuron and extract morphological features from each constituent path. We hypothesize that measuring the distance between two neurons can be realized by minimizing the cost of continuously morphing the set of all rooted paths of one neuron to another. To validate this claim, we first establish the correspondence of paths between two neurons using a modified Munkres algorithm. Next, an elastic deformation framework that employs the square root velocity function is established to perform the continuous morphing, which, as an added benefit, provides an effective visualization tool. We experimentally show the efficacy of NeuroPath2Path (NeuroP2P) over the state of the art.
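The square root velocity function used in the elastic framework maps a sampled path f to q = f'/sqrt(|f'|), a representation under which elastic distances between curves become tractable. A minimal NumPy sketch of that transform (the discretization via `np.gradient` is an illustrative choice):

```python
import numpy as np

def srvf(path, dt=1.0):
    """Square root velocity function of a sampled path:
    q(t) = f'(t) / sqrt(|f'(t)|), computed with finite differences.
    `path` is an (n_samples, dim) array of points along the path."""
    v = np.gradient(np.asarray(path, float), dt, axis=0)   # velocity f'
    speed = np.linalg.norm(v, axis=-1, keepdims=True)       # |f'|
    speed = np.where(speed < 1e-12, 1e-12, speed)           # avoid /0
    return v / np.sqrt(speed)
```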


Subject(s)
Algorithms; Neurons/classification; Neurons/cytology; Animals; Humans; Models, Neurological
17.
Cytometry A ; 97(3): 226-240, 2020 03.
Article in English | MEDLINE | ID: mdl-31981309

ABSTRACT

Optical imaging technology that has the advantages of high sensitivity and cost-effectiveness greatly promotes the progress of nondestructive single-cell studies. Complex cellular image analysis tasks such as three-dimensional reconstruction call for machine-learning technology in cell optical image research. With the rapid developments of high-throughput imaging flow cytometry, big data cell optical images are always obtained that may require machine learning for data analysis. In recent years, deep learning has been prevalent in the field of machine learning for large-scale image processing and analysis, which brings a new dawn for single-cell optical image studies with an explosive growth of data availability. Popular deep learning techniques offer new ideas for multimodal and multitask single-cell optical image research. This article provides an overview of the basic knowledge of deep learning and its applications in single-cell optical image studies. We explore the feasibility of applying deep learning techniques to single-cell optical image analysis, where popular techniques such as transfer learning, multimodal learning, multitask learning, and end-to-end learning have been reviewed. Image preprocessing and deep learning model training methods are then summarized. Applications based on deep learning techniques in the field of single-cell optical image studies are reviewed, which include image segmentation, super-resolution image reconstruction, cell tracking, cell counting, cross-modal image reconstruction, and design and control of cell imaging systems. In addition, deep learning in popular single-cell optical imaging techniques such as label-free cell optical imaging, high-content screening, and high-throughput optical imaging cytometry are also mentioned. Finally, the perspectives of deep learning technology for single-cell optical image analysis are discussed. © 2020 International Society for Advancement of Cytometry.


Subject(s)
Deep Learning; Diagnostic Imaging; Flow Cytometry; Image Processing, Computer-Assisted; Machine Learning
18.
BMC Bioinformatics ; 20(Suppl 3): 132, 2019 Mar 29.
Article in English | MEDLINE | ID: mdl-30925860

ABSTRACT

BACKGROUND: Cryo-electron tomography (cryo-ET) enables the 3D visualization of cellular organization in a near-native state, which plays an important role in the field of structural cell biology. However, due to the low signal-to-noise ratio (SNR), large volume and high content complexity within cells, it remains difficult and time-consuming to localize and identify different components in cellular cryo-ET. To automatically localize and recognize in situ cellular structures of interest captured by cryo-ET, we propose a simple yet effective automatic image analysis approach based on Faster-RCNN. RESULTS: Our approach was validated using in situ cryo-ET-imaged mitochondria data. The experimental results show that our algorithm can accurately localize and identify important cellular structures in both the 2D tilt images and the reconstructed 2D slices of cryo-ET. When run on the mitochondria cryo-ET dataset, our algorithm achieved an Average Precision >0.95. Moreover, our study demonstrated that our customized pre-processing steps can further improve the robustness of the model's performance. CONCLUSIONS: In this paper, we proposed an automatic cryo-ET image analysis algorithm for the localization and identification of different structures of interest in cells. This is the first Faster-RCNN-based method for localizing a cellular organelle in cryo-ET images, and it demonstrates high accuracy and robustness in the detection and classification of intracellular mitochondria. Furthermore, our approach can be easily applied to detection tasks of other cellular structures as well.
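The reported Average Precision summarizes ranked detections by averaging the precision at each true positive. A minimal, dependency-free sketch of that computation (the sample detections in the test are invented for illustration):

```python
def average_precision(scores, is_true):
    """Average precision for a set of detections: rank by score, then
    average the precision value at each true-positive hit over the
    total number of positives."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_pos = sum(is_true)
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if is_true[i]:
            tp += 1
            ap += tp / rank  # precision at this recall point
    return ap / n_pos if n_pos else 0.0
```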


Subject(s)
Electron Microscope Tomography/methods; Mitochondria/metabolism; Mitochondria/ultrastructure; Neural Networks, Computer; Algorithms; Animals; Automation; Cell Line; Cryoelectron Microscopy/methods; Databases as Topic; Image Processing, Computer-Assisted; Models, Theoretical; Rats; Signal-To-Noise Ratio
19.
Int Wound J ; 16(1): 211-218, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30379398

ABSTRACT

Automated tracking of wound-healing progress using images from smartphones can be useful and convenient for the patient to perform at home. To evaluate the feasibility, 119 images were taken with an iPhone smartphone during the treatment of a chronic wound at one patient's home. An image analysis algorithm was developed to quantitatively classify wound content as an index of wound healing. The core of the algorithm involves transforming the colour image into hue-saturation-value colour space, after which a threshold can be reliably applied to produce segmentation using the Black-Yellow-Red wound model. Morphological transforms are used to refine the classification. This method was found to be accurate and robust with respect to lighting conditions for smartphone-captured photos. The wound composition percentage showed a different trend from the wound area measurements, suggesting its role as a complementary metric. Overall, smartphone photography and automated image analysis is a promising cost-effective way of monitoring patients. While the current setup limits our capability of measuring wound area, future smartphones equipped with depth-sensing technology will enable accurate volumetric evaluation in addition to composition analysis.
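The hue-saturation-value thresholding step can be sketched per pixel: dark pixels map to necrotic (black) tissue, yellowish hues to slough, and reddish hues to granulation, following a Black-Yellow-Red style model. The hue ranges and darkness threshold below are illustrative placeholders, not the paper's calibrated values:

```python
import colorsys

def classify_wound_pixel(r, g, b):
    """Classify one RGB pixel (components as 0-1 floats) under a
    Black-Yellow-Red style wound model via HSV thresholds."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if v < 0.2:
        return "black"            # dark -> necrotic tissue
    deg = h * 360.0
    if 40 <= deg <= 75:
        return "yellow"           # yellowish hue -> slough
    if deg <= 20 or deg >= 340:
        return "red"              # reddish hue -> granulation
    return "other"                # e.g. surrounding skin, background
```

Working in HSV separates chromatic content (hue) from brightness (value), which is what makes a fixed hue threshold comparatively robust to the lighting variation the paper reports.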


Subject(s)
Diabetic Foot/therapy; Image Processing, Computer-Assisted/methods; Monitoring, Physiologic/instrumentation; Monitoring, Physiologic/methods; Photography/methods; Smartphone; Wound Healing/physiology; Algorithms; Home Care Services; Humans
20.
Cytometry A ; 95(4): 366-380, 2019 04.
Article in English | MEDLINE | ID: mdl-30565841

ABSTRACT

Artificial intelligence, deep convolutional neural networks, and deep learning are all niche terms that are increasingly appearing in scientific presentations as well as in the general media. In this review, we focus on deep learning and how it is applied to microscopy image data of cells and tissue samples. Starting with an analogy to neuroscience, we aim to give the reader an overview of the key concepts of neural networks, and an understanding of how deep learning differs from more classical approaches for extracting information from image data. We aim to increase the understanding of these methods, while highlighting considerations regarding input data requirements, computational resources, challenges, and limitations. We do not provide a full manual for applying these methods to your own data, but rather review previously published articles on deep learning in image cytometry, and guide the readers toward further reading on specific networks and methods, including new methods not yet applied to cytometry data. © 2018 The Authors. Cytometry Part A published by Wiley Periodicals, Inc. on behalf of International Society for Advancement of Cytometry.


Subject(s)
Deep Learning; Image Cytometry/methods; Animals; Artificial Intelligence/trends; Deep Learning/trends; Humans; Image Cytometry/instrumentation; Image Cytometry/trends; Image Processing, Computer-Assisted/methods; Machine Learning; Microscopy/instrumentation; Microscopy/methods; Neural Networks, Computer