Results 1 - 3 of 3
1.
Am J Hematol; 95(8): 883-891, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32282969

ABSTRACT

Over 200 million malaria cases globally lead to half a million deaths annually. Accurate malaria diagnosis remains a challenge. Automated image-processing approaches to analyze Thick Blood Films (TBF) could provide scalable solutions for urban healthcare providers in the malaria-holoendemic sub-Saharan region. Although several approaches have been attempted to identify malaria parasites in TBF, none have achieved negative and positive predictive performance suitable for clinical use in the west sub-Saharan region. While malaria parasite object detection remains an intermediary step toward automatic patient diagnosis, training state-of-the-art deep-learning object detectors requires the labor-intensive process of having human experts label a large dataset of digitized TBF. To overcome these challenges and achieve a clinically usable system, we present a novel approach that leverages routine clinical-microscopy labels from our quality-controlled malaria clinics to train a Deep Malaria Convolutional Neural Network classifier (DeepMCNN) for automated malaria diagnosis. Our system also provides total Malaria Parasite (MP) and White Blood Cell (WBC) counts, allowing parasitemia estimation in MP/µL as recommended by the WHO. Prospective validation of the DeepMCNN achieves sensitivity/specificity of 0.92/0.90 against expert-level malaria diagnosis. Our approach achieves PPV/NPV of 0.92/0.90, which is clinically usable in our holoendemic setting in the densely populated metropolis of Ibadan, located in the most populous African country (Nigeria), which bears one of the largest burdens of Plasmodium falciparum malaria. Our openly available method is of importance for strategies aimed at scaling malaria diagnosis in urban regions where daily assessment of thousands of specimens is required.
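The parasitemia estimation the abstract mentions follows the standard WHO thick-film method: parasites are counted against a fixed number of WBCs and scaled by an assumed WBC density (8000 WBC/µL is the conventional default when a measured count is unavailable). A minimal sketch of that arithmetic, with a hypothetical function name:

```python
def parasitemia_per_ul(parasites_counted: int, wbcs_counted: int,
                       assumed_wbc_per_ul: float = 8000.0) -> float:
    """Estimate parasite density (MP/µL) from thick-blood-film counts.

    Standard WHO thick-film method: count parasites against a fixed
    number of WBCs, then scale by an assumed WBC density per µL
    (8000 WBC/µL when a patient-specific count is unavailable).
    """
    if wbcs_counted <= 0:
        raise ValueError("WBC count must be positive")
    return parasites_counted * assumed_wbc_per_ul / wbcs_counted


# e.g. 50 parasites counted against 200 WBCs → 2000 MP/µL
print(parasitemia_per_ul(50, 200))
```

In a system like DeepMCNN, `parasites_counted` and `wbcs_counted` would come from the automated MP and WBC object counts rather than a manual tally.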


Subjects
Malaria, Falciparum/blood; Malaria/diagnosis; Neural Networks, Computer; Humans; Malaria/blood
2.
Sci Rep; 13(1): 2562, 2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36781917

ABSTRACT

While optical microscopy inspection of blood films and bone marrow aspirates by a hematologist is a crucial step in establishing a diagnosis of acute leukemia, especially in low-resource settings where other diagnostic modalities are not available, the task remains time-consuming and prone to human inconsistency. This is especially consequential in cases of Acute Promyelocytic Leukemia (APL), which require urgent treatment. Integration of automated computational hematopathology into clinical workflows can improve the throughput of these services and reduce human cognitive error. However, a major bottleneck in deploying such systems is the lack of sufficient cell-level morphological annotations to train deep learning models. We overcome this by leveraging patient diagnostic labels to train weakly supervised models that detect different types of acute leukemia. We introduce a deep learning approach, Multiple Instance Learning for Leukocyte Identification (MILLIE), able to perform automated, reliable analysis of blood films with minimal supervision. Without being trained to classify individual cells, MILLIE differentiates between acute lymphoblastic and myeloblastic leukemia in blood films. More importantly, MILLIE detects APL in blood films (AUC 0.94 ± 0.04) and in bone marrow aspirates (AUC 0.99 ± 0.01). MILLIE is a viable solution to augment the throughput of clinical pathways that require assessment of blood film microscopy.
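The core idea of multiple instance learning with patient-level labels is that a blood film (the "bag") receives a prediction aggregated from per-cell (the "instances") scores, so no individual cell ever needs a label. The abstract does not state which pooling MILLIE uses, so the two pooling rules below (max and noisy-OR) are standard illustrations, not the paper's method:

```python
import numpy as np


def mil_bag_probability_max(instance_probs: np.ndarray) -> float:
    """Max pooling: call the film (bag) positive if its single most
    suspicious cell (instance) looks positive."""
    return float(np.max(instance_probs))


def mil_bag_probability_noisy_or(instance_probs: np.ndarray) -> float:
    """Noisy-OR pooling: P(bag positive) = 1 - prod(1 - p_i), i.e. the
    bag is positive unless every instance is negative."""
    return float(1.0 - np.prod(1.0 - instance_probs))


# Per-cell probabilities from any instance-level classifier
cell_probs = np.array([0.1, 0.9, 0.2])
print(mil_bag_probability_max(cell_probs))       # 0.9
print(mil_bag_probability_noisy_or(cell_probs))  # 0.928
```

During training, only the bag-level prediction is compared against the patient's diagnostic label, and the gradient flows back to the instance classifier, which is how weak supervision avoids per-cell annotation.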


Subjects
Deep Learning; Leukemia, Myeloid, Acute; Leukemia, Promyelocytic, Acute; Humans; Leukemia, Promyelocytic, Acute/diagnosis; Leukemia, Promyelocytic, Acute/pathology; Bone Marrow/pathology; Leukemia, Myeloid, Acute/pathology; Hematologic Tests
3.
Biomed Opt Express; 13(2): 1005-1016, 2022 Feb 01.
Article in English | MEDLINE | ID: mdl-35284186

ABSTRACT

Automated digital high-magnification optical microscopy is key to accelerating biology research and improving pathology clinical pathways. High-magnification objectives with large numerical apertures are usually preferred to resolve the fine structural details of biological samples, but they have a very limited depth-of-field. Depending on the thickness of the sample, analysis of specimens typically requires the acquisition of multiple images at different focal planes for each field-of-view, followed by the fusion of these planes into an extended depth-of-field image. This translates into low scanning speeds, increased storage space, and processing time not suitable for high-throughput clinical use. We introduce a novel content-aware multi-focus image fusion approach based on deep learning that effectively extends the depth-of-field of high-magnification objectives. We demonstrate the method with three examples, showing that highly accurate, detailed, extended depth-of-field images can be obtained at a lower axial sampling rate, using 2-fold fewer focal planes than normally required.
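For context on what multi-focus fusion does, the classical (non-learned) baseline the paper improves upon picks, for each pixel, the focal plane where a local focus measure such as Laplacian energy is highest. The sketch below is that conventional baseline, not the paper's content-aware deep-learning method:

```python
import numpy as np


def fuse_focal_stack(stack: np.ndarray) -> np.ndarray:
    """Minimal extended depth-of-field baseline.

    stack: float array of shape (Z, H, W), one grayscale image per
    focal plane. For each pixel, keep the value from the plane with the
    largest local Laplacian magnitude (a simple focus measure).
    """
    lap = np.zeros_like(stack)
    # 4-neighbour discrete Laplacian on the interior of each plane
    lap[:, 1:-1, 1:-1] = (
        stack[:, :-2, 1:-1] + stack[:, 2:, 1:-1]
        + stack[:, 1:-1, :-2] + stack[:, 1:-1, 2:]
        - 4.0 * stack[:, 1:-1, 1:-1]
    )
    best = np.abs(lap).argmax(axis=0)        # (H, W) index of sharpest plane
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]           # per-pixel selection


# Two planes: plane 0 is flat (out of focus), plane 1 has a sharp feature
stack = np.zeros((2, 5, 5))
stack[1, 2, 2] = 10.0
fused = fuse_focal_stack(stack)              # sharp feature is retained
```

Practical implementations smooth the focus measure over a window to avoid per-pixel noise; the deep-learning approach in the paper replaces this hand-crafted selection entirely, which is what allows it to work with 2-fold fewer focal planes.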
