Results 1 - 5 of 5
1.
IEEE J Biomed Health Inform ; 26(7): 3025-3036, 2022 07.
Article in English | MEDLINE | ID: mdl-35130177

ABSTRACT

Unavailability of large training datasets is a bottleneck that needs to be overcome to realize the true potential of deep learning in histopathology applications. Although slide digitization via whole-slide imaging scanners has increased the speed of data acquisition, labeling of virtual slides requires a substantial time investment from pathologists. Eye-gaze annotations have the potential to speed up the slide-labeling process. This work explores the viability of eye-gaze labeling and compares its timing against conventional manual labeling for training object detectors. Challenges associated with gaze-based labeling and methods to refine the coarse gaze annotations for subsequent object detection are also discussed. Results demonstrate that gaze-tracking-based labeling can save valuable pathologist time and delivers good performance when employed for training a deep object detector. Using localization of keratin pearls in cases of oral squamous cell carcinoma as a test case, we compare the performance gap between deep object detectors trained using hand-labeled and gaze-labeled data. On average, gaze labeling required 57.6% less time per label than 'Bounding-box' based hand labeling and 85% less time per label than 'Freehand' labeling.
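The abstract does not describe how the coarse gaze annotations are refined into detector training labels. As a purely illustrative sketch of the general idea, the snippet below groups nearby gaze fixations into coarse bounding boxes; the function name, clustering radius, and box padding are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only: turning raw gaze fixations into coarse box labels.
# The clustering radius and box padding are assumed values, not from the paper.
import numpy as np

def fixations_to_boxes(fixations, radius=150, pad=100):
    """Greedily group nearby gaze fixation points (x, y) in slide
    coordinates and return one coarse bounding box per group."""
    points = np.asarray(fixations, dtype=float)
    boxes, used = [], np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        if used[i]:
            continue
        # All unused fixations within `radius` pixels of this one form a group.
        group = np.linalg.norm(points - p, axis=1) <= radius
        group &= ~used
        used |= group
        xs, ys = points[group, 0], points[group, 1]
        boxes.append((xs.min() - pad, ys.min() - pad,
                      xs.max() + pad, ys.max() + pad))
    return boxes

# Example: three fixations near one region of interest, one elsewhere.
print(fixations_to_boxes([(1000, 1000), (1050, 1020), (980, 1090), (4000, 2500)]))
```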


Subjects
Carcinoma, Squamous Cell , Mouth Neoplasms , Fixation, Ocular , Humans , Neural Networks, Computer
2.
Article in English | MEDLINE | ID: mdl-34284963

ABSTRACT

Oral cancer is a global health problem, with case numbers increasing worldwide and no significant improvement in prognosis over the last few decades. It is one of the most common cancers and a leading cause of death in Pakistan, although cases are significantly underreported owing to the lack of a national cancer repository, and the true magnitude of the challenge is not known. Bilateral discussions and workshops funded by the Global Challenges Research Fund brought together like-minded researchers and clinicians from the United Kingdom and Pakistan to analyze the status quo and plan the future course. This article reviews some of these discussions as well as barriers to oral cancer diagnosis in Pakistan, and makes recommendations to quantify the magnitude of the problem and develop measures that may help tackle this devastating disease.


Subjects
Mouth Neoplasms , Diagnosis, Oral , Humans , Mouth Neoplasms/diagnosis , Mouth Neoplasms/prevention & control , Pakistan , Research Personnel , United Kingdom
3.
Article in English | MEDLINE | ID: mdl-32950425

ABSTRACT

OBJECTIVE: The aim of this study was to investigate automated feature detection, segmentation, and quantification of common findings in periapical radiographs (PRs) by using deep learning (DL)-based computer vision techniques. STUDY DESIGN: Caries, alveolar bone recession, and interradicular radiolucencies were labeled on 206 digital PRs by 3 specialists (2 oral pathologists and 1 endodontist). The PRs were divided into "Training and Validation" and "Test" data sets consisting of 176 and 30 PRs, respectively. Multiple transformations of image data were used as input to deep neural networks during training. Outcomes of existing and purpose-built DL architectures were compared to identify the most suitable architecture for automated analysis. RESULTS: The U-Net architecture and its variant significantly outperformed Xnet and SegNet in all metrics. The overall best performing architecture on the validation data set was "U-Net+Densenet121" (mean intersection over union [mIoU] = 0.501; Dice coefficient = 0.569). Performance of all architectures degraded on the "Test" data set; "U-Net" delivered the best performance (mIoU = 0.402; Dice coefficient = 0.453). Interradicular radiolucencies were the most difficult to segment. CONCLUSIONS: DL has potential for automated analysis of PRs but warrants further research. Among existing off-the-shelf architectures, U-Net and its variants delivered the best performance. Further performance gains can be obtained via purpose-built architectures and a larger multicentric cohort.
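For readers unfamiliar with the two reported metrics, the sketch below computes intersection over union (IoU) and the Dice coefficient for a single pair of binary segmentation masks. Averaging per-class IoU into the paper's mIoU follows the usual convention and is assumed here rather than taken from the study.

```python
# Minimal sketch of the two reported segmentation metrics on binary masks.
import numpy as np

def iou_and_dice(pred, target, eps=1e-7):
    """Intersection-over-union and Dice coefficient for one binary mask pair."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / (union + eps)
    dice = 2 * inter / (pred.sum() + target.sum() + eps)
    return iou, dice

# Toy example: two overlapping square "lesions" on a small grid.
pred = np.zeros((10, 10)); pred[2:6, 2:6] = 1
gt = np.zeros((10, 10)); gt[3:7, 3:7] = 1
print(iou_and_dice(pred, gt))   # ~ (0.39, 0.56)
```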


Subjects
Deep Learning , Bone and Bones , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer , Radiography
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 4462-4465, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946856

ABSTRACT

Automated analysis of digitized pathology images in tele-health applications can have a transformative impact on under-served communities in the developing world. However, the vast majority of existing image analysis algorithms are trained on slide images acquired via expensive whole-slide imaging (WSI) scanners. High scanner cost is a key bottleneck preventing large-scale adoption of digital pathology in developing countries. In this work, we investigate the viability of automated analysis of slide images captured from the eyepiece of a microscope via a smartphone, with mitosis detection considered as a use case. As expected, results indicate some performance degradation when using the lower-quality smartphone images. However, the performance gap is not too wide (F1-score smartphone = 0.65, F1-score WSI = 0.70), demonstrating that smartphones could potentially be employed as image acquisition devices for digital pathology at locations where expensive scanners are not available.
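The comparison hinges on the F1-score, which balances precision and recall of the detected mitoses. A minimal sketch of that computation follows; the detection counts are invented solely to reproduce values near the reported scores and are not from the paper.

```python
# F1-score from raw detection counts (true/false positives, false negatives).
def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts chosen only to land near the reported numbers.
print(f1_score(tp=65, fp=40, fn=30))   # ~0.65 (smartphone-like)
print(f1_score(tp=70, fp=30, fn=30))   # 0.70  (WSI-like)
```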


Subjects
Microscopy , Neoplasms , Automation , Humans , Neoplasms/diagnosis , Neoplasms/pathology
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2017: 2944-2947, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29060515

ABSTRACT

Physical activities are known to introduce motion artifacts in electrical impedance plethysmographic (EIP) sensors. Existing literature treats motion artifacts as a nuisance and generally discards the artifact-containing portion of the sensor output. This paper examines the notion of exploiting motion artifacts to detect the underlying physical activities that give rise to them. In particular, we investigate whether the artifact pattern associated with a physical activity is unique and whether it varies from one human subject to another. Data were recorded from 19 adult human subjects while they performed 5 distinct artifact-inducing activities. A set of novel features based on the time-frequency signatures of the sensor outputs is then constructed. Our analysis demonstrates that these features enable high-accuracy detection of the underlying physical activity. Using an SVM classifier, we are able to differentiate between 5 distinct physical activities (coughing, reaching, walking, eating and rolling-on-bed) with an average accuracy of 85.46%. Classification is performed solely using features designed specifically to capture the time-frequency signatures of different physical activities, which enables us to measure both respiratory and motion information using only one type of sensor. This is in contrast to conventional approaches to physical activity monitoring, which rely on additional hardware such as accelerometers to capture activity information.
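The abstract does not spell out the feature set or SVM configuration. As a rough sketch of the general pipeline (time-frequency features from the EIP trace fed to an SVM), the following uses simple spectrogram band powers; the sampling rate, band edges, and the synthetic training data are assumptions for illustration only.

```python
# Illustrative sketch only: band-power features from a spectrogram fed to an
# SVM, standing in for the paper's (unspecified) time-frequency features.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

def band_power_features(signal, fs=100, bands=((0, 1), (1, 3), (3, 6), (6, 12))):
    """Mean spectral power in a few frequency bands of an EIP trace (assumed bands)."""
    freqs, _, sxx = spectrogram(signal, fs=fs, nperseg=128)
    return np.array([sxx[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands])

# Hypothetical training data: one labelled EIP segment per activity instance.
rng = np.random.default_rng(0)
X = np.stack([band_power_features(rng.standard_normal(1000)) for _ in range(50)])
y = rng.integers(0, 5, size=50)          # 5 activity classes (coughing, ...)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```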


Subjects
Motion , Artifacts , Electric Impedance , Exercise , Humans , Plethysmography, Impedance , Signal Processing, Computer-Assisted