Results 1 - 8 of 8
1.
Article in English | MEDLINE | ID: mdl-37949472

ABSTRACT

INTRODUCTION: The English Diabetic Eye Screening Programme (DESP) offers people living with diabetes (PLD) annual eye screening. We examined the incidence and determinants of sight-threatening diabetic retinopathy (STDR) in a sociodemographically diverse, multi-ethnic population. RESEARCH DESIGN AND METHODS: North East London DESP cohort data (January 2012 to December 2021) on 137 591 PLD with no retinopathy, or non-STDR in one or both eyes at baseline, were used to calculate STDR incidence rates by sociodemographic factors, diabetes type, and duration. HRs from Cox models examined associations with STDR. RESULTS: There were 16 388 incident STDR cases over a median of 5.4 years (IQR 2.8-8.2; STDR rate 2.214, 95% CI 2.214 to 2.215, per 100 person-years). Compared with people with no retinopathy at baseline, those with non-STDR in one eye (HR 3.03, 95% CI 2.91 to 3.15, p<0.001) or both eyes (HR 7.88, 95% CI 7.59 to 8.18, p<0.001) had a higher risk of STDR. Black and South Asian individuals had higher STDR hazards than white individuals (HR 1.57, 95% CI 1.50 to 1.64 and HR 1.36, 95% CI 1.31 to 1.42, respectively). Additionally, every 5-year increase in age at inclusion was associated with an 8% reduction in the STDR hazard (p<0.001). CONCLUSIONS: Ethnic disparities exist in a health system limited by capacity rather than by patients' economic circumstances. Diabetic retinopathy at first screen is a strong determinant of STDR development. Using basic demographic characteristics, screening programmes or clinical practices can stratify risk of sight-threatening diabetic retinopathy development.


Subjects
Diabetes Mellitus, Diabetic Retinopathy, Humans, Retrospective Studies, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/epidemiology, Mass Screening, Incidence, London/epidemiology, Diabetes Mellitus/diagnosis, Diabetes Mellitus/epidemiology
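The rate and hazard-ratio arithmetic in the abstract can be illustrated with a short sketch. The person-years figure below is inferred from the reported rate rather than stated in the abstract, and converting the 8% per-5-year hazard reduction to a per-year Cox coefficient is an illustrative assumption about how such a ratio scales:

```python
import math

def incidence_rate_per_100py(events, person_years):
    """Crude incidence rate per 100 person-years."""
    return 100.0 * events / person_years

# Person-years are inferred from the reported rate of ~2.214 per
# 100 person-years and 16 388 cases; the abstract does not state them.
events = 16388
person_years = events / 0.02214
rate = incidence_rate_per_100py(events, person_years)

# An 8% hazard reduction per 5 years of age corresponds to a Cox
# coefficient of ln(0.92)/5 per year; HRs then multiply across age gaps.
beta_per_year = math.log(0.92) / 5
hr_per_10y = math.exp(beta_per_year * 10)  # = 0.92 squared
```

This only reproduces the reported summary arithmetic; fitting an actual Cox model requires the individual-level cohort data.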
2.
Oral Dis ; 29(5): 2230-2238, 2023 Jul.
Article in English | MEDLINE | ID: mdl-35398971

ABSTRACT

OBJECTIVE: To describe the development of a platform for image collection and annotation that resulted in a multi-sourced international image dataset of oral lesions, to facilitate the development of automated lesion classification algorithms. MATERIALS AND METHODS: We developed a web interface, hosted on a web server, to collect oral lesion images from international partners. Further, we developed a customised annotation tool, also a web interface, for systematic annotation of images to build a richly clinically labelled dataset. We evaluated sensitivity by comparing referral decisions made during the annotation process with the clinical diagnosis of the lesions. RESULTS: The image repository hosts 2474 images of oral lesions consisting of oral cancer, oral potentially malignant disorders and other oral lesions that were collected through MeMoSA® UPLOAD. Eight hundred images were annotated by seven oral medicine specialists on MeMoSA® ANNOTATE, to mark the lesion and to collect clinical labels. The sensitivity of referral decisions for all lesions that required referral for cancer management/surveillance was moderate to high, depending on the type of lesion (64.3%-100%). CONCLUSION: This is the first description of a database with clinically labelled oral lesions. This database could accelerate the improvement of AI algorithms that can promote the early detection of high-risk oral lesions.


Subjects
Algorithms, Mouth Neoplasms, Humans
3.
Sensors (Basel) ; 22(12)2022 Jun 08.
Article in English | MEDLINE | ID: mdl-35746125

ABSTRACT

One common issue in object detection in aerial imagery is the small size of objects relative to the overall image size. This is mainly caused by the high camera altitude and the wide-angle lenses commonly used on drones to maximize coverage. State-of-the-art general-purpose object detectors tend to under-perform on small object detection due to the loss of spatial features, the weak feature representation of small objects, and the sheer imbalance between objects and background. This paper addresses small object detection in aerial imagery by offering a Convolutional Neural Network (CNN) model that uses the Single Shot multi-box Detector (SSD) as the baseline network and extends its small object detection performance with feature enhancement modules, including super-resolution, deconvolution and feature fusion. These modules are collectively aimed at improving the feature representation of small objects at the prediction layer. The performance of the proposed model is evaluated on three datasets, including two aerial image datasets that mainly consist of small objects, and compared with state-of-the-art small object detectors. Experimental results demonstrate improvements in mean Average Precision (mAP) and recall in comparison with the state-of-the-art small object detectors investigated in this study.
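A minimal sketch of the concatenation-style feature fusion the abstract describes, using NumPy and nearest-neighbour upsampling as a stand-in for a learned deconvolution; the channels-first map shapes are illustrative SSD-like sizes, not taken from the paper:

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbour 2x spatial upsampling of a (C, H, W) map
    (a stand-in for a learned deconvolution layer)."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def fuse(shallow, deep):
    """Channel-wise concatenation of a shallow high-resolution map with
    an upsampled deep map, as in concatenation-style feature fusion."""
    up = upsample2x(deep)
    assert up.shape[1:] == shallow.shape[1:]
    return np.concatenate([shallow, up], axis=0)

shallow = np.random.rand(256, 38, 38)   # illustrative shallow-layer map
deep = np.random.rand(512, 19, 19)      # deeper, coarser map
fused = fuse(shallow, deep)             # (256 + 512) channels at 38x38
```

In a real detector the fused map would be passed through further convolutions before the prediction layer.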

4.
Article in English | MEDLINE | ID: mdl-31059446

ABSTRACT

In this work, a novel semi-supervised learning technique is introduced, based on a simple iterative learning cycle together with learned thresholding techniques and an ensemble decision support system. State-of-the-art model performance and increased effective training data volume are demonstrated through the use of unlabelled data when training deep learning classification models. The methods presented work independently of the model architecture or loss function, making this approach applicable to a wide range of machine learning and classification tasks. The proposed approach is evaluated on datasets commonly used for evaluating semi-supervised learning techniques, as well as on a number of more challenging image classification datasets (CIFAR-100 and a 200-class subset of ImageNet).
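A toy sketch of one iteration of the pseudo-labelling cycle described above, with a fixed confidence threshold standing in for the learned thresholding and a two-model score average standing in for the ensemble decision support system; all names and numbers are illustrative:

```python
import numpy as np

def pseudo_label_round(score_fn, unlabeled_X, threshold):
    """One cycle: score unlabelled data, keep only confident predictions
    as pseudo-labels, and return the rest for the next iteration."""
    proba = score_fn(unlabeled_X)          # (n_samples, n_classes)
    conf = proba.max(axis=1)
    keep = conf >= threshold
    return unlabeled_X[keep], proba[keep].argmax(axis=1), ~keep

def ensemble(X):
    """Toy two-model ensemble: average two per-sample class-probability
    estimates derived from the two input features."""
    p1 = np.stack([X[:, 0], 1 - X[:, 0]], axis=1)
    p2 = np.stack([X[:, 1], 1 - X[:, 1]], axis=1)
    return (p1 + p2) / 2

X_unlab = np.array([[0.95, 0.90], [0.50, 0.55], [0.10, 0.05]])
kept_X, labels, still_unlabeled = pseudo_label_round(ensemble, X_unlab, 0.8)
# rows 0 and 2 clear the 0.8 threshold; row 1 stays in the unlabelled pool
```

In the full technique this round would be repeated, retraining the models on the growing labelled set each cycle.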

5.
IEEE Trans Image Process ; 27(9): 4287-4301, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29870348

ABSTRACT

Classification of plants based on a multi-organ approach is very challenging. Although additional data provide more information that might help to disambiguate between species, the variability in shape and appearance of plant organs also raises the degree of complexity of the problem. Although promising solutions built using deep learning enable representative features to be learned for plant images, the existing approaches focus mainly on generic features for species classification, disregarding the features representing plant organs. In fact, plants are complex living organisms sustained by a number of organ systems. In our approach, we introduce a hybrid generic-organ convolutional neural network (HGO-CNN), which takes into account both organ and generic information, combining them using a new feature fusion scheme for species classification. Next, instead of using a CNN-based method to operate on one image with a single organ, we extend our approach. We propose a new framework for plant structural learning using a recurrent neural network-based method. This novel approach supports classification based on a varying number of plant views, capturing one or more organs of a plant, by optimizing the contextual dependencies between them. We also present qualitative results of our proposed models based on feature visualization techniques and show that the visualization outcomes support our hypothesis and expectations. Finally, we show that by leveraging and combining the aforementioned techniques, our best network outperforms the state of the art on the PlantClef2015 benchmark. The source code and models are available at https://github.com/cs-chan/Deep-Plant.


Subjects
Deep Learning, Image Processing, Computer-Assisted/methods, Plants/classification, Algorithms, Databases, Factual
6.
Sensors (Basel) ; 15(7): 17209-31, 2015 Jul 16.
Article in English | MEDLINE | ID: mdl-26193271

ABSTRACT

Multi-view action recognition has gained great interest in video surveillance, human-computer interaction, and multimedia retrieval, where multiple cameras of different types are deployed to provide complementary fields of view. Fusion of multiple camera views evidently leads to more robust decisions in both tracking multiple targets and analysing complex human activities, especially where there are occlusions. In this paper, we incorporate the marginalised stacked denoising autoencoder (mSDA) algorithm to further improve the bag-of-words (BoW) representation in terms of robustness and usefulness for multi-view action recognition. The resulting representations are fed into three simple fusion strategies, as well as a multiple kernel learning algorithm, at the classification stage. Based on an internal evaluation, the codebook size of the BoW representation and the number of mSDA layers may not significantly affect recognition performance. According to results on three multi-view benchmark datasets, the proposed framework improves recognition performance across all three datasets and sets record recognition performance, beating the state-of-the-art algorithms in the literature. It is also capable of performing real-time action recognition at 33 to 45 frames per second, which could be further improved by using more powerful machines in future applications.
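Of the fusion strategies mentioned, the sum rule is the simplest to sketch. The per-view scores below are hypothetical, and this shows only score-level fusion, not the mSDA representation learning itself:

```python
import numpy as np

def sum_fusion(view_scores):
    """Sum-rule late fusion: add per-view classifier scores and take the
    argmax over classes."""
    return int(np.sum(view_scores, axis=0).argmax())

# Hypothetical per-view class scores for a 3-class action problem; the
# first two cameras disagree and the third breaks the tie.
scores = np.array([
    [0.6, 0.3, 0.1],   # camera 1
    [0.2, 0.5, 0.3],   # camera 2
    [0.5, 0.2, 0.3],   # camera 3
])
predicted = sum_fusion(scores)   # column sums: 1.3, 1.0, 0.7 -> class 0
```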

7.
IEEE Trans Cybern ; 43(6): 2147-56, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23757524

ABSTRACT

This paper addresses the problem of detecting and localizing abnormal activities in crowded scenes. A spatiotemporal Laplacian eigenmap method is proposed to extract different crowd activities from videos. This is achieved by learning the spatial and temporal variations of local motions in an embedded space. We employ representatives of different activities to construct the model which characterizes the regular behavior of a crowd. This model of regular crowd behavior allows the detection of abnormal crowd activities both in local and global contexts and the localization of regions which show abnormal behavior. Experiments on the recently published data sets show that the proposed method achieves comparable results with the state-of-the-art methods without sacrificing computational simplicity.


Subjects
Actigraphy/methods, Algorithms, Artificial Intelligence, Crowding, Decision Support Techniques, Image Interpretation, Computer-Assisted/methods, Models, Theoretical, Pattern Recognition, Automated/methods, Computer Simulation, Humans
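A compact sketch of a Laplacian eigenmap of the kind the abstract builds on, using a heat-kernel affinity and the normalised graph Laplacian; the spatiotemporal construction and the crowd-specific motion descriptors are beyond this sketch, and the toy data are synthetic:

```python
import numpy as np

def laplacian_eigenmap(X, dim=2, sigma=1.0):
    """Embed rows of X: heat-kernel affinities, normalised graph
    Laplacian, then the smallest non-trivial eigenvectors."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L)       # eigenvalues in ascending order
    return vecs[:, 1:1 + dim]         # skip the trivial eigenvector

# Two well-separated clusters of synthetic "local motion" descriptors
# should land at clearly separated coordinates in the embedding.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (10, 3)),
               rng.normal(2.0, 0.1, (10, 3))])
Y = laplacian_eigenmap(X, dim=1)
```

In the paper's setting the embedded coordinates of regular activities would then define a model against which abnormal activity is scored.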
8.
IEEE Trans Biomed Eng ; 59(9): 2538-48, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22736688

ABSTRACT

This paper presents a new supervised method for segmentation of blood vessels in retinal photographs. The method uses an ensemble system of bagged and boosted decision trees and utilizes a feature vector based on orientation analysis of the gradient vector field, morphological transformation, line strength measures, and Gabor filter responses. The feature vector encodes information to handle healthy as well as pathological retinal images. The method is evaluated on the publicly available DRIVE and STARE databases, frequently used for this purpose, and also on a new public retinal vessel reference dataset, CHASE_DB1, which is a subset of retinal images of multiethnic children from the Child Heart and Health Study in England (CHASE) dataset. The performance of the ensemble system is evaluated in detail, and its accuracy, speed, robustness, and simplicity make the algorithm a suitable tool for automated retinal image analysis.


Subjects
Image Processing, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Retinal Vessels/anatomy & histology, Algorithms, Area Under Curve, Child, Databases, Factual, Decision Trees, Humans
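A simplified sketch of the bagging half of such an ensemble, using depth-1 decision trees (stumps) on toy two-feature "pixel features"; the boosted component, the actual feature vector, and the retinal data are omitted, and all names and numbers here are illustrative:

```python
import numpy as np

def train_stump(X, y):
    """Exhaustively fit the best single-feature threshold rule
    (a depth-1 decision tree) for binary labels."""
    best = None
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for sign in (1, -1):
                pred = np.where(sign * (X[:, f] - t) >= 0, 1, 0)
                acc = float((pred == y).mean())
                if best is None or acc > best[0]:
                    best = (acc, f, t, sign)
    return best[1:]

def stump_predict(stump, X):
    f, t, sign = stump
    return np.where(sign * (X[:, f] - t) >= 0, 1, 0)

def bagged_stumps(X, y, n_estimators=11, seed=0):
    """Bagging: each stump is trained on a bootstrap resample."""
    rng = np.random.default_rng(seed)
    stumps = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), len(X))
        stumps.append(train_stump(X[idx], y[idx]))
    return stumps

def vote(stumps, X):
    """Majority vote over the ensemble's per-stump predictions."""
    votes = np.stack([stump_predict(s, X) for s in stumps])
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Toy features (e.g. a line-strength measure and one Gabor response)
# with a linearly separable vessel/background labelling.
X = np.array([[0.10, 0.20], [0.15, 0.25], [0.20, 0.10], [0.30, 0.20],
              [0.70, 0.80], [0.75, 0.90], [0.80, 0.70], [0.90, 0.85]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
model = bagged_stumps(X, y)
preds = vote(model, np.array([[0.05, 0.10], [0.95, 0.90]]))
```

The paper's ensemble additionally boosts the trees and operates per pixel on a richer feature vector; this sketch only illustrates the bagging-and-voting mechanism.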