Results 1 - 20 of 32
1.
Med Image Anal ; 94: 103153, 2024 May.
Article in English | MEDLINE | ID: mdl-38569380

ABSTRACT

Monitoring the healing progress of diabetic foot ulcers is a challenging process. Accurate segmentation of foot ulcers can help podiatrists to quantitatively measure the size of wound regions to assist prediction of healing status. The main challenge in this field is the lack of publicly available manual delineations, which are time consuming and laborious to produce. Recently, methods based on deep learning have shown excellent results in automatic segmentation of medical images; however, they require large-scale datasets for training, and there is limited consensus on which methods perform best. The 2022 Diabetic Foot Ulcers segmentation challenge was held in conjunction with the 2022 International Conference on Medical Image Computing and Computer Assisted Intervention, and sought to address these issues and stimulate progress in this research domain. A training set of 2000 images exhibiting diabetic foot ulcers was released with corresponding segmentation ground truth masks. Of the 72 (approved) requests from 47 countries, 26 teams used this data to develop fully automated systems to predict the true segmentation masks on a test set of 2000 images, whose ground truth segmentation masks were kept private. Predictions from participating teams were scored and ranked according to their average Dice similarity coefficient between the ground truth and predicted masks. The winning team achieved a Dice of 0.7287 for diabetic foot ulcer segmentation. The challenge has now entered a live leaderboard stage, where it serves as a challenging benchmark for diabetic foot ulcer segmentation.
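Teams were ranked by the mean Dice similarity coefficient between predicted and ground truth masks. As a minimal sketch of that metric on binary masks (the function name and toy masks below are illustrative, not the challenge's evaluation code):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0s and 1s."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(dice(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

On this scale, the winning Dice of 0.7287 corresponds to roughly 73% overlap between predicted and true ulcer regions.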


Subject(s)
Diabetes Mellitus; Diabetic Foot; Humans; Diabetic Foot/diagnostic imaging; Neural Networks, Computer; Benchmarking; Image Processing, Computer-Assisted/methods
2.
Comput Methods Programs Biomed ; 244: 107986, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38157827

ABSTRACT

BACKGROUND AND OBJECTIVES: One of the more significant obstacles in the classification of skin cancer is the presence of artifacts. This paper investigates the effect of dark corner artifacts, which result from the use of dermoscopes, on the performance of a deep learning binary classification task. Previous research attempted to remove and inpaint dark corner artifacts with the intention of creating ideal conditions for models; however, such research has been inconclusive due to a lack of available datasets with corresponding labels for dark corner artifact cases. METHODS: To address these issues, we label 10,250 skin lesion images from publicly available datasets and introduce a balanced dataset with an equal number of melanoma and non-melanoma cases. The training set comprises 6126 images without artifacts, and the testing set comprises 4124 images with dark corner artifacts. We conduct three experiments to provide new understanding of the effects of dark corner artifacts, including inpainted and synthetically generated examples, on a deep learning method. RESULTS: Our results suggest that superimposing synthetic dark corner artifacts onto the training set improved model performance, particularly in terms of the true negative rate. This indicates that, when dark corner artifacts were introduced into the training set, the model learnt to ignore them rather than treating them as melanoma. Further, we propose a new approach to quantifying heatmaps indicating network focus, using a root mean square measure of the brightness intensity in the different regions of the heatmaps. CONCLUSIONS: The proposed artifact methods can be used in future experiments to help alleviate possible impacts on model performance. Additionally, the newly proposed heatmap quantification analysis will help to better understand the relationships between heatmap results and other model performance metrics.
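As a rough illustration of the two ideas above, superimposing a synthetic dark corner artifact and quantifying a region by root-mean-square brightness, here is a numpy sketch; the circular vignette shape, the radius fraction, and the function names are our assumptions, not the paper's exact procedure:

```python
import numpy as np

def add_dark_corners(image, radius_frac=0.55):
    """Superimpose a synthetic circular dark-corner vignette onto a
    grayscale image (H x W array, values in [0, 1])."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    mask = dist <= radius_frac * max(h, w)   # circular field of view
    return np.where(mask, image, 0.0)        # black out the corners

def region_rms(region):
    """Root-mean-square brightness of an image region, one crude way
    to quantify how much heatmap energy falls in that region."""
    return float(np.sqrt(np.mean(np.square(region))))

img = np.ones((8, 8))
corrupted = add_dark_corners(img)
print(corrupted[0, 0], corrupted[4, 4])  # corner darkened, centre kept
```

In a setup like this, synthetic artifacts can be applied to artifact-free training images, and `region_rms` compared between, say, lesion and corner regions of a heatmap.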


Subject(s)
Melanoma; Skin Diseases; Skin Neoplasms; Humans; Melanoma/diagnostic imaging; Artifacts; Image Processing, Computer-Assisted/methods; Skin Neoplasms/diagnostic imaging
3.
Sci Data ; 10(1): 633, 2023 09 18.
Article in English | MEDLINE | ID: mdl-37723189

ABSTRACT

The field of human action recognition has made great strides in recent years, much helped by the availability of a wide variety of datasets that use Kinect to record human movement. Conversely, progress towards the use of Kinect in clinical practice has been hampered by the lack of appropriate data, in particular datasets that contain clinically significant movements and appropriate metadata. This paper proposes a dataset to address this issue, namely KINECAL. It contains recordings of 90 individuals carrying out 11 movements commonly used in the clinical assessment of balance. The dataset contains relevant metadata, including clinical labelling, falls-history labelling and postural sway metrics. KINECAL should be of interest to researchers interested in the clinical use of motion capture and motion analysis.


Subject(s)
Movement; Humans; Benchmarking; Metadata; Motion (Physics); Risk Assessment; Accidental Falls
4.
Med Phys ; 50(5): 3223-3243, 2023 May.
Article in English | MEDLINE | ID: mdl-36794706

ABSTRACT

PURPOSE: BUS-Set is a reproducible benchmark for breast ultrasound (BUS) lesion segmentation, comprising publicly available images, with the aim of improving future comparisons between machine learning models within the field of BUS. METHOD: Four publicly available datasets were compiled, creating an overall set of 1154 BUS images from five different scanner types. Full dataset details have been provided, including clinical labels and detailed annotations. Furthermore, nine state-of-the-art deep learning architectures were selected to form the initial benchmark segmentation results, tested using five-fold cross-validation and MANOVA/ANOVA with the Tukey statistical significance test at a threshold of 0.01. Additional evaluation of these architectures was conducted, exploring possible training bias and the effects of lesion size and type. RESULTS: Of the nine state-of-the-art benchmarked architectures, Mask R-CNN obtained the highest overall results, with the following mean metric scores: Dice score of 0.851, intersection over union of 0.786 and pixel accuracy of 0.975. MANOVA/ANOVA and Tukey test results showed Mask R-CNN to be statistically significantly better than all other benchmarked models (p < 0.01). Moreover, Mask R-CNN achieved the highest mean Dice score of 0.839 on an additional 16-image dataset that contained multiple lesions per image. Further analysis on regions of interest was conducted, assessing Hamming distance, depth-to-width ratio (DWR), circularity, and elongation, which showed that Mask R-CNN's segmentations maintained the most morphological features, with correlation coefficients of 0.888, 0.532 and 0.876 for DWR, circularity, and elongation, respectively. Based on the correlation coefficients, the statistical test indicated that Mask R-CNN was only significantly different from Sk-U-Net. CONCLUSIONS: BUS-Set is a fully reproducible benchmark for BUS lesion segmentation obtained through the use of public datasets and GitHub. Of the state-of-the-art convolutional neural network (CNN)-based architectures, Mask R-CNN achieved the highest performance overall; further analysis indicated that a training bias may have occurred due to the lesion size variation in the dataset. All dataset and architecture details are available on GitHub at https://github.com/corcor27/BUS-Set, which allows for a fully reproducible benchmark.
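The benchmark's mean metrics, Dice, intersection over union and pixel accuracy, are standard mask-overlap measures. A small sketch of the latter two on binary masks (illustrative only, not BUS-Set's evaluation code):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union

def pixel_accuracy(pred, truth):
    """Fraction of pixels on which the two masks agree."""
    return float((pred == truth).mean())

pred  = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
print(iou(pred, truth), pixel_accuracy(pred, truth))  # 0.5 0.75
```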


Subject(s)
Benchmarking; Neural Networks, Computer; Female; Humans; Ultrasonography, Mammary; Machine Learning; Image Processing, Computer-Assisted/methods
5.
IEEE Trans Med Imaging ; 42(5): 1289-1300, 2023 05.
Article in English | MEDLINE | ID: mdl-36455083

ABSTRACT

Various deep learning methods have been proposed to segment breast lesions from ultrasound images. However, similar intensity distributions, variable tumor morphologies and blurred boundaries present challenges for breast lesion segmentation, especially for malignant tumors with irregular shapes. Considering the complexity of ultrasound images, we develop an adaptive attention U-net (AAU-net) to segment breast lesions automatically and stably from ultrasound images. Specifically, we introduce a hybrid adaptive attention module (HAAM), which mainly consists of a channel self-attention block and a spatial self-attention block, to replace the traditional convolution operation. Compared with the conventional convolution operation, the design of the hybrid adaptive attention module helps capture more features under different receptive fields. Different from existing attention mechanisms, the HAAM module can guide the network to adaptively select more robust representations in the channel and spatial dimensions to cope with more complex breast lesions. Extensive experiments with several state-of-the-art deep learning segmentation methods on three public breast ultrasound datasets show that our method achieves better performance on breast lesion segmentation. Furthermore, robustness analysis and external experiments demonstrate that our proposed AAU-net has better generalization performance in breast lesion segmentation. Moreover, the HAAM module can be flexibly applied to existing network frameworks. The source code is available at https://github.com/CGPxy/AAU-net.


Subject(s)
Software; Ultrasonography, Mammary; Female; Humans; Ultrasonography; Image Processing, Computer-Assisted
6.
World J Diabetes ; 13(12): 1131-1139, 2022 Dec 15.
Article in English | MEDLINE | ID: mdl-36578875

ABSTRACT

Foot ulcers are common complications of diabetes mellitus and substantially increase the morbidity and mortality due to this disease. Wound care by regular monitoring of the progress of healing with clinical review of the ulcers, dressing changes, appropriate antibiotic therapy for infection and proper offloading of the ulcer are the cornerstones of the management of foot ulcers. Assessing the progress of foot ulcers can be a challenge for the clinician and patient due to logistic issues such as regular attendance at the clinic. Foot clinics are often busy, and because of manpower issues, ulcer reviews can be delayed, with detrimental effects on healing as a result of a lack of appropriate and timely changes in management. Wound photographs have historically been useful for assessing the progress of diabetic foot ulcers over the past few decades. Mobile phones with digital cameras have recently revolutionized the capture of foot ulcer images. Patients can send ulcer photographs to diabetes care professionals electronically for remote monitoring, largely avoiding the logistics of patient transport to clinics and reducing clinic pressures. Artificial intelligence-based technologies have been developed in recent years to improve this remote monitoring of diabetic foot ulcers with the use of mobile apps. This is expected to make a substantial impact on diabetic foot ulcer care, with further research and development of more accurate and scientific technologies in the future. This clinical update review aims to compile evidence on this hot topic to empower clinicians with the latest developments in the field.

8.
Med Image Anal ; 75: 102305, 2022 01.
Article in English | MEDLINE | ID: mdl-34852988

ABSTRACT

The International Skin Imaging Collaboration (ISIC) datasets have become a leading repository for researchers in machine learning for medical image analysis, especially in the field of skin cancer detection and malignancy assessment. They contain tens of thousands of dermoscopic photographs together with gold-standard lesion diagnosis metadata. The associated yearly challenges have resulted in major contributions to the field, with papers reporting measures well in excess of those of human experts. Skin cancers can be divided into two major groups: melanoma and non-melanoma. Although less prevalent, melanoma is considered more serious, as it can quickly spread to other organs if not treated at an early stage. In this paper, we summarise the usage of the ISIC dataset images and present an analysis of the yearly releases over the period 2016 to 2020. Our analysis found a significant number of duplicate images, both within and between the datasets, as well as duplicates spread across the testing and training sets. Due to these irregularities, we propose a duplicate removal strategy and recommend a curated dataset for researchers to use when working on ISIC datasets. Given that ISIC 2020 focused on melanoma classification, we conduct experiments to provide benchmark results on the ISIC 2020 test set, with additional analysis on the smaller ISIC 2017 test set. Testing was completed following the application of our duplicate removal strategy and an additional data balancing step. As a result of removing 14,310 duplicate images from the training set, our benchmark results show good levels of melanoma prediction, with an AUC of 0.80 for the best performing model. As our aim was not to maximise network performance, we did not include additional steps in our experiments. Finally, we provide recommendations for future research by highlighting irregularities that may present research challenges. A list of image files with reference to the original ISIC dataset sources for the recommended curated training set will be shared on our GitHub repository (available at www.github.com/mmu-dermatology-research/isic_duplicate_removal_strategy).
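The abstract does not detail how duplicates were detected; a common sketch combines an exact byte-level hash for identical files with a small perceptual "average hash" for near-duplicates. The function names, toy thumbnail values, and the zero-distance threshold below are all our assumptions, not the paper's method:

```python
import hashlib

def file_hash(path):
    """Exact-duplicate fingerprint: MD5 over the raw file bytes."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

def average_hash(pixels):
    """Tiny perceptual hash for near-duplicates: threshold a small
    grayscale thumbnail (flat list of values) at its mean."""
    mean = sum(pixels) / len(pixels)
    return "".join("1" if p >= mean else "0" for p in pixels)

def hamming(a, b):
    """Number of differing bits between two equal-length hash strings."""
    return sum(x != y for x, y in zip(a, b))

h1 = average_hash([10, 200, 12, 198])
h2 = average_hash([11, 199, 13, 197])   # near-identical thumbnail
print(hamming(h1, h2))  # 0 -> flagged as near-duplicates
```

In practice the thumbnails would be much larger (e.g. 8x8) and a small nonzero Hamming-distance threshold would be tuned to trade off false matches against misses.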


Subject(s)
Melanoma; Skin Neoplasms; Benchmarking; Dermoscopy; Humans; Melanoma/diagnostic imaging; Neural Networks, Computer; Skin Neoplasms/diagnostic imaging
9.
Cancers (Basel) ; 13(23)2021 Dec 01.
Article in English | MEDLINE | ID: mdl-34885158

ABSTRACT

Over the past few decades, different clinical diagnostic algorithms have been proposed to diagnose malignant melanoma in its early stages. Furthermore, the detection of skin moles driven by current deep learning-based approaches yields impressive results in the classification of malignant melanoma. However, in all these approaches, the researchers do not take into account the origin of the skin lesion. It has been observed that the specific criteria for in situ and early invasive melanoma highly depend on the anatomic site of the body. To address this problem, we propose a deep learning architecture-based framework to classify skin lesions into the three most important anatomic sites: the face, the trunk and extremities, and acral lesions. In this study, we take advantage of pretrained networks, including VGG19, ResNet50, Xception, DenseNet121, and EfficientNetB0, to calculate the features with an adjusted and densely connected classifier. Furthermore, we perform an in-depth analysis of the database, architecture, and results regarding the effectiveness of the proposed framework. Experiments confirm the ability of the developed algorithms to classify skin lesions into the most important anatomical sites, with 91.45% overall accuracy for the EfficientNetB0 architecture, which is a state-of-the-art result in this domain.

10.
Comput Biol Med ; 140: 105055, 2021 Nov 24.
Article in English | MEDLINE | ID: mdl-34839183

ABSTRACT

Diabetic foot ulcer (DFU) is a major complication of diabetes and can lead to lower limb amputation if not treated early and properly. In addition to traditional clinical approaches, in recent years research on automation using computer vision and machine learning methods has played an important role in DFU classification, achieving promising successes. The most recent automatic approaches to DFU classification are based on convolutional neural networks (CNNs), using solely RGB images as input. In this paper, we present a CNN-based DFU classification method in which we show that feeding an appropriate feature (texture information) to the CNN model provides complementary performance to the standard RGB-based deep models of the DFU classification task, and that better performance can be obtained if both RGB images and their texture features are combined and used as input to the CNN. To this end, the proposed method consists of two main stages. The first stage extracts texture information from the RGB image using the mapped binary patterns technique. The obtained mapped image is used to aid the second stage in recognizing DFU, as it contains texture information of the ulcer. The stack of RGB and mapped binary patterns images is fed to the CNN as a tensor input or as a fused image, which is a linear combination of the RGB and mapped binary patterns images. The performance of the proposed approach was evaluated using two recently published DFU datasets: the Part-A dataset of healthy and unhealthy (DFU) cases [17] and the Part-B dataset of ischaemia and infection cases [18]. The results showed that the proposed methods provided better performance than the state-of-the-art CNN-based methods, with an AUC of 0.981 and F-measure of 0.952 on the Part-A dataset, an AUC of 0.995 and F-measure of 0.990 on the Part-B ischaemia dataset, and an AUC of 0.820 and F-measure of 0.744 on the Part-B infection dataset.
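The two input options described above, a stacked tensor or a fused image formed as a linear combination of the RGB and mapped binary patterns images, can be sketched as follows; the mixing weight `alpha` and the function names are illustrative, and the mapped binary patterns extraction itself is not shown:

```python
import numpy as np

def fuse(rgb, mapped, alpha=0.5):
    """Fused input: linear combination of an RGB image and its
    mapped binary patterns texture image (same shape, values in [0, 1])."""
    return alpha * rgb + (1.0 - alpha) * mapped

def stack(rgb, mapped):
    """Tensor input: RGB and the texture map concatenated channel-wise."""
    return np.concatenate([rgb, mapped], axis=-1)

rgb    = np.full((2, 2, 3), 0.8)
mapped = np.full((2, 2, 3), 0.2)
print(fuse(rgb, mapped)[0, 0, 0])   # midway between 0.8 and 0.2
print(stack(rgb, mapped).shape)     # channels doubled: (2, 2, 6)
```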

11.
J Imaging ; 7(8)2021 Aug 11.
Article in English | MEDLINE | ID: mdl-34460778

ABSTRACT

Long video datasets of facial macro- and micro-expressions remain in strong demand with the current dominance of data-hungry deep learning methods. There are limited methods of generating long videos which contain micro-expressions. Moreover, there is a lack of performance metrics to quantify the generated data. To address these research gaps, we introduce a new approach to generate synthetic long videos and recommend assessment methods to inspect dataset quality. For synthetic long video generation, we use the state-of-the-art generative adversarial network style transfer method, StarGANv2. Using StarGANv2 pre-trained on the CelebA dataset, we transfer the style of a reference image from SAMM long videos (a facial micro- and macro-expression long video dataset) onto a source image from the FFHQ dataset to generate a synthetic dataset (SAMM-SYNTH). We evaluate SAMM-SYNTH by conducting an analysis based on the facial action units detected by OpenFace. For quantitative measurement, our findings show high correlation on two Action Units (AUs), AU12 and AU6, between the original and synthetic data, with Pearson's correlations of 0.74 and 0.72, respectively. This is further supported by the evaluation method provided by OpenFace on those AUs, which also gives high scores of 0.85 and 0.59. Additionally, optical flow is used to visually compare the original and transferred facial movements. With this article, we publish our dataset to enable future research and to increase the data pool for micro-expression research, especially for the spotting task.
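The reported AU agreement is measured with Pearson's correlation coefficient; as a quick reference, it can be computed as below (the toy AU intensity sequences are invented for illustration):

```python
import math

def pearson(xs, ys):
    """Pearson's correlation coefficient between two sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

original  = [0.1, 0.4, 0.8, 0.9]   # e.g. AU12 intensity, original frames
synthetic = [0.2, 0.5, 0.7, 1.0]   # same frames, synthetic video
print(round(pearson(original, synthetic), 3))
```

Values near 1 (like the paper's 0.74 and 0.72) indicate that the synthetic video's AU intensities rise and fall with the original's.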

12.
PLoS One ; 16(7): e0254763, 2021.
Article in English | MEDLINE | ID: mdl-34320001

ABSTRACT

Understanding the processes by which the mammalian embryo implants in the maternal uterus is a long-standing challenge in embryology. New insights into this morphogenetic event could be of great importance in helping, for example, to reduce human infertility. During implantation, the blastocyst, composed of epiblast, trophectoderm and primitive endoderm, undergoes significant remodelling from an oval ball to an egg cylinder. A main feature of this transformation is symmetry breaking and reshaping of the epiblast into a "cup". Based on previous studies, we hypothesise that this event is the result of mechanical constraints originating from the trophectoderm, which is also significantly transformed during this process. In order to investigate this hypothesis, we propose MG# (MechanoGenetic Sharp), an original computational model of biomechanics able to reproduce key cell shape changes and tissue-level behaviours in silico. With this model, we simulate epiblast and trophectoderm morphogenesis during implantation. First, our results uphold experimental findings that repulsion at the apical surface of the epiblast is essential to drive lumenogenesis. Then, we provide new theoretical evidence that trophectoderm morphogenesis can indeed dictate the cup shape of the epiblast and foster its movement towards the uterine tissue. Our results offer novel mechanical insights into mouse peri-implantation and highlight the usefulness of agent-based modelling methods in the study of embryogenesis.


Subject(s)
Endoderm/cytology; Germ Layers/cytology; Models, Biological; Animals; Cell Proliferation; Embryo Implantation; Embryo, Mammalian/cytology; Embryo, Mammalian/metabolism; Embryonic Development; Endoderm/metabolism; Germ Layers/metabolism; Mice
13.
Comput Biol Med ; 135: 104596, 2021 08.
Article in English | MEDLINE | ID: mdl-34247133

ABSTRACT

There has been a substantial amount of research involving computer methods and technology for the detection and recognition of diabetic foot ulcers (DFUs), but there is a lack of systematic comparisons of state-of-the-art deep learning object detection frameworks applied to this problem. DFUC2020 provided participants with a comprehensive dataset consisting of 2,000 images for training and 2,000 images for testing. This paper summarizes the results of DFUC2020 by comparing the deep learning-based algorithms proposed by the winning teams: Faster R-CNN, three variants of Faster R-CNN and an ensemble method; YOLOv3; YOLOv5; EfficientDet; and a new Cascade Attention Network. For each deep learning method, we provide a detailed description of model architecture, parameter settings for training and additional stages including pre-processing, data augmentation and post-processing. We provide a comprehensive evaluation for each method. All the methods required a data augmentation stage to increase the number of images available for training and a post-processing stage to remove false positives. The best performance was obtained from Deformable Convolution, a variant of Faster R-CNN, with a mean average precision (mAP) of 0.6940 and an F1-Score of 0.7434. Finally, we demonstrate that the ensemble method based on different deep learning methods can enhance the F1-Score but not the mAP.
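Methods above were compared by mean average precision and F1-Score. mAP requires ranking detections across the whole test set, but the F1-Score itself is just the harmonic mean of precision and recall; a sketch from raw detection counts (the counts below are invented, not the challenge's):

```python
def f1_score(tp, fp, fn):
    """F1-Score from detection counts: harmonic mean of
    precision (tp / (tp + fp)) and recall (tp / (tp + fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy counts: 150 correct detections, 50 false positives, 60 missed ulcers.
print(round(f1_score(tp=150, fp=50, fn=60), 4))  # ≈ 0.7317
```

A post-processing stage that removes false positives raises precision (smaller `fp`), which is why all the winning methods benefited from one.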


Subject(s)
Deep Learning; Diabetes Mellitus; Diabetic Foot; Algorithms; Diabetic Foot/diagnosis; Humans; Research Design
14.
touchREV Endocrinol ; 17(1): 5-11, 2021 Apr.
Article in English | MEDLINE | ID: mdl-35118441

ABSTRACT

Every 20 seconds a limb is amputated somewhere in the world due to diabetes. This is a global health problem that requires a global solution. The International Conference on Medical Image Computing and Computer Assisted Intervention challenge, which concerns the automated detection of diabetic foot ulcers (DFUs) using machine learning techniques, will accelerate the development of innovative healthcare technology to address this unmet medical need. In an effort to improve patient care and reduce the strain on healthcare systems, recent research has focused on the creation of cloud-based detection algorithms. These can be consumed as a service by a mobile app that patients (or a carer, partner or family member) could use themselves at home to monitor their condition and to detect the appearance of a DFU. Collaborative work between Manchester Metropolitan University, Lancashire Teaching Hospitals and the Manchester University NHS Foundation Trust has created a repository of 4,000 DFU images for the purpose of supporting research toward more advanced methods of DFU detection. This paper presents a dataset description and analysis, assessment methods, benchmark algorithms and initial evaluation results. It facilitates the challenge by providing useful insights into state-of-the-art and ongoing research.

15.
Artif Intell Med ; 107: 101880, 2020 07.
Article in English | MEDLINE | ID: mdl-32828439

ABSTRACT

In current breast ultrasound computer-aided diagnosis systems, the radiologist preselects a region of interest (ROI) as an input for computerised breast ultrasound image analysis. This task is time consuming and there is inconsistency among human experts. Researchers attempting to automate the process of obtaining the ROIs have been relying on image processing and conventional machine learning methods. We propose the use of a deep learning method for breast ultrasound ROI detection and lesion localisation. We use Faster-RCNN with Inception-ResNet-v2, a highly accurate deep learning object detection framework, as our network. Due to the lack of datasets, we use transfer learning and propose a new 3-channel artificial RGB method to improve the overall performance. We evaluate and compare the performance of our proposed methods on two datasets (namely, Dataset A and Dataset B), both within the individual datasets and on the composite dataset. We report the lesion detection results with two types of analysis: (1) detected point (centre of the segmented region or the detected bounding box) and (2) Intersection over Union (IoU). Our results demonstrate that the proposed methods achieved comparable results on detected point but notable improvement on IoU. In addition, our proposed 3-channel artificial RGB method improves the recall on Dataset A. Finally, we outline some future directions for the research.
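Of the two reported analyses, Intersection over Union for detected bounding boxes can be sketched as follows (the box format and toy coordinates are our own):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)  # overlap extent
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

gt   = (0, 0, 10, 10)   # ground truth lesion box
pred = (5, 0, 15, 10)   # detector output, shifted right
print(box_iou(gt, pred))  # 50 / 150 ≈ 0.333
```

Unlike the detected-point criterion, which only asks whether the predicted centre falls inside the lesion, IoU also penalises boxes that are poorly sized or offset, which is why the two analyses can disagree.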


Subject(s)
Deep Learning; Diagnosis, Computer-Assisted; Female; Humans; Image Processing, Computer-Assisted; Machine Learning; Ultrasonography, Mammary
16.
Comput Biol Med ; 121: 103774, 2020 06.
Article in English | MEDLINE | ID: mdl-32339095

ABSTRACT

In recent years, the use of Convolutional Neural Networks (CNNs) in medical imaging has shown improved performance in terms of mass detection and classification compared to current state-of-the-art methods. This paper proposes a fully automated framework to detect masses in Full-Field Digital Mammograms (FFDM). It is based on the Faster Region-based Convolutional Neural Network (Faster-RCNN) model and is applied to detecting masses in the large-scale OPTIMAM Mammography Image Database (OMI-DB), which consists of ∼80,000 FFDMs mainly from Hologic and General Electric (GE) scanners. This research is the first to benchmark the performance of deep learning on OMI-DB. The proposed framework obtained a True Positive Rate (TPR) of 0.93 at 0.78 False Positives per Image (FPI) on FFDMs from the Hologic scanner. Transfer learning is then used on the Faster R-CNN model trained on Hologic images to detect masses in smaller databases containing FFDMs from the GE scanner and another public dataset, INbreast (Siemens scanner). The detection framework obtained a TPR of 0.91±0.06 at 1.69 FPI for images from the GE scanner, and also showed higher performance compared to state-of-the-art methods on the INbreast dataset, obtaining a TPR of 0.99±0.03 at 1.17 FPI for malignant and 0.85±0.08 at 1.0 FPI for benign masses. These results show the potential of the framework to be used as part of an advanced CAD system for breast cancer screening.


Subject(s)
Breast Neoplasms; Deep Learning; Breast Neoplasms/diagnostic imaging; Diagnosis, Computer-Assisted; Early Detection of Cancer; Female; Humans; Mammography; Neural Networks, Computer
17.
Comput Biol Med ; 117: 103616, 2020 02.
Article in English | MEDLINE | ID: mdl-32072964

ABSTRACT

Recognition and analysis of Diabetic Foot Ulcers (DFU) using computerized methods is an emerging research area with the evolution of image-based machine learning algorithms. Existing research using visual computerized methods mainly focuses on recognition, detection, and segmentation of the visual appearance of the DFU as well as tissue classification. According to DFU medical classification systems, the presence of infection (bacteria in the wound) and ischaemia (inadequate blood supply) has important clinical implications for DFU assessment, which are used to predict the risk of amputation. In this work, we propose a new dataset and computer vision techniques to identify the presence of infection and ischaemia in DFU. This is the first time a DFU dataset with ground truth labels of ischaemia and infection cases is introduced for research purposes. For the handcrafted machine learning approach, we propose a new feature descriptor, namely the Superpixel Colour Descriptor. Then we use the Ensemble Convolutional Neural Network (CNN) model for more effective recognition of ischaemia and infection. We propose to use a natural data-augmentation method, which identifies the region of interest on foot images and focuses on finding the salient features existing in this area. Finally, we evaluate the performance of our proposed techniques on binary classification, i.e. ischaemia versus non-ischaemia and infection versus non-infection. Overall, our method performed better in the classification of ischaemia than infection. We found that our proposed Ensemble CNN deep learning algorithms performed better for both classification tasks as compared to handcrafted machine learning algorithms, with 90% accuracy in ischaemia classification and 73% in infection classification.


Subject(s)
Diabetes Mellitus; Diabetic Foot; Algorithms; Diabetic Foot/diagnostic imaging; Humans; Ischemia/diagnostic imaging; Machine Learning; Neural Networks, Computer
18.
J Neurosci Methods ; 328: 108440, 2019 12 01.
Article in English | MEDLINE | ID: mdl-31560929

ABSTRACT

BACKGROUND: Previous studies have demonstrated that analysing whisker movements and locomotion allows us to quantify the behavioural consequences of sensory, motor and cognitive deficits in rodents. Independent whisker and feet trackers exist, but there is no fully automated, open-source software and hardware solution that measures both whisker movements and gait. NEW METHOD: We present the LocoWhisk arena and new accompanying software (ARTv2) that allow the automatic detection and measurement of both whisker and gait information from high-speed video footage. RESULTS: We demonstrate the new whisker and foot detector algorithms on high-speed video footage of freely moving small mammals, and show that whisker movement and gait measurements collected in the LocoWhisk arena are similar to previously reported values in the literature. COMPARISON WITH EXISTING METHOD(S): We demonstrate that the whisker and foot detector algorithms are comparable in accuracy to, and in some cases significantly better than, readily available software and manual trackers. CONCLUSION: The LocoWhisk system enables the collection of quantitative data on whisker movements and locomotion in freely behaving rodents. The software automatically records both whisker and gait information and provides added statistical tools to analyse the data. We hope the LocoWhisk system and software will serve as a solid foundation from which to support future research in whisker and gait analysis.


Subject(s)
Behavior, Animal/physiology; Exploratory Behavior/physiology; Gait/physiology; Image Processing, Computer-Assisted/methods; Locomotion/physiology; Neurosciences/methods; Vibrissae/physiology; Animals; Image Processing, Computer-Assisted/standards; Mice; Neurosciences/standards; Rats; Software/standards; Video Recording
19.
J Med Imaging (Bellingham) ; 6(3): 031409, 2019 Jul.
Article in English | MEDLINE | ID: mdl-35834317

ABSTRACT

With recent advances in the field of deep learning, the use of convolutional neural networks (CNNs) in medical imaging has become very promising. The aim of our paper is to propose a patch-based CNN method for automated mass detection in full-field digital mammograms (FFDM). In addition to evaluating CNNs pretrained on the ImageNet dataset, we investigate the use of transfer learning for a particular domain adaptation. First, the CNN is trained using a large public database of digitized mammograms (the CBIS-DDSM dataset), and then the model is transferred to and tested on the smaller database of digital mammograms (the INbreast dataset). We evaluate three widely used CNNs (VGG16, ResNet50, InceptionV3) and show that InceptionV3 obtains the best performance in classifying mass and non-mass breast regions on CBIS-DDSM. We further show the benefit of domain adaptation between the CBIS-DDSM (digitized) and INbreast (digital) datasets using the InceptionV3 CNN. Mass detection evaluation follows a fivefold cross-validation strategy using free-response operating characteristic curves. Results show that transfer learning from CBIS-DDSM obtains substantially higher performance, with a best true positive rate (TPR) of 0.98 ± 0.02 at 1.67 false positives per image (FPI), compared with transfer learning from ImageNet, with a TPR of 0.91 ± 0.07 at 2.1 FPI. In addition, the proposed framework improves upon the mass detection results described in the literature on the INbreast database, in terms of both TPR and FPI.

20.
IEEE J Biomed Health Inform ; 23(4): 1730-1741, 2019 07.
Article in English | MEDLINE | ID: mdl-30188841

ABSTRACT

Current practice for diabetic foot ulcer (DFU) screening involves detection and localization by podiatrists. Existing automated solutions focus on either segmentation or classification. In this work, we design deep learning methods for real-time DFU localization. To produce a robust deep learning model, we collected an extensive database of 1775 images of DFU. Two medical experts produced the ground truths for this dataset by outlining the regions of interest of DFU with annotation software. Using five-fold cross-validation, Faster R-CNN with the InceptionV2 model using two-tier transfer learning achieved the best overall results: a mean average precision of 91.8%, an inference speed of 48 ms for a single image, and a model size of 57.2 MB. To demonstrate the robustness and practicality of our solution for real-time prediction, we evaluated the performance of the models on an NVIDIA Jetson TX2 and in a smartphone app. This work demonstrates the capability of deep learning for real-time localization of DFU, which can be further improved with a more extensive dataset.


Subject(s)
Diabetic Foot/diagnostic imaging; Foot/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Mobile Applications; Equipment Design; Humans; Image Interpretation, Computer-Assisted/instrumentation; Smartphone