Results 1 - 16 of 16
1.
Med Image Anal; 89: 102920, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37572414

ABSTRACT

Electron microscopy (EM) enables high-resolution imaging of tissues and cells based on 2D and 3D imaging techniques. Because manual segmentation of large-scale EM datasets is laborious and time-consuming, automated segmentation approaches are crucial. This review focuses on deep learning-based segmentation techniques for large-scale cellular EM over the last six years, during which significant progress has been made in both semantic and instance segmentation. A detailed account is given of the key datasets that contributed to the proliferation of deep learning in 2D and 3D EM segmentation. The review covers supervised, unsupervised, and self-supervised learning methods and examines how these algorithms were adapted to the task of segmenting cellular and sub-cellular structures in EM images. The special challenges posed by such images, such as heterogeneity and spatial complexity, and the network architectures that overcame some of them are described. Moreover, an overview is given of the evaluation measures used to benchmark EM datasets in various segmentation tasks. Finally, an outlook on current trends and future prospects of EM segmentation is given, especially regarding the use of large-scale models and unlabeled images to learn generic features across EM datasets.


Subject(s)
Deep Learning; Humans; Image Processing, Computer-Assisted/methods; Microscopy, Electron; Algorithms; Imaging, Three-Dimensional/methods
2.
Prev Vet Med; 210: 105812, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36521412

ABSTRACT

Dystocia, or difficult calving, in cattle is detrimental to the health of the afflicted cows and has a negative economic impact on the dairy industry. The goal of this study was to create a data-driven tool for predicting the calving difficulty of non-heifer cows using input variables that are known prior to the moment of insemination. In contrast to past studies, we excluded input variables that can only be known during or after insemination, such as birth weight and gestation length. This makes the model suitable for informing mating decisions that could reduce the incidence of difficult calvings or mitigate their consequences. We used a dataset consisting of 131,527 calving records of Holstein cattle, from which we derived a total of 274 phenotypic features and estimated breeding values. The distribution of classes in the dataset was 96.7% normal calvings and 3.3% difficult calvings. We used gradient boosted trees (XGBoost) as the learning model and a bagging ensemble approach to deal with the extreme class imbalance. The model achieved an average area under the ROC curve of 0.73 on unseen test data. Using feature importance analysis, we identified a number of features that have a high discriminatory value for calving difficulty, including maternal and paternal breeding values and past phenotypic measurements of the cow.
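
The abstract does not give implementation details, but a minimal sketch of the described setup, a bagging ensemble of XGBoost classifiers in which each bag keeps all difficult calvings and an undersample of normal ones, could look as follows (function names, hyperparameters, and the 1:1 undersampling ratio are illustrative assumptions, not taken from the paper; X and y are assumed to be NumPy arrays):

```python
import numpy as np
import xgboost as xgb

def fit_bagged_xgb(X, y, n_bags=10, seed=0):
    """Train one XGBoost model per bag; each bag keeps all positives (difficult
    calvings, y == 1) and an equally sized random undersample of negatives."""
    rng = np.random.default_rng(seed)
    pos_idx = np.flatnonzero(y == 1)
    neg_idx = np.flatnonzero(y == 0)
    models = []
    for _ in range(n_bags):
        sampled_neg = rng.choice(neg_idx, size=len(pos_idx), replace=False)
        idx = np.concatenate([pos_idx, sampled_neg])
        model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
        model.fit(X[idx], y[idx])
        models.append(model)
    return models

def predict_bagged(models, X):
    """Average the predicted positive-class probabilities of all bagged models."""
    return np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)

# Usage (hypothetical): scores = predict_bagged(fit_bagged_xgb(X_train, y_train), X_test)
# and an ROC curve can then be computed from scores against the true test labels.
```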


Subject(s)
Cattle Diseases; Dairying; Dystocia; Animals; Cattle; Female; Pregnancy; Birth Weight; Cattle Diseases/diagnosis; Dairying/methods; Dystocia/diagnosis; Dystocia/veterinary; Insemination; Reproduction; Risk Factors
3.
Am J Pathol; 191(9): 1520-1525, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34197776

ABSTRACT

The u-serrated immunodeposition pattern in direct immunofluorescence (DIF) microscopy is a recognizable feature that confirms the diagnosis of epidermolysis bullosa acquisita (EBA). Due to unfamiliarity with serrated patterns, serration pattern recognition is still of limited use in routine DIF microscopy. The objective of this study was to investigate the feasibility of using convolutional neural networks (CNNs) for the recognition of u-serrated patterns that can assist in the diagnosis of EBA. The nine most commonly used CNNs were trained and validated using 220,800 manually delineated DIF image patches from 106 images of 46 different patients. The data set was split into 10 subsets: nine training subsets from 42 patients to train the CNNs, and the remaining subset, from the other four patients, to validate diagnostic accuracy. This process was repeated 10 times with a different subset used for validation. The best-performing CNN achieved a specificity of 89.3% and a corresponding sensitivity of 89.3% in the classification of u-serrated DIF image patches, an expert level of diagnostic accuracy. The experiments and results show the effectiveness of CNN approaches for u-serrated pattern recognition with high accuracy. The proposed approach can assist clinicians and pathologists in recognizing u-serrated patterns in DIF images and facilitate the diagnosis of EBA.
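
A minimal sketch of such patient-wise cross-validation is shown below; a logistic-regression classifier stands in for the CNNs purely to keep the example self-contained, and all names are illustrative assumptions rather than the authors' code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import GroupKFold

def cross_validate_by_patient(X, y, patient_ids, n_splits=10):
    """Return mean (sensitivity, specificity) over folds in which patches from
    the same patient never appear in both training and validation sets."""
    gkf = GroupKFold(n_splits=n_splits)
    scores = []
    for train_idx, val_idx in gkf.split(X, y, groups=patient_ids):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        tn, fp, fn, tp = confusion_matrix(y[val_idx], clf.predict(X[val_idx])).ravel()
        scores.append((tp / (tp + fn), tn / (tn + fp)))   # (sensitivity, specificity)
    return np.mean(scores, axis=0)
```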


Subject(s)
Epidermolysis Bullosa Acquisita/diagnosis; Image Interpretation, Computer-Assisted/methods; Neural Networks, Computer; Epidermolysis Bullosa Acquisita/pathology; Fluorescent Antibody Technique, Direct; Humans; Microscopy, Fluorescence/methods; Sensitivity and Specificity
4.
Int J Sports Physiol Perform; 16(10): 1522-1531, 2021 Oct 01.
Article in English | MEDLINE | ID: mdl-33931574

ABSTRACT

PURPOSE: Staying injury-free is a major factor for success in sports. Although injuries are difficult to forecast, novel technologies and data-science applications could provide important insights. Our purpose was to use machine learning for the prediction of injuries in runners, based on detailed training logs. METHODS: Prediction of injuries was evaluated on a new data set of 74 high-level middle- and long-distance runners, over a period of 7 years. Two analytic approaches were applied. First, the training load from the previous 7 days was expressed as a time series, with each day's training being described by 10 features. These features were a combination of objective data from a global positioning system watch (eg, duration, distance), together with subjective data about the exertion and success of the training. Second, a training week was summarized by 22 aggregate features, and a time window of 3 weeks before the injury was considered. RESULTS: A predictive system based on bagged XGBoost machine-learning models resulted in receiver operating characteristic curves with average areas under the curve of 0.724 and 0.678 for the day and week approaches, respectively. The results of the day approach in particular reflect a reasonably high probability that our system makes correct injury predictions. CONCLUSIONS: Our machine-learning-based approach predicts a sizable portion of the injuries, in particular when the model is based on training-load data from the days preceding an injury. Overall, these results demonstrate the possible merits of using machine learning to predict injuries and tailor training programs for athletes.
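
As a rough illustration of the "day approach", the sketch below flattens the previous 7 days of training (10 features per day) into one sample per athlete-day; the column names and the exact labelling scheme are assumptions, not details from the paper:

```python
import numpy as np
import pandas as pd

def build_day_samples(log: pd.DataFrame, feature_cols, window=7):
    """log: one row per athlete-day, sorted by date within each athlete, with an
    'injured' 0/1 column and an 'athlete_id' column (assumed names).
    Returns (X, y) where X[i] holds the window of days preceding day i."""
    X, y = [], []
    for _, days in log.groupby("athlete_id"):
        values = days[feature_cols].to_numpy()
        labels = days["injured"].to_numpy()
        for t in range(window, len(days)):
            X.append(values[t - window:t].ravel())   # 7 days x 10 features
            y.append(labels[t])                      # injury on the following day
    return np.asarray(X), np.asarray(y)
```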


Subject(s)
Athletes; Machine Learning; Humans
6.
Sensors (Basel); 20(16), 2020 Aug 11.
Article in English | MEDLINE | ID: mdl-32796644

ABSTRACT

Face recognition is a valuable forensic tool for criminal investigators, since it helps identify individuals in criminal-activity scenarios such as fugitive searches or child sexual abuse cases. It is, however, a very challenging task, as it must handle low-quality images from real-world settings and fulfill real-time requirements. Deep learning approaches for face detection have proven very successful, but they require large computation power and processing time. In this work, we evaluate the speed-accuracy tradeoff of three popular deep-learning-based face detectors on the WIDER Face and UFDD data sets on several CPUs and GPUs. We also develop a regression model capable of estimating the performance, both in terms of processing time and accuracy. We expect this to become a very useful tool for end users in forensic laboratories to estimate the performance of different face detection options. Experimental results showed that the best speed-accuracy tradeoff is achieved with images resized to 50% of the original size on GPUs and to 25% of the original size on CPUs. Moreover, performance can be estimated using multiple linear regression models with a mean absolute error (MAE) of 0.113, which is very promising for the forensic field.
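
A minimal sketch of such a performance-estimation model is given below, fitting a multiple linear regression from configuration variables (resize factor, CPU vs. GPU) to runtime; the feature names and the few data points are invented purely for illustration and do not come from the paper:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical benchmark table: one row per (resize factor, device) measurement.
benchmarks = pd.DataFrame({
    "resize_factor": [1.0, 0.5, 0.25, 1.0, 0.5, 0.25],
    "is_gpu":        [1,   1,   1,    0,   0,   0],
    "runtime_s":     [0.9, 0.3, 0.12, 4.1, 1.5, 0.6],   # made-up numbers
})
X = benchmarks[["resize_factor", "is_gpu"]]
y = benchmarks["runtime_s"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```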


Subject(s)
Deep Learning; Face; Child; Forensic Sciences; Humans
7.
Front Robot AI; 7: 71, 2020.
Article in English | MEDLINE | ID: mdl-33501238

ABSTRACT

Falling is among the most damaging events elderly people may experience. With the ever-growing aging population, there is an urgent need for the development of fall detection systems. Thanks to the rapid development of sensor networks and the Internet of Things (IoT), human-computer interaction using sensor fusion has been regarded as an effective method to address the problem of fall detection. In this paper, we provide a literature survey of work conducted on elderly fall detection using sensor networks and IoT. Although various existing studies focus on fall detection with individual sensors, such as wearable devices and depth cameras, the performance of these systems is still not satisfactory, as they mostly suffer from high false alarm rates. The literature shows that fusing the signals of different sensors could result in higher accuracy and fewer false alarms, while improving the robustness of such systems. We approach this survey from different perspectives, including data collection, data transmission, sensor fusion, data analysis, security, and privacy. We also review the available benchmark data sets that have been used to quantify the performance of the proposed methods. The survey is meant to provide researchers in the field of elderly fall detection using sensor networks with a summary of progress achieved to date and to identify areas where further effort would be beneficial.

8.
IEEE Trans Image Process; 28(12): 5852-5866, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31247549

ABSTRACT

Delineation of curvilinear structures in images is an important basic step in several image processing applications, such as segmentation of roads or rivers in aerial images, vessels or staining membranes in medical images, and cracks in pavements and roads. Existing methods suffer from insufficient robustness to noise. In this paper, we propose a novel operator for the detection of curvilinear structures in images, which we demonstrate to be robust to various types of noise and effective in several applications. We call it RUSTICO, which stands for RobUST Inhibition-augmented Curvilinear Operator. It is inspired by the push-pull inhibition in visual cortex and takes as input the responses of two trainable B-COSFIRE filters of opposite polarity. The output of RUSTICO consists of a magnitude map and an orientation map. We carried out experiments on a data set of synthetic stimuli with noise drawn from different distributions, as well as on several benchmark data sets of retinal fundus images, pavement cracks, and aerial images, and on a new data set of rose bushes used for automatic gardening. We evaluated the performance of RUSTICO with a metric that considers the structural properties of line networks (connectivity, area, and length) and demonstrated that RUSTICO outperforms many existing methods with high statistical significance. RUSTICO exhibits high robustness to noise and texture.
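
The abstract describes the operator only at a high level; a minimal sketch of the push-pull combination step, assuming the per-orientation responses of the two opposite-polarity filters are already available, could look like this (the parameter name alpha for the inhibition strength is an assumption):

```python
import numpy as np

def push_pull_combine(excitatory, inhibitory, alpha=1.0):
    """excitatory, inhibitory: arrays of shape (n_orientations, H, W) holding the
    responses of two filters of opposite polarity at each considered orientation.
    alpha controls how strongly the inhibitory response is subtracted."""
    combined = np.maximum(excitatory - alpha * inhibitory, 0.0)   # rectified push-pull
    magnitude = combined.max(axis=0)        # strongest response per pixel
    orientation = combined.argmax(axis=0)   # index of the preferred orientation
    return magnitude, orientation
```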

9.
Sci Total Environ; 663: 162-169, 2019 May 01.
Article in English | MEDLINE | ID: mdl-30711582

ABSTRACT

Rapid increases in the concentration of radiocarbon in the atmosphere (Δ14C) have been identified in the years 774-775 CE and 993-994 CE (Miyake events) using annual measurements on known-age tree rings. The level of cosmic radiation implied by such increases could cause the failure of satellite telecommunication systems, and thus there is a need to model and predict them. In this work, we investigated several intelligent computational methods to identify similar events in the past. We apply state-of-the-art pattern matching techniques as well as feature representation, a procedure that is typically used in machine learning and classification. To validate our findings, we used as ground truth the two confirmed Miyake events and several other dates that have been proposed in the literature. We show that some of the methods used in this study successfully identify most of the ground-truth events (~1% false positive rate at 75% true positive rate). Our results show that computational methods can be used to identify comparable patterns of interest and hence potentially uncover sudden increases in Δ14C in the past.
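
The paper's exact matching procedures are not spelled out in the abstract; as a rough illustration, a sliding-window, z-normalised template match over an annual Δ14C series could look like the sketch below (all names and the normalisation choice are assumptions):

```python
import numpy as np

def match_template(series, template):
    """Return one correlation-like score per window position of the series.
    template could be, e.g., the shape of the 774-775 CE spike."""
    t = (template - template.mean()) / template.std()
    n, scores = len(template), []
    for i in range(len(series) - n + 1):
        w = series[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)   # z-normalise each window
        scores.append(float(np.dot(w, t) / n))
    return np.asarray(scores)

# Candidate years are the window positions whose score exceeds a chosen threshold.
```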

10.
Int J Med Inform; 122: 27-36, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30623781

ABSTRACT

Direct immunofluorescence (DIF) microscopy of a skin biopsy is used by physicians and pathologists to diagnose autoimmune bullous dermatoses (AIBD). This technique, used worldwide in medical laboratories, is the reference standard for diagnosis of AIBD. For diagnosis of subepidermal AIBD (sAIBD), two different types of serrated pattern of immunodepositions can be recognized in DIF images, namely n- and u-serrated patterns. The n-serrated pattern is typically found in bullous pemphigoid, the most common sAIBD. Presence of the u-serrated pattern indicates the sAIBD subtype epidermolysis bullosa acquisita (EBA), which has a different prognosis and requires a different treatment. The manual identification of these serrated patterns is learnable but challenging. We propose an automatic technique that is able to localize u-serrated patterns for automated computer-assisted diagnosis of EBA. The distinctive feature of u-serrated patterns, as compared to n-serrated patterns, is the presence of ridge-endings. We introduce a novel ridge-ending detector that uses inhibition-augmented trainable COSFIRE filters. We then apply a hierarchical clustering approach to detect candidate u-serrated patterns from the detected ridge-endings. For each detected u-serrated pattern we provide a score that indicates the reliability of its detection. To evaluate the proposed approach, we created a data set of 180 DIF images for serration pattern analysis. This data set consists of seven subsets that were obtained from various biopsy samples under different conditions. We achieve an average recognition rate of 82.2% for the u-serrated pattern on these 180 DIF images, which is comparable to the recognition rate achieved by experienced medical doctors and pathologists.
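
As an illustration of the clustering stage, the sketch below groups detected ridge-ending coordinates with agglomerative (hierarchical) clustering; the linkage type and distance threshold are assumed values, not those used in the paper:

```python
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_ridge_endings(points, max_dist=25.0):
    """points: (N, 2) array of ridge-ending (x, y) locations in one DIF image.
    Returns a cluster label per point; clusters containing several nearby
    ridge-endings are candidate u-serrated patterns."""
    Z = linkage(points, method="single")              # single-linkage agglomeration
    return fcluster(Z, t=max_dist, criterion="distance")
```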


Subject(s)
Autoimmune Diseases/diagnosis; Epidermolysis Bullosa Acquisita/diagnosis; Fluorescent Antibody Technique, Direct/instrumentation; Fluorescent Antibody Technique, Direct/methods; Image Interpretation, Computer-Assisted/methods; Autoimmune Diseases/diagnostic imaging; Diagnosis, Differential; Epidermolysis Bullosa Acquisita/diagnostic imaging; Humans; Reproducibility of Results
11.
J Anim Sci; 96(12): 4935-4943, 2018 Dec 03.
Article in English | MEDLINE | ID: mdl-30239725

ABSTRACT

The weight of a pig and the rate of its growth are key elements in pig production. In particular, predicting future growth is extremely useful, since it can help in determining feed costs, pen space requirements, and the age at which a pig reaches a desired slaughter weight. However, making these predictions is challenging, due to the natural variation in how individual pigs grow and the different causes of this variation. In this paper, we used machine learning, namely random forest (RF) regression, for predicting the age at which the slaughter weight of 120 kg is reached. Additionally, we used the variable importance score from RF to quantify the importance of different types of input data for that prediction. Data on 32,979 purebred Large White pigs were provided by Topigs Norsvin, consisting of phenotypic data and estimated breeding values (EBVs), along with pedigree and pedigree-genetic relationships. Moreover, we presented a 2-step data reduction procedure, based on random projections (RPs) and principal component analysis (PCA), to extract features from the pedigree and genetic similarity matrices for use as inputs in the prediction models. Our results showed that the relevant phenotypic features were the most effective in predicting the output (age at 120 kg), explaining approximately 62% of its variance (i.e., R2 = 0.62). Estimated breeding value, pedigree, or pedigree-genetic features each explain an additional 2% of variance when added to the phenotypic features, while explaining, respectively, 38%, 39%, and 34% of the variance when used separately.
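
A minimal sketch of the described 2-step reduction, a random projection followed by PCA applied to a relationship matrix, is given below; the component counts are illustrative assumptions rather than the paper's settings:

```python
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection

def reduce_relationship_matrix(A, n_rp=1000, n_pc=50, seed=0):
    """A: (n_animals, n_animals) pedigree or genetic relationship matrix.
    Step 1: Gaussian random projection to n_rp dimensions.
    Step 2: PCA down to n_pc components, usable as regression inputs."""
    rp = GaussianRandomProjection(n_components=n_rp, random_state=seed)
    projected = rp.fit_transform(A)
    return PCA(n_components=n_pc).fit_transform(projected)

# The reduced features could then be concatenated with the phenotypic features
# before fitting the random forest regressor.
```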


Subject(s)
Swine/growth & development; Swine/genetics; Animals; Body Weight; Breeding; Models, Biological
12.
Med Image Anal; 19(1): 46-57, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25240643

ABSTRACT

Retinal imaging provides a non-invasive opportunity for the diagnosis of several medical pathologies. The automatic segmentation of the vessel tree is an important pre-processing step which facilitates subsequent automatic processes that contribute to such diagnosis. We introduce a novel method for the automatic segmentation of vessel trees in retinal fundus images. We propose a filter that selectively responds to vessels, which we call B-COSFIRE, with B standing for bar, an abstraction of a vessel. It is based on the existing COSFIRE (Combination Of Shifted Filter Responses) approach. A B-COSFIRE filter achieves orientation selectivity by computing the weighted geometric mean of the outputs of a pool of Difference-of-Gaussians filters whose supports are aligned in a collinear manner. It achieves rotation invariance efficiently by simple shifting operations. The proposed filter is versatile, as its selectivity is determined from any given vessel-like prototype pattern in an automatic configuration process. We configure two B-COSFIRE filters, namely symmetric and asymmetric, that are selective for bars and bar-endings, respectively. We achieve vessel segmentation by summing up the responses of the two rotation-invariant B-COSFIRE filters followed by thresholding. The results that we achieve on three publicly available data sets (DRIVE: Se=0.7655, Sp=0.9704; STARE: Se=0.7716, Sp=0.9701; CHASE_DB1: Se=0.7585, Sp=0.9587) are higher than those of many state-of-the-art methods. The proposed segmentation approach is also very efficient, with a time complexity that is significantly lower than that of existing methods.
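
As a rough illustration of the combination step described above, the sketch below computes a weighted geometric mean of Difference-of-Gaussians response maps that have already been shifted onto the filter centre; the Gaussian weighting of the support points and the parameter sigma0 are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np

def weighted_geometric_mean(shifted_dog_responses, rhos, sigma0=5.0):
    """shifted_dog_responses: (K, H, W) DoG response maps, each already shifted so
    that the contribution of support point k is aligned with the filter centre.
    rhos: (K,) distances of the support points from the centre."""
    weights = np.exp(-np.asarray(rhos) ** 2 / (2 * sigma0 ** 2))
    weights = weights / weights.sum()                       # normalise exponents
    logs = np.log(np.maximum(shifted_dog_responses, 1e-12))  # avoid log(0)
    return np.exp(np.tensordot(weights, logs, axes=1))       # product of r_k ** w_k
```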


Subject(s)
Artificial Intelligence; Fluorescein Angiography/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Retinal Diseases/pathology; Retinal Vessels/pathology; Algorithms; Humans; Image Enhancement/methods; Observer Variation; Reproducibility of Results; Sensitivity and Specificity
13.
Article in English | MEDLINE | ID: mdl-25126068

ABSTRACT

The remarkable abilities of the primate visual system have inspired the construction of computational models of some visual neurons. We propose a trainable hierarchical object recognition model, which we call S-COSFIRE (S stands for Shape and COSFIRE stands for Combination Of Shifted FIlter REsponses), and use it to localize and recognize objects of interest embedded in complex scenes. It is inspired by the visual processing in the ventral stream (V1/V2 → V4 → TEO). Recognition and localization of objects embedded in complex scenes is important for many computer vision applications. Most existing methods require prior segmentation of the objects from the background, which in turn requires recognition. An S-COSFIRE filter is automatically configured to be selective for an arrangement of contour-based features that belong to a prototype shape specified by an example. The configuration comprises selecting relevant vertex detectors and determining certain blur and shift parameters. The response is computed as the weighted geometric mean of the blurred and shifted responses of the selected vertex detectors. S-COSFIRE filters share similar properties with some neurons in inferotemporal cortex, which provided inspiration for this work. We demonstrate the effectiveness of S-COSFIRE filters in two applications: letter and keyword spotting in handwritten manuscripts, and object spotting in complex scenes for the computer vision system of a domestic robot. S-COSFIRE filters are effective in recognizing and localizing (deformable) objects in images of complex scenes without requiring prior segmentation. They are versatile trainable shape detectors, conceptually simple and easy to implement. The presented hierarchical shape representation contributes to a better understanding of the brain and to more robust computer vision algorithms.

14.
PLoS One; 9(7): e98424, 2014.
Article in English | MEDLINE | ID: mdl-25057813

ABSTRACT

We propose a computational model of a simple cell with push-pull inhibition, a property that is observed in many real simple cells. It is based on an existing model called Combination of Receptive Fields, or CORF for brevity. A CORF model uses as afferent inputs the responses of model LGN cells with appropriately aligned center-surround receptive fields, and combines their outputs with a weighted geometric mean. The output of the proposed simple-cell model with push-pull inhibition, which we call push-pull CORF, is computed as the response of a CORF model cell that is selective for a stimulus with preferred orientation and preferred contrast, minus a fraction of the response of a CORF model cell that responds to the same stimulus but of opposite contrast. We demonstrate that the proposed push-pull CORF model improves the signal-to-noise ratio (SNR) and achieves further properties that are observed in real simple cells, namely separability of spatial frequency and orientation, as well as contrast-dependent changes in spatial frequency tuning. We also demonstrate the effectiveness of the proposed push-pull CORF model in contour detection, which is believed to be the primary biological role of simple cells. We use the RuG (40 images) and Berkeley (500 images) benchmark data sets of images of natural scenes and show that the proposed model outperforms, with very high statistical significance, the basic CORF model without inhibition, Gabor-based models with isotropic surround inhibition, and the Canny edge detector. The push-pull CORF model that we propose is a contribution to a better understanding of how visual information is processed in the brain, as it provides the ability to reproduce a wider range of properties exhibited by real simple cells. As a result of push-pull inhibition, the CORF model exhibits an improved SNR, which is the reason for its more effective contour detection.
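
In symbols, the push-pull computation described above can be summarised as follows (the notation, including the inhibition fraction k, is ours rather than the paper's):

```latex
% r^{+}: response of the CORF cell selective for the preferred contrast
% r^{-}: response of the CORF cell selective for the opposite contrast
% k    : fraction of the antagonistic response that is subtracted
r_{\mathrm{push\text{-}pull}} = r^{+} - k\, r^{-}, \qquad 0 \le k \le 1
```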


Subject(s)
Contrast Sensitivity/physiology; Models, Neurological; Neural Inhibition/physiology; Neurons; Animals; Computational Biology/methods; Computer Simulation; Evoked Potentials, Visual; Humans; Neurons/cytology; Neurons/physiology; Orientation/physiology; Signal-To-Noise Ratio; Spatial Processing/physiology; Visual Pathways/physiology
15.
IEEE Trans Pattern Anal Mach Intell; 35(2): 490-503, 2013 Feb.
Article in English | MEDLINE | ID: mdl-22585100

ABSTRACT

BACKGROUND: Keypoint detection is important for many computer vision applications. Existing methods suffer from insufficient selectivity regarding the shape properties of features and are vulnerable to contrast variations and to the presence of noise or texture. METHODS: We propose a trainable filter which we call Combination Of Shifted FIlter REsponses (COSFIRE) and use for keypoint detection and pattern recognition. It is automatically configured to be selective for a local contour pattern specified by an example. The configuration comprises selecting given channels of a bank of Gabor filters and determining certain blur and shift parameters. A COSFIRE filter response is computed as the weighted geometric mean of the blurred and shifted responses of the selected Gabor filters. It shares similar properties with some shape-selective neurons in visual cortex, which provided inspiration for this work. RESULTS: We demonstrate the effectiveness of the proposed filters in three applications: the detection of retinal vascular bifurcations (DRIVE dataset: 98.50 percent recall, 96.09 percent precision), the recognition of handwritten digits (MNIST dataset: 99.48 percent correct classification), and the detection and recognition of traffic signs in complex scenes (100 percent recall and precision). CONCLUSIONS: The proposed COSFIRE filters are conceptually simple and easy to implement. They are versatile keypoint detectors and are highly effective in practical computer vision applications.


Subject(s)
Algorithms; Artificial Intelligence; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
16.
Biol Cybern; 106(3): 177-89, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22526357

ABSTRACT

Simple cells in primary visual cortex are believed to extract local contour information from a visual scene. The 2D Gabor function (GF) model has gained particular popularity as a computational model of a simple cell. However, it bypasses the LGN, it cannot reproduce a number of properties of real simple cells, and its effectiveness in contour detection tasks has never been compared with that of alternative models. We propose a computational model that uses as afferent inputs the responses of model LGN cells with center-surround receptive fields (RFs), and we refer to it as a Combination of Receptive Fields (CORF) model. We use shifted gratings as test stimuli and simulated reverse correlation to explore the nature of the proposed model. We study its behavior regarding the effect of contrast on its response and orientation bandwidth, as well as the effect of an orthogonal mask on the response to an optimally oriented stimulus. We also evaluate and compare the performance of the CORF and GF models in contour detection, using two public data sets of images of natural scenes with associated contour ground truths. The RF map of the proposed CORF model, determined with simulated reverse correlation, can be divided into elongated excitatory and inhibitory regions typical of simple cells. The modulated response to shifted gratings that this model shows is also characteristic of a simple cell. Furthermore, the CORF model exhibits cross-orientation suppression, contrast-invariant orientation tuning, and response saturation. These properties are observed in real simple cells but are not possessed by the GF model. The proposed CORF model outperforms the GF model in contour detection with high statistical confidence (RuG data set: p < 10^-4; Berkeley data set: p < 10^-4). The proposed CORF model is more realistic than the GF model and is more effective in contour detection, which is assumed to be the primary biological role of simple cells.


Subject(s)
Models, Theoretical; Neurons/cytology; Visual Cortex/cytology