Results 1 - 19 of 19
1.
Sensors (Basel); 23(16), 2023 Aug 18.
Article in English | MEDLINE | ID: mdl-37631778

ABSTRACT

As pollinators, insects play a crucial role in ecosystem management and world food production. However, insect populations are declining, necessitating efficient insect monitoring methods. Existing methods analyze video or time-lapse images of insects in nature, but analysis is challenging because insects are small objects in complex and dynamic natural vegetation scenes. In this work, we provide a dataset of primarily honeybees visiting three different plant species during two months of the summer. The dataset consists of 107,387 annotated time-lapse images from multiple cameras, including 9423 annotated insects. We present a two-step method for detecting insects in time-lapse RGB images. First, the images are preprocessed with a motion-informed enhancement technique that uses motion and color cues to make insects more salient. Second, the enhanced images are fed into a convolutional neural network (CNN) object detector. The method improves on the deep learning object detectors You Only Look Once (YOLO) and Faster Region-based CNN (Faster R-CNN): with motion-informed enhancement, the average micro F1-score of the YOLO detector improves from 0.49 to 0.71, and that of the Faster R-CNN detector from 0.32 to 0.56. Our dataset and proposed method are a step toward automating the time-lapse camera monitoring of flying insects.
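The motion-informed enhancement step can be sketched as follows. This is a hypothetical illustration, not the paper's published algorithm: the function name, the simple blending rule, and the `alpha` parameter are all assumptions. The idea is to boost pixels where the frame-to-frame difference is large, so small moving insects stand out against static vegetation.

```python
import numpy as np

def motion_informed_enhance(frame, prev_frame, alpha=0.5):
    """Blend a color frame with its absolute difference from the previous
    frame, boosting regions where something moved (illustrative sketch)."""
    frame = frame.astype(np.float32)
    motion = np.abs(frame - prev_frame.astype(np.float32))
    enhanced = (1 - alpha) * frame + alpha * motion
    return np.clip(enhanced, 0, 255).astype(np.uint8)
```

A static background pixel keeps roughly half its intensity, while a pixel that changed between frames is amplified before being passed to the detector.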


Subjects
Ecosystem, Insects, Bees, Animals, Time-Lapse Imaging, Food, Motion (Physics)
2.
J Assist Reprod Genet; 38(7): 1675-1689, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34173914

ABSTRACT

Embryo selection within in vitro fertilization (IVF) is the process of evaluating qualities of fertilized oocytes (embryos) and selecting the best embryo(s) available within a patient cohort for subsequent transfer or cryopreservation. In recent years, artificial intelligence (AI) has been used extensively to improve and automate the embryo ranking and selection procedure by extracting relevant information from embryo microscopy images. The AI models are evaluated based on their ability to identify the embryo(s) with the highest chance(s) of achieving a successful pregnancy. Whether such evaluations should be based on ranking performance or pregnancy prediction, however, seems to divide studies. As such, a variety of performance metrics are reported, and comparisons between studies are often made on different outcomes and data foundations. Moreover, superiority of AI methods over manual human evaluation is often claimed based on retrospective data, without any mentions of potential bias. In this paper, we provide a technical view on some of the major topics that divide how current AI models are trained, evaluated and compared. We explain and discuss the most common evaluation metrics and relate them to the two separate evaluation objectives, ranking and prediction. We also discuss when and how to compare AI models across studies and explain in detail how a selection bias is inevitable when comparing AI models against current embryo selection practice in retrospective cohort studies.
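The ranking objective discussed above is commonly measured with the AUC, which for a binary outcome equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one — a pure ranking measure, independent of score calibration. A minimal sketch (function name and data are illustrative):

```python
def ranking_auc(scores, labels):
    """AUC as the probability that a random positive outranks a random
    negative; ties count half. `labels` are 0/1 outcomes."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to random ranking and 1.0 to a model that ranks every transferred positive embryo above every negative one.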


Subjects
Artificial Intelligence, Blastocyst/cytology, Computer-Assisted Image Processing/methods, Area Under the Curve, Blastocyst/physiology, Calibration, Cryopreservation, Factual Databases, Computer-Assisted Decision Making, Embryo Transfer/methods, Female, In Vitro Fertilization/methods, Humans, Pregnancy, Sample Size, Sensitivity and Specificity
3.
Sensors (Basel); 21(20), 2021 Oct 09.
Article in English | MEDLINE | ID: mdl-34695919

ABSTRACT

In agriculture, explainable deep neural networks (DNNs) can be used to pinpoint the discriminative parts of weeds in an image classification task, albeit at low resolution, to support weed control. This paper proposes a multi-layer attention procedure, based on a transformer combined with a fusion rule, that presents an interpretation of the DNN decision as a high-resolution attention map. The fusion rule is a weighted average that combines attention maps from different layers according to their saliency. Attention maps that explain why a weed is or is not assigned to a certain class help agronomists shape the high-resolution weed identification keys (WIK) that the model perceives. The model is trained and evaluated on two agricultural datasets containing plants grown under different conditions: the Plant Seedlings Dataset (PSD) and the Open Plant Phenotyping Dataset (OPPD). The model produces attention maps with highlighted requirements and information about misclassification, enabling cross-dataset evaluations. State-of-the-art comparisons show the classification improvements obtained by applying attention maps. Average accuracies of 95.42% and 96% are achieved for the negative and positive explanations of the PSD test sets, respectively. In the OPPD evaluations, accuracies of 97.78% and 97.83% are obtained for negative and positive explanations, respectively. Visual comparison of the attention maps also shows high-resolution information.
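The weighted-average fusion rule can be sketched generically. This is an assumption-laden illustration, not the paper's code: here the per-layer saliency weights are plain numbers supplied by the caller, and the fused map is simply rescaled to [0, 1].

```python
import numpy as np

def fuse_attention(maps, weights):
    """Saliency-weighted average of per-layer attention maps
    (illustrative sketch of a weighted-average fusion rule)."""
    maps = np.asarray(maps, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                        # normalize saliency weights
    fused = np.tensordot(w, maps, axes=1)  # sum_k w_k * map_k
    return fused / fused.max()             # rescale to [0, 1]
```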


Subjects
Attention, Neural Networks (Computer), Agriculture, Plant Weeds, Seedlings
4.
Sensors (Basel); 21(1), 2020 Dec 29.
Article in English | MEDLINE | ID: mdl-33383904

ABSTRACT

Crop mixtures are often beneficial in crop rotations, enhancing resource utilization and yield stability. While targeted management, dependent on the local species composition, has the potential to increase the crop value, it comes at a higher expense in terms of field surveys. As fine-grained species distribution mapping of within-field variation is typically unfeasible, the potential of targeted management remains an open research area. In this work, we propose a new method for determining the biomass species composition from high-resolution color images using a DeepLabv3+-based convolutional neural network. Data collection was performed at four separate experimental plot trial sites over three growing seasons. The method is thoroughly evaluated by predicting the biomass composition of different grass-clover mixtures using only an image of the canopy. With a relative biomass clover content prediction of R2 = 0.91, we present new state-of-the-art results across the widely varying sites. Combining the algorithm with an all-terrain vehicle (ATV)-mounted image acquisition system, we demonstrate a feasible method for robust coverage and species distribution mapping of 225 ha of mixed crops at a median capacity of 17 ha per hour at 173 images per hectare.

5.
Sensors (Basel); 17(11), 2017 Nov 09.
Article in English | MEDLINE | ID: mdl-29120383

ABSTRACT

In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR, and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles, and vegetation. All obstacles have ground truth object labels and geographic coordinates.

6.
Sensors (Basel); 17(12), 2017 Dec 17.
Article in English | MEDLINE | ID: mdl-29258215

ABSTRACT

Optimal fertilization of clover-grass fields relies on knowledge of the clover and grass fractions. This study shows how such knowledge can be obtained by automatically analyzing images collected in fields. A fully convolutional neural network was trained to create a pixel-wise classification of clover, grass, and weeds in red, green, and blue (RGB) images of clover-grass mixtures. The estimated clover fractions of the dry matter from the images were found to be highly correlated with the real clover fractions of the dry matter, making this a cheap and non-destructive way of monitoring clover-grass fields. The network was trained solely on simulated top-down images of clover-grass fields, yet it is able to distinguish clover, grass, and weed pixels in real images. The use of simulated images for training reduces the manual labor to a few hours, compared with more than 3000 h when all the real images are annotated for training. The network was tested on images with varied clover/grass ratios and achieved an overall pixel classification accuracy of 83.4%, while estimating the dry matter clover fraction with a standard deviation of 7.8%.
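Once a per-pixel classification exists, the image-level clover fraction is a simple pixel count. The sketch below assumes a hypothetical class coding (0 = soil, 1 = clover, 2 = grass, 3 = weed); the paper additionally maps pixel fractions to dry-matter fractions, which is omitted here.

```python
import numpy as np

def clover_fraction(label_map):
    """Clover share of vegetation pixels in a per-pixel classification.
    Class coding is hypothetical: 0 = soil, 1 = clover, 2 = grass, 3 = weed."""
    veg = np.count_nonzero(np.isin(label_map, (1, 2, 3)))
    clover = np.count_nonzero(label_map == 1)
    return clover / veg
```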

7.
Sensors (Basel); 16(11), 2016 Nov 11.
Article in English | MEDLINE | ID: mdl-27845717

ABSTRACT

Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained for detecting and classifying a predefined set of object types. These algorithms have difficulties in detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles such as people and animals occur rarely and are of distinct appearance compared to the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection to exploit the homogeneous characteristics of a field. We demonstrate DeepAnomaly as a fast state-of-the-art detector for obstacles that are distant, heavily occluded, and unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors, including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human detector test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45-90 m) than RCNN, while RCNN performs similarly at short range (0-30 m). However, DeepAnomaly has far fewer model parameters and a 7.28-times faster processing time per image (25 ms vs. 182 ms). The high accuracy, the low computation time, and the low memory footprint make it suitable, unlike most CNN-based methods, for a real-time system running on an embedded GPU (graphics processing unit).
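The core idea of exploiting field homogeneity can be sketched with a generic statistical anomaly detector: model the background as per-dimension Gaussians and flag feature vectors that deviate strongly. This is a simplified stand-in; DeepAnomaly applies the idea to intermediate CNN feature maps, not raw features as here.

```python
import numpy as np

def anomaly_mask(features, bg_samples, k=3.0):
    """Flag feature vectors more than k standard deviations from a
    background model fitted on homogeneous field samples (sketch)."""
    mu = bg_samples.mean(axis=0)
    sigma = bg_samples.std(axis=0) + 1e-8
    z = np.abs(features - mu) / sigma   # per-dimension z-score
    return z.max(axis=-1) > k           # anomalous if any dimension deviates
```

Anything statistically unlike the field — a person, an animal, an unknown object — is flagged without ever being in a training class.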

8.
Sensors (Basel); 15(9): 21407-26, 2015 Aug 28.
Article in English | MEDLINE | ID: mdl-26343675

ABSTRACT

In this paper, we introduce a novel approach to estimate the illumination and reflectance of an image. The approach is based on the illumination-reflectance model and wavelet theory. We use a homomorphic wavelet filter (HWF) and define a wavelet quotient image (WQI) model based on the dyadic wavelet transform. The illumination and reflectance components are estimated using the HWF and WQI, respectively. Based on this estimation, we develop an algorithm to segment sows in grayscale video recordings captured in complex farrowing pens. Experimental results demonstrate that the algorithm can detect domestic animals in complex environments involving light changes, motionless foreground objects, and dynamic backgrounds.
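The homomorphic decomposition rests on the multiplicative model I = L · R: a low-pass filter of log(I) approximates log(L), and the residual approximates log(R). In the sketch below, a plain box blur stands in for the paper's homomorphic wavelet filter, so this shows only the general principle.

```python
import numpy as np

def homomorphic_split(image, size=15):
    """Split an image into illumination and reflectance via the model
    I = L * R; a box blur stands in for the wavelet low-pass (sketch)."""
    log_i = np.log(image.astype(np.float64) + 1.0)
    pad = size // 2
    padded = np.pad(log_i, pad, mode='edge')
    kernel = np.ones(size) / size
    # separable box blur: columns, then rows
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, 'valid'), 1,
        np.apply_along_axis(lambda c: np.convolve(c, kernel, 'valid'), 0, padded))
    illumination = np.exp(blurred) - 1.0
    reflectance = np.exp(log_i - blurred)
    return illumination, reflectance
```

For a uniformly lit scene, the illumination estimate reproduces the image and the reflectance is flat, which is the sanity check below.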


Subjects
Computer-Assisted Image Processing/methods, Lighting/classification, Video Recording/methods, Algorithms, Animals, Humans, Swine
9.
Sensors (Basel); 15(3): 5096-111, 2015 Mar 02.
Article in English | MEDLINE | ID: mdl-25738766

ABSTRACT

Mechanical weeding is an important tool in organic farming. However, the use of mechanical weeding in conventional agriculture is increasing, due to public demands to lower the use of pesticides and an increased number of pesticide-resistant weeds. Ground-nesting birds are highly susceptible to farming operations like mechanical weeding, which may destroy the nests and reduce the survival of chicks and incubating females. This problem has received limited attention within agricultural engineering. However, as the number of machines increases, destruction of nests will have an impact on various species. It is therefore necessary to explore and develop new technology in order to avoid these negative ethical consequences. This paper presents a vision-based approach to automated ground nest detection. The algorithm is based on the fusion of visual saliency, which mimics human attention, and incremental background modeling, which enables foreground detection with moving cameras. The algorithm achieves a good detection rate, detecting 28 of 30 nests at an average distance of 3.8 m with a true positive rate of 0.75.


Subjects
Agriculture, Weed Control/methods, Animals, Birds/physiology, Female, Humans
10.
Sensors (Basel); 14(8): 13778-93, 2014 Jul 30.
Article in English | MEDLINE | ID: mdl-25196105

ABSTRACT

In agricultural mowing operations, thousands of animals are injured or killed each year due to the increased working widths and speeds of agricultural machinery. Detection and recognition of wildlife within agricultural fields is important to reduce wildlife mortality and, thereby, promote wildlife-friendly farming. The work presented in this paper contributes to the automated detection and classification of animals in thermal imaging. The methods and results are based on top-view images taken manually from a lift to motivate work towards unmanned aerial vehicle-based detection and recognition. Hot objects are detected based on a threshold dynamically adjusted to each frame. For the classification of animals, we propose a novel thermal feature extraction algorithm. For each detected object, a thermal signature is calculated using morphological operations. The thermal signature describes the heat characteristics of an object and is partly invariant to translation, rotation, scale, and posture. The discrete cosine transform (DCT) is used to parameterize the thermal signature and thereby calculate a feature vector for subsequent classification. Using a k-nearest-neighbor (kNN) classifier, animals are discriminated from non-animals with a balanced classification accuracy of 84.7% in an altitude range of 3-10 m and 75.2% in an altitude range of 10-20 m. To incorporate temporal information in the classification, a tracking algorithm is proposed. Using temporal information improves the balanced classification accuracy to 93.3% in an altitude range of 3-10 m and 77.7% in an altitude range of 10-20 m.
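The DCT-plus-kNN pipeline can be sketched compactly. The signatures, class labels, and coefficient count below are illustrative assumptions; the paper's signature extraction via morphological operations is not reproduced here.

```python
import numpy as np

def dct_features(signature, n_coeffs=4):
    """DCT-II of a 1-D thermal signature; the first few coefficients
    form a compact, smooth descriptor (sketch of the feature step)."""
    N = len(signature)
    n = np.arange(N)
    basis = np.cos(np.pi / N * (n[None, :] + 0.5) * np.arange(n_coeffs)[:, None])
    return basis @ np.asarray(signature, dtype=np.float64)

def knn_predict(x, train_feats, train_labels, k=1):
    """Plain k-nearest-neighbour vote in DCT feature space."""
    d = np.linalg.norm(np.asarray(train_feats) - x, axis=1)
    nearest = np.argsort(d)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```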


Subjects
Wild Animals/physiology, Computer-Assisted Image Processing/methods, Automated Pattern Recognition/methods, Algorithms, Animals, Artificial Intelligence, Cluster Analysis
11.
Opt Express; 20(3): 1953-62, 2012 Jan 30.
Article in English | MEDLINE | ID: mdl-22330436

ABSTRACT

Motion analysis of optically trapped objects is demonstrated using a simple 2D Fourier transform technique. The displacements of trapped objects are determined directly from the phase shift between the Fourier transforms of subsequent images. Using end- and side-view imaging, the stiffness of the trap is determined in three dimensions. The Fourier transform method is simple to implement and applicable in cases where the trapped object changes shape or where the lighting conditions change. This is illustrated by tracking a fluorescent particle and a myoblast cell, with subsequent determination of diffusion coefficients and trapping forces.
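The principle — displacement encoded in the Fourier phase — can be sketched with phase correlation. Note the difference from the paper: the paper reads sub-pixel displacements directly from the phase difference, whereas this sketch recovers whole-pixel shifts from the correlation peak.

```python
import numpy as np

def fft_shift_estimate(img_a, img_b):
    """Estimate the integer displacement of img_b relative to img_a from
    the phase of their 2D Fourier transforms (phase correlation sketch)."""
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = np.conj(Fa) * Fb
    cross /= np.abs(cross) + 1e-12          # keep only the phase
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks in the upper half-range to negative shifts
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return dy, dx
```

Because only the phase is used, the estimate is insensitive to global brightness changes, matching the paper's robustness argument.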


Subjects
Biological Models, Chemical Models, Myoblasts/physiology, Myoblasts/radiation effects, Nanoparticles/chemistry, Nanoparticles/radiation effects, Optical Tweezers, Animals, Cell Movement/physiology, Cell Movement/radiation effects, Cultured Cells, Computer Simulation, Fourier Analysis, Mice
12.
Sensors (Basel); 12(4): 3868-78, 2012.
Article in English | MEDLINE | ID: mdl-22666006

ABSTRACT

To enhance sensor capabilities, sensor data readings from different modalities must be fused. The main contribution of this paper is a sensor data fusion approach that can reduce the limitations of the Kinect(TM) sensor by combining it with a laser sensor. Sensor data are modelled in a 3D environment based on octrees using probabilistic occupancy estimation. A Bayesian method, which takes into account the uncertainty inherent in the sensor measurements, is used to fuse the sensor information and update the 3D octree map. The sensor fusion yields a significant increase in the field of view of the Kinect(TM) sensor, which can be exploited for robot tasks.
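The Bayesian occupancy update is conventionally done in log-odds, so fusing a new measurement into a cell is a single addition. The sketch below shows that update rule for a flat voxel dictionary; a real octree adds hierarchical storage and pruning, and the `p_hit`/`p_miss` values are illustrative.

```python
import math

def logodds(p):
    return math.log(p / (1 - p))

class OccupancyMap:
    """Minimal probabilistic occupancy map with Bayesian log-odds
    updates per voxel (sketch; octrees add hierarchy on top of this)."""
    def __init__(self, p_hit=0.7, p_miss=0.4):
        self.cells = {}               # voxel index -> accumulated log-odds
        self.l_hit = logodds(p_hit)
        self.l_miss = logodds(p_miss)
    def update(self, voxel, hit):
        delta = self.l_hit if hit else self.l_miss
        self.cells[voxel] = self.cells.get(voxel, 0.0) + delta
    def probability(self, voxel):
        l = self.cells.get(voxel, 0.0)   # unknown cells default to 0.5
        return 1.0 / (1.0 + math.exp(-l))
```

Measurements from the Kinect(TM) and the laser can both call `update` on the same map, which is exactly what makes the log-odds form convenient for multi-sensor fusion.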

13.
Sensors (Basel); 12(3): 3773-88, 2012.
Article in English | MEDLINE | ID: mdl-22737037

ABSTRACT

Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid habituation. In this paper, we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment using a shielded shotgun microphone. The classification used support vector machines (SVMs) trained with labeled data. Greenwood function cepstral coefficients (GFCCs) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified with this approach, and the method achieves good recognition of foraging behaviour (86-97% sensitivity, 89-98% precision) and reasonable recognition of flushing (79-86%, 66-80%) and landing behaviour (73-91%, 79-92%). The SVM has proven to be a robust classifier for this kind of task, where generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of wildlife species involved in conflicts and, as such, may be used as an integrated part of a wildlife management system.
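The SVM classification step can be illustrated with a toy linear SVM trained by sub-gradient descent on the hinge loss. This is a stand-in under stated assumptions: the paper classifies GFCC features (not reproduced here) and non-linear kernel SVMs are noted as important, while this sketch is linear and the data is synthetic.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal linear SVM via sub-gradient descent on the hinge loss;
    labels y are in {-1, +1} (toy sketch of the classification step)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) < 1:        # margin violated: hinge gradient
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # margin satisfied: only decay
                w -= lr * lam * w
    return w, b
```

In the paper's setting, each row of `X` would be a GFCC feature vector from a vocalisation segment and the label a behaviour class, with one-vs-rest SVMs handling the three classes.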

14.
Sensors (Basel); 12(9): 11697-711, 2012.
Article in English | MEDLINE | ID: mdl-23112678

ABSTRACT

Analysis of foot movement is essential in the treatment and prevention of foot-related disorders. Measuring in-shoe foot movement during everyday activities, such as sports, has the potential to become an important diagnostic tool in clinical practice. The current paper describes the development of a thin, flexible, and robust capacitive strain sensor for the in-shoe measurement of the navicular drop, a well-recognized measure of foot movement. The position of the strain sensor on the foot was analyzed to determine the optimal points of attachment. The sensor was evaluated against a state-of-the-art video-based system that tracks reflective markers on the bare foot. Preliminary experimental results show that the developed strain sensor is able to measure navicular drop on the bare foot with an accuracy on par with the video-based system and with high reproducibility. Temporal comparison of video-based, barefoot, and in-shoe measurements indicates that the developed sensor measures the navicular drop accurately in shoes and can be used without any discomfort for the user.


Subjects
Gait/physiology, Movement/physiology, Tarsal Bones/physiology, Foot/physiology, Humans, Shoes
15.
Sci Rep; 12(1): 8395, 2022 May 19.
Article in English | MEDLINE | ID: mdl-35589754

ABSTRACT

Classifying the state of the atmosphere into a finite number of large-scale circulation regimes is a popular way of investigating teleconnections, the predictability of severe weather events, and climate change. Here, we investigate a supervised machine learning approach based on deformable convolutional neural networks (deCNNs) and transfer learning to forecast the North Atlantic-European weather regimes during extended boreal winter for 1-15 days into the future. We apply state-of-the-art interpretation techniques from the machine learning literature to attribute particular regions of interest or potential teleconnections relevant for any given weather cluster prediction or regime transition. We demonstrate superior forecasting performance relative to several classical meteorological benchmarks, as well as logistic regression and random forests. Due to its wider field of view, we also observe deCNN achieving considerably better performance than regular convolutional neural networks at lead times beyond 5-6 days. Finally, we find transfer learning to be of paramount importance, similar to previous data-driven atmospheric forecasting studies.


Subjects
Neural Networks (Computer), Weather, Atmosphere, Forecasting, Machine Learning
16.
IEEE Trans Med Imaging; 41(2): 465-475, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34596537

ABSTRACT

With self-supervised learning, both labeled and unlabeled data can be used for representation learning and model pretraining. This is particularly relevant when automating the selection of a patient's fertilized eggs (embryos) during a fertility treatment, in which only the embryos that were transferred to the female uterus may have labels of pregnancy. In this paper, we apply a self-supervised video alignment method known as temporal cycle-consistency (TCC) on 38176 time-lapse videos of developing embryos, of which 14550 were labeled. We show how TCC can be used to extract temporal similarities between embryo videos and use these for predicting pregnancy likelihood. Our temporal similarity method outperforms the time alignment measurement (TAM) with an area under the receiver operating characteristic (AUC) of 0.64 vs. 0.56. Compared to existing embryo evaluation models, it places in between a pure temporal and a spatio-temporal model that both require manual annotations. Furthermore, we use TCC for transfer learning in a semi-supervised fashion and show significant performance improvements compared to standard supervised learning, when only a small subset of the dataset is labeled. Specifically, two variants of transfer learning both achieve an AUC of 0.66 compared to 0.63 for supervised learning when 16% of the dataset is labeled.


Subjects
Supervised Machine Learning, Female, Humans, Pregnancy, Probability, ROC Curve, Time-Lapse Imaging/methods
17.
Stroke; 40(6): 2055-61, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19359626

ABSTRACT

BACKGROUND AND PURPOSE: Perfusion-weighted imaging (PWI) can predict infarct growth in acute stroke and potentially be used to select patients with tissue at risk for reperfusion therapies. However, the lack of consensus and evidence on how to best create PWI maps that reflect tissue at risk challenges comparisons of results and acute decision-making in trials. Deconvolution using an arterial input function has been hypothesized to generate maps of a more quantitative nature and with better prognostic value than simpler summary measures such as time-to-peak or the first moment of the concentration time curve. We sought to compare 10 different perfusion parameters by their ability to predict tissue infarction in acute ischemic stroke. METHODS: In a retrospective analysis of 97 patients with acute stroke studied within 6 hours from symptom onset, we used receiver operating characteristics in a voxel-based analysis to compare 10 perfusion parameters: time-to-peak, first moment, cerebral blood volume and flow, and 6 variants of time to peak of the residue function and mean transit time maps. Subanalysis assessed the effect of reperfusion on outcome prediction. RESULTS: The most predictive maps were the summary measures first moment and time-to-peak. First moment was significantly more predictive than time to peak of the residue function and local arterial input function-based methods (P<0.05), but not significantly better than conventional mean transit time maps. CONCLUSIONS: Results indicated that if a single map type was to be used to predict infarction, first moment maps performed at least as well as deconvolved measures. Deconvolution decouples delay from tissue perfusion; we speculate this negatively impacts infarct prediction.
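The two non-deconvolved summary measures compared here are direct functions of each voxel's concentration-time curve: time-to-peak is the time of the curve's maximum, and the first moment is the curve's intensity-weighted mean time. A minimal sketch (array layout and function name are illustrative):

```python
import numpy as np

def summary_perfusion_maps(conc, t):
    """Time-to-peak and first moment of concentration-time curves.
    `conc` holds one curve per voxel (shape: voxels x timepoints),
    `t` the acquisition times (sketch of the two summary measures)."""
    conc = np.asarray(conc, float)
    t = np.asarray(t, float)
    ttp = t[np.argmax(conc, axis=1)]                      # time of curve maximum
    first_moment = (conc * t).sum(axis=1) / conc.sum(axis=1)  # weighted mean time
    return ttp, first_moment
```

Neither measure requires an arterial input function, which is precisely why they were compared against the deconvolved parameters.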


Subjects
Cerebrovascular Circulation/physiology, Computer-Assisted Image Processing/methods, Magnetic Resonance Imaging/methods, Stroke/pathology, Stroke/physiopathology, Aged, Brain Mapping, Cerebral Infarction/pathology, Factual Databases, Female, Humans, Male, Middle Aged, ROC Curve, Reperfusion
18.
Comput Biol Med; 115: 103494, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31630027

ABSTRACT

BACKGROUND: Blastocyst morphology is a predictive marker for implantation success of in vitro fertilized human embryos. Morphology grading is therefore commonly used to select the embryo with the highest implantation potential. One of the challenges, however, is that morphology grading can be highly subjective when performed manually by embryologists. Grading systems generally discretize a continuous scale of low to high score, resulting in floating and unclear boundaries between grading categories. Manual annotations therefore suffer from large inter- and intra-observer variances. METHOD: In this paper, we propose a method based on deep learning to automatically grade the morphological appearance of human blastocysts from time-lapse imaging. A convolutional neural network is trained to jointly predict inner cell mass (ICM) and trophectoderm (TE) grades from a single image frame, and a recurrent neural network is applied on top to incorporate temporal information of the expanding blastocysts from multiple frames. RESULTS: Results showed that the method achieved above human-level accuracies when evaluated on majority votes from an independent test set labeled by multiple embryologists. Furthermore, when evaluating implantation rates for embryos grouped by morphology grades, human embryologists and our method had a similar correlation between predicted embryo quality and pregnancy outcome. CONCLUSIONS: The proposed method has shown improved performance of predicting ICM and TE grades on human blastocysts when utilizing temporal information available with time-lapse imaging. The algorithm is considered at least on par with human embryologists on quality estimation, as it performed better than the average human embryologist at ICM and TE prediction and provided a slightly better correlation between predicted embryo quality and implantability than human embryologists.


Subjects
Blastocyst, Deep Learning, In Vitro Fertilization, Computer-Assisted Image Processing, Time-Lapse Imaging, Blastocyst/cytology, Blastocyst/metabolism, Female, Humans, Pregnancy
19.
Front Robot AI; 5: 28, 2018.
Article in English | MEDLINE | ID: mdl-33500915

ABSTRACT

Today, agricultural vehicles are available that can automatically perform tasks such as weed detection and spraying, mowing, and sowing while being steered automatically. However, for such systems to be fully autonomous and self-driven, more than their specific agricultural tasks must be automated: an accurate and robust perception system that automatically detects and avoids all obstacles must also be realized to ensure the safety of humans, animals, and other surroundings. In this paper, we present a multi-modal obstacle and environment detection and recognition approach for process evaluation in agricultural fields. The proposed pipeline detects and maps static and dynamic obstacles globally while providing process-relevant information along the traversed trajectory. Detection algorithms are introduced for a variety of sensor technologies, including range sensors (lidar and radar) and cameras (stereo and thermal). Detection information is mapped globally into semantic occupancy grid maps and fused across all sensors with late fusion, resulting in accurate traversability assessment and semantic mapping of process-relevant categories (e.g., crop, ground, and obstacles). Finally, a decoding step uses a hidden Markov model to extract relevant process-specific parameters along the trajectory of the vehicle, thus informing a potential control system of unexpected structures in the planned path. The method is evaluated on a public dataset for multi-modal obstacle detection in agricultural fields. Results show that a combination of multiple sensor modalities increases detection performance and that different fusion strategies must be applied between algorithms detecting similar and dissimilar classes.
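The HMM decoding step along the trajectory can be sketched with generic Viterbi decoding: per-step class evidence is smoothed by a sticky transition model, so isolated noisy detections along the path are suppressed. This is a generic sketch, not the paper's exact model; the states, probabilities, and observation likelihoods below are illustrative.

```python
import numpy as np

def viterbi(obs_loglik, log_trans, log_prior):
    """Most likely state sequence given per-step observation
    log-likelihoods (rows: steps, cols: states), a log transition
    matrix and log priors (generic Viterbi decoding sketch)."""
    T, S = obs_loglik.shape
    delta = log_prior + obs_loglik[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans        # previous state x next state
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + obs_loglik[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):                  # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With sticky transitions, a weak single-step detection of an obstacle amid crop is decoded as crop throughout, while a strong detection survives the smoothing.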
