Results 1 - 20 of 64
1.
J Med Imaging (Bellingham) ; 11(2): 024504, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38576536

ABSTRACT

Purpose: The Medical Imaging and Data Resource Center (MIDRC) was created to facilitate medical imaging machine learning (ML) research for tasks including early detection, diagnosis, prognosis, and assessment of treatment response related to the coronavirus disease 2019 pandemic and beyond. The purpose of this work was to create a publicly available metrology resource to assist researchers in evaluating the performance of their medical image analysis ML algorithms. Approach: An interactive decision tree, called MIDRC-MetricTree, has been developed, organized by the type of task that the ML algorithm was trained to perform. The criteria for this decision tree were that (1) users can select information such as the type of task, the nature of the reference standard, and the type of the algorithm output and (2) based on the user input, recommendations are provided regarding appropriate performance evaluation approaches and metrics, including literature references and, when possible, links to publicly available software/code as well as short tutorial videos. Results: Five types of tasks were identified for the decision tree: (a) classification, (b) detection/localization, (c) segmentation, (d) time-to-event (TTE) analysis, and (e) estimation. As an example, the classification branch of the decision tree includes two-class (binary) and multiclass classification tasks and provides suggestions for methods, metrics, software/code recommendations, and literature references for situations where the algorithm produces either binary or non-binary (e.g., continuous) output and for reference standards with negligible or non-negligible variability and unreliability. Conclusions: The publicly available decision tree is a resource to assist researchers in conducting task-specific performance evaluations, including classification, detection/localization, segmentation, TTE, and estimation tasks.
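As a rough illustration of how a task-indexed metric lookup of this kind could be organized, the following Python sketch maps task types to candidate evaluation metrics. The branch names mirror the abstract, but the dictionary structure, keys, and the specific metric suggestions are illustrative assumptions, not the contents of the actual MIDRC-MetricTree.

```python
# Illustrative sketch of a task-indexed metric recommendation lookup.
# Branch names follow the abstract; the keys and metric suggestions below
# are assumptions for illustration, not the MIDRC-MetricTree itself.
METRIC_TREE = {
    "classification": {
        "binary_output": ["sensitivity/specificity at an operating point", "Youden index"],
        "continuous_output": ["ROC analysis / AUC", "calibration curves"],
    },
    "detection/localization": ["FROC analysis", "localization ROC (LROC)"],
    "segmentation": ["Dice coefficient", "Hausdorff distance"],
    "time-to-event": ["Kaplan-Meier curves", "concordance index (c-index)"],
    "estimation": ["bias and variance of estimates", "Bland-Altman analysis"],
}

def recommend(task, output_type=None):
    """Return candidate evaluation metrics for a given task (illustrative only)."""
    branch = METRIC_TREE[task]
    if isinstance(branch, dict):
        return branch[output_type]
    return branch

if __name__ == "__main__":
    print(recommend("classification", "continuous_output"))
```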

2.
Ann Bot ; 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38551515

ABSTRACT

BACKGROUND AND AIMS: Structural colour is responsible for the remarkable metallic blue colour seen in the leaves of several plants. Species belonging to only ten genera have been investigated to date, revealing four photonic structures responsible for blue leaves. One of these is the helicoidal cell wall, known to create structural colour in the leaf cells of five taxa. Here we investigate a broad selection of land plants to understand the phylogenetic distribution of this photonic structure in leaves. METHODS: We identified helicoidal structures in the leaf epidermal cells of 19 species using transmission electron microscopy. Pitch measurements of the helicoids were compared to the reflectance spectra of circularly polarised light from the cells to confirm the structure-colour relationship. RESULTS: By incorporating species examined with a polarising filter, our results increase the number of taxa with photonic helicoidal cell walls to species belonging to at least 35 genera. These include 23 monocot genera, from the orders Asparagales (Orchidaceae) and Poales (Cyperaceae, Eriocaulaceae, Rapateaceae) and 17 fern genera, from the orders Marattiales (Marattiaceae), Schizaeales (Anemiaceae) and Polypodiales (Blechnaceae, Dryopteridaceae, Lomariopsidaceae, Polypodiaceae, Pteridaceae, Tectariaceae). CONCLUSIONS: Our investigation adds considerably to the recorded diversity of plants with structurally coloured leaves. The iterative evolution of photonic helicoidal walls has resulted in a broad phylogenetic distribution, centred on ferns and monocots. We speculate that the primary function of the helicoidal wall is to provide strength and support, so structural colour could have evolved as a potentially beneficial chance function of this structure.

3.
Med Phys ; 51(3): 1812-1821, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37602841

ABSTRACT

BACKGROUND: Artificial intelligence/computer-aided diagnosis (AI/CADx) and its use of radiomics have shown potential in diagnosis and prognosis of breast cancer. Performance metrics such as the area under the receiver operating characteristic (ROC) curve (AUC) are frequently used as figures of merit for the evaluation of CADx. Methods for evaluating lesion-based measures of performance may enhance the assessment of AI/CADx pipelines, particularly in the situation of comparing performances by classifier. PURPOSE: The purpose of this study was to investigate the use case of two standard classifiers to (1) compare overall classification performance of the classifiers in the task of distinguishing between benign and malignant breast lesions using radiomic features extracted from dynamic contrast-enhanced magnetic resonance (DCE-MR) images, (2) define a new repeatability metric (termed sureness), and (3) use sureness to examine if one classifier provides an advantage in AI diagnostic performance by lesion when using radiomic features. METHODS: Images of 1052 breast lesions (201 benign, 851 cancers) had been retrospectively collected under HIPAA/IRB compliance. The lesions had been segmented automatically using a fuzzy c-means method and thirty-two radiomic features had been extracted. Classification was investigated for the task of malignant lesions (81% of the dataset) versus benign lesions (19%). Two classifiers (linear discriminant analysis, LDA and support vector machines, SVM) were trained and tested within 0.632 bootstrap analyses (2000 iterations). Whole-set classification performance was evaluated at two levels: (1) the 0.632+ bias-corrected area under the ROC curve (AUC) and (2) performance metric curves which give variability in operating sensitivity and specificity at a target operating point (95% target sensitivity). Sureness was defined as 1-95% confidence interval of the classifier output for each lesion for each classifier. Lesion-based repeatability was evaluated at two levels: (1) repeatability profiles, which represent the distribution of sureness across the decision threshold and (2) sureness of each lesion. The latter was used to identify lesions with better sureness with one classifier over another while maintaining lesion-based performance across the bootstrap iterations. RESULTS: In classification performance assessment, the median and 95% CI of difference in AUC between the two classifiers did not show evidence of difference (ΔAUC = -0.003 [-0.031, 0.018]). Both classifiers achieved the target sensitivity. Sureness was more consistent across the classifier output range for the SVM classifier than the LDA classifier. The SVM resulted in a net gain of 33 benign lesions and 307 cancers with higher sureness and maintained lesion-based performance. However, with the LDA there was a notable percentage of benign lesions (42%) with better sureness but lower lesion-based performance. CONCLUSIONS: When there is no evidence for difference in performance between classifiers using AUC or other performance summary measures, a lesion-based sureness metric may provide additional insight into AI pipeline design. These findings present and emphasize the utility of lesion-based repeatability via sureness in AI/CADx as a complementary enhancement to other evaluation measures.
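A minimal sketch of how a per-lesion sureness value could be computed from bootstrap classifier outputs, assuming the abstract's definition is read as one minus the width of the lesion's 95% confidence interval; the array shapes and this reading of the definition are assumptions, not a verified implementation.

```python
import numpy as np

def lesion_sureness(bootstrap_outputs: np.ndarray) -> np.ndarray:
    """Per-lesion sureness from bootstrap classifier outputs.

    bootstrap_outputs: array of shape (n_bootstrap, n_lesions) with classifier
    outputs scaled to [0, 1]. Sureness is taken here as one minus the width of
    the empirical 95% confidence interval of each lesion's output.
    """
    lo = np.percentile(bootstrap_outputs, 2.5, axis=0)
    hi = np.percentile(bootstrap_outputs, 97.5, axis=0)
    return 1.0 - (hi - lo)

# Example: 2000 bootstrap outputs for 5 lesions (synthetic data)
rng = np.random.default_rng(0)
outputs = rng.beta(5, 2, size=(2000, 5))
print(lesion_sureness(outputs))
```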


Subject(s)
Artificial Intelligence , Breast Neoplasms , Humans , Female , Retrospective Studies , Magnetic Resonance Imaging/methods , Breast Neoplasms/pathology , Machine Learning
4.
J Med Imaging (Bellingham) ; 10(6): 064501, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38074627

ABSTRACT

Purpose: The Medical Imaging and Data Resource Center (MIDRC) is a multi-institutional effort to accelerate medical imaging machine intelligence research and create a publicly available image repository/commons as well as a sequestered commons for performance evaluation and benchmarking of algorithms. After de-identification, approximately 80% of the medical images and associated metadata become part of the open commons and 20% are sequestered from the open commons. To ensure that both commons are representative of the population available, we introduced a stratified sampling method to balance the demographic characteristics across the two datasets. Approach: Our method uses multi-dimensional stratified sampling where several demographic variables of interest are sequentially used to separate the data into individual strata, each representing a unique combination of variables. Within each resulting stratum, patients are assigned to the open or sequestered commons. This algorithm was used on an example dataset containing 5000 patients using the variables of race, age, sex at birth, ethnicity, COVID-19 status, and image modality and compared resulting demographic distributions to naïve random sampling of the dataset over 2000 independent trials. Results: Resulting prevalence of each demographic variable matched the prevalence from the input dataset within one standard deviation. Mann-Whitney U test results supported the hypothesis that sequestration by stratified sampling provided more balanced subsets than naïve randomization, except for demographic subcategories with very low prevalence. Conclusions: The developed multi-dimensional stratified sampling algorithm can partition a large dataset while maintaining balance across several variables, superior to the balance achieved from naïve randomization.
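A simplified pandas sketch of the multi-dimensional stratified split described above: patients are grouped by every unique combination of the stratification variables and then assigned to the open or sequestered commons within each stratum. The column names and the exact 80/20 rounding rule are assumptions.

```python
import numpy as np
import pandas as pd

def stratified_sequestration(df: pd.DataFrame, strata_cols, open_fraction=0.8, seed=0):
    """Assign each patient to the open or sequestered set within its demographic stratum."""
    rng = np.random.default_rng(seed)
    assignment = pd.Series(index=df.index, dtype="object")
    for _, idx in df.groupby(strata_cols, dropna=False).groups.items():
        idx = rng.permutation(np.asarray(idx))       # shuffle patients in this stratum
        n_open = int(round(open_fraction * len(idx)))
        assignment.loc[idx[:n_open]] = "open"
        assignment.loc[idx[n_open:]] = "sequestered"
    return assignment

# Hypothetical usage with illustrative column names:
# df["commons"] = stratified_sequestration(
#     df, ["race", "age_group", "sex", "ethnicity", "covid_status", "modality"])
```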

5.
Behav Ecol ; 34(5): 751-758, 2023.
Article in English | MEDLINE | ID: mdl-37744171

ABSTRACT

Iridescence is a taxonomically widespread form of structural coloration that produces often intense hues that change with the angle of viewing. Its role as a signal has been investigated in multiple species, but recently, and counter-intuitively, it has been shown that it can function as camouflage. However, the property of iridescence that reduces detectability is, as yet, unclear. As viewing angle changes, iridescent objects change not only in hue but also in intensity, and many iridescent animals are also shiny or glossy; these "specular reflections," both from the target and background, have been implicated in crypsis. Here, we present a field experiment with natural avian predators that separates the relative contributions of color and gloss to the "survival" of iridescent and non-iridescent beetle-like targets. Consistent with previous research, we found that iridescent coloration, and high gloss of the leaves on which targets were placed, enhance survival. However, glossy targets survived less well than matt ones. We interpret the results in terms of signal-to-noise ratio: specular reflections from the background reduce detectability by increasing visual noise. While a specular reflection from the target attracts attention, a changeable color reduces the signal because, we suggest, normally, the color of an object is a stable feature for detection and identification.

6.
J Med Imaging (Bellingham) ; 10(4): 044504, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37608852

ABSTRACT

Purpose: Image-based prediction of coronavirus disease 2019 (COVID-19) severity and resource needs can be an important means to address the COVID-19 pandemic. In this study, we propose an artificial intelligence/machine learning (AI/ML) COVID-19 prognosis method to predict patients' needs for intensive care by analyzing chest X-ray radiography (CXR) images using deep learning. Approach: The dataset consisted of 8357 CXR exams from 5046 COVID-19-positive patients as confirmed by reverse transcription polymerase chain reaction (RT-PCR) tests for the SARS-CoV-2 virus with a training/validation/test split of 64%/16%/20% on a by patient level. Our model involved a DenseNet121 network with a sequential transfer learning technique employed to train on a sequence of gradually more specific and complex tasks: (1) fine-tuning a model pretrained on ImageNet using a previously established CXR dataset with a broad spectrum of pathologies; (2) refining on another established dataset to detect pneumonia; and (3) fine-tuning using our in-house training/validation datasets to predict patients' needs for intensive care within 24, 48, 72, and 96 h following the CXR exams. The classification performances were evaluated on our independent test set (CXR exams of 1048 patients) using the area under the receiver operating characteristic curve (AUC) as the figure of merit in the task of distinguishing between those COVID-19-positive patients who required intensive care following the imaging exam and those who did not. Results: Our proposed AI/ML model achieved an AUC (95% confidence interval) of 0.78 (0.74, 0.81) when predicting the need for intensive care 24 h in advance, and at least 0.76 (0.73, 0.80) for 48 h or more in advance using predictions based on the AI prognostic marker derived from CXR images. Conclusions: This AI/ML prediction model for patients' needs for intensive care has the potential to support both clinical decision-making and resource management.
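A minimal PyTorch sketch of one stage in such a sequential transfer-learning chain, assuming a recent torchvision API: an ImageNet-pretrained DenseNet121 is re-headed with a single logit for the intensive-care prediction task and fine-tuned. The optimizer, learning rate, and loss choice are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained DenseNet121 and replace the classifier
# head with a single binary output (e.g., need for intensive care within 24 h).
model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed hyperparameters
loss_fn = nn.BCEWithLogitsLoss()

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step; images are (N, 3, H, W), labels are (N,) in {0, 1}."""
    model.train()
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = loss_fn(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```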

7.
Curr Opin Insect Sci ; 59: 101086, 2023 10.
Article in English | MEDLINE | ID: mdl-37468044

ABSTRACT

Flowers present information to their insect visitors in multiple simultaneous sensory modalities. Research has commonly focussed on information presented in visual and olfactory modalities. Recently, focus has shifted towards additional 'invisible' information, and whether information presented in multiple modalities enhances the interaction between flowers and their visitors. In this review, we highlight work that addresses how multimodality influences behaviour, focussing on work conducted on bumblebees (Bombus spp.), which are often used due to both their learning abilities and their ability to use multiple sensory modes to identify and differentiate between flowers. We review the evidence for bumblebees being able to use humidity, electrical potential, surface texture and temperature as additional modalities, and consider how multimodality enhances their performance. We consider mechanisms, including the cross-modal transfer of learning that occurs when bees are able to transfer patterns learnt in one modality to an additional modality without additional learning.


Subject(s)
Flowers , Learning , Bees , Animals , Temperature
8.
J Med Imaging (Bellingham) ; 10(6): 61105, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37469387

ABSTRACT

Purpose: The Medical Imaging and Data Resource Center (MIDRC) open data commons was launched to accelerate the development of artificial intelligence (AI) algorithms to help address the COVID-19 pandemic. The purpose of this study was to quantify longitudinal representativeness of the demographic characteristics of the primary MIDRC dataset compared to the United States general population (US Census) and COVID-19 positive case counts from the Centers for Disease Control and Prevention (CDC). Approach: The Jensen-Shannon distance (JSD), a measure of similarity of two distributions, was used to longitudinally measure the representativeness of the distribution of (1) all unique patients in the MIDRC data to the 2020 US Census and (2) all unique COVID-19 positive patients in the MIDRC data to the case counts reported by the CDC. The distributions were evaluated in the demographic categories of age at index, sex, race, ethnicity, and the combination of race and ethnicity. Results: Representativeness of the MIDRC data by ethnicity and the combination of race and ethnicity was impacted by the percentage of CDC case counts for which this was not reported. The distributions by sex and race have retained their level of representativeness over time. Conclusion: The representativeness of the open medical imaging datasets in the curated public data commons at MIDRC has evolved over time as the number of contributing institutions and overall number of subjects have grown. The use of metrics such as the JSD to support measurement of representativeness is one step needed for fair and generalizable AI algorithm development.
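A small sketch of the underlying similarity measure using SciPy's Jensen-Shannon distance; the demographic categories and counts below are made-up placeholders, not MIDRC, Census, or CDC values.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Jensen-Shannon distance between a dataset's demographic distribution and a
# reference distribution (e.g., census counts). Values are placeholders.
categories = ["<18", "18-49", "50-64", "65+"]
dataset_counts = np.array([120.0, 4100.0, 3300.0, 2480.0])
reference_counts = np.array([22.0, 44.0, 19.0, 15.0])  # e.g., reference percentages

p = dataset_counts / dataset_counts.sum()
q = reference_counts / reference_counts.sum()

jsd = jensenshannon(p, q, base=2)  # 0 = identical distributions, 1 = maximally different
print(f"Jensen-Shannon distance: {jsd:.3f}")
```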

9.
Radiology ; 307(1): e220984, 2023 04.
Article in English | MEDLINE | ID: mdl-36594836

ABSTRACT

Background Breast cancer tumors can be identified as different luminal molecular subtypes depending on either immunohistochemical (IHC) staining or St Gallen criteria that include Ki-67. Purpose To characterize molecular subtypes and understand the impact of disagreement between IHC and St Gallen molecular subtype reference standards on artificial intelligence classification of luminal A and luminal B tumors with use of radiomic features extracted from dynamic contrast-enhanced (DCE) MRI scans. Materials and Methods In this retrospective study, 28 radiomic features previously extracted from DCE-MRI scans of breast tumors imaged between February 2015 and October 2017 were examined in the following groups: (a) tumors classified as luminal A by both reference standards ("agreement"), (b) tumors classified as luminal A by IHC and luminal B by St Gallen ("disagreement"), and (c) tumors classified as luminal B by both ("agreement"). Luminal A or luminal B tumor classification with use of radiomic features was conducted with use of three sets: (a) IHC molecular subtyping, (b) St Gallen molecular subtyping, and (c) agreement tumors. The Kruskal-Wallis test was followed by the Mann-Whitney U test to determine pair-wise differences of radiomic features among agreement and disagreement tumors. Fivefold cross-validation with use of stepwise feature selection and linear discriminant analysis classified tumors in each set, with performance measured with use of area under the receiver operating characteristic curve (AUC). Results A total of 877 breast cancer tumors from 872 women (mean age, 48 years [range, 19-75 years]) were analyzed. Six features (sphericity, irregularity, surface area to volume ratio, variance of radial gradient histogram, sum average, volume of most enhancing voxels) were different (P ≤ .001) among agreement and disagreement tumors. AUC when using agreement tumors (median, 0.74 [95% CI: 0.68, 0.80]) was higher than when using tumors subtyped by either reference standard (IHC, 0.66 [0.60, 0.71], P = .003; St Gallen, 0.62 [0.58, 0.67], P = .001). Conclusion Differences in reference standards can hinder artificial intelligence classification performance of luminal molecular subtypes with dynamic contrast-enhanced MRI. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Bae in this issue.
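A hedged scikit-learn sketch of the classification step described above, using forward sequential feature selection as a stand-in for the stepwise selection named in the abstract, with LDA inside fivefold cross-validation; the data are random placeholders and the number of selected features is an assumption.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 28))      # 28 radiomic features (placeholder data)
y = rng.integers(0, 2, size=200)    # 0 = luminal A, 1 = luminal B (placeholder)

# Forward feature selection wrapped around LDA, then LDA on the selected features.
selector = SequentialFeatureSelector(LinearDiscriminantAnalysis(), n_features_to_select=5)
model = make_pipeline(selector, LinearDiscriminantAnalysis())

aucs = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    model.fit(X[train_idx], y[train_idx])
    scores = model.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))
print(f"median AUC: {np.median(aucs):.2f}")
```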


Subject(s)
Breast Neoplasms , Female , Humans , Middle Aged , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Artificial Intelligence , Retrospective Studies , Magnetic Resonance Imaging/methods , Reference Standards
10.
Curr Biol ; 32(24): R1345-R1347, 2022 12 19.
Article in English | MEDLINE | ID: mdl-36538885

ABSTRACT

A single CRISPR-generated mutation in a MYB transcription factor in Petunia leads to a dual phenotype. This in turn has a dual effect on potential pollinating insects, deterring the original pollinator while increasing the visitation of a possible replacement.


Subject(s)
Clustered Regularly Interspaced Short Palindromic Repeats , Pollination , Animals , Ecology , Insects/genetics , Transcription Factors/genetics , Flowers/genetics
11.
J Med Imaging (Bellingham) ; 9(3): 035502, 2022 May.
Article in English | MEDLINE | ID: mdl-35656541

ABSTRACT

Purpose: The aim of this study is to (1) demonstrate a graphical method and interpretation framework to extend performance evaluation beyond receiver operating characteristic curve analysis and (2) assess the impact of disease prevalence and variability in training and testing sets, particularly when a specific operating point is used. Approach: The proposed performance metric curves (PMCs) simultaneously assess sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), and the 95% confidence intervals thereof, as a function of the threshold for the decision variable. We investigated the utility of PMCs using six example operating points associated with commonly used methods to select operating points (including the Youden index and maximum mutual information). As an example, we applied PMCs to the task of distinguishing between malignant and benign breast lesions using human-engineered radiomic features extracted from dynamic contrast-enhanced magnetic resonance images. The dataset had 1885 lesions, with the images acquired in 2015 and 2016 serving as the training set (1450 lesions) and those acquired in 2017 as the test set (435 lesions). Our study used this dataset in two ways: (1) the clinical dataset itself and (2) simulated datasets with features based on the clinical set but with five different disease prevalences. The median and 95% CI of the number of type I (false positive) and type II (false negative) errors were determined for each operating point of interest. Results: PMCs from both the clinical and simulated datasets demonstrated that PMCs could support interpretation of the impact of decision threshold choice on type I and type II errors of classification, particularly relevant to prevalence. Conclusion: PMCs allow simultaneous evaluation of the four performance metrics of sensitivity, specificity, PPV, and NPV as a function of the decision threshold. This may create a better understanding of two-class classifier performance in machine learning.
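A minimal sketch of the quantities behind a performance metric curve: sensitivity, specificity, PPV, and NPV swept over decision thresholds. The published method also reports 95% confidence intervals (omitted here), and the threshold handling below is an assumption.

```python
import numpy as np

def performance_metric_curves(y_true: np.ndarray, scores: np.ndarray, thresholds: np.ndarray):
    """Sensitivity, specificity, PPV, and NPV as functions of the decision threshold."""
    curves = {"sensitivity": [], "specificity": [], "ppv": [], "npv": []}
    for t in thresholds:
        pred = scores >= t
        tp = np.sum(pred & (y_true == 1))
        tn = np.sum(~pred & (y_true == 0))
        fp = np.sum(pred & (y_true == 0))
        fn = np.sum(~pred & (y_true == 1))
        curves["sensitivity"].append(tp / max(tp + fn, 1))
        curves["specificity"].append(tn / max(tn + fp, 1))
        curves["ppv"].append(tp / max(tp + fp, 1))
        curves["npv"].append(tn / max(tn + fn, 1))
    return {k: np.array(v) for k, v in curves.items()}

# Example with synthetic scores
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)
s = np.clip(y * 0.3 + rng.normal(0.4, 0.2, size=500), 0, 1)
pmc = performance_metric_curves(y, s, np.linspace(0, 1, 101))
```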

12.
J Med Imaging (Bellingham) ; 9(3): 034502, 2022 May.
Article in English | MEDLINE | ID: mdl-35685120

ABSTRACT

Purpose: We demonstrate continuous learning and assess its impact on the performance of artificial intelligence of breast dynamic contrast-enhanced magnetic resonance imaging in the task of distinguishing malignant from benign lesions on an independent clinical test dataset. Approach: The study included 1979 patients with 1990 lesions who underwent breast MR imaging during 2015, 2016, and 2017, retrospectively collected under an IRB-approved protocol; there were 1494 malignant and 496 benign lesions based on histopathology. AI was conducted in the task of distinguishing malignant and benign lesions, and independent testing was performed to assess the effect of increasing the numbers of training cases. Five training sets mimicking clinical implementation of continuous AI learning included cases from (1) first quarter of 2015, (2) first half of 2015, (3) all 2015, (4) all 2015 and first half of 2016, and (5) all 2015 and 2016. All classifiers were evaluated on the 2017 independent test set. The area under the ROC curve (AUC) served as the performance metric and was calculated over all lesions in the test set, as well as only mass lesions and only non-mass enhancements. The Mann-Kendall test was used to determine if continuous learning resulted in a positive trend in classification performance. P < 0.05 was considered to be statistically significant. Results: Over the continuous training period, the selected feature subsets tended to become more similar and stable. Performance of the five training conditions on the independent test dataset yielded AUCs of 0.86 (95% CI: [0.83,0.90]), 0.87 (95% CI: [0.83,0.90]), 0.88 (95% CI: [0.84,0.91]), 0.89 (95% CI: [0.85,0.92]), and 0.89 (95% CI: [0.86,0.92]). The Mann-Kendall test indicated a statistically significant positive trend ( P = 0.0167 ) in classification performance with continuous learning. Conclusions: Improved diagnostic performance over time was observed when continuous learning of AI was implemented on an independent clinical test dataset.
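A compact sketch of the Mann-Kendall trend statistic used to test for a monotonic increase in AUC across the training conditions, using the standard normal approximation without tie correction; a dedicated implementation with tie and small-sample handling would be preferable for only five points.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Two-sided Mann-Kendall trend test (normal approximation, no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of signs over all ordered pairs
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0  # continuity correction
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, p

# AUC values from the abstract, used only to illustrate the call
print(mann_kendall([0.86, 0.87, 0.88, 0.89, 0.89]))
```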

13.
Anim Behav ; 188: 45-50, 2022 Jun.
Article in English | MEDLINE | ID: mdl-37649469

ABSTRACT

It has recently been found that iridescence, a taxonomically widespread form of animal coloration defined by a change in hue with viewing angle, can act as a highly effective form of camouflage. However, little is known about whether iridescence can confer a survival benefit to prey postdetection and, if so, which optical properties of iridescent prey are important for this putative protective function. Here, we tested the effects of both iridescence and surface gloss (i.e. specular reflection) on the attack behaviour of prey-naïve avian predators. Using real and artificial jewel beetle, Sternocera aequisignata, wing cases, we found that iridescence provides initial protection against avian predation by significantly reducing the willingness to attack. Importantly, we found that the main factor explaining this aversion is iridescence, not multiple colours per se, with surface gloss also having an independent effect. Our results are important because they demonstrate that even when prey are presented up close and against a mismatching background, iridescence may confer a survival benefit by inducing hesitation or even, as sometimes observed, an aversion response in attacking birds. Furthermore, this means that even postdetection, prey do not necessarily need to have secondary defences such as sharp spines or toxins for iridescence to have a protective effect. Taken together, our results suggest that reduced avian predation could facilitate the initial evolution of iridescence in many species of insects and that it is the defining feature of iridescence, its colour changeability, that is important for this effect.

14.
Cancers (Basel) ; 13(19)2021 Sep 26.
Article in English | MEDLINE | ID: mdl-34638294

ABSTRACT

Radiomic features extracted from medical images may demonstrate a batch effect when cases come from different sources. We investigated classification performance using training and independent test sets drawn from two sources using both pre-harmonization and post-harmonization features. In this retrospective study, a database of thirty-two radiomic features, extracted from DCE-MR images of breast lesions after fuzzy c-means segmentation, was collected. There were 944 unique lesions in Database A (208 benign lesions, 736 cancers) and 1986 unique lesions in Database B (481 benign lesions, 1505 cancers). The lesions from each database were divided by year of image acquisition into training and independent test sets, separately by database and in combination. ComBat batch harmonization was conducted on the combined training set to minimize the batch effect on eligible features by database. The empirical Bayes estimates from the feature harmonization were applied to the eligible features of the combined independent test set. The training sets (A, B, and combined) were then used in training linear discriminant analysis classifiers after stepwise feature selection. The classifiers were then run on the A, B, and combined independent test sets. Classification performance was compared using pre-harmonization features to post-harmonization features, including their corresponding feature selection, evaluated using the area under the receiver operating characteristic curve (AUC) as the figure of merit. Four out of five training and independent test scenarios demonstrated statistically equivalent classification performance when compared pre- and post-harmonization. These results demonstrate that translation of machine learning techniques with batch data harmonization can potentially yield generalizable models that maintain classification performance.
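A deliberately simplified stand-in for the harmonization step: each feature's per-database location and scale are estimated on the training set and the same statistics are applied to the test set. Full ComBat additionally applies empirical Bayes shrinkage of the batch parameters (and can preserve biological covariates), so this sketch only illustrates the train-then-apply pattern, not the actual method.

```python
import numpy as np

def simple_batch_align(train_feats, train_batches, test_feats, test_batches):
    """Per-batch location/scale alignment of radiomic features (simplified, not ComBat)."""
    train_batches = np.asarray(train_batches)
    test_batches = np.asarray(test_batches)
    train_out = np.array(train_feats, dtype=float, copy=True)
    test_out = np.array(test_feats, dtype=float, copy=True)
    for b in np.unique(train_batches):
        tr_mask = train_batches == b
        te_mask = test_batches == b
        mu = train_out[tr_mask].mean(axis=0)
        sd = train_out[tr_mask].std(axis=0) + 1e-12
        # Training-set statistics are applied to both training and test cases
        train_out[tr_mask] = (train_out[tr_mask] - mu) / sd
        test_out[te_mask] = (test_out[te_mask] - mu) / sd
    return train_out, test_out
```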

15.
Radiol Artif Intell ; 3(3): e200159, 2021 May.
Article in English | MEDLINE | ID: mdl-34235439

ABSTRACT

PURPOSE: To develop a deep transfer learning method that incorporates four-dimensional (4D) information in dynamic contrast-enhanced (DCE) MRI to classify benign and malignant breast lesions. MATERIALS AND METHODS: The retrospective dataset is composed of 1990 distinct lesions (1494 malignant and 496 benign) from 1979 women (mean age, 47 years ± 10). Lesions were split into a training and validation set of 1455 lesions (acquired in 2015-2016) and an independent test set of 535 lesions (acquired in 2017). Features were extracted from a convolutional neural network (CNN), and lesions were classified as benign or malignant using support vector machines. Volumetric information was collapsed into two dimensions by taking the maximum intensity projection (MIP) at the image level or feature level within the CNN architecture. Performances were evaluated using the area under the receiver operating characteristic curve (AUC) as the figure of merit and were compared using the DeLong test. RESULTS: The image MIP and feature MIP methods yielded AUCs of 0.91 (95% CI: 0.87, 0.94) and 0.93 (95% CI: 0.91, 0.96), respectively, for the independent test set. The feature MIP method achieved higher performance than the image MIP method (∆AUC 95% CI: 0.003, 0.051; P = .03). CONCLUSION: Incorporating 4D information in DCE MRI by MIP of features in deep transfer learning demonstrated superior classification performance compared with using MIP images as input in the task of distinguishing between benign and malignant breast lesions.Keywords: Breast, Computer Aided Diagnosis (CAD), Convolutional Neural Network (CNN), MR-Dynamic Contrast Enhanced, Supervised learning, Support vector machines (SVM), Transfer learning, Volume Analysis © RSNA, 2021.
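A PyTorch sketch of the feature-level MIP idea: CNN features are computed per 2D image and then collapsed with a max over that axis before pooling and classification. The VGG16 backbone, input shapes, and pooling step are illustrative assumptions; the paper's architecture and downstream SVM step are not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone used only as a fixed feature extractor.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features
backbone.eval()

@torch.no_grad()
def feature_mip(volume: torch.Tensor) -> torch.Tensor:
    """volume: (n_slices, 3, H, W) -> pooled feature vector after a max over slices."""
    feats = backbone(volume)                               # (n_slices, C, h, w)
    feats = torch.amax(feats, dim=0)                       # feature-level MIP -> (C, h, w)
    return torch.flatten(nn.AdaptiveAvgPool2d(1)(feats))   # (C,)

vec = feature_mip(torch.randn(8, 3, 224, 224))
print(vec.shape)
```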

16.
Magn Reson Imaging ; 82: 111-121, 2021 10.
Article in English | MEDLINE | ID: mdl-34174331

ABSTRACT

Radiomic features extracted from breast lesion images have shown potential in diagnosis and prognosis of breast cancer. As medical centers transition from 1.5 T to 3.0 T magnetic resonance (MR) imaging, it is beneficial to identify potentially robust radiomic features across field strengths because images acquired at different field strengths could be used in machine learning models. Dynamic contrast-enhanced MR images of benign breast lesions and hormone receptor positive/HER2-negative (HR+/HER2-) breast cancers were acquired retrospectively, yielding 612 unique cases: 150 and 99 benign lesions imaged at 1.5 T and 3.0 T, and 223 and 140 HR+/HER2- cancerous lesions imaged at 1.5 T and 3.0 T, respectively. In addition, an independent set of seven lesions imaged at both field strengths, three benign lesions and four HR+/HER2- cancers, was analyzed separately. Lesions were automatically segmented using a 4D fuzzy c-means method; thirty-eight radiomic features were extracted. Feature value distributions were compared by cancer status and imaging field strength using the Kolmogorov-Smirnov test. Features that did not demonstrate a statistically significant difference were considered to be potentially robust. The area under the receiver operating characteristic curve (AUC), for the task of classifying lesions as benign or HR+/HER2- cancer, was determined for each feature at each field strength. Three features were found to be both potentially robust across field strength and of high classification performance, i.e., AUCs statistically greater than 0.5 in the classification task: one shape feature (irregularity), one texture feature (sum average) and one enhancement variance kinetics features (enhancement variance increasing rate). In the demonstration set of lesions imaged at both field strengths, two of the three potentially robust features showed qualitative agreement across field strength. These findings may contribute to the development of computer-aided diagnosis models that are robust across field strength for this classification task.
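A short sketch of the two per-feature checks described above, assuming simple array inputs: a two-sample Kolmogorov-Smirnov comparison of the feature's values across field strengths (no significant difference suggesting potential robustness), and the feature's single-feature AUC for the benign-versus-cancer task at each field strength.

```python
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score

def screen_feature(values_15T, values_30T, labels_15T, labels_30T, alpha=0.05):
    """Screen one radiomic feature for potential robustness across field strength.

    values_*: feature values for lesions imaged at 1.5 T and 3.0 T.
    labels_*: 0 = benign, 1 = cancer, matching each value array.
    """
    ks_p = ks_2samp(values_15T, values_30T).pvalue
    auc_15 = roc_auc_score(labels_15T, values_15T)
    auc_30 = roc_auc_score(labels_30T, values_30T)
    return {"potentially_robust": ks_p >= alpha, "ks_p": ks_p,
            "auc_1.5T": auc_15, "auc_3.0T": auc_30}
```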


Subject(s)
Breast Neoplasms , Magnets , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Contrast Media , Female , Hormones , Humans , Magnetic Resonance Imaging , Retrospective Studies
17.
J Exp Biol ; 224(12)2021 06 15.
Article in English | MEDLINE | ID: mdl-34161560

ABSTRACT

Floral humidity, a region of elevated humidity in the headspace of the flower, occurs in many plant species and may add to their multimodal floral displays. So far, the ability to detect and respond to floral humidity cues has been only established for hawkmoths when they locate and extract nectar while hovering in front of some moth-pollinated flowers. To test whether floral humidity can be used by other more widespread generalist pollinators, we designed artificial flowers that presented biologically relevant levels of humidity similar to those shown by flowering plants. Bumblebees showed a spontaneous preference for flowers that produced higher floral humidity. Furthermore, learning experiments showed that bumblebees are able to use differences in floral humidity to distinguish between rewarding and non-rewarding flowers. Our results indicate that bumblebees are sensitive to different levels of floral humidity. In this way floral humidity can add to the information provided by flowers and could impact pollinator behaviour more significantly than previously thought.


Subject(s)
Moths , Pollination , Animals , Bees , Flowers , Humidity , Plant Nectar
18.
New Phytol ; 229(2): 783-790, 2021 01.
Article in English | MEDLINE | ID: mdl-32813888

ABSTRACT

From global food security to textile production and biofuels, the demands currently made on plant photosynthetic productivity will continue to increase. Enhancing photosynthesis using designer, green and sustainable materials offers an attractive alternative to current genetic-based strategies and promising work with nanomaterials has recently started to emerge. Here we describe the in planta use of carbon-based nanoparticles produced by low-cost renewable routes that are bioavailable to mature plants. Uptake of these functionalised nanoparticles directly from the soil improves photosynthesis and also increases crop production. We show for the first time that glucose functionalisation enhances nanoparticle uptake, photoprotection and pigment production, unlocking enhanced yields. This was demonstrated in Triticum aestivum 'Apogee' (dwarf bread wheat) and resulted in an 18% increase in grain yield. This establishes the viability of a functional nanomaterial to augment photosynthesis as a route to increased crop productivity.


Subject(s)
Carbon , Glucose , Crop Production , Photosynthesis , Triticum
19.
J Med Imaging (Bellingham) ; 7(4): 044502, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32864390

ABSTRACT

Purpose: This study aims to develop and compare human-engineered radiomics methodologies that use multiparametric magnetic resonance imaging (mpMRI) to diagnose breast cancer. Approach: The dataset comprises clinical multiparametric MR images of 852 unique lesions from 612 patients. Each MR study included a dynamic contrast-enhanced (DCE)-MRI sequence and a T2-weighted (T2w) MRI sequence, and a subset of 389 lesions was also imaged with a diffusion-weighted imaging (DWI) sequence. Lesions were automatically segmented using the fuzzy C-means algorithm. Radiomic features were extracted from each MRI sequence. Two approaches to utilizing multiparametric information, feature fusion and classifier fusion, were investigated. A support vector machine classifier was trained for each method to differentiate between benign and malignant lesions. Area under the receiver operating characteristic curve (AUC) was used to evaluate and compare diagnostic performance. Analyses were first performed on the entire dataset and then on the subset that was imaged using the three-sequence protocol. Results: When using the full dataset, the single-parametric classifiers yielded the following AUCs and 95% confidence intervals: AUC_DCE = 0.84 [0.82, 0.87], AUC_T2w = 0.83 [0.80, 0.86], and AUC_DWI = 0.69 [0.62, 0.75]. The two multiparametric classifiers both yielded AUCs of 0.87 [0.84, 0.89] and significantly outperformed all single-parametric classifiers. When using the three-sequence subset, the mpMRI classifiers' performances significantly decreased. Conclusions: The proposed mpMRI radiomics methods can improve the performance of computer-aided diagnostics for breast cancer and handle missing sequences in the imaging protocol.
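A minimal scikit-learn sketch of the feature-fusion approach under placeholder data: per-sequence radiomic feature vectors are concatenated and a single SVM is trained on the fused vector; the feature counts, kernel, and scaling are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
dce_feats = rng.normal(size=(n, 12))   # DCE-MRI radiomic features (placeholder)
t2w_feats = rng.normal(size=(n, 10))   # T2w radiomic features (placeholder)
labels = rng.integers(0, 2, size=n)    # 0 = benign, 1 = malignant (placeholder)

# Feature fusion: concatenate per-sequence features, then train one classifier.
fused = np.hstack([dce_feats, t2w_feats])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(fused, labels)
malignancy_scores = clf.predict_proba(fused)[:, 1]
```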

20.
Sci Rep ; 10(1): 10536, 2020 06 29.
Article in English | MEDLINE | ID: mdl-32601367

ABSTRACT

Multiparametric magnetic resonance imaging (mpMRI) has been shown to improve radiologists' performance in the clinical diagnosis of breast cancer. This machine learning study develops a deep transfer learning computer-aided diagnosis (CADx) methodology to diagnose breast cancer using mpMRI. The retrospective study included clinical MR images of 927 unique lesions from 616 women. Each MR study included a dynamic contrast-enhanced (DCE)-MRI sequence and a T2-weighted (T2w) MRI sequence. A pretrained convolutional neural network (CNN) was used to extract features from the DCE and T2w sequences, and support vector machine classifiers were trained on the CNN features to distinguish between benign and malignant lesions. Three methods that integrate the sequences at different levels (image fusion, feature fusion, and classifier fusion) were investigated. Classification performance was evaluated using the receiver operating characteristic (ROC) curve and compared using the DeLong test. The single-sequence classifiers yielded areas under the ROC curves (AUCs) [95% confidence intervals] of AUC_DCE = 0.85 [0.82, 0.88] and AUC_T2w = 0.78 [0.75, 0.81]. The multiparametric schemes yielded AUC_ImageFusion = 0.85 [0.82, 0.88], AUC_FeatureFusion = 0.87 [0.84, 0.89], and AUC_ClassifierFusion = 0.86 [0.83, 0.88]. The feature fusion method statistically significantly outperformed using DCE alone (P < 0.001). In conclusion, the proposed deep transfer learning CADx method for mpMRI may improve diagnostic performance by reducing the false positive rate and improving the positive predictive value in breast imaging interpretation.
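For contrast with the feature-fusion sketch above, a minimal sketch of classifier fusion under placeholder data: one SVM per sequence is trained on that sequence's CNN features and the per-sequence malignancy scores are averaged; the feature dimensionality and the simple averaging rule are assumptions, not necessarily the paper's exact fusion rule.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 300
dce_cnn = rng.normal(size=(n, 512))    # CNN features from DCE-MRI (placeholder)
t2w_cnn = rng.normal(size=(n, 512))    # CNN features from T2w MRI (placeholder)
labels = rng.integers(0, 2, size=n)    # 0 = benign, 1 = malignant (placeholder)

# Classifier fusion: one SVM per sequence, then average the per-sequence scores.
svm_dce = make_pipeline(StandardScaler(), SVC(probability=True)).fit(dce_cnn, labels)
svm_t2w = make_pipeline(StandardScaler(), SVC(probability=True)).fit(t2w_cnn, labels)

fused_score = 0.5 * (svm_dce.predict_proba(dce_cnn)[:, 1] +
                     svm_t2w.predict_proba(t2w_cnn)[:, 1])
```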


Subject(s)
Breast Neoplasms/diagnostic imaging , Breast/diagnostic imaging , Deep Learning , Diagnosis, Computer-Assisted/methods , Multiparametric Magnetic Resonance Imaging , Adult , Aged , Female , Humans , Image Interpretation, Computer-Assisted , Middle Aged , Retrospective Studies