Results 1 - 20 of 48
1.
Patterns (N Y); 5(5): 100964, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38800363

ABSTRACT

Visual learning often occurs in a specific context, where an agent acquires skills through exploration and tracking of its location in a consistent environment. The historical spatial context of the agent provides a similarity signal for self-supervised contrastive learning. We present a unique approach, termed environmental spatial similarity (ESS), that complements existing contrastive learning methods. Using images from simulated, photorealistic environments as an experimental setting, we demonstrate that ESS outperforms traditional instance discrimination approaches. Moreover, sampling additional data from the same environment substantially improves accuracy and provides new augmentations. ESS allows remarkable proficiency in room classification and spatial prediction tasks, especially in unfamiliar environments. This learning paradigm has the potential to enable rapid visual learning in agents operating in new environments with unique visual characteristics. Potentially transformative applications span from robotics to space exploration. Our proof of concept demonstrates improved efficiency over methods that rely on extensive, disconnected datasets.
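
As an illustration of the core idea, not code from the article, the sketch below treats views captured at nearby positions in the same environment as positives in an InfoNCE-style objective; the function name, the distance radius, and the temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def spatial_contrastive_loss(embeddings, positions, radius=1.0, temperature=0.1):
    """InfoNCE-style loss in which views taken within `radius` of each other
    in the same environment are treated as positives (illustrative sketch)."""
    z = F.normalize(embeddings, dim=1)                        # (N, D) unit-norm embeddings
    sim = z @ z.t() / temperature                             # pairwise similarities
    dist = torch.cdist(positions, positions)                  # pairwise spatial distances
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos_mask = (dist < radius) & ~eye                         # spatial positives, excluding self
    sim = sim.masked_fill(eye, float("-inf"))                 # drop self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    n_pos = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / n_pos          # mean log-prob of positives
    return loss[pos_mask.any(dim=1)].mean()                   # only anchors that have positives
```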

2.
Article in English | MEDLINE | ID: mdl-38090870

ABSTRACT

Most conventional crowd counting methods utilize a fully supervised learning framework to establish a mapping between scene images and crowd density maps. They usually rely on a large quantity of costly and time-intensive pixel-level annotations for training supervision. One way to mitigate the intensive labeling effort and improve counting accuracy is to leverage large amounts of unlabeled images. This is attributed to the inherent self-structural information and rank consistency within a single image, offering additional qualitative relation supervision during training. Contrary to earlier methods that utilized the rank relations at the original image level, we explore such rank-consistency relations within the latent feature spaces. This approach enables the incorporation of numerous pyramid partial orders, strengthening the model representation capability. A notable advantage is that it can also increase the utilization ratio of unlabeled samples. Specifically, we propose a Deep Rank-consistEnt pyrAmid Model (DREAM), which makes full use of rank consistency across coarse-to-fine pyramid features in latent spaces for enhanced crowd counting with massive unlabeled images. In addition, we have collected a new unlabeled crowd counting dataset, FUDAN-UCC, comprising 4000 images for training purposes. Extensive experiments on four benchmark datasets, namely UCF-QNRF, ShanghaiTech PartA and PartB, and UCF-CC-50, show the effectiveness of our method compared with previous semi-supervised methods. The codes are available at https://github.com/bridgeqiqi/DREAM.
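
To make the rank-consistency idea concrete, here is a minimal sketch, not the DREAM implementation, of the containment constraint usable on unlabeled images: the predicted count over a full image should be no smaller than the predicted count over any nested crop. The paper applies such partial orders across coarse-to-fine pyramid features in latent space; this sketch applies it at the prediction level, and all names are hypothetical.

```python
import torch

def rank_consistency_loss(density_full, density_crop, margin=0.0):
    """Hinge penalty when a nested crop's predicted count exceeds the count
    predicted for its containing region (containment implies a rank order)."""
    count_full = density_full.sum(dim=(1, 2, 3))   # (B,) counts from full-image density maps
    count_crop = density_crop.sum(dim=(1, 2, 3))   # (B,) counts from nested-crop density maps
    return torch.clamp(count_crop - count_full + margin, min=0).mean()
```

On unlabeled images, a term like this could be added to the supervised counting loss computed on the labeled subset.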

3.
Proc IEEE Inst Electr Electron Eng; 111(10): 1236-1286, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37859667

ABSTRACT

The emergence of artificial emotional intelligence technology is revolutionizing the fields of computers and robotics, allowing for a new level of communication and understanding of human behavior that was once thought impossible. While recent advancements in deep learning have transformed the field of computer vision, automated understanding of evoked or expressed emotions in visual media remains in its infancy. This foundering stems from the absence of a universally accepted definition of "emotion," coupled with the inherently subjective nature of emotions and their intricate nuances. In this article, we provide a comprehensive, multidisciplinary overview of the field of emotion analysis in visual media, drawing on insights from psychology, engineering, and the arts. We begin by exploring the psychological foundations of emotion and the computational principles that underpin the understanding of emotions from images and videos. We then review the latest research and systems within the field, accentuating the most promising approaches. We also discuss the current technological challenges and limitations of emotion analysis, underscoring the necessity for continued investigation and innovation. We contend that this represents a "Holy Grail" research problem in computing and delineate pivotal directions for future inquiry. Finally, we examine the ethical ramifications of emotion-understanding technologies and contemplate their potential societal impacts. Overall, this article endeavors to equip readers with a deeper understanding of the domain of emotion analysis in visual media and to inspire further research and development in this captivating and rapidly evolving field.

4.
Patterns (N Y); 4(10): 100816, 2023 Oct 13.
Article in English | MEDLINE | ID: mdl-37876902

ABSTRACT

Bodily expressed emotion understanding (BEEU) aims to automatically recognize human emotional expressions from body movements. Psychological research has demonstrated that people often move using specific motor elements to convey emotions. This work takes three steps to integrate human motor elements to study BEEU. First, we introduce BoME (body motor elements), a highly precise dataset for human motor elements. Second, we apply baseline models to estimate these elements on BoME, showing that deep learning methods are capable of learning effective representations of human movement. Finally, we propose a dual-source solution to enhance the BEEU model with the BoME dataset, which trains with both motor element and emotion labels and simultaneously produces predictions for both. Through experiments on the BoLD in-the-wild emotion understanding benchmark, we showcase the significant benefit of our approach. These results may inspire further research utilizing human motor elements for emotion understanding and mental health analysis.
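
A minimal sketch of the dual-source idea, assuming a generic feature backbone and multi-label targets for both motor elements and emotions; the class counts, head shapes, and loss weighting are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualSourceBEEU(nn.Module):
    """Shared backbone with two heads: motor-element and emotion predictions."""
    def __init__(self, backbone, feat_dim, n_motor, n_emotion):
        super().__init__()
        self.backbone = backbone                      # e.g., a video or skeleton encoder
        self.motor_head = nn.Linear(feat_dim, n_motor)
        self.emotion_head = nn.Linear(feat_dim, n_emotion)

    def forward(self, x):
        feats = self.backbone(x)                      # (B, feat_dim)
        return self.motor_head(feats), self.emotion_head(feats)

def joint_loss(motor_logits, emotion_logits, motor_y, emotion_y, alpha=0.5):
    """Weighted sum of the two multi-label losses (weighting assumed)."""
    return (alpha * F.binary_cross_entropy_with_logits(motor_logits, motor_y)
            + (1 - alpha) * F.binary_cross_entropy_with_logits(emotion_logits, emotion_y))
```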

5.
Comput Med Imaging Graph; 107: 102236, 2023 07.
Article in English | MEDLINE | ID: mdl-37146318

ABSTRACT

Stroke is one of the leading causes of death and disability in the world. Despite intensive research on automatic stroke lesion segmentation from non-invasive imaging modalities, including diffusion-weighted imaging (DWI), challenges remain, such as a lack of sufficient labeled data for training deep learning models and failure to detect small lesions. In this paper, we propose BBox-Guided Segmentor, a method that significantly improves the accuracy of stroke lesion segmentation by leveraging expert knowledge. Specifically, our model uses a very coarse bounding box label provided by the expert and then performs accurate segmentation automatically. The small overhead of having the expert provide a rough bounding box leads to a large performance improvement in segmentation, which is paramount to accurate stroke diagnosis. To train our model, we employ a weakly supervised approach that uses a large number of weakly labeled images with only bounding boxes and a small number of fully labeled images. The scarce fully labeled images are used to train a generator segmentation network, while adversarial training is used to leverage the large number of weakly labeled images to provide additional learning signals. We evaluate our method extensively using a unique clinical dataset of 99 fully labeled cases (i.e., with full segmentation map labels) and 831 weakly labeled cases (i.e., with only bounding box labels), and the results demonstrate the superior performance of our approach over state-of-the-art stroke lesion segmentation models. We also achieve performance competitive with a state-of-the-art fully supervised method while using less than one-tenth of the complete labels. Our proposed approach has the potential to improve stroke diagnosis and treatment planning, which may lead to better patient outcomes.
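
The abstract does not give implementation details, so the following is only a generic sketch of how a coarse bounding box can supervise a segmentation network: predicted lesion probability is suppressed outside the box and at least one confident lesion pixel is encouraged inside it. This is a simplification; it does not reproduce the paper's generator plus adversarial training, and the names and constants are assumptions.

```python
import torch

def box_constraint_loss(pred_probs, box_mask, eps=1e-6):
    """Weak supervision from a bounding box only (simplified stand-in for the
    paper's adversarial scheme). pred_probs and box_mask are (B, 1, H, W)."""
    outside = pred_probs * (1 - box_mask)                                # lesion leaking outside the box
    loss_outside = outside.sum(dim=(1, 2, 3)) / ((1 - box_mask).sum(dim=(1, 2, 3)) + eps)
    inside_max = (pred_probs * box_mask).amax(dim=(1, 2, 3))             # most confident pixel inside
    loss_inside = -torch.log(inside_max + eps)                           # push it toward 1
    return (loss_outside + loss_inside).mean()
```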


Subject(s)
Diffusion Magnetic Resonance Imaging; Stroke; Humans; Stroke/diagnostic imaging; Image Processing, Computer-Assisted
6.
Neuroimage; 273: 120075, 2023 06.
Article in English | MEDLINE | ID: mdl-37054828

ABSTRACT

Developmental reading disability is a prevalent and often enduring problem with varied mechanisms that contribute to its phenotypic heterogeneity. This mechanistic and phenotypic variation, as well as relatively modest sample sizes, may have limited the development of accurate neuroimaging-based classifiers for reading disability, in part because of the large feature space of neuroimaging datasets. An unsupervised learning model was used to reduce deformation-based data to a lower-dimensional manifold, and then supervised learning models were used to classify these latent representations in a dataset of 96 reading disability cases and 96 controls (mean age: 9.86 ± 1.56 years). A combined unsupervised autoencoder and supervised convolutional neural network approach provided an effective classification of cases and controls (accuracy: 77%; precision: 0.75; recall: 0.78). Brain regions that contributed to this classification accuracy were identified by adding noise to the voxel-level image data, which showed that reading disability classification accuracy was most influenced by the superior temporal sulcus, dorsal cingulate, and lateral occipital cortex. Regions that were most important for the accurate classification of controls included the supramarginal gyrus, orbitofrontal, and medial occipital cortex. The contribution of these regions reflected individual differences in reading-related abilities, such as non-word decoding or verbal comprehension. Together, the results demonstrate an optimal deep learning solution for classification using neuroimaging data. In contrast with standard mass-univariate test results, results from the deep learning model also provided evidence for regions that may be specifically affected in reading disability cases.
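
The region-attribution step can be illustrated with a simple perturbation loop, sketched below under the assumption of a binary classifier on a single image volume and precomputed region masks; the noise scale and all names are hypothetical, not the study's exact procedure.

```python
import torch

@torch.no_grad()
def region_importance(model, volume, region_masks, noise_std=0.5):
    """Importance of a region = drop in predicted probability after adding
    Gaussian noise to that region's voxels (illustrative sketch)."""
    model.eval()
    baseline = torch.sigmoid(model(volume)).item()           # unperturbed probability
    scores = {}
    for name, mask in region_masks.items():                  # mask: 0/1 tensor, same shape as volume
        noisy = volume + noise_std * torch.randn_like(volume) * mask
        scores[name] = baseline - torch.sigmoid(model(noisy)).item()
    return scores
```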


Subject(s)
Deep Learning; Dyslexia; Humans; Child; Dyslexia/diagnostic imaging; Brain/diagnostic imaging; Neuroimaging/methods; Comprehension
7.
Brain Inform (2023); 13974: 167-178, 2023 Aug.
Article in English | MEDLINE | ID: mdl-38352916

ABSTRACT

Specific learning disability of reading, or dyslexia, affects 5-17% of the population in the United States. Research on the neurobiology of dyslexia has included studies with relatively small sample sizes across research sites, thus limiting inference and the application of novel methods, such as deep learning. To address these issues and facilitate open science, we developed an online platform for data-sharing and advanced research programs to enhance opportunities for replication by providing researchers with secondary data that can be used in their research (https://www.dyslexiadata.org). This platform integrates a set of well-designed machine learning algorithms and tools to generate secondary datasets, such as cortical thickness, as well as regional brain volume metrics that have been consistently associated with dyslexia. Researchers can access shared data to address fundamental questions about dyslexia and development, replicate research findings, apply new methods, and educate the next generation of researchers. The overarching goal of this platform is to advance our understanding of a disorder that has significant academic, social, and economic impacts on children, their families, and society.

8.
Med Image Comput Comput Assist Interv; 14225: 116-126, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38911098

ABSTRACT

The placenta is a valuable organ that can aid in understanding adverse events during pregnancy and predicting issues post-birth. Manual pathological examination and report generation, however, are laborious and resource-intensive. Limitations in diagnostic accuracy and model efficiency have impeded previous attempts to automate placenta analysis. This study presents a novel framework for the automatic analysis of placenta images that aims to improve accuracy and efficiency. Building on previous vision-language contrastive learning (VLC) methods, we propose two enhancements, namely Pathology Report Feature Recomposition and Distributional Feature Recomposition, which increase representation robustness and mitigate feature suppression. In addition, we employ efficient neural networks as image encoders to achieve model compression and inference acceleration. Experiments validate that the proposed approach outperforms prior work in both performance and efficiency by significant margins. The benefits of our method, including enhanced efficacy and deployability, may have significant implications for reproductive healthcare, particularly in rural areas or low- and middle-income countries.
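
For orientation, a standard vision-language contrastive objective of the kind this work builds on is sketched below (CLIP-style symmetric InfoNCE between image and report embeddings); the paper's Pathology Report Feature Recomposition and Distributional Feature Recomposition enhancements are not reproduced here, and the temperature is an assumption.

```python
import torch
import torch.nn.functional as F

def vlc_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss over matched image/report pairs (baseline
    VLC objective only; the paper's recomposition modules are omitted)."""
    img = F.normalize(image_emb, dim=1)
    txt = F.normalize(text_emb, dim=1)
    logits = img @ txt.t() / temperature                     # (B, B) similarity matrix
    targets = torch.arange(len(img), device=img.device)      # diagonal pairs match
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```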

9.
Patterns (N Y); 3(12): 100627, 2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36569557

ABSTRACT

Automating the three-dimensional (3D) segmentation of stomatal guard cells and other confocal microscopy data is extremely challenging due to hardware limitations, hard-to-localize regions, and limited optical resolution. We present a memory-efficient, attention-based, one-stage segmentation neural network for 3D images of stomatal guard cells. Our model is trained end to end and achieved expert-level accuracy while leveraging only eight human-labeled volume images. As a proof of concept, we applied our model to 3D confocal data from a cell ablation experiment that tests the "polar stiffening" model of stomatal biomechanics. The resulting data allow us to refine this polar stiffening model. This work presents a comprehensive, automated, computer-based volumetric analysis of fluorescent guard cell images. We anticipate that our model will allow biologists to rapidly test cell mechanics and dynamics and help them identify plants that more efficiently use water, a major limiting factor in global agricultural production and an area of critical concern during climate change.

10.
Med Image Anal; 80: 102522, 2022 08.
Article in English | MEDLINE | ID: mdl-35810587

ABSTRACT

In an emergency room (ER) setting, stroke triage or screening is a common challenge. A quick CT is usually done instead of MRI due to MRI's slow throughput and high cost. Clinical tests are commonly referred to during the process, but the misdiagnosis rate remains high. We propose a novel multimodal deep learning framework, DeepStroke, to achieve computer-aided stroke presence assessment by recognizing patterns of minor facial muscle incoordination and speech inability in patients with suspected stroke in an acute setting. Our proposed DeepStroke takes one-minute facial video data and audio data readily available during stroke triage for local facial paralysis detection and global speech disorder analysis. Transfer learning was adopted to reduce face-attribute biases and improve generalizability. We leverage multimodal lateral fusion to combine the low- and high-level features and provide mutual regularization for joint training. Novel adversarial training is introduced to obtain identity-free and stroke-discriminative features. Experiments on our video-audio dataset with actual ER patients show that DeepStroke outperforms state-of-the-art models and achieves better performance than both a triage team and ER doctors, attaining 10.94% higher sensitivity and 7.37% higher accuracy than traditional stroke triage when specificity is aligned. Meanwhile, each assessment can be completed in less than six minutes, demonstrating the framework's great potential for clinical translation.
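
A minimal sketch of the video-audio fusion, assuming generic pretrained encoders and a single stroke/non-stroke logit; it omits the paper's transfer learning, mutual regularization, and adversarial identity-removal, and every dimension here is an assumption.

```python
import torch
import torch.nn as nn

class MultimodalStrokeNet(nn.Module):
    """Fuse facial-video and speech-audio features for stroke screening
    (illustrative; not the published DeepStroke architecture)."""
    def __init__(self, video_encoder, audio_encoder, v_dim, a_dim, hidden=256):
        super().__init__()
        self.video_encoder = video_encoder
        self.audio_encoder = audio_encoder
        self.classifier = nn.Sequential(
            nn.Linear(v_dim + a_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),                 # single logit: stroke vs. non-stroke
        )

    def forward(self, video, audio):
        v = self.video_encoder(video)             # (B, v_dim) facial-motion features
        a = self.audio_encoder(audio)             # (B, a_dim) speech features
        return self.classifier(torch.cat([v, a], dim=1))
```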


Subject(s)
Deep Learning; Stroke; Emergency Service, Hospital; Humans; Magnetic Resonance Imaging; Stroke/diagnostic imaging; Triage
11.
IEEE Trans Cybern; 52(11): 12175-12188, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34133294

ABSTRACT

By applying physical models of the atmosphere to current weather conditions, the numerical weather prediction model developed by the European Centre for Medium-Range Weather Forecasts (ECMWF) can provide indicators of severe weather, such as heavy precipitation, for early-warning systems. However, precipitation forecasts from ECMWF often suffer from considerable prediction biases due to the high complexity and uncertainty of precipitation formation. Bias correction of precipitation (BCoP) is therefore used to correct these biases using predictors from ECMWF, including historical observations and precipitation-related variables that are highly relevant to precipitation. Existing BCoP methods, such as model output statistics and the ordinal boosting autoencoder, do not take advantage of both the spatiotemporal (ST) dependencies of precipitation and the scales of related predictors, which can vary with precipitation intensity. We propose an end-to-end deep-learning BCoP model, called the ST scale adaptive selection (SSAS) model, to automatically select the ST scales of the predictors via ST Scale-Selection Modules (S3M/TS2M) for acquiring the optimal high-level ST representations. Qualitative and quantitative experiments carried out on two benchmark datasets indicate that SSAS achieves state-of-the-art performance compared with 11 published BCoP methods, especially on heavy precipitation.

12.
Pattern Recognit Lett; 140: 165-171, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33324026

ABSTRACT

We propose a multi-region saliency-aware learning (MSL) method for cross-domain placenta image segmentation. Unlike most existing image-level transfer learning methods that fail to preserve the semantics of paired regions, our MSL incorporates an attention mechanism and a saliency constraint into the adversarial translation process, which realizes multi-region mappings at the semantic level. Specifically, the built-in attention module serves to detect the most discriminative semantic regions that the generator should focus on. We then use attention consistency as further guidance for retaining semantics after translation. Furthermore, we exploit a specially designed saliency-consistency constraint to enforce semantic consistency by requiring the saliency regions to remain unchanged. We conduct experiments using two real-world placenta datasets we have collected. We examine the efficacy of this approach in (1) segmentation and (2) prediction of the placental diagnoses of fetal and maternal inflammatory response (FIR, MIR). Experimental results show the superiority of the proposed approach over the state of the art.

13.
Comput Med Imaging Graph; 84: 101744, 2020 09.
Article in English | MEDLINE | ID: mdl-32634729

ABSTRACT

Post-delivery analysis of the placenta is useful for evaluating health risks of both the mother and baby. In the U.S., however, only about 20% of placentas are assessed by pathology exams, and placental data are often missed in pregnancy research because of the additional time, cost, and expertise needed. A computer-based tool that can be used in any delivery setting at the time of birth to provide an immediate and comprehensive placental assessment would have the potential not only to improve health care but also to radically advance medical knowledge. In this paper, we tackle the problem of automatic placental assessment and examination using photos. More concretely, we first address morphological characterization, which includes the tasks of placental image segmentation, umbilical cord insertion point localization, and maternal/fetal side classification. We also tackle clinically meaningful feature analysis of placentas, which comprises detection of retained placenta (i.e., incomplete placenta), umbilical cord knot, meconium, abruption, chorioamnionitis, and hypercoiled cord, and categorization of umbilical cord insertion type. We curated a dataset consisting of approximately 1300 placenta images taken at Northwestern Memorial Hospital, with hand-labeled pixel-level segmentation maps, cord insertion points, and other information extracted from the associated pathology reports. We developed the AI-based Placental Assessment and Examination system (AI-PLAX), a novel two-stage photograph-based pipeline for fully automated analysis. In the first stage, we use three encoder-decoder convolutional neural networks with a shared encoder to address the morphological characterization tasks by employing a transfer-learning training strategy. In the second stage, we employ distinct sub-models to solve different feature analysis tasks by using both the photograph and the output of the first stage. We evaluated the effectiveness of our pipeline by using the curated dataset as well as the pathology reports in the medical record. Through extensive experiments, we demonstrate that our system produces accurate morphological characterization and very promising performance on the aforementioned feature analysis tasks, all of which may possess clinical impact and contribute to future pregnancy research. This work is the first to provide comprehensive, automated, computer-based placental analysis and will serve as a launchpad for potentially multiple future innovations.
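
The stage-one design (one shared encoder serving several morphological tasks) can be sketched as follows; for brevity the full encoder-decoder branches are collapsed into small heads, and the channel counts and head shapes are assumptions rather than the published architecture.

```python
import torch.nn as nn

class SharedEncoderHeads(nn.Module):
    """Shared encoder with three task heads: placenta segmentation,
    cord-insertion-point heatmap, and maternal/fetal side classification."""
    def __init__(self, encoder, feat_channels):
        super().__init__()
        self.encoder = encoder                                            # shared CNN encoder
        self.seg_head = nn.Conv2d(feat_channels, 2, kernel_size=1)        # placenta vs. background
        self.point_head = nn.Conv2d(feat_channels, 1, kernel_size=1)      # insertion-point heatmap
        self.side_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(feat_channels, 2))       # maternal vs. fetal side

    def forward(self, x):
        f = self.encoder(x)                                               # (B, C, H', W') features
        return self.seg_head(f), self.point_head(f), self.side_head(f)
```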


Subject(s)
Placenta; Umbilical Cord; Benzoates; Female; Fetus; Humans; Neural Networks, Computer; Placenta/diagnostic imaging; Pregnancy; Sodium Dodecyl Sulfate
14.
Neurol Genet; 6(2): e412, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32337338

ABSTRACT

OBJECTIVE: Molecular genetic testing for hereditary neuromuscular disorders is increasingly used to identify disease subtypes, determine prevalence, and inform management and prognosis, and although many small disease-specific studies have demonstrated the utility of genetic testing, comprehensive data sets are better positioned to assess the complexity of genetic analysis. METHODS: Using high depth-of-coverage next-generation sequencing (NGS) with simultaneous detection of sequence variants and copy number variants (CNVs), we tested 25,356 unrelated individuals for subsets of 266 genes. RESULTS: A definitive molecular diagnosis was obtained in 20% of this cohort, with yields ranging from 4% among individuals with congenital myasthenic syndrome to 33% among those with a muscular dystrophy. CNVs accounted for as much as 39% of all clinically significant variants, with 10% of them occurring as rare, private pathogenic variants. Multigene testing successfully addressed differential diagnoses in at least 6% of individuals with positive results. Even for classic disorders like Duchenne muscular dystrophy, at least 49% of clinically significant results were identified through gene panels intended for differential diagnoses rather than through single-gene analysis. Variants of uncertain significance (VUS) were observed in 53% of individuals. Only 0.7% of these variants were later reclassified as clinically significant, most commonly in RYR1, GDAP1, SPAST, and MFN2, providing insight into the types of evidence that support VUS resolution and informing expectations of reclassification rates. CONCLUSIONS: These data provide guidance for clinicians using genetic testing to diagnose neuromuscular disorders and represent one of the largest studies demonstrating the utility of NGS-based testing for these disorders.

15.
Int J Comput Vis; 128(1): 1-25, 2020 Jan.
Article in English | MEDLINE | ID: mdl-33664553

ABSTRACT

Humans are arguably innately prepared to comprehend others' emotional expressions from subtle body movements. If robots or computers can be empowered with this capability, a number of robotic applications become possible. Automatically recognizing human bodily expression in unconstrained situations, however, is daunting given the incomplete understanding of the relationship between emotional expressions and body movements. The current research, as a multidisciplinary effort among computer and information sciences, psychology, and statistics, proposes a scalable and reliable crowdsourcing approach for collecting in-the-wild perceived emotion data for computers to learn to recognize body languages of humans. To accomplish this task, a large and growing annotated dataset with 9876 video clips of body movements and 13,239 human characters, named Body Language Dataset (BoLD), has been created. Comprehensive statistical analysis of the dataset revealed many interesting insights. A system to model the emotional expressions based on bodily movements, named Automated Recognition of Bodily Expression of Emotion (ARBEE), has also been developed and evaluated. Our analysis shows the effectiveness of Laban Movement Analysis (LMA) features in characterizing arousal, and our experiments using LMA features further demonstrate computability of bodily expression. We report and compare results of several other baseline methods which were developed for action recognition based on two different modalities, body skeleton and raw image. The dataset and findings presented in this work will likely serve as a launchpad for future discoveries in body language understanding that will enable future robots to interact and collaborate more effectively with humans.

16.
Article in English | MEDLINE | ID: mdl-31725380

ABSTRACT

Crowd counting is a highly challenging problem in computer vision and machine learning. Most previous methods have focused on consistent density crowds, i.e., either a sparse or a dense crowd, meaning they performed well in global estimation while neglecting local accuracy. To make crowd counting more useful in the real world, we propose a new perspective, named pan-density crowd counting, which aims to count people in varying-density crowds. Specifically, we propose the Pan-Density Network (PaDNet), which is composed of the following critical components. First, the Density-Aware Network (DAN) contains multiple subnetworks pretrained on scenarios with different densities. This module is capable of capturing pan-density information. Second, the Feature Enhancement Layer (FEL) effectively captures the global and local contextual features and generates a weight for each density-specific feature. Third, the Feature Fusion Network (FFN) embeds spatial context and fuses these density-specific features. Further, the metrics Patch MAE (PMAE) and Patch RMSE (PRMSE) are proposed to better evaluate performance on global and local estimation. Extensive experiments on four crowd counting benchmark datasets, ShanghaiTech, UCF-CC-50, UCSD, and UCF-QNRF, indicate that PaDNet achieves state-of-the-art recognition performance and high robustness in pan-density crowd counting.
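
The patch-level metrics can be written down directly; the sketch below assumes a fixed grid of non-overlapping patches (the grid size is an assumption) and scores the counting error within each patch.

```python
import numpy as np

def patch_errors(pred_density, gt_density, grid=(4, 4)):
    """Patch MAE and Patch RMSE over a grid of patches (grid size assumed)."""
    H, W = gt_density.shape
    rows, cols = grid
    errs = []
    for i in range(rows):
        for j in range(cols):
            sl = (slice(i * H // rows, (i + 1) * H // rows),
                  slice(j * W // cols, (j + 1) * W // cols))
            errs.append(pred_density[sl].sum() - gt_density[sl].sum())
    errs = np.asarray(errs)
    pmae = np.abs(errs).mean()                  # mean absolute patch counting error
    prmse = np.sqrt((errs ** 2).mean())         # root-mean-square patch counting error
    return pmae, prmse
```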

17.
IEEE Trans Affect Comput; 10(1): 115-128, 2019.
Article in English | MEDLINE | ID: mdl-31576202

ABSTRACT

We propose a probabilistic approach to jointly modeling participants' reliability and humans' regularity in crowdsourced affective studies. Reliability measures how likely a subject is to respond to a question seriously, and regularity measures how often a human will agree with other seriously entered responses from a targeted population. Crowdsourcing-based studies or experiments, which rely on human self-reported affect, pose additional challenges compared with typical crowdsourcing studies that attempt to acquire concrete non-affective labels of objects. The reliability of participants has been studied extensively for typical non-affective crowdsourcing, whereas the regularity of humans in an affective experiment has not, in its own right, been thoroughly considered. It has often been observed that different individuals exhibit different feelings on the same test question, which does not have a single correct response in the first place. High reliability of responses from one individual thus cannot conclusively result in high consensus across individuals. Instead, globally testing the consensus of a population is of interest to investigators. Built upon the agreement multigraph among tasks and workers, our probabilistic model differentiates subject regularity from population reliability. We demonstrate the method's effectiveness for in-depth, robust analysis of large-scale crowdsourced affective data, including emotion and aesthetic assessments collected by presenting visual stimuli to human subjects.
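
As a rough intuition only, the sketch below computes per-worker agreement with co-workers on shared items (a crude reliability proxy) and overall pairwise agreement (a crude regularity proxy); it is a simplification, not the paper's probabilistic multigraph model, and every name is hypothetical.

```python
from collections import defaultdict
from itertools import combinations

def agreement_stats(responses):
    """responses: dict mapping item -> {worker: label}.
    Returns a per-worker agreement rate and the overall pairwise agreement
    (illustrative proxies, not the paper's probabilistic model)."""
    agree, total = defaultdict(int), defaultdict(int)
    pair_agree = pair_total = 0
    for labels in responses.values():
        for (w1, l1), (w2, l2) in combinations(labels.items(), 2):
            same = int(l1 == l2)
            agree[w1] += same; agree[w2] += same
            total[w1] += 1; total[w2] += 1
            pair_agree += same; pair_total += 1
    reliability = {w: agree[w] / total[w] for w in total}
    regularity = pair_agree / pair_total if pair_total else float("nan")
    return reliability, regularity
```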

18.
J Exp Bot; 70(14): 3561-3572, 2019 07 23.
Article in English | MEDLINE | ID: mdl-30977824

ABSTRACT

In plants, stomatal guard cells are one of the most dynamic cell types, rapidly changing their shape and size in response to environmental and intrinsic signals to control gas exchange at the plant surface. Quantitative and systematic knowledge of the biomechanical underpinnings of stomatal dynamics will enable strategies to optimize stomatal responsiveness and improve plant productivity by enhancing the efficiency of photosynthesis and water use. Recent developments in microscopy, mechanical measurements, and computational modeling have revealed new insights into the biomechanics of stomatal regulation and the genetic, biochemical, and structural origins of how plants achieve rapid and reliable stomatal function by tuning the mechanical properties of their guard cell walls. This review compares historical and recent experimental and modeling studies of the biomechanics of stomatal complexes, highlighting commonalities and contrasts between older and newer studies. Key gaps in our understanding of stomatal functionality are also presented, along with assessments of potential methods that could bridge those gaps.


Subject(s)
Cell Wall/chemistry; Plant Stomata/chemistry; Biomechanical Phenomena; Models, Biological; Plants/chemistry
19.
Front Plant Sci; 9: 1566, 2018.
Article in English | MEDLINE | ID: mdl-30455709

ABSTRACT

Stomata function as osmotically tunable pores that facilitate gas exchange at the surface of plants. Stomatal opening and closure are regulated by turgor changes in guard cells that result in mechanically regulated deformations of guard cell walls. However, how the molecular, architectural, and mechanical heterogeneities that exist in guard cell walls affect stomatal dynamics is unclear. In this work, stomata of wild type Arabidopsis thaliana plants or of mutants lacking normal cellulose, hemicellulose, or pectins were experimentally induced to close or open. Three-dimensional images of these stomatal complexes were collected using confocal microscopy, images were landmarked, and three-dimensional finite element models (FEMs) were constructed for each complex. Stomatal opening was simulated with a 5 MPa turgor increase. By comparing experimentally measured and computationally modeled changes in stomatal geometry across genotypes, anisotropic mechanical properties of guard cell walls were determined and mapped to cell wall components. Deficiencies in cellulose or hemicellulose were both predicted to stiffen guard cell walls, but differentially affected stomatal pore area and the degree of stomatal opening. Additionally, reducing pectin molecular mass altered the anisotropy of calculated shear moduli in guard cell walls and enhanced stomatal opening. Based on the unique architecture of guard cell walls and our modeled changes in their mechanical properties in cell wall mutants, we discuss how each polysaccharide class contributes to wall architecture and mechanics in guard cells. This study provides new insights into how the walls of guard cells are constructed to meet the mechanical requirements of stomatal dynamics.

20.
Front Plant Sci; 9: 1202, 2018.
Article in English | MEDLINE | ID: mdl-30177940

ABSTRACT

Guard cells are pairs of epidermal cells that control gas diffusion by regulating the opening and closure of stomatal pores. Guard cells, like other types of plant cells, are surrounded by a three-dimensional, extracellular network of polysaccharide-based wall polymers. In contrast to the walls of diffusely growing cells, guard cell walls have been hypothesized to be uniquely strong and elastic to meet the functional requirements of withstanding high turgor and allowing for reversible stomatal movements. Although the walls of guard cells were long underexplored as compared to extensive studies of stomatal development and guard cell signaling, recent research has provided new genetic, cytological, and physiological data demonstrating that guard cell walls function centrally in stomatal development and dynamics. In this review, we highlight and discuss the latest evidence for how wall polysaccharides are synthesized, deposited, reorganized, modified, and degraded in guard cells, and how these processes influence stomatal form and function. We also raise open questions and provide a perspective on experimental approaches that could be used in the future to shed light on the composition and architecture of guard cell walls.
