1.
Article in English | MEDLINE | ID: mdl-38502618

ABSTRACT

Generating virtual organ populations that capture sufficient variability while remaining plausible is essential to conduct in silico trials (ISTs) of medical devices. However, not all anatomical shapes of interest are always available for each individual in a population. The imaging examinations and modalities used can vary between subjects depending on their individualized clinical pathways. Different imaging modalities may have different fields of view, may be sensitive to signals from different tissues/organs, or both. Hence, anatomical information is often missing or only partially overlapping across individuals. We introduce a generative shape model for multipart anatomical structures that is learnable from unpaired datasets, i.e., where each substructure in the shape assembly comes from datasets with missing or partially overlapping substructures from disjoint subjects of the same population. The proposed generative model can synthesize complete multipart shape assemblies, coined virtual chimeras (VCs). We applied this framework to build VCs from databases of whole-heart shape assemblies that each contribute samples for heart substructures. Specifically, we propose a graph neural network-based generative shape compositional framework comprising two components: a part-aware generative shape model that captures the variability in shape observed for each structure of interest in the training population, and a spatial composition network that assembles/composes the structures synthesized by the former into multipart shape assemblies (i.e., VCs). We also propose a novel self-supervised learning scheme that enables the spatial composition network to be trained with partially overlapping data and weak labels. We trained and validated our approach using shapes of cardiac structures derived from cardiac magnetic resonance (MR) images in the UK Biobank (UKBB).
When trained with complete and partially overlapping data, our approach significantly outperforms a principal component analysis (PCA)-based shape model (trained with complete data) in terms of generalizability and specificity. This demonstrates the superiority of the proposed method, as the synthesized cardiac virtual populations are more plausible and capture a greater degree of shape variability than those generated by the PCA-based shape model.
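The PCA baseline used for comparison above is a standard point-distribution shape model. The following numpy sketch is illustrative only (toy random data stands in for corresponded cardiac landmarks, and the function names are ours, not the paper's):

```python
import numpy as np

def fit_pca_shape_model(shapes, n_modes=2):
    """Fit a PCA point-distribution model to flattened, corresponded shapes.

    shapes: (n_samples, n_points * dims) landmark matrix.
    Returns the mean shape and the top n_modes principal modes.
    """
    mean = shapes.mean(axis=0)
    # SVD of the centered data yields the principal modes directly.
    _, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, vt[:n_modes]

def sample_shapes(mean, modes, coeffs):
    """Synthesize new shapes as the mean plus a linear combination of modes."""
    return mean + coeffs @ modes

rng = np.random.default_rng(0)
train = rng.normal(size=(50, 30))              # 50 toy shapes, 15 2-D points
mean, modes = fit_pca_shape_model(train, n_modes=2)
new = sample_shapes(mean, modes, rng.normal(size=(5, 2)))

# Generalizability: reconstruction error of unseen shapes under the model.
test = rng.normal(size=(10, 30))
recon = mean + (test - mean) @ modes.T @ modes
gen_error = float(np.mean(np.linalg.norm(test - recon, axis=1)))
```

Generalizability (reconstructing unseen shapes) and specificity (sampled shapes staying close to real ones) are the two criteria on which the abstract reports the graph-based model outperforming this kind of baseline.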

2.
Int J Psychophysiol ; 200: 112339, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38554769

ABSTRACT

Altered stimulus generalization has been well-documented in anxiety disorders; however, there is a paucity of research investigating this phenomenon in the context of depression. Depression is characterized by impaired reward processing and heightened attention to negative stimuli. It is hypothesized that individuals with depression exhibit reduced generalization of reward stimuli and enhanced generalization of loss stimuli. Nevertheless, no study has examined this process and its underlying neural mechanisms. In the present study, we recruited 25 participants with subthreshold depression (SD group) and 24 age-matched healthy controls (HC group). Participants completed an acquisition task, in which they learned to associate three distinct pure tones (conditioned stimuli, CSs) with a reward, a loss, or no outcome. Subsequently, a generalization session was conducted, during which similar tones (generalization stimuli, GSs) were presented, and participants were required to classify them as a reward tone, a loss tone, or neither. The results revealed that the SD group exhibited reduced generalization errors in the early phase of generalization, suggesting a diminished ability to generalize reward-related stimuli. The event-related potential (ERP) results indicated that the SD group exhibited decreased generalization of positive valence to reward-related GSs and heightened generalization of negative valence to loss-related GSs, as reflected by the N1 and P2 components. However, the late positive potential (LPP) was not modulated by depression in reward generalization or loss generalization. These findings suggested that individuals with subthreshold depression may have a blunted or reduced ability to generalize reward stimuli, shedding light on potential treatment strategies targeting this particular process.


Subject(s)
Depression , Electroencephalography , Generalization, Psychological , Reward , Humans , Male , Female , Generalization, Psychological/physiology , Young Adult , Adult , Depression/physiopathology , Evoked Potentials/physiology , Conditioning, Classical/physiology
3.
Med Image Anal ; 87: 102810, 2023 07.
Article in English | MEDLINE | ID: mdl-37054648

ABSTRACT

Sensorless freehand 3D ultrasound (US) reconstruction based on deep networks shows promising advantages, such as a large field of view, relatively high resolution, low cost, and ease of use. However, existing methods mainly consider vanilla scan strategies with limited inter-frame variation, and thus degrade on the complex but routine scan sequences encountered in clinics. In this context, we propose a novel online learning framework for freehand 3D US reconstruction under complex scan strategies with diverse scanning velocities and poses. First, we devise a motion-weighted training loss in the training phase to regularize the frame-by-frame scan variation and better mitigate the negative effects of uneven inter-frame velocity. Second, we drive online learning with local-to-global pseudo-supervision, mining both frame-level contextual consistency and a path-level similarity constraint to improve inter-frame transformation estimation. We also learn a global adversarial shape prior that transfers latent anatomical knowledge as supervision. Third, we build a feasible differentiable reconstruction approximation to enable end-to-end optimization of our online learning. Experimental results illustrate that our freehand 3D US reconstruction framework outperformed current methods on two large simulated datasets and one real dataset. In addition, we applied the proposed framework to clinical scan videos to further validate its effectiveness and generalizability.
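The motion-weighted training loss described above can be pictured as a per-frame regression loss scaled by the magnitude of the true inter-frame motion. This is an illustrative numpy reading of the idea, not the paper's exact formulation; the 6-DoF transform parameterization and toy data are assumptions:

```python
import numpy as np

def motion_weighted_loss(pred, target):
    """Per-frame transform regression loss, up-weighted where the true
    inter-frame motion is large, so fast sweep segments are not under-fitted.

    pred, target: (n_frames, 6) relative inter-frame transforms
    (3 rotation + 3 translation parameters).
    """
    per_frame = np.mean((pred - target) ** 2, axis=1)   # MSE per frame
    motion = np.linalg.norm(target, axis=1)             # motion magnitude
    weights = motion / (motion.sum() + 1e-8)            # normalize to ~sum 1
    return float(np.sum(weights * per_frame))

rng = np.random.default_rng(1)
target = rng.normal(size=(20, 6))
loss = motion_weighted_loss(target + 0.1 * rng.normal(size=(20, 6)), target)
```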


Subject(s)
Education, Distance , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Algorithms , Ultrasonography/methods
4.
Neurosci Lett ; 802: 137173, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36898651

ABSTRACT

Based on the mind-blindness hypothesis, a large number of studies have shown that individuals with autism-spectrum disorder (ASD) and autistic traits have empathy deficits. However, the more recent double empathy theory contradicts the mind-blindness hypothesis and suggests that individuals with ASD and autistic traits do not necessarily lack empathy. Thus, the presence of empathy deficits in individuals with ASD and autistic traits remains controversial. We recruited 56 adolescents (28 with high autistic traits, 28 with low autistic traits; 14-17 years old) to explore the relationship between empathy and autistic traits. Participants performed a pain empathy task while electroencephalographic (EEG) activity was recorded. Our results showed that empathy was negatively associated with autistic traits at the questionnaire, behavioral, and EEG levels. They also suggested that empathy deficits in adolescents with autistic traits may manifest mainly in the late stages of cognitive control processing.


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Child Development Disorders, Pervasive , Child , Humans , Adolescent , Empathy , Autistic Disorder/psychology , Autism Spectrum Disorder/psychology , Social Behavior
5.
Comput Methods Programs Biomed ; 233: 107477, 2023 May.
Article in English | MEDLINE | ID: mdl-36972645

ABSTRACT

BACKGROUND AND OBJECTIVE: Deep learning models often suffer performance degradation when deployed in real clinical environments because of appearance shifts between training and testing images. Most existing methods use training-time adaptation, which generally requires target-domain samples during the training phase. However, such solutions are limited by the training process and cannot guarantee accurate prediction on test samples with unforeseen appearance shifts; moreover, it is impractical to collect target samples in advance. In this paper, we provide a general method for making existing segmentation models robust to samples with unknown appearance shifts when deployed in daily clinical practice. METHODS: Our proposed test-time bi-directional adaptation framework combines two complementary strategies. First, our image-to-model (I2M) adaptation strategy adapts appearance-agnostic test images to the learned segmentation model using a novel plug-and-play statistical alignment style transfer module during testing. Second, our model-to-image (M2I) adaptation strategy adapts the learned segmentation model to test images with unknown appearance shifts. This strategy applies an augmented self-supervised learning module to fine-tune the learned model with proxy labels that it generates, a procedure that can be adaptively constrained using our novel proxy consistency criterion. Together, the complementary I2M and M2I strategies achieve robust segmentation against unknown appearance shifts using existing deep-learning models. RESULTS: Extensive experiments on 10 datasets containing fetal ultrasound, chest X-ray, and retinal fundus images demonstrate that our proposed method achieves promising robustness and efficiency in segmenting images with unknown appearance shifts. CONCLUSIONS: To address the appearance-shift problem in clinically acquired medical images, we provide robust segmentation by combining two complementary strategies.
Our solution is general and amenable to deployment in clinical settings.
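The plug-and-play statistical alignment module (I2M) can be pictured as a moment-matching step. Below is a hedged numpy sketch assuming simple global mean/std alignment of intensities; the actual module may align richer feature statistics:

```python
import numpy as np

def statistical_alignment(test_img, train_mean, train_std, eps=1e-8):
    """Shift a test image's global intensity statistics to the training
    domain: normalize by the image's own mean/std, then re-scale."""
    mu, sigma = test_img.mean(), test_img.std()
    return (test_img - mu) / (sigma + eps) * train_std + train_mean

rng = np.random.default_rng(2)
shifted = 3.0 * rng.normal(size=(64, 64)) + 10.0   # appearance-shifted input
aligned = statistical_alignment(shifted, train_mean=0.0, train_std=1.0)
```

After alignment the test image's first- and second-order statistics match the training domain, which is what lets a frozen segmentation model see familiar inputs.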


Subject(s)
Image Processing, Computer-Assisted , Ultrasonography, Prenatal , Female , Pregnancy , Humans , Fundus Oculi
6.
Article in English | MEDLINE | ID: mdl-36181957

ABSTRACT

Humans gain knowledge about threats not only from their own experiences but also from observing others' behavior. When a neutral stimulus is repeatedly paired with a threat stimulus, the neutral stimulus comes to evoke fear responses; this is known as fear conditioning. When encountering a new event that is similar to one previously associated with a threat, one may feel afraid and produce fear responses; this is called fear generalization. Previous studies have mostly focused on fear conditioning and generalization based on direct learning, but few have explored how observational fear learning affects fear conditioning and generalization. To the best of our knowledge, no previous study has examined the neural correlates of fear conditioning and generalization based on observational learning. In the present study, 58 participants performed a differential conditioning paradigm in which they learned the associations between neutral cues (i.e., geometric figures) and threat stimuli (i.e., electric shock). Learning occurred both directly (direct learning) and by observing another participant's responses (observational learning), in a within-subjects design. After each learning condition, each participant independently completed a fear generalization paradigm while their behavioral responses (i.e., expectation of a shock) and electroencephalography (EEG) were recorded. The shock expectancy ratings showed that observational learning, compared to direct learning, reduced the differentiation between the conditioned threat and safety stimuli and increased shock expectancy to the generalization stimuli. The EEG indicated that during fear learning, threat-conditioned stimuli in both observational and direct learning increased early discrimination (P1) and late motivated attention (late positive potential [LPP]) compared with safety-conditioned stimuli.
In fear generalization, early discrimination, late motivated attention, and orienting attention (alpha-event-related desynchronization [alpha-ERD]) to generalization stimuli were reduced in the observational learning condition. These findings suggest that compared to direct learning, observational learning reduces differential fear learning and increases the generalization of fear, and this might be associated with reduced discrimination and attentional function related to generalization stimuli.


Subject(s)
Fear , Generalization, Psychological , Humans , Fear/physiology , Generalization, Psychological/physiology , Conditioning, Classical/physiology , Learning/physiology , Attention
7.
Med Image Anal ; 80: 102478, 2022 08.
Article in English | MEDLINE | ID: mdl-35691144

ABSTRACT

Breast ultrasound (BUS) has proven to be an effective tool for the early detection of breast cancer. Lesion segmentation identifies the boundary, shape, and location of the target, and serves as a crucial step toward accurate diagnosis. Despite recent efforts to automate this process with machine learning algorithms, problems remain due to blurry or occluded edges and highly irregular nodule shapes. Existing methods often produce over-smooth or inaccurate results, failing to capture the detailed boundary structures that are of clinical interest. To overcome these challenges, we propose a novel boundary-rendering framework that explicitly highlights the importance of the boundary for automated nodule segmentation in BUS images. It utilizes a boundary selection module to automatically focus on the ambiguous boundary region and a graph convolution-based boundary rendering module to exploit global contour information. Furthermore, the proposed framework embeds nodule classification via semantic segmentation and encourages co-learning across tasks. Validation experiments were performed on different BUS datasets to verify the robustness of the proposed method. Results show that the proposed method outperforms state-of-the-art segmentation approaches (Dice=0.854, IOU=0.919, HD=17.8) in nodule delineation, and achieves higher classification accuracy than classical classification models.
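The Dice, IoU, and Hausdorff distance (HD) figures quoted above are standard segmentation metrics. A small numpy sketch of how they are computed on toy binary masks (not the paper's data); note that for the same pair of masks, Dice = 2·IoU/(1 + IoU), so Dice is never smaller than IoU:

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice and IoU for binary masks; Dice = 2*IoU / (1 + IoU)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum()), inter / union

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets of shape (n, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

gt = np.zeros((32, 32), bool); gt[8:24, 8:24] = True        # 16x16 square
pred = np.zeros((32, 32), bool); pred[10:26, 8:24] = True   # shifted 2 rows
dice, iou = dice_iou(pred, gt)
hd = hausdorff(np.argwhere(pred).astype(float), np.argwhere(gt).astype(float))
```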


Subject(s)
Algorithms , Image Processing, Computer-Assisted , Breast/diagnostic imaging , Female , Humans , Image Processing, Computer-Assisted/methods , Ultrasonography , Ultrasonography, Mammary/methods
8.
IEEE Access ; 10: 29322-29332, 2022.
Article in English | MEDLINE | ID: mdl-35656515

ABSTRACT

Deep learning models represent the state of the art in medical image segmentation. Most of these models are fully convolutional networks (FCNs), in which each layer processes the output of the preceding layer with convolution operations. The convolution operation enjoys several important properties, such as sparse interactions, parameter sharing, and translation equivariance; because of these, FCNs possess a strong and useful inductive bias for image modeling and analysis. However, they also have important shortcomings, such as performing a fixed, pre-determined operation on a test image regardless of its content, and difficulty in modeling long-range interactions. In this work we show that a different deep neural network architecture, based entirely on self-attention between neighboring image patches and without any convolution operations, can achieve more accurate segmentations than FCNs. Our proposed model is based directly on the transformer network architecture. Given a 3D image block, our network divides it into non-overlapping 3D patches and computes a 1D embedding for each patch. The network predicts the segmentation map for the block based on the self-attention between these patch embeddings. Furthermore, to address the common scarcity of labeled medical images, we propose methods for pre-training this model on large corpora of unlabeled images. Our experiments show that the proposed model can achieve segmentation accuracies better than several state-of-the-art FCN architectures on two datasets, and can be trained using only tens of labeled images. Moreover, with the proposed pre-training strategies, our network outperforms FCNs when labeled training data are scarce.
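The patch-embedding front end described above (non-overlapping 3D patches, each flattened to a 1D vector before self-attention) can be sketched in a few lines of numpy; the block and patch sizes here are arbitrary toy values:

```python
import numpy as np

def patchify_3d(block, patch=4):
    """Split a cubic 3D block into non-overlapping patches, flattening each
    patch to a 1D vector (the transformer's token input)."""
    d = block.shape[0]
    assert block.shape == (d, d, d) and d % patch == 0
    n = d // patch
    p = block.reshape(n, patch, n, patch, n, patch)
    p = p.transpose(0, 2, 4, 1, 3, 5)      # group the three block axes first
    return p.reshape(n ** 3, patch ** 3)

block = np.arange(8 ** 3, dtype=float).reshape(8, 8, 8)
tokens = patchify_3d(block, patch=4)       # 8 patches of 4*4*4 = 64 voxels
```

Each row of `tokens` would then be projected to an embedding and fed to the self-attention layers.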

9.
Psychophysiology ; 59(6): e13997, 2022 06.
Article in English | MEDLINE | ID: mdl-35244973

ABSTRACT

Humans have evolved to seek the proximity of attachment figures during times of threat in order to obtain a sense of safety. In this context, we examined whether the voice of an intimate partner (termed an "attachment voice") could reduce fear learning of conditioned stimuli (CS+) and enhance learning of safety signals (CS-). Although the ability to learn safety signals is vital for human survival, few studies have explored how attachment voices affect safety learning. To test our hypothesis, we recruited thirty-five young couples and performed a classic Pavlovian conditioning experiment, recording behavioral and electroencephalographic (EEG) data. The results showed that, compared with a stranger's voice, a partner's voice reduced expectancy of the unconditioned stimulus (a shock) during fear conditioning, as well as the magnitude of P2 event-related potentials within the EEG responses, provided the voice served as a safety signal. Additionally, behavioral and EEG responses to the CS+ and CS- differed more when participants heard their partner's voice than when they heard the stranger's voice. Thus, attachment voices, even as pure vowel sounds without any semantic information, enhanced acquisition of conditioned safety (CS-). These findings may inform new techniques for improving clinical treatments of fear- and anxiety-related disorders, as well as psychological interventions against the mental health effects of public health emergencies.


Subject(s)
Conditioning, Classical , Voice , Conditioning, Classical/physiology , Evoked Potentials , Fear/physiology , Humans , Learning
10.
Sci Rep ; 11(1): 14210, 2021 07 09.
Article in English | MEDLINE | ID: mdl-34244571

ABSTRACT

Previous research indicates that excessive fear is a critical feature of anxiety disorders; however, recent studies suggest that disgust may also contribute to the etiology and maintenance of some anxiety disorders. It remains unclear whether these two threat-related emotions differ in conditioning and generalization. Evaluating the distinct patterns of fear and disgust learning would facilitate a deeper understanding of how anxiety disorders develop. In this study, 32 college students completed threat conditioning tasks in which conditioned stimuli were paired with frightening or disgusting images. Fear and disgust were presented in two randomly ordered blocks, and differences were examined by recording subjective US expectancy ratings and eye movements during conditioning and generalization. During conditioning, differing US expectancy ratings (fear vs. disgust) were found only for the CS-, which may demonstrate that fear is associated with inferior discrimination learning. During the generalization test, participants exhibited greater US expectancy ratings to fear-related GS1 (generalized stimulus) and GS2 than to the corresponding disgust-related GS1 and GS2. Fear led to longer reaction times than disgust in both phases, and pupil size and fixation duration were larger for fear stimuli than for disgust stimuli, suggesting that disgust generalization has a steeper gradient than fear generalization. These findings provide preliminary evidence for differences between fear- and disgust-related stimuli in conditioning and generalization, and offer insights into treatments for anxiety and other fear- or disgust-related disorders.


Subject(s)
Disgust , Fear/psychology , Adolescent , Adult , Anxiety Disorders/physiopathology , Anxiety Disorders/psychology , Fear/physiology , Female , Generalization, Psychological/physiology , Humans , Male , Wounds and Injuries/physiopathology , Wounds and Injuries/psychology , Young Adult
11.
Med Image Anal ; 72: 102137, 2021 08.
Article in English | MEDLINE | ID: mdl-34216958

ABSTRACT

Recently, more clinicians have realized the diagnostic value of multi-modal ultrasound in breast cancer identification and have begun to incorporate Doppler imaging and elastography into routine examinations. However, accurately recognizing patterns of malignancy in different types of sonography requires expertise. Furthermore, an accurate and robust diagnosis requires proper weighting of multi-modal information as well as the ability to handle missing data in practice. These two aspects are often overlooked by existing computer-aided diagnosis (CAD) approaches. To overcome these challenges, we propose a novel framework (called AW3M) that jointly utilizes four types of sonography (i.e., B-mode, Doppler, shear-wave elastography, and strain elastography) to assist breast cancer diagnosis. It can extract both modality-specific and modality-invariant features using a multi-stream CNN model equipped with a self-supervised consistency loss. Instead of assigning the weights of the different streams empirically, AW3M learns the optimal weights automatically using reinforcement learning. Furthermore, we design a light-weight recovery block that can be inserted into a trained model to handle different modality-missing scenarios. Experimental results on a large multi-modal dataset demonstrate that our method achieves promising performance compared with state-of-the-art methods. The AW3M framework was also tested on an independent B-mode dataset to prove its efficacy in general settings. Results show that the proposed recovery block can learn from the joint distribution of multi-modal features to further boost classification accuracy given single-modality input at test time.
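The idea of weighted multi-stream fusion with tolerance for missing modalities can be illustrated with a toy function. This is a deliberate simplification: the paper learns the stream weights by reinforcement learning and recovers missing features with a trained recovery block, neither of which is shown here:

```python
import numpy as np

def fuse(features, scores, present):
    """Softmax-weighted fusion of per-modality feature vectors.

    features: (n_mod, dim); scores: (n_mod,) learned stream scores;
    present: boolean mask of available modalities. Missing streams are
    dropped and the remaining weights renormalized.
    """
    w = np.where(present, np.exp(scores), 0.0)
    w = w / w.sum()
    return w @ features

feats = np.eye(4) * 2.0                    # 4 toy modality feature vectors
fused = fuse(feats, np.zeros(4), np.array([True, True, False, True]))
```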


Subject(s)
Breast Neoplasms , Elasticity Imaging Techniques , Breast Neoplasms/diagnostic imaging , Diagnosis, Computer-Assisted , Female , Humans , Ultrasonography , Ultrasonography, Mammary
12.
Sci Rep ; 11(1): 11754, 2021 06 03.
Article in English | MEDLINE | ID: mdl-34083660

ABSTRACT

Recent research has provided evidence that stimulus-driven attentional bias for threats can be modulated by top-down goals. However, it remains essential to determine whether, and to what extent, top-down goals can affect the early stage of attention processing, and to characterize the underlying early neural mechanism. In this study, we collected electroencephalographic data from 28 healthy volunteers performing a modified spatial cueing task. The results revealed that in the irrelevant task there was no significant difference between the reaction times (RTs) to fearful and neutral faces. In the relevant task, the RT to fearful faces was faster than that to neutral faces in the valid-cue condition, whereas it was slower in the invalid-cue condition. The N170 component showed a pattern similar to the RT results: in the relevant task, fearful faces at the cued target position evoked a larger N170 amplitude than neutral faces, whereas this effect was suppressed in the irrelevant task. These results suggest that the irrelevant task may inhibit early attention allocation to fearful faces, and that top-down goals can modulate the early attentional bias for threatening facial expressions.


Subject(s)
Attention , Evoked Potentials , Facial Expression , Fear , Adult , Brain/physiology , Brain Mapping , Cues , Electroencephalography , Emotions , Female , Humans , Magnetic Resonance Imaging , Male , Photic Stimulation , Reaction Time , Young Adult
13.
Med Image Anal ; 72: 102119, 2021 08.
Article in English | MEDLINE | ID: mdl-34144345

ABSTRACT

3D ultrasound (US) has become prevalent due to its rich spatial and diagnostic information not contained in 2D US. Moreover, 3D US can contain multiple standard planes (SPs) in one shot, so automatically localizing SPs in 3D US has the potential to improve user-independence and scanning efficiency. However, manual SP localization in 3D US is challenging because of the low image quality, huge search space, and large anatomical variability. In this work, we propose a novel multi-agent reinforcement learning (MARL) framework to simultaneously localize multiple SPs in 3D US. Our contribution is four-fold. First, our proposed method is general and can accurately localize multiple SPs in different challenging US datasets. Second, we equip the MARL system with a recurrent neural network (RNN)-based collaborative module, which strengthens communication among agents and effectively learns the spatial relationship among planes. Third, we adopt neural architecture search (NAS) to automatically design the network architecture of both the agents and the collaborative module. Last, we believe we are the first to realize automatic SP localization in pelvic US volumes, and our approach handles both normal and abnormal uterus cases. Extensively validated on two challenging datasets of the uterus and fetal brain, our proposed method achieves average localization accuracies of 7.03°/1.59 mm and 9.75°/1.19 mm, respectively. Experimental results show that our light-weight MARL model achieves higher accuracy than state-of-the-art methods.


Subject(s)
Neural Networks, Computer , Uterus , Female , Humans , Imaging, Three-Dimensional , Ultrasonography
14.
Cogn Affect Behav Neurosci ; 21(5): 1054-1065, 2021 10.
Article in English | MEDLINE | ID: mdl-34021495

ABSTRACT

Learned fear can be generalized through both perceptual and conceptual information. This study investigated how perceptual and conceptual similarities influence this generalization process. Twenty-three healthy volunteers completed a fear-generalization test as brain activity was recorded in the form of event-related potentials (ERPs). Participants were exposed to a de novo fear acquisition paradigm with four categories of conditioned stimuli (CS): two conceptual cues (animals and furniture); and two perceptual cues (blue and purple shapes). Animals (C+) and purple shapes (P+) were paired with the unconditioned stimulus (US), whereas furniture (C-) and blue shapes (P-) never were. The generalized stimuli were thus blue animals (C+P+, determined danger), blue furniture (C-P+, perceptual danger), purple animals (C+P-, conceptual danger), and purple furniture (C-P-, determined safe). We found that perceptual cues elicited larger fear responses and shorter reaction times than did conceptual cues during fear acquisition. This suggests that a perceptually related pathway might evoke greater fear than a conceptually based route. During generalization, participants were more afraid of C+ exemplars than of C- exemplars. Furthermore, C+ trials elicited greater N400 amplitudes. Thus, participants appear able to use conceptually based cues to infer the value of the current stimuli. Additionally, compared with C+ exemplars, we found an enhanced late positive potential effect in response to C- exemplars, which seems to reflect a late inhibitory process and might index safety learning. These findings may offer new insights into the pathological mechanism of anxiety disorders.


Subject(s)
Electroencephalography , Evoked Potentials , Conditioning, Classical , Fear , Female , Generalization, Psychological , Humans , Male
15.
IEEE Trans Med Imaging ; 40(7): 1950-1961, 2021 07.
Article in English | MEDLINE | ID: mdl-33784618

ABSTRACT

Accurate standard plane (SP) localization is a fundamental step in prenatal ultrasound (US) diagnosis. Typically, dozens of US SPs are collected to determine the clinical diagnosis. 2D US requires a separate scan for each SP, which is time-consuming and operator-dependent, whereas 3D US, which contains multiple SPs in one shot, is inherently less user-dependent and more efficient. Automatically locating SPs in 3D US is nonetheless very challenging due to the huge search space and large fetal posture variations. Our previous study proposed a deep reinforcement learning (RL) framework with an alignment module and active termination to localize SPs in 3D US automatically. However, the termination of the agent's search in RL is important and affects practical deployment. In this study, we enhance our previous RL framework with a newly designed adaptive dynamic termination that enables an early stop of the agent's search, saving at most 67% of inference time and thereby boosting both the accuracy and efficiency of the RL framework. We also validate the effectiveness and generalizability of our algorithm extensively on in-house multi-organ datasets containing 433 fetal brain volumes, 519 fetal abdomen volumes, and 683 uterus volumes. Our approach achieves localization errors of 2.52 mm/10.26°, 2.48 mm/10.39°, and 2.02 mm/10.48° for the transcerebellar, transventricular, and transthalamic planes in the fetal brain; 2.00 mm/14.57° for the abdominal plane in the fetal abdomen; and 2.61 mm/9.71°, 3.09 mm/9.58°, and 1.49 mm/7.54° for the mid-sagittal, transverse, and coronal planes in the uterus, respectively. Experimental results show that our method is general and has the potential to improve the efficiency and standardization of US scanning.
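The dual distance/angle metric quoted above (e.g. 2.52 mm/10.26°) compares a predicted plane to the ground truth via the angle between plane normals and the difference in plane offsets. A numpy sketch under a unit-normal-plus-offset plane parameterization (the parameterization and toy values are our assumptions, not necessarily the paper's):

```python
import numpy as np

def plane_error(n1, d1, n2, d2):
    """Angle (degrees) between plane normals plus absolute offset difference,
    for planes written as n . x = d with unit normal n."""
    n1, n2 = n1 / np.linalg.norm(n1), n2 / np.linalg.norm(n2)
    cos = np.clip(abs(np.dot(n1, n2)), 0.0, 1.0)   # orientation-invariant
    return float(np.degrees(np.arccos(cos))), abs(d1 - d2)

ang, dist = plane_error(np.array([0.0, 0.0, 1.0]), 10.0,
                        np.array([0.0, 1.0, 1.0]), 12.5)  # planes 45 deg apart
```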


Subject(s)
Algorithms , Ultrasonography, Prenatal , Abdomen/diagnostic imaging , Female , Humans , Imaging, Three-Dimensional , Pregnancy , Ultrasonography
16.
IEEE Trans Med Imaging ; 40(4): 1123-1133, 2021 04.
Article in English | MEDLINE | ID: mdl-33351755

ABSTRACT

Fetal cortical plate segmentation is essential in quantitative analysis of fetal brain maturation and cortical folding. Manual segmentation of the cortical plate, or manual refinement of automatic segmentations is tedious and time-consuming. Automatic segmentation of the cortical plate, on the other hand, is challenged by the relatively low resolution of the reconstructed fetal brain MRI scans compared to the thin structure of the cortical plate, partial voluming, and the wide range of variations in the morphology of the cortical plate as the brain matures during gestation. To reduce the burden of manual refinement of segmentations, we have developed a new and powerful deep learning segmentation method. Our method exploits new deep attentive modules with mixed kernel convolutions within a fully convolutional neural network architecture that utilizes deep supervision and residual connections. We evaluated our method quantitatively based on several performance measures and expert evaluations. Results show that our method outperforms several state-of-the-art deep models for segmentation, as well as a state-of-the-art multi-atlas segmentation technique. We achieved average Dice similarity coefficient of 0.87, average Hausdorff distance of 0.96 mm, and average symmetric surface difference of 0.28 mm on reconstructed fetal brain MRI scans of fetuses scanned in the gestational age range of 16 to 39 weeks (28.6± 5.3). With a computation time of less than 1 minute per fetal brain, our method can facilitate and accelerate large-scale studies on normal and altered fetal brain cortical maturation and folding.


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Cerebral Cortex/diagnostic imaging , Fetus/diagnostic imaging , Magnetic Resonance Imaging
17.
Psychopharmacology (Berl) ; 238(3): 677-689, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33241482

ABSTRACT

BACKGROUND: A previously acquired fear response often spreads to perceptually or conceptually close stimuli or contexts. This process, known as fear generalization, facilitates the avoidance of danger, and dysregulation of this process plays an important role in anxiety disorders. Oxytocin (OT) has been shown to modulate fear learning, yet its effects on fear generalization remain unknown. METHODS: We employed a randomized, placebo-controlled, double-blind, between-subject design in which healthy male participants received either intranasal OT or placebo (PLC) following fear acquisition and before fear generalization, with concomitant acquisition of skin conductance responses (SCRs). Twenty-four to 72 h before fear learning and immediately after the fear generalization task, participants additionally completed a discrimination threshold task. RESULTS: Relative to PLC, OT significantly reduced perceived risk and SCRs towards the CS+ and GS1 (the generalization stimulus most similar to the CS+) during fear generalization, whereas the discrimination threshold was not affected. CONCLUSIONS: Together, the results suggest that OT can attenuate fear generalization in the absence of effects on the discrimination threshold. This study provides the first evidence for effects of OT on fear generalization in humans and suggests that OT may have therapeutic potential in anxiety disorders characterized by dysregulated fear generalization.


Subject(s)
Anxiety Disorders/drug therapy , Discrimination, Psychological/drug effects , Fear/drug effects , Generalization, Psychological/drug effects , Oxytocin/pharmacology , Administration, Intranasal , Adult , Conditioning, Classical/drug effects , Double-Blind Method , Facial Recognition/drug effects , Fear/psychology , Female , Humans , Learning/drug effects , Male , Oxytocin/administration & dosage , Young Adult
18.
Med Image Anal ; 65: 101759, 2020 10.
Article in English | MEDLINE | ID: mdl-32623277

ABSTRACT

Supervised training of deep learning models requires large labeled datasets. There is a growing interest in obtaining such datasets for medical image analysis applications. However, the impact of label noise has not received sufficient attention. Recent studies have shown that label noise can significantly impact the performance of deep learning models in many machine learning and computer vision applications. This is especially concerning for medical applications, where datasets are typically small, labeling requires domain expertise and suffers from high inter- and intra-observer variability, and erroneous predictions may influence decisions that directly impact human health. In this paper, we first review the state of the art in handling label noise in deep learning. Then, we review studies that have dealt with label noise in deep learning for medical image analysis. Our review shows that recent progress on handling label noise in deep learning has gone largely unnoticed by the medical image analysis community. To help achieve a better understanding of the extent of the problem and its potential remedies, we conducted experiments with three medical imaging datasets with different types of label noise, in which we investigated several existing strategies and developed new methods to combat the negative effects of label noise. Based on the results of these experiments and our review of the literature, we make recommendations on methods that can be used to alleviate the effects of different types of label noise on deep models trained for medical image analysis. We hope that this article helps medical image analysis researchers and developers choose and devise new techniques that effectively handle label noise in deep learning.
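One simple and widely used family of remedies for noisy labels is modifying the training loss so that a single wrong label cannot pull the model too hard. As an illustrative sketch of one such strategy, label smoothing (a generic technique, not this paper's specific contribution; all numbers are hypothetical):

```python
import numpy as np

def smoothed_cross_entropy(probs, label, num_classes, eps=0.1):
    """Cross-entropy against a label-smoothed target distribution.

    Smoothing spreads eps of the target mass uniformly over all
    classes, limiting how strongly a single (possibly mislabeled)
    example can dominate the gradient.
    """
    target = np.full(num_classes, eps / num_classes)
    target[label] += 1.0 - eps
    return float(-np.sum(target * np.log(probs)))

probs = np.array([0.7, 0.2, 0.1])   # hypothetical model output
loss_smooth = smoothed_cross_entropy(probs, label=0, num_classes=3)
loss_hard = float(-np.log(probs[0]))  # ordinary one-hot cross-entropy
print(loss_smooth, loss_hard)  # smoothing penalizes overconfident fits
```

The smoothed loss never reaches zero even for a perfectly confident prediction, which discourages the model from memorizing individual (and possibly wrong) labels.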


Subject(s)
Deep Learning , Diagnostic Imaging , Humans , Image Processing, Computer-Assisted , Machine Learning , Observer Variation
19.
Comput Methods Programs Biomed ; 194: 105519, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32447146

ABSTRACT

BACKGROUND AND OBJECTIVE: Biometric measurements of the fetal head are important indicators for maternal and fetal health monitoring during pregnancy. 3D ultrasound (US) has unique advantages over 2D scans in covering the whole fetal head and may improve diagnosis. However, automatically segmenting the whole fetal head in US volumes remains an emerging and unsolved problem. The challenges that automated solutions need to tackle include poor image quality, boundary ambiguity, long-span occlusion, and appearance variability across different fetal poses and gestational ages. In this paper, we propose the first fully automated solution for segmenting the whole fetal head in US volumes. METHODS: The segmentation task is first formulated as an end-to-end volumetric mapping under an encoder-decoder deep architecture. We then combine the segmentor with a proposed hybrid attention scheme (HAS) to select discriminative features and suppress non-informative volumetric features in a composite and hierarchical way. With little computational overhead, HAS proves effective in addressing boundary ambiguity and deficiency. To enhance spatial consistency in segmentation, we further organize multiple segmentors in a cascaded fashion, refining the results by revisiting the context in the predictions of predecessors. RESULTS: Validated on a large dataset collected from 100 healthy volunteers, our method presents superior segmentation performance (Dice similarity coefficient (DSC), 96.05%) and remarkable agreement with experts (-1.6 ± 19.5 mL). With another 156 volumes collected from 52 volunteers, we achieved high reproducibility (mean standard deviation 11.524 mL) against scan variations. CONCLUSION: This is the first investigation of whole fetal head segmentation in 3D US. Our method is a promising and feasible solution for assisting volumetric US-based prenatal studies.
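The core idea of attention-based feature selection described above, re-weighting a feature volume by a learned gate so uninformative regions are suppressed, can be sketched in a few lines of numpy. This is a generic spatial-attention illustration, not the paper's actual HAS implementation, and all shapes and weights are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(features, w):
    """Re-weight a (C, D, H, W) feature volume by a per-voxel
    gate in (0, 1), suppressing non-informative regions."""
    # 1x1x1 convolution across channels -> one score per voxel
    scores = np.tensordot(w, features, axes=([0], [0]))  # (D, H, W)
    gate = sigmoid(scores)
    return features * gate[None, ...]  # broadcast gate over channels

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4, 4))  # toy feature volume
w = rng.standard_normal(8)                 # toy 1x1x1 conv weights
gated = spatial_attention(feats, w)
print(gated.shape)  # same shape as the input features
```

Because the gate lies strictly in (0, 1), every feature is attenuated rather than amplified; in a trained network the gate weights are learned so that discriminative voxels keep most of their magnitude.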


Subject(s)
Biometry , Image Processing, Computer-Assisted , Attention , Female , Head/diagnostic imaging , Humans , Pregnancy , Ultrasonography, Prenatal
20.
Comput Methods Programs Biomed ; 189: 105275, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31978805

ABSTRACT

BACKGROUND AND OBJECTIVE: Automatic segmentation of breast lesions from ultrasound images is a crucial module for computer-aided diagnostic systems in clinical practice. Large numbers of breast ultrasound (BUS) images remain unannotated and need to be effectively exploited to improve segmentation quality. To address this, a semi-supervised segmentation network based on generative adversarial networks (GANs) is proposed. METHODS: In this paper, a semi-supervised learning model, denoted BUS-GAN, consisting of a segmentation base network (BUS-S) and an evaluation base network (BUS-E), is proposed. The BUS-S network can densely extract multi-scale features to accommodate the individual variance of breast lesions, thereby enhancing the robustness of segmentation. In addition, the BUS-E network adopts a dual-attentive-fusion block with two independent spatial attention paths over the predicted segmentation map and the corresponding original image to distill geometrical-level and intensity-level information, respectively, so as to enlarge the difference between the lesion region and the background, thus improving the discriminative ability of the BUS-E network. Through adversarial training, the BUS-GAN model can then achieve higher segmentation quality because the BUS-E network guides the BUS-S network to generate more accurate segmentation maps whose distribution is more similar to that of the ground truth. RESULTS: The counterpart semi-supervised segmentation methods and the proposed BUS-GAN model were trained with 2000 in-house images, comprising 100 annotated images and 1900 unannotated images, and tested on two different sites, comprising 800 in-house images and 163 public images. The results validate that the proposed BUS-GAN model achieves higher segmentation accuracy on both the in-house testing dataset and the public dataset than state-of-the-art semi-supervised segmentation methods.
CONCLUSIONS: The developed BUS-GAN model can effectively utilize unannotated breast ultrasound images to improve segmentation quality. In the future, the proposed segmentation method could serve as a module in automatic breast ultrasound diagnosis systems, relieving the burden of the tedious image annotation process and alleviating the subjective influence of physicians' experience in clinical practice. Our code will be made available at https://github.com/fiy2W/BUS-GAN.
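The adversarial objective underlying this kind of setup, an evaluation network trained to distinguish ground-truth masks from predicted ones while the segmentor is trained to fool it, can be written down as two complementary binary cross-entropy losses. This is a generic GAN-style sketch with hypothetical scores, not the released BUS-GAN code:

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy of a scalar score p against label y."""
    p = np.clip(p, 1e-7, 1 - 1e-7)  # guard against log(0)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)))

# Hypothetical outputs of the evaluation network E in [0, 1]:
# how "ground-truth-like" each segmentation map looks.
score_gt = 0.9    # E's score on a real annotated mask
score_pred = 0.3  # E's score on the segmentor's output

# E is trained to tell the two apart ...
loss_E = bce(score_gt, 1.0) + bce(score_pred, 0.0)
# ... while the segmentor S is trained to fool E.
loss_S_adv = bce(score_pred, 1.0)

print(round(loss_E, 3), round(loss_S_adv, 3))  # 0.462 1.204
```

Alternating updates on these two losses is what lets the unannotated images contribute: the segmentor receives a training signal from the evaluator's score even when no ground-truth mask exists for an image.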


Subject(s)
Breast/diagnostic imaging , Breast/physiopathology , Image Processing, Computer-Assisted/methods , Ultrasonography , Female , Humans , Pattern Recognition, Automated