1.
IEEE J Biomed Health Inform ; 28(4): 2235-2246, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38206782

ABSTRACT

The use of multimodal imaging has led to significant improvements in the diagnosis and treatment of many diseases. Similar to clinical practice, some works have demonstrated the benefits of multimodal fusion for automatic segmentation and classification using deep learning-based methods. However, current segmentation methods are limited to fusion of modalities with the same dimensionality (e.g., 3D + 3D, 2D + 2D), which is not always possible, and the fusion strategies implemented by classification methods are incompatible with localization tasks. In this work, we propose a novel deep learning-based framework for the fusion of multimodal data with heterogeneous dimensionality (e.g., 3D + 2D) that is compatible with localization tasks. The proposed framework extracts the features of the different modalities and projects them into the common feature subspace. The projected features are then fused and further processed to obtain the final prediction. The framework was validated on the following tasks: segmentation of geographic atrophy (GA), a late-stage manifestation of age-related macular degeneration, and segmentation of retinal blood vessels (RBV) in multimodal retinal imaging. Our results show that the proposed method outperforms the state-of-the-art monomodal methods on GA and RBV segmentation by up to 3.10% and 4.64% Dice, respectively.
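The Dice overlap used above to report the segmentation gains can be computed directly from two binary masks; a minimal sketch (function name and NumPy usage are illustrative, not code from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # 2*|A∩B| / (|A| + |B|); eps guards against two empty masks
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))
```

A reported improvement of "3.10% Dice" then corresponds to a 0.031 increase in this quantity.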


Subjects
Retina; Tomography, X-Ray Computed; Humans; Tomography, X-Ray Computed/methods; Image Processing, Computer-Assisted/methods
2.
Transl Vis Sci Technol ; 13(6): 7, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38874975

ABSTRACT

Purpose: The subsidence of the outer plexiform layer (OPL) is an important imaging biomarker on optical coherence tomography (OCT) associated with early outer retinal atrophy and a risk factor for progression to geographic atrophy in patients with intermediate age-related macular degeneration (AMD). Deep neural networks (DNNs) for OCT can support automated detection and localization of this biomarker. Methods: The method predicts potential OPL subsidence locations on retinal OCTs. A detection module (DM) infers bounding boxes around subsidences with a likelihood score, and a classification module (CM) assesses subsidence presence at the B-scan level. Overlapping boxes between B-scans are combined and scored by the product of the DM and CM predictions. The volume-wise score is the maximum prediction across all B-scans. One development data set and one independent external data set were used, comprising 140 and 26 patients with AMD, respectively. Results: The system detected more than 85% of OPL subsidences with fewer than one false positive (FP) per scan. The average area under the curve was 0.94 ± 0.03 for volume-level detection. Similar or better performance was achieved on the independent external data set. Conclusions: DNN systems can efficiently perform automated retinal layer subsidence detection in retinal OCT images. In particular, the proposed DNN system detects OPL subsidence with high sensitivity and a very limited number of FP detections. Translational Relevance: DNNs enable objective identification of early signs associated with a high risk of progression to the atrophic late stage of AMD, making them ideally suited for screening and for assessing the efficacy of interventions aiming to slow disease progression.
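The scoring scheme described above (per-candidate score as the product of the detection and classification confidences, volume-level score as the maximum over B-scans) can be sketched as follows; the function names and inputs are illustrative:

```python
def candidate_scores(dm_scores: list, cm_score: float) -> list:
    """Score each merged bounding-box candidate on a B-scan by the product of
    the detection-module likelihood and the classification-module output."""
    return [d * cm_score for d in dm_scores]

def volume_score(per_bscan_scores: list) -> float:
    """Volume-wise score: the maximum prediction across all B-scans."""
    return max(per_bscan_scores) if per_bscan_scores else 0.0
```

The multiplicative combination means a candidate is only ranked highly when both modules agree, which is one simple way to suppress single-module false positives.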


Subjects
Macular Degeneration; Neural Networks, Computer; Tomography, Optical Coherence; Humans; Tomography, Optical Coherence/methods; Aged; Female; Male; Macular Degeneration/diagnostic imaging; Macular Degeneration/diagnosis; Macular Degeneration/pathology; Geographic Atrophy/diagnostic imaging; Geographic Atrophy/diagnosis; Disease Progression; Retina/diagnostic imaging; Retina/pathology; Middle Aged; Aged, 80 and over
3.
IEEE Trans Med Imaging ; 43(1): 542-557, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37713220

ABSTRACT

The early detection of glaucoma is essential in preventing visual impairment. Artificial intelligence (AI) can be used to analyze color fundus photographs (CFPs) in a cost-effective manner, making glaucoma screening more accessible. While AI models for glaucoma screening from CFPs have shown promising results in laboratory settings, their performance decreases significantly in real-world scenarios due to the presence of out-of-distribution and low-quality images. To address this issue, we propose the Artificial Intelligence for Robust Glaucoma Screening (AIROGS) challenge. This challenge includes a large dataset of around 113,000 images from about 60,000 patients and 500 different screening centers, and encourages the development of algorithms that are robust to ungradable and unexpected input data. We evaluated solutions from 14 teams in this paper and found that the best teams performed similarly to a set of 20 expert ophthalmologists and optometrists. The highest-scoring team achieved an area under the receiver operating characteristic curve of 0.99 (95% CI: 0.98-0.99) for detecting ungradable images on-the-fly. Additionally, many of the algorithms showed robust performance when tested on three other publicly available datasets. These results demonstrate the feasibility of robust AI-enabled glaucoma screening.
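The area-under-the-curve figures quoted above can be reproduced from raw scores and labels via the Mann-Whitney (pairwise ranking) formulation of ROC AUC; a self-contained sketch, not code from the challenge:

```python
import numpy as np

def auc_from_scores(scores, labels) -> float:
    """ROC AUC as the fraction of (positive, negative) pairs ranked
    correctly by the score, counting ties as half a win."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos = scores[labels]
    neg = scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return float((wins + 0.5 * ties) / (len(pos) * len(neg)))
```

An AUC of 0.99 for ungradable-image detection thus means that in 99% of (ungradable, gradable) pairs the ungradable image received the higher score.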


Subjects
Artificial Intelligence; Glaucoma; Humans; Glaucoma/diagnostic imaging; Fundus Oculi; Diagnostic Techniques, Ophthalmological; Algorithms
4.
Sci Rep ; 13(1): 16231, 2023 09 27.
Article in English | MEDLINE | ID: mdl-37758754

ABSTRACT

Deep neural networks have been increasingly proposed for automated screening and diagnosis of retinal diseases from optical coherence tomography (OCT), but often provide high-confidence predictions on out-of-distribution (OOD) cases, compromising their clinical usage. With this in mind, we performed an in-depth comparative analysis of state-of-the-art uncertainty estimation methods for OOD detection in retinal OCT imaging. The analysis was performed within the use case of automated screening and staging of age-related macular degeneration (AMD), one of the leading causes of blindness worldwide, where we achieved a macro-average area under the curve (AUC) of 0.981 for AMD classification. We focus on a few-shot Outlier Exposure (OE) method and the detection of near-OOD cases that share pathomorphological characteristics with the inlier AMD classes. Scoring the OOD case based on the cosine distance in the feature space from the penultimate network layer proved to be a robust approach for OOD detection, especially in combination with OE. Using cosine distance and only 8 outliers exposed per class, we were able to improve the near-OOD detection performance of the OE with Reject Bucket method by approximately 10% relative to the same method without OE, reaching an AUC of 0.937. The cosine distance served as a robust metric for OOD detection of both known and unknown classes and should thus be considered as an alternative to the reject bucket class probability in OE approaches, especially in the few-shot scenario. The inclusion of these methodologies did not come at the expense of classification performance, and can substantially improve the reliability and trustworthiness of the resulting deep learning-based diagnostic systems in the context of retinal OCT.
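A cosine-distance OOD score of the kind described, i.e. distance in the penultimate-layer feature space to the nearest known-class representative, might look like the sketch below. Using per-class feature means as the representatives is an assumption made here for illustration; the paper's exact scoring rule may differ.

```python
import numpy as np

def cosine_ood_score(feature: np.ndarray, class_means: list) -> float:
    """OOD score: minimum cosine distance from a sample's penultimate-layer
    feature vector to any known-class mean; higher means more likely OOD."""
    f = feature / (np.linalg.norm(feature) + 1e-12)
    dists = [1.0 - float(f @ (m / (np.linalg.norm(m) + 1e-12)))
             for m in class_means]
    return min(dists)
```

A sample would then be flagged as out-of-distribution when this score exceeds a threshold calibrated on inlier validation data.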


Subjects
Deep Learning; Macular Degeneration; Humans; Tomography, Optical Coherence; Reproducibility of Results; Area Under Curve; Behavior Therapy; Macular Degeneration/diagnostic imaging
5.
IEEE J Biomed Health Inform ; 27(1): 41-52, 2023 01.
Article in English | MEDLINE | ID: mdl-36306300

ABSTRACT

Bruch's membrane (BM) segmentation on optical coherence tomography (OCT) is a pivotal step for the diagnosis and follow-up of age-related macular degeneration (AMD), one of the leading causes of blindness in the developed world. Automated BM segmentation methods exist, but they usually do not account for the anatomical coherence of the results, nor do they provide feedback on the confidence of the prediction. These factors limit the applicability of such systems in real-world scenarios. With this in mind, we propose an end-to-end deep learning method for automated BM segmentation in AMD patients. An Attention U-Net is trained to output a probability density function of the BM position, while taking into account the natural curvature of the surface. Besides the surface position, the method also estimates an A-scan-wise uncertainty measure of the segmentation output. Subsequently, the A-scans with high uncertainty are interpolated using thin plate splines (TPS). We tested our method with ablation studies on an internal dataset of 138 patients covering all three AMD stages, and achieved a mean absolute localization error of 4.10 µm. In addition, the proposed segmentation method was compared against state-of-the-art methods and showed superior performance on an external publicly available dataset from a different patient cohort and OCT device, demonstrating strong generalization ability.
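The post-processing step described above, discarding high-uncertainty A-scans and re-estimating the surface from confident neighbours, can be sketched as follows. Here `np.interp` stands in for the thin-plate-spline fit used in the paper, so this is a simplified 1-D illustration, not the published method:

```python
import numpy as np

def refine_surface(positions: np.ndarray, uncertainty: np.ndarray,
                   threshold: float) -> np.ndarray:
    """Replace A-scan surface positions whose uncertainty exceeds the
    threshold by interpolating between the remaining confident A-scans."""
    x = np.arange(len(positions))
    confident = uncertainty <= threshold
    return np.interp(x, x[confident], positions[confident])
```

The key design point survives the simplification: the network's own uncertainty estimate decides which predictions are trusted and which are reconstructed from their neighbours.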


Subjects
Bruch Membrane; Macular Degeneration; Humans; Tomography, Optical Coherence/methods; Uncertainty; Retina
6.
Sci Rep ; 12(1): 6596, 2022 04 21.
Article in English | MEDLINE | ID: mdl-35449199

ABSTRACT

The coronavirus disease 2019 (COVID-19) pandemic has impacted healthcare systems across the world. Chest radiography (CXR) can be used as a complementary method for diagnosing and following COVID-19 patients. However, the experience level and workload of technicians and radiologists may affect the decision process. Recent studies suggest that deep learning can be used to assess CXRs, providing an important second opinion for radiologists and technicians in the decision process, and super-human performance in the detection of COVID-19 has been reported in multiple studies. In this study, the clinical applicability of deep learning systems for COVID-19 screening was assessed by testing their performance on the detection of COVID-19. Specifically, four datasets were used: (1) a collection of multiple public datasets (284,793 CXRs); (2) the BIMCV dataset (16,631 CXRs); (3) COVIDGR (852 CXRs); and (4) a private dataset (6,361 CXRs). All datasets were collected retrospectively and consist of only frontal CXR views. A ResNet-18 was trained on each of the datasets for the detection of COVID-19. It is shown that a high dataset bias was present, leading to high performance in intradataset train-test scenarios (area under the curve > 0.98). Significantly lower performances were obtained in interdataset train-test scenarios, however (area under the curve 0.55-0.84 on the collection of public datasets). A subset of the data was then assessed by radiologists for comparison to the automatic systems. Fine-tuning with radiologist annotations significantly increased performance across datasets (area under the curve 0.61-0.88) and improved the attention on clinical findings in positive COVID-19 CXRs. Nevertheless, tests on CXRs from different hospital services indicate that the screening performance of CXR and automatic systems is limited (area under the curve < 0.6 on emergency service CXRs). However, COVID-19 manifestations can be accurately detected when present, motivating the use of these tools for evaluating disease progression in mild to severe COVID-19 patients.


Subjects
COVID-19; Deep Learning; COVID-19/diagnostic imaging; Humans; Radiography; Radiography, Thoracic/methods; Retrospective Studies
7.
J Imaging ; 8(8)2022 Jul 31.
Article in English | MEDLINE | ID: mdl-36005456

ABSTRACT

Breast cancer is the most common malignancy in women worldwide, and is responsible for more than half a million deaths each year. The appropriate therapy depends on the evaluation of the expression of various biomarkers, such as the human epidermal growth factor receptor 2 (HER2) transmembrane protein, through specialized techniques such as immunohistochemistry or in situ hybridization. In this work, we present the HER2 on hematoxylin and eosin (HEROHE) challenge, a parallel event of the 16th European Congress on Digital Pathology, which aimed to predict the HER2 status in breast cancer based only on hematoxylin-eosin-stained tissue samples, thus avoiding specialized techniques. The challenge was based on a large annotated dataset of 509 whole-slide images, specifically collected for the challenge. Models for predicting HER2 status were presented by 21 teams worldwide. The best-performing models are presented in detail, covering their network architectures and key parameters, and the approaches, core methodologies, and software choices are compared and contrasted. Different evaluation metrics are discussed, as well as the performance of the presented models for each of these metrics. Potential differences in ranking that would result from different choices of evaluation metrics highlight the need for careful consideration at the time of their selection, as the results show that some metrics may misrepresent the true potential of a model to solve the problem for which it was developed. The HEROHE dataset remains publicly available to promote advances in the field of computational pathology.

8.
Am J Clin Pathol ; 155(4): 527-536, 2021 03 15.
Article in English | MEDLINE | ID: mdl-33118594

ABSTRACT

OBJECTIVES: This study evaluated the usefulness of artificial intelligence (AI) algorithms as tools in improving the accuracy of histologic classification of breast tissue. METHODS: Overall, 100 microscopic photographs (test A) and 152 regions of interest in whole-slide images (test B) of breast tissue were classified into 4 classes: normal, benign, carcinoma in situ (CIS), and invasive carcinoma. The accuracy of 4 pathologists and 3 pathology residents were evaluated without and with the assistance of algorithms. RESULTS: In test A, algorithm A had accuracy of 0.87, with the lowest accuracy in the benign class (0.72). The observers had average accuracy of 0.80, and most clinically relevant discordances occurred in distinguishing benign from CIS (7.1% of classifications). With the assistance of algorithm A, the observers significantly increased their average accuracy to 0.88. In test B, algorithm B had accuracy of 0.49, with the lowest accuracy in the CIS class (0.06). The observers had average accuracy of 0.86, and most clinically relevant discordances occurred in distinguishing benign from CIS (6.3% of classifications). With the assistance of algorithm B, the observers maintained their average accuracy. CONCLUSIONS: AI tools can increase the classification accuracy of pathologists in the setting of breast lesions.


Subjects
Artificial Intelligence; Breast Neoplasms/classification; Breast Neoplasms/pathology; Diagnosis, Computer-Assisted/methods; Female; Humans; Image Interpretation, Computer-Assisted/methods
9.
Med Image Anal ; 70: 102027, 2021 05.
Article in English | MEDLINE | ID: mdl-33740739

ABSTRACT

Lung cancer is the deadliest type of cancer worldwide and late detection is the major factor in the low survival rate of patients. Low-dose computed tomography has been suggested as a potential screening tool, but manual screening is costly and time-consuming. This has fuelled the development of automatic methods for the detection, segmentation and characterisation of pulmonary nodules. In spite of promising results, the application of automatic methods to clinical routine is not straightforward, and only a limited number of studies have addressed the problem in a holistic way. With the goal of advancing the state of the art, the Lung Nodule Database (LNDb) Challenge on automatic lung cancer patient management was organized. The LNDb Challenge addressed lung nodule detection, segmentation and characterization, as well as prediction of patient follow-up according to the 2017 Fleischner Society pulmonary nodule guidelines. 294 CT scans were thus collected retrospectively at the Centro Hospitalar e Universitário de São João in Porto, Portugal, and each CT was annotated by at least one radiologist. Annotations comprised nodule centroids, segmentations and subjective characterization. 58 CTs and the corresponding annotations were withheld as a separate test set. A total of 947 users registered for the challenge and 11 successful submissions for at least one of the sub-challenges were received. For patient follow-up prediction, a maximum quadratic weighted Cohen's kappa of 0.580 was obtained. In terms of nodule detection, a sensitivity below 0.4 (and 0.7) at 1 false positive per scan was obtained for nodules identified by at least one (and two) radiologist(s). For nodule segmentation, a maximum Jaccard score of 0.567 was obtained, surpassing the interobserver variability. In terms of nodule texture characterization, a maximum quadratic weighted Cohen's kappa of 0.733 was obtained, with part-solid nodules being particularly challenging to classify correctly. Detailed analysis of the proposed methods and the differences in performance allows the identification of the major remaining challenges and future directions: data collection, augmentation/generation and evaluation of under-represented classes, the incorporation of scan-level information for better decision-making, and the development of tools and challenges with clinically oriented goals. The LNDb Challenge and associated data remain publicly available so that future methods can be tested and benchmarked, promoting the development of new algorithms in lung cancer medical image analysis and patient follow-up recommendation.
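The quadratic weighted Cohen's kappa used for the follow-up and texture sub-challenges measures chance-corrected agreement for ordinal labels, penalizing disagreements by the square of their distance. A standard formulation (not challenge code):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_classes: int) -> float:
    """Quadratic weighted Cohen's kappa for ordinal labels in [0, n_classes)."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    observed = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        observed[i, j] += 1
    # expected confusion matrix under independent marginals
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0)) / len(a)
    # quadratic disagreement weights, zero on the diagonal
    idx = np.arange(n_classes)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

A value of 1 means perfect agreement, 0 means chance-level agreement, and negative values mean systematic disagreement, which puts the reported 0.580 and 0.733 in context.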


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Algorithms; Databases, Factual; Humans; Lung Neoplasms/diagnostic imaging; Retrospective Studies; Tomography, X-Ray Computed
10.
Med Image Anal ; 63: 101715, 2020 07.
Article in English | MEDLINE | ID: mdl-32434128

ABSTRACT

Diabetic retinopathy (DR) grading is crucial in determining the adequate treatment and follow-up of patients, but the screening process can be tiresome and prone to errors. Deep learning approaches have shown promising performance as computer-aided diagnosis (CAD) systems, but their black-box behaviour hinders clinical application. We propose DR|GRADUATE, a novel deep learning-based DR grading CAD system that supports its decision by providing a medically interpretable explanation and an estimation of how uncertain that prediction is, allowing the ophthalmologist to measure how much that decision should be trusted. We designed DR|GRADUATE taking into account the ordinal nature of the DR grading problem. A novel Gaussian-sampling approach built upon a Multiple Instance Learning framework allows DR|GRADUATE to infer an image grade associated with an explanation map and a prediction uncertainty while being trained only with image-wise labels. DR|GRADUATE was trained on the Kaggle DR detection training set and evaluated across multiple datasets. In DR grading, a quadratic-weighted Cohen's kappa (κ) between 0.71 and 0.84 was achieved on five different datasets. We show that high κ values occur for images with low prediction uncertainty, thus indicating that this uncertainty is a valid measure of the predictions' quality. Further, bad-quality images are generally associated with higher uncertainties, showing that images not suitable for diagnosis indeed lead to less trustworthy predictions. Additionally, tests on unfamiliar medical image data types suggest that DR|GRADUATE allows outlier detection. The attention maps generally highlight regions of interest for diagnosis. These results show the great potential of DR|GRADUATE as a second-opinion system in DR severity grading.


Subjects
Deep Learning; Diabetes Mellitus; Diabetic Retinopathy; Diabetic Retinopathy/diagnostic imaging; Diagnosis, Computer-Assisted; Fundus Oculi; Humans; Uncertainty
11.
IEEE J Biomed Health Inform ; 24(10): 2894-2901, 2020 10.
Article in English | MEDLINE | ID: mdl-32092022

ABSTRACT

Early diagnosis of lung cancer via computed tomography can significantly reduce the morbidity and mortality rates associated with the pathology. However, searching for lung nodules is a highly complex task, which affects the success of screening programs. Whilst computer-aided detection systems can be used as second observers, they may bias radiologists and introduce significant time overheads. With this in mind, this study assesses the potential of using gaze information for integrating automatic detection systems into clinical practice. For that purpose, 4 radiologists were asked to annotate 20 scans from a public dataset while being monitored by an eye-tracker device, and an automatic lung nodule detection system was developed. Our results show that radiologists follow a similar search routine and tend to have lower fixation periods in regions where detection errors occur. The overall detection sensitivity of the specialists was 0.67±0.07, whereas the system achieved 0.69. Combining the annotations of one radiologist with the automatic system significantly improves the detection performance, reaching levels similar to those of two annotators. Filtering automatic detection candidates only for low-fixation regions still significantly improves the detection sensitivity without increasing the number of false positives.
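The gaze-filtered combination described above, keeping all radiologist findings and adding automatic candidates only in regions the reader barely looked at, might be sketched as below. The region keys, fixation threshold, and data shapes are all illustrative assumptions, not structures from the paper:

```python
def merge_findings(radiologist_boxes: list, cad_candidates: list,
                   fixation_seconds: dict, min_fixation: float = 1.0) -> list:
    """Combine a radiologist's findings with CAD candidates, keeping a CAD
    candidate only when its region received little visual attention.
    cad_candidates: list of (region_key, detection) pairs."""
    merged = list(radiologist_boxes)
    for region, detection in cad_candidates:
        if fixation_seconds.get(region, 0.0) < min_fixation:
            merged.append(detection)
    return merged
```

The rationale is that well-fixated regions were already searched by the reader, so CAD suggestions there mostly add false positives, while briefly fixated regions are where reader misses concentrate.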


Subjects
Deep Learning; Eye-Tracking Technology; Lung Neoplasms/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Radiologists; Fixation, Ocular/physiology; Humans; Tomography, X-Ray Computed/methods
12.
Sci Rep ; 9(1): 11591, 2019 08 12.
Article in English | MEDLINE | ID: mdl-31406194

ABSTRACT

We propose iW-Net, a deep learning model that allows for both automatic and interactive segmentation of lung nodules in computed tomography images. iW-Net is composed of two blocks: the first provides an automatic segmentation, and the second corrects it by analyzing two points placed by the user on the nodule's boundary. For this purpose, a physics-inspired weight map that takes the user input into account is proposed, which is used both as a feature map and in the system's loss function. Our approach is extensively evaluated on the public LIDC-IDRI dataset, where we achieve a state-of-the-art performance of 0.55 intersection over union vs. the 0.59 inter-observer agreement. We also show that iW-Net allows correcting the segmentation of small nodules, which is essential for proper patient referral decisions, and improves the segmentation of challenging non-solid nodules; it may thus be an important tool for increasing the early diagnosis of lung cancer.
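Intersection over union (the Jaccard index), the overlap metric quoted above against the 0.59 inter-observer agreement, can be computed from two binary masks; a minimal sketch with illustrative naming:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union (Jaccard index) of two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, target).sum() / union)
```

Note that IoU is stricter than Dice for the same pair of masks, which is worth keeping in mind when comparing the two metrics across the entries in this list.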


Subjects
Automation; Lung Diseases/diagnostic imaging; Algorithms; Early Detection of Cancer; Humans; Lung Diseases/pathology; Neural Networks, Computer
13.
Med Image Anal ; 56: 122-139, 2019 08.
Article in English | MEDLINE | ID: mdl-31226662

ABSTRACT

Breast cancer is the most common invasive cancer in women, affecting more than 10% of women worldwide. Microscopic analysis of a biopsy remains one of the most important methods to diagnose the type of breast cancer. This requires specialized analysis by pathologists, in a task that (i) is highly time-consuming and costly and (ii) often leads to non-consensual results. The relevance and potential of automatic classification algorithms using hematoxylin-eosin stained histopathological images has already been demonstrated, but the reported results are still sub-optimal for clinical use. With the goal of advancing the state of the art in automatic classification, the Grand Challenge on BreAst Cancer Histology images (BACH) was organized in conjunction with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018). BACH aimed at the classification and localization of clinically relevant histopathological classes in microscopy and whole-slide images from a large annotated dataset, specifically compiled and made publicly available for the challenge. Following a positive response from the scientific community, a total of 64 submissions, out of 677 registrations, effectively entered the competition. The submitted algorithms improved the state of the art in automatic classification of breast cancer with microscopy images to an accuracy of 87%. Convolutional neural networks were the most successful methodology in the BACH challenge. Detailed analysis of the collective results allowed the identification of remaining challenges in the field and recommendations for future developments. The BACH dataset remains publicly available to promote further improvements in the field of automatic classification in digital pathology.


Subjects
Breast Neoplasms/pathology; Neural Networks, Computer; Pattern Recognition, Automated; Algorithms; Female; Humans; Microscopy; Staining and Labeling
14.
Med Image Anal ; 52: 24-41, 2019 02.
Article in English | MEDLINE | ID: mdl-30468970

ABSTRACT

Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect that they will guide the design of efficient surgery monitoring tools in the near future.


Subjects
Cataract Extraction/instrumentation; Deep Learning; Surgical Instruments; Algorithms; Humans; Video Recording
15.
PLoS One ; 12(6): e0177544, 2017.
Article in English | MEDLINE | ID: mdl-28570557

ABSTRACT

Breast cancer is one of the main causes of cancer death worldwide. The diagnosis of biopsy tissue with hematoxylin and eosin stained images is non-trivial, and specialists often disagree on the final diagnosis. Computer-aided diagnosis systems contribute to reducing the cost and increasing the efficiency of this process. Conventional classification approaches rely on feature extraction methods designed for a specific problem based on field knowledge. To overcome the many difficulties of feature-based approaches, deep learning methods are becoming important alternatives. A method for the classification of hematoxylin and eosin stained breast biopsy images using Convolutional Neural Networks (CNNs) is proposed. Images are classified into four classes (normal tissue, benign lesion, in situ carcinoma and invasive carcinoma) and into two classes (carcinoma and non-carcinoma). The architecture of the network is designed to retrieve information at different scales, including both nuclei and overall tissue organization. This design allows the extension of the proposed system to whole-slide histology images. The features extracted by the CNN are also used for training a Support Vector Machine classifier. Accuracies of 77.8% for the four-class problem and 83.3% for carcinoma/non-carcinoma classification are achieved. The sensitivity of our method for cancer cases is 95.6%.


Subjects
Breast Neoplasms/pathology; Neural Networks, Computer; Breast Neoplasms/classification; Female; Humans; Image Processing, Computer-Assisted; Support Vector Machine
16.
J R Soc Interface ; 13(124)2016 11.
Article in English | MEDLINE | ID: mdl-28334696

ABSTRACT

Angiogenesis, the formation of blood vessels from pre-existing ones, is a key event in pathology, including cancer progression, but also in homeostasis and regeneration. As the phenotype of endothelial cells (ECs) is continuously regulated by local biomechanical forces, studying endothelial behaviour in altered gravity might contribute to new insights towards angiogenesis modulation. This study aimed at characterizing EC behaviour after hypergravity exposure (more than 1g), with special focus on cytoskeleton architecture and capillary-like structure formation. Herein, human umbilical vein ECs (HUVECs) were cultured under two-dimensional and three-dimensional conditions at 3g and 10g for 4 and 16 h inside the large diameter centrifuge at the European Space Research and Technology Centre (ESTEC) of the European Space Agency. Although no significant tendency regarding cytoskeleton organization was observed for cells exposed to high g's, a slight loss of the perinuclear localization of ß-tubulin was observed for cells exposed to 3g with less pronounced peripheral bodies of actin when compared with 1g control cells. Additionally, hypergravity exposure decreased the assembly of HUVECs into capillary-like structures, with a 10g level significantly reducing their organization capacity. In conclusion, short-term hypergravity seems to affect EC phenotype and their angiogenic potential in a time and g-level-dependent manner.


Subjects
Human Umbilical Vein Endothelial Cells/metabolism; Hypergravity; Neovascularization, Physiologic; Actins/metabolism; Humans; Tubulin/metabolism