2.
Article in English | MEDLINE | ID: mdl-38306259

ABSTRACT

We present a method for estimating dense continuous-time optical flow from event data. Traditional dense optical flow methods compute the pixel displacement between two images. Due to missing information, these approaches cannot recover the pixel trajectories in the blind time between two images. In this work, we show that it is possible to compute per-pixel, continuous-time optical flow using events from an event camera. Events provide temporally fine-grained information about movement in pixel space due to their asynchronous nature and microsecond response time. We leverage these benefits to predict pixel trajectories densely in continuous time via parameterized Bézier curves. To achieve this, we build a neural network with strong inductive biases for this task: First, we build multiple sequential correlation volumes in time using event data. Second, we use Bézier curves to index these correlation volumes at multiple timestamps along the trajectory. Third, we use the retrieved correlation to update the Bézier curve representations iteratively. Our method can optionally include image pairs to boost performance further. To the best of our knowledge, our model is the first method that can regress dense pixel trajectories from event data. To train and evaluate our model, we introduce a synthetic dataset (MultiFlow) that features moving objects and ground truth trajectories for every pixel. Our quantitative experiments not only suggest that our method successfully predicts pixel trajectories in continuous time but also that it is competitive in the traditional two-view pixel displacement metric on MultiFlow and DSEC-Flow. Open source code and datasets are released to the public.
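As a hedged illustration of the two core operations described above — evaluating a Bézier trajectory at arbitrary timestamps and using the result to index a correlation volume — the following PyTorch sketch is our reconstruction under stated assumptions (tensor shapes, a single-point bilinear lookup), not the authors' released code.

```python
import torch
import torch.nn.functional as F

def bezier_point(ctrl, t):
    """Evaluate Bezier curves with control points ctrl at times t in [0, 1].

    ctrl: (B, N+1, 2) control points per pixel (degree-N curve).
    t:    (T,) query timestamps.
    Returns (B, T, 2) trajectory points via De Casteljau's algorithm.
    """
    pts = ctrl.unsqueeze(1).expand(-1, t.shape[0], -1, -1)  # (B, T, N+1, 2)
    t = t.view(1, -1, 1, 1)
    while pts.shape[2] > 1:
        pts = (1 - t) * pts[:, :, :-1] + t * pts[:, :, 1:]  # one interpolation level
    return pts.squeeze(2)

def lookup_correlation(corr, displacement):
    """Bilinearly sample one correlation slice at a predicted displacement.

    corr: (B, H, W) correlation volume slice for one timestamp.
    displacement: (B, 2) predicted (x, y) offset, normalized to [-1, 1] here.
    """
    grid = displacement.view(-1, 1, 1, 2)
    return F.grid_sample(corr.unsqueeze(1), grid, align_corners=True)  # (B, 1, 1, 1)
```

In the method itself, the retrieved correlation drives iterative updates of the Bézier control points; this sketch covers only the evaluation and lookup steps.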

3.
Sci Robot ; 8(82): eadg1462, 2023 Sep 27.
Article in English | MEDLINE | ID: mdl-37703383

ABSTRACT

A central question in robotics is how to design a control system for an agile mobile robot. This paper studies this question systematically, focusing on a challenging setting: autonomous drone racing. We show that a neural network controller trained with reinforcement learning (RL) outperformed optimal control (OC) methods in this setting. We then investigated which fundamental factors have contributed to the success of RL or have limited OC. Our study indicates that the fundamental advantage of RL over OC is not that it optimizes its objective better but that it optimizes a better objective. OC decomposes the problem into planning and control with an explicit intermediate representation, such as a trajectory, that serves as an interface. This decomposition limits the range of behaviors that can be expressed by the controller, leading to inferior control performance when facing unmodeled effects. In contrast, RL can directly optimize a task-level objective and can leverage domain randomization to cope with model uncertainty, allowing the discovery of more robust control responses. Our findings allowed us to push an agile drone to its maximum performance, achieving a peak acceleration greater than 12 times the gravitational acceleration and a peak velocity of 108 kilometers per hour. Our policy achieved superhuman control within minutes of training on a standard workstation. This work presents a milestone in agile robotics and sheds light on the role of RL and OC in robot control.
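The "better objective" point lends itself to a small worked example. The sketch below is our illustration, not the paper's reward function — the names and weights are assumptions. It scores raw progress toward the next gate, the kind of task-level objective RL can optimize directly, whereas OC would typically track a precomputed reference trajectory instead.

```python
import numpy as np

def race_reward(pos, prev_pos, gates, gate_idx, crashed,
                w_progress=1.0, w_crash=10.0):
    """Task-level reward for drone racing: progress toward the next gate.

    pos, prev_pos: (3,) current and previous drone positions.
    gates: (G, 3) gate center positions; gate_idx: index of the next gate.
    A trajectory-tracking objective would instead penalize deviation from a
    precomputed reference, constraining the achievable behaviors.
    """
    target = gates[gate_idx]
    progress = np.linalg.norm(prev_pos - target) - np.linalg.norm(pos - target)
    return w_progress * progress - (w_crash if crashed else 0.0)
```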

4.
Nature ; 620(7976): 982-987, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37648758

ABSTRACT

First-person view (FPV) drone racing is a televised sport in which professional competitors pilot high-speed aircraft through a 3D circuit. Each pilot sees the environment from the perspective of their drone by means of video streamed from an onboard camera. Reaching the level of professional pilots with an autonomous drone is challenging because the robot needs to fly at its physical limits while estimating its speed and location in the circuit exclusively from onboard sensors [1]. Here we introduce Swift, an autonomous system that can race physical vehicles at the level of the human world champions. The system combines deep reinforcement learning (RL) in simulation with data collected in the physical world. Swift competed against three human champions, including the world champions of two international leagues, in real-world head-to-head races. Swift won several races against each of the human champions and demonstrated the fastest recorded race time. This work represents a milestone for mobile robotics and machine intelligence [2], which may inspire the deployment of hybrid learning-based solutions in other physical systems.

5.
Sci Rep ; 13(1): 9727, 2023 06 15.
Article in English | MEDLINE | ID: mdl-37322248

ABSTRACT

Does gravity affect decision-making? This question comes into sharp focus as plans for interplanetary human space missions solidify. In the framework of Bayesian brain theories, gravity encapsulates a strong prior, anchoring agents to a reference frame via the vestibular system, informing their decisions and possibly their integration of uncertainty. What happens when such a strong prior is altered? We address this question using a self-motion estimation task in a space analog environment under conditions of altered gravity. Two participants were cast as remote drone operators orbiting Mars in a virtual reality environment on board a parabolic flight, where both hyper- and microgravity conditions were induced. From a first-person perspective, participants viewed a drone exiting a cave and had to first predict a collision and then provide a confidence estimate of their response. We evoked uncertainty in the task by manipulating the motion's trajectory angle. Post-decision subjective confidence reports were negatively predicted by stimulus uncertainty, as expected. Uncertainty alone did not impact overt behavioral responses (performance, choice) differentially across gravity conditions. However, microgravity predicted higher subjective confidence, especially in interaction with stimulus uncertainty. These results suggest that variables relating to uncertainty affect decision-making distinctly in microgravity, highlighting the possible need for automated compensatory mechanisms when considering human factors in space research.
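A sketch of the kind of analysis the last findings imply — confidence modeled against the uncertainty-by-gravity interaction. The variable names, toy data, and choice of an ordinary least squares model are our assumptions for illustration, not the paper's exact analysis:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: one row per response.
df = pd.DataFrame({
    "confidence":  [4, 5, 3, 6, 5, 6, 2, 4],                   # post-decision rating
    "uncertainty": [0.1, 0.3, 0.8, 0.2, 0.5, 0.4, 0.9, 0.6],   # stimulus uncertainty
    "gravity":     ["1g", "0g", "1g", "0g", "1.8g", "0g", "1.8g", "1g"],
})

# Confidence regressed on the uncertainty x gravity-condition interaction,
# with normal gravity (1g) as the reference level.
model = smf.ols("confidence ~ uncertainty * C(gravity, Treatment('1g'))",
                data=df).fit()
print(model.summary())
```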


Subjects
Altered Gravity, Space Flight, Weightlessness, Humans, Bayes Theorem, Uncertainty, Brain
6.
PLoS One ; 18(6): e0287611, 2023.
Article in English | MEDLINE | ID: mdl-37390072

ABSTRACT

Double-blind peer review is considered a pillar of academic research because it is perceived to ensure a fair, unbiased, and fact-centered scientific discussion. Yet, experienced researchers can often correctly guess from which research group an anonymous submission originates, biasing the peer-review process. In this work, we present a transformer-based neural network architecture that only uses the text content and the author names in the bibliography to attribute an anonymous manuscript to an author. To train and evaluate our method, we created the largest authorship-identification dataset to date. It leverages all research papers publicly available on arXiv, amounting to over 2 million manuscripts. In arXiv subsets with up to 2,000 different authors, our method achieves an unprecedented authorship attribution accuracy, where up to 73% of papers are attributed correctly. We present a scaling analysis to highlight the applicability of the proposed method to even larger datasets when sufficient compute capabilities are more widely available to the academic community. Furthermore, we analyze the attribution accuracy in settings where the goal is to identify all authors of an anonymous manuscript. Thanks to our method, we not only predict the author of an anonymous work but also provide empirical evidence of the key aspects that make a paper attributable. We have open-sourced the necessary tools to reproduce our experiments.
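A minimal sketch of the attribution setup described above — classifying a manuscript's author from its text plus the author names in its bibliography. This is our illustration using a generic transformer classifier; the model choice, the `[BIB]` separator, and the preprocessing are assumptions, not the paper's architecture:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_AUTHORS = 2000  # size of the candidate-author pool, as in the arXiv subsets

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=NUM_AUTHORS)

def attribute(body_text, bib_author_names):
    # Pair the manuscript text with the names cited in its bibliography.
    text = body_text + " [BIB] " + "; ".join(bib_author_names)
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.argmax(dim=-1).item()  # index of the predicted author
```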


Subjects
Authorship, Deep Learning, Double-Blind Method, Electric Power Supplies, Neural Networks (Computer)
7.
Sci Robot ; 7(67): eabl6259, 2022 06 22.
Article in English | MEDLINE | ID: mdl-35731886

ABSTRACT

Autonomous, agile quadrotor flight raises fundamental challenges for robotics research in terms of perception, planning, learning, and control. A versatile and standardized platform is needed to accelerate research and let practitioners focus on the core problems. To this end, we present Agilicious, a codesigned hardware and software framework tailored to autonomous, agile quadrotor flight. It is completely open source and open hardware and supports both model-based and neural network-based controllers. It also provides high thrust-to-weight and torque-to-inertia ratios for agility, onboard vision sensors, graphics processing unit (GPU)-accelerated compute hardware for real-time perception and neural network inference, a real-time flight controller, and a versatile software stack. In contrast to existing frameworks, Agilicious offers a unique combination of a flexible software stack and high-performance hardware. We compare Agilicious with prior works and demonstrate it on different agile tasks, using both model-based and neural network-based controllers. Our demonstrators include trajectory tracking at up to 5g and 70 kilometers per hour in a motion capture system, and vision-based acrobatic flight and obstacle avoidance in both structured and unstructured environments using solely onboard perception. Last, we demonstrate its use for hardware-in-the-loop simulation in virtual reality environments. Because of its versatility, we believe that Agilicious supports the next generation of scientific and industrial quadrotor research.
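One design choice highlighted above is that the stack accepts both model-based and learned controllers. A sketch of what such a common interface can look like — our illustration, not the Agilicious API; the class names, gains, and state layout are assumptions:

```python
from abc import ABC, abstractmethod
import numpy as np

class Controller(ABC):
    """Common interface so model-based and learned controllers are interchangeable."""
    @abstractmethod
    def compute_command(self, state: np.ndarray, reference: np.ndarray) -> np.ndarray:
        """Map (state, reference) to collective thrust and body rates, shape (4,)."""

class PDController(Controller):
    """Toy model-based controller: PD on position error (state = [pos, vel, ...])."""
    def __init__(self, kp=4.0, kd=2.0):
        self.kp, self.kd = kp, kd
    def compute_command(self, state, reference):
        err, derr = reference[:3] - state[:3], -state[3:6]
        thrust = 9.81 + self.kp * err[2] + self.kd * derr[2]
        return np.array([thrust, *(0.1 * self.kp * err[:2]), 0.0])

class LearnedController(Controller):
    """Wraps any trained policy exposing the same command interface."""
    def __init__(self, policy):
        self.policy = policy  # e.g., a trained neural network
    def compute_command(self, state, reference):
        return self.policy(np.concatenate([state, reference]))
```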


Subjects
Robotics, Computer Simulation, Neural Networks (Computer), Software, Ocular Vision
8.
PLoS One ; 17(3): e0264471, 2022.
Article in English | MEDLINE | ID: mdl-35231038

ABSTRACT

Humans race drones faster than neural networks trained for end-to-end autonomous flight. This may be related to the ability of human pilots to select task-relevant visual information effectively. This work investigates whether neural networks capable of imitating human eye gaze behavior and attention can improve neural networks' performance for the challenging task of vision-based autonomous drone racing. We hypothesize that gaze-based attention prediction can be an efficient mechanism for visual information selection and decision making in a simulator-based drone racing task. We test this hypothesis using eye gaze and flight trajectory data from 18 human drone pilots to train a visual attention prediction model. We then use this visual attention prediction model to train an end-to-end controller for vision-based autonomous drone racing using imitation learning. We compare the drone racing performance of the attention-prediction controller to controllers using raw image inputs and image-based abstractions (i.e., feature tracks). Comparing success rates for completing a challenging race track by autonomous flight, our results show that the attention-prediction-based controller (88% success rate) outperforms the RGB-image (61% success rate) and feature-track (55% success rate) controller baselines. Furthermore, visual attention-prediction and feature-track based models showed better generalization performance than image-based models when evaluated on hold-out reference trajectories. Our results demonstrate that human visual attention prediction improves the performance of autonomous vision-based drone racing agents and provides an essential step towards vision-based, fast, and agile autonomous flight that can eventually reach and even exceed human performance.
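One way to realize the attention-conditioned controller described above is to fuse the predicted gaze-attention map with the camera image before the control network. The sketch below is our illustration, not the paper's architecture; the channel stacking and layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class AttentionConditionedPolicy(nn.Module):
    """Fuse a predicted gaze-attention map (1 channel) with the RGB image
    (3 channels), then regress a control command with a small conv encoder."""
    def __init__(self, n_commands=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_commands),
        )

    def forward(self, image, attention_map):
        x = torch.cat([image, attention_map], dim=1)  # (B, 4, H, W)
        return self.encoder(x)
```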


Subjects
Neural Networks (Computer), Unmanned Aerial Devices, Ocular Fixation, Humans, Ocular Vision
9.
Auton Robots ; 46(1): 307-320, 2022.
Article in English | MEDLINE | ID: mdl-35221535

ABSTRACT

This paper presents a novel system for autonomous, vision-based drone racing combining learned data abstraction, nonlinear filtering, and time-optimal trajectory planning. The system has successfully been deployed at the first autonomous drone racing world championship: the 2019 AlphaPilot Challenge. Contrary to traditional drone racing systems, which only detect the next gate, our approach makes use of any visible gate and takes advantage of multiple, simultaneous gate detections to compensate for drift in the state estimate and build a global map of the gates. The global map and drift-compensated state estimate allow the drone to navigate through the race course even when the gates are not immediately visible and further enable planning a near time-optimal path through the race course in real time based on approximate drone dynamics. The proposed system has been demonstrated to successfully guide the drone through tight race courses reaching speeds of up to 8 m/s and ranked second at the 2019 AlphaPilot Challenge.
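A minimal sketch of the drift-compensation idea — our simplification for illustration; the deployed system fuses gate detections in a nonlinear filter over the full state rather than averaging a translation-only correction:

```python
import numpy as np

def drift_correction(detected_gates, mapped_gates, estimate):
    """Correct a drifted position estimate using gate detections.

    detected_gates: (N, 3) gate positions observed in the drift-affected frame.
    mapped_gates:   (N, 3) corresponding positions in the global gate map.
    estimate:       (3,) current position estimate.
    Averaging the detection-to-map residuals gives a translation-only drift
    term applied to the estimate.
    """
    drift = np.mean(mapped_gates - detected_gates, axis=0)
    return estimate + drift
```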

10.
IEEE Trans Pattern Anal Mach Intell ; 44(1): 154-180, 2022 01.
Article in English | MEDLINE | ID: mdl-32750812

ABSTRACT

Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of µs), very high dynamic range (140 dB versus 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras starting from their working principle, the sensors that are currently available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
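To make the output format concrete: each event carries a timestamp, pixel location, and polarity. A minimal sketch of that data structure and of one common, simple processing step (summing signed events into a frame-like image); the dtype layout is our assumption:

```python
import numpy as np

# One event: (timestamp in microseconds, x, y, polarity of the brightness change).
event_dtype = np.dtype([("t", np.int64), ("x", np.uint16),
                        ("y", np.uint16), ("p", np.int8)])

def accumulate_frame(events, height, width):
    """Sum signed events into an image — a simple frame-like representation."""
    frame = np.zeros((height, width), dtype=np.int32)
    np.add.at(frame, (events["y"], events["x"]), events["p"])
    return frame
```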


Subjects
Algorithms, Robotics, Neural Networks (Computer)
11.
Int J Comput Vis ; 129(4): 821-844, 2021.
Article in English | MEDLINE | ID: mdl-34720404

ABSTRACT

Visual Localization is one of the key enabling technologies for autonomous driving and augmented reality. High-quality datasets with accurate 6 Degree-of-Freedom (DoF) reference poses are the foundation for benchmarking and improving existing methods. Traditionally, reference poses have been obtained via Structure-from-Motion (SfM). However, SfM itself relies on local features which are prone to fail when images are taken under different conditions, e.g., day/night changes. At the same time, manually annotating feature correspondences is not scalable and potentially inaccurate. In this work, we propose a semi-automated approach to generate reference poses based on feature matching between renderings of a 3D model and real images via learned features. Given an initial pose estimate, our approach iteratively refines the pose based on feature matches against a rendering of the model from the current pose estimate. We significantly improve the nighttime reference poses of the popular Aachen Day-Night dataset, showing that state-of-the-art visual localization methods perform better (up to 47%) than predicted by the original reference poses. We extend the dataset with new nighttime test images, provide uncertainty estimates for our new reference poses, and introduce a new evaluation criterion. We will make our reference poses and our framework publicly available upon publication.
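The iterative refinement described above can be sketched as a render-and-compare loop. All callables below are placeholders (our assumptions): `renderer` renders the 3D model from a pose, `matcher` returns 2D-2D feature matches, and `solve_pnp` lifts matches to 2D-3D via the rendering's depth and solves for an updated 6-DoF pose:

```python
def refine_pose(pose, real_image, model_3d, renderer, matcher, solve_pnp, iters=5):
    """Iteratively refine a pose by matching the real image against renderings."""
    for _ in range(iters):
        rendering, depth = renderer(model_3d, pose)
        matches = matcher(real_image, rendering)  # learned features, robust to day/night
        if len(matches) < 4:                      # too few correspondences to solve PnP
            break
        pose = solve_pnp(matches, depth, pose)
    return pose
```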

12.
Sci Robot ; 6(59): eabg5810, 2021 Oct 06.
Article in English | MEDLINE | ID: mdl-34613820

ABSTRACT

Quadrotors are agile. Unlike most other machines, they can traverse extremely complex environments at high speeds. To date, only expert human pilots have been able to fully exploit their capabilities. Autonomous operation with onboard sensing and computation has been limited to low speeds. State-of-the-art methods generally separate the navigation problem into subtasks: sensing, mapping, and planning. Although this approach has proven successful at low speeds, the separation it builds upon can be problematic for high-speed navigation in cluttered environments. The subtasks are executed sequentially, leading to increased processing latency and a compounding of errors through the pipeline. Here, we propose an end-to-end approach that can autonomously fly quadrotors through complex natural and human-made environments at high speeds with purely onboard sensing and computation. The key principle is to directly map noisy sensory observations to collision-free trajectories in a receding-horizon fashion. This direct mapping drastically reduces processing latency and increases robustness to noisy and incomplete perception. The sensorimotor mapping is performed by a convolutional network that is trained exclusively in simulation via privileged learning: imitating an expert with access to privileged information. By simulating realistic sensor noise, our approach achieves zero-shot transfer from simulation to challenging real-world environments that were never experienced during training: dense forests, snow-covered terrain, derailed trains, and collapsed buildings. Our work demonstrates that end-to-end policies trained in simulation enable high-speed autonomous flight through challenging environments, outperforming traditional obstacle avoidance pipelines.
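The receding-horizon step can be summarized in a few lines. This is our illustration of the control loop, not the released code; `network` is a placeholder for the convolutional policy, and the candidate-set interface is an assumption:

```python
import numpy as np

def select_trajectory(network, observation, goal_direction):
    """One receding-horizon step: map a noisy observation to candidate
    trajectories with predicted collision costs, then execute the best one
    for a short horizon before repeating.

    Returns the lowest-cost candidate, shape (T, 3) waypoints.
    """
    candidates, costs = network(observation, goal_direction)  # (K, T, 3), (K,)
    return candidates[np.argmin(costs)]
```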

13.
Sci Robot ; 6(56)2021 07 21.
Article in English | MEDLINE | ID: mdl-34290102

ABSTRACT

Quadrotors are among the most agile flying robots. However, planning time-optimal trajectories at the actuation limit through multiple waypoints remains an open problem. This is crucial for applications such as inspection, delivery, search and rescue, and drone racing. Early works used polynomial trajectory formulations, which do not exploit the full actuator potential because of their inherent smoothness. Recent works resorted to numerical optimization but require waypoints to be allocated as costs or constraints at specific discrete times. However, this time allocation is a priori unknown and renders previous works incapable of producing truly time-optimal trajectories. To generate truly time-optimal trajectories, we propose a solution to the time allocation problem while exploiting the quadrotor's full actuator potential. We achieve this by introducing a formulation of progress along the trajectory, which enables the simultaneous optimization of the time allocation and the trajectory itself. We compare our method against related approaches and validate it in real-world flights in one of the world's largest motion-capture systems, where we outperform human expert drone pilots in a drone-racing task.
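A sketch of what such a progress-based formulation can look like, written from the description above in our own notation (δ is an assumed waypoint-completion tolerance): each waypoint w_j carries a progress variable µ_j that must go from 1 to 0 over the horizon but may only decrease while the position p_k is within δ of w_j, so the time allocation emerges from the optimization rather than being fixed in advance.

```latex
\begin{aligned}
\min_{x,\,u,\,t_N}\quad & t_N \\
\text{s.t.}\quad & x_{k+1} = f(x_k, u_k) && \text{(dynamics, with actuator limits on } u_k) \\
& \mu_{j,0} = 1,\quad \mu_{j,N} = 0 && \text{(every waypoint } j \text{ is completed)} \\
& \nu_{j,k} = \mu_{j,k} - \mu_{j,k+1} \ge 0 && \text{(progress never increases)} \\
& \nu_{j,k}\left(\lVert p_k - w_j \rVert^2 - \delta^2\right) \le 0 && \text{(progress changes only near } w_j)
\end{aligned}
```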

14.
Magn Reson Med ; 86(4): 1829-1844, 2021 10.
Article in English | MEDLINE | ID: mdl-33973674

ABSTRACT

PURPOSE: We introduce a novel, generalized tracer kinetic model selection framework to quantify microvascular characteristics of liver and tumor tissue in gadoxetate-enhanced dynamic contrast-enhanced MRI (DCE-MRI). METHODS: Our framework includes a hierarchy of nested models, from which physiological parameters are derived in 2 regimes, corresponding to the active transport and free diffusion of gadoxetate. We use simulations to show the sensitivity of model selection and parameter estimation to temporal resolution, time-series duration, and noise. We apply the framework in 8 healthy volunteers (time-series duration up to 24 minutes) and 10 patients with hepatocellular carcinoma (6 minutes). RESULTS: The active transport regime is preferred in 98.6% of voxels in volunteers, 82.1% of patients' non-tumorous liver, and 32.2% of tumor voxels. Interpatient variations correspond to known co-morbidities. Simulations suggest both datasets have sufficient temporal resolution and signal-to-noise ratio, while patient data would be improved by using a time-series duration of at least 12 minutes. CONCLUSIONS: In patient data, gadoxetate exhibits different kinetics: (a) between liver and tumor regions and (b) within regions due to liver disease and/or tumor heterogeneity. Our generalized framework selects a physiological interpretation at each voxel, without preselecting a model for each region or duplicating time-consuming optimizations for models with identical functional forms.
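A per-voxel nested model selection loop can be sketched as follows. This is our illustration — the paper's model equations and its selection statistic are not reproduced; the corrected Akaike Information Criterion used here is an assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_and_select(models, t, signal, n_params):
    """Fit a hierarchy of nested models to one voxel's time series and select one.

    models:   list of callables model(t, *params), simplest to most complex.
    n_params: parameter count of each model.
    Returns the (model, fitted_params) pair with the lowest corrected AIC.
    """
    best, best_aicc = None, np.inf
    for model, k in zip(models, n_params):
        try:
            params, _ = curve_fit(model, t, signal, p0=np.ones(k), maxfev=5000)
        except RuntimeError:  # fit failed to converge; skip this model
            continue
        rss = np.sum((signal - model(t, *params)) ** 2)
        n = len(signal)
        aicc = n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1)
        if aicc < best_aicc:
            best, best_aicc = (model, params), aicc
    return best
```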


Subjects
Hepatocellular Carcinoma, Liver Neoplasms, Hepatocellular Carcinoma/diagnostic imaging, Contrast Media, Gadolinium DTPA, Humans, Liver/diagnostic imaging, Liver Neoplasms/diagnostic imaging, Magnetic Resonance Imaging
15.
IEEE Trans Pattern Anal Mach Intell ; 43(6): 1964-1980, 2021 Jun.
Article in English | MEDLINE | ID: mdl-31902754

ABSTRACT

Event cameras are novel sensors that report brightness changes in the form of a stream of asynchronous "events" instead of intensity frames. They offer significant advantages with respect to conventional cameras: high temporal resolution, high dynamic range, and no motion blur. While the stream of events encodes in principle the complete visual signal, the reconstruction of an intensity image from a stream of events is an ill-posed problem in practice. Existing reconstruction approaches are based on hand-crafted priors and strong assumptions about the imaging process as well as the statistics of natural images. In this work, we propose to learn to reconstruct intensity images from event streams directly from data instead of relying on any hand-crafted priors. We propose a novel recurrent network to reconstruct videos from a stream of events, and train it on a large amount of simulated event data. During training we propose to use a perceptual loss to encourage reconstructions to follow natural image statistics. We further extend our approach to synthesize color images from color event streams. Our quantitative experiments show that our network surpasses state-of-the-art reconstruction methods by a large margin in terms of image quality, while comfortably running in real time. We show that the network is able to synthesize high-framerate videos of high-speed phenomena (e.g., a bullet hitting an object) and is able to provide high dynamic range reconstructions in challenging lighting conditions. As an additional contribution, we demonstrate the effectiveness of our reconstructions as an intermediate representation for event data. We show that off-the-shelf computer vision algorithms can be applied to our reconstructions for tasks such as object classification and visual-inertial odometry and that this strategy consistently outperforms algorithms that were specifically designed for event data. We release the reconstruction code, a pre-trained model and the datasets to enable further research.
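A minimal sketch of the recurrent reconstruction idea — our illustration, not the paper's network: events are assumed to be pre-binned into a voxel grid of B temporal channels, a hidden state (initialized to zeros) carries information across event windows, and each step emits one intensity frame:

```python
import torch
import torch.nn as nn

class RecurrentReconstructor(nn.Module):
    """Recurrent event-to-video reconstruction sketch (single conv per step)."""
    def __init__(self, bins=5, hidden=32):
        super().__init__()
        self.encode = nn.Conv2d(bins + hidden, hidden, 3, padding=1)
        self.decode = nn.Conv2d(hidden, 1, 3, padding=1)

    def forward(self, voxel_grid, state):
        # voxel_grid: (B, bins, H, W) binned events; state: (B, hidden, H, W).
        state = torch.tanh(self.encode(torch.cat([voxel_grid, state], dim=1)))
        frame = torch.sigmoid(self.decode(state))  # reconstructed intensity in [0, 1]
        return frame, state
```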

16.
Sci Robot ; 5(40)2020 03 18.
Article in English | MEDLINE | ID: mdl-33022598

ABSTRACT

Today's autonomous drones have reaction times of tens of milliseconds, which is not enough for fast navigation in complex, dynamic environments. To safely avoid fast-moving objects, drones need low-latency sensors and algorithms. We departed from state-of-the-art approaches by using event cameras, which are bioinspired sensors with reaction times of microseconds. Our approach exploits the temporal information contained in the event stream to distinguish between static and dynamic objects and leverages a fast strategy to generate the motor commands necessary to avoid the approaching obstacles. Standard vision algorithms cannot be applied to event cameras because the output of these sensors is not images but a stream of asynchronous events that encode per-pixel intensity changes. Our resulting algorithm has an overall latency of only 3.5 milliseconds, which is sufficient for reliable detection and avoidance of fast-moving obstacles. We demonstrate the effectiveness of our approach on an autonomous quadrotor using only onboard sensing and computation. Our drone was capable of avoiding multiple obstacles of different sizes and shapes, at relative speeds up to 10 meters/second, both indoors and outdoors.
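A sketch of one way the temporal information in the event stream can separate static from dynamic objects — our simplified reading, not the paper's algorithm. `warp` is a placeholder that maps each event's pixel to its ego-motion-compensated location; after compensation, static structure collapses into consistent edges while independently moving objects leave pixels dominated by recent events, approximated here by a thresholded mean-timestamp image:

```python
import numpy as np

def dynamic_mask(events, warp, height, width, threshold=0.5):
    """Flag pixels likely belonging to independently moving objects.

    events: structured array with fields "t", "x", "y" (see the event dtype
    sketch above); warp(x, y, t) -> integer pixel coordinates after
    ego-motion compensation (an assumed callable).
    """
    t = (events["t"] - events["t"].min()) / max(np.ptp(events["t"]), 1)
    xw, yw = warp(events["x"], events["y"], t)
    mean_t = np.zeros((height, width))
    count = np.zeros((height, width))
    np.add.at(mean_t, (yw, xw), t)
    np.add.at(count, (yw, xw), 1)
    mean_t = np.divide(mean_t, count, out=np.zeros_like(mean_t), where=count > 0)
    return mean_t > threshold  # pixels dominated by recent events: likely dynamic
```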

17.
BJR Case Rep ; 6(2): 20190065, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33029362

ABSTRACT

The onset of an autoimmune, sarcoidosis-like reaction during or after treatment with immunomodulatory drugs such as Ipilimumab is an atypical but well-documented eventuality. Awareness of this scenario and its radiological features helps the radiologist avoid misdiagnosing it as disease progression. In this case report, we present a patient operated on for advanced cutaneous melanoma of the left forearm who developed hilar adenopathies with lung and splenic nodules during adjuvant therapy with Ipilimumab. These findings were at first interpreted as disease recurrence. Based on discrepancies among the imaging, clinical, and blood test findings, we placed the patient on strict follow-up, which showed spontaneous complete regression of the visceral lesions a few months after Ipilimumab was withheld.

18.
Surg Oncol ; 35: 89-96, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32858390

ABSTRACT

BACKGROUND: Selection criteria for proposing neoadjuvant (re)chemoradiation (CHRT) in locally recurrent rectal cancer (LRRC) are required, since re-irradiation is sometimes associated with severe adverse effects. The aim of the present study was to compare the chances of R0 surgery and disease-free survival (DFS) in LRRC patients (pts) treated with neoadjuvant (re)CHRT followed by surgery vs. upfront surgery, stratifying pts by localization of the LRRC. METHODS: LRRC pts treated at the National Cancer Institute of Milan (Italy) were retrospectively divided into two groups: neoadjuvant (re)CHRT vs. upfront surgery. According to our Milan classification, LRRC was categorized as S1 if located centrally (S1a-b) or anteriorly (S1c) within the pelvis; S2 in case of sacral involvement; and S3 in case of lateral pelvic wall infiltration. RESULTS: 152 pts were candidates for multimodal treatment: 49 (32.2%) received neoadjuvant (re)CHRT and surgery, including 33 re-irradiations, vs. 103 (67.8%) upfront surgery. No difference was observed in R0 resection rates (47.6% vs. 51.0%, respectively). However, neoadjuvant (re)CHRT followed by surgery improved DFS (p = 0.028), also in R1 procedures (p = 0.013), compared with upfront surgery. On multivariate analysis, R+ surgery (p < 0.0001) strongly predicted unfavorable DFS, while neoadjuvant (re)CHRT followed by surgery was independently associated with better DFS (p = 0.0197). Stratifying by LRRC localization, the combined approach significantly improved DFS in the S1c (p = 0.029) and S2 (p = 0.004) subgroups compared with upfront surgery, but not in S1a-b and S3 pts. CONCLUSION: Anterior (S1c) and sacral-invasive (S2) pelvic recurrences benefit significantly in terms of DFS from the combination of neoadjuvant (re)CHRT and radical surgery, even after R1 resection.


Subjects
Chemoradiotherapy/mortality, Neoadjuvant Therapy/mortality, Local Neoplasm Recurrence/mortality, Pelvic Neoplasms/mortality, Rectal Neoplasms/mortality, Female, Follow-Up Studies, Humans, Male, Middle Aged, Local Neoplasm Recurrence/pathology, Local Neoplasm Recurrence/therapy, Pelvic Neoplasms/secondary, Pelvic Neoplasms/therapy, Prognosis, Rectal Neoplasms/pathology, Rectal Neoplasms/therapy, Retrospective Studies, Survival Rate
19.
J Surg Oncol ; 122(2): 350-359, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32424824

ABSTRACT

BACKGROUND AND OBJECTIVES: Selection of patients affected by pelvic recurrence of rectal cancer (PRRC) who are likely to achieve an R0 resection is mandatory. The aim of this study was to propose a classification for PRRC that predicts both radical surgery and disease-free survival (DFS). METHODS: PRRC patients treated at the National Cancer Institute of Milan (Italy) were included in the study. PRRC was classified as S1 if located centrally (S1a-S1b) or anteriorly (S1c) within the pelvis; S2 in case of sacral involvement below (S2a) or above (S2b) the second sacral vertebra; and S3 in case of lateral pelvic involvement. RESULTS: Of 280 reviewed PRRC patients, 152 (54.3%) were evaluated for curative surgery. The strongest predictor of R+ resection was the S3 category (OR, 6.37; P = .011). Abdominosacral resection (P = .012), anterior exenteration (P = .012), and extended rectal re-excision (P = .003) were predictive of R0 resection. The S3 category was highly predictive of poor DFS (HR 2.53; P = .038). DFS was significantly improved after R0 surgery for S1 (P < .0001) and S2 (P = .015) patients but not for S3 cases (P = .525). CONCLUSIONS: The proposed classification allows selection of candidates for curative surgery, emphasizing that lateral pelvic involvement is the main predictor of R+ resection and independently affects DFS.


Subjects
Decision Making, Local Neoplasm Recurrence/classification, Local Neoplasm Recurrence/surgery, Pelvic Neoplasms/classification, Pelvic Neoplasms/surgery, Rectal Neoplasms/classification, Rectal Neoplasms/surgery, Analysis of Variance, Adjuvant Chemotherapy, Disease-Free Survival, Female, Humans, Male, Middle Aged, Local Neoplasm Recurrence/pathology, Pelvic Neoplasms/pathology, Proportional Hazards Models, Adjuvant Radiotherapy, Rectal Neoplasms/pathology, Rectal Neoplasms/therapy, Survival Rate
20.
BJR Case Rep ; 5(2): 20180036, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31501694

ABSTRACT

Solid tumors of the spleen are rare, with an incidence of 0.007% in all surgical and autopsy specimens. In terms of microscopic structure and function, the spleen consists of two parts: the white pulp, which plays an important role in the immune system, and the red pulp, which filters the blood. Primary splenic neoplasms can be classified into lymphoid neoplasms, arising from the white pulp, and vascular neoplasms, which arise from the red pulp. Primary tumors arising from vascular elements include benign lesions such as hemangioma, lymphangioma, and hamartoma; intermediate lesions such as hemangioendothelioma, hemangiopericytoma, and littoral cell angioma; and the frankly malignant hemangiosarcoma. It is usually difficult to distinguish a benign from a malignant lesion with preoperative imaging studies and cytological examination by fine-needle aspiration (FNA), which is not easily obtained because of the risk of bleeding. Therefore, a splenectomy is often necessary for a definitive diagnosis of splenic tumors. Martel et al. first described sclerosing angiomatoid nodular transformation (SANT) as a vascular lesion of the spleen with a benign clinical course, consisting of altered red pulp tissue entrapped by a non-neoplastic stromal proliferative process. We describe a rare case of a benign splenic mass documented with FDG PET-CT (reported as equivocal), CT, and MRI.
