Results 1 - 11 of 11
1.
Indian J Public Health; 63(3): 254-257, 2019.
Article in English | MEDLINE | ID: mdl-31552858

ABSTRACT

The present study aimed to determine the effect of disease-related impairments on functional status in individuals with spinal muscular atrophy and to identify perceived barriers to undergoing physiotherapy. The cross-sectional observational study was conducted on 90 participants from January to March 2018 using a validated patient-reported questionnaire administered via electronic mail, along with the Fatigue Severity Scale and ACTIVLIM. Results revealed that difficulty in sitting was due to scoliosis (36%) and muscle weakness (23%), the latter also contributing to difficulty in standing and walking (59%). An inverse relationship exists between ACTIVLIM measures and fatigue severity scores (r = -0.338, P < 0.05), body mass index (r = -0.225, P < 0.05), age (r = -0.258, P < 0.05), and duration of illness (r = -0.257, P < 0.05). Economic constraints (27%), difficulty in traveling (17%), and lack of family support and mobility (14%) are perceived barriers to undergoing physiotherapy. Functional impairments and identified barriers must be addressed as part of rehabilitation.
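The inverse relationships above are reported as Pearson correlation coefficients. As a minimal illustration of how such an r value is computed (the numbers below are toy data, not the study's measurements):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Toy data: as fatigue severity rises, the ACTIVLIM measure falls,
# giving a negative r like the inverse relationships reported above.
fatigue = [2.0, 3.5, 4.0, 5.5, 6.0]
activlim = [40, 35, 33, 28, 25]
print(round(pearson_r(fatigue, activlim), 3))
```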


Subject(s)
Muscular Atrophy, Spinal/physiopathology; Muscular Atrophy, Spinal/rehabilitation; Patient Acceptance of Health Care/psychology; Adolescent; Adult; Age Factors; Body Mass Index; Child; Cross-Sectional Studies; Fatigue/etiology; Female; Health Services Accessibility; Humans; India; Male; Muscle Fatigue; Muscular Atrophy, Spinal/complications; Physical Functional Performance; Scoliosis/etiology; Severity of Illness Index; Socioeconomic Factors; Time Factors; Walking; Young Adult
2.
Article in English | MEDLINE | ID: mdl-37260834

ABSTRACT

Recently, deep learning networks have achieved considerable success in segmenting organs in medical images. Several methods have used volumetric information with deep networks to achieve segmentation accuracy. However, these networks suffer from interference, risk of overfitting, and low accuracy as a result of artifacts, in the case of very challenging objects like the brachial plexuses. In this paper, to address these issues, we synergize the strengths of high-level human knowledge (i.e., natural intelligence (NI)) with deep learning (i.e., artificial intelligence (AI)) for recognition and delineation of the thoracic brachial plexuses (BPs) in computed tomography (CT) images. We formulate an anatomy-guided deep learning hybrid intelligence approach for segmenting thoracic right and left brachial plexuses consisting of 2 key stages. In the first stage (AAR-R), objects are recognized based on a previously created fuzzy anatomy model of the body region with its key organs relevant for the task at hand wherein high-level human anatomic knowledge is precisely codified. The second stage (DL-D) uses information from AAR-R to limit the search region to just where each object is most likely to reside and performs encoder-decoder delineation in slices. The proposed method is tested on a dataset that consists of 125 images of the thorax acquired for radiation therapy planning of tumors in the thorax and achieves a Dice coefficient of 0.659.
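The Dice coefficient reported above measures overlap between the automatic and reference segmentations. A minimal sketch, representing binary masks as sets of voxel coordinates (the masks here are invented, not data from the paper):

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as sets of voxel coords."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly by convention
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}   # hypothetical auto-segmentation
truth = {(0, 1), (1, 0), (1, 1), (2, 1)}  # hypothetical ground truth
print(dice(auto, truth))  # 2*3 / (4+4) = 0.75
```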

3.
Med Image Anal; 81: 102527, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35830745

ABSTRACT

PURPOSE: Despite advances in deep learning, robust medical image segmentation in the presence of artifacts, pathology, and other imaging shortcomings has remained a challenge. In this paper, we demonstrate that by synergistically marrying the unmatched strengths of high-level human knowledge (i.e., natural intelligence (NI)) with the capabilities of deep learning (DL) networks (i.e., artificial intelligence (AI)) in garnering intricate details, these challenges can be significantly overcome. Focusing on the object recognition task, we formulate an anatomy-guided deep learning object recognition approach named AAR-DL which combines an advanced anatomy-modeling strategy, model-based non-deep-learning object recognition, and deep learning object detection networks to achieve expert human-like performance. METHODS: The AAR-DL approach consists of 4 key modules wherein prior knowledge (NI) is made use of judiciously at every stage. In the first module AAR-R, objects are recognized based on a previously created fuzzy anatomy model of the body region with all its organs following the automatic anatomy recognition (AAR) approach wherein high-level human anatomic knowledge is precisely codified. This module is purely model-based with no DL involvement. Although the AAR-R operation lacks accuracy, it is robust to artifacts and deviations (much like NI), and provides the much-needed anatomic guidance in the form of rough regions-of-interest (ROIs) for the following DL modules. The 2nd module DL-R makes use of the ROI information to limit the search region to just where each object is most likely to reside and performs DL-based detection of the 2D bounding boxes (BBs) in slices. The 2D BBs hug the shape of the 3D object much better than 3D BBs and their detection is feasible only due to anatomy guidance from AAR-R. In the 3rd module, the AAR model is deformed via the found 2D BBs providing refined model information which now embodies both NI and AI decisions. 
The refined AAR model more actively guides the 4th refined DL-R module to perform final object detection via DL. Anatomy knowledge is made use of in designing the DL networks wherein spatially sparse objects and non-sparse objects are handled differently to provide the required level of attention for each. RESULTS: Utilizing 150 thoracic and 225 head and neck (H&N) computed tomography (CT) data sets of cancer patients undergoing routine radiation therapy planning, the recognition performance of the AAR-DL approach is evaluated on 10 thoracic and 16 H&N organs in comparison to pure model-based approach (AAR-R) and pure DL approach without anatomy guidance. Recognition accuracy is assessed via location error/ centroid distance error, scale or size error, and wall distance error. The results demonstrate how the errors are gradually and systematically reduced from the 1st module to the 4th module as high-level knowledge is infused via NI at various stages into the processing pipeline. This improvement is especially dramatic for sparse and artifact-prone challenging objects, achieving a location error over all objects of 4.4 mm and 4.3 mm for the two body regions, respectively. The pure DL approach failed on several very challenging sparse objects while AAR-DL achieved accurate recognition, almost matching human performance, showing the importance of anatomy guidance for robust operation. Anatomy guidance also reduces the time required for training DL networks considerably. CONCLUSIONS: (i) High-level anatomy guidance improves recognition performance of DL methods. (ii) This improvement is especially noteworthy for spatially sparse, low-contrast, inconspicuous, and artifact-prone objects. (iii) Once anatomy guidance is provided, 3D objects can be detected much more accurately via 2D BBs than 3D BBs and the 2D BBs represent object containment with much more specificity. (iv) Anatomy guidance brings stability and robustness to DL approaches for object localization. 
(v) The training time can be greatly reduced by making use of anatomy guidance.
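One of the recognition accuracy measures above, location error as centroid distance, can be sketched as follows (the voxel coordinates and spacing are hypothetical, not the paper's data):

```python
import math

def centroid(voxels):
    """Mean position of a set of 3D voxel coordinates."""
    n = len(voxels)
    return tuple(sum(v[i] for v in voxels) / n for i in range(3))

def location_error(pred_voxels, true_voxels, spacing=(1.0, 1.0, 1.0)):
    """Centroid distance in mm between predicted and true object masks,
    scaling each axis by the voxel spacing."""
    cp, ct = centroid(pred_voxels), centroid(true_voxels)
    return math.sqrt(sum(((a - b) * s) ** 2 for a, b, s in zip(cp, ct, spacing)))

pred = [(10, 10, 5), (12, 10, 5)]  # hypothetical recognition result
true = [(10, 13, 5), (12, 13, 5)]  # hypothetical ground truth
print(location_error(pred, true, spacing=(1.0, 1.0, 2.0)))
```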


Asunto(s)
Aprendizaje Profundo , Procesamiento de Imagen Asistido por Computador , Algoritmos , Inteligencia Artificial , Humanos , Procesamiento de Imagen Asistido por Computador/métodos , Tomografía Computarizada por Rayos X/métodos
4.
Med Phys; 49(11): 7118-7149, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35833287

ABSTRACT

BACKGROUND: Automatic segmentation of 3D objects in computed tomography (CT) is challenging. Current methods, based mainly on artificial intelligence (AI) and end-to-end deep learning (DL) networks, are weak in garnering high-level anatomic information, which leads to compromised efficiency and robustness. This can be overcome by incorporating natural intelligence (NI) into AI methods via computational models of human anatomic knowledge. PURPOSE: We formulate a hybrid intelligence (HI) approach that integrates the complementary strengths of NI and AI for organ segmentation in CT images and illustrate performance in the application of radiation therapy (RT) planning via multisite clinical evaluation. METHODS: The system employs five modules: (i) body region recognition, which automatically trims a given image to a precisely defined target body region; (ii) NI-based automatic anatomy recognition object recognition (AAR-R), which performs object recognition in the trimmed image without DL and outputs a localized fuzzy model for each object; (iii) DL-based recognition (DL-R), which refines the coarse recognition results of AAR-R and outputs a stack of 2D bounding boxes (BBs) for each object; (iv) model morphing (MM), which deforms the AAR-R fuzzy model of each object guided by the BBs output by DL-R; and (v) DL-based delineation (DL-D), which employs the object containment information provided by MM to delineate each object. NI from (ii), AI from (i), (iii), and (v), and their combination from (iv) facilitate the HI system. RESULTS: The HI system was tested on 26 organs in neck and thorax body regions on CT images obtained prospectively from 464 patients in a study involving four RT centers. Data sets from one separate independent institution involving 125 patients were employed in training/model building for each of the two body regions, whereas 104 and 110 data sets from the 4 RT centers were utilized for testing on neck and thorax, respectively. 
In the testing data sets, 83% of the images had limitations such as streak artifacts, poor contrast, shape distortion, pathology, or implants. The contours output by the HI system were compared to contours drawn in clinical practice at the four RT centers by utilizing an independently established ground-truth set of contours as reference. Three sets of measures were employed: accuracy via Dice coefficient (DC) and Hausdorff boundary distance (HD), subjective clinical acceptability via a blinded reader study, and efficiency by measuring human time saved in contouring by the HI system. Overall, the HI system achieved a mean DC of 0.78 and 0.87 and a mean HD of 2.22 and 4.53 mm for neck and thorax, respectively. It significantly outperformed clinical contouring in accuracy and saved overall 70% of human time over clinical contouring time, whereas acceptability scores varied significantly from site to site for both auto-contours and clinically drawn contours. CONCLUSIONS: The HI system is observed to behave like an expert human in robustness in the contouring task but vastly more efficiently. It seems to use NI help where image information alone will not suffice to decide, first for the correct localization of the object and then for the precise delineation of the boundary.
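The Hausdorff boundary distance (HD) used alongside the Dice coefficient above can be sketched for small point sets with a brute-force implementation (the contour points below are made up for illustration):

```python
import math

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (e.g., boundary
    voxels of two contours): the largest distance from any point in one set
    to its nearest neighbor in the other."""
    def directed(s, t):
        return max(min(math.dist(p, q) for q in t) for p in s)
    return max(directed(a, b), directed(b, a))

contour_auto = [(0, 0), (0, 1), (0, 2)]  # hypothetical auto contour
contour_ref = [(1, 0), (1, 1), (1, 4)]   # hypothetical reference contour
print(hausdorff(contour_auto, contour_ref))
```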


Subject(s)
Artificial Intelligence; Humans; Cone-Beam Computed Tomography
5.
Med Image Anal; 54: 45-62, 2019 May.
Article in English | MEDLINE | ID: mdl-30831357

ABSTRACT

Contouring (segmentation) of Organs at Risk (OARs) in medical images is required for accurate radiation therapy (RT) planning. In current clinical practice, OAR contouring is performed with low levels of automation. Although several approaches have been proposed in the literature for improving automation, it is difficult to gain an understanding of how well these methods would perform in a realistic clinical setting. This is chiefly due to three key factors - small number of patient studies used for evaluation, lack of performance evaluation as a function of input image quality, and lack of precise anatomic definitions of OARs. In this paper, extending our previous body-wide Automatic Anatomy Recognition (AAR) framework to RT planning of OARs in the head and neck (H&N) and thoracic body regions, we present a methodology called AAR-RT to overcome some of these hurdles. AAR-RT follows AAR's 3-stage paradigm of model-building, object-recognition, and object-delineation. Model-building: Three key advances were made over AAR. (i) AAR-RT (like AAR) starts off with a computationally precise definition of the two body regions and all of their OARs. Ground truth delineations of OARs are then generated following these definitions strictly. We retrospectively gathered patient data sets and the associated contour data sets that have been created previously in routine clinical RT planning from our Radiation Oncology department and mended the contours to conform to these definitions. We then derived an Object Quality Score (OQS) for each OAR sample and an Image Quality Score (IQS) for each study, both on a 1-to-10 scale, based on quality grades assigned to each OAR sample following 9 key quality criteria. Only studies with high IQS and high OQS for all of their OARs were selected for model building. IQS and OQS were employed for evaluating AAR-RT's performance as a function of image/object quality. 
(ii) In place of the previous hand-crafted hierarchy for organizing OARs in AAR, we devised a method to find an optimal hierarchy for each body region. Optimality was based on minimizing object recognition error. (iii) In addition to the parent-to-child relationship encoded in the hierarchy in previous AAR, we developed a directed probability graph technique to further improve recognition accuracy by learning and encoding in the model "steady" relationships that may exist among OAR boundaries in the three orthogonal planes. Object-recognition: The two key improvements over the previous approach are (i) use of the optimal hierarchy for actual recognition of OARs in a given image, and (ii) refined recognition by making use of the trained probability graph. Object-delineation: We use a kNN classifier confined to the fuzzy object mask localized by the recognition step and then fit optimally the fuzzy mask to the kNN-derived voxel cluster to bring back shape constraint on the object. We evaluated AAR-RT on 205 thoracic and 298 H&N (total 503) studies, involving both planning and re-planning scans and a total of 21 organs (9 - thorax, 12 - H&N). The studies were gathered from two patient age groups for each gender - 40-59 years and 60-79 years. The number of 3D OAR samples analyzed from the two body regions was 4301. IQS and OQS tended to cluster at the two ends of the score scale. Accordingly, we considered two quality groups for each gender - good and poor. Good quality data sets typically had OQS ≥ 6 and had distortions, artifacts, pathology etc. in not more than 3 slices through the object. The number of model-worthy data sets used for training were 38 for thorax and 36 for H&N, and the remaining 479 studies were used for testing AAR-RT. Accordingly, we created 4 anatomy models, one each for: Thorax male (20 model-worthy data sets), Thorax female (18 model-worthy data sets), H&N male (20 model-worthy data sets), and H&N female (16 model-worthy data sets). 
On "good" cases, AAR-RT's recognition accuracy was within 2 voxels and delineation boundary distance was within ∼1 voxel. This was similar to the variability observed between two dosimetrists in manually contouring 5-6 OARs in each of 169 studies. On "poor" cases, AAR-RT's errors hovered around 5 voxels for recognition and 2 voxels for boundary distance. The performance was similar on planning and replanning cases, and there was no gender difference in performance. AAR-RT's recognition operation is much more robust than delineation. Understanding object and image quality and how they influence performance is crucial for devising effective object recognition and delineation algorithms. OQS seems to be more important than IQS in determining accuracy. Streak artifacts arising from dental implants and fillings and beam hardening from bone pose the greatest challenge to auto-contouring methods.
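The delineation step above confines a kNN classifier to the fuzzy object mask. A bare-bones sketch of the voting idea, using intensity as the only feature (the actual classifier uses richer features and a final fuzzy-model fit; the sample values below are invented):

```python
def knn_label(voxel_intensity, samples, k=3):
    """Classify one voxel by majority vote among its k nearest training
    samples, with nearness measured on intensity alone.

    samples: list of (intensity, label) pairs drawn from inside the
    recognition-localized fuzzy mask.
    """
    nearest = sorted(samples, key=lambda s: abs(s[0] - voxel_intensity))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

train = [(40, "object"), (45, "object"), (52, "object"),
         (90, "background"), (95, "background"), (120, "background")]
print(knn_label(48, train))  # intensity close to the object samples
```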


Subject(s)
Head and Neck Neoplasms/diagnostic imaging; Organs at Risk/diagnostic imaging; Radiotherapy Planning, Computer-Assisted/methods; Thoracic Neoplasms/diagnostic imaging; Tomography, X-Ray Computed; Adult; Aged; Anatomic Landmarks; Female; Head and Neck Neoplasms/radiotherapy; Humans; Male; Middle Aged; Models, Anatomic; Pattern Recognition, Automated; Retrospective Studies; Thoracic Neoplasms/radiotherapy
6.
J Heart Lung Transplant; 38(12): 1246-1256, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31474492

ABSTRACT

BACKGROUND: Obesity is associated with an increased risk of primary graft dysfunction (PGD) after lung transplantation. The contribution of specific adipose tissue depots is unknown. METHODS: We performed a prospective cohort study of adult lung transplant recipients at 4 U.S. transplant centers. We measured cross-sectional areas of subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) on chest and abdominal computed tomography (CT) scans and indexed each measurement to height². We used logistic regression to examine the associations of adipose indices and adipose classes with grade 3 PGD at 48 or 72 hours, and Cox proportional hazards models to examine survival. We used latent class analyses to identify the patterns of adipose distribution. We examined the associations of adipose indices with plasma biomarkers of obesity and PGD. RESULTS: A total of 262 and 117 subjects had available chest CT scans and underwent protocol abdominal CT scans, respectively. In the adjusted models, a greater abdominal SAT index was associated with an increased risk of PGD (odds ratio 1.9, 95% CI 1.02-3.4, p = 0.04) but not with survival time. VAT indices were not associated with PGD risk or survival time. A greater abdominal SAT index correlated with greater pre- and post-transplant leptin (r = 0.61, p < 0.001, and r = 0.44, p < 0.001), pre-transplant IL-1RA (r = 0.25, p = 0.04), and post-transplant ICAM-1 (r = 0.25, p = 0.04). We identified 3 latent patterns of adiposity. The class defined by high thoracic and abdominal SAT had the greatest risk of PGD. CONCLUSIONS: Subcutaneous, but not visceral, adiposity is associated with an increased risk of PGD after lung transplantation.
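The adjusted association above is reported as an odds ratio from logistic regression: the odds ratio per unit increase in a predictor is exp(beta) for that predictor's fitted coefficient, and a confidence interval on beta maps through exp() the same way. A small sketch with an illustrative coefficient (not a value from the study):

```python
import math

def odds_ratio(beta, ci_beta=None):
    """Convert a logistic-regression coefficient to an odds ratio.

    beta: fitted coefficient for the predictor (e.g., an adipose index).
    ci_beta: optional (low, high) confidence interval on beta; it is
    mapped through exp() to give the CI on the odds ratio.
    """
    or_ = math.exp(beta)
    if ci_beta is None:
        return or_
    return or_, (math.exp(ci_beta[0]), math.exp(ci_beta[1]))

# Illustrative only: a coefficient of ~0.642 corresponds to an odds
# ratio of ~1.9, like the one reported above for the abdominal SAT index.
print(round(math.exp(0.642), 2))
```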


Subject(s)
Adipose Tissue/anatomy & histology; Lung Transplantation; Primary Graft Dysfunction/epidemiology; Adipose Tissue/diagnostic imaging; Aged; Body Composition; Female; Humans; Male; Middle Aged; Obesity/complications; Organ Size; Primary Graft Dysfunction/etiology; Prospective Studies; Risk Assessment; Tomography, X-Ray Computed
7.
Article in English | MEDLINE | ID: mdl-30111903

ABSTRACT

Algorithms for image segmentation (including object recognition and delineation) are influenced by the quality of object appearance in the image and overall image quality. However, the issue of how to perform segmentation evaluation as a function of these quality factors has not been addressed in the literature. In this paper, we present a solution to this problem. We devised a set of key quality criteria that influence segmentation (global and regional): posture deviations, image noise, beam hardening artifacts (streak artifacts), shape distortion, presence of pathology, object intensity deviation, and object contrast. A trained reader assigned a grade to each object for each criterion in each study. We developed algorithms based on logical predicates for determining a 1 to 10 numeric quality score for each object and each image from reader-assigned quality grades. We analyzed these object and image quality scores (OQS and IQS, respectively) in our data cohort by gender and age. We performed recognition and delineation of all objects using recent adaptations [8, 9] of our Automatic Anatomy Recognition (AAR) framework [6] and analyzed the accuracy of recognition and delineation of each object. We illustrate our method on 216 head & neck and 211 thoracic cancer computed tomography (CT) studies.
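A toy version of mapping reader-assigned grades to a 1-to-10 quality score via simple logical rules might look like the following. The criterion names mirror those listed above, but the scoring rule itself is invented for illustration; the paper's actual logical predicates are not reproduced here:

```python
def object_quality_score(grades):
    """Map per-criterion quality grades (0 = worst .. 10 = best) to a
    1-to-10 object quality score (OQS).

    Illustrative rule only: the score tracks the mean grade but is capped
    a few points above the single worst grade, so one severe defect
    (e.g., heavy streak artifacts) drags the score down.
    """
    mean = sum(grades.values()) / len(grades)
    worst = min(grades.values())
    score = min(mean, worst + 3)
    return max(1, min(10, round(score)))

sample = {"noise": 8, "streak_artifacts": 2, "contrast": 7,
          "shape_distortion": 9, "pathology": 8}
print(object_quality_score(sample))
```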

8.
Article in English | MEDLINE | ID: mdl-30190630

ABSTRACT

Segmentation of organs at risk (OARs) is a key step during the radiation therapy (RT) treatment planning process. Automatic anatomy recognition (AAR) is a recently developed body-wide multiple object segmentation approach, where segmentation is designed as two dichotomous steps: object recognition (or localization) and object delineation. Recognition is the high-level process of determining the whereabouts of an object, and delineation is the meticulous low-level process of precisely indicating the space occupied by an object. This study focuses on recognition. The purpose of this paper is to introduce new features of the AAR-recognition approach (abbreviated as AAR-R from now on) of combining texture and intensity information into the recognition procedure, using the optimal spanning tree to achieve the optimal hierarchy for recognition to minimize recognition errors, and to illustrate recognition performance by using large-scale testing computed tomography (CT) data sets. The data sets pertain to 216 non-serial (planning) and 82 serial (re-planning) studies of head and neck (H&N) cancer patients undergoing radiation therapy, involving a total of ~2600 object samples. Texture property "maximum probability of occurrence" derived from the co-occurrence matrix was determined to be the best property and is utilized in conjunction with intensity properties in AAR-R. An optimal spanning tree is found in the complete graph whose nodes are individual objects, and then the tree is used as the hierarchy in recognition. Texture information combined with intensity can significantly reduce location error for gland-related objects (parotid and submandibular glands). We also report recognition results by considering image quality, which is a novel concept. AAR-R with new features achieves a location error of less than 4 mm (~1.5 voxels in our studies) for good quality images for both serial and non-serial studies.
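The optimal spanning tree of the complete object graph described above can be found with a standard minimum-spanning-tree algorithm. Below is a Prim's-algorithm sketch; the pairwise costs are hypothetical stand-ins for the recognition errors the paper minimizes, and the organ names are examples only:

```python
def prim_mst(nodes, weight):
    """Minimum spanning tree of a complete graph via Prim's algorithm.

    weight(u, v) is any symmetric cost function (in AAR-R it would reflect
    pairwise recognition error). Returns the tree as a list of
    (parent, child) edges, readable as a hierarchy rooted at nodes[0].
    """
    in_tree = {nodes[0]}
    edges = []
    while len(in_tree) < len(nodes):
        # Cheapest edge crossing from the tree to an outside node.
        u, v = min(((a, b) for a in in_tree for b in nodes if b not in in_tree),
                   key=lambda e: weight(*e))
        edges.append((u, v))
        in_tree.add(v)
    return edges

# Hypothetical pairwise "recognition error" costs between four objects.
cost = {frozenset(p): w for p, w in [
    (("skin", "mandible"), 1.0), (("skin", "parotid"), 4.0),
    (("skin", "submandibular"), 5.0), (("mandible", "parotid"), 2.0),
    (("mandible", "submandibular"), 2.5), (("parotid", "submandibular"), 3.0)]}
tree = prim_mst(["skin", "mandible", "parotid", "submandibular"],
                lambda u, v: cost[frozenset((u, v))])
print(tree)
```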

9.
Article in English | MEDLINE | ID: mdl-30190629

ABSTRACT

Contouring of the organs at risk is a vital part of routine radiation therapy planning. For the head and neck (H&N) region, this is more challenging due to the complexity of anatomy, the presence of streak artifacts, and the variations of object appearance. In this paper, we describe the latest advances in our Automatic Anatomy Recognition (AAR) approach, which aims to automatically contour multiple objects in the head and neck region on planning CT images. Our method has three major steps: model building, object recognition, and object delineation. First, the better-quality images from our cohort of H&N CT studies are used to build fuzzy models and find the optimal hierarchy for arranging objects based on the relationship between objects. Then, the object recognition step exploits the rich prior anatomic information encoded in the hierarchy to derive the location and pose for each object, which leads to generalizable and robust methods and mitigation of object localization challenges. Finally, the delineation algorithms employ local features to contour the boundary based on object recognition results. We make several improvements within the AAR framework, including finding recognition-error-driven optimal hierarchy, modeling boundary relationships, combining texture and intensity, and evaluating object quality. Experiments were conducted on the largest ensemble of clinical data sets reported to date, including 216 planning CT studies and over 2,600 object samples. The preliminary results show that on data sets with minimal (<4 slices) streak artifacts and other deviations, overall recognition accuracy reaches 2 voxels, with overall delineation Dice coefficient close to 0.8 and Hausdorff Distance within 1 voxel.

11.
PLoS One; 12(1): e0168932, 2017.
Article in English | MEDLINE | ID: mdl-28046024

ABSTRACT

PURPOSE: Overweight and underweight conditions are considered relative contraindications to lung transplantation due to their association with excess mortality. Yet, recent work suggests that body mass index (BMI) does not accurately reflect adipose tissue mass in adults with advanced lung diseases. Alternative and more accurate measures of adiposity are needed. Chest fat estimation by routine computed tomography (CT) imaging may therefore be important for identifying high-risk lung transplant candidates. In this paper, an approach to chest fat quantification and quality assessment based on a recently formulated concept of standardized anatomic space (SAS) is presented. The goal of the paper is to seek answers to several key questions related to chest fat quantity and quality assessment based on a single slice CT (whether in the chest, abdomen, or thigh) versus a volumetric CT, which have not been addressed in the literature. METHODS: Unenhanced chest CT image data sets from 40 adult lung transplant candidates (age 58 ± 12 yrs and BMI 26.4 ± 4.3 kg/m2), 16 with chronic obstructive pulmonary disease (COPD), 16 with idiopathic pulmonary fibrosis (IPF), and the remainder with other conditions were analyzed together with a single slice acquired for each patient at the L5 vertebral level and mid-thigh level. The thoracic body region and the interface between subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) in the chest were consistently defined in all patients and delineated using Live Wire tools. The SAT and VAT components of chest were then segmented guided by this interface. The SAS approach was used to identify the corresponding anatomic slices in each chest CT study, and SAT and VAT areas in each slice as well as their whole volumes were quantified. Similarly, the SAT and VAT components were segmented in the abdomen and thigh slices. 
Key parameters of the attenuation (Hounsfield unit (HU) distributions) were determined from each chest slice and from the whole chest volume separately for SAT and VAT components. The same parameters were also computed from the single abdominal and thigh slices. The ability of the slice at each anatomic location in the chest (and abdomen and thigh) to act as a marker of the measures derived from the whole chest volume was assessed via Pearson correlation coefficient (PCC) analysis. RESULTS: The SAS approach correctly identified slice locations in different subjects in terms of vertebral levels. PCC between chest fat volume and chest slice fat area was maximal at the T8 level for SAT (0.97) and at the T7 level for VAT (0.86), and was modest between chest fat volume and abdominal slice fat area for SAT and VAT (0.73 and 0.75, respectively). However, correlation was weak for chest fat volume and thigh slice fat area for SAT and VAT (0.52 and 0.37, respectively), and for chest fat volume for SAT and VAT and BMI (0.65 and 0.28, respectively). These same single slice locations with maximal PCC were found for SAT and VAT within both COPD and IPF groups. Most of the attenuation properties derived from the whole chest volume and single best chest slice for VAT (but not for SAT) were significantly different between COPD and IPF groups. CONCLUSIONS: This study demonstrates a new way of optimally selecting slices whose measurements may be used as markers of similar measurements made on the whole chest volume. The results suggest that one or two slices imaged at T7 and T8 vertebral levels may be enough to estimate reliably the total SAT and VAT components of chest fat and the quality of chest fat as determined by attenuation distributions in the entire chest volume.


Subject(s)
Adipose Tissue/diagnostic imaging; Lung Transplantation; Lung/anatomy & histology; Thorax/diagnostic imaging; Adiposity; Adult; Aged; Body Mass Index; Female; Humans; Idiopathic Pulmonary Fibrosis/diagnostic imaging; Idiopathic Pulmonary Fibrosis/surgery; Image Processing, Computer-Assisted; Male; Middle Aged; Nonlinear Dynamics; Observer Variation; Pulmonary Disease, Chronic Obstructive/diagnostic imaging; Pulmonary Disease, Chronic Obstructive/surgery; Tomography, X-Ray Computed