Results 1 - 20 of 43
1.
Med Image Anal ; 95: 103162, 2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38593644

ABSTRACT

Active Learning (AL) has the potential to solve a major problem of digital pathology: the efficient acquisition of labeled data for machine learning algorithms. However, existing AL methods often struggle in realistic settings with artifacts, ambiguities, and class imbalances, as commonly seen in the medical field. The lack of precise uncertainty estimations leads to the acquisition of images with a low informative value. To address these challenges, we propose Focused Active Learning (FocAL), which combines a Bayesian Neural Network with Out-of-Distribution detection to estimate different uncertainties for the acquisition function. Specifically, the weighted epistemic uncertainty accounts for the class imbalance, aleatoric uncertainty for ambiguous images, and an OoD score for artifacts. We perform extensive experiments to validate our method on MNIST and the real-world Panda dataset for the classification of prostate cancer. The results confirm that other AL methods are 'distracted' by ambiguities and artifacts which harm the performance. FocAL effectively focuses on the most informative images, avoiding ambiguities and artifacts during acquisition. For both experiments, FocAL outperforms existing AL approaches, reaching a Cohen's kappa of 0.764 with only 0.69% of the labeled Panda data.
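
The acquisition rule described above lends itself to a compact sketch. The following is a minimal, hypothetical illustration (not the authors' implementation) of how an acquisition score could combine weighted epistemic uncertainty, aleatoric uncertainty, and an OoD penalty computed from Monte Carlo samples of a Bayesian classifier; all function names, weights, and shapes here are assumptions.

```python
import numpy as np

def focal_style_acquisition(mc_probs, class_weights, ood_score,
                            lambda_aleatoric=1.0, lambda_ood=1.0):
    """Toy acquisition score in the spirit of the abstract (all names hypothetical).

    mc_probs:      (T, N, C) softmax outputs from T stochastic forward passes of a
                   Bayesian (e.g. MC-dropout) classifier on N unlabeled images.
    class_weights: (C,) weights such as inverse class frequencies, to counter imbalance.
    ood_score:     (N,) higher values flag likely artifacts / out-of-distribution images.
    """
    eps = 1e-12
    mean_p = mc_probs.mean(axis=0)                                       # (N, C)
    predictive_entropy = -(mean_p * np.log(mean_p + eps)).sum(axis=-1)
    expected_entropy = -(mc_probs * np.log(mc_probs + eps)).sum(axis=-1).mean(axis=0)
    epistemic = predictive_entropy - expected_entropy                    # mutual information
    aleatoric = expected_entropy

    # Push the epistemic term towards under-represented classes.
    weighted_epistemic = epistemic * (mean_p @ class_weights)

    # High score = informative; penalise ambiguous (aleatoric) and artifact (OoD) images.
    return weighted_epistemic - lambda_aleatoric * aleatoric - lambda_ood * ood_score

# Example: query the 16 highest-scoring unlabeled images.
rng = np.random.default_rng(0)
mc_probs = rng.dirichlet(np.ones(3), size=(20, 500))                     # T=20, N=500, C=3
scores = focal_style_acquisition(mc_probs,
                                 class_weights=np.array([0.2, 0.3, 0.5]),
                                 ood_score=rng.random(500))
query_indices = np.argsort(scores)[-16:]
```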

2.
Med Image Anal ; 84: 102696, 2023 02.
Article in English | MEDLINE | ID: mdl-36495600

ABSTRACT

Brain pathologies often manifest as partial or complete loss of tissue. The goal of many neuroimaging studies is to capture the location and amount of tissue changes with respect to a clinical variable of interest, such as disease progression. Morphometric analysis approaches capture local differences in the distribution of tissue or other quantities of interest in relation to a clinical variable. We propose to augment morphometric analysis with an additional feature extraction step based on unbalanced optimal transport. The optimal transport feature extraction step increases statistical power for pathologies that cause spatially dispersed tissue loss, minimizes sensitivity to shifts due to spatial misalignment or differences in brain topology, and separates changes due to volume differences from changes due to tissue location. We demonstrate the proposed optimal transport feature extraction step in the context of a volumetric morphometric analysis of the OASIS-1 study for Alzheimer's disease. The results demonstrate that the proposed approach can identify tissue changes and differences that are not otherwise measurable.
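
As background for the feature-extraction step, the sketch below shows a generic entropic unbalanced optimal transport computation (Sinkhorn-style iterations with KL-relaxed marginals) between two toy tissue-density profiles. The grid, penalty strengths, and derived features are illustrative assumptions, not the pipeline used in the study.

```python
import numpy as np

def unbalanced_sinkhorn(a, b, M, eps=0.01, rho=1.0, n_iter=500):
    """Entropic unbalanced OT plan between histograms a and b with cost matrix M.
    Marginal constraints are relaxed with a KL penalty of strength rho, so total
    mass may be created or destroyed rather than strictly conserved."""
    K = np.exp(-M / eps)
    u = np.ones_like(a)
    v = np.ones_like(b)
    power = rho / (rho + eps)
    for _ in range(n_iter):
        u = (a / (K @ v + 1e-300)) ** power
        v = (b / (K.T @ u + 1e-300)) ** power
    return u[:, None] * K * v[None, :]                     # transport plan

# Toy 1-D "tissue density" profiles on a shared grid.
x = np.linspace(0.0, 1.0, 64)
a = np.exp(-((x - 0.40) ** 2) / 0.01)
a /= a.sum()                                               # unit total mass
b = np.exp(-((x - 0.55) ** 2) / 0.01)
b = 0.8 * b / b.sum()                                      # shifted, with 20% of the mass gone
M = (x[:, None] - x[None, :]) ** 2                         # squared-distance cost

P = unbalanced_sinkhorn(a, b, M)
unexplained_mass = a - P.sum(axis=1)       # local "tissue loss" not matched by transport
transport_effort = (P * M).sum()           # how far the remaining mass had to move
```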


Subject(s)
Alzheimer Disease, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Brain/pathology, Neuroimaging/methods, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/pathology, Disease Progression
3.
Int J Comput Assist Radiol Surg ; 17(2): 403-411, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34837564

ABSTRACT

PURPOSE: Surgery for nasal airway obstruction (NAO) has a high failure rate, with up to 50% of patients reporting persistent symptoms postoperatively. Virtual surgery planning has the potential to improve surgical outcomes, but current manual methods are too labor-intensive to be adopted on a large scale. This manuscript introduces an automatic atlas-based approach for performing virtual septoplasties. METHODS: A cohort of 47 healthy subjects and 26 NAO patients was investigated. An atlas of healthy nasal geometry was constructed. The automatic virtual septoplasty method consists of a multi-stage registration approach to fit the atlas to a target NAO patient, automatically segment the patient's septum and airway, and deform the patient image to have a non-deviated septum. RESULTS: Our automatic virtual septoplasty method straightened the septum successfully in 18 out of 26 NAO patients (69% of cases). In these cases, the ratio of the higher to the lower airspace cross-sectional areas in the left and right nasal cavities improved from 1.47 ± 0.45 to 1.16 ± 0.33 in the region surrounding the septal deviation, showing that the nasal airway became more symmetric after virtual septoplasty. CONCLUSION: This automated virtual septoplasty technique has the potential to greatly reduce the effort required to perform computational fluid dynamics (CFD) analysis of nasal airflow for NAO surgical planning. Future studies are needed to investigate if virtual surgery planning using this method is predictive of subjective symptoms in NAO patients after septoplasty.
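
The reported symmetry metric can be illustrated with a small sketch. Assuming a binary airway mask whose midline falls on the central image column (a simplification), the hypothetical helper below computes the ratio of the larger to the smaller left/right cross-sectional airspace area over the slices around the septal deviation.

```python
import numpy as np

def septal_symmetry_ratio(airway_mask, slice_range, voxel_area_mm2):
    """Ratio of the larger to the smaller left/right airspace cross-sectional area,
    averaged over the coronal slices surrounding the septal deviation. The midline is
    assumed to run down the central image column, which is a simplification."""
    mid = airway_mask.shape[2] // 2
    ratios = []
    for s in slice_range:
        left = airway_mask[s, :, :mid].sum() * voxel_area_mm2
        right = airway_mask[s, :, mid:].sum() * voxel_area_mm2
        if min(left, right) > 0:
            ratios.append(max(left, right) / min(left, right))
    return float(np.mean(ratios))              # 1.0 would be perfectly symmetric

# Toy usage with a random binary mask (real input would be a segmented CT airway).
rng = np.random.default_rng(1)
mask = rng.random((40, 128, 128)) > 0.7
print(septal_symmetry_ratio(mask, range(15, 25), voxel_area_mm2=0.25))
```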


Subject(s)
Nasal Obstruction, Rhinoplasty, Humans, Hydrodynamics, Nasal Cavity, Nasal Obstruction/diagnostic imaging, Nasal Obstruction/surgery, Nasal Septum/diagnostic imaging, Nasal Septum/surgery, Treatment Outcome
4.
Plast Reconstr Surg ; 146(3): 314e-323e, 2020 09.
Article in English | MEDLINE | ID: mdl-32459727

ABSTRACT

BACKGROUND: Current methods to analyze three-dimensional photography do not quantify intracranial volume, an important metric of development. This study presents the first noninvasive, radiation-free, accurate, and reproducible method to quantify intracranial volume from three-dimensional photography. METHODS: In this retrospective study, cranial bones and head skin were automatically segmented from computed tomographic images of 575 subjects without cranial abnormality (average age, 5 ± 5 years; range, 0 to 16 years). The intracranial volume and the head volume were measured at the cranial vault region, and their relation was modeled by polynomial regression, also accounting for age and sex. Then, the regression model was used to estimate the intracranial volume of 30 independent pediatric patients from their head volume measured using three-dimensional photography. Evaluation was performed by comparing the estimated intracranial volume with the true intracranial volume of these patients computed from paired computed tomographic images; two growth models were used to compensate for the time gap between computed tomographic and three-dimensional photography. RESULTS: The regression model estimated the intracranial volume of the normative population from the head volume calculated from computed tomographic images with an average error of 3.81 ± 3.15 percent (p = 0.93) and a correlation (R) of 0.96. The authors obtained an average error of 4.07 ± 3.01 percent (p = 0.57) in estimating the intracranial volume of the patients from three-dimensional photography using the regression model. CONCLUSION: Three-dimensional photography with image analysis provides measurement of intracranial volume with clinically acceptable accuracy, thus offering a noninvasive, precise, and reproducible method to evaluate normal and abnormal brain development in young children. CLINICAL QUESTION/LEVEL OF EVIDENCE: Diagnostic, V.
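
As a rough illustration of the regression step, the sketch below fits a generic second-degree polynomial regression of intracranial volume on head volume, age, and sex using synthetic numbers; the actual model terms, units, and coefficients of the study are not reproduced here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Synthetic stand-in data: head volume (L), age (years), sex (0/1) -> intracranial volume (L).
rng = np.random.default_rng(0)
n = 575
head_vol = rng.uniform(2.5, 5.0, n)
age = rng.uniform(0.0, 16.0, n)
sex = rng.integers(0, 2, n)
icv = 0.75 * head_vol + 0.02 * np.log1p(age) + 0.05 * sex + rng.normal(0.0, 0.05, n)

X = np.column_stack([head_vol, age, sex])
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, icv)

# Estimate ICV for a new patient from the head volume measured on 3D photography.
patient = np.array([[3.8, 4.0, 1]])        # head volume (L), age (years), sex
icv_hat = model.predict(patient)[0]
print(f"estimated intracranial volume: {icv_hat:.2f} L")
```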


Subject(s)
Three-Dimensional Imaging, Photography/methods, Skull/anatomy & histology, Skull/diagnostic imaging, X-Ray Computed Tomography, Adolescent, Child, Preschool Child, Female, Humans, Infant, Male, Organ Size, Retrospective Studies
5.
Sci Rep ; 10(1): 5829, 2020 04 02.
Article in English | MEDLINE | ID: mdl-32242131

ABSTRACT

This article presents a real-time approach for classification of burn depth based on B-mode ultrasound imaging. A grey-level co-occurrence matrix (GLCM) computed from the ultrasound images of the tissue is employed to construct the textural feature set and the classification is performed using nonlinear support vector machine and kernel Fisher discriminant analysis. A leave-one-out cross-validation is used for the independent assessment of the classifiers. The model is tested for pair-wise binary classification of four burn conditions in ex vivo porcine skin tissue: (i) 200 °F for 10 s, (ii) 200 °F for 30 s, (iii) 450 °F for 10 s, and (iv) 450 °F for 30 s. The average classification accuracy for pairwise separation is 99% with just over 30 samples in each burn group and the average multiclass classification accuracy is 93%. The results highlight that the ultrasound imaging-based burn classification approach in conjunction with the GLCM texture features provide an accurate assessment of altered tissue characteristics with relatively moderate sample sizes, which is often the case with experimental and clinical datasets. The proposed method is shown to have the potential to assist with the real-time clinical assessment of burn degrees, particularly for discriminating between superficial and deep second degree burns, which is challenging in clinical practice.
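
A minimal sketch of the GLCM-plus-SVM pipeline is shown below, using scikit-image texture features and a leave-one-out cross-validated RBF support vector machine on stand-in image patches; parameter choices (distances, angles, texture properties) are assumptions rather than the study's settings. Older scikit-image releases spell the functions greycomatrix/greycoprops.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops   # greycomatrix/greycoprops in older releases
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneOut, cross_val_score

PROPS = ("contrast", "homogeneity", "energy", "correlation", "dissimilarity")

def glcm_features(img_u8):
    """Texture features from grey-level co-occurrence matrices of an 8-bit patch."""
    glcm = graycomatrix(img_u8, distances=[1, 2], angles=[0, np.pi / 4, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel() for p in PROPS])

# Random stand-ins for ultrasound patches of two burn conditions (30 samples each).
rng = np.random.default_rng(0)
patches = [(rng.integers(0, 256, (64, 64), dtype=np.uint8), label)
           for label in (0, 1) for _ in range(30)]
X = np.array([glcm_features(p) for p, _ in patches])
y = np.array([label for _, label in patches])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
loocv_accuracy = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {loocv_accuracy:.2f}")
```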


Subject(s)
Burns/diagnostic imaging, Algorithms, Animals, Skin/diagnostic imaging, Support Vector Machine, Swine, Ultrasonography/methods
6.
Surg Endosc ; 34(7): 3135-3144, 2020 07.
Article in English | MEDLINE | ID: mdl-31482354

ABSTRACT

BACKGROUND: The virtual basic laparoscopic skill trainer suturing simulator (VBLaST-SS©) was developed to simulate the intracorporeal suturing task in the FLS program. The purpose of this study was to evaluate the training effectiveness and participants' learning curves on the VBLaST-SS© and to assess whether the skills were retained after 2 weeks without training. METHODS: Fourteen medical students participated in the study. Participants were randomly assigned to two training groups (7 per group): VBLaST-SS© or FLS, based on the modality of training. Participants practiced on their assigned system for one session (30 min or up to ten repetitions) a day, 5 days a week for three consecutive weeks. Their baseline, post-test, and retention (after 2 weeks) performance were also analyzed. Participants' performance scores were calculated based on the original FLS scoring system. The cumulative summation (CUSUM) method was used to evaluate learning. Two-way mixed factorial ANOVA was used to compare the effects of group, time point (baseline, post-test, and retention), and their interaction on performance. RESULTS: Six out of seven participants in each group reached the predefined proficiency level after 7 days of training. Participants' performance improved significantly (p < 0.001) after training within their assigned group. The CUSUM learning curve shows that one participant in each group achieved 5% failure rate by the end of the training period. Twelve out of fourteen participants' CUSUM curves showed a negative trend toward achieving the 5% failure rate after further training. CONCLUSION: The VBLaST-SS© is effective in training laparoscopic suturing skill. Participants' performance of intracorporeal suturing was significantly improved after training on both systems and was retained after 2 weeks of no training.
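
CUSUM learning curves of the kind used here can be sketched in a few lines. The toy version below accumulates (failure − p0) per trial with an acceptable failure rate p0 of 5%, so a downward trend indicates performance better than the target; decision limits and the exact scoring increments vary by study and are omitted.

```python
import numpy as np

def cusum_curve(failures, acceptable_failure_rate=0.05):
    """Simple CUSUM learning curve: each trial adds (failure - p0), with p0 the
    acceptable failure rate. A downward trend means the trainee fails less often
    than p0; decision limits and alternative increments vary by study."""
    failures = np.asarray(failures, dtype=float)       # 1 = failed trial, 0 = passed
    return np.cumsum(failures - acceptable_failure_rate)

# Illustrative trainee: frequent early failures, then consistently proficient.
trials = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(np.round(cusum_curve(trials), 2))                # rises first, then trends down
```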


Subject(s)
Laparoscopy/education, Medical Students, Sutures, Virtual Reality, Adult, Clinical Competence, Computer Simulation, Female, Humans, Laparoscopy/methods, Learning Curve, Male, Simulation Training, User-Computer Interface, Young Adult
7.
Plast Reconstr Surg ; 144(6): 1051e-1060e, 2019 12.
Article in English | MEDLINE | ID: mdl-31764657

ABSTRACT

BACKGROUND: Evaluation of surgical treatment for craniosynostosis is typically based on subjective visual assessment or simple clinical metrics of cranial shape that are prone to interobserver variability. Three-dimensional photography provides cheap and noninvasive information to assess surgical outcomes, but there are no clinical tools to analyze it. The authors aim to objectively and automatically quantify head shape from three-dimensional photography. METHODS: The authors present an automatic method to quantify intuitive metrics of local head shape from three-dimensional photography using a normative statistical head shape model built from 201 subjects. The authors use these metrics together with a machine learning classifier to distinguish between patients with (n = 266) and without (n = 201) craniosynostosis (aged 0 to 6 years). The authors also use their algorithms to quantify objectively local surgical head shape improvements on 18 patients with presurgical and postsurgical three-dimensional photographs. RESULTS: The authors' methods detected craniosynostosis automatically with 94.74 percent sensitivity and 96.02 percent specificity. Within the data set of patients with craniosynostosis, the authors identified correctly the fused sutures with 99.51 percent sensitivity and 99.13 percent specificity. When the authors compared quantitatively the presurgical and postsurgical head shapes of patients with craniosynostosis, they obtained a significant reduction of head shape abnormalities (p < 0.05), in agreement with the treatment approach and the clinical observations. CONCLUSIONS: Quantitative head shape analysis and three-dimensional photography provide an accurate and objective tool to screen for head shape abnormalities at low cost and avoiding imaging with radiation and/or sedation. The authors' automatic quantitative framework allows for the evaluation of surgical outcomes and has the potential to detect relapses. CLINICAL QUESTION/LEVEL OF EVIDENCE: Diagnostic, I.


Subject(s)
Craniosynostoses/surgery, Head/abnormalities, Child, Preschool Child, Craniosynostoses/diagnostic imaging, Craniosynostoses/pathology, Craniotomy/methods, Female, Head/diagnostic imaging, Humans, Three-Dimensional Imaging, Infant, Newborn Infant, Male, Photography, Preoperative Care/methods, Retrospective Studies, Skull/abnormalities, Skull/diagnostic imaging
8.
Article in English | MEDLINE | ID: mdl-31474785

ABSTRACT

Ultrasound (US)-guided renal biopsy is a critically important tool in the evaluation and management of non-malignant renal pathologies with diagnostic and prognostic significance. It requires a good biopsy technique and skill to safely and consistently obtain high yield biopsy samples for tissue analysis. This project aims to develop a virtual trainer to help clinicians to improve procedural skill competence in real-time ultrasound-guided renal biopsy. This paper presents a cost-effective, high-fidelity trainer built using low-cost hardware components and open source visualization and interactive simulation libraries: interactive medical simulation toolkit (iMSTK) and 3D Slicer. We used a physical mannequin to simulate the tactile feedback that trainees experience while scanning a real patient and to provide trainees with spatial awareness of the US scanning plane with respect to the patient's anatomy. The ultrasound probe and biopsy needle were modeled using commonly used clinical tools and were instrumented to communicate with the simulator. 3D Slicer was used to visualize an image sliced from a pre-acquired 3-D ultrasound volume based on the location of the probe, with a realistic needle rendering. The simulation engine in iMSTK modeled the interaction between the needle and the virtual tissue to generate visual deformations on the tissue and tactile forces on the needle which are transmitted to the needle that the user holds. Initial testing has shown promising results with respect to quality of simulated images and system responsiveness. Further evaluation by clinicians is planned for the next stage.

9.
Int J Comput Assist Radiol Surg ; 14(12): 2187-2198, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31512193

ABSTRACT

PURPOSE: Given the ability of positron emission tomography (PET) imaging to localize malignancies in heterogeneous tumors and tumors that lack an X-ray computed tomography (CT) correlate, combined PET/CT-guided biopsy may improve the diagnostic yield of biopsies. However, PET and CT images are naturally susceptible to problems due to respiratory motion, leading to imprecise tumor localization and shape distortion. To facilitate PET/CT-guided needle biopsy, we developed and investigated the feasibility of a workflow that brings PET image guidance into the interventional CT suite while accounting for respiratory motion. METHODS: The performance of PET/CT respiratory motion correction using the registered and summed phases (RASP) method was evaluated through computer simulations using the mathematical 4D extended cardiac-torso phantom, with motion simulated from real respiratory traces. The performance of the PET/CT-guided biopsy procedure was evaluated through operation on a physical anthropomorphic phantom. Vials containing radiolabeled 18F-fluorodeoxyglucose were placed within the physical phantom thorax as biopsy targets. We measured the average distance between the target center and the simulated biopsy location among multiple trials to evaluate the biopsy localization accuracy. RESULTS: The computer simulation results showed that the RASP method generated PET images with a significantly reduced noise of 0.10 ± 0.01 standardized uptake value (SUV) as compared to an end-of-expiration image noise of 0.34 ± 0.04 SUV. The respiratory motion increased the apparent liver lesion size from 5.4 ± 1.1 to 35.3 ± 3.0 cc. The RASP algorithm reduced this to 15.7 ± 3.7 cc. The distances between the centroids for the static image lesion and two moving lesions in the liver and lung, when reconstructed with the RASP algorithm, were 0.83 ± 0.72 mm and 0.42 ± 0.72 mm. For the ungated imaging, these values increased to 3.48 ± 1.45 mm and 2.5 ± 0.12 mm, respectively. For the ungated imaging, this increased to 1.99 ± 1.72 mm. In addition, the lesion activity estimation (e.g., SUV) was accurate and constant for images reconstructed using the RASP algorithm, whereas large activity bias and variations (± 50%) were observed for lesions in the ungated images. The physical phantom studies demonstrated a biopsy needle localization error of 2.9 ± 0.9 mm from CT. Combined with the localization errors due to respiration for the PET images from simulations, the overall estimated lesion localization error would be 3.08 mm for PET-guided biopsies using RASP and 3.64 mm when using ungated PET images. In other words, RASP reduced the localization error by approximately 0.6 mm. The combined error analysis showed that replacing the standard end-of-expiration images with the proposed RASP method in the PET/CT-guided biopsy workflow yields comparable lesion localization accuracy and reduced image noise. CONCLUSION: The RASP method can produce PET images with reduced noise, attenuation artifacts and respiratory motion, resulting in more accurate lesion localization. Testing the PET/CT-guided biopsy workflow using computer simulation and physical phantoms with respiratory motion, we demonstrated that the guided biopsy procedure with the RASP method can benefit from improved PET image quality due to noise reduction, without compromising the accuracy of lesion localization.
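
The idea behind a registered-and-summed reconstruction can be illustrated with a toy, translation-only sketch: align each respiratory gate to a reference gate and average, rather than summing the ungated (motion-blurred) data. The alignment, phantom, and noise model below are simplifications and not the published RASP algorithm.

```python
import numpy as np

def shift_to_align(ref, mov):
    """Integer translation that np.roll should apply to `mov` to align it with `ref`,
    estimated from the peak of an FFT-based cross-correlation."""
    corr = np.fft.ifftn(np.fft.fftn(ref) * np.conj(np.fft.fftn(mov))).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    dims = np.array(corr.shape)
    peak[peak > dims // 2] -= dims[peak > dims // 2]   # wrap to signed shifts
    return peak

def registered_and_summed(gates, reference_index=0):
    """Toy 'registered and summed phases' reconstruction: rigidly align every
    respiratory gate to a reference gate, then average, rather than using a single
    low-count gate or an ungated, motion-blurred sum. Translation-only and integer
    precision; the published RASP method is considerably more involved."""
    ref = gates[reference_index]
    aligned = [np.roll(g, tuple(shift_to_align(ref, g)), axis=(0, 1, 2)) for g in gates]
    return np.mean(aligned, axis=0)

# Illustrative 4-gate acquisition of a cubic "lesion" drifting along one axis.
rng = np.random.default_rng(0)
base = np.zeros((32, 32, 32))
base[14:18, 14:18, 14:18] = 1.0
gates = [np.roll(base, k, axis=2) + 0.05 * rng.random(base.shape) for k in range(4)]
rasp_like = registered_and_summed(gates)
ungated = np.mean(gates, axis=0)        # motion-blurred reference for comparison
```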


Subject(s)
Computer Simulation, Image-Guided Biopsy/methods, Liver/pathology, Lung/pathology, Organ Motion, Positron Emission Tomography Computed Tomography, Respiratory Mechanics, Algorithms, Artifacts, Humans, Liver/diagnostic imaging, Lung/diagnostic imaging, Imaging Phantoms
10.
Article in English | MEDLINE | ID: mdl-31379402

ABSTRACT

The evaluation of head malformations plays an essential role in the early diagnosis, the decision to perform surgery and the assessment of the surgical outcome of patients with craniosynostosis. Clinicians rely on two metrics to evaluate the head shape: head circumference (HC) and cephalic index (CI). However, they present a high inter-observer variability and they do not take into account the location of the head abnormalities. In this study, we present an automated framework to objectively quantify the head malformations, HC, and CI from three-dimensional (3D) photography, a radiation-free, fast and non-invasive imaging modality. Our method automatically extracts the head shape using a set of landmarks identified by registering the head surface of a patient to a reference template in which the position of the landmarks is known. Then, we quantify head malformations as the local distances between the patient's head and its closest normal from a normative statistical head shape multi-atlas. We calculated cranial malformations, HC, and CI for 28 patients with craniosynostosis, and we compared them with those computed from the normative population. Malformation differences between the two populations were statistically significant (p<0.05) at the head regions with abnormal development due to suture fusion. We also trained a support vector machine classifier using the malformations calculated and we obtained an improved accuracy of 91.03% in the detection of craniosynostosis, compared to 78.21% obtained with HC or CI. This method has the potential to assist in the longitudinal evaluation of cranial malformations after surgical treatment of craniosynostosis.
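
A simplified sketch of the malformation measurement is given below: distances from each patient surface point to the closest normative shape in a small multi-atlas, using nearest-neighbour correspondence (the study uses registration-based correspondences, so this is only an approximation). The sphere "head surfaces" are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_malformations(patient_pts, atlas_surfaces):
    """Per-point distance from the patient's head surface to its nearest point on the
    closest normative surface (the atlas whose mean distance is smallest). Uses plain
    nearest-neighbour correspondence, a simplification of registration-based matching."""
    best = None
    for atlas_pts in atlas_surfaces:
        d, _ = cKDTree(atlas_pts).query(patient_pts)
        if best is None or d.mean() < best.mean():
            best = d
    return best                                # one value (e.g. mm) per surface point

# Placeholder "head surfaces": point clouds on spheres of slightly different radii.
rng = np.random.default_rng(0)

def sphere(radius, n=2000):
    v = rng.normal(size=(n, 3))
    return radius * v / np.linalg.norm(v, axis=1, keepdims=True)

patient = sphere(102.0)                        # mildly enlarged stand-in for a patient
atlases = [sphere(r) for r in (98.0, 100.0, 101.0)]
malformation_mm = local_malformations(patient, atlases)
print(f"mean malformation: {malformation_mm.mean():.2f} mm")
```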

11.
Surg Endosc ; 33(8): 2473-2474, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30519884

ABSTRACT

The surname of Sreekanth Arikatla incorrectly appeared as Sreekanth Artikala.

12.
Surg Endosc ; 33(8): 2468-2472, 2019 08.
Article in English | MEDLINE | ID: mdl-30334151

ABSTRACT

BACKGROUND: Intracorporeal suturing is one of the most important and difficult procedures in laparoscopic surgery. Practicing on an FLS trainer box is effective but requires a large number of consumables, and the scoring is somewhat subjective and not immediate. A virtual basic laparoscopic skill trainer (VBLaST©) was developed to simulate the five tasks of the FLS Trainer Box. The purpose of this study was to evaluate the face and content validity of the VBLaST suturing simulator (VBLaST-SS©). METHODS: Twenty-five medical students and residents completed an evaluation of the simulator. The participants were asked to perform the standard intracorporeal suturing task on both VBLaST-SS© and the traditional FLS box trainer. The performance scores on each system were calculated based on time (s), deviations from the black dots (mm), and incision gap (mm). The participants were then asked to finish a 13-item questionnaire with ratings from 1 (not realistic/useful) to 5 (very realistic/useful) regarding the face validity of the simulator. A Wilcoxon signed rank test was performed to identify differences in performance on the VBLaST-SS© compared to that of the traditional FLS box trainer. RESULTS: Three questions from the face validity questionnaire were excluded due to lack of response. Ratings for 8 of the remaining 10 questions (80%) averaged above 3.0 out of 5. Average intracorporeal suturing completion time on the VBLaST-SS© was 421 seconds (SD = 168 s) compared to 406 seconds (SD = 175 s) on the box trainer (p = 0.620). There was a significant difference between systems for the incision gap (p = 0.048). Deviation in needle insertion from the black dot was smaller for the box trainer than the virtual simulator (1.68 vs. 7.12 mm, p < 0.001). CONCLUSION: Participants showed comparable performance on the VBLaST-SS© and traditional box trainer. Overall, the VBLaST-SS© system showed face validity and has the potential to support training of suturing skills.
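
The statistical comparison used here is a standard paired, non-parametric test. A minimal example with synthetic paired completion times (the numbers are illustrative, not the study data) is shown below.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative paired completion times (s) for the same 25 participants on both systems.
rng = np.random.default_rng(0)
fls_time = rng.normal(406, 175, 25)
vblast_time = fls_time + rng.normal(15, 60, 25)      # slightly slower on the simulator

stat, p = wilcoxon(vblast_time, fls_time)            # paired, non-parametric comparison
print(f"Wilcoxon signed-rank: W = {stat:.1f}, p = {p:.3f}")
```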


Subject(s)
Algorithms, Clinical Competence, Computer Simulation, Graduate Medical Education/methods, Laparoscopy/education, Suture Techniques/education, User-Computer Interface, Adult, Female, Humans, Laparoscopy/methods, Male, Suture Techniques/instrumentation, Young Adult
13.
Surg Endosc ; 33(6): 1927-1937, 2019 06.
Article in English | MEDLINE | ID: mdl-30324462

ABSTRACT

BACKGROUND: The Fundamentals of Laparoscopic Surgery (FLS) trainer box, which is now established as a standard for evaluating minimally invasive surgical skills, consists of five tasks: peg transfer, pattern cutting, ligation, intra- and extracorporeal suturing. Virtual simulators of these tasks have been developed and validated as part of the Virtual Basic Laparoscopic Skill Trainer (VBLaST) (Arikatla et al. in Int J Med Robot Comput Assist Surg 10:344-355, 2014; Zhang et al. in Surg Endosc 27(10):3603-3615, 2013; Sankaranarayanan et al. in J Laparoendosc Adv Surg Tech 20(2):153-157, 2010; Qi et al. J Biomed Inform 75:48-62, 2017). The virtual task trainers have many advantages, including automatic real-time objective scoring, reduced costs, and elimination of human proctors. In this paper, we extend VBLaST by adding two camera navigation system tasks: (a) pattern matching and (b) path tracing. METHODS: A comprehensive camera navigation simulator with two virtual tasks, a simplified and cheaper hardware interface (compared to the prior version of VBLaST), a graphical user interface, and automated metrics has been designed and developed. Face validity of the system was tested with medical students and residents from the University at Buffalo's medical school. RESULTS: The subjects rated the simulator highly in all aspects, including its usefulness in training to center the target and to teach sizing skills. The quality and usefulness of the force feedback scored the lowest at 2.62.


Subject(s)
Computer Simulation, Laparoscopy/education, Simulation Training, Adult, Clinical Competence, Female, Humans, Male, User-Computer Interface, Young Adult
14.
Healthc Technol Lett ; 6(6): 210-213, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32038859

ABSTRACT

The overall prevalence of chronic kidney disease in the general population is ∼14%, with more than 661,000 Americans having kidney failure. Ultrasound (US)-guided renal biopsy is a critically important tool in the evaluation and management of renal pathologies. This Letter presents KBVTrainer, a virtual simulator that the authors developed to train clinicians to improve procedural skill competence in US-guided renal biopsy. The simulator was built using low-cost hardware components and open source software libraries. The authors conducted a face validation study with five experts who were either adult/pediatric nephrologists or interventional/diagnostic radiologists. The trainer was rated very highly (>4.4) for the usefulness of the real US images (highest at 4.8), the potential usefulness of the trainer in training for needle visualization, tracking, steadiness and hand-eye coordination, and the overall promise of the trainer to be useful for training US-guided needle biopsies. The lowest score of 2.4 was received for the look and feel of the US probe and needle compared to clinical practice. The force feedback received a moderate score of 3.0. The clinical experts provided abundant verbal and written subjective feedback and were highly enthusiastic about using the trainer as a valuable tool for future trainees.

15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 5802-5805, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441654

ABSTRACT

Upper airway obstructions leading to difficulty breathing are significant problems that often require surgery to improve patient quality of life. However, these surgeries often have poor outcomes with little symptom improvement. This paper outlines the design of an interactive, patient-specific virtual surgical planning system that uses patient CT scans to generate three-dimensional representations of the airways and incorporates computational fluid dynamics (CFD) as a part of the surgical planning process. Individualized virtual surgeries can be performed by editing these models, which are then analyzed using CFD to compare pre- and post-surgery flow characteristics to assess patient symptom improvement. The prototype system shows significant promise: it is intuitive and interactive, and its fast flow solver has the potential to provide near real-time feedback to the clinician.


Subject(s)
Computer-Assisted Image Interpretation, Three-Dimensional Imaging, Nasal Obstruction/surgery, Operative Surgical Procedures, Computer Simulation, Humans, Hydrodynamics, User-Computer Interface
16.
Tomography ; 4(3): 148-158, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30320214

ABSTRACT

Multicenter clinical trials that use positron emission tomography (PET) imaging frequently rely on stable bias in imaging biomarkers to assess drug effectiveness. Many well-documented factors cause variability in PET intensity values. Two of the largest scanner-dependent errors are scanner calibration and reconstructed image resolution variations. For clinical trials, an increase in measurement error significantly increases the number of patient scans needed. We aim to provide a robust quality assurance system using portable PET/computed tomography "pocket" phantoms and automated image analysis algorithms with the goal of reducing PET measurement variability. A set of the "pocket" phantoms was scanned with patients, affixed to the underside of a patient bed. Our software analyzed the obtained images and estimated the image parameters. The analysis consisted of two steps: automated phantom detection, followed by estimation of PET image resolution and global bias. Performance of the algorithm was tested under variations in image bias, resolution, noise, and errors in the expected sphere size. A web-based application was implemented to deploy the image analysis pipeline in a cloud-based infrastructure to support multicenter data acquisition, under a Software-as-a-Service (SaaS) model. The automated detection algorithm localized the phantom reliably. Simulation results showed stable behavior when image properties and input parameters were varied. The PET "pocket" phantom has the potential to reduce and/or check for standardized uptake value measurement errors.

17.
Article in English | MEDLINE | ID: mdl-29977103

ABSTRACT

Surgical simulators are powerful tools that assist in providing advanced training for complex craniofacial surgical procedures and objective skills assessment such as the ones needed to perform Bilateral Sagittal Split Osteotomy (BSSO). One of the crucial steps in simulating BSSO is accurately cutting the mandible in a specific area of the jaw, where surgeons rely on high fidelity visual and haptic cues. In this paper, we present methods to simulate drilling and cutting of the bone using the burr and the motorized oscillating saw respectively. Our method allows low computational cost bone drilling or cutting while providing high fidelity haptic feedback that is suitable for real-time virtual surgery simulation.

18.
IEEE Trans Med Imaging ; 37(7): 1690-1700, 2018 07.
Article in English | MEDLINE | ID: mdl-29969419

ABSTRACT

Metopic craniosynostosis is a condition caused by the premature fusion of the metopic cranial suture. If untreated, it can result in brain growth restriction, increased intracranial pressure, visual impairment, and cognitive delay. Fronto-orbital advancement is the widely accepted surgical approach to correct cranial shape abnormalities in patients with metopic craniosynostosis, but the outcome of the surgery remains very dependent on the expertise of the surgeon because of the lack of objective and personalized cranial shape metrics to target during the intervention. In this paper, we propose a locally affine diffeomorphic surface registration framework to create an optimal interventional plan personalized to each patient. Our method calculates the optimal surgical plan by minimizing cranial shape abnormalities, which are quantified using objective metrics based on a normative model of cranial shapes built from 198 healthy cases. It is guided by clinical osteotomy templates for fronto-orbital advancement, and it automatically calculates how much and in which direction each bone piece needs to be translated, rotated, and/or bent. Our locally affine framework models the transformation of each bone piece separately while ensuring the consistency of the global transformation. We used our method to calculate the optimal surgical plan for 23 patients, obtaining a significant reduction of malformations (p < 0.001) between 40.38% and 50.85% in the simulated outcome of the surgery using different osteotomy templates. In addition, malformation values were within healthy ranges (p > 0.01).


Subject(s)
Craniosynostoses, Frontal Bone, Computer-Assisted Image Interpretation/methods, Orbit, Computer-Assisted Surgery/methods, Case-Control Studies, Craniosynostoses/diagnostic imaging, Craniosynostoses/surgery, Female, Frontal Bone/diagnostic imaging, Frontal Bone/surgery, Humans, Infant, Male, Orbit/diagnostic imaging, Orbit/surgery
19.
Article in English | MEDLINE | ID: mdl-31379400

ABSTRACT

The evaluation of cranial malformations plays an essential role both in the early diagnosis and in the decision to perform surgical treatment for craniosynostosis. In clinical practice, both cranial shape and suture fusion are evaluated using CT images, which involve the use of harmful radiation on children. Three-dimensional (3D) photography offers non-invasive, radiation-free, and anesthetic-free evaluation of craniofacial morphology. The aim of this study is to develop an automated framework to objectively quantify cranial malformations in patients with craniosynostosis from 3D photography. We propose a new method that automatically extracts the cranial shape by identifying a set of landmarks from a 3D photograph. Specifically, it registers the 3D photograph of a patient to a reference template in which the position of the landmarks is known. Then, the method finds the closest cranial shape to that of the patient from a normative statistical shape multi-atlas built from 3D photographs of healthy cases, and uses it to quantify objectively cranial malformations. We calculated the cranial malformations on 17 craniosynostosis patients and we compared them with the malformations of the normative population used to build the multi-atlas. The average malformations of the craniosynostosis cases were 2.68 ± 0.75 mm, which is significantly higher (p<0.001) than the average malformations of 1.70 ± 0.41 mm obtained from the normative cases. Our approach can support the quantitative assessment of surgical procedures for cranial vault reconstruction without exposing pediatric patients to harmful radiation.

20.
Article in English | MEDLINE | ID: mdl-36246427

ABSTRACT

There has been a recent emphasis in surgical science on supplementing surgical training outside of the Operating Room (OR). Combining simulation training with the current surgical apprenticeship enhances surgical skills in the OR, without increasing the time spent in the OR practicing. Computer-assisted surgical (CAS) planning consists of performing operative techniques virtually using three-dimensional (3D) computer-based models reconstructed from 3D cross-sectional imaging. The purpose of this paper is to present a CAS system to rehearse, visualize and quantify osteotomies, and demonstrate its usefulness in two different osteotomy surgical procedures, cranial vault reconstruction and femoral osteotomy. We found that the system could sufficiently simulate these two procedures. Our system takes advantage of the high-quality visualizations possible with 3DSlicer, as well as implements new infrastructure to allow for direct 3D interaction (cutting and positioning) with the bone models. We see the proposed osteotomy planner tool evolving towards incorporating different cutting templates to help depict several surgical scenarios, help 'trained' surgeons maintain operating skills, help rehearse a surgical sequence before heading to the OR, or even to help surgical planning for specific patient cases.
