Results 1 - 5 of 5
1.
J Urol ; 212(1): 52-62, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38860576

ABSTRACT

PURPOSE: Defining prostate cancer contours is a complex task, undermining the efficacy of interventions such as focal therapy. A multireader multicase study compared physicians' performance using artificial intelligence (AI) vs standard-of-care methods for tumor delineation.

MATERIALS AND METHODS: Cases were interpreted by 7 urologists and 3 radiologists from 5 institutions with 2 to 23 years of experience. Each reader evaluated 50 prostatectomy cases retrospectively eligible for focal therapy. Each case included a T2-weighted MRI, contours of the prostate and region(s) of interest suspicious for cancer, and a biopsy report. First, readers defined cancer contours cognitively, manually delineating tumor boundaries to encapsulate all clinically significant disease. Then, after ≥ 4 weeks, readers contoured the same cases using AI software. Using tumor boundaries on whole-mount histopathology slides as ground truth, AI-assisted, cognitively defined, and hemigland cancer contours were evaluated. Primary outcome measures were the accuracy and negative margin rate of cancer contours. All statistical analyses were performed using generalized estimating equations.

RESULTS: The balanced accuracy (mean of voxel-wise sensitivity and specificity) of AI-assisted cancer contours (84.7%) was superior to that of cognitively defined (67.2%) and hemigland contours (75.9%; P < .0001). Cognitively defined cancer contours systematically underestimated cancer extent, with a negative margin rate of 1.6% compared to 72.8% for AI-assisted cancer contours (P < .0001).

CONCLUSIONS: AI-assisted cancer contours reduce underestimation of prostate cancer extent, significantly improving the contouring accuracy and negative margin rate achieved by physicians. This technology can potentially improve outcomes, as accurate contouring informs patient management strategy and underpins the oncologic efficacy of treatment.
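The balanced-accuracy metric reported above (mean of voxel-wise sensitivity and specificity against histopathology ground truth) can be sketched as follows. The boolean masks and values here are illustrative toy data, not the study's code or results:

```python
import numpy as np

def balanced_accuracy(contour: np.ndarray, truth: np.ndarray) -> float:
    """Mean of voxel-wise sensitivity and specificity.

    contour, truth: boolean arrays of the same shape (predicted cancer
    contour vs. histopathology ground truth).
    """
    tp = np.logical_and(contour, truth).sum()
    tn = np.logical_and(~contour, ~truth).sum()
    sensitivity = tp / truth.sum()
    specificity = tn / (~truth).sum()
    return 0.5 * (sensitivity + specificity)

# toy 1-D arrays standing in for flattened voxel grids
truth = np.array([1, 1, 1, 0, 0, 0], dtype=bool)
pred = np.array([1, 1, 0, 0, 0, 0], dtype=bool)
print(balanced_accuracy(pred, truth))  # (2/3 + 3/3) / 2 ≈ 0.833
```

An undersized contour like `pred` keeps specificity perfect but loses sensitivity, which is exactly the underestimation pattern the study attributes to cognitively defined contours.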


Subjects
Artificial Intelligence , Prostatic Neoplasms , Humans , Male , Prostatic Neoplasms/pathology , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/surgery , Retrospective Studies , Magnetic Resonance Imaging/methods , Middle Aged , Prostatectomy/methods , Aged , Prostate/pathology , Prostate/diagnostic imaging , Sensitivity and Specificity , Clinical Competence
2.
Eur Urol Open Sci ; 54: 20-27, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37545845

ABSTRACT

Background: Magnetic resonance imaging (MRI) underestimation of prostate cancer extent complicates the definition of focal treatment margins.

Objective: To validate focal treatment margins produced by an artificial intelligence (AI) model.

Design, setting, and participants: Testing was conducted retrospectively in an independent dataset of 50 consecutive patients who had radical prostatectomy for intermediate-risk cancer. An AI deep learning model incorporated multimodal imaging and biopsy data to produce three-dimensional cancer estimation maps and margins. AI margins were compared with conventional MRI regions of interest (ROIs), 10-mm margins around ROIs, and hemigland margins. The AI model also furnished predictions of negative surgical margin probability, which were assessed for accuracy.

Outcome measurements and statistical analysis: Comparing AI with conventional margins, sensitivity was evaluated using Wilcoxon signed-rank tests and negative margin rates using chi-square tests. Predicted versus observed negative margin probability was assessed using linear regression. Clinically significant prostate cancer (International Society of Urological Pathology grade ≥2) delineated on whole-mount histopathology served as ground truth.

Results and limitations: The mean sensitivity for cancer-bearing voxels was higher for AI margins (97%) than for conventional ROIs (37%, p < 0.001), 10-mm ROI margins (93%, p = 0.24), and hemigland margins (94%, p < 0.001). For index lesions, AI margins were more often negative (90%) than conventional ROIs (0%, p < 0.001), 10-mm ROI margins (82%, p = 0.24), and hemigland margins (66%, p = 0.004). Predicted and observed negative margin probabilities were strongly correlated (R² = 0.98, median error = 4%). Limitations include a validation dataset derived from a single institution's prostatectomy population.

Conclusions: The AI model was accurate and effective in an independent test set. This approach could improve and standardize treatment margin definition, potentially reducing cancer recurrence rates. Furthermore, an accurate assessment of negative margin probability could facilitate informed decision-making for patients and physicians.

Patient summary: Artificial intelligence was used to predict the extent of tumors in surgically removed prostate specimens. It predicted tumor margins more accurately than conventional methods.
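Voxel-wise sensitivity and the negative-margin criterion used to compare margins above can be sketched as follows. The boolean masks are hypothetical flattened voxel grids, not the study's data:

```python
import numpy as np

def margin_sensitivity(margin: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of cancer-bearing voxels captured by the treatment margin."""
    return np.logical_and(margin, truth).sum() / truth.sum()

def is_negative_margin(margin: np.ndarray, truth: np.ndarray) -> bool:
    """Negative when no ground-truth cancer voxel falls outside the margin."""
    return not np.any(np.logical_and(truth, np.logical_not(margin)))

# toy flattened voxel grids
truth = np.array([0, 1, 1, 1, 0, 0], dtype=bool)
roi = np.array([0, 1, 0, 0, 0, 0], dtype=bool)  # undersized conventional ROI
ai = np.array([0, 1, 1, 1, 1, 0], dtype=bool)   # margin covering all cancer
print(margin_sensitivity(roi, truth), is_negative_margin(roi, truth))  # ≈0.33, False
print(margin_sensitivity(ai, truth), is_negative_margin(ai, truth))    # 1.0, True
```

The toy `roi` mirrors the reported pattern: a conventional ROI can hit the lesion (nonzero sensitivity) yet still leave a positive margin.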

3.
Sci Rep ; 8(1): 15519, 2018 10 19.
Article in English | MEDLINE | ID: mdl-30341371

ABSTRACT

In intraoperative settings, the presence of acoustic clutter and reflection artifacts from metallic surgical tools often reduces the effectiveness of ultrasound imaging and complicates the localization of surgical tool tips. We propose an alternative approach for tool tracking and navigation in these challenging acoustic environments by augmenting ultrasound systems with a light source (to perform photoacoustic imaging) and a robot (to autonomously and robustly follow a surgical tool regardless of the tissue medium). The robotically controlled ultrasound probe continuously visualizes the location of the tool tip by segmenting and tracking photoacoustic signals generated from an optical fiber inside the tool. System validation with fat, muscle, brain, skull, and liver tissue, with and without an additional clutter layer, resulted in mean signal tracking errors <2 mm, mean probe centering errors <1 mm, and successful recovery from ultrasound perturbations, representing either patient motion or switching from photoacoustic images to ultrasound images to search for a target of interest. A detailed analysis of channel SNR in controlled experiments with and without significant acoustic clutter revealed that the detection of a needle tip is possible with photoacoustic imaging, particularly in cases where ultrasound imaging traditionally fails. Results show promise for guiding surgeries and procedures in acoustically challenging environments with this novel combination of robotic and photoacoustic systems.
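The channel-SNR analysis described above can be sketched with a common amplitude-ratio convention (the paper's exact definition and regions may differ); the array below is synthetic toy data standing in for channel data with a needle-tip signal band:

```python
import numpy as np

def channel_snr_db(data: np.ndarray, signal_roi, noise_roi) -> float:
    """Channel SNR in dB: mean absolute amplitude in a signal region over
    that in a clutter/noise region (one common convention)."""
    signal = np.abs(data[signal_roi]).mean()
    noise = np.abs(data[noise_roi]).mean()
    return 20.0 * np.log10(signal / noise)

# toy channel data: depth samples x channels, a strong photoacoustic
# band from the fiber tip over weaker background clutter
data = np.full((64, 128), 0.5)
data[30:34, :] = 8.0  # signal band at the tool-tip depth
snr = channel_snr_db(data,
                     (slice(30, 34), slice(None)),   # signal region
                     (slice(50, 60), slice(None)))   # clutter region
print(snr)  # 20*log10(16) ≈ 24.08 dB
```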


Subjects
Image Processing, Computer-Assisted/methods , Light , Photoacoustic Techniques/trends , Surgery, Computer-Assisted/methods , Ultrasonography, Interventional/methods , Adipose Tissue/diagnostic imaging , Algorithms , Animals , Cattle , Chickens , Muscles/diagnostic imaging , Needles , Optical Fibers , Robotics , Spectrum Analysis
4.
Phys Med Biol ; 63(14): 144001, 2018 07 11.
Article in English | MEDLINE | ID: mdl-29923832

ABSTRACT

It is well known that there are structural differences between cortical and cancellous bone. However, spinal surgeons currently have no reliable method to non-invasively determine these differences in real time when choosing the optimal starting point and trajectory to insert pedicle screws and avoid surgical complications associated with breached or weakened bone. This paper explores 3D photoacoustic imaging of a human vertebra to noninvasively differentiate cortical from cancellous bone for this surgical task. We observed that signals from the cortical bone tend to appear as compact, high-amplitude signals, while signals from the cancellous bone have lower amplitudes and are more diffuse. In addition, we discovered that the location of the light source for photoacoustic imaging is a critical parameter that can be adjusted to non-invasively determine the optimal entry point into the pedicle. Once inside the pedicle, statistically significant differences in the contrast and SNR of signals originating from the cancellous core of the pedicle (when compared to signals originating from the surrounding cortical bone) were obtained with laser energies of 0.23-2.08 mJ (p < 0.05). Similar quantitative differences were observed with an energy of 1.57 mJ at distances ≥ 6 mm from the cortical bone of the pedicle. These quantifiable differences between cortical and cancellous bone (when imaging with an ultrasound probe in direct contact with each bone type) can potentially be used to ensure an optimal trajectory during surgery. Our results are promising for the introduction and development of photoacoustic imaging systems to overcome a wide range of longstanding challenges in spinal surgery, including bone breaches caused by misplaced pedicle screws.
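Image contrast and SNR between two regions, of the kind compared above for cancellous-core versus cortical-bone signals, can be sketched with common dB conventions (the paper's exact definitions may differ); the image and ROIs below are synthetic toy data:

```python
import numpy as np

def contrast_db(image: np.ndarray, target, background) -> float:
    """Target-to-background contrast in dB (one common convention)."""
    return 20.0 * np.log10(image[target].mean() / image[background].mean())

def snr_db(image: np.ndarray, target, background) -> float:
    """SNR in dB: mean target amplitude over background standard deviation."""
    return 20.0 * np.log10(image[target].mean() / image[background].std())

# toy envelope image: a bright compact ROI over a dimmer surround
img = np.ones((20, 20))
img[5:10, 5:10] = 10.0
bright = (slice(5, 10), slice(5, 10))
surround = (slice(12, 18), slice(12, 18))
print(contrast_db(img, bright, surround))  # 20*log10(10) = 20.0 dB

# SNR needs a background with nonzero variance, so add noise for that case
noisy = img + 0.1 * np.random.default_rng(0).standard_normal(img.shape)
snr = snr_db(noisy, bright, surround)
```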


Subjects
Cancellous Bone/diagnostic imaging , Cortical Bone/diagnostic imaging , Lumbar Vertebrae/diagnostic imaging , Photoacoustic Techniques/methods , Spinal Fusion/methods , Cancellous Bone/surgery , Cortical Bone/surgery , Humans , Lumbar Vertebrae/surgery
5.
J Med Imaging (Bellingham) ; 5(2): 021213, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29487885

ABSTRACT

Hysterectomies (i.e., surgical removal of the uterus) are the prevailing treatment for medical conditions such as uterine cancer, endometriosis, and uterine prolapse. One complication of hysterectomies is accidental injury to the ureters, located within millimeters of the uterine arteries that are severed and cauterized to hinder blood flow and enable full uterus removal. This work explores the feasibility of using photoacoustic imaging to visualize the uterine arteries (and potentially the ureter) when this imaging method is uniquely combined with a da Vinci® surgical robot that enables teleoperated hysterectomies. We developed a specialized light delivery system to surround a da Vinci® curved scissor tool, and an ultrasound probe was placed externally, representing a transvaginal approach, to receive the acoustic signals. Photoacoustic images were acquired while sweeping the tool across our custom 3-D uterine vessel model, with ex vivo bovine tissue placed both between the 3-D model and the fiber and between the ultrasound probe and the 3-D model. Four tool orientations were explored, and the robot kinematics were used to provide tool position and orientation information simultaneously with each photoacoustic image acquisition. The optimal tool orientation produced images with contrast [Formula: see text] and background signal-to-noise ratios (SNRs) [Formula: see text], indicating minimal acoustic clutter from the tool tip. We achieved similar contrast and SNR measurements with four unique wrist orientations explored with the scissor tool in open and closed configurations. Results indicate that photoacoustic imaging is a promising approach to enable visualization of the uterine arteries to guide hysterectomies (and other gynecological surgeries). These results are additionally applicable to other da Vinci® surgeries and other surgical instruments with similar tip geometry.
