Results 1 - 20 of 33
1.
Neuroradiology ; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38980343

ABSTRACT

PURPOSE: For patients with vestibular schwannomas (VS), a conservative observational approach is increasingly used, so accurate and reliable volumetric tumor monitoring is important. Currently, a volumetric cutoff of a 20% increase in tumor volume is widely used to define tumor growth in VS. This study investigates how tumor volume affects the limits of agreement (LoA) for volumetric measurements of VS by means of an inter-observer study. METHODS: This retrospective study included 100 VS patients who underwent contrast-enhanced T1-weighted MRI. Five observers volumetrically annotated the images. Observer agreement and reliability were measured using the LoA, estimated with the limits of agreement with the mean (LOAM) method, and the intraclass correlation coefficient (ICC). RESULTS: The 100 patients had a median average tumor volume of 903 mm3 (IQR: 193-3101). Patients were divided into four size categories based on tumor volume quartile. The smallest tumor volume quartile showed a LOAM relative to the mean of 26.8% (95% CI: 23.7-33.6), whereas for the largest tumor volume quartile this figure was 7.3% (95% CI: 6.5-9.7), and 4.8% (95% CI: 4.2-6.2) when peritumoral cysts were excluded. CONCLUSION: Agreement limits in the volumetric annotation of VS are affected by tumor volume, since the LoA narrow with increasing tumor volume. As a result, for tumors larger than 200 mm3, growth can reliably be detected at an earlier stage compared with the currently used 20% cutoff. For very small tumors, however, growth should be assessed with wider agreement limits than previously thought.
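Under the hood, limits of agreement with the mean pool each observer's deviation from the per-case mean and widen them by a factor that corrects for each observer's own contribution to that mean. A rough illustration of the idea (not the study's exact estimator; the function name and percentage scaling are ours):

```python
import numpy as np

def loam_percent(volumes):
    """Illustrative limits-of-agreement-with-the-mean (LOAM) sketch.

    volumes: (n_cases, n_observers) array of tumor volumes in mm^3.
    Returns the half-width of the 95% agreement interval of observer
    deviations, expressed as a percentage of the overall mean volume.
    """
    volumes = np.asarray(volumes, dtype=float)
    _, m = volumes.shape
    deviations = volumes - volumes.mean(axis=1, keepdims=True)
    # sqrt(m/(m-1)) corrects for each observer being part of the mean
    # they are compared against.
    sd = np.sqrt(m / (m - 1)) * deviations.std(ddof=1)
    return float(1.96 * sd / volumes.mean() * 100.0)
```

With perfectly agreeing observers the interval collapses to zero, and it widens as inter-observer spread grows, mirroring the volume-dependent agreement reported above.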

2.
Gastrointest Endosc ; 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38942330

ABSTRACT

BACKGROUND AND AIMS: Computer-aided diagnosis (CADx) for optical diagnosis of colorectal polyps has been thoroughly investigated. However, studies on human-artificial intelligence (AI) interaction are lacking. The aim was to investigate endoscopists' trust in CADx by evaluating whether communicating a calibrated algorithm confidence improved trust. METHODS: Endoscopists optically diagnosed 60 colorectal polyps. Initially, endoscopists diagnosed the polyps without CADx assistance (initial diagnosis). Immediately afterwards, the same polyp was shown again with a CADx prediction: either a prediction alone (benign or premalignant) or a prediction accompanied by a calibrated confidence score (0-100), where 0 indicated a benign prediction and 100 a (pre)malignant prediction. For half of the polyps CADx was mandatory; for the other half it was optional. After reviewing the CADx prediction, endoscopists made a final diagnosis. Histopathology was used as the gold standard. Endoscopists' trust in CADx was measured as CADx prediction utilization: the willingness to follow CADx predictions when the endoscopist initially disagreed with them. RESULTS: Twenty-three endoscopists participated. Presenting CADx predictions increased the endoscopists' diagnostic accuracy (69.3% initial vs 76.6% final diagnosis, p<0.001). The CADx prediction was utilized in 36.5% (n=183/501) of disagreements. Adding a confidence score lowered CADx prediction utilization, except when the score exceeded 60. Mandatory CADx decreased prediction utilization compared with optional CADx. Appropriate trust, defined as utilizing correct or disregarding incorrect CADx predictions, occurred in 48.7% (n=244/501) of cases. CONCLUSIONS: Appropriate trust was common, and CADx prediction utilization was highest for optional CADx without confidence scores. These results underline the importance of a better understanding of human-AI interaction.
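The utilization measure defined in the methods (following CADx when initially disagreeing with it) reduces to a simple count over paired diagnoses. A minimal sketch, assuming a hypothetical 0/1 label encoding; the function name is ours:

```python
def cadx_utilization(initial, cadx, final):
    """Fraction of cases where the endoscopist initially disagreed with
    CADx and the final diagnosis followed the CADx prediction.
    Labels: 0 = benign, 1 = premalignant (illustrative encoding)."""
    disagreements = [(c, f) for i, c, f in zip(initial, cadx, final) if i != c]
    if not disagreements:
        return 0.0
    followed = sum(1 for c, f in disagreements if f == c)
    return followed / len(disagreements)
```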

3.
J Endourol ; 38(7): 690-696, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38613819

ABSTRACT

Objective: To construct a convolutional neural network (CNN) model that can recognize and delineate anatomic structures on intraoperative video frames of robot-assisted radical prostatectomy (RARP) and to use these annotations to predict the surgical urethral length (SUL). Background: Urethral dissection during RARP impacts patient urinary incontinence (UI) outcomes and requires extensive training. Large differences exist between the incontinence outcomes of different urologists and hospitals. Moreover, surgeon experience and education are critical for optimal outcomes. Therefore, new approaches are warranted. SUL is associated with UI. Artificial intelligence (AI) surgical image segmentation using a CNN could automate SUL estimation and contribute toward future AI-assisted RARP and surgeon guidance. Methods: Eighty-eight intraoperative RARP videos recorded between June 2009 and September 2014 were collected from a single center. Two hundred sixty-four frames were annotated for the prostate, urethra, ligated plexus, and catheter. Thirty annotated images from different RARP videos were used as a test data set. The Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (Hd95) were used to determine model performance. SUL was calculated using the catheter as a reference. Results: The DSCs of the best-performing model were 0.735 and 0.755 for the catheter and urethra classes, with Hd95 values of 29.27 and 72.62, respectively. The model performed moderately on the ligated plexus and prostate. The predicted SUL showed a mean difference of 0.64 to 1.86 mm vs human annotators, but with substantial deviation (standard deviation = 3.28-3.56). Conclusion: This study shows that an AI image segmentation model can predict vital structures during RARP urethral dissection with moderate to fair accuracy. SUL estimation derived from it showed large deviations and outliers compared with human annotators, but with a small mean difference (<2 mm). This is a promising development for further research on AI-assisted RARP.
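The Dice similarity coefficient used to score the segmentations is twice the overlap of the predicted and reference masks divided by their total size; a minimal sketch:

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:          # both masks empty: perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```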


Subjects
Artificial Intelligence , Prostatectomy , Robotic Surgical Procedures , Urethra , Humans , Prostatectomy/methods , Male , Urethra/surgery , Urethra/diagnostic imaging , Robotic Surgical Procedures/methods , Computer Neural Networks , Computer-Assisted Image Processing/methods , Prostate/surgery , Prostate/diagnostic imaging
4.
Med Image Anal ; 94: 103157, 2024 May.
Article in English | MEDLINE | ID: mdl-38574544

ABSTRACT

Computer-aided detection and diagnosis systems (CADe/CADx) in endoscopy are commonly trained on high-quality imagery, which is not representative of the heterogeneous input typically encountered in clinical practice. In endoscopy, image quality depends heavily on both the skills and experience of the endoscopist and the specifications of the system used for screening. Factors such as poor illumination, motion blur, and specific post-processing settings can significantly alter the quality and general appearance of these images. This so-called domain gap between the data used to develop a system and the data it encounters after deployment, and its impact on the performance of the deep neural networks (DNNs) underlying endoscopic CAD systems, remains largely unexplored. As many such systems, e.g. for polyp detection, are already being rolled out in clinical practice, this poses severe patient risks, particularly in community hospitals, where both the imaging equipment and experience are subject to considerable variation. Therefore, this study aims to evaluate the impact of this domain gap on the clinical performance of CADe/CADx for various endoscopic applications. For this, we leverage two publicly available data sets (KVASIR-SEG and GIANA) and two in-house data sets. We investigate the performance of commonly used DNN architectures under synthetic, clinically calibrated image degradations and on a prospectively collected dataset of 342 endoscopic images of lower subjective quality. Additionally, we assess the influence of DNN architecture and complexity, data augmentation, and pretraining techniques on robustness. The results reveal a considerable performance decline of 11.6% (±1.5) relative to the reference within the clinically calibrated boundaries of image degradations. Nevertheless, employing more advanced DNN architectures and self-supervised in-domain pretraining mitigates this drop to 7.7% (±2.03).
These enhancements also yield the highest performance on the manually collected test set of images with lower subjective quality. By comprehensively assessing the robustness of popular DNN architectures and training strategies across multiple datasets, this study provides valuable insights into their performance and limitations for endoscopic applications. The findings highlight the importance of including robustness evaluation when developing DNNs for endoscopy applications and propose strategies to mitigate performance loss.
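Synthetic degradations of the kind evaluated here can be sketched as simple image transforms; the gamma and noise parameters below are arbitrary illustrations, not the clinically calibrated settings used in the study:

```python
import numpy as np

def degrade(image, gamma=2.0, noise_sigma=0.05, seed=0):
    """Apply illustrative endoscopy-style degradations to an image with
    intensities in [0, 1]: gamma > 1 darkens (poor illumination) and
    additive Gaussian noise mimics sensor noise."""
    rng = np.random.default_rng(seed)
    img = np.clip(np.asarray(image, dtype=float), 0.0, 1.0)
    img = img ** gamma                                   # reduced illumination
    img = img + rng.normal(0.0, noise_sigma, img.shape)  # sensor noise
    return np.clip(img, 0.0, 1.0)
```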


Subjects
Computer-Assisted Diagnosis , Computer Neural Networks , Humans , Computer-Assisted Diagnosis/methods , Gastrointestinal Endoscopy , Computer-Assisted Image Processing/methods
5.
Diagnostics (Basel) ; 13(20)2023 Oct 13.
Article in English | MEDLINE | ID: mdl-37892019

ABSTRACT

The preoperative prediction of the resectability of pancreatic ductal adenocarcinoma (PDAC) is challenging. This retrospective single-center study examined tumor and vessel radiomics to predict the resectability of PDAC in chemo-naïve patients. The tumor and adjacent arteries and veins were segmented in the portal-venous phase of contrast-enhanced CT scans, and radiomic features were extracted. Features were selected via stability and collinearity testing and application of the least absolute shrinkage and selection operator (LASSO). Three models, using tumor features, vessel features, and a combination of both, were trained on the training set (N = 86) to predict resectability. The results were validated with the test set (N = 15) and compared to the performance of the multidisciplinary team (MDT). The vessel-features-only model performed best, with an AUC of 0.92 and sensitivity and specificity of 97% and 73%, respectively. Test set validation showed a sensitivity and specificity of 100% and 88%, respectively. The combined model was as good as the vessel model (AUC = 0.91), whereas the tumor model showed poor performance (AUC = 0.76). The MDT's prediction reached a sensitivity and specificity of 97% and 84% for the training set and 88% and 100% for the test set, respectively. Our clinician-independent vessel-based radiomics model can aid in predicting resectability and shows performance comparable to that of the MDT. With these encouraging results, improved, automated, and generalizable models can be developed that reduce workload and can be applied in non-expert hospitals.
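LASSO selects features by shrinking regression coefficients and setting small ones exactly to zero. Its core building block, the soft-thresholding operator applied inside coordinate-descent solvers, can be sketched as follows (an illustration of the mechanism only; the study's pipeline also includes stability and collinearity testing):

```python
import numpy as np

def soft_threshold(coef, lam):
    """Soft-thresholding operator used by LASSO solvers: shrinks each
    coefficient toward zero by lam and zeroes out anything smaller,
    which is what discards uninformative radiomic features."""
    coef = np.asarray(coef, dtype=float)
    return np.sign(coef) * np.maximum(np.abs(coef) - lam, 0.0)
```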

6.
Cancers (Basel) ; 15(7)2023 Mar 24.
Article in English | MEDLINE | ID: mdl-37046611

ABSTRACT

Optical biopsy in Barrett's oesophagus (BE) using endocytoscopy (EC) could optimize endoscopic screening. However, the identification of dysplasia is challenging due to the complex interpretation of the highly detailed images. Therefore, we assessed whether using artificial intelligence (AI) as a second assessor could help gastroenterologists interpret endocytoscopic BE images. First, we prospectively videotaped 52 BE patients with EC. Then we trained and tested the AI on distinct datasets drawn from 83,277 frames, developed an endocytoscopic BE classification system, and designed online training and testing modules. We invited two successive cohorts for these online modules: 10 endoscopists to validate the classification system and 12 gastroenterologists to evaluate AI as a second assessor, providing six of them with the option to request AI assistance. Training the endoscopists in the classification system improved sensitivity to 90.0% (+32.67%, p < 0.001) and accuracy to 77.67% (+13.0%, p = 0.020) compared with baseline. However, these values deteriorated at follow-up (-16.67%, p < 0.001 and -8.0%, p = 0.009). In contrast, AI-assisted gastroenterologists maintained high sensitivity and accuracy at follow-up, subsequently outperforming the unassisted gastroenterologists (+20.0%, p = 0.025 and +12.22%, p = 0.05). Thus, the best diagnostic scores for the identification of dysplasia emerged through human-machine collaboration between trained gastroenterologists and AI as the second assessor. Therefore, AI could support the clinical implementation of optical biopsies through EC.
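The sensitivity and accuracy gains reported above come from standard confusion-matrix counts; a minimal sketch with a hypothetical 0/1 encoding (1 = dysplastic):

```python
def sensitivity_accuracy(y_true, y_pred):
    """Return (sensitivity, accuracy) for binary labels,
    where 1 denotes a dysplastic case (illustrative encoding)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    positives = sum(y_true)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return tp / positives, correct / len(y_true)
```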

7.
Bioengineering (Basel) ; 9(10)2022 Oct 09.
Article in English | MEDLINE | ID: mdl-36290503

ABSTRACT

BACKGROUND: Neurosurgical procedures are complex and require years of training and experience. Traditional training on human cadavers is expensive, requires facilities and planning, and raises ethical concerns. Therefore, anthropomorphic phantoms could be an excellent substitute. The aim of this study was to design and develop a patient-specific 3D skull and brain model with realistic CT attenuation suitable for conventional and augmented reality (AR)-navigated neurosurgical simulations. METHODS: The radiodensities of the materials considered for the skull and brain phantoms were investigated using cone beam CT (CBCT) and compared to those of the human skull and brain. The mechanical properties of the materials considered were tested in the laboratory and subsequently evaluated by clinically active neurosurgeons. Optimization of the phantom for the intended purposes was performed in a feedback cycle of tests and improvements. RESULTS: The skull, including a complete representation of the nasal cavity and skull base, was 3D printed using polylactic acid with calcium carbonate. The brain was cast using a mixture of water and coolant, with 4 wt% polyvinyl alcohol and 0.1 wt% barium sulfate, in a mold obtained from segmentation of CBCT and T1-weighted MR images from a cadaver. The experiments revealed radiodensities of 547 and 38 Hounsfield units (HU) for the skull and brain phantoms, compared with approximately 1300 and 30 HU for real skull bone and brain tissue, respectively. As for the mechanical properties testing, the brain phantom exhibited an elasticity similar to real brain tissue. The phantom was subsequently evaluated by neurosurgeons in simulations of endonasal skull-base surgery, brain biopsies, and external ventricular drain (EVD) placement and found to fulfill the requirements of a surgical phantom.
CONCLUSIONS: A realistic and CT-compatible anthropomorphic head phantom was designed and successfully used for simulated augmented reality-led neurosurgical procedures. The anatomic details of the skull base and brain were realistically reproduced. This phantom can easily be manufactured and used for surgical training at a low cost.
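The Hounsfield units compared above are defined from linear attenuation coefficients relative to water, so water reads 0 HU and air approximately -1000 HU:

```python
def hounsfield(mu, mu_water):
    """CT number in Hounsfield units (HU) from linear attenuation
    coefficients: HU = 1000 * (mu - mu_water) / mu_water."""
    return 1000.0 * (mu - mu_water) / mu_water
```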

8.
Artif Intell Med ; 121: 102178, 2021 11.
Article in English | MEDLINE | ID: mdl-34763800

ABSTRACT

Colorectal polyps (CRP) are precursor lesions of colorectal cancer (CRC). Correct identification of CRPs during in-vivo colonoscopy is supported by the endoscopist's expertise and medical classification models. A recently developed classification model is the Blue light imaging Adenoma Serrated International Classification (BASIC), which describes the differences between non-neoplastic and neoplastic lesions acquired with blue light imaging (BLI). Computer-aided detection (CADe) and diagnosis (CADx) systems are efficient at visually assisting with medical decisions but fall short at translating decisions into relevant clinical information. The communication between machine and medical expert is of crucial importance to improve the diagnosis of CRP during in-vivo procedures. In this work, the combination of a polyp image classification model and a language model is proposed to develop a CADx system that automatically generates text comparable to the human language employed by endoscopists. The developed system generates sentences equivalent to the human reference and describes CRP images acquired with white light (WL), blue light imaging (BLI), and linked color imaging (LCI). An image feature encoder and a BERT module are employed to build the AI model, and an external test set is used to evaluate the results and compute the linguistic metrics. The experimental results show the construction of complete sentences with established metric scores of BLEU-1 = 0.67, ROUGE-L = 0.83 and METEOR = 0.50. The developed CADx system for automatic CRP image captioning facilitates future advances toward automatic reporting and may help reduce time-consuming histology assessment.
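BLEU-1, one of the metrics reported, is clipped unigram precision multiplied by a brevity penalty; a minimal sentence-level sketch (single reference, no smoothing; corpus-level BLEU aggregates counts differently):

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Sentence-level BLEU-1: clipped unigram precision times the
    brevity penalty that punishes overly short candidates."""
    cand, ref = candidate.split(), reference.split()
    clipped = sum((Counter(cand) & Counter(ref)).values())
    precision = clipped / len(cand)
    brevity = 1.0 if len(cand) > len(ref) else math.exp(1.0 - len(ref) / len(cand))
    return brevity * precision
```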


Subjects
Adenoma , Colonic Polyps , Colorectal Neoplasms , Colonic Polyps/diagnostic imaging , Colonoscopy , Colorectal Neoplasms/diagnostic imaging , Humans , Light
9.
Endosc Int Open ; 9(10): E1497-E1503, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34540541

ABSTRACT

Background and study aims Colonoscopy is considered the gold standard for decreasing colorectal cancer incidence and mortality. Optical diagnosis of colorectal polyps (CRPs) is an ongoing challenge in clinical colonoscopy, and its accuracy among endoscopists varies widely. Computer-aided diagnosis (CAD) for CRP characterization may help to improve this accuracy. In this study, we investigated the diagnostic accuracy of a novel algorithm for polyp malignancy classification by exploiting the complementary information revealed by three specific modalities. Methods We developed a CAD algorithm for CRP characterization based on high-definition, non-magnified white light (HDWL), blue light imaging (BLI), and linked color imaging (LCI) still images from routine exams. All CRPs were collected prospectively and classified as benign or premalignant using histopathology as the gold standard. Images and data were used to train the CAD algorithm using a triplet network architecture. The training dataset was validated using threefold cross-validation. Results In total, 609 colonoscopy images of 203 CRPs from 154 consecutive patients were collected. Of these, 174 CRPs were premalignant and 29 were benign. Combining the triplet network features with all three image enhancement modalities resulted in an accuracy of 90.6%, sensitivity of 89.7%, specificity of 96.6%, a positive predictive value of 99.4%, and a negative predictive value of 60.9% for CRP malignancy classification. The classification time of the CAD algorithm was approximately 90 ms per image. Conclusions Our novel approach and algorithm for CRP classification accurately differentiates between benign and premalignant polyps in non-magnified endoscopic images. This is the first algorithm to combine three optical modalities (HDWL/BLI/LCI) while exploiting the triplet network approach.
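The triplet network mentioned above is trained with a triplet loss that pulls embeddings of same-class polyps together and pushes different-class embeddings at least a margin apart; a minimal sketch (the margin value is an arbitrary illustration):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet loss on embedding vectors: zero once the negative is at
    least `margin` (in squared distance) farther from the anchor than
    the positive is."""
    d_pos = float(np.sum((anchor - positive) ** 2))
    d_neg = float(np.sum((anchor - negative) ** 2))
    return max(d_pos - d_neg + margin, 0.0)
```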

10.
Quant Imaging Med Surg ; 11(7): 3059-3069, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34249635

ABSTRACT

BACKGROUND: Detecting discomfort in infants is an important topic for their well-being and development. In this paper, we present an automatic and continuous video-based system for monitoring and detecting discomfort in infants. METHODS: The proposed system employs a novel and efficient 3D convolutional neural network (CNN), which achieves an end-to-end solution without the conventional face detection and tracking steps. In this study, we thoroughly investigate the video characteristics (e.g., intensity images and motion images) and CNN architectures (e.g., 2D and 3D) for infant discomfort detection. The improvements realized by the 3D-CNN stem from capturing both the motion and the facial expression information of the infants. RESULTS: The performance of the system was assessed using videos recorded from 24 hospitalized infants by visualizing receiver operating characteristic (ROC) curves and measuring the area under the ROC curve (AUC). An additional performance metric (labeling accuracy) was also calculated. Experimental results show that the proposed system achieves an AUC of 0.99, while the overall labeling accuracy is 0.98. CONCLUSIONS: These results confirm the robustness of the 3D-CNN for infant discomfort monitoring, capturing both motion and facial expressions simultaneously.
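The AUC reported above equals the probability that a randomly chosen discomfort sample scores higher than a randomly chosen comfort sample; a minimal sketch via the Mann-Whitney formulation (ties count half):

```python
def auc(pos_scores, neg_scores):
    """ROC AUC as the fraction of positive/negative score pairs where
    the positive wins (ties contribute 0.5)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```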

11.
Bioengineering (Basel) ; 8(2)2021 Feb 09.
Article in English | MEDLINE | ID: mdl-33572063

ABSTRACT

Current prognostic risk scores in cardiac surgery do not yet benefit from machine learning (ML). This research aims to create a machine learning model to predict one-year mortality of a patient after transcatheter aortic valve implantation (TAVI). We adopt a modern gradient boosting on decision trees (GBDT) classifier, specifically designed for categorical features. In combination with a recent technique for model interpretation, we developed a feature analysis and selection stage, enabling the identification of the most important features for the prediction. We based our prediction model on the most relevant features, after interpreting and discussing the feature analysis results with clinical experts. We validated our model on 270 consecutive TAVI cases, reaching a C-statistic of 0.83 with CI [0.82, 0.84]. The model achieved a positive predictive value ranging from 57% to 64%, suggesting that patient selection by the heart team can be further improved by taking into consideration the clinical data we identified as important and by exploiting ML approaches in the development of clinical risk scores. Our approach also shows promising predictive potential relative to widespread prognostic risk scores, such as the logistic European System for Cardiac Operative Risk Evaluation (EuroSCORE II) and the Society of Thoracic Surgeons (STS) risk score, which are broadly adopted by cardiologists worldwide.

12.
Biomed Eng Online ; 20(1): 6, 2021 Jan 07.
Article in English | MEDLINE | ID: mdl-33413426

ABSTRACT

BACKGROUND: Minimally invasive spine surgery is dependent on accurate navigation. Computer-assisted navigation is increasingly used in minimally invasive surgery (MIS), but current solutions require the use of reference markers in the surgical field for both patient and instrument tracking. PURPOSE: To improve reliability and facilitate clinical workflow, this study proposes a new marker-free tracking framework based on skin feature recognition. METHODS: The Maximally Stable Extremal Regions (MSER) and Speeded-Up Robust Features (SURF) algorithms are applied for skin feature detection. The proposed tracking framework is based on a multi-camera setup for obtaining multi-view acquisitions of the surgical area. Features can then be accurately detected using MSER and SURF and afterward localized by triangulation. The triangulation error is used for assessing the localization quality in 3D. RESULTS: The framework was tested on a cadaver dataset and in eight clinical cases. The detected features across the entire patient dataset had an overall triangulation error of 0.207 mm for MSER and 0.204 mm for SURF. The localization accuracy was compared to a system with conventional markers, serving as ground truth. An average accuracy of 0.627 and 0.622 mm was achieved for MSER and SURF, respectively. CONCLUSIONS: This study demonstrates that skin feature localization for patient tracking in a surgical setting is feasible. The technology shows promising results in terms of detected features and localization accuracy. In the future, the framework may be further improved by exploiting extended feature processing using modern optical imaging techniques for clinical applications where patient tracking is crucial.
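For two cameras, triangulation and its error can be sketched as the closest approach of two back-projected rays: the midpoint localizes the skin feature, and the residual gap between the rays serves as a triangulation-error measure (a simplified stand-in for the study's multi-camera formulation; all names are ours):

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Closest approach of two 3D rays (origin o, direction d).
    Returns (midpoint, gap); the gap acts as a triangulation error."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Least-squares ray parameters minimizing |o1 + t1*d1 - (o2 + t2*d2)|
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, o2 - o1, rcond=None)
    p1, p2 = o1 + t[0] * d1, o2 + t[1] * d2
    return (p1 + p2) / 2.0, float(np.linalg.norm(p1 - p2))
```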


Subjects
Minimally Invasive Surgical Procedures , Skin , Spine/surgery , Computer-Assisted Surgery
13.
Endoscopy ; 53(12): 1219-1226, 2021 12.
Article in English | MEDLINE | ID: mdl-33368056

ABSTRACT

BACKGROUND: Optical diagnosis of colorectal polyps remains challenging. Image-enhancement techniques such as narrow-band imaging and blue-light imaging (BLI) can improve optical diagnosis. We developed and prospectively validated a computer-aided diagnosis system (CADx) using high-definition white-light (HDWL) and BLI images, and compared the system with the optical diagnosis of expert and novice endoscopists. METHODS: CADx characterized colorectal polyps by exploiting artificial neural networks. Six experts and 13 novices optically diagnosed 60 colorectal polyps based on intuition. After 4 weeks, the same set of images was permuted and optically diagnosed using the BLI Adenoma Serrated International Classification (BASIC). RESULTS: CADx had a diagnostic accuracy of 88.3 % using HDWL images and 86.7 % using BLI images. The overall diagnostic accuracy combining HDWL and BLI (multimodal imaging) was 95.0 %, which was significantly higher than that of experts (81.7 %, P = 0.03) and novices (66.7 %, P < 0.001). Sensitivity was also higher for CADx (95.6 % vs. 61.1 % and 55.4 %), whereas specificity was higher for experts compared with CADx and novices (95.6 % vs. 93.3 % and 93.2 %). For endoscopists, diagnostic accuracy did not increase when using BASIC, either for experts (intuition 79.5 % vs. BASIC 81.7 %, P = 0.14) or for novices (intuition 66.7 % vs. BASIC 66.5 %, P = 0.95). CONCLUSION: CADx had a significantly higher diagnostic accuracy than experts and novices for the optical diagnosis of colorectal polyps. Multimodal imaging, incorporating both HDWL and BLI, improved the diagnostic accuracy of CADx. BASIC did not increase the diagnostic accuracy of endoscopists compared with intuitive optical diagnosis.


Subjects
Adenoma , Colonic Polyps , Colorectal Neoplasms , Adenoma/diagnostic imaging , Colonic Polyps/diagnostic imaging , Colonoscopy , Colorectal Neoplasms/diagnostic imaging , Computers , Humans , Narrow Band Imaging
14.
Gastrointest Endosc Clin N Am ; 31(1): 91-103, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33213802

ABSTRACT

Because the current Barrett's esophagus (BE) surveillance protocol suffers from sampling error of random biopsies and a high miss-rate of early neoplastic lesions, many new endoscopic imaging and sampling techniques have been developed. None of these techniques, however, have significantly increased the diagnostic yield of BE neoplasia. In fact, these techniques have led to an increase in the amount of visible information, yet endoscopists and pathologists inevitably suffer from variations in intra- and interobserver agreement. Artificial intelligence systems have the potential to overcome these endoscopist-dependent limitations.


Subjects
Artificial Intelligence , Barrett Esophagus/diagnosis , Computer-Assisted Diagnosis/methods , Early Detection of Cancer/methods , Esophagoscopy/methods , Barrett Esophagus/complications , Biopsy/methods , Esophageal Neoplasms/diagnosis , Esophageal Neoplasms/etiology , Humans
15.
Gastrointest Endosc ; 93(4): 871-879, 2021 04.
Article in English | MEDLINE | ID: mdl-32735947

ABSTRACT

BACKGROUND AND AIMS: Volumetric laser endomicroscopy (VLE) is an advanced imaging modality used to detect Barrett's esophagus (BE) dysplasia. However, real-time interpretation of VLE scans is complex and time-consuming. Computer-aided detection (CAD) may help in the process of VLE image interpretation. Our aim was to train and validate a CAD algorithm for VLE-based detection of BE neoplasia. METHODS: The multicenter, VLE PREDICT study, prospectively enrolled 47 patients with BE. In total, 229 nondysplastic BE and 89 neoplastic (high-grade dysplasia/esophageal adenocarcinoma) targets were laser marked under VLE guidance and subsequently underwent a biopsy for histologic diagnosis. Deep convolutional neural networks were used to construct a CAD algorithm for differentiation between nondysplastic and neoplastic BE tissue. The CAD algorithm was trained on a set consisting of the first 22 patients (134 nondysplastic BE and 38 neoplastic targets) and validated on a separate test set from patients 23 to 47 (95 nondysplastic BE and 51 neoplastic targets). The performance of the algorithm was benchmarked against the performance of 10 VLE experts. RESULTS: Using the training set to construct the algorithm resulted in an accuracy of 92%, sensitivity of 95%, and specificity of 92%. When performance was assessed on the test set, accuracy, sensitivity, and specificity were 85%, 91%, and 82%, respectively. The algorithm outperformed all 10 VLE experts, who demonstrated an overall accuracy of 77%, sensitivity of 70%, and specificity of 81%. CONCLUSIONS: We developed, validated, and benchmarked a VLE CAD algorithm for detection of BE neoplasia using prospectively collected and biopsy-correlated VLE targets. The algorithm detected neoplasia with high accuracy and outperformed 10 VLE experts. (The Netherlands National Trials Registry (NTR) number: NTR 6728.).


Subjects
Barrett Esophagus , Esophageal Neoplasms , Algorithms , Barrett Esophagus/diagnostic imaging , Computers , Esophageal Neoplasms/diagnostic imaging , Esophagoscopy , Humans , Lasers , Confocal Microscopy , Netherlands , Prospective Studies
16.
Sensors (Basel) ; 20(23)2020 Dec 05.
Article in English | MEDLINE | ID: mdl-33291409

ABSTRACT

The primary treatment for malignant brain tumors is surgical resection. While gross total resection improves the prognosis, a supratotal resection may result in neurological deficits. On the other hand, accurate intraoperative identification of the tumor boundaries may be very difficult, resulting in subtotal resections. Histological examination of biopsies can be used repeatedly to help achieve gross total resection, but this is not practically feasible due to the turnaround time of the tissue analysis. Therefore, intraoperative techniques to recognize tissue types are being investigated to expedite the clinical workflow for tumor resection and improve outcomes by aiding in the identification and removal of the malignant lesion. Hyperspectral imaging (HSI) is an optical imaging technique capable of extracting additional information from the imaged tissue. Because HSI images cannot be visually assessed by human observers, we instead exploit artificial intelligence techniques and leverage a convolutional neural network (CNN) to investigate the potential of HSI in twelve in vivo specimens. The proposed framework consists of a 3D-2D hybrid CNN-based approach for the joint extraction of spectral and spatial information from hyperspectral images. A comparison study was conducted with a 2D CNN, a 1D DNN and two conventional classification methods (an SVM, and an SVM combined with the 3D-2D hybrid CNN) to validate the proposed network. An overall accuracy of 80% was found when tumor, healthy tissue and blood vessels were classified, clearly outperforming the state-of-the-art approaches. These results can serve as a basis for brain tumor classification using HSI, and may open future avenues for image-guided neurosurgical applications.
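The joint spectral-spatial input of such a hybrid 3D-2D CNN is typically a small spatial window with all spectral bands retained; a minimal patch-extraction sketch for a hyperspectral cube (patch size and padding mode are illustrative choices):

```python
import numpy as np

def extract_patch(cube, row, col, size=5):
    """Spectral-spatial patch around pixel (row, col) of a hyperspectral
    cube shaped (height, width, bands); reflect-padding keeps border
    pixels usable."""
    half = size // 2
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)), mode="reflect")
    return padded[row:row + size, col:col + size, :]
```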


Subjects
Brain Neoplasms , Glioblastoma , Artificial Intelligence , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/surgery , Glioblastoma/diagnostic imaging , Glioblastoma/surgery , Humans , Hyperspectral Imaging , Computer Neural Networks
17.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1169-1173, 2020 07.
Article in English | MEDLINE | ID: mdl-33018195

ABSTRACT

The main curative treatment for localized colon cancer is surgical resection. However, when tumor residuals are left, positive margins are found during the histological examination and additional treatment is needed to inhibit recurrence. Hyperspectral imaging (HSI) can offer non-invasive surgical guidance with the potential of optimizing surgical effectiveness. In this paper, we investigate the capability of HSI for automated colon cancer detection in six ex-vivo specimens employing a spectral-spatial patch-based classification approach. The results demonstrate the feasibility of assessing the benign and malignant boundaries of the lesion with a sensitivity of 0.88 and specificity of 0.78. The results are compared with state-of-the-art deep learning based approaches: the method with a new hybrid CNN outperforms the state-of-the-art approaches (AUC 0.82 vs. 0.74). This study paves the way for further investigation toward improving surgical outcomes with HSI.


Subjects
Colonic Neoplasms, Surgery, Computer-Assisted, Biopsy, Colonic Neoplasms/diagnostic imaging, Humans, Neoplasm Recurrence, Local/diagnostic imaging
18.
Artif Intell Med ; 107: 101914, 2020 07.
Artigo em Inglês | MEDLINE | ID: mdl-32828453

ABSTRACT

Patients suffering from Barrett's Esophagus (BE) are at an increased risk of developing esophageal adenocarcinoma, and early detection is crucial for a good prognosis. To aid endoscopists with the early detection of this precursor stage of esophageal cancer, this work concentrates on the development and extensive evaluation of a state-of-the-art computer-aided classification and localization algorithm for dysplastic lesions in BE. To this end, we have employed a large-scale endoscopic data set, consisting of 494,355 images, in combination with a novel semi-supervised learning algorithm to pretrain several instances of the proposed neural network architecture. Next, several Barrett-specific data sets, increasingly closer to the target domain and containing significantly more data than used in related work, were employed in a multi-stage transfer learning strategy. Additionally, the algorithm was evaluated on two prospectively gathered external test sets and compared against 53 medical professionals. Finally, the model was also evaluated in a live setting without interfering with the current biopsy protocol. Results from the performed experiments show that the proposed model improves on the state of the art on all measured metrics. More specifically, compared to the best-performing state-of-the-art model, the specificity is improved by more than 20 percentage points while simultaneously preserving high sensitivity and substantially reducing the false positive rate. Our algorithm yields similar scores on the localization metrics, where the intersection of all experts is correctly indicated in approximately 92% of the cases. Furthermore, the live pilot study shows strong performance in a clinical setting, with a patient-level accuracy, sensitivity, and specificity of 90%. Finally, with respect to accuracy, the proposed algorithm outperforms each individual medical expert by at least 5% and the average assessor by more than 10% over all assessor groups.
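The patient-level metrics reported above require aggregating per-image predictions into one call per patient. The abstract does not specify the aggregation rule; the mean-score threshold below is a plausible assumption for illustration only, and all names are hypothetical.

```python
from collections import defaultdict

def patient_level_diagnosis(image_preds, threshold=0.5):
    """Aggregate per-image dysplasia scores into a patient-level call.

    image_preds: iterable of (patient_id, score in [0, 1]) pairs.
    Returns {patient_id: True if mean score >= threshold (dysplastic)}.
    """
    scores = defaultdict(list)
    for patient_id, score in image_preds:
        scores[patient_id].append(score)
    return {pid: sum(s) / len(s) >= threshold for pid, s in scores.items()}

preds = [("p1", 0.9), ("p1", 0.8), ("p2", 0.2), ("p2", 0.4), ("p2", 0.3)]
print(patient_level_diagnosis(preds))  # {'p1': True, 'p2': False}
```

Patient-level sensitivity and specificity then follow by comparing these calls against per-patient histopathology.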


Subjects
Adenocarcinoma, Barrett Esophagus, Esophageal Neoplasms, Barrett Esophagus/diagnosis, Esophageal Neoplasms/diagnosis, Esophagoscopy, Humans, Pilot Projects
19.
Sensors (Basel) ; 20(13)2020 Jun 29.
Artigo em Inglês | MEDLINE | ID: mdl-32610555

ABSTRACT

Surgical navigation systems are increasingly used for complex spine procedures to avoid neurovascular injuries and minimize the risk of reoperations. Accurate patient tracking is one of the prerequisites for optimal motion compensation and navigation. Most current optical tracking systems use dynamic reference frames (DRFs) attached to the spine for patient movement tracking. However, the spine itself is subject to intrinsic movements, which can impact the accuracy of the navigation system. In this study, we aimed to detect the actual spine features of the patient in different image views captured by the optical cameras of an augmented reality surgical navigation (ARSN) system. Using optical images from open spinal surgery cases, acquired by two gray-scale cameras, spinal landmarks were identified and matched across camera views. A computer vision framework was created for preprocessing the spine images and for detecting and matching local invariant image regions. We compared four feature detection algorithms, Speeded Up Robust Features (SURF), Maximally Stable Extremal Regions (MSER), Features from Accelerated Segment Test (FAST), and Oriented FAST and Rotated BRIEF (ORB), to identify the best approach. The framework was validated in 23 patients, and the 3D triangulation error of the matched features was < 0.5 mm. Thus, the findings indicate that spine feature detection can be used for accurate tracking in navigated surgery.
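The 3D triangulation error quoted above comes from reconstructing each matched feature from its two camera views. A standard way to do this is linear (DLT) triangulation; the sketch below is a generic illustration with a synthetic two-camera setup, not the validated ARSN pipeline, and the camera geometry is an assumption.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched feature seen by two cameras.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) image coordinates.
    Solves A X = 0 via SVD and dehomogenizes the result.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic rig: reference camera and a second camera shifted 100 mm along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])
X_true = np.array([10.0, 20.0, 500.0])          # landmark 500 mm in front
x1h = P1 @ np.append(X_true, 1.0)
x2h = P2 @ np.append(X_true, 1.0)
x1, x2 = x1h[:2] / x1h[2], x2h[:2] / x2h[2]     # project to each image
X_est = triangulate(P1, P2, x1, x2)
print(np.linalg.norm(X_est - X_true) < 0.5)     # True: sub-0.5 mm error
```

With real data, x1 and x2 would come from the matched SURF/MSER/FAST/ORB features and the error is measured against a known reference.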


Subjects
Augmented Reality, Optical Imaging, Spine/diagnostic imaging, Surgery, Computer-Assisted, Surgical Navigation Systems, Algorithms, Humans, Imaging, Three-Dimensional, Phantoms, Imaging, Spine/surgery