Results 1 - 20 of 24
1.
Int Neurourol J ; 28(2): 138-146, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38956773

ABSTRACT

PURPOSE: We aimed to evaluate the effect of self-training with a virtual reality head-mounted display simulator on the acquisition of surgical skills for holmium laser enucleation surgery. METHODS: Thirteen medical students without surgical skills for holmium laser enucleation of the prostate were trained using multimedia to learn the technique via simulator manipulation. Thereafter, participants performed the technique on a virtual benign prostatic hyperplasia model A (test A). After a 1-week wash-out period, they underwent self-training using a simulator and performed the technique on model B (test B). Subsequently, participants were asked to respond to Training Satisfaction Questions. Video footage of hand movements and endoscope views was recorded during tests A and B for later review by 2 expert surgeons. A 20-step Assessment Checklist, a 6-domain Global Rating Scale, and a Pass Rating were used to compare performance on tests A and B. RESULTS: Thirteen participants completed both tests A and B. The 20-step Assessment Checklist and 6-domain Global Rating Scale evaluations showed significantly higher scores in test B than in test A (P<0.05). No evaluator rated participants as passed after test A, but 11 participants (84.6%) passed after test B. Ten participants (76.9%) indicated that the simulator was helpful in acquiring surgical skills for holmium laser enucleation of the prostate. CONCLUSION: The virtual reality head-mounted display holmium laser enucleation of the prostate simulator was effective for surgical skill training. This simulator may help to shorten the learning curve of this technique in real clinical practice.

2.
Article in English | MEDLINE | ID: mdl-38965779

ABSTRACT

INTRODUCTION: Liver tumor resection requires precise localization of tumors and blood vessels. Despite advancements in 3-dimensional (3D) visualization for laparoscopic surgeries, challenges persist. We developed and evaluated an augmented reality (AR) system that overlays preoperative 3D models onto laparoscopic images, offering crucial support for 3D visualization during laparoscopic liver surgeries. METHODS: Anatomic liver structures from preoperative computed tomography scans were segmented using 3D Slicer, an open-source software package, with Maya 2022 used for 3D model editing. A registration system was created with 3D visualization software utilizing a stereo registration input system to overlay the virtual liver onto laparoscopic images during surgical procedures. A controller was customized using a modified keyboard to facilitate manual alignment of the virtual liver with the laparoscopic image. The AR system was evaluated by 3 experienced surgeons who performed manual registration for a total of 27 images from 7 clinical cases. The evaluation criteria were registration time, measured in minutes, and accuracy, measured using the Dice similarity coefficient. RESULTS: The overall mean registration time was 2.4±1.7 minutes (range: 0.3 to 9.5 min), and the overall mean registration accuracy was 93.8%±4.9% (range: 80.9% to 99.7%). CONCLUSION: Our validated AR system has the potential to effectively enable the prediction of internal hepatic anatomic structures during 3D laparoscopic liver resection and may enhance 3D visualization for select laparoscopic liver surgeries.
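The registration accuracy metric above, the Dice similarity coefficient, measures the overlap between the projected virtual liver and the annotated liver region. A minimal illustrative sketch on binary masks represented as sets of pixel coordinates (not the authors' implementation):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity: 2*|A ∩ B| / (|A| + |B|) for two binary masks,
    each given as a set of (x, y) pixel coordinates."""
    if not mask_a and not mask_b:
        return 1.0  # two empty masks agree perfectly by convention
    intersection = len(mask_a & mask_b)
    return 2.0 * intersection / (len(mask_a) + len(mask_b))

# toy example: a 10x10 virtual-liver mask vs. a slightly shifted ground-truth mask
virtual = {(x, y) for x in range(10) for y in range(10)}    # 100 pixels
actual = {(x, y) for x in range(1, 10) for y in range(10)}  # 90 pixels, all shared
score = dice_coefficient(virtual, actual)                   # 2*90/190 ≈ 0.947
```

A score of 1.0 means perfect overlap and 0.0 means no overlap, which is why the reported mean of 93.8% indicates close alignment.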

3.
Sci Rep ; 14(1): 872, 2024 01 09.
Article in English | MEDLINE | ID: mdl-38195632

ABSTRACT

Recognizing anatomical sections during colonoscopy is crucial for diagnosing colonic diseases and generating accurate reports. While recent studies have endeavored to identify anatomical regions of the colon using deep learning, the deformable anatomical characteristics of the colon pose challenges for establishing a reliable localization system. This study presents a system built on 100 colonoscopy videos that combines density clustering and deep learning. Cascaded CNN models are employed to sequentially estimate the appendix orifice (AO), the flexures, and "outside of the body." Subsequently, the DBSCAN algorithm is applied to identify anatomical sections. Clustering-based analysis integrates clinical knowledge and context based on the anatomical section within the model. We address challenges posed by colonoscopy images through preprocessing that removes non-informative frames. The image data were labeled by clinicians, and the system deduces section correspondence stochastically. The model categorizes the colon into three sections: right (cecum and ascending colon), middle (transverse colon), and left (descending colon, sigmoid colon, rectum). We estimated the appearance time of anatomical boundaries with an average error of 6.31 s for the AO, 9.79 s for the hepatic flexure (HF), 27.69 s for the splenic flexure (SF), and 3.26 s for outside of the body. The proposed method can facilitate future advancements toward AI-based automatic reporting, offering time-saving efficacy and standardization.
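The role of DBSCAN here is to turn noisy per-frame landmark predictions into stable anatomical boundaries: dense runs of detections form a cluster, while isolated false positives are discarded as noise. As an illustration only (the study's actual features and parameters are not given in the abstract), a minimal 1-D DBSCAN over predicted-landmark timestamps:

```python
def dbscan_1d(points, eps, min_pts):
    """Minimal DBSCAN for 1-D values (e.g., timestamps of frames predicted as a
    landmark). Returns a list of clusters (each a sorted list); noise is dropped."""
    points = sorted(points)
    labels = {}          # point index -> cluster id
    visited = set()
    cluster_id = 0

    def neighbors(i):
        return [j for j in range(len(points)) if abs(points[j] - points[i]) <= eps]

    for i in range(len(points)):
        if i in visited:
            continue
        visited.add(i)
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            continue     # noise for now; may still be claimed as a border point
        labels[i] = cluster_id
        queue = [j for j in nbrs if j != i]
        while queue:     # expand the cluster through density-reachable points
            j = queue.pop()
            if j not in visited:
                visited.add(j)
                if len(neighbors(j)) >= min_pts:
                    queue.extend(neighbors(j))
            if j not in labels:
                labels[j] = cluster_id
        cluster_id += 1

    clusters = {}
    for i, cid in labels.items():
        clusters.setdefault(cid, []).append(points[i])
    return [sorted(c) for c in clusters.values()]

# timestamps (s) of frames a model flags as the appendix orifice: the dense run
# is kept as the boundary cluster, the isolated detection at 55.2 s is noise
clusters = dbscan_1d([10.1, 10.4, 10.8, 11.0, 55.2], eps=1.0, min_pts=3)
```

The boundary appearance time could then be taken as, for example, the median of the surviving cluster.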


Subjects
Colonic Diseases, Deep Learning, Humans, Colonoscopy, Algorithms, Cluster Analysis
4.
Int J Surg ; 110(1): 194-201, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37939117

ABSTRACT

BACKGROUND: Caesarean section (CS) is a complex surgical procedure that involves many steps and requires careful precision. Virtual reality (VR) simulation has emerged as a promising tool for medical education and training, providing a realistic and immersive environment for learners to practice clinical skills and decision-making. This study aimed to evaluate the educational effectiveness of a VR simulation program in training the management of patients with premature rupture of membranes (PROM) and CS. MATERIALS AND METHODS: A two-arm parallel randomized controlled trial was conducted with 105 eligible participants randomly assigned to the VR group (n=53) or the control group (n=52) in a 1:1 ratio. The VR group received VR simulation training focused on PROM management and CS practice, while the control group watched a video presentation with a narrated clinical scenario and a recording of a CS. Both groups completed questionnaires assessing their prior experience with VR, their experience in managing patients with PROM and performing CS, and their confidence levels. These questionnaires were administered before and after the intervention, along with a mini-test quiz. RESULTS: Baseline characteristics and previous experience were comparable between the two groups. After the intervention, the VR group had higher confidence scores than the control group in all four aspects, including managing patients with PROM, performing CS as an operator, and understanding the indications and complications of CS. The VR group also achieved significantly higher scores on the mini-test quiz [median (interquartile range), 42 (37-48) in the VR group vs. 36 (32-40) in the control group, P<0.001]. CONCLUSION: A VR simulation program can be an effective educational tool for improving participants' knowledge of and confidence in managing patients with PROM and performing CS.


Subjects
Internship and Residency, Simulation Training, Virtual Reality, Pregnancy, Humans, Female, Cesarean Section, Simulation Training/methods, Clinical Competence
5.
J Craniofac Surg ; 34(8): 2369-2375, 2023.
Article in English | MEDLINE | ID: mdl-37815288

ABSTRACT

Velopharyngeal insufficiency (VPI), the incomplete closure of the velopharyngeal valve during speech, is a typical poor outcome that should be evaluated after cleft palate repair. Interpreting VPI with both imaging analysis and perceptual evaluation is essential for further management. The authors retrospectively reviewed patients with repaired cleft palates who underwent assessment of velopharyngeal function, including both videofluoroscopic imaging and perceptual speech evaluation. The final diagnosis of VPI was made by plastic surgeons based on both assessment modalities. Deep learning techniques were applied to the diagnosis of VPI and compared with human experts' diagnoses from videofluoroscopic imaging. In addition, the results of the deep learning techniques were compared with a speech pathologist's perceptual evaluation to assess consistency with clinical symptoms. A total of 714 cases from January 2010 to June 2019 were reviewed. Six deep learning algorithms (VGGNet, ResNet, Xception, ResNext, DenseNet, and SENet) were trained on the obtained dataset. The area under the receiver operating characteristic curve of the algorithms ranged between 0.8758 and 0.9468 with the hold-out method and between 0.7992 and 0.8574 with 5-fold cross-validation. Our findings demonstrate that the deep learning algorithms performed comparably to experienced plastic surgeons in diagnosing VPI from videofluoroscopic velopharyngeal imaging.
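The area under the ROC curve reported above can be computed without plotting the curve at all, via the rank-sum (Mann-Whitney U) identity: the AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case. A small self-contained sketch (illustrative data, not the study's):

```python
def auroc(labels, scores):
    """Area under the ROC curve via the rank-sum identity.
    labels: 1 = positive (e.g., VPI), 0 = negative; scores: model probability."""
    pairs = sorted(zip(scores, labels))
    rank_sum_pos = 0.0
    n = len(pairs)
    i = 0
    while i < n:
        j = i
        while j < n and pairs[j][0] == pairs[i][0]:
            j += 1                       # tie group spans ranks i+1 .. j
        avg_rank = (i + 1 + j) / 2.0     # average rank within the tie group
        for k in range(i, j):
            if pairs[k][1] == 1:
                rank_sum_pos += avg_rank
        i = j
    n_pos = sum(labels)
    n_neg = n - n_pos
    return (rank_sum_pos - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

# two positives, two negatives; one negative outranks one positive -> AUC 0.75
value = auroc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```

The hold-out and cross-validated AUCs in the abstract are exactly this quantity computed on different data splits.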


Subjects
Cleft Palate, Deep Learning, Velopharyngeal Insufficiency, Humans, Cleft Palate/diagnostic imaging, Cleft Palate/surgery, Velopharyngeal Insufficiency/diagnostic imaging, Velopharyngeal Insufficiency/surgery, Pharynx/surgery, Retrospective Studies, Treatment Outcome
6.
Am J Clin Dermatol ; 24(4): 649-659, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37160644

ABSTRACT

BACKGROUND: Although lesion counting is an evaluation method that effectively analyzes facial acne severity, its use is limited because it is difficult to implement. OBJECTIVES: We aimed to develop and validate an automated algorithm that detects and counts acne lesions by type, and to evaluate its clinical applicability as an assistance tool through a reader test. METHODS: A total of 20,699 lesions (closed and open comedones, papules, nodules/cysts, and pustules) were manually labeled on 1213 facial images from 398 facial acne photography sets (frontal and both lateral views) acquired from 258 patients, and were used for training and validating convolutional neural network-based algorithms that either classified five classes of acne lesions or performed binary classification into noninflammatory and inflammatory lesions. RESULTS: In the validation dataset, the highest mean average precision was 28.48, achieved by the binary classification algorithm. Pearson's correlation between algorithm and ground-truth lesion counts was 0.72 for noninflammatory and 0.90 for inflammatory lesions. In the reader test, all eight readers (100.0%) detected and counted lesions more accurately with the algorithm than in the reader-alone evaluation. CONCLUSIONS: Overall, our algorithm demonstrated clinically applicable performance in detecting and counting facial acne lesions by type, and showed utility as an assistance tool for evaluating acne severity.
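The agreement statistic used above, Pearson's correlation between per-image algorithm counts and ground-truth counts, is straightforward to compute. A minimal sketch with made-up counts (not the study's data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation between per-image lesion counts from the algorithm (xs)
    and ground-truth counts from manual labeling (ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical inflammatory-lesion counts on five images
algorithm_counts = [3, 7, 2, 10, 5]
ground_truth = [4, 8, 2, 11, 4]
r = pearson_r(algorithm_counts, ground_truth)
```

A value near 1 (like the reported 0.90 for inflammatory lesions) means the algorithm's counts rise and fall in near-linear step with the manual counts.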


Subjects
Acne Vulgaris, Dermatologists, Humans, Acne Vulgaris/pathology, Algorithms, Photography, Blister
7.
Sci Rep ; 13(1): 1360, 2023 01 24.
Article in English | MEDLINE | ID: mdl-36693894

ABSTRACT

Neural network models have been used to analyze thyroid ultrasound (US) images and stratify the malignancy risk of thyroid nodules. We investigated the optimal neural network conditions for thyroid US image analysis. We compared scratch and transfer learning models, performed stress tests in 10% increments, and compared the performance of three threshold values. All validation results indicated the superiority of the transfer learning model over the scratch model. The stress test indicated that training the algorithm with 3902 images (70%) yielded performance similar to that of the full dataset (5575 images). A threshold of 0.3 yielded high sensitivity (1% false negatives) and low specificity (72% false positives), while a threshold of 0.7 gave low sensitivity (22% false negatives) and high specificity (23% false positives). We showed that transfer learning was more effective than scratch learning in terms of area under the curve, sensitivity, specificity, and negative/positive predictive value; that about 3900 images were the minimum required to demonstrate acceptable performance; and that algorithm performance can be customized to population characteristics by adjusting the threshold value.
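The threshold adjustment described above is a pure post-processing step: the network's score is fixed, and only the cutoff for calling a nodule malignant changes. A toy sketch with hypothetical scores (not the study's data) showing the sensitivity/specificity trade-off:

```python
def confusion_at_threshold(labels, scores, threshold):
    """Call malignant (1) when score >= threshold; return (sensitivity, specificity)."""
    tp = sum(1 for l, s in zip(labels, scores) if l == 1 and s >= threshold)
    fn = sum(1 for l, s in zip(labels, scores) if l == 1 and s < threshold)
    tn = sum(1 for l, s in zip(labels, scores) if l == 0 and s < threshold)
    fp = sum(1 for l, s in zip(labels, scores) if l == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical nodule scores: a lower threshold catches more cancers (fewer false
# negatives) at the cost of flagging more benign nodules (more false positives)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.6, 0.4, 0.5, 0.2, 0.1]
sens_low, spec_low = confusion_at_threshold(labels, scores, 0.3)    # (1.0, ~0.67)
sens_high, spec_high = confusion_at_threshold(labels, scores, 0.7)  # (~0.33, 1.0)
```

This is why a screening-oriented deployment might pick a low threshold and a confirmation-oriented one a high threshold, as the abstract's 0.3 vs. 0.7 comparison illustrates.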


Subjects
Neural Networks (Computer), Thyroid Nodule, Humans, Sensitivity and Specificity, Thyroid Nodule/diagnostic imaging, Thyroid Nodule/pathology, Predictive Value of Tests, Ultrasonography/methods
8.
Sci Rep ; 12(1): 3105, 2022 02 24.
Article in English | MEDLINE | ID: mdl-35210442

ABSTRACT

There is an increasing demand and need for patients and caregivers to participate actively in the treatment process. However, when there are unexpected findings during pediatric surgery, access restrictions in the operating room can leave caregivers with a poor understanding of the medical condition, as they can only hear about the findings indirectly. To overcome this, we designed a tele-consent system that operates through a specially constructed mixed reality (MR) environment during surgery. We enrolled 11 patients with unilateral inguinal hernia and their caregivers among patients undergoing laparoscopic inguinal herniorrhaphy between January and February 2021. The caregivers were informed of the intraoperative findings in real time through MR glasses outside the operating room. After surgery, we conducted questionnaire surveys to evaluate the satisfaction and usefulness of tele-consent. We identified a contralateral patent processus vaginalis in seven of the 11 patients, and then additionally performed surgery on the contralateral side with tele-consent from their caregivers. Most caregivers and surgeons responded positively about the satisfaction and usefulness of tele-consent. This study found that tele-consent with caregivers using MR glasses not only increased the satisfaction of caregivers and surgeons, but also helped accommodate real-time findings by adapting the surgical plan.


Subjects
Inguinal Hernia/complications, Informed Consent/ethics, Telemedicine/methods, Adult, Augmented Reality, Caregivers/psychology, Child, Child (Preschool), Female, Inguinal Hernia/surgery, Humans, Incidental Findings, Infant, Infant (Newborn), Laparoscopy/methods, Male, Mental Competency/psychology, Pediatrics/methods, Preliminary Data, Retrospective Studies, Surveys and Questionnaires
9.
World J Surg ; 46(4): 942-948, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35006323

ABSTRACT

BACKGROUND: Pediatric hemato-oncologic patients require central catheters for chemotherapy, and the junction of the superior vena cava and right atrium is considered the ideal location for catheter tips. Skin landmarks or fluoroscopic support have been used to identify the cavoatrial junction; however, none has been recognized as the gold standard. Therefore, we aimed to develop a safe and accurate technique using augmented reality technology to locate the cavoatrial junction in pediatric hemato-oncologic patients. METHODS: Fifteen oncology patients who underwent chest computed tomography were enrolled for Hickman catheter or chemoport insertion. With the aid of augmented reality technology, three-dimensional models of the internal jugular veins, external jugular veins, subclavian veins, superior vena cava, and right atrium were constructed. On inserting the central vein catheters, the cavoatrial junction identified using the three-dimensional models was marked on the body surface, the tip was positioned at the corresponding location, and the actual insertion location was confirmed using a portable x-ray machine. The proposed method was evaluated by comparing the distance from the cavoatrial junction to the augmented reality location with that to the conventional location on x-ray. RESULTS: The mean distance between the cavoatrial junction and the augmented reality location on x-ray was 1.2 cm, which was significantly shorter than that between the cavoatrial junction and the conventional location (1.9 cm; P = 0.027). CONCLUSIONS: Central catheter insertion using augmented reality technology is safer and more accurate than conventional methods and can be performed at no additional cost in oncology patients.


Subjects
Augmented Reality, Central Venous Catheterization, Central Venous Catheters, Central Venous Catheterization/methods, Child, Cues (Psychology), Humans, Jugular Veins, Superior Vena Cava/diagnostic imaging
10.
Sci Rep ; 12(1): 261, 2022 01 07.
Article in English | MEDLINE | ID: mdl-34997124

ABSTRACT

Computer-aided detection (CADe) systems have been actively researched for polyp detection in colonoscopy. To be effective, such a system should detect additional polyps that endoscopists may easily miss. Sessile serrated lesions (SSLs) are a precursor to colorectal cancer with a relatively high miss rate, owing to their flat and subtle morphology. Colonoscopy CADe systems could help endoscopists; however, current systems exhibit very low performance in detecting SSLs. We propose a polyp detection system that reflects the morphological characteristics of SSLs to detect unrecognized or easily missed polyps. To develop a well-trained system from imbalanced polyp data, a generative adversarial network (GAN) was used to synthesize high-resolution whole endoscopic images, including SSLs. Quantitative and qualitative evaluations of the GAN-synthesized images confirmed that they were realistic and included SSL endoscopic features. Moreover, traditional augmentation methods were used as a baseline against which to compare the efficacy of the GAN augmentation method. The CADe system augmented with GAN-synthesized images showed a 17.5% improvement in sensitivity on SSLs. Consequently, we verified the potential of the GAN to synthesize high-resolution images with endoscopic features, and the proposed system was found to be effective in detecting easily missed polyps during colonoscopy.


Subjects
Colonic Polyps/pathology, Colonoscopy, Colorectal Neoplasms/pathology, Early Detection of Cancer, Computer-Assisted Image Interpretation, Neural Networks (Computer), Factual Databases, Humans, Predictive Value of Tests, Prospective Studies, Reproducibility of Results, Retrospective Studies
11.
Int Neurourol J ; 26(4): 317-324, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36599340

ABSTRACT

PURPOSE: Bladder capacity is an important parameter in the diagnosis of lower urinary tract dysfunction. We aimed to determine whether the maximum bladder capacity (MCC) measured during a urodynamic study was affected by involuntary detrusor contraction (IDC) in patients with lower urinary tract symptoms (LUTS)/benign prostatic hyperplasia (BPH). METHODS: Between March 2020 and April 2021, we obtained the maximum voided volume (MVV) from a 3-day frequency-volume chart, the MCC during filling cystometry, and the maximum anesthetic bladder capacity (MABC) during holmium laser enucleation of the prostate under spinal or general anesthesia in 139 men with LUTS/BPH aged >50 years. Patients were divided according to the presence of IDC during filling cystometry. We assumed that the MABC is close to the true MCC, as it is measured under conditions that minimize neural influence over the bladder. RESULTS: There was no difference in demographic and clinical characteristics between the non-IDC (n=20) and IDC (n=119) groups (mean age, 71.5±7.4 years) (P>0.05). The non-IDC group had greater bladder volumes at first sensation, first desire, and strong desire than the IDC group (P<0.001). In all patients, MABC and MVV were correlated (r=0.41, P<0.001); however, there was no correlation between MCC and MABC (r=0.19, P=0.02). There was no significant difference in MABC between the non-IDC and IDC groups (P=0.19), but MVV and MCC were significantly greater in the non-IDC group (P<0.001). There was no significant difference between MABC and MVV (MABC-MVV, P=0.54; MVV/MABC, P=0.07), but the difference between MABC and MCC differed significantly between the non-IDC and IDC groups (MABC-MCC, P<0.001; MCC/MABC, P<0.001). CONCLUSION: Maximum bladder capacity from a urodynamic study does not represent true bladder capacity because of involuntary contractions.

12.
Sci Rep ; 11(1): 14911, 2021 07 21.
Article in English | MEDLINE | ID: mdl-34290326

ABSTRACT

Increasing recognition of anatomical obstruction has led to a wide variety of sleep surgeries to improve anatomic collapse in obstructive sleep apnea (OSA), and predicting whether sleep surgery will have a successful outcome is very important. The aim of this study was to assess machine learning-based clinical models that predict the success rate of sleep surgery in OSA subjects. The success rate predicted by machine learning and the subjective surgical outcome predicted by physicians were compared with the actual success rate in 163 male-dominated OSA subjects. The success rates predicted by machine learning models based on sleep parameters and endoscopic findings of the upper airway demonstrated higher accuracy than the sleep surgeons' subjective predictions. Among logistic regression and three machine learning models, the gradient boosting model performed best in predicting surgical success, as evaluated by pre- and postoperative polysomnography or home sleep apnea testing; the accuracy of the gradient boosting model (0.708) was significantly higher than that of the logistic regression model (0.542). Our data demonstrate that data mining-driven prediction such as gradient boosting exhibited higher accuracy for predicting surgical outcome, and machine learning models can provide OSA subjects with accurate information on surgical outcomes before surgery.


Subjects
Logistic Models, Machine Learning, Obstructive Sleep Apnea/surgery, Adult, Female, Humans, Male, Middle Aged, Oropharynx/surgery, Otorhinolaryngologic Surgical Procedures, Preoperative Period, Treatment Outcome
13.
JMIR Med Inform ; 9(5): e25869, 2021 May 18.
Article in English | MEDLINE | ID: mdl-33858817

ABSTRACT

BACKGROUND: Federated learning is a decentralized approach to machine learning; it is a training strategy that overcomes medical data privacy regulations and generalizes deep learning algorithms. Federated learning mitigates many systemic privacy risks by sharing only the model and its parameters for training, without the need to export existing medical data sets. In this study, we performed ultrasound image analysis using federated learning to predict whether thyroid nodules were benign or malignant. OBJECTIVE: The goal of this study was to evaluate whether the performance of federated learning was comparable with that of conventional deep learning. METHODS: A total of 8457 (5375 malignant, 3082 benign) ultrasound images were collected from 6 institutions and used for federated learning and conventional deep learning. Five deep learning networks (VGG19, ResNet50, ResNext50, SE-ResNet50, and SE-ResNext50) were used. Using stratified random sampling, we selected 20% (1075 malignant, 616 benign) of the total images for internal validation. For external validation, we used 100 ultrasound images (50 malignant, 50 benign) from another institution. RESULTS: For internal validation, the area under the receiver operating characteristic curve (AUROC) for federated learning was between 78.88% and 87.56%, and the AUROC for conventional deep learning was between 82.61% and 91.57%. For external validation, the AUROC for federated learning was between 75.20% and 86.72%, and the AUROC for conventional deep learning was between 73.04% and 91.04%. CONCLUSIONS: We demonstrated that the performance of federated learning using decentralized data was comparable to that of conventional deep learning using pooled data. Federated learning may be useful for analyzing medical images while protecting patients' personal information.
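The core server-side step of federated learning that the abstract describes (only parameters leave each institution, never images) is often federated averaging: each site trains locally, and the server combines the resulting weights. A minimal sketch of one aggregation round (illustrative; the study's exact aggregation scheme is not stated in the abstract):

```python
def fed_avg(client_weights, client_sizes):
    """One round of federated averaging: each institution trains locally and sends
    only its parameter vector; the server returns the size-weighted mean.
    client_weights: list of parameter vectors (lists of floats), one per site."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[k] * s for w, s in zip(client_weights, client_sizes)) / total
        for k in range(n_params)
    ]

# two hospitals holding 300 and 100 images; no images are pooled, only parameters
global_w = fed_avg([[1.0, 2.0], [3.0, 4.0]], [300, 100])  # -> [1.5, 2.5]
```

In practice the averaged weights are sent back to the sites for another local training round, and the cycle repeats until convergence.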

14.
J Clin Med ; 9(6)2020 Jun 23.
Article in English | MEDLINE | ID: mdl-32585953

ABSTRACT

As the number of robotic surgery procedures has increased, so has the importance of evaluating surgical skills in these techniques. It is difficult, however, to evaluate surgical skills automatically and quantitatively during robotic surgery, as these skills are primarily reflected in the movement of surgical instruments. This study proposes a deep learning-based surgical instrument tracking algorithm to evaluate surgeons' skills in performing robotic surgery. The method overcomes two main difficulties: occlusion and maintaining the identity of the surgical instruments. In addition, surgical skill prediction models were developed using motion metrics calculated from the instruments' movement. The tracking method was applied to 54 video segments and evaluated by root mean squared error (RMSE), area under the curve (AUC), and Pearson correlation analysis. The RMSE was 3.52 mm; the AUCs at 1 mm, 2 mm, and 5 mm were 0.7, 0.78, and 0.86, respectively; and Pearson's correlation coefficients were 0.9 on the x-axis and 0.87 on the y-axis. The surgical skill prediction models showed an accuracy of 83% with the Objective Structured Assessment of Technical Skill (OSATS) and the Global Evaluative Assessment of Robotic Surgery (GEARS). The proposed method was able to track instruments during robotic surgery, suggesting that the current manual assessment of surgical skill by surgeons could be replaced by the proposed automatic and quantitative evaluation method.
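The RMSE figure above summarizes, in millimeters, how far the tracked instrument-tip positions deviate from the annotated ground truth across frames. A minimal sketch on toy 2-D points (not the study's data):

```python
from math import hypot, sqrt

def rmse(pred, truth):
    """Root mean squared error between predicted and annotated instrument-tip
    positions, each a list of (x, y) points in mm."""
    errs = [hypot(px - tx, py - ty) ** 2
            for (px, py), (tx, ty) in zip(pred, truth)]
    return sqrt(sum(errs) / len(errs))

# one frame tracked perfectly, one frame off by 5 mm -> RMSE sqrt(12.5) ≈ 3.54 mm
pred = [(0.0, 0.0), (3.0, 4.0)]
truth = [(0.0, 0.0), (0.0, 0.0)]
err = rmse(pred, truth)
```

The per-threshold AUCs in the abstract (at 1, 2, and 5 mm) then characterize how often such per-frame errors stay within a given distance.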

15.
Sci Rep ; 10(1): 8437, 2020 05 21.
Article in English | MEDLINE | ID: mdl-32439970

ABSTRACT

We adopted a vision-based tracking system for augmented reality (AR) and evaluated whether it helped surgeons localize the recurrent laryngeal nerve (RLN) during robotic thyroid surgery. We constructed AR images of the trachea, common carotid artery, and RLN from CT images. During surgery, the AR images of the trachea and common carotid artery were overlaid on the physical structures after they were exposed. The vision-based tracking system was then activated so that the AR image of the RLN followed the camera movement. After identifying the RLN, the distance between the AR image of the RLN and the actual RLN was measured. Eleven RLNs (9 right, 4 left) were tested. The mean distance between the RLN AR image and the actual RLN was 1.9 ± 1.5 mm (range 0.5 to 3.7). RLN localization using AR and a vision-based tracking system was successfully applied during robotic thyroidectomy. There were no cases of RLN palsy. This technique may allow surgeons to identify hidden anatomical structures during robotic surgery.


Subjects
Augmented Reality, Recurrent Laryngeal Nerve Injuries/prevention & control, Recurrent Laryngeal Nerve/anatomy & histology, Robotic Surgical Procedures/methods, Computer-Assisted Surgery/methods, Papillary Thyroid Cancer/surgery, Thyroid Neoplasms/surgery, Adult, Female, Follow-Up Studies, Humans, Male, Middle Aged, Prognosis, Prospective Studies, Recurrent Laryngeal Nerve/surgery, Papillary Thyroid Cancer/pathology, Thyroid Neoplasms/pathology, Thyroidectomy
16.
PeerJ ; 7: e7256, 2019.
Article in English | MEDLINE | ID: mdl-31392088

ABSTRACT

BACKGROUND: Cecal intubation time is an important component of quality colonoscopy. The cecum is the turning point that separates the insertion and withdrawal phases of the colonoscope. For this reason, information about the location of the cecum during the endoscopic procedure is very useful, and it is necessary to detect the direction of the colonoscope's movement and the time-location of the cecum. METHODS: To analyze the direction of the scope's movement, the Horn-Schunck algorithm was used to compute pixel-level motion between consecutive frames. Images processed with the Horn-Schunck algorithm were used to train and test convolutional neural network deep learning models that classified the movement as insertion, withdrawal, or stop. Based on the scope's movement, a graph was drawn with a value of +1 for insertion, -1 for withdrawal, and 0 for stop. We regarded the turning point as a cecum candidate point when the sum of the graph over a given section was lowest. RESULTS: A total of 328,927 frame images were obtained from 112 patients. The overall accuracy, drawn from 5-fold cross-validation, was 95.6%. When the value of "t" was 30 s, the accuracy of cecum discovery was 96.7%. To increase visibility, the movement of the scope was added to the summary report of the colonoscopy video. Insertion, withdrawal, and stop movements were mapped to distinct colors and expressed at various scales. As the scale increased, the distinction between the insertion phase and the withdrawal phase became clearer. CONCLUSION: The information obtained in this study can be utilized as metadata for proficiency assessment. Since insertion and withdrawal are technically different movements, data on the scope's movement and phase can be quantified and used to express patterns unique to each colonoscopist and to assess proficiency.
We also hope that the findings of this study can contribute to the informatics field of medical records, so that medical charts can be transmitted graphically and effectively in the field of colonoscopy.
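The turning-point rule above can be sketched directly: code each frame as +1 (insertion), -1 (withdrawal), or 0 (stop), then scan for the fixed-length section whose movement sum is lowest, i.e., where withdrawal dominates. This is a simplified reading of the paper's rule, with a made-up frame sequence and window size:

```python
def cecum_candidate(directions, window):
    """Per-frame scope movement coded +1 (insertion), -1 (withdrawal), 0 (stop).
    Returns the start index of the length-`window` section with the lowest
    movement sum -- a simplified version of the paper's turning-point rule."""
    best_i, best_sum = 0, float("inf")
    for i in range(len(directions) - window + 1):
        s = sum(directions[i:i + window])
        if s < best_sum:
            best_i, best_sum = i, s
    return best_i

# insertion for six frames, a brief stop, then withdrawal: the lowest-sum window
# begins where withdrawal takes over, marking the cecum candidate region
moves = [1, 1, 1, 1, 1, 1, 0, -1, -1, -1]
idx = cecum_candidate(moves, window=3)  # -> 7
```

In the study the section length corresponds to the time parameter "t" (30 s performed best), and the candidate index maps back to a timestamp in the video.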

17.
Medicine (Baltimore) ; 98(15): e15133, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30985680

ABSTRACT

Fine needle aspiration (FNA) is the procedure of choice for evaluating thyroid nodules. It is indicated for nodules >2 cm, even in cases with very low suspicion of malignancy. FNA has associated risks and expenses. In this study, we developed an image analysis model using a deep learning algorithm and evaluated whether the algorithm could predict thyroid nodules with benign FNA results. Ultrasonographic images of thyroid nodules with cytologic or histologic results were retrospectively collected. For algorithm training, 1358 (670 benign, 688 malignant) thyroid nodule images were input into the Inception-V3 network model. The model, pretrained on the ImageNet database, was trained to classify nodules as benign or malignant. The diagnostic performance of the algorithm was tested with prospectively collected internal (n = 55) and external (n = 100) test sets. For the internal test set, 20 of the 21 FNA-malignant nodules were correctly classified as malignant by the algorithm (sensitivity, 95.2%); and of the 22 nodules the algorithm classified as benign, 21 were FNA benign (negative predictive value [NPV], 95.5%). For the external test set, 47 of the 50 FNA-malignant nodules were correctly classified by the algorithm (sensitivity, 94.0%); and of the 31 nodules the algorithm classified as benign, 28 were FNA benign (NPV, 90.3%). The sensitivity and NPV of the deep learning algorithm shown in this study are promising. Artificial intelligence may assist clinicians in recognizing nodules that are likely to be benign and avoiding unnecessary FNA.
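The two headline numbers above follow directly from the reported counts. Sensitivity is the fraction of FNA-malignant nodules the model also calls malignant; NPV is, among model-benign calls, the fraction that are truly benign. A short check using the internal test set figures from the abstract:

```python
def sensitivity(tp, fn):
    """Fraction of truly malignant nodules the model also calls malignant."""
    return tp / (tp + fn)

def npv(tn, fn):
    """Among nodules the model calls benign, the fraction that are truly benign."""
    return tn / (tn + fn)

# internal test set: 20 of 21 malignant nodules caught; 21 of the 22
# model-benign nodules were FNA benign
sens_pct = round(100 * sensitivity(20, 1), 1)  # 95.2
npv_pct = round(100 * npv(21, 1), 1)           # 95.5
```

A high NPV is the clinically relevant property here, since the proposed use is to safely skip FNA for nodules the model calls benign.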


Subjects
Computer-Assisted Image Interpretation/methods, Thyroid Nodule/diagnostic imaging, Ultrasonography, Fine-Needle Biopsy, Deep Learning, Humans, Sensitivity and Specificity, Thyroid Gland/diagnostic imaging, Thyroid Gland/pathology, Thyroid Nodule/pathology, Ultrasonography/methods
18.
J Radiol Prot ; 39(2): 373-386, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30602144

ABSTRACT

During computed tomography (CT) scans, radiation scatters in all directions, increasing radiation exposure. In this study, an aperture-type radiation regulator was developed to shield against secondary radiation from the x-ray tube and collimator in CT. To evaluate the usefulness of the developed aperture-type radiation regulator, (1) the spatial dose distribution within the CT room was measured, (2) the dose intensity at 1 m from the isocenter was compared, (3) the absorbed dose in nearby organs was evaluated using a human-equivalent phantom, and (4) noise, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR) were compared to assess image quality. The results showed that the developed aperture-type radiation regulator reduced the intensity of secondary radiation by approximately 25% in front of the gantry and 15% to the rear of the gantry. The extent of the 10 µGy dose distribution was reduced by approximately 18% in front of the gantry and 12% to the rear. In addition, when the neck and head were scanned, the absorbed dose in the chest decreased by 25% and 40%, respectively, and noise was reduced by 3.3%-4.5% for different phantoms. Evaluation of abdominal CT images showed an 18% noise reduction, with 27% and 28% increases in the SNR and CNR, respectively. These results confirmed that the proposed aperture-type radiation regulator can reduce radiation exposure without affecting the primary radiation that creates medical images, and that it effectively improves the quality of medical images.


Subjects
Phantoms, Imaging , Radiation Protection , Tomography, X-Ray Computed , Humans , Radiation Dosage , X-Rays
19.
Ann Surg Treat Res ; 95(6): 297-302, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30505820

ABSTRACT

PURPOSE: The increasing use of robotic surgery has been accompanied by increasing reports of complications, largely attributable to the limited operative view and the lack of tactile feedback. Such obstacles, which seldom arise in open surgery, are particularly challenging for novice surgeons. To enhance the safety of robotic surgery, we created an augmented reality (AR) model of the organs surrounding the thyroid gland and tested the applicability of the AR model in robotic thyroidectomy. METHODS: We created AR images of the thyroid gland, common carotid arteries, trachea, and esophagus from preoperative CT images of a patient with thyroid carcinoma. As a preliminary test, we overlaid the AR images on a 3-dimensional printed model at five different angles and evaluated the accuracy of the overlay using the Dice similarity coefficient. We then overlaid the AR images on real-time operative images during robotic thyroidectomy. RESULTS: The Dice similarity coefficients ranged from 0.984 to 0.9908, with a mean of 0.987 across the five angles. Throughout the robotic thyroidectomy, the AR images were successfully overlaid on the real-time operative images using manual registration. CONCLUSION: We successfully demonstrated the use of AR on the operative field during robotic thyroidectomy. Although limitations remain, the use of AR in robotic surgery will become more practical as the technology advances and may contribute to enhanced surgical safety.
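The Dice similarity coefficient used to score overlay accuracy here (and in the laparoscopic-liver AR study above) is defined as 2|A ∩ B| / (|A| + |B|) for two segmentation masks A and B. A minimal sketch with synthetic binary masks (the masks and grid size are illustrative, not from the study):

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks:
    2 * |A intersect B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Two overlapping 6x6 square "organ" masks on a 10x10 grid.
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True   # 36 pixels
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True   # 36 pixels, shifted by 1
print(round(dice(a, b), 3))  # 2*25 / (36+36) -> 0.694
```

A coefficient of 1.0 means the projected AR image and the reference mask coincide exactly; the study's values of 0.984-0.9908 indicate near-complete overlap.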

20.
Healthc Inform Res ; 24(4): 394-401, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30443429

ABSTRACT

OBJECTIVES: Augmented reality (AR) technology has become widely available and is well suited to various medical applications, since it can effectively visualize intricate anatomical structures inside the human body. This paper describes the procedure for developing an AR app with Unity3D and the Vuforia software development kit and publishing it to a smartphone, for the localization of critical tissues or organs that cannot easily be seen by the naked eye during surgery. METHODS: In this study, Vuforia version 6.5 integrated with the Unity Editor was installed on a desktop computer and configured to develop an Android AR app for the visualization of internal organs. Three-dimensional segmented human organs were extracted from a computed tomography file using Seg3D software and overlaid on a target body surface through the developed app using an artificial marker. RESULTS: To aid beginners in applying AR technology to medical applications, a 3D model of the thyroid and surrounding structures was created from a thyroid cancer patient's DICOM file and visualized on the neck of a medical training mannequin through the developed AR app. The individual organs, including the thyroid, trachea, carotid artery, jugular vein, and esophagus, were localized with the surgeon's Android smartphone. CONCLUSIONS: Vuforia enables even researchers, students, and surgeons without computer vision expertise to develop an AR app easily and to use it to visualize and localize critical internal organs without incision. This could allow AR technology to be utilized extensively for various medical applications.
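The segmentation step that precedes the Unity/Vuforia work can be illustrated with a minimal Hounsfield-unit (HU) threshold sketch. The study used Seg3D interactively and exported the result into Unity, which is not reproduced here; the HU window, synthetic volume, and voxel spacing below are illustrative assumptions, and a real pipeline would load the volume from the patient's DICOM file (e.g., with pydicom):

```python
import numpy as np

# Hypothetical CT volume in Hounsfield units; a real pipeline would read
# it from DICOM slices. Here we synthesize air (-1000 HU) containing an
# 8x8x8-voxel block of soft tissue (~60 HU).
volume = np.full((16, 16, 16), -1000, dtype=np.int16)
volume[4:12, 4:12, 4:12] = 60

# Simple threshold segmentation, analogous to Seg3D's threshold tool:
# keep voxels inside an assumed soft-tissue HU window.
lo_hu, hi_hu = 20, 100
mask = (volume >= lo_hu) & (volume <= hi_hu)

# Voxel count and physical volume, assuming hypothetical 1 mm isotropic spacing.
voxel_volume_mm3 = 1.0
print(mask.sum() * voxel_volume_mm3)  # 512.0 mm^3
```

The resulting binary mask would then be converted to a surface mesh (e.g., marching cubes) and imported into Unity as the 3D organ model tracked by the Vuforia marker.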
