Results 1 - 6 of 6
1.
Sensors (Basel) ; 22(23)2022 Dec 02.
Article in English | MEDLINE | ID: mdl-36502123

ABSTRACT

Ultrasonic imaging logging can visually identify the location, shape, dip angle, and orientation of fractures and holes. The method has not been effectively applied in the field, one of the prime reasons being insufficient results from physical simulation experiments. Laboratory physical simulation of fracture and hole responses can provide a reference for identifying and evaluating underground geological structures. In this work, ultrasonic scanning experiments are conducted on a grooved sandstone plate and a simulated borehole, and the influence of different fractures and holes on the ultrasonic pulse echo is studied. Experimental results show that combining ultrasonic echo amplitude imaging with arrival-time imaging can identify the fracture location, width, depth, and orientation, and can accurately calculate the fracture dip angle. The evaluated fracture parameters are close to those of the physical simulation model. The identification accuracy of the ultrasonic measurement is related to the diameter of the transducer's radiation beam: a single fracture with a width greater than or equal to the beam diameter, and multiple fractures with a spacing greater than or equal to the beam diameter, can be effectively identified.


Subject(s)
Fractures, Bone , Transducers , Humans , Ultrasonography/methods , Computer Simulation , Ultrasonics , Bone Plates
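The pulse-echo ranging and the beam-diameter resolvability criterion described in the abstract above can be sketched as follows (a minimal illustration, not the authors' code; the sound speed of 1,480 m/s is an assumed value for a water-filled borehole):

```python
def echo_depth_mm(two_way_time_us, sound_speed_m_s=1480.0):
    """Convert a two-way pulse-echo travel time (microseconds) to a
    one-way distance in millimetres: d = v * t / 2."""
    return sound_speed_m_s * (two_way_time_us * 1e-6) / 2.0 * 1e3

def fracture_identifiable(width_mm, beam_diameter_mm):
    """Per the reported criterion, a single fracture is effectively
    identified when its width is at least the transducer's radiation
    beam diameter; the same rule applies to the spacing between
    multiple fractures."""
    return width_mm >= beam_diameter_mm
```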
2.
Bioorg Med Chem Lett ; 26(19): 4842-4845, 2016 10 01.
Article in English | MEDLINE | ID: mdl-27524310

ABSTRACT

Two series of novel tricyclic oxazine- and oxazepine-fused quinazolines were designed and synthesized. The in vitro antitumor effect of the title compounds was screened on N87, A431, H1975, BT474, and Calu-3 cell lines. Compared with gefitinib and erlotinib, compounds 1a-1h demonstrated more potent antitumor activities. Several derivatives counteracted EGF-induced phosphorylation of EGFR in cells, with potency comparable to the reference compounds. Compounds 1a-1f and 1h were chosen for further evaluation of in vitro EGFR and HER2 kinase inhibitory activity. Compounds 1c-1f and 1h effectively inhibited the in vitro kinase activity of EGFR and HER2 with efficacy similar to that of gefitinib and erlotinib.


Subject(s)
Quinazolines/chemistry , Quinazolines/pharmacology , Cell Line , ErbB Receptors/antagonists & inhibitors , ErbB Receptors/metabolism , Gefitinib , Humans , Phosphorylation
3.
Article in English | MEDLINE | ID: mdl-38158267

ABSTRACT

OBJECTIVE: The aim of this study was to evaluate a deep convolutional neural network (DCNN) method for the detection and classification of nasopalatine duct cysts (NPDC) and periapical cysts (PAC) on panoramic radiographs. STUDY DESIGN: A total of 1,209 panoramic radiographs with 606 NPDC and 603 PAC were labeled with bounding boxes and divided into training, validation, and test sets at an 8:1:1 ratio. The networks used were EfficientDet-D3, Faster R-CNN, YOLO v5, RetinaNet, and SSD. Mean average precision (mAP) was used to assess performance. Sixty images with no lesion in the anterior maxilla were added to the test set, which was then read both by 2 dentists with no specialist training in radiology (GPs) and by EfficientDet-D3, and the performances were compared. RESULTS: The mAP for each DCNN was: EfficientDet-D3, 93.8%; Faster R-CNN, 90.8%; YOLO v5, 89.5%; RetinaNet, 79.4%; and SSD, 60.9%. The classification performance of EfficientDet-D3 exceeded that of the GPs, with accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of 94.4%, 94.4%, 97.2%, 94.6%, and 97.2%, respectively. CONCLUSIONS: The proposed method achieved high performance for the detection and classification of NPDC and PAC compared with the GPs and shows promise for clinical application.


Subject(s)
Neural Networks, Computer , Radicular Cyst , Radiography, Panoramic , Humans , Radicular Cyst/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods
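The classification metrics quoted in the abstract above (accuracy, sensitivity, specificity, positive and negative predictive value) all follow from a 2x2 confusion matrix; a minimal sketch, not the study's code:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix
    counts: true/false positives (tp/fp) and true/false negatives
    (tn/fn)."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # recall on positives
        "specificity": tn / (tn + fp),   # recall on negatives
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```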
4.
Sci Rep ; 14(1): 11750, 2024 05 23.
Article in English | MEDLINE | ID: mdl-38782964

ABSTRACT

Sex determination is essential for identifying unidentified individuals, particularly in forensic contexts. Traditional methods for sex determination involve manual measurements of skeletal features on CBCT scans. However, these manual measurements are labor-intensive, time-consuming, and error-prone. The purpose of this study was to automatically and accurately determine sex on a CBCT scan using a two-stage anatomy-guided attention network (SDetNet). SDetNet consisted of a 2D frontal sinus segmentation network (FSNet) and a 3D anatomy-guided attention network (SDNet). FSNet segmented frontal sinus regions in the CBCT images and extracted regions of interest (ROIs) near them. Then, the ROIs were fed into SDNet to predict sex accurately. To improve sex determination performance, we proposed multi-channel inputs (MSIs) and an anatomy-guided attention module (AGAM), which encouraged SDetNet to learn differences in the anatomical context of the frontal sinus between males and females. SDetNet showed superior sex determination performance in the area under the receiver operating characteristic curve, accuracy, Brier score, and specificity compared with the other 3D CNNs. Moreover, the results of ablation studies showed a notable improvement in sex determination with the embedding of both MSI and AGAM. Consequently, SDetNet demonstrated automatic and accurate sex determination by learning the anatomical context information of the frontal sinus on CBCT scans.


Subject(s)
Cone-Beam Computed Tomography , Frontal Sinus , Humans , Cone-Beam Computed Tomography/methods , Male , Female , Frontal Sinus/diagnostic imaging , Frontal Sinus/anatomy & histology , Imaging, Three-Dimensional/methods , Adult , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Sex Determination by Skeleton/methods
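The Brier score listed among SDetNet's evaluation metrics above is the mean squared error between predicted probabilities and binary labels (lower is better); a generic sketch, not the authors' implementation:

```python
def brier_score(probs, labels):
    """Mean squared difference between predicted probabilities in
    [0, 1] and binary ground-truth labels (0 or 1)."""
    assert len(probs) == len(labels) and probs
    return sum((p - y) ** 2 for p, y in zip(probs, labels)) / len(probs)
```

A perfectly confident, perfectly correct classifier scores 0.0; an uninformative classifier that always outputs 0.5 scores 0.25.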
5.
Imaging Sci Dent ; 54(1): 81-91, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38571772

ABSTRACT

Purpose: The objective of this study was to propose a deep-learning model for the detection of the mandibular canal on dental panoramic radiographs. Materials and Methods: A total of 2,100 panoramic radiographs (PANs) were collected from 3 different machines: RAYSCAN Alpha (n=700, PAN A), OP-100 (n=700, PAN B), and CS8100 (n=700, PAN C). Initially, an oral and maxillofacial radiologist coarsely annotated the mandibular canals. For deep learning analysis, convolutional neural networks (CNNs) utilizing U-Net architecture were employed for automated canal segmentation. Seven independent networks were trained using training sets representing all possible combinations of the 3 groups. These networks were then assessed using a hold-out test dataset. Results: Among the 7 networks evaluated, the network trained with all 3 available groups achieved an average precision of 90.6%, a recall of 87.4%, and a Dice similarity coefficient (DSC) of 88.9%. The 3 networks trained using each of the 3 possible 2-group combinations also demonstrated reliable performance for mandibular canal segmentation, as follows: 1) PAN A and B exhibited a mean DSC of 87.9%, 2) PAN A and C displayed a mean DSC of 87.8%, and 3) PAN B and C demonstrated a mean DSC of 88.4%. Conclusion: This multi-device study indicated that the examined CNN-based deep learning approach can achieve excellent canal segmentation performance, with a DSC exceeding 88%. Furthermore, the study highlighted the importance of considering the characteristics of panoramic radiographs when developing a robust deep-learning network, rather than depending solely on the size of the dataset.
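The Dice similarity coefficient used above to score canal segmentation can be computed from flattened binary masks as follows (a generic sketch, not the study's pipeline):

```python
def dice_coefficient(pred, truth):
    """DSC = 2|A intersect B| / (|A| + |B|) for two binary masks,
    given as flat sequences of 0/1 values; returns 1.0 when both
    masks are empty."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0
```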

6.
J Clin Med ; 11(12)2022 Jun 09.
Article in English | MEDLINE | ID: mdl-35743380

ABSTRACT

PURPOSE: We investigated whether a deep learning algorithm applied to retinal fundoscopic images could predict cerebral white matter hyperintensity (WMH), as represented by a modified Fazekas scale (FS), on brain magnetic resonance imaging (MRI). METHODS: Participants who had undergone brain MRI and health-screening fundus photography at Hallym University Sacred Heart Hospital between 2010 and 2020 were consecutively included. The subjects were divided based on the presence of WMH, then classified into three groups according to the FS grade (0 vs. 1 vs. 2+) using age matching. Two pre-trained convolutional neural networks were fine-tuned and evaluated for prediction performance using 10-fold cross-validation. RESULTS: A total of 3726 fundus photographs from 1892 subjects were included, of which 905 fundus photographs from 462 subjects were included in the age-matched balanced dataset. In predicting the presence of WMH, the mean area under the receiver operating characteristic curve was 0.736 ± 0.030 for DenseNet-201 and 0.724 ± 0.026 for EfficientNet-B7. For the prediction of FS grade, the mean accuracies reached 41.4 ± 5.7% with DenseNet-201 and 39.6 ± 5.6% with EfficientNet-B7. The deep learning models focused on the macula and retinal vasculature to detect an FS of 2+. CONCLUSIONS: Cerebral WMH might be partially predicted by non-invasive fundus photography via deep learning, which may suggest an eye-brain association.
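The area under the ROC curve reported above for WMH prediction equals the Mann-Whitney probability that a randomly chosen positive case scores higher than a randomly chosen negative case; a minimal sketch (assumed, not the authors' code):

```python
def roc_auc(pos_scores, neg_scores):
    """Mann-Whitney estimate of ROC AUC: the fraction of
    (positive, negative) pairs in which the positive outscores
    the negative, counting ties as half."""
    wins = 0.0
    for sp in pos_scores:
        for sn in neg_scores:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```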
