Results 1 - 20 of 38
1.
Cleft Palate Craniofac J ; 56(8): 1026-1037, 2019 Sep.
Article in English | MEDLINE | ID: mdl-30773047

ABSTRACT

BACKGROUND: Congenital midfacial hypoplasia often requires intensive treatment and is a typical feature of the Binder phenotype and syndromic craniosynostosis. The growth trait of the midfacial skeleton during the early fetal period has been assumed to be critical for such an anomaly. However, previous embryological studies, which relied on 2-dimensional analyses and specimens from the late fetal period, have not been sufficient to reveal this trait. OBJECTIVE: To understand the morphogenesis of the midfacial skeleton in the early fetal period via 3-dimensional quantification of the growth trait and investigation of the developmental association between the growth centers and the midface. METHODS: Magnetic resonance images were obtained from 60 human fetuses in the early fetal period. Three-dimensional shape changes in the craniofacial skeleton with growth were quantified and visualized using geometric morphometrics. Subsequently, the degree of development was computed. Furthermore, the developmental association between the growth centers and the midfacial skeleton was statistically investigated and visualized. RESULTS: The zygoma expanded drastically in the anterolateral direction, and the lateral part of the maxilla developed forward until approximately 13 weeks of gestation. Growth centers such as the nasal septum and the anterior portion of the sphenoid were highly associated with the forward growth of the midfacial skeleton (RV = 0.589; P < .001). CONCLUSIONS: The development of the midface, especially the zygoma, before 13 weeks of gestation played an essential role in midfacial development. Moreover, the growth centers had a strong association with midfacial forward growth before birth.
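The RV value reported above is Escoufier's multivariate association coefficient between two blocks of shape variables, with the P value typically obtained by permutation. The abstract does not give computational details, so the following is only a minimal NumPy sketch of the coefficient for two hypothetical landmark blocks (growth-center landmarks versus midfacial landmarks, one row per specimen):

```python
import numpy as np

def rv_coefficient(x: np.ndarray, y: np.ndarray) -> float:
    """Escoufier's RV coefficient between two blocks of shape variables.

    x, y: arrays of shape (n_specimens, n_variables), e.g. Procrustes-aligned
    landmark coordinates flattened per specimen; rows must refer to the same
    specimens in both blocks.
    """
    xc = x - x.mean(axis=0)            # column-center each block
    yc = y - y.mean(axis=0)
    wx = xc @ xc.T                     # specimen-by-specimen configuration matrices
    wy = yc @ yc.T
    num = np.trace(wx @ wy)
    den = np.sqrt(np.trace(wx @ wx) * np.trace(wy @ wy))
    return float(num / den)

# Hypothetical example: 60 fetuses, 10 growth-center landmarks and
# 20 midfacial landmarks, each landmark with (x, y, z) coordinates.
rng = np.random.default_rng(0)
growth_centers = rng.normal(size=(60, 10 * 3))
midface = rng.normal(size=(60, 20 * 3))
print(rv_coefficient(growth_centers, midface))
```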


Subject(s)
Craniosynostoses , Face , Fetal Development , Maxilla , Maxillofacial Development , Face/embryology , Female , Humans , Maxilla/embryology , Maxilla/growth & development , Morphogenesis , Pregnancy , Zygoma
2.
Int J Legal Med ; 130(5): 1323-8, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27048214

ABSTRACT

In the present study, we evaluated post-mortem lateral cerebral ventricle (LCV) changes using computed tomography (CT). Periodic CT scans, termed "sequential scans," were obtained for three cadavers. The first scan was performed immediately after each body was transferred from the emergency room to the institute of legal medicine; sequential scans were then obtained and evaluated for up to 24 h. The time of death had been determined in the emergency room. The sequential scans enabled us to observe periodic post-mortem changes in the CT images. The series of continuous LCV images obtained up to 24 h (two cases) or 16 h (one case) after death was evaluated. The average Hounsfield units (HU) within the LCVs progressively increased, and LCV volume progressively decreased, over time. The HU of the cerebrospinal fluid (CSF) increased at an individual rate proportional to the post-mortem interval (PMI). Thus, an early longitudinal radiodensity change in the CSF could be a potential indicator of the PMI. Sequential imaging scans reveal post-mortem changes in the CSF space, which may reflect post-mortem brain alterations. Further studies are needed to evaluate the proposed CSF change markers in correlation with other validated PMI indicators.
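The reported relationship, CSF radiodensity increasing roughly in proportion to the PMI, lends itself to a simple per-case linear fit. Below is a minimal sketch with invented numbers (not the study's data) that fits such a trend and inverts it to estimate a PMI from a new HU measurement:

```python
import numpy as np

# Hypothetical sequential-scan data for one cadaver: hours after death vs.
# mean Hounsfield units measured inside the lateral cerebral ventricles.
# These numbers are invented for illustration; they are not the study's data.
pmi_hours = np.array([1.0, 4.0, 8.0, 12.0, 16.0, 20.0, 24.0])
csf_hu = np.array([6.0, 8.5, 11.0, 13.2, 15.8, 18.1, 20.4])

# Fit a per-case linear trend HU = a * PMI + b.
a, b = np.polyfit(pmi_hours, csf_hu, deg=1)
print(f"slope: {a:.2f} HU/h, intercept: {b:.2f} HU")

def estimate_pmi(hu: float) -> float:
    """Invert the fitted trend to estimate the PMI from a new HU value."""
    return (hu - b) / a

print(f"estimated PMI for 14 HU: {estimate_pmi(14.0):.1f} h")
```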


Subject(s)
Cerebral Ventricles/diagnostic imaging , Multidetector Computed Tomography , Postmortem Changes , Aged , Cerebrospinal Fluid/diagnostic imaging , Forensic Pathology , Humans , Imaging, Three-Dimensional , Male , Middle Aged , Myocardial Ischemia , Time Factors
3.
Int J Comput Assist Radiol Surg ; 19(4): 613-623, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38329565

ABSTRACT

PURPOSE: This study proposes a detection support system for primary and metastatic lesions of prostate cancer using 18F-PSMA-1007 positron emission tomography/computed tomography (PET/CT) images together with non-image information, including patient metadata and the location of the input slice image. METHODS: A convolutional neural network with condition generators and feature-wise linear modulation (FiLM) layers was employed to allow input of not only PET/CT images but also non-image information, namely, the Gleason score, a flag indicating pre- or post-prostatectomy status, and the normalized z-coordinate of the input slice. We explored the insertion position of the FiLM layers to optimize the conditioning of the network on the non-image information. RESULTS: 18F-PSMA-1007 PET/CT images were collected from 163 patients with prostate cancer and applied to the proposed system in a threefold cross-validation manner to evaluate its performance. The proposed system achieved a Dice score of 0.5732 (per case) and a sensitivity of 0.8200 (per lesion), which are 3.87 and 4.16 points higher, respectively, than those of the network without non-image information. CONCLUSION: This study demonstrated the effectiveness of using non-image information, including patient metadata and the location of the input slice image, in the detection of prostate cancer from 18F-PSMA-1007 PET/CT images. Improving the sensitivity for inactive and small lesions remains a future challenge.
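FiLM conditioning is a generic, published mechanism: a small generator maps the condition vector to per-channel scale and shift parameters that modulate a feature map. The following PyTorch sketch illustrates the idea under that general formulation; it is not the authors' implementation, and the arrangement of the three-element condition vector (Gleason score, pre/post-prostatectomy flag, normalized z-coordinate) is only for illustration:

```python
import torch
import torch.nn as nn

class FiLMLayer(nn.Module):
    """Feature-wise linear modulation of a 2-D feature map.

    cond_dim: size of the non-image condition vector (e.g., Gleason score,
    pre/post-prostatectomy flag, normalized z-coordinate of the slice).
    """
    def __init__(self, cond_dim: int, num_channels: int):
        super().__init__()
        # Condition generator: predicts a scale (gamma) and shift (beta)
        # for every feature channel from the condition vector.
        self.generator = nn.Linear(cond_dim, 2 * num_channels)

    def forward(self, feat: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        gamma, beta = self.generator(cond).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)   # (N, C, 1, 1)
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return gamma * feat + beta

# Usage: modulate a 64-channel feature map with a 3-element condition vector.
film = FiLMLayer(cond_dim=3, num_channels=64)
features = torch.randn(2, 64, 32, 32)
condition = torch.tensor([[7.0, 1.0, 0.45],
                          [8.0, 0.0, 0.60]])
out = film(features, condition)
print(out.shape)  # torch.Size([2, 64, 32, 32])
```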


Subject(s)
Positron Emission Tomography Computed Tomography , Prostatic Neoplasms , Male , Humans , Positron Emission Tomography Computed Tomography/methods , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/surgery , Prostatic Neoplasms/pathology , Prostatectomy
4.
Int J Comput Assist Radiol Surg ; 19(8): 1527-1536, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38625446

ABSTRACT

PURPOSE: The quality and bias of annotations by annotators (e.g., radiologists) affect the performance of computer-aided detection (CAD) software that uses machine learning. We hypothesized that differences in radiologists' years of experience in image interpretation contribute to annotation variability. In this study, we focused on how the performance of CAD software changes when it is retrained with cases annotated by radiologists of varying experience. METHODS: We used two types of CAD software: one for lung nodule detection in chest computed tomography images and one for cerebral aneurysm detection in magnetic resonance angiography images. Twelve radiologists with different years of experience independently annotated the lesions, and the performance changes were investigated by retraining the CAD software twice, each time adding the cases annotated by each radiologist. Additionally, we investigated the effects of retraining using integrated annotations from multiple radiologists. RESULTS: The performance of the CAD software after retraining differed among the annotating radiologists. In some cases, the performance was degraded compared with that of the initial software. Retraining using integrated annotations showed different performance trends depending on the target CAD software; notably, in cerebral aneurysm detection, the performance decreased compared with using annotations from a single radiologist. CONCLUSIONS: Although the performance of the CAD software after retraining varied among the annotating radiologists, no direct correlation with their experience was found. The performance trends differed according to the type of CAD software when integrated annotations from multiple radiologists were used.
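The retraining protocol can be summarized as a simple loop over annotating radiologists. The sketch below uses hypothetical stand-ins for the CAD software's retraining and evaluation interfaces, which are not described in the abstract:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class RetrainResult:
    """Hypothetical per-round performance summary of the CAD software."""
    sensitivity: float
    false_positives_per_case: float

def run_retraining_study(
    base_model,
    annotations_by_radiologist: Dict[str, List],
    retrain: Callable,          # retrain(model, extra_cases) -> model (stand-in)
    evaluate: Callable,         # evaluate(model) -> RetrainResult (stand-in)
    rounds: int = 2,
) -> Dict[str, List[RetrainResult]]:
    """Retrain the CAD model 'rounds' times per annotating radiologist and
    record how detection performance changes after each round."""
    results: Dict[str, List[RetrainResult]] = {}
    for radiologist, cases in annotations_by_radiologist.items():
        model = base_model
        history: List[RetrainResult] = []
        for _ in range(rounds):
            model = retrain(model, cases)
            history.append(evaluate(model))
        results[radiologist] = history
    return results
```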


Subject(s)
Intracranial Aneurysm , Radiologists , Software , Tomography, X-Ray Computed , Humans , Intracranial Aneurysm/diagnostic imaging , Intracranial Aneurysm/diagnosis , Tomography, X-Ray Computed/methods , Diagnosis, Computer-Assisted/methods , Clinical Competence , Magnetic Resonance Angiography/methods , Machine Learning , Observer Variation , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/diagnosis , Image Interpretation, Computer-Assisted/methods , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/diagnosis
5.
Int J Comput Assist Radiol Surg ; 19(9): 1699-1711, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39088129

ABSTRACT

PURPOSE: This study proposes a process for detecting slices with bone marrow edema (BME), a typical finding of axial spondyloarthritis (axSpA), using MRI scans as the input. The process does not require manual input of regions of interest (ROIs); for each slice it judges whether BME is present and outputs the location of the edema as the rationale for the judgment. METHODS: First, the signal intensity of the MRI scans of the sacroiliac joint was normalized to reduce the variation in signal values between scans. Next, slices containing the synovial joint were extracted using a slice selection network. Finally, a BME slice detection network determines the presence or absence of BME in each slice and outputs the location of the BME. RESULTS: The proposed method was applied to 86 MRI scans collected from 15 hospitals in Japan. The average absolute error of the slice selection process, that is, the misalignment of the upper and lower bounds of the synovial joint range, was 1.49 slices. The accuracy, sensitivity, and specificity of the BME slice detection network were 0.905, 0.532, and 0.974, respectively. CONCLUSION: This paper proposes a process to detect slices with BME, together with the BME location as the rationale for the judgment, from an MRI scan, and shows its effectiveness using 86 MRI scans. In the future, we plan to develop processes for detecting other findings, such as bone erosion, from MRI scans, followed by the development of a diagnostic support system.
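The pipeline is thus normalization, slice selection, then per-slice detection. The sketch below illustrates that flow; the percentile-based normalization is an assumption for illustration, as the abstract does not specify the normalization method, and the two networks are treated as opaque callables:

```python
import numpy as np

def normalize_scan(volume: np.ndarray, lower_pct: float = 1.0,
                   upper_pct: float = 99.0) -> np.ndarray:
    """Percentile-based intensity normalization for one MRI volume.

    The abstract states only that signal intensity is normalized to reduce
    variation between scans; clipping to the [lower_pct, upper_pct] range and
    rescaling to [0, 1] is one common choice, used here purely for illustration.
    """
    lo, hi = np.percentile(volume, [lower_pct, upper_pct])
    clipped = np.clip(volume, lo, hi)
    return (clipped - lo) / (hi - lo + 1e-8)

def detect_bme(volume: np.ndarray, slice_selector, bme_detector):
    """Pipeline skeleton: normalize, select synovial-joint slices, then run
    the per-slice BME detector. Assumes volume has shape (slices, H, W);
    slice_selector and bme_detector are the two trained networks."""
    norm = normalize_scan(volume)
    first, last = slice_selector(norm)          # slice range containing the synovial joint
    findings = []
    for idx in range(first, last + 1):
        has_bme, location_map = bme_detector(norm[idx])
        findings.append((idx, has_bme, location_map))
    return findings
```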


Subject(s)
Axial Spondyloarthritis , Bone Marrow Diseases , Edema , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Edema/diagnostic imaging , Edema/diagnosis , Bone Marrow Diseases/diagnostic imaging , Bone Marrow Diseases/diagnosis , Axial Spondyloarthritis/diagnosis , Axial Spondyloarthritis/diagnostic imaging , Male , Female , Bone Marrow/diagnostic imaging , Bone Marrow/pathology , Sacroiliac Joint/diagnostic imaging , Sacroiliac Joint/pathology , Sensitivity and Specificity , Adult , Middle Aged
6.
Int J Comput Assist Radiol Surg ; 18(2): 289-301, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36251150

ABSTRACT

PURPOSE: This study proposes a method to draw attention toward specific radiological findings of coronavirus disease 2019 (COVID-19) in CT images, such as bilateral ground-glass opacity (GGO) and/or consolidation, in order to improve the classification accuracy for input CT images. METHODS: We propose an induction mask that combines a similarity mask and a bilateral mask. The similarity mask guides attention to regions with similar appearances, and the bilateral mask induces attention to the opposite side of the lung to capture bilaterally distributed lesions. An induction mask for pleural effusion is also proposed. A ResNet-18 with non-local blocks was trained by minimizing a loss function defined with the induction mask. RESULTS: The four-class classification accuracy on CT images of 1504 cases was 0.6443, where class 1 was the typical appearance of COVID-19 pneumonia, class 2 the indeterminate appearance, class 3 the atypical appearance, and class 4 negative for pneumonia. The four classes were also divided into two subgroups, and the accuracies of the COVID-19 and pneumonia classifications were 0.8205 and 0.8604, respectively. The accuracy of both the four-class and COVID-19 classifications improved when attention was paid to pleural effusion. CONCLUSION: The proposed attention induction method was effective for the classification of CT images of COVID-19 patients. Improving the classification accuracy of class 3 by focusing on features specific to that class remains a topic for future work.
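The bilateral mask can be pictured as a left-right mirroring of the lesion region, and attention induction as a penalty on the mismatch between the network's attention map and the induction mask. The sketch below is an illustrative simplification along those lines, not the authors' exact loss:

```python
import torch
import torch.nn.functional as F

def bilateral_mask(lesion_mask: torch.Tensor) -> torch.Tensor:
    """Mirror a lesion mask across the body midline so that attention is also
    induced on the opposite lung. Assumes the left-right axis is the last
    dimension and the patient is roughly centered; an illustrative
    simplification, not the authors' exact construction."""
    return torch.flip(lesion_mask, dims=[-1])

def attention_induction_loss(attention: torch.Tensor,
                             induction: torch.Tensor) -> torch.Tensor:
    """Penalize mismatch between the network's attention map and the
    induction mask (e.g., the union of similarity and bilateral masks)."""
    attention = attention / (attention.sum(dim=(-2, -1), keepdim=True) + 1e-8)
    induction = induction / (induction.sum(dim=(-2, -1), keepdim=True) + 1e-8)
    return F.mse_loss(attention, induction)

# Toy usage: batch of 2 single-channel 64x64 attention maps and lesion masks.
att = torch.rand(2, 1, 64, 64)
lesions = (torch.rand(2, 1, 64, 64) > 0.95).float()
mask = torch.clamp(lesions + bilateral_mask(lesions), max=1.0)
print(attention_induction_loss(att, mask))
```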


Subject(s)
COVID-19 , Pleural Effusion , Pneumonia , Humans , SARS-CoV-2 , Tomography, X-Ray Computed/methods , Retrospective Studies , Lung/diagnostic imaging , Pleural Effusion/diagnostic imaging
7.
Sci Rep ; 12(1): 20840, 2022 Dec 2.
Article in English | MEDLINE | ID: mdl-36460708

ABSTRACT

This study presents a novel framework for classifying and visualizing pneumonia induced by COVID-19 from CT images. Although many image classification methods using deep learning have been proposed, standard classification methods cannot always be used in medical imaging because images belonging to the same category vary depending on the progression of the symptoms and the size of the inflamed area. In addition, it is essential that the models be transparent and explainable, allowing health care providers to trust them and avoid mistakes. In this study, we propose a classification method using contrastive learning and an attention mechanism. Contrastive learning reduces the distance between images of the same category and generates a better feature space for classification. The attention mechanism emphasizes important areas in the image and visualizes the locations related to the classification. Through experiments on two types of classification using threefold cross-validation, we confirmed that the classification accuracy was significantly improved and that a more detailed visual explanation was achieved in comparison with conventional methods.
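As an illustration of the contrastive component, the following sketch implements a generic supervised contrastive loss that pulls embeddings of same-category images together; the paper's exact loss formulation may differ:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """Generic supervised contrastive loss: embeddings of images with the
    same label are pulled together, other pairs pushed apart.
    features: (N, D) embeddings; labels: (N,) integer class labels."""
    z = F.normalize(features, dim=1)
    sim = z @ z.T / temperature                       # pairwise cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))   # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    # Average negative log-probability over each anchor's positive pairs.
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_counts
    return loss.mean()

# Toy usage: 8 image embeddings of dimension 16 from two categories.
feats = torch.randn(8, 16)
labs = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(supervised_contrastive_loss(feats, labs))
```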


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , Health Personnel , Trust , Research Design , Tomography, X-Ray Computed
8.
Int J Comput Assist Radiol Surg ; 16(12): 2251-2260, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34478048

ABSTRACT

PURPOSE: A hotspot of a bone metastatic lesion in a whole-body bone scintigram often appears as a left-right asymmetry. The purpose of this study is to present a network that evaluates the bilateral difference of a whole-body bone scintigram and to integrate it with our previous network, which extracts hotspots from a pair of anterior and posterior images. METHODS: The input of the proposed network is a pair of scintigrams: the original image and a version flipped with respect to the body axis. The paired scintigrams are processed by a butterfly-type network (BtrflyNet). The output of this network is then combined with the output of another BtrflyNet, which processes the pair of anterior and posterior scintigrams, by means of a convolutional layer optimized using training images. RESULTS: We evaluated the performance of the combined networks, comprising two BtrflyNets followed by a convolutional layer for integration, in terms of hotspot extraction accuracy using 1330 bone scintigrams of 665 patients with prostate cancer. A threefold cross-validation experiment showed that the average number of false positive regions was reduced from 4.30 to 2.13 for anterior and from 4.71 to 2.62 for posterior scintigrams compared with our previous model. CONCLUSIONS: This study presented a network for hotspot extraction of bone metastatic lesions that evaluates the bilateral difference of a whole-body bone scintigram. When combined with the previous network, which extracts hotspots from a pair of anterior and posterior scintigrams, false positives were reduced by nearly half compared with our previous model.
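The integration step can be pictured as a learned fusion of the two networks' hotspot maps. The abstract states only that a convolutional layer combines the outputs; the sketch below assumes a 1x1 convolution over the concatenated maps purely for illustration:

```python
import torch
import torch.nn as nn

class HotspotFusion(nn.Module):
    """Fuse the hotspot maps of two BtrflyNets with a learned convolution.

    One network sees the anterior/posterior pair, the other the
    original/left-right-flipped pair. A 1x1 convolution over the
    concatenated maps is assumed here; the paper does not specify the
    layer's configuration.
    """
    def __init__(self, channels_per_net: int = 1):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels_per_net, channels_per_net, kernel_size=1)

    def forward(self, map_ant_post: torch.Tensor,
                map_bilateral: torch.Tensor) -> torch.Tensor:
        stacked = torch.cat([map_ant_post, map_bilateral], dim=1)
        return torch.sigmoid(self.fuse(stacked))

# Toy usage: batch of 2 single-channel hotspot probability maps, 256x128.
fusion = HotspotFusion()
out = fusion(torch.rand(2, 1, 256, 128), torch.rand(2, 1, 256, 128))
print(out.shape)  # torch.Size([2, 1, 256, 128])
```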


Subject(s)
Bone and Bones , Prostatic Neoplasms , Humans , Male , Prostatic Neoplasms/diagnostic imaging
9.
Cell Rep ; 37(6): 109966, 2021 Nov 9.
Article in English | MEDLINE | ID: mdl-34758322

ABSTRACT

Sensory processing is essential for motor control. Climbing fibers from the inferior olive transmit sensory signals to Purkinje cells, but how these signals are represented in the cerebellar cortex remains elusive. To examine the olivocerebellar organization of the mouse brain, we perform quantitative Ca2+ imaging to measure complex spikes (CSs) evoked by climbing fiber inputs over the entire dorsal surface of the cerebellum simultaneously. The surface is divided into approximately 200 segments, each composed of ∼100 Purkinje cells that fire CSs synchronously. Our in vivo imaging reveals that, although stimulation of each of the four limb muscles individually elicits similar global CS responses across nearly all segments, the timing and location of a stimulus can be derived by Bayesian inference from the coordinated activation and inactivation of multiple segments on a single-trial basis. We propose that the cerebellum performs segment-based, distributed-population coding that represents the conditional probability of sensory events.
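The claim that stimulus location and timing can be read out by Bayesian inference from segment activity can be illustrated with a minimal naive Bayes decoder over binary segment responses. All probabilities below are invented for illustration, and the conditional-independence assumption is a simplification; the paper's actual decoding model may differ:

```python
import numpy as np

# Hypothetical decoder: infer which limb was stimulated from the binary
# activation pattern of cerebellar segments on a single trial.
limbs = ["left forelimb", "right forelimb", "left hindlimb", "right hindlimb"]
prior = np.full(4, 0.25)                     # uniform prior over stimuli

# p_active[s, k]: probability that segment k fires a complex spike given
# stimulus s (here 4 stimuli x 6 segments, values invented).
p_active = np.array([
    [0.9, 0.8, 0.2, 0.1, 0.5, 0.4],
    [0.2, 0.1, 0.9, 0.8, 0.5, 0.4],
    [0.6, 0.5, 0.3, 0.2, 0.9, 0.1],
    [0.3, 0.2, 0.6, 0.5, 0.1, 0.9],
])

def posterior(trial: np.ndarray) -> np.ndarray:
    """Posterior over stimuli for one trial of binary segment responses,
    assuming conditional independence of segments given the stimulus."""
    lik = np.prod(np.where(trial, p_active, 1.0 - p_active), axis=1)
    post = prior * lik
    return post / post.sum()

trial = np.array([1, 1, 0, 0, 1, 0])         # observed single-trial pattern
for limb, p in zip(limbs, posterior(trial)):
    print(f"{limb}: {p:.3f}")
```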


Subject(s)
Action Potentials , Calcium/metabolism , Cerebellum/physiology , Nerve Net/physiology , Olivary Nucleus/physiology , Purkinje Cells/physiology , Sense Organs/physiology , Animals , Bayes Theorem , Cerebellum/cytology , Female , Male , Mice , Mice, Inbred ICR , Nerve Net/cytology , Olivary Nucleus/cytology , Purkinje Cells/cytology , Sense Organs/cytology
10.
Int J Comput Assist Radiol Surg ; 16(11): 1875-1887, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34309781

ABSTRACT

PURPOSE: The purpose of this study was to develop a deep learning-based computer-aided diagnosis (CADx) system for skin disease classification using photographic images of patients. The targets are 59 skin diseases, including localized and diffuse diseases, captured with photographic cameras, resulting in highly diverse images in terms of disease appearance and photographic conditions. METHODS: ResNet-18 is used as the baseline classification model and is reinforced by metric learning, which boosts generalization by avoiding overfitting of the training data and increases the reliability of the CADx system for dermatologists. Patient-wise classification is performed by aggregating the inference vectors of all of the patient's input images. RESULTS: An experiment using 70,196 images of 13,038 patients demonstrated that classification accuracy was significantly improved by both metric learning and aggregation, resulting in patient-level accuracies of 0.579 for Top-1, 0.793 for Top-3, and 0.863 for Top-5. The McNemar test showed that the improvements achieved by the proposed method were statistically significant. CONCLUSION: This study presents a deep learning-based classification of 59 skin diseases using multiple photographic images of a patient. The experimental results demonstrated that the proposed classification, reinforced by metric learning and aggregation of multiple input images, was effective for patients with diverse skin diseases and imaging conditions.
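Patient-wise aggregation of per-image inference vectors can be sketched as follows; averaging the per-image softmax outputs and taking the Top-k classes is assumed here as the aggregation rule, since the abstract states only that the inference vectors of all patient images are aggregated:

```python
import torch

def patient_topk(image_logits: torch.Tensor, k: int = 5):
    """Patient-wise classification by aggregating per-image inference vectors.

    image_logits: (n_images, n_classes) logits for all photographs of one
    patient. Averaging the per-image softmax vectors is an assumed
    aggregation rule for illustration.
    """
    probs = torch.softmax(image_logits, dim=1)   # per-image class probabilities
    patient_probs = probs.mean(dim=0)            # aggregate over the patient's images
    top = torch.topk(patient_probs, k)
    return top.indices.tolist(), top.values.tolist()

# Toy usage: one patient with 4 photographs and 59 candidate diseases.
logits = torch.randn(4, 59)
classes, scores = patient_topk(logits, k=5)
print(classes, [f"{s:.3f}" for s in scores])
```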


Subject(s)
Deep Learning , Skin Diseases , Skin Neoplasms , Humans , Photography , Reproducibility of Results , Skin Diseases/diagnostic imaging