Results 1 - 6 of 6
1.
BMC Cancer ; 24(1): 128, 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38267924

ABSTRACT

BACKGROUND: Sarcopenia has been identified as a potential negative prognostic factor in cancer patients. In this study, our objective was to investigate the relationship between sarcopenia, assessed by the masseter muscle volume measured on computed tomography (CT) images, and the life expectancy of patients with oral cancer. We also developed a deep learning model to automatically extract the masseter muscle volume and investigated its association with the life expectancy of oral cancer patients. METHODS: To develop the learning model for masseter muscle volume, we used manually extracted data from CT images of 277 patients. We assessed the association between manually extracted masseter muscle volume and the life expectancy of oral cancer patients. Additionally, we compared the correlation between the manual and automatic extraction groups in the masseter muscle volume learning model. RESULTS: Our findings revealed a significant association between manually extracted masseter muscle volume on CT images and the life expectancy of patients with oral cancer. Notably, the manual and automatic extraction groups in the masseter muscle volume learning model showed a high correlation. Furthermore, the masseter muscle volume automatically extracted using the developed learning model exhibited a strong association with life expectancy. CONCLUSIONS: This sarcopenia assessment method is useful for predicting the life expectancy of patients with oral cancer. In the future, it will be important to validate and analyze various factors within the oral surgery field, extending beyond cancer patients.
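The abstract does not describe the volume computation itself, but once a binary 3-D mask of the masseter muscle is available (whether traced manually or produced by the deep learning model), the final volume step reduces to counting foreground voxels and multiplying by the voxel size. A minimal sketch of that step only (the function name and list-of-lists mask layout are illustrative assumptions, not the authors' code):

```python
def muscle_volume_cm3(mask, voxel_mm3):
    """Volume of a binary 3-D segmentation mask in cm^3.

    mask      -- nested lists [slice][row][col] of 0/1 voxel labels
    voxel_mm3 -- physical volume of one voxel in mm^3
    """
    # Count foreground voxels across all slices.
    n_voxels = sum(v for slice_ in mask for row in slice_ for v in row)
    return n_voxels * voxel_mm3 / 1000.0  # convert mm^3 -> cm^3
```

For example, a mask containing 6 foreground voxels of 0.25 mm³ each yields 0.0015 cm³.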


Subject(s)
Deep Learning , Mouth Neoplasms , Sarcopenia , Humans , Prognosis , Masseter Muscle/diagnostic imaging , Sarcopenia/diagnostic imaging , Mouth Neoplasms/diagnostic imaging
2.
Cancer Med ; 12(5): 5312-5322, 2023 03.
Article in English | MEDLINE | ID: mdl-36307918

ABSTRACT

BACKGROUND: Although cervical lymph node metastasis is an important prognostic factor for oral cancer, occult metastases remain undetected even by diagnostic imaging. We developed a learning model to predict lymph node metastasis in resected specimens of tongue cancer by classifying the level of immunohistochemical (IHC) staining for angiogenesis- and lymphangiogenesis-related proteins using a multilayer perceptron neural network (MNN). METHODS: We obtained a dataset of 76 patients with squamous cell carcinoma of the tongue who had undergone primary tumor resection. All 76 specimens were IHC stained for six such proteins (VEGF-C, VEGF-D, NRP1, NRP2, CCR7, and SEMA3E), and 456 slides were prepared. We scored the staining levels visually on all slides. We created virtual slides (4560 images), and the accuracy of the MNN model was verified by comparing it with a hue-saturation (HS) histogram, which quantifies the manually determined visual information. RESULTS: The accuracy of the training model with the MNN was 98.6%; when the training images were converted to grayscale, the accuracy decreased to 52.9%. This indicates that our MNN evaluates the level of staining rather than the morphological features of the IHC images. Multivariate analysis revealed that CCR7 staining level and T classification were independent factors associated with the presence of cervical lymph node metastasis in both the HS histograms and the MNN. CONCLUSION: These results suggest that IHC assessment using an MNN may be useful for identifying lymph node metastasis in patients with tongue cancer.
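As a rough illustration of the hue-saturation descriptor the MNN was compared against, the sketch below bins RGB pixels into a 2-D hue-saturation histogram and discards the value (brightness) channel, so the descriptor responds to stain colour rather than to morphology. This is an assumed minimal implementation, not the authors' code; the bin counts are arbitrary:

```python
import colorsys

def hs_histogram(pixels, h_bins=8, s_bins=4):
    """2-D hue-saturation histogram of RGB pixels (0-255 per channel).

    The value (brightness) channel is discarded, so the descriptor
    reflects colour, not morphological detail.
    """
    hist = [[0] * s_bins for _ in range(h_bins)]
    for r, g, b in pixels:
        h, s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hi = min(int(h * h_bins), h_bins - 1)  # clamp h == 1.0 into last bin
        si = min(int(s * s_bins), s_bins - 1)  # clamp s == 1.0 into last bin
        hist[hi][si] += 1
    return hist
```

A fully saturated red pixel lands in the first hue bin and last saturation bin, while a grey pixel (zero saturation) lands in the first saturation bin regardless of brightness.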


Subject(s)
Mouth Neoplasms , Tongue Neoplasms , Humans , Tongue Neoplasms/pathology , Lymphatic Metastasis/pathology , Receptors, CCR7 , Lymph Nodes/pathology , Mouth Neoplasms/pathology
3.
PLoS One ; 17(11): e0277761, 2022.
Article in English | MEDLINE | ID: mdl-36395291

ABSTRACT

Humpback whales in the western North Pacific are considered endangered due to their small population size and a lack of information. Although previous studies have reported interchanges between regions within a population, the relationships among the geographic regions of the population in Japan are poorly understood. Interchanges were analyzed using 3,532 fluke photo IDs of unique individuals obtained from four areas in Japan: Hokkaido, 6 IDs (2009-2019); Ogasawara, 1,477 IDs from two organizations, (1) Everlasting Nature of Asia (1987-2020) and (2) the Ogasawara Whale Watching Association (1990-2020); Amami, 373 IDs (1992-1994, 2005-2016); and Okinawa, 1,676 IDs (1990-2018). The ID matching was conducted using an automated system with 80.9% matching accuracy. Interchange and within-region return indices were also calculated. The numbers of matches and interchange indices by location pair were: Hokkaido-Okinawa (3, 0.31), Amami-Ogasawara (36, 0.06), Amami-Okinawa (222, 0.37), and Okinawa-Ogasawara (225, 0.08). Interchange indices among Japanese areas were much higher than the indices between Ogasawara/Okinawa and Hawaii (0.01) or Mexico (0.00) reported in previous studies, indicating that the Japanese regions are utilized by the same population. At the same time, the frequency of interchanges among the three breeding areas varies, and the high within-region return indices in the respective breeding areas suggest some level of site fidelity of the whales in each area. These results indicate the existence of several groups within the population, which may be divided into at least two groups based on geographical features: one tends to utilize Ogasawara and the Mariana Archipelago; the other utilizes Amami, Okinawa, and the Philippines, migrating along the Ryukyu and Philippine Trenches. The matching results also suggest that Hokkaido may be utilized as a corridor between northern feeding areas and southern breeding areas, at least by individuals migrating to the Okinawa area.
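The abstract does not define its interchange index, but a definition commonly used in humpback photo-ID studies (the number of matches between two areas normalized by the product of the two catalogue sizes, scaled by 1,000) approximately reproduces the reported values from the catalogue sizes given above. A sketch under that assumption:

```python
def interchange_index(n_matches, n_ids_a, n_ids_b, scale=1000):
    """Photo-ID interchange index between two areas.

    Matches are normalized by the product of the two catalogue
    sizes, then scaled (x1000 here) for readability.
    """
    return scale * n_matches / (n_ids_a * n_ids_b)
```

For example, Amami-Okinawa gives 1000 × 222 / (373 × 1676) ≈ 0.36, close to the reported 0.37; small discrepancies are expected if the published indices were computed on period-matched subsets of the catalogues.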


Subject(s)
Humpback Whale , Animals , Japan , Movement , Geography , Asia
4.
Orthod Craniofac Res ; 24 Suppl 2: 53-58, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34145974

ABSTRACT

AIM: To estimate the number of cephalograms needed for re-learning when images of different quality are introduced, as when artificial intelligence (AI) systems are deployed in a clinic. SETTINGS AND SAMPLE POPULATION: A total of 2385 digital lateral cephalograms (University data [1785]; Clinic F [300]; Clinic N [300]) were used. From the university data, the data from clinics F and N, and the combined data from clinics F and N, 50 cephalograms each were randomly selected to test the system's performance (test data O, F, N and FN). MATERIALS AND METHODS: To examine the landmark-recognition ability of the AI system developed in Part I (the original system) on other clinical data, test data F, N and FN were applied to the original system, and success rates were calculated. Then, to determine the approximate number of cephalograms needed for re-learning with images of different quality, 85 and 170 cephalograms were randomly selected from each group and used for re-learning of the original system (F85, F170, N85, N170, FN85 and FN170). To estimate the number of cephalograms needed for re-learning, we examined the changes in the success rates of the re-trained systems and compared them with the original system. Re-trained systems F85 and F170 were evaluated with test data F; N85 and N170 with test data N; and FN85 and FN170 with test data FN. RESULTS: For the systems using F, N and FN data, it was determined that 85, 170 and 85 cephalograms, respectively, were required for re-learning. CONCLUSIONS: The number of cephalograms needed for re-learning with images of different quality was estimated.


Subject(s)
Artificial Intelligence , Cephalometry , Humans , Radiography
5.
Orthod Craniofac Res ; 24 Suppl 2: 43-52, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34021976

ABSTRACT

OBJECTIVES: To determine whether AI systems that recognize cephalometric landmarks can be applied to various patient groups and to examine the patient-related factors associated with identification errors. SETTING AND SAMPLE POPULATION: The present retrospective cohort study analysed digital lateral cephalograms obtained from 1785 Japanese orthodontic patients. Patients were categorized into eight subgroups according to dental age, cleft lip and/or palate, orthodontic appliance use and overjet. MATERIALS AND METHODS: An AI system that automatically recognizes anatomic landmarks on lateral cephalograms was used. Thirty cephalograms in each subgroup were randomly selected and used to test the system's performance. The remaining cephalograms were used for system learning. The success rates in landmark recognition were evaluated using confidence ellipses with α = 0.99 for each landmark. The selection of test samples, learning of the system and evaluation of the system were repeated five times for each subgroup. The mean success rate and identification error were calculated. Factors associated with identification errors were examined using a multiple linear regression model. RESULTS: The success rate and error varied among subgroups, ranging from 85% to 91% and from 1.32 mm to 1.50 mm, respectively. Cleft lip and/or palate was a significant factor associated with greater identification errors (P < .05), whereas dental age, orthodontic appliances and overjet were not. CONCLUSION: Artificial intelligence systems that recognize cephalometric landmarks could be applied to various patient groups. Patient-related errors were found in patients with cleft lip and/or palate.
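A standard way to implement the success criterion described above (a detected landmark counts as successful if it falls inside the α = 0.99 confidence ellipse of reference positions) is a Mahalanobis-distance test against the chi-square quantile with 2 degrees of freedom. The sketch below assumes the ellipse is parameterized by the mean and 2×2 covariance of the reference landmark positions; it is an illustration, not the authors' implementation:

```python
CHI2_2DF_99 = 9.210  # chi-square quantile, 2 d.f., alpha = 0.99

def inside_confidence_ellipse(point, mean, cov):
    """True if `point` lies inside the 99% confidence ellipse.

    point -- (x, y) detected landmark position
    mean  -- (mx, my) mean of reference positions
    cov   -- 2x2 covariance [[a, b], [c, d]] of reference positions
             (must be non-singular)
    """
    dx, dy = point[0] - mean[0], point[1] - mean[1]
    a, b = cov[0]
    c, d = cov[1]
    det = a * d - b * c
    # Squared Mahalanobis distance via the explicit 2x2 inverse.
    m2 = (d * dx * dx - (b + c) * dx * dy + a * dy * dy) / det
    return m2 <= CHI2_2DF_99
```

With an identity covariance, the ellipse is a circle of radius √9.21 ≈ 3.03, so a detection 3 units from the mean still counts as a success while one 4 units away does not.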


Subject(s)
Cleft Lip , Cleft Palate , Artificial Intelligence , Cephalometry , Cleft Lip/diagnostic imaging , Cleft Palate/diagnostic imaging , Humans , Retrospective Studies
6.
Front Neurol ; 5: 8, 2014.
Article in English | MEDLINE | ID: mdl-24550883

ABSTRACT

UNLABELLED: When faced with visual uncertainty during motor performance, humans rely more on predictive forward models and proprioception and attribute less importance to the ambiguous visual feedback. Though disrupted predictive control is typical of patients with cerebellar disease, the sensorimotor deficits associated with the involuntary and often unconscious nature of l-DOPA-induced dyskinesias in Parkinson's disease (PD) suggest that dyskinetic subjects may also demonstrate impaired predictive motor control. METHODS: We investigated the motor performance of 9 dyskinetic and 10 non-dyskinetic PD subjects on and off l-DOPA, and of 10 age-matched control subjects, during a large-amplitude, overlearned, visually guided tracking task. Ambiguous visual feedback was introduced by adding "jitter" to a moving target that followed a Lissajous pattern. Root mean square (RMS) tracking error was calculated, and ANOVA, robust multivariate linear regression, and linear dynamical system analyses were used to determine the contribution of speed and ambiguity to tracking performance. RESULTS: Increasing target ambiguity and speed contributed significantly more to the RMS error of dyskinetic subjects off medication. l-DOPA improved the RMS tracking performance of both PD groups. At higher speeds, controls and PD subjects without dyskinesia were able to effectively de-weight ambiguous visual information. CONCLUSION: The visually guided motor performance of PD subjects degrades with visual jitter and speed of movement to a greater degree than that of age-matched controls. However, there are fundamental differences between PD subjects with and without dyskinesia: subjects without dyskinesia are generally slow and less responsive to dynamic changes in motor task requirements, whereas in subjects with dyskinesia there was a trade-off between overall performance and inappropriate reliance on ambiguous visual feedback. This is likely associated with functional changes in posterior parietal-ponto-cerebellar pathways.
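The RMS tracking error used above is simply the root mean square of the Euclidean distance between target and cursor positions sampled at the same time points. A minimal sketch (trajectories as lists of (x, y) tuples; the function name is an assumption, not the authors' code):

```python
import math

def rms_tracking_error(target, response):
    """RMS Euclidean distance between a target trajectory and the
    subject's cursor trajectory, sampled at the same time points."""
    if len(target) != len(response):
        raise ValueError("trajectories must have equal length")
    sq = [(tx - rx) ** 2 + (ty - ry) ** 2
          for (tx, ty), (rx, ry) in zip(target, response)]
    return math.sqrt(sum(sq) / len(sq))
```

Perfect tracking gives an error of 0; a constant (3, 4) offset between target and cursor gives an RMS error of exactly 5.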
