1.
Sci Rep. 2024 Sep 30;14(1):22689.
Article in English | MEDLINE | ID: mdl-39349950

ABSTRACT

Prompt personal identification is required during disasters that can result in many casualties. To rapidly estimate sex from skull structure, this study applied deep learning to two-dimensional silhouette images obtained from head postmortem computed tomography (PMCT); the silhouettes enhance the outline shape of the skull. We investigated sex estimation using silhouette images viewed from different angles, combined by majority vote. A total of 264 PMCT cases (132 per sex) were used for transfer learning with two deep-learning models (AlexNet and VGG16). VGG16 exhibited the highest accuracy (89.8%) for lateral projections, and the accuracy improved to 91.7% when a majority vote was taken over the predictions from multiple projection angles. Moreover, silhouette images can be obtained from simple, widely available X-ray imaging in addition to PMCT. This study therefore demonstrates the feasibility of sex estimation by combining silhouette images with deep learning, and the results imply that X-ray images could be used for personal identification.
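
As a rough illustration of the approach described in this abstract, the sketch below fine-tunes an ImageNet-pretrained VGG16 for two-class (male/female) output and combines per-angle predictions by majority vote. It is not the authors' code; the framework (PyTorch/torchvision), the frozen backbone, the input size, and all function names are assumptions.

```python
# Hypothetical sketch: VGG16 transfer learning on skull-silhouette images,
# with a majority vote over projection angles. Hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import models
from collections import Counter

NUM_CLASSES = 2  # male / female

def build_vgg16_classifier():
    # Start from ImageNet weights and replace the final layer for 2-class output.
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for param in model.features.parameters():
        param.requires_grad = False          # freeze the convolutional backbone
    model.classifier[6] = nn.Linear(4096, NUM_CLASSES)
    return model

@torch.no_grad()
def predict_with_majority_vote(model, silhouettes_by_angle):
    """silhouettes_by_angle: list of tensors of shape (1, 3, 224, 224), one per projection angle."""
    model.eval()
    votes = []
    for image in silhouettes_by_angle:
        logits = model(image)
        votes.append(int(logits.argmax(dim=1)))
    # The final sex estimate is the most frequent per-angle prediction.
    return Counter(votes).most_common(1)[0][0]
```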


Subject(s)
Deep Learning , Skull , Tomography, X-Ray Computed , Humans , Skull/diagnostic imaging , Skull/anatomy & histology , Tomography, X-Ray Computed/methods , Female , Male , Autopsy/methods , Adult , Sex Determination by Skeleton/methods , Middle Aged , Aged , Image Processing, Computer-Assisted/methods , Young Adult , Forensic Anthropology/methods , Postmortem Imaging
2.
Diagnostics (Basel). 2024 Aug 15;14(16).
Article in English | MEDLINE | ID: mdl-39202266

ABSTRACT

Post-mortem (PM) imaging has potential for identifying individuals by comparing ante-mortem (AM) and PM images. Radiographic images of bones contain significant information for personal identification. However, PM images are affected by soft-tissue decomposition; it is therefore desirable to extract only the bones, which change little over time. This study evaluated the effectiveness of U-Net for bone image extraction from two-dimensional (2D) X-ray images. Two types of pseudo 2D X-ray images were created from PM computed tomography (CT) volumetric data using ray-summation processing for training U-Net: one was a projection of all body tissues, and the other was a projection of the bones only. The performance of U-Net for bone extraction was evaluated using the Intersection over Union, the Dice coefficient, and the area under the receiver operating characteristic curve. Additionally, AM chest radiographs were used to evaluate its performance on real 2D images. Our results indicated that bones could be extracted visually and accurately from both AM and PM images using U-Net. The extracted bone images could provide useful information for personal identification in forensic pathology.
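
Two building blocks named in this abstract, ray-summation projection of a CT volume into a pseudo 2D X-ray and overlap metrics for a predicted bone mask, can be sketched as follows. This is a minimal illustration under assumed array layouts and normalisation, not the study's pipeline; the function names are invented for the example.

```python
# Minimal sketch (not the authors' code): ray-summation of a CT volume into a
# pseudo 2D X-ray, and Dice / IoU scores for a predicted binary bone mask.
import numpy as np

def ray_summation_projection(ct_volume, axis=1):
    """Sum attenuation values along one axis of a (z, y, x) CT volume to mimic a radiograph."""
    projection = ct_volume.sum(axis=axis)
    # Normalise to [0, 1] so the pseudo X-ray can be fed to a network.
    projection = projection - projection.min()
    return projection / (projection.max() + 1e-8)

def dice_and_iou(pred_mask, true_mask):
    """Binary masks -> (Dice coefficient, Intersection over Union)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    dice = 2.0 * intersection / (pred.sum() + true.sum() + 1e-8)
    iou = intersection / (union + 1e-8)
    return dice, iou
```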

3.
PLoS One. 2022;17(1):e0261870.
Article in English | MEDLINE | ID: mdl-34995298

ABSTRACT

BACKGROUND: Forensic dentistry identifies deceased individuals by comparing postmortem dental charts, oral-cavity pictures and dental X-ray images with antemortem records. However, conventional forensic dentistry methods are time-consuming and thus unable to rapidly identify large numbers of victims following a large-scale disaster.

OBJECTIVE: Our goal is to automate the dental filing process by using intraoral scanner images. In this study, we generated and evaluated an artificial intelligence-based algorithm that classified images of individual molar teeth into three categories: (1) full metallic crown (FMC); (2) partial metallic restoration (In); or (3) sound tooth, carious tooth or non-metallic restoration (CNMR).

METHODS: A pre-trained model was created using oral-cavity pictures from patients. Then, the algorithm was generated through transfer learning and training with images acquired from cadavers by intraoral scanning. Cross-validation was performed to reduce bias. The ability of the model to classify molar teeth into the three categories (FMC, In or CNMR) was evaluated using four criteria: precision, recall, F-measure and overall accuracy.

RESULTS: The average value (variance) was 0.952 (0.000140) for recall, 0.957 (0.0000614) for precision, 0.952 (0.000145) for F-measure, and 0.952 (0.000142) for overall accuracy when the algorithm was used to classify images of molar teeth acquired from cadavers by intraoral scanning.

CONCLUSION: We have created an artificial intelligence-based algorithm that analyzes images acquired with an intraoral scanner and classifies molar teeth into one of three types (FMC, In or CNMR) based on the presence/absence of metallic restorations. Furthermore, the accuracy of the algorithm reached about 95%. This algorithm was constructed as a first step toward the development of an automated system that generates dental charts from images acquired by an intraoral scanner. The availability of such a system would greatly increase the efficiency of personal identification in the event of a major disaster.
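
To make the reported evaluation concrete, the sketch below computes macro-averaged precision, recall, F-measure and overall accuracy per cross-validation fold for the three tooth classes (FMC, In, CNMR), then the mean and variance across folds, as reported in the abstract. It is an illustration under assumed inputs (lists of true and predicted label arrays per fold), not the study's actual evaluation code.

```python
# Illustrative per-fold evaluation for a 3-class classifier (FMC / In / CNMR).
# The fold data format and label encoding are assumptions.
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score

CLASSES = ["FMC", "In", "CNMR"]

def evaluate_folds(folds):
    """folds: list of (y_true, y_pred) arrays of class indices, one pair per CV fold."""
    scores = {"precision": [], "recall": [], "f_measure": [], "accuracy": []}
    for y_true, y_pred in folds:
        scores["precision"].append(precision_score(y_true, y_pred, average="macro"))
        scores["recall"].append(recall_score(y_true, y_pred, average="macro"))
        scores["f_measure"].append(f1_score(y_true, y_pred, average="macro"))
        scores["accuracy"].append(accuracy_score(y_true, y_pred))
    # Report the mean and variance of each metric across folds.
    return {name: (float(np.mean(vals)), float(np.var(vals))) for name, vals in scores.items()}
```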


Subject(s)
Artificial Intelligence , Imaging, Three-Dimensional , Molar , Female , Humans , Male