ABSTRACT
We propose an effective method for manufacturing human anatomical specimens in response to the shortage of cadaver specimens and the poor simulation quality of anatomical specimen substitutes. High-precision digital human data were used to create digital models and corresponding mapped textures. Based on the histological characteristics of the anatomical structures, different materials were chosen to print the digital models with full-color, multi-material 3D-printing technology. Anatomy experts and surgeons were then invited to compare the 3D-printed models with authentic anatomical specimens in terms of morphological appearance, anatomical detail, and textural properties. The skull, brain, hand muscles, blood vessels and nerves of the hand, and the deep structures of the head and face were printed. The skull model used a hard material, while the brain and hand-muscle models combined flexible and hard materials. The blood vessels and nerves of the hand and the superficial and deep structures of the head and face used transparent materials, revealing the small vessels and nerves within. None of the models differed significantly from authentic anatomical specimens in morphological appearance or anatomical detail, and their textural properties (color, roughness, smoothness, and fineness) produced the same visual and tactile impressions as authentic specimens. Full-color, multi-material 3D-printed anatomical models thus have the same visual and tactile properties as anatomical specimens and could complement or substitute for them in anatomy teaching, compensating for the shortage of cadavers.
Subject(s)
Models, Anatomic; Printing, Three-Dimensional; Cadaver; Humans; Skull
ABSTRACT
Automated organ segmentation in anatomical sectional images of canines is crucial for clinical applications and the study of sectional anatomy. Manual delineation of organ boundaries by experts is a time-consuming and laborious task, yet semi-automatic segmentation methods have shown low segmentation accuracy. CNN-based deep learning models struggle to establish long-range dependencies, limiting their segmentation performance, while Transformer-based models, although excellent at establishing long-range dependencies, have difficulty capturing local detail. To address these challenges, we propose a novel ECA-TFUnet model for organ segmentation in anatomical sectional images of canines. The ECA-TFUnet model is a U-shaped CNN-Transformer network with Efficient Channel Attention that combines the strengths of the U-Net architecture and the Transformer block. Specifically, the U-Net network excels at capturing detailed local information, while a Transformer block embedded in the first skip-connection layer of the U-Net effectively learns the global dependencies among different regions, improving the representation ability of the model. Additionally, an Efficient Channel Attention block is introduced into the network to emphasize the more informative channels, further improving the model's robustness. A mixed loss strategy is also incorporated to alleviate the problem of class imbalance. Experimental results showed that the ECA-TFUnet model yielded 92.63% IoU, outperforming 11 state-of-the-art methods. To evaluate the model comprehensively, we also conducted experiments on a public dataset, on which it achieved 87.93% IoU, again surpassing the 11 state-of-the-art methods. Finally, we explored a transfer learning strategy to provide good initialization parameters for the ECA-TFUnet model. We demonstrated that the ECA-TFUnet model exhibits superior segmentation performance on anatomical sectional images of canines, with potential for application in clinical diagnosis.
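As a rough illustration of the attention mechanism named in the abstract, the following NumPy sketch shows the Efficient Channel Attention operation as published for ECA-Net: global average pooling over the spatial dimensions, a 1-D convolution of adaptively chosen kernel size k across channels (no dimensionality reduction), and a sigmoid gate that re-weights each channel. This is a minimal sketch, not the authors' implementation; the uniform convolution kernel stands in for weights that would be learned during training, and the function name `eca_block` is chosen here for illustration.

```python
import numpy as np

def eca_block(x, gamma=2, b=1, weights=None):
    """Efficient Channel Attention applied to one feature map.

    x: array of shape (C, H, W).
    The kernel size k is derived adaptively from the channel count C,
    following the ECA-Net heuristic k = |log2(C)/gamma + b/gamma|_odd.
    """
    C, H, W = x.shape
    # Adaptive kernel size: nearest odd integer to log2(C)/gamma + b/gamma
    t = int(abs((np.log2(C) + b) / gamma))
    k = t if t % 2 else t + 1
    # 1. Squeeze: global average pooling over the spatial dimensions
    y = x.mean(axis=(1, 2))                       # shape (C,)
    # 2. Local cross-channel interaction: 1-D conv of size k
    #    (a uniform kernel is assumed; real weights are learned)
    if weights is None:
        weights = np.ones(k) / k
    pad = k // 2
    y = np.convolve(np.pad(y, pad, mode='edge'), weights, mode='valid')
    # 3. Excitation: sigmoid gate, one weight per channel
    a = 1.0 / (1.0 + np.exp(-y))                  # values in (0, 1)
    # 4. Re-weight each channel of the input feature map
    return x * a[:, None, None]
```

Because the gate values lie strictly between 0 and 1, the block can only attenuate channels relative to the input; in the full ECA-TFUnet this gating is applied inside the network so that training drives the convolution weights to amplify the informative channels and suppress the rest.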
Subject(s)
Electric Power Supplies; Image Processing, Computer-Assisted; Animals; Dogs
ABSTRACT
Specimen observation and dissection have long been regarded as the best approach to teaching anatomy, but the severe shortage of anatomical specimens in recent years has seriously affected the quality of anatomy teaching. To disseminate anatomical knowledge effectively under these circumstances, this study discusses in detail three key factors involved in constructing virtual anatomy teaching systems: modeling, perception, and interaction. To ensure the authenticity, integrity, and accuracy of modeling, detailed three-dimensional (3D) digital anatomical models are constructed from multi-scale data, such as the Chinese Visible Human dataset, clinical imaging data, tissue sections, and other sources. An anatomical knowledge ontology is built according to the needs of the particular teaching purpose. Various kinds of anatomical knowledge and 3D digital anatomical models are organically combined to construct a virtual anatomy teaching system by means of virtual reality equipment and technology. The perception of knowledge is realized through the Yi Chuang Digital Human Anatomy Teaching System that we have created. The virtual interaction mode, which resembles actual anatomical specimen observation and dissection, can enhance the transmissibility of anatomical knowledge. This virtual anatomy teaching system captures all three key factors. It can provide realistic and reusable teaching resources, extend the medical education model, and effectively improve the quality of anatomy teaching.