Results 1 - 9 of 9
1.
Sensors (Basel) ; 19(18)2019 Sep 12.
Article in English | MEDLINE | ID: mdl-31547399

ABSTRACT

This paper presents a human-carried mapping backpack based on a pair of Velodyne LiDAR scanners. Our system is a universal solution for both large-scale outdoor and smaller indoor environments. It benefits from the combination of two LiDAR scanners, which makes the odometry estimation more precise. The scanners are mounted at different angles, so a larger space around the backpack is scanned. Fusion with a GNSS/INS sub-system enables the mapping of featureless environments and the georeferencing of the resulting point cloud. By deploying state-of-the-art methods for registration and loop closure optimization, the system provides sufficient precision for many applications in BIM (Building Information Modeling), inventory checks, construction planning, etc. In our indoor experiments, we evaluated the proposed backpack against the ZEB-1 solution, using a FARO terrestrial scanner as the reference; both yielded similar precision, while our system provides higher data density, laser intensity readings, and scalability to large environments.
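As an illustration of the rigid alignment step at the core of the scan registration mentioned above (this is not the authors' pipeline), the following minimal Python/NumPy sketch computes the least-squares rotation and translation between two sets of already matched 3D points using the Kabsch/SVD method. Correspondence search, loop closure, and GNSS/INS fusion are out of scope here.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid transform (R, t) mapping the source points onto
    the corresponding target points (both N x 3), via the Kabsch/SVD method."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Toy usage: recover a known rotation and translation from matched points.
rng = np.random.default_rng(0)
pts = rng.uniform(-5.0, 5.0, size=(100, 3))
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.3, -0.2, 0.1])
R, t = rigid_align(pts, pts @ R_true.T + t_true)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```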

2.
Ultramicroscopy ; 246: 113666, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36599269

ABSTRACT

Atomic force microscopy (AFM), by its nature, produces outputs with certain distortions, inaccuracies, and errors given by its physical principle. These distortions are reasonably well studied and documented, and, based on the nature of the individual distortions, different reconstruction and compensation filters have been developed to post-process the scanned images. This article presents an approach based on machine learning: a convolutional neural network learns from pairs of distorted images together with the ground-truth image, and is then able to process pairs of images of interest and produce a filtered image with the artifacts removed or at least suppressed. What is important in our approach is that the neural network is trained purely on synthetic data generated by a simulator of the inputs, based on an analytical description of the physical phenomena causing the distortions. The generator produces training samples involving various combinations of the distortions. The resulting trained network appears to be able to autonomously recognize the distortions present in the test image (no knowledge of the distortions or any other human input is provided at test time) and apply the appropriate corrections. The experimental results show that not only is the new approach better than or at least on par with conventional post-processing methods, but, more importantly, it does not require any operator input and works completely autonomously. The source code of the training-set generator and of the convolutional neural network model is made public, as well as an evaluation dataset of real captured AFM images.
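The following PyTorch sketch illustrates the general training setup described above: an image-to-image CNN trained purely on synthetically generated distorted/clean pairs. The distortion model used here (a random tilted background plane plus noise) is only a stand-in for illustration; the paper's simulator is based on an analytical description of the physical phenomena behind AFM artifacts.

```python
import torch
import torch.nn as nn

def synthesize_pair(batch=8, size=64):
    """Toy stand-in for the synthetic-data generator: adds a random tilted
    background plane and noise to a clean image, purely for illustration."""
    clean = torch.rand(batch, 1, size, size)
    ys, xs = torch.meshgrid(torch.linspace(0, 1, size),
                            torch.linspace(0, 1, size), indexing="ij")
    tilt = torch.rand(batch, 1, 1, 1) * xs + torch.rand(batch, 1, 1, 1) * ys
    distorted = clean + 0.5 * tilt + 0.05 * torch.randn_like(clean)
    return distorted, clean

# Small fully convolutional network mapping a distorted scan to a corrected one.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):                      # trained purely on synthetic pairs
    distorted, clean = synthesize_pair()
    loss = nn.functional.mse_loss(model(distorted), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```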

3.
Med Image Anal ; 88: 102865, 2023 08.
Article in English | MEDLINE | ID: mdl-37331241

ABSTRACT

Cranial implants are commonly used for surgical repair of craniectomy-induced skull defects. These implants are usually generated offline and may require days to weeks to become available. An automated implant design process combined with onsite manufacturing facilities can guarantee immediate implant availability and avoid secondary interventions. To address this need, the AutoImplant II challenge was organized in conjunction with MICCAI 2021, catering to the unmet clinical and computational requirements of automatic cranial implant design. The first edition of AutoImplant (AutoImplant I, 2020) demonstrated the general capabilities and effectiveness of data-driven approaches, including deep learning, for a skull shape completion task on synthetic defects. The second AutoImplant challenge (i.e., AutoImplant II, 2021) built upon the first by adding real clinical craniectomy cases as well as additional synthetic imaging data. The AutoImplant II challenge consisted of three tracks. Tracks 1 and 3 used skull images with synthetic defects to evaluate the ability of submitted approaches to generate implants that recreate the original skull shape. Track 3 consisted of the data from the first challenge (i.e., 100 cases for training and 110 for evaluation), while Track 1 provided 570 training and 100 validation cases aimed at evaluating skull shape completion algorithms on diverse defect patterns. Track 2 went beyond the first challenge by providing 11 clinically defective skulls and evaluating the submitted implant designs on these clinical cases. The submitted designs were evaluated quantitatively against post-craniectomy imaging data as well as by an experienced neurosurgeon. Submissions to these challenge tasks made substantial progress in addressing issues such as generalizability, computational efficiency, data augmentation, and implant refinement. This paper serves as a comprehensive summary and comparison of the submissions to the AutoImplant II challenge. Codes and models are available at https://github.com/Jianningli/Autoimplant_II.
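For readers less familiar with the quantitative side of such challenges, the sketch below shows the Dice similarity coefficient on binary volumes, a standard overlap metric in this kind of evaluation; the challenge's exact protocol includes further metrics and may differ in detail.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary volumes of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * overlap / total

# Toy check on two partially overlapping boxes inside a 64^3 volume.
a = np.zeros((64, 64, 64), dtype=bool)
b = np.zeros_like(a)
a[10:40, 10:40, 10:40] = True
b[20:50, 20:50, 20:50] = True
print(f"Dice = {dice(a, b):.3f}")
```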


Subject(s)
Prostheses and Implants , Skull , Humans , Skull/diagnostic imaging , Skull/surgery , Craniotomy/methods , Head
4.
Comput Biol Med ; 137: 104766, 2021 10.
Article in English | MEDLINE | ID: mdl-34425418

ABSTRACT

Correct virtual reconstruction of a defective skull is a prerequisite for successful cranioplasty, and its automation has the potential to accelerate and standardize the clinical workflow. This work provides a deep learning-based method for the reconstruction of the skull shape and cranial implant design on clinical data of patients indicated for cranioplasty. The method is based on a cascade of multi-branch volumetric CNNs that enables simultaneous training on two different types of cranioplasty ground-truth data: the skull patch, which represents the exact shape of the missing part of the original skull and can easily be created artificially from healthy skulls, and expert-designed cranial implant shapes, which are much harder to acquire. The proposed method reaches an average surface distance of the reconstructed skull patches of 0.67 mm on a clinical test set of 75 defective skulls. It also achieves a 12% reduction in a newly proposed defect-border Gaussian curvature error metric compared to a baseline model trained on synthetic data only. Additionally, it directly produces 3D-printable cranial implant shapes with a Dice coefficient of 0.88 and a surface error of 0.65 mm. The outputs of the proposed skull reconstruction method reach good quality and can be considered for use in semi- or fully automatic clinical cranial implant design workflows.
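The average surface distance reported above can be computed in several ways; the sketch below shows one common distance-transform-based formulation in Python/SciPy. It is illustrative only and may differ in detail from the evaluation code used by the authors.

```python
import numpy as np
from scipy import ndimage

def surface_voxels(mask):
    """Boolean map of voxels lying on the boundary of a binary mask."""
    return mask & ~ndimage.binary_erosion(mask)

def average_surface_distance(pred, truth, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean distance (in mm, given the voxel spacing) between the
    surfaces of two binary volumes of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    pred_surf, truth_surf = surface_voxels(pred), surface_voxels(truth)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_truth = ndimage.distance_transform_edt(~truth_surf, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    dists = np.concatenate([dist_to_truth[pred_surf], dist_to_pred[truth_surf]])
    return float(dists.mean())
```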


Subject(s)
Deep Learning , Plastic Surgery Procedures , Humans , Prostheses and Implants , Skull/diagnostic imaging , Skull/surgery
5.
IEEE Trans Med Imaging ; 40(9): 2329-2342, 2021 09.
Article in English | MEDLINE | ID: mdl-33939608

ABSTRACT

The aim of this paper is to provide a comprehensive overview of the MICCAI 2020 AutoImplant Challenge. The approaches and publications submitted and accepted within the challenge are summarized and reported, highlighting common algorithmic trends as well as algorithmic diversity. Furthermore, the evaluation results are presented, compared, and discussed with regard to the challenge aim: seeking low-cost, fast, and fully automated solutions for cranial implant design. Based on feedback from collaborating neurosurgeons, the paper concludes by stating open issues and post-challenge requirements for intra-operative use. The code can be found at https://github.com/Jianningli/tmi.


Subject(s)
Prostheses and Implants , Skull , Skull/diagnostic imaging , Skull/surgery
6.
IEEE Trans Vis Comput Graph ; 16(3): 434-8, 2010.
Article in English | MEDLINE | ID: mdl-20224138

ABSTRACT

Ray-triangle intersection is an important algorithm, not only in the field of realistic rendering (based on ray tracing) but also in physics simulation, collision detection, modeling, etc. The speed of implementations of this well-defined algorithm clearly matters, because such a routine is called very many times in rendering and simulation applications. Contemporary fast intersection algorithms that use SIMD instructions focus on intersecting ray packets against triangles. For intersections between single rays and triangles, operations such as horizontal addition or a dot product are required. The SSE4 instruction set adds a dot-product instruction that can be used for this purpose. This paper presents a new modification of the commonly used fast ray-triangle intersection algorithms which, when implemented on SSE4, outperforms the current state-of-the-art algorithms. It also allows both single-ray and ray-packet intersection calculations with the same precomputed data. The speed gain measurements are described and discussed in the paper.
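For context, the scalar Möller-Trumbore test that such SIMD variants build upon can be written as follows. This NumPy sketch is illustrative only; it does not reproduce the paper's SSE4-specific precomputation or the ray-packet path.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Scalar Moller-Trumbore test: returns the ray parameter t of the hit,
    or None if the ray misses the triangle (v0, v1, v2)."""
    e1, e2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, e2)
    det = np.dot(e1, pvec)
    if abs(det) < eps:                        # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det          # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, e1)
    v = np.dot(direction, qvec) * inv_det     # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, qvec) * inv_det
    return t if t > eps else None

# Toy usage: a ray along +z through a triangle lying in the z = 1 plane.
t = ray_triangle_intersect(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                           np.array([-1.0, -1.0, 1.0]),
                           np.array([2.0, -1.0, 1.0]),
                           np.array([-1.0, 2.0, 1.0]))
print(t)  # ~1.0
```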


Subject(s)
Algorithms , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Models, Theoretical , Programming Languages , Software , Computer Simulation , Light , Scattering, Radiation
7.
Comput Biol Med ; 123: 103886, 2020 08.
Article in English | MEDLINE | ID: mdl-32658793

ABSTRACT

Designing a cranial implant to restore the protective and aesthetic function of a patient's skull is a challenging process that requires a substantial amount of manual work, even for an experienced clinician. While computer-assisted approaches with various levels of required user interaction exist to aid this process, they are usually validated only on either a single type of simple synthetic defect or a very limited sample of real defects. The work presented in this paper aims to address two challenges: (i) to design a fully automatic 3D shape reconstruction method that can handle the diverse shapes of real skull defects in various stages of healing, and (ii) to provide an open dataset for the optimization and validation of anatomical reconstruction methods on a set of synthetically broken skull shapes. We propose an application of a multi-scale cascade architecture of convolutional neural networks to the reconstruction task. Such an architecture is able to tackle the trade-off between the output resolution and the receptive field of the model imposed by GPU memory limitations. Furthermore, we experiment with both generative and discriminative models and study their behavior during the task of anatomical reconstruction. The proposed method achieves an average surface error of 0.59 mm on our synthetic test dataset, and as low as 0.48 mm for unilateral defects of the parietal and temporal bone, matching state-of-the-art performance while being completely automatic. We also show that the model trained on our synthetic dataset is able to reconstruct real patient defects.
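To make the cascade idea concrete, the sketch below shows a schematic two-stage volumetric cascade in PyTorch: a coarse network operates on a downsampled defective skull, and its upsampled prediction is concatenated with the full-resolution input for refinement. This is not the authors' architecture; layer counts and channel widths are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.ReLU())

class TwoStageCascade(nn.Module):
    """Coarse stage works on a downsampled volume; its upsampled prediction
    is concatenated with the full-resolution input and refined."""
    def __init__(self, channels=16):
        super().__init__()
        self.coarse = nn.Sequential(conv_block(1, channels),
                                    nn.Conv3d(channels, 1, 1))
        self.fine = nn.Sequential(conv_block(2, channels),
                                  nn.Conv3d(channels, 1, 1))

    def forward(self, skull):
        low = F.interpolate(skull, scale_factor=0.5, mode="trilinear",
                            align_corners=False)
        coarse_pred = self.coarse(low)
        up = F.interpolate(coarse_pred, size=skull.shape[2:], mode="trilinear",
                           align_corners=False)
        return self.fine(torch.cat([skull, up], dim=1))   # reconstruction logits

# Toy forward pass on a 64^3 binary skull volume (batch of 1, one channel).
out = TwoStageCascade()(torch.rand(1, 1, 64, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64, 64])
```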


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Prostheses and Implants , Skull/diagnostic imaging
8.
Article in English | MEDLINE | ID: mdl-31247544

ABSTRACT

Today's film and advertisement production makes heavy use of computer graphics combined with live actors via chroma keying. The matchmoving process typically requires considerable manual effort. Semi-automatic matchmoving tools exist as well, but they still work offline and require manual checking and correction. In this article, we propose an instant matchmoving solution for green screens. It uses a recent technique of planar uniform marker fields. Our technique can be used in indie and professional filmmaking as a cheap and ultramobile virtual camera, and for shot prototyping and storyboard creation. The matchmoving technique, based on marker fields composed of shades of green, is computationally very efficient: we developed, and present in this article, a mobile application running at 33 FPS. Our technique is thus available to anyone with a smartphone, at low cost and with an easy setup, opening up space for new levels of creative expression for filmmakers.

9.
IEEE Comput Graph Appl ; 39(6): 108-119, 2019.
Article in English | MEDLINE | ID: mdl-28113835

ABSTRACT

Today's film and advertisement production makes heavy use of computer graphics combined with live actors via chroma keying. The matchmoving process typically requires considerable manual effort. Semi-automatic matchmoving tools exist as well, but they still work offline and require manual checking and correction. In this paper, we propose an instant matchmoving solution for green screens. It uses a recent technique of planar uniform marker fields. Our technique can be used in indie and professional filmmaking as a cheap and ultramobile virtual camera, and for shot prototyping and storyboard creation. The matchmoving technique, based on marker fields composed of shades of green, is computationally very efficient: we developed, and present in this paper, a mobile application running at 33 FPS. Our technique is thus available to anyone with a smartphone, at low cost and with an easy setup, opening up space for new levels of creative expression for filmmakers.
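As a rough illustration of the camera-pose estimation behind such matchmoving (this is not the authors' marker-field detector), the sketch below recovers a camera pose from hypothetical 2D-3D correspondences using OpenCV's solvePnP; all point coordinates and camera intrinsics are made-up placeholders.

```python
import numpy as np
import cv2

# Hypothetical inputs: 3D positions (in meters) of identified marker-field
# features on the green-screen plane, and their 2D detections in the frame.
# Detecting and identifying the uniform marker field itself is not shown.
object_points = np.array([[0.00, 0.00, 0.0], [0.50, 0.00, 0.0],
                          [0.50, 0.50, 0.0], [0.00, 0.50, 0.0],
                          [0.25, 0.25, 0.0], [0.75, 0.25, 0.0]], dtype=np.float64)
image_points = np.array([[320.0, 240.0], [480.0, 238.0], [482.0, 360.0],
                         [322.0, 362.0], [401.0, 300.0], [520.0, 298.0]],
                        dtype=np.float64)

# Phone-camera intrinsics from a prior calibration; distortion ignored here.
fx = fy = 800.0
cx, cy = 320.0, 240.0
camera_matrix = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)        # camera rotation matrix; tvec is translation
print(ok, tvec.ravel())
```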
