Results 1 - 11 of 11
1.
Sci Rep; 13(1): 21716, 2023 Dec 07.
Article in English | MEDLINE | ID: mdl-38066019

ABSTRACT

Usually, a baseline image, acquired with either magnetic resonance imaging (MRI) or computed tomography (CT), is captured as a reference before medical procedures such as respiratory interventions like thoracentesis. In these procedures, ultrasound (US) imaging is often employed to guide needle placement during thoracentesis or to provide image guidance in minimally invasive spine surgery (MISS) procedures within the thoracic region. Following the procedure, a post-procedure image is acquired to monitor and evaluate the patient's progress. Currently, there are no real-time guidance and tracking capabilities that allow a surgeon to perform the procedure using the familiarity of the reference imaging modality. In this work, we propose a real-time volumetric indirect registration using a deep learning approach in which the fusion of multiple imaging modalities allows surgical procedures to be guided and tracked with US while the resulting changes are displayed in a clinically familiar reference imaging modality (MRI). The deep learning method employs a series of generative adversarial networks (GANs), specifically CycleGAN, to perform unsupervised image-to-image translation. This process produces spatially aligned US and MRI volumes corresponding to their respective input volumes (MRI and US) of the thoracic spine anatomical region. In this preliminary proof-of-concept study, the focus was on the T9 vertebra. A clinical expert performed anatomical validation of randomly selected real and generated volumes of the T9 thoracic vertebra, assigning each volume a score of 0 (conclusive anatomical structures present) or 1 (inconclusive anatomical structures present) to check whether the volumes were anatomically accurate. The Dice and Overlap metrics quantify how accurately the shape of T9 matches the real volumes and how consistent the shape of T9 is across the generated volumes. The average Dice, Overlap and accuracy for clearly labelling all the anatomical structures of the T9 vertebra are approximately 80%.
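As a concrete illustration of the overlap metrics cited above, the sketch below computes a Dice coefficient between two binary 3D masks; it is a generic NumPy example under assumed array names and shapes, not the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(vol_a, vol_b):
    """Dice similarity between two binary 3D masks (e.g. a real vs. a generated T9 volume)."""
    a, b = vol_a.astype(bool), vol_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * intersection / total

# Illustrative random volumes standing in for segmented vertebrae
rng = np.random.default_rng(0)
real_t9 = rng.random((64, 64, 64)) > 0.5
generated_t9 = rng.random((64, 64, 64)) > 0.5
print(f"Dice: {dice_coefficient(real_t9, generated_t9):.3f}")
```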


Subjects
Image Processing, Computer-Assisted; Ultrasound; Humans; Image Processing, Computer-Assisted/methods; Proof of Concept Study; Ultrasonography; Magnetic Resonance Imaging/methods
2.
BMC Med Inform Decis Mak; 23(1): 274, 2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38031040

ABSTRACT

BACKGROUND: Point-of-care lung ultrasound (LUS) allows real-time patient scanning to help diagnose pleural effusion (PE) and plan further investigation and treatment. LUS typically requires training and experience from the clinician to interpret the images accurately. To address this limitation, we previously demonstrated a deep-learning model capable of detecting the presence of PE on LUS with an accuracy greater than 90% when compared to an experienced LUS operator. METHODS: This follow-up study aimed to develop a deep-learning model to provide segmentations for PE in LUS. Three thousand and forty-one LUS images from twenty-four patients diagnosed with PE were selected for this study. Two LUS experts provided the ground truth for training by reviewing and segmenting the images. The algorithm was then trained using ten-fold cross-validation. Once training was completed, the algorithm segmented a separate subset of patients. RESULTS: Comparing the segmentations, we demonstrated an average Dice Similarity Coefficient (DSC) of 0.70 between the algorithm and the experts. In contrast, an average DSC of 0.61 was observed between the experts themselves. CONCLUSION: In summary, the trained algorithm achieved an average DSC at PE segmentation comparable to the inter-expert agreement. This represents a promising step toward developing a computational tool for accurately augmenting PE diagnosis and treatment.
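For readers curious how patient-level separation in such a cross-validation can be enforced, here is a minimal sketch using scikit-learn's GroupKFold; the frame counts, feature dimensions and patient IDs are placeholders, and this is not the study's training pipeline.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Placeholder data: one feature vector per LUS frame, grouped by patient ID
n_frames, n_features = 3041, 128
X = np.random.rand(n_frames, n_features)
y = np.random.randint(0, 2, size=n_frames)              # 1 = frame contains pleural effusion
patient_ids = np.random.randint(0, 24, size=n_frames)   # 24 patients

# GroupKFold keeps every frame of a patient in a single fold,
# so each fold is tested on patients unseen during training.
gkf = GroupKFold(n_splits=10)
for fold, (train_idx, test_idx) in enumerate(gkf.split(X, y, groups=patient_ids)):
    assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
    print(f"fold {fold}: {len(train_idx)} train frames, {len(test_idx)} test frames")
```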


Subjects
Deep Learning; Pleural Effusion; Humans; Follow-Up Studies; Algorithms; Lung/diagnostic imaging; Pleural Effusion/diagnostic imaging
3.
Phys Eng Sci Med; 46(1): 197-208, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36625994

ABSTRACT

The assessment of spinal posture is a difficult endeavour given the lack of identifiable bony landmarks for placement of skin markers. Moreover, potentially significant soft tissue artefacts along the spine further affect the accuracy of marker-based approaches. The objective of this proof-of-concept study was to develop an experimental framework to assess spinal postures using three-dimensional (3D) ultrasound (US) imaging. A phantom spine model immersed in water was scanned with 3D US in a neutral posture and in two curved postures mimicking a forward flexion in the sagittal plane, while the US probe was localised by three electromagnetic tracking sensors attached to the probe head. The obtained anatomical 'coarse' registrations were further refined using an automatic registration algorithm and validated by an experienced sonographer. Spinal landmarks were selected in the US images and validated against magnetic resonance imaging data of the same phantom through image registration. Their position was then related to the location of the tracking sensors identified in the acquired US volumes, enabling the localisation of landmarks in the global coordinate system of the tracking device. The results of this study show that localised 3D US enables US-based anatomical reconstructions comparable to clinical standards and the identification of spinal landmarks in different postures of the spine. The accuracy of sensor identification was 0.49 mm on average, while intra- and inter-observer sensor identifications were strongly correlated, with a maximum deviation of 0.8 mm. Mapping of landmarks had a small relative distance error of 0.21 mm (SD = ± 0.16) on average. These findings suggest that localised 3D US holds potential for the assessment of full spinal posture by accurately and non-invasively localising vertebrae in space.
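A minimal sketch of the coordinate-mapping step described above, i.e. expressing image-space landmarks in the tracker's global frame via a 4 × 4 homogeneous rigid transform; the transform values and landmark coordinates are illustrative assumptions.

```python
import numpy as np

def to_global(points_img, T_img_to_global):
    """Map N x 3 landmark coordinates from US image space to the tracker's
    global frame using a 4 x 4 homogeneous rigid transform."""
    homogeneous = np.hstack([points_img, np.ones((points_img.shape[0], 1))])  # N x 4
    return (T_img_to_global @ homogeneous.T).T[:, :3]

# Illustrative transform: 90-degree rotation about z plus a translation (mm)
theta = np.pi / 2
T = np.array([
    [np.cos(theta), -np.sin(theta), 0.0, 10.0],
    [np.sin(theta),  np.cos(theta), 0.0, -5.0],
    [0.0,            0.0,           1.0,  2.0],
    [0.0,            0.0,           0.0,  1.0],
])
landmarks_img = np.array([[12.3, 4.1, 30.0], [15.8, 6.7, 42.5]])  # mm, image space
print(to_global(landmarks_img, T))
```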


Subjects
Spinal Curvatures; Spine; Humans; Reproducibility of Results; Spine/diagnostic imaging; Magnetic Resonance Imaging/methods; Posture
4.
Sci Rep; 12(1): 17581, 2022 Oct 20.
Article in English | MEDLINE | ID: mdl-36266463

ABSTRACT

Our automated deep learning-based approach identifies consolidation/collapse in LUS images to aid in the identification of late stages of COVID-19-induced pneumonia, where consolidation/collapse is one of the possible associated pathologies. A common challenge in training such models is that annotating each frame of an ultrasound video requires high labelling effort, which in practice becomes prohibitive for large ultrasound datasets. To understand the impact of various degrees of labelling precision, we compare labelling strategies used to train fully supervised models (frame-based method, higher labelling effort) and inaccurately supervised models (video-based methods, lower labelling effort), both of which yield binary predictions for LUS videos on a frame-by-frame level. Moreover, we introduce a novel sampled quaternary method which randomly samples only 10% of the LUS video frames and subsequently assigns (ordinal) categorical labels to all frames in the video based on the fraction of positively annotated samples. This method outperformed the inaccurately supervised video-based method and, more surprisingly, the supervised frame-based approach with respect to metrics such as precision-recall area under the curve (PR-AUC) and F1 score, despite being a form of inaccurate learning. We argue that our video-based method is more robust with respect to label noise and mitigates overfitting in a manner similar to label smoothing. The algorithm was trained using ten-fold cross-validation, which resulted in a PR-AUC score of 73% and an accuracy of 89%. While the sampled quaternary method significantly lowers the labelling effort, its efficacy must still be verified on a larger consolidation/collapse dataset; nonetheless, the proposed classifier using the sampled quaternary video-based method is clinically comparable with trained experts' performance.
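The abstract does not publish the exact binning used by the sampled quaternary method, so the sketch below is only a plausible reading of it: sample 10% of a video's frame annotations, compute the positive fraction, and map it to one of four ordinal classes with assumed cut points.

```python
import numpy as np

def sampled_quaternary_label(frame_labels, sample_frac=0.1, rng=None):
    """Derive one ordinal video-level label (0-3) from a random 10% sample of
    binary frame annotations; every frame in the video then inherits this label.
    The cut points 0.25/0.5/0.75 are illustrative assumptions."""
    rng = rng or np.random.default_rng()
    n_sample = max(1, int(len(frame_labels) * sample_frac))
    sampled = rng.choice(frame_labels, size=n_sample, replace=False)
    positive_fraction = sampled.mean()
    return int(np.digitize(positive_fraction, [0.25, 0.5, 0.75]))

# Example: a 200-frame LUS video in which ~30% of frames show consolidation/collapse
rng = np.random.default_rng(42)
frames = (rng.random(200) < 0.3).astype(int)
print("ordinal video label:", sampled_quaternary_label(frames, rng=rng))
```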


Subjects
COVID-19; Deep Learning; Humans; COVID-19/diagnostic imaging; Ultrasonography/methods; Algorithms; Lung/diagnostic imaging
5.
Ultrasound Med Biol; 48(3): 450-459, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34848081

ABSTRACT

Three-dimensional imaging and advanced manufacturing are being applied in health care research to create novel diagnostic and surgical planning methods, as well as personalised treatments and implants. For ear reconstruction, where a cartilage-shaped implant is embedded underneath the skin to re-create shape and form, volumetric imaging and segmentation processing to capture patient anatomy are particularly challenging. Here, we introduce 3-D ultrasound (US) as an available option for imaging the external ear and underlying auricular cartilage structure, and compare it with computed tomography (CT) and magnetic resonance imaging (MRI) against micro-CT (µCT) as a high-resolution reference (gold standard). US images were segmented to create 3-D models of the auricular cartilage and compared against models generated from µCT to assess accuracy. We found that CT was significantly less accurate than the other methods (root mean square [RMS]: 1.30 ± 0.5 mm) and had the least contrast between tissues. There was no significant difference between MRI (RMS: 0.69 ± 0.2 mm) and US (RMS: 0.55 ± 0.1 mm). US was also the least expensive imaging method, at half the cost of MRI. These results demonstrate a novel use of ultrasound imaging and support its more widespread use in biofabrication as a low-cost imaging technique for creating patient-specific 3-D models and implants.
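To make the RMS figures above concrete, here is a hedged sketch of one common way to compute an RMS surface distance between two 3-D models represented as point clouds, using a nearest-neighbour search; the point clouds are synthetic stand-ins, not the study's data.

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_surface_distance(points_test, points_ref):
    """RMS of nearest-neighbour distances from a test surface (e.g. a US-derived
    cartilage model) to a reference surface (e.g. the micro-CT model)."""
    distances, _ = cKDTree(points_ref).query(points_test)
    return float(np.sqrt(np.mean(distances ** 2)))

# Synthetic point clouds: a unit sphere and a slightly noisy, shifted copy
rng = np.random.default_rng(1)
ref = rng.normal(size=(5000, 3))
ref /= np.linalg.norm(ref, axis=1, keepdims=True)
test = ref + rng.normal(scale=0.02, size=ref.shape) + np.array([0.05, 0.0, 0.0])
print(f"RMS distance: {rms_surface_distance(test, ref):.3f}")
```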


Subjects
Ear Cartilage; Plastic Surgery Procedures; Ear Cartilage/surgery; Ear, External/surgery; Humans; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Prostheses and Implants; Plastic Surgery Procedures/methods; Ultrasonography
6.
Article in English | MEDLINE | ID: mdl-31944954

ABSTRACT

Knee arthroscopy is a complex minimally invasive surgery that can cause unintended injuries to the femoral cartilage, postoperative complications, or both. Autonomous robotic systems using real-time volumetric ultrasound (US) imaging guidance hold potential for significantly reducing these issues and improving patient outcomes. To enable the robotic system to navigate autonomously in the knee joint, the imaging system should provide the robot with a real-time comprehensive map of the surgical site. To this end, the first step is automatic image quality assessment, to ensure that the boundaries of the relevant knee structures are defined well enough to be detected, outlined and then tracked. In this article, a recently developed one-class classifier deep learning algorithm was used to discriminate between US images, acquired in a simulated surgical scenario, on which the femoral cartilage either could or could not be outlined. A total of 38,656 2-D US images were extracted from 151 3-D US volumes collected from six volunteers, and each was labeled as "1" or "0" when an expert was or was not able to outline the cartilage on the image, respectively. The algorithm was evaluated using the expert labels as ground truth with a fivefold cross-validation, where each fold was trained and tested on average with 15,640 and 6,246 labeled images, respectively. The algorithm reached a mean accuracy of 78.4% ± 5.0, mean specificity of 72.5% ± 9.4, mean sensitivity of 82.8% ± 5.8, and mean area under the curve of 85% ± 4.4. In addition, interobserver and intraobserver tests involving two experts were performed on a subset of 1,536 2-D US images. Percent agreement values of 0.89 and 0.93 were achieved between the two experts (i.e., interobserver) and by each expert (i.e., intraobserver), respectively. These results show the feasibility of the first essential step in the development of automatic US image acquisition and interpretation systems for autonomous robotic knee arthroscopy.
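For reference, the classification metrics reported above can be reproduced from per-image labels and scores along the lines of the following sketch; the synthetic labels and the 0.5 threshold are assumptions, not the study's data or code.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Synthetic ground truth (1 = cartilage can be outlined) and classifier scores
rng = np.random.default_rng(7)
y_true = rng.integers(0, 2, size=6246)
y_score = np.clip(y_true * 0.6 + rng.random(6246) * 0.5, 0.0, 1.0)
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
auc = roc_auc_score(y_true, y_score)
print(f"acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f} auc={auc:.3f}")
```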


Subjects
Arthroscopy/methods; Deep Learning; Image Interpretation, Computer-Assisted/methods; Knee Joint/diagnostic imaging; Ultrasonography/methods; Adult; Algorithms; Cartilage/diagnostic imaging; Cartilage/surgery; Femur/diagnostic imaging; Femur/surgery; Humans; Knee Joint/surgery; Young Adult
7.
Med Image Anal; 60: 101631, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31927473

ABSTRACT

The tracking of the knee femoral condyle cartilage during ultrasound-guided minimally invasive procedures is important to avoid damaging this structure during such interventions. In this study, we propose a new deep learning method to track, accurately and efficiently, the femoral condyle cartilage in ultrasound sequences acquired under several clinical conditions mimicking realistic surgical setups. Our solution, which we name Siam-U-Net, requires minimal user initialization and combines a deep learning segmentation method with a siamese framework for tracking the cartilage in temporal and spatio-temporal sequences of 2D ultrasound images. Through extensive performance validation based on the Dice Similarity Coefficient, we demonstrate that our algorithm is able to track the femoral condyle cartilage with an accuracy comparable to that of experienced surgeons. We additionally show that the proposed method outperforms state-of-the-art segmentation models and trackers in the localization of the cartilage. We claim that the proposed solution has potential for ultrasound guidance in minimally invasive knee procedures.
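The Siam-U-Net architecture itself is not reproduced here; as a rough orientation, the sketch below shows only the generic cross-correlation step at the heart of siamese trackers (matching template features against search-region features), implemented with PyTorch's functional conv2d. All shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def siamese_correlation(template_feat, search_feat):
    """Cross-correlate template features (the cartilage exemplar) over
    search-region features to produce a response map of match scores."""
    # template_feat: (1, C, h, w), search_feat: (1, C, H, W) with H >= h, W >= w
    return F.conv2d(search_feat, template_feat)   # -> (1, 1, H-h+1, W-w+1)

# Illustrative feature maps standing in for encoder outputs
template = torch.randn(1, 64, 8, 8)
search = torch.randn(1, 64, 32, 32)
response = siamese_correlation(template, search)
peak = (response[0, 0] == response.max()).nonzero()[0]
print(response.shape, "peak at", peak.tolist())
```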


Subjects
Cartilage, Articular/diagnostic imaging; Image Processing, Computer-Assisted/methods; Knee Joint/diagnostic imaging; Neural Networks, Computer; Ultrasonography, Interventional/methods; Arthroscopy; Deep Learning; Female; Healthy Volunteers; Humans; Imaging, Three-Dimensional; Male
8.
Healthcare (Basel); 7(4), 2019 Dec 02.
Article in English | MEDLINE | ID: mdl-31810236

ABSTRACT

In prostate cancer external beam radiation therapy (EBRT), intra-fraction prostate drifts may compromise treatment efficacy by underdosing the target and/or overdosing the organs at risk. In this study, a recently developed real-time adaptive planning strategy for intensity-modulated radiation therapy (IMRT) for prostate cancer was evaluated in hypofractionated regimes against traditional treatment planning based on a treatment-volume margin expansion. The proposed workflow makes use of a "library of plans" corresponding to possible intra-fraction prostate positions. During delivery, at the end of each beam, the plan prepared for the prostate position closest to the current one is selected and the corresponding beam delivered. This adaptive planning strategy was compared with the traditional approach on a clinical prostate cancer case in which different prostate shift magnitudes were considered. Five-, six- and fifteen-fraction hypofractionated schemes were considered for each of these scenarios. When shifts larger than the treatment margin were present, the traditional approach underdosed the seminal vesicles by 3-4% of the prescribed dose. The adaptive approach instead allowed for correct target dose coverage and lowered the rectal dose for each dosimetric endpoint by 3-4% on average in all the fractionation schemes. Standard intensity-modulated radiation therapy planning did not always guarantee a correct dose distribution to the seminal vesicles and the rectum. The proposed adaptive planning strategy proved insensitive to intra-fraction prostate drifts, produced a dose distribution in agreement with the dosimetric requirements in every case analysed, and significantly lowered the dose to the rectum.
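The plan-selection rule described above reduces, in its simplest reading, to a nearest-neighbour lookup over the library; the sketch below illustrates that step with a hypothetical library and made-up prostate positions.

```python
import numpy as np

# Hypothetical library: plan ID -> prostate position (mm) the plan was optimised for
plan_library = {
    "plan_centre":   np.array([0.0,  0.0, 0.0]),
    "plan_ant_3mm":  np.array([0.0,  3.0, 0.0]),
    "plan_post_3mm": np.array([0.0, -3.0, 0.0]),
    "plan_sup_5mm":  np.array([0.0,  0.0, 5.0]),
}

def select_plan(current_position):
    """Pick the library plan whose reference prostate position is closest
    (Euclidean distance) to the position measured at the end of the last beam."""
    return min(plan_library,
               key=lambda pid: np.linalg.norm(plan_library[pid] - current_position))

measured = np.array([0.4, 2.1, 0.8])   # intra-fraction prostate position, mm
print(select_plan(measured))           # -> plan_ant_3mm
```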

9.
Med Image Anal; 54: 149-167, 2019 May.
Article in English | MEDLINE | ID: mdl-30928829

ABSTRACT

In the past decade, medical robotics has gained significant traction within the surgical field. While the introduction of fully autonomous robotic systems for surgical procedures still remains a challenge, robot-assisted interventions have become increasingly interesting for the scientific and clinical community, especially when complex surgical manoeuvres must be performed under a reduced field of view, as encountered in minimally invasive surgeries. Various imaging modalities can be used to support these procedures by re-creating a virtual, enhanced view of the intervention site. Among them, ultrasound imaging has shown several advantages, such as cost-effectiveness, non-invasiveness and real-time volumetric imaging. In this review we comprehensively report on the interventional applications in which ultrasound imaging has been used to guide the intervention tools, allowing the surgeon to visualize the soft-tissue configuration intra-operatively in real time and to compensate for possible anatomical changes. Future directions are also discussed, in particular how the recent developments in 3D/4D ultrasound imaging and the introduction of advanced imaging capabilities (such as elastography) in commercially available systems may address the unmet needs on the path towards fully autonomous robotic interventions.


Subjects
Image-Guided Biopsy/methods; Minimally Invasive Surgical Procedures/methods; Robotic Surgical Procedures/methods; Ultrasonography/methods; Ablation Techniques/methods; Brachytherapy/methods; Humans; Imaging, Three-Dimensional; Injections/methods; Phantoms, Imaging
10.
PLoS One; 14(2): e0213002, 2019.
Article in English | MEDLINE | ID: mdl-30818345

ABSTRACT

BACKGROUND AND PURPOSE: In prostate cancer treatment with external beam radiation therapy (EBRT), prostate motion and internal changes in tissue distribution can lead to a decrease in plan quality. In most currently used planning methods, the uncertainties due to prostate motion are compensated by irradiating a larger treatment volume. However, this can cause underdosage of the treatment volume and overdosage of the organs at risk (OARs). To reduce this problem, in this proof-of-principle study we developed and evaluated a novel adaptive planning method. The proposed strategy corrects the dose delivered by each beam according to the actual position of the target, in order to produce a final dose distribution dosimetrically as similar as possible to the prescribed one. MATERIAL AND METHODS: Our adaptive planning method was tested on a phantom case and on a clinical case. For the first, a pilot study was performed on an in-silico pelvic phantom. A "library" of intensity-modulated RT (IMRT) plans corresponding to possible positions of the prostate during a treatment fraction was generated at the planning stage. A 3D random-walk model was then used to simulate possible displacements of the prostate during the treatment fraction. At the treatment stage, at the end of each beam, the plan from the library that best approximated the prescribed dose distribution for the current target position was selected and the corresponding beam delivered. In the clinical case, the same approach was used on two prostate cancer patients: for the first, a tissue deformation was simulated in silico; for the second, a cone-beam CT (CBCT) acquired during the treatment was used to simulate an intra-fraction change. Dosimetric comparisons were then performed with the standard treatment plan and, for the second patient, also with an isocenter-shift correction. RESULTS: For the phantom case, the plan generated using the adaptive planning method was able to meet all the dosimetric requirements and to correct a misdosage of 13% of the dose prescription on the prostate. For the first clinical case, the standard planning method caused underdosage of the seminal vesicles by 5% and 4% of the prescribed dose, respectively, when the positional changes of the target were correctly taken into account. The proposed adaptive planning method corrected any possible missed target coverage while at the same time reducing the dose to the OARs. For the second clinical case, target coverage (in particular its uniformity) was significantly worsened and some organs exceeded toxicity objectives both with the standard planning strategy and with the isocenter-shift correction, whereas our approach produced the most uniform target coverage and systematically achieved the lowest toxicity values for the organs at risk. CONCLUSIONS: In our proof-of-principle study, the adaptive planning method performed better than the standard planning and isocenter-shift methods for prostate EBRT. It improved the coverage of the treatment volumes and lowered the dose to the OARs. This planning method is particularly promising for hypofractionated IMRT treatments, in which higher precision and control over dose deposition are needed. Further studies will be performed to test the proposed adaptive planning method more extensively and to evaluate it at a full clinical level.
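As an aside, the 3D random walk used to simulate intra-fraction prostate displacement can be sketched in a few lines; the step size and number of steps below are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def random_walk_3d(n_steps=100, step_sigma_mm=0.5, seed=None):
    """Simulate intra-fraction target drift as an isotropic 3D Gaussian random walk.
    Returns an (n_steps + 1) x 3 array of positions in mm, starting at the origin."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(scale=step_sigma_mm, size=(n_steps, 3))
    return np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

trajectory = random_walk_3d(seed=3)
print("final displacement (mm):", np.round(trajectory[-1], 2))
print("max excursion (mm):", np.round(np.abs(trajectory).max(), 2))
```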


Subjects
Prostatic Neoplasms/radiotherapy; Radiotherapy Planning, Computer-Assisted/methods; Computer Simulation; Computer Systems; Cone-Beam Computed Tomography; Humans; Male; Motion; Organs at Risk; Phantoms, Imaging; Proof of Concept Study; Prostate/diagnostic imaging; Prostate/pathology; Prostate/radiation effects; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/pathology; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/statistics & numerical data; Radiotherapy, Intensity-Modulated/methods; Radiotherapy, Intensity-Modulated/statistics & numerical data
11.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 966-969, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946054

ABSTRACT

Segmentation of knee cartilage from ultrasound (US) images is essential for various clinical tasks in the diagnosis and treatment planning of osteoarthritis. Moreover, the potential use of US imaging for guidance in robotic knee arthroscopy is presently being investigated. Since the femoral cartilage is the main organ at risk during the operation, it is paramount to be able to segment this structure in order to make US guidance feasible. In this paper, we set forth a femoral cartilage segmentation method for 2D US images based on a deep learning network, Mask R-CNN, for these types of applications. While traditional imaging approaches have shown promising results, they are mostly not real-time and involve human interaction. In recent years, deep learning has made its way into medical imaging, showing commendable results; however, deep learning-based segmentation of US images remains largely unexplored. In the present study we employ Mask R-CNN on US images of the knee cartilage. The performance of the method is analyzed in various scenarios, with and without Gaussian-filter preprocessing and with the network pretrained on different datasets. The best results are observed when the images are preprocessed and the network is pretrained on the COCO 2016 image dataset. A maximum Dice Similarity Coefficient (DSC) of 0.88 and an average DSC of 0.80 are achieved when tested on 55 images, indicating that the proposed method has potential for clinical applications.
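The Gaussian-filter preprocessing mentioned above can be as simple as the following sketch; the sigma value and the intensity rescaling are illustrative choices, not the parameters used in the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_us_frame(frame, sigma=1.0):
    """Smooth speckle in a 2D US frame with a Gaussian filter and rescale
    intensities to [0, 1] before feeding it to the segmentation network."""
    smoothed = gaussian_filter(frame.astype(np.float32), sigma=sigma)
    span = smoothed.max() - smoothed.min()
    return (smoothed - smoothed.min()) / (span + 1e-8)

# Illustrative frame standing in for a knee US image
frame = np.random.default_rng(5).integers(0, 256, size=(480, 640)).astype(np.uint8)
processed = preprocess_us_frame(frame)
print(processed.shape, processed.dtype, processed.min(), processed.max())
```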


Subjects
Image Processing, Computer-Assisted; Knee; Cartilage; Humans; Knee/diagnostic imaging; Knee Joint; Ultrasonography