Results 1 - 20 of 67
1.
Sensors (Basel); 23(9), 2023 Apr 30.
Article in English | MEDLINE | ID: mdl-37177630

ABSTRACT

Pectus carinatum (PC) is a chest deformity caused by disproportionate growth of the costal cartilages compared with the bony thoracic skeleton, pulling the sternum forwards and leading to its protrusion. Currently, the most common non-invasive treatment is external compressive bracing by means of an orthosis. While this treatment is widely adopted, the correct magnitude of the applied compressive forces remains unknown, leading to suboptimal results. Moreover, current orthoses are not suitable for monitoring the treatment. The purpose of this study is to design a force measuring system that can be directly embedded into an existing PC orthosis without relevant modifications to its construction. Inspired by currently available commercial products, in which a solid silicone pad is used, three concepts for silicone-based sensors, two capacitive and one magnetic, are presented and compared. Additionally, a full pipeline to capture and store the sensor data was devised. Compression tests were conducted on a calibration machine, with forces ranging from 0 N to 300 N. Local evaluation of the sensors' response in different regions was also performed. The three sensors were tested and then compared with the results of a solid silicone pad. One of the capacitive sensors presented a response identical to the solid silicone pad, while the other two either presented poor repeatability or were too stiff, raising concerns about patient comfort. Overall, the proposed system demonstrated its ability to measure and monitor the forces applied by the orthosis, corroborating its potential for clinical practice.
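As a rough illustration of how such a sensor could be calibrated, the sketch below (Python, with made-up capacitance/force pairs; the quadratic fit and the capacitance_to_force helper are assumptions, not the authors' pipeline) maps a capacitive reading to an estimated compressive force over the 0-300 N range used in the compression tests.

```python
# Sketch: fitting a calibration curve that maps a capacitive sensor reading to force.
import numpy as np

# Hypothetical calibration pairs recorded on a calibration machine (0-300 N).
force_n = np.array([0, 50, 100, 150, 200, 250, 300], dtype=float)       # applied force [N]
capacitance_pf = np.array([12.0, 14.1, 16.5, 19.2, 22.3, 25.8, 29.6])   # sensor output [pF]

# Fit a quadratic polynomial mapping capacitance -> force.
coeffs = np.polyfit(capacitance_pf, force_n, deg=2)

def capacitance_to_force(c_pf: float) -> float:
    """Convert a capacitance reading [pF] into an estimated compressive force [N]."""
    return float(np.polyval(coeffs, c_pf))

print(f"Estimated force at 20.0 pF: {capacitance_to_force(20.0):.1f} N")
```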


Subjects
Pectus Carinatum, Humans, Pectus Carinatum/therapy, Silicon, Sternum, Braces, Pressure, Treatment Outcome
2.
Sensors (Basel); 23(4), 2023 Feb 07.
Article in English | MEDLINE | ID: mdl-36850436

ABSTRACT

Breast cancer is the most prevalent cancer in the world and the fifth-leading cause of cancer-related death. Treatment is effective in the early stages; thus, screening considerable portions of the population is crucial. When the screening procedure uncovers a suspect lesion, a biopsy is performed to assess its potential for malignancy. This procedure is usually performed under real-time ultrasound (US) imaging. This work proposes a visualization system for US breast biopsy. It consists of an application running on AR glasses that interacts with a computer application. The AR glasses track the position of QR codes mounted on a US probe and a biopsy needle. US images are shown in the user's field of view with enhanced lesion visualization and needle trajectory. To validate the system, the latency of US image transmission was evaluated. A usability assessment with different users compared the proposed prototype with the traditional approach. It showed that needle alignment was more precise, with 92.67 ± 2.32° in our prototype versus 89.99 ± 37.49° in the traditional system. The users also reached the lesion more accurately. Overall, the proposed solution presents promising results, and the use of AR glasses as a tracking and visualization device exhibited good performance.
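A minimal sketch of the kind of alignment computation such a system could use, assuming the planned trajectory and the tracked needle axis are available as 3D direction vectors (the vectors and the alignment_angle_deg helper below are illustrative, not the published implementation):

```python
# Sketch: angle between a planned trajectory and the tracked needle axis.
import numpy as np

def alignment_angle_deg(planned_dir: np.ndarray, needle_dir: np.ndarray) -> float:
    """Angle in degrees between two 3D direction vectors."""
    a = planned_dir / np.linalg.norm(planned_dir)
    b = needle_dir / np.linalg.norm(needle_dir)
    cos_theta = np.clip(np.dot(a, b), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))

# Hypothetical example: a nearly aligned needle.
print(alignment_angle_deg(np.array([0.0, 0.0, 1.0]), np.array([0.02, 0.01, 0.99])))
```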


Subjects
Augmented Reality, Female, Humans, User-Computer Interface, Breast Ultrasonography, Ultrasonography, Biopsy
3.
J Biomed Inform; 132: 104121, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35750261

ABSTRACT

Evaluation of the head shape of newborns is needed to detect cranial deformities and disturbances in head growth and, consequently, to predict short- and long-term neurodevelopment. Currently, there is a lack of automatic tools that provide a detailed evaluation of the head shape. Artificial intelligence (AI) methods, namely deep learning (DL), can be explored to develop fast and automatic approaches for shape evaluation. However, due to the clinical variability of patients' head anatomy, generalization of AI networks to clinical needs is paramount and extremely challenging. In this work, a new framework is proposed to augment the 3D data used for training DL networks for shape evaluation. The proposed augmentation strategy deforms head surfaces towards different deformities. For that, a point-based 3D morphable model (p3DMM) is developed to generate a statistical model representative of head shapes with different cranial deformities. Afterward, a constrained transformation approach (3DHT) is applied to warp a head surface towards a target deformity by estimating a dense motion field from the sparse one resulting from the p3DMM. Qualitative evaluation showed that the proposed method generates realistic head shapes indistinguishable from real ones. Moreover, quantitative experiments demonstrated that training DL networks with the proposed augmented surfaces improves their performance in head shape analysis. Overall, the introduced augmentation effectively transforms a given head surface towards different deformity shapes, potentiating the development of DL approaches for head shape analysis.
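A minimal sketch of a point-based statistical shape model in the spirit of the p3DMM, assuming registered head surfaces with point correspondence; the random stand-in data, mode count, and synthesize helper are illustrative only:

```python
# Sketch: PCA-style statistical shape model over corresponding head surface points.
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_points = 30, 500
shapes = rng.normal(size=(n_shapes, n_points * 3))          # each row: flattened (x, y, z) points

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# Principal modes of head-shape variation via SVD.
_, singular_values, modes = np.linalg.svd(centered, full_matrices=False)

def synthesize(coeffs: np.ndarray, n_modes: int = 5) -> np.ndarray:
    """Generate a new head surface from the first n_modes deformation modes."""
    shape = mean_shape + coeffs[:n_modes] @ modes[:n_modes]
    return shape.reshape(n_points, 3)

new_surface = synthesize(rng.normal(size=5) * singular_values[:5] / np.sqrt(n_shapes))
print(new_surface.shape)  # (500, 3)
```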


Subjects
Artificial Intelligence, Statistical Models, Humans, Infant, Newborn Infant
4.
Sensors (Basel); 22(19), 2022 Oct 02.
Article in English | MEDLINE | ID: mdl-36236577

ABSTRACT

The growth of the aging population brings numerous challenges to the health and aesthetics sectors. Here, the use of laser therapy in dermatology is expected to increase, since it allows non-invasive and infection-free treatments. However, existing laser devices require doctors to manually handle the device and visually inspect the skin. As such, the treatment outcome depends on the user's expertise, which frequently results in ineffective treatments and side effects. This study aims to determine the workspace and limits of operation of laser treatments for vascular lesions of the lower limbs. The results of this study can be used to develop robotic-guided technology to help address the aforementioned problems. Specifically, the workspace and limits of operation were studied in eight vascular laser treatments. For this, an electromagnetic tracking system was used to collect the real-time position of the laser during the treatments. The computed average workspace length, height, and width were 0.84 ± 0.15, 0.41 ± 0.06, and 0.78 ± 0.16 m, respectively. This corresponds to an average treatment volume of 0.277 ± 0.093 m3. The average treatment time was 23.2 ± 10.2 min, with an average laser orientation of 40.6 ± 5.6 degrees. Additionally, average velocities of 0.124 ± 0.103 m/s and 31.5 ± 25.4 deg/s were measured. This knowledge characterizes the workspace and limits of operation of vascular laser treatment, which may inform future robotic system development.
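A minimal sketch of how workspace extents and volume could be derived from tracked positions, assuming an axis-aligned bounding box; the trajectory and the workspace_stats helper are illustrative, not the study's processing pipeline:

```python
# Sketch: workspace extents and volume from electromagnetically tracked positions.
import numpy as np

def workspace_stats(positions_m: np.ndarray) -> dict:
    """positions_m: (N, 3) array of tracked positions in metres."""
    extents = positions_m.max(axis=0) - positions_m.min(axis=0)   # per-axis extent
    return {
        "length_m": float(extents[0]),
        "width_m": float(extents[1]),
        "height_m": float(extents[2]),
        "volume_m3": float(np.prod(extents)),
    }

# Hypothetical trajectory covering part of a lower limb.
rng = np.random.default_rng(1)
track = rng.uniform(low=[0.0, 0.0, 0.0], high=[0.8, 0.75, 0.4], size=(5000, 3))
print(workspace_stats(track))
```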


Subjects
Robotics, Lower Extremity/surgery, Robotics/methods, Treatment Outcome
5.
Surg Innov; 23(1): 52-61, 2016 Feb.
Article in English | MEDLINE | ID: mdl-25994623

ABSTRACT

INTRODUCTION AND OBJECTIVES: Laparoscopic surgery has undeniable advantages, such as reduced postoperative pain, smaller incisions, and faster recovery. However, to improve surgeons' performance, ergonomic adaptations of the laparoscopic instruments and the introduction of robotic technology are needed. The aim of this study was to ascertain the influence of a new hand-held robotic device for laparoscopy (HHRDL) and 3D vision on the laparoscopic skills performance of 2 different groups, naïve and expert. MATERIALS AND METHODS: Each participant performed 3 laparoscopic tasks-Peg transfer, Wire chaser, Knot-in 4 different ways. With random sequencing we assigned the execution order of the tasks based on the first type of visualization and laparoscopic instrument. Time to complete each laparoscopic task was recorded and analyzed with one-way analysis of variance. RESULTS: Eleven experts and 15 naïve participants were included. Three-dimensional video helped the naïve group achieve better performance in Peg transfer, Wire chaser 2 hands, and Knot; the new device improved the execution of all laparoscopic tasks (P < .05). For the expert group, the 3D video system was beneficial in Peg transfer and Wire chaser 1 hand, and the robotic device in Peg transfer, Wire chaser 1 hand, and Wire chaser 2 hands (P < .05). CONCLUSION: The HHRDL helps the execution of difficult laparoscopic tasks, such as Knot, in the naïve group. Three-dimensional vision eases laparoscopic performance for participants without laparoscopic experience, unlike for those with experience in laparoscopic procedures.
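A minimal sketch of the one-way analysis of variance used to compare completion times, with made-up timing data and conditions; scipy.stats.f_oneway is a standard implementation, not the authors' exact analysis script:

```python
# Sketch: one-way ANOVA on task completion times across conditions.
from scipy.stats import f_oneway

# Hypothetical completion times (seconds) for one task under three conditions.
conventional_2d = [95, 102, 88, 110, 97]
conventional_3d = [84, 90, 79, 93, 88]
robotic_3d = [70, 76, 68, 81, 74]

statistic, p_value = f_oneway(conventional_2d, conventional_3d, robotic_3d)
print(f"F = {statistic:.2f}, p = {p_value:.4f}")  # p < .05 suggests a difference between groups
```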


Subjects
Ergonomics/methods, Laparoscopy/education, Laparoscopy/instrumentation, Robotic Surgical Procedures/education, Robotic Surgical Procedures/instrumentation, Surgeons/education, Clinical Competence, Humans, Surgeons/statistics & numerical data
6.
Surg Innov; 21(3): 290-6, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24151136

ABSTRACT

Pectus excavatum is the most common deformity of the thorax. A minimally invasive surgical correction is commonly carried out to remodel the anterior chest wall by using an intrathoracic convex prosthesis in the substernal position. The process of prosthesis modeling and bending remains an area for improvement. The authors developed a new system, i3DExcavatum, which can automatically model and bend the bar preoperatively based on a thoracic CT scan. This article presents a comparison between automatic and manual bending. The i3DExcavatum was used to personalize prostheses for 41 patients who underwent surgical correction of pectus excavatum between 2007 and 2012. Regarding anatomical variation, comparison of the patients' sides showed that the soft-tissue thicknesses external to the ribs vary asymmetrically in both symmetric and asymmetric patients. This highlights that the prosthesis bar should be modeled according to each patient's rib positions and dimensions. The average differences between the skin and costal line curvature lengths were 84 ± 4 mm and 96 ± 11 mm for male and female patients, respectively. On the other hand, the i3DExcavatum ensured a smooth curvature of the surgical prosthesis and was capable of predicting and simulating a virtual shape and size of the bar for both asymmetric and symmetric patients. In conclusion, the i3DExcavatum allows preoperative personalization according to the thoracic morphology of each patient. It reduces surgery time and minimizes the margin of error introduced by the manually bent bar, which relies only on a template that copies the chest wall curvature.
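A minimal sketch of one ingredient of such personalization, measuring the length of a chest-wall curve from ordered 3D points; the sample curve and curve_length_mm helper are illustrative, not part of i3DExcavatum:

```python
# Sketch: length of a chest-wall curve (e.g., a costal line extracted from CT).
import numpy as np

def curve_length_mm(points_mm: np.ndarray) -> float:
    """points_mm: (N, 3) ordered points along the curve, in millimetres."""
    segment_lengths = np.linalg.norm(np.diff(points_mm, axis=0), axis=1)
    return float(segment_lengths.sum())

# Hypothetical curve: half-circle of radius 120 mm sampled at 50 points (length ~377 mm).
theta = np.linspace(0.0, np.pi, 50)
curve = np.stack([120 * np.cos(theta), 120 * np.sin(theta), np.zeros_like(theta)], axis=1)
print(f"{curve_length_mm(curve):.1f} mm")
```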


Subjects
Funnel Chest/surgery, Minimally Invasive Surgical Procedures/instrumentation, Minimally Invasive Surgical Procedures/methods, Prosthesis Implantation/instrumentation, Prosthesis Implantation/methods, Adolescent, Adult, Child, Cohort Studies, Funnel Chest/epidemiology, Funnel Chest/psychology, Humans, Male, Prostheses and Implants, Prosthesis Design, Thoracic Radiography, Surveys and Questionnaires, Young Adult
7.
Med Image Anal; 91: 102985, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37844472

ABSTRACT

This paper introduces the "SurgT: Surgical Tracking" challenge, which was organized in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2022). There were two purposes for the creation of this challenge: (1) the establishment of the first standardized benchmark for the research community to assess soft-tissue trackers; and (2) to encourage the development of unsupervised deep learning methods, given the lack of annotated data in surgery. A dataset of 157 stereo endoscopic videos from 20 clinical cases, along with stereo camera calibration parameters, was provided. Participants were assigned the task of developing algorithms to track the movement of soft tissues, represented by bounding boxes, in stereo endoscopic videos. At the end of the challenge, the developed methods were assessed on a previously hidden test subset. This assessment uses benchmarking metrics that were purposely developed for this challenge, to verify the efficacy of unsupervised deep learning algorithms in tracking soft tissue. The metric used for ranking the methods was the Expected Average Overlap (EAO) score, which measures the average overlap between a tracker's and the ground truth bounding boxes. The challenge was won by the deep learning submission from ICVS-2Ai, with a superior EAO score of 0.617. This method employs ARFlow to estimate unsupervised dense optical flow from cropped images, using photometric and regularization losses. The second-placed method, Jmees, with an EAO of 0.583, uses deep learning for surgical tool segmentation on top of a non-deep-learning baseline method, CSRT. CSRT by itself scores a similar EAO of 0.563. The results from this challenge show that, currently, non-deep-learning methods are still competitive. The dataset and benchmarking tool created for this challenge have been made publicly available at https://surgt.grand-challenge.org/. This challenge is expected to contribute to the development of autonomous robotic surgery and other digital surgical technologies.
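A minimal sketch of the bounding-box overlap (IoU) that underlies the EAO ranking metric, together with a simplified per-sequence average overlap; the full EAO protocol (re-initialization and sequence weighting) is not reproduced, and the boxes below are illustrative:

```python
# Sketch: IoU between tracker and ground-truth boxes, and a mean overlap over a sequence.
import numpy as np

def iou(box_a, box_b) -> float:
    """Boxes as (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def average_overlap(pred_boxes, gt_boxes) -> float:
    """Mean IoU between tracker predictions and ground truth over a sequence."""
    return float(np.mean([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]))

print(average_overlap([(10, 10, 50, 50), (12, 11, 52, 49)],
                      [(11, 10, 51, 50), (20, 18, 60, 58)]))
```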


Subjects
Robotic Surgical Procedures, Humans, Benchmarking, Algorithms, Endoscopy, Computer-Assisted Image Processing/methods
8.
Hum Mol Genet; 20(15): 2996-3009, 2011 Aug 01.
Article in English | MEDLINE | ID: mdl-21546381

ABSTRACT

The risk of developing neurodegenerative diseases increases with age. Although many of the molecular pathways regulating proteotoxic stress and longevity are well characterized, their contribution to disease susceptibility remains unclear. In this study, we describe a new Caenorhabditis elegans model of Machado-Joseph disease pathogenesis. Pan-neuronal expression of mutant ATXN3 leads to a polyQ-length-dependent, neuron-subtype-specific aggregation and neuronal dysfunction. Analysis of different neurons revealed a pattern of dorsal nerve cord and sensory neuron susceptibility to mutant ataxin-3 that was distinct from the aggregation and toxicity profiles of polyQ-alone proteins. This reveals that the sequences flanking the polyQ stretch in ATXN3 have a dominant influence on the cell-intrinsic neuronal factors that modulate polyQ-mediated pathogenesis. Aging influences the ATXN3 phenotypes, which can be suppressed by downregulation of the insulin/insulin growth factor-1-like signaling pathway and activation of heat shock factor-1.


Subjects
Caenorhabditis elegans Proteins/metabolism, Caenorhabditis elegans/metabolism, Nerve Tissue Proteins/metabolism, Neurons/cytology, Neurons/metabolism, Transcription Factors/metabolism, Animals, Ataxin-3, Caenorhabditis elegans/genetics, Caenorhabditis elegans Proteins/genetics, Cell Aggregation/genetics, Cell Aggregation/physiology, Forkhead Transcription Factors, Confocal Microscopy, Nerve Tissue Proteins/genetics, Neurons/pathology, Peptides/metabolism, Transcription Factors/genetics
9.
J Urol; 190(5): 1932-7, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23714434

ABSTRACT

PURPOSE: Precise needle puncture of the renal collecting system is an essential but challenging step for successful percutaneous nephrolithotomy. We evaluated the efficiency of a new real-time electromagnetic tracking system for in vivo kidney puncture. MATERIALS AND METHODS: Six anesthetized female pigs underwent ureterorenoscopy to place a catheter with an electromagnetic tracking sensor into the desired puncture site and ascertain puncture success. A tracked needle with a similar electromagnetic tracking sensor was subsequently navigated into the sensor in the catheter. Four punctures were performed by each of 2 surgeons in each pig, including 1 each in the kidney, middle ureter, and right and left sides. Outcome measurements were the number of attempts and the time needed to evaluate the virtual trajectory and perform percutaneous puncture. RESULTS: A total of 24 punctures were easily performed without complication. Surgeons required more time to evaluate the trajectory during ureteral than kidney puncture (median 15 seconds, range 14 to 18, versus 13 seconds, range 11 to 16; p = 0.1). Median renal and ureteral puncture times were 19 seconds (range 14 to 45) and 51 seconds (range 45 to 67), respectively (p = 0.003). Two attempts were needed to achieve a successful ureteral puncture. The technique requires the presence of a renal stone for testing. CONCLUSIONS: The proposed electromagnetic tracking solution for renal collecting system puncture proved to be highly accurate, simple, and quick. This method might represent a paradigm shift in percutaneous kidney access techniques.


Subjects
Collecting Kidney Tubules/surgery, Percutaneous Nephrostomy/methods, Punctures/methods, Animals, Computer Systems, Electromagnetic Phenomena, Female, Swine
10.
Heliyon; 9(6): e16297, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37346350

ABSTRACT

Background: Daily monitoring of physiological parameters is essential for tracking health status and preventing health problems. This has become possible due to the democratization of numerous types of medical devices and is promoted by their interconnection with smartphones. Nevertheless, medical devices that connect to smartphones are typically limited to the manufacturers' applications. Objectives: This paper proposes an intelligent scanning system to simplify the collection of data displayed on different medical device screens, recognizing the values and, optionally, integrating them, through open protocols, with centralized databases. Methods: To develop this system, a dataset comprising 1614 images of medical devices was created, obtained from manufacturer catalogs, photographs, and other public datasets. Then, three object detection algorithms (YOLOv3, Single-Shot Detector [SSD] 320 × 320, and SSD 640 × 640) were trained to detect the digits and acronyms/units of measurement presented by medical devices. These models were tested under 3 different conditions: detecting digits and acronyms/units as a single object (single label), digits and acronyms/units as independent objects (two labels), and digits and acronyms/units individually (fifteen labels). Models trained for single and two labels were complemented with a convolutional neural network (CNN) to identify the detected objects. To group the recognized digits, a condition-tree strategy based on density-based spatial clustering was used. Results: The most promising approach was the use of the SSD 640 × 640 for fifteen labels. Conclusion: As future work, we intend to port this system to a mobile environment to accelerate and streamline the insertion of data into mobile health (mHealth) applications.
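A minimal sketch of the digit-grouping step, assuming the detector outputs per-digit boxes and labels; the coordinates, the DBSCAN parameters, and the left-to-right ordering rule are illustrative, not the paper's condition-tree strategy:

```python
# Sketch: grouping detected digit boxes into displayed values with density-based clustering.
import numpy as np
from sklearn.cluster import DBSCAN

# Each hypothetical detection: (x_min, y_min, x_max, y_max, digit)
detections = [
    (100, 40, 120, 80, 1), (125, 40, 145, 80, 2), (150, 40, 170, 80, 0),   # "120"
    (100, 120, 120, 160, 8), (125, 120, 145, 160, 0),                       # "80"
]
centres = np.array([[(x1 + x2) / 2, (y1 + y2) / 2] for x1, y1, x2, y2, _ in detections])
labels = DBSCAN(eps=40, min_samples=1).fit_predict(centres)

readings = {}
for cluster_id in set(labels):
    members = [detections[i] for i in np.where(labels == cluster_id)[0]]
    members.sort(key=lambda d: d[0])                     # order digits left to right
    readings[cluster_id] = "".join(str(d[4]) for d in members)

print(readings)  # e.g., {0: '120', 1: '80'}
```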

11.
Article in English | MEDLINE | ID: mdl-38082575

ABSTRACT

Breast cancer is the most prevalent type of cancer in women. Although mammography is used as the main imaging modality for diagnosis, robust lesion detection in mammography images is a challenging task, due to the poor contrast of lesion boundaries and the widely diverse sizes and shapes of the lesions. Deep learning techniques have been explored to facilitate automatic diagnosis and have produced outstanding outcomes when applied to different medical challenges. This study provides a benchmark for breast lesion detection in mammography images. Five state-of-the-art methods were evaluated on 1592 mammograms from a publicly available dataset (CBIS-DDSM) and compared using the following metrics: i) mean Average Precision (mAP); ii) intersection over union; iii) precision; iv) recall; v) True Positive Rate (TPR); and vi) false positives per image. The CenterNet, YOLOv5, Faster R-CNN, EfficientDet, and RetinaNet architectures were trained with a combination of the L1 and L2 localization losses. Although all evaluated networks achieved mAP ratings greater than 60%, two stood out. Overall, the results demonstrate the efficiency of CenterNet with an Hourglass-104 backbone and of YOLOv5, which achieved mAP scores of 70.71% and 69.36% and TPR scores of 96.10% and 92.19%, respectively, outperforming the other state-of-the-art models. Clinical Relevance - This study demonstrates the effectiveness of deep learning algorithms for breast lesion detection in mammography, potentially improving the accuracy and efficiency of breast cancer diagnosis.
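A minimal sketch of how TPR and false positives per image can be tallied once detections have been matched to ground-truth lesions at some IoU threshold; the per-image counts and the detection_summary helper are illustrative:

```python
# Sketch: True Positive Rate and false positives per image from matched detections.
def detection_summary(per_image_counts):
    """per_image_counts: list of (true_positives, false_positives, ground_truth_lesions)."""
    tp = sum(c[0] for c in per_image_counts)
    fp = sum(c[1] for c in per_image_counts)
    gt = sum(c[2] for c in per_image_counts)
    tpr = tp / gt if gt else 0.0
    fppi = fp / len(per_image_counts)
    return tpr, fppi

tpr, fppi = detection_summary([(1, 0, 1), (1, 1, 1), (0, 2, 1), (1, 0, 1)])
print(f"TPR = {tpr:.2f}, FP/image = {fppi:.2f}")
```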


Subjects
Breast Neoplasms, Deep Learning, Female, Humans, Mammography/methods, Breast Neoplasms/diagnostic imaging, Breast Neoplasms/pathology, Early Detection of Cancer, Algorithms
12.
Article in English | MEDLINE | ID: mdl-38082637

ABSTRACT

Medical image segmentation is a paramount task for several clinical applications, namely for the diagnosis of pathologies, for treatment planning, and for aiding image-guided surgeries. With the development of deep learning, Convolutional Neural Networks (CNNs) have become the state of the art for medical image segmentation. However, concerns remain regarding precise object boundary delineation, since traditional CNNs can produce non-smooth segmentations with boundary discontinuities. In this work, a U-shaped CNN architecture is proposed to generate both a pixel-wise segmentation and a probabilistic contour map of the object to segment, in order to produce reliable segmentations at the object's boundaries. Moreover, since the segmentation and contour maps must be inherently related to each other, a dual consistency loss that relates the two outputs of the network is proposed. Thus, the network is forced to learn the segmentation and contour delineation tasks consistently during training. The proposed method was applied and validated on a public dataset of cardiac 3D ultrasound images of the left ventricle. The results showed the good performance of the method and its applicability to the cardiac dataset, demonstrating its potential to be used in clinical practice for medical image segmentation. Clinical Relevance - The proposed network with the dual consistency loss scheme can improve the performance of state-of-the-art CNNs for medical image segmentation, proving its value for computer-aided diagnosis.
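A minimal sketch of the dual-consistency idea, assuming the boundary is derived from the soft segmentation with a simple gradient magnitude; this is an illustration of the concept, not the paper's exact loss:

```python
# Sketch: consistency term between a predicted segmentation map and a predicted contour map.
import numpy as np

def contour_from_segmentation(seg: np.ndarray) -> np.ndarray:
    """Approximate boundary probability from a soft segmentation map."""
    gy, gx = np.gradient(seg)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)
    return magnitude / (magnitude.max() + 1e-8)

def dual_consistency_loss(pred_seg: np.ndarray, pred_contour: np.ndarray) -> float:
    """Mean squared difference between the derived and the predicted contour maps."""
    derived = contour_from_segmentation(pred_seg)
    return float(np.mean((derived - pred_contour) ** 2))

# Hypothetical 64x64 soft outputs.
rng = np.random.default_rng(0)
seg, contour = rng.random((64, 64)), rng.random((64, 64))
print(dual_consistency_loss(seg, contour))
```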


Subjects
Computer-Assisted Image Processing, Three-Dimensional Imaging, Computer-Assisted Image Processing/methods, Three-Dimensional Imaging/methods, Neural Networks (Computer), Heart, Heart Ventricles
13.
Article in English | MEDLINE | ID: mdl-38083333

ABSTRACT

Breast cancer is a global public health concern. For women with suspicious breast lesions, the current diagnostic pathway requires a biopsy, which is usually guided by ultrasound (US). However, this process is challenging due to the low quality of the US image and the complexity of handling the US probe and the surgical needle simultaneously, making it largely reliant on the surgeon's expertise. Previous works employing collaborative robots have emerged to improve the precision of biopsy interventions, providing an easier, safer, and more ergonomic procedure. However, for this equipment to navigate around the breast autonomously, a 3D reconstruction of the breast needs to be available. The accuracy of these systems still needs to improve, with the 3D reconstruction of the breast being one of the main sources of error. The main objective of this work is to develop a method to obtain a robust 3D reconstruction of the patient's breast, based on monocular RGB images, which can later be used to compute the robot's trajectories for the biopsy. To this end, depth estimation techniques were developed, based on a deep learning architecture composed of a CNN, an LSTM, and an MLP, to generate depth maps that can be converted into point clouds. After merging several point clouds from multiple points of view, it is possible to generate a real-time reconstruction of the breast as a mesh. The development and validation of our method were performed using a previously described synthetic dataset. Hence, this procedure takes RGB images and the camera positions as input and outputs the breast meshes. It has a mean error of 3.9 mm and a standard deviation of 1.2 mm. The final results attest to the ability of this methodology to predict the breast's shape and size using monocular images. Clinical Relevance - This work proposes a method based on artificial intelligence and monocular RGB images to obtain the breast's volume during robot-guided breast biopsies, improving their execution and safety.
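A minimal sketch of the back-projection from a depth map to a point cloud with a pinhole camera model; the intrinsics and the constant depth map are placeholders for the values the CNN+LSTM+MLP network would produce:

```python
# Sketch: converting a metric depth map into a 3D point cloud in camera coordinates.
import numpy as np

def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """depth: (H, W) metric depth map. Returns an (H*W, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Hypothetical 480x640 depth map at roughly 0.5 m.
depth_map = np.full((480, 640), 0.5)
cloud = depth_to_point_cloud(depth_map, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)
```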


Subjects
Mammaplasty, Robotic Surgical Procedures, Robotics, Humans, Female, Artificial Intelligence, Breast/pathology
14.
Sci Rep; 13(1): 761, 2023 Jan 14.
Article in English | MEDLINE | ID: mdl-36641527

ABSTRACT

Chronic Venous Disorders (CVD) of the lower limbs are one of the most prevalent medical conditions, affecting 35% of adults in Europe and North America. Due to the exponential growth of the aging population and the worsening of CVD with age, it is expected that the healthcare costs and the resources needed for the treatment of CVD will increase in the coming years. The early diagnosis of CVD is fundamental for treatment planning, while the monitoring of its treatment is fundamental to assess a patient's condition and quantify the evolution of CVD. However, correct diagnosis relies on a qualitative approach through visual recognition of the various venous disorders, being time-consuming and highly dependent on the physician's expertise. In this paper, we propose a novel automatic strategy for the joint segmentation and classification of CVDs. The strategy relies on a multi-task deep learning network, named VENet, that simultaneously solves the segmentation and classification tasks, exploiting the information of both tasks to increase learning efficiency and, ultimately, improve their performance. The proposed method was compared against state-of-the-art strategies on a dataset of 1376 CVD images. Experiments showed that the VENet achieved a classification performance of 96.4%, 96.4%, and 97.2% for accuracy, precision, and recall, respectively, and a segmentation performance of 75.4%, 76.7%, and 76.7% for the Dice coefficient, precision, and recall, respectively. The joint formulation increased the robustness of both tasks when compared to the conventional classification or segmentation strategies, proving its added value, mainly for the segmentation of small lesions.
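A minimal sketch of the Dice coefficient used to report segmentation performance, computed here for hypothetical binary lesion masks:

```python
# Sketch: Dice coefficient between two binary segmentation masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """pred, target: binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Hypothetical masks.
a = np.zeros((128, 128), dtype=bool); a[40:80, 40:80] = True
b = np.zeros((128, 128), dtype=bool); b[45:85, 45:85] = True
print(f"Dice = {dice_coefficient(a, b):.3f}")
```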


Subjects
Cardiovascular Diseases, Neural Networks (Computer), Veins, Aged, Humans, Europe, Computer-Assisted Image Processing/methods, North America, Chronic Disease
15.
Article in English | MEDLINE | ID: mdl-38082961

ABSTRACT

Classification of electrocardiogram (ECG) signals plays an important role in the diagnosis of heart diseases. The ECG is a complex, non-linear signal and is the first option for the preliminary identification of specific pathologies/conditions (e.g., arrhythmias). The scientific community has proposed a multitude of intelligent systems to automatically process the ECG signal, through deep learning as well as machine learning techniques, which present high performance and state-of-the-art results. However, most of these models are designed to analyze the ECG signal individually, i.e., segment by segment. The scientific community states that, to diagnose a pathology in the ECG signal, it is not enough to analyze a single segment corresponding to one cardiac cycle; rather, successive segments of cardiac cycles must be analyzed to identify a pathological pattern. In this paper, an intelligent method based on a 1D Convolutional Neural Network paired with a Multilayer Perceptron (CNN 1D+MLP) was evaluated to automatically diagnose a set of pathological conditions from the analysis of individual cardiac-cycle segments. In particular, we study the robustness of this method in the analysis of several ECG signal segments at once. Two ECG signal databases were selected: the MIT-BIH Arrhythmia Database (D1) and the European ST-T Database (D2). The data were processed to create datasets with two, three, and five segments in a row, to train and test the performance of the method. The method was evaluated in terms of classification metrics, such as precision, recall, F1-score, and accuracy, as well as through the calculation of confusion matrices. Overall, the method demonstrated high robustness in the analysis of successive ECG signal segments, from which we conclude that it has the potential to detect anomalous patterns in the ECG signal. In the future, we will use this method to analyze ECG signals arriving in real time from a wearable device, through a cloud system. Clinical Relevance - This study evaluates the potential of a deep learning method to classify one or several segments of the cardiac cycle and diagnose pathologies in ECG signals.
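A minimal sketch of how inputs of two, three, or five successive cardiac-cycle segments could be assembled from per-beat segments; the array shapes and the stack_successive_segments helper are illustrative, not the paper's preprocessing code:

```python
# Sketch: building classifier inputs from runs of successive cardiac-cycle segments.
import numpy as np

def stack_successive_segments(segments: np.ndarray, n_in_a_row: int) -> np.ndarray:
    """segments: (n_beats, segment_length). Returns (n_beats - n_in_a_row + 1, n_in_a_row * segment_length)."""
    windows = [
        segments[i:i + n_in_a_row].reshape(-1)
        for i in range(len(segments) - n_in_a_row + 1)
    ]
    return np.stack(windows)

# Hypothetical dataset: 100 beats with 187 samples per segment.
beats = np.random.randn(100, 187)
print(stack_successive_segments(beats, n_in_a_row=3).shape)  # (98, 561)
```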


Subjects
Deep Learning, Humans, Neural Networks (Computer), Cardiac Arrhythmias/diagnosis, Electrocardiography/methods, Machine Learning
16.
Article in English | MEDLINE | ID: mdl-38083151

ABSTRACT

Accurate classification of lesions as benign or malignant in breast ultrasound (BUS) images is a critical task that requires experienced radiologists and faces many challenges, such as poor image quality, artifacts, and high lesion variability. Thus, automatic lesion classification may aid professionals in breast cancer diagnosis. In this scope, computer-aided diagnosis systems have been proposed to assist in medical image interpretation, mitigating intra- and inter-observer variability. Recently, such systems using convolutional neural networks have demonstrated impressive results in medical image classification tasks. However, the lack of public benchmarks and of a standardized evaluation method hampers the comparison of networks' performance. This work is a benchmark for lesion classification in BUS images comparing six state-of-the-art networks: GoogLeNet, InceptionV3, ResNet, DenseNet, MobileNetV2, and EfficientNet. For each network, five input data variations that include segmentation information were tested to compare their impact on the final performance. The methods were trained on a multi-center BUS dataset (BUSI and UDIAT) and evaluated using the following metrics: precision, sensitivity, F1-score, accuracy, and area under the curve (AUC). Overall, the lesion cropped with a thin border of background provides the best performance. For this input data, EfficientNet obtained the best results: an accuracy of 97.65% and an AUC of 96.30%. Clinical Relevance - This study showed the potential of deep neural networks to be used in clinical practice for breast lesion classification, also suggesting the best model choices.
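A minimal sketch of the best-performing input variant, cropping the lesion with a thin border of surrounding background; the margin value and the crop_with_border helper are illustrative choices:

```python
# Sketch: cropping a lesion bounding box with a thin background margin.
import numpy as np

def crop_with_border(image: np.ndarray, bbox, margin: int = 10) -> np.ndarray:
    """bbox: (y_min, x_min, y_max, x_max) lesion bounding box; margin in pixels."""
    y1, x1, y2, x2 = bbox
    h, w = image.shape[:2]
    y1, x1 = max(0, y1 - margin), max(0, x1 - margin)
    y2, x2 = min(h, y2 + margin), min(w, x2 + margin)
    return image[y1:y2, x1:x2]

# Hypothetical BUS image and lesion box.
us_image = np.random.rand(400, 500)
print(crop_with_border(us_image, (120, 150, 220, 260), margin=15).shape)  # (130, 140)
```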


Subjects
Breast Neoplasms, Deep Learning, Female, Humans, Breast/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Neural Networks (Computer), Ultrasonography
17.
Article in English | MEDLINE | ID: mdl-38083227

ABSTRACT

The left atrial appendage (LAA) is the major source of thromboembolism in patients with non-valvular atrial fibrillation. Currently, LAA occlusion can be offered as a treatment for these patients, obstructing the LAA with a percutaneously delivered device. Nevertheless, correct device sizing is a complex task, requiring manual analysis of medical images. This approach is sub-optimal, time-demanding, and highly variable between experts. Different solutions have been proposed to improve intervention planning, but no efficient solution is available for 2D ultrasound, which is the most widely used imaging modality for intervention planning and guidance. In this work, we studied the performance of recently proposed deep learning methods when applied to LAA segmentation in 2D ultrasound. For that, a 2D ultrasound database was created. Then, the performance of different deep learning methods, namely Unet, UnetR, AttUnet, and TransAttUnet, was assessed. All networks were compared using seven metrics: i) Dice coefficient; ii) accuracy; iii) recall; iv) specificity; v) precision; vi) Hausdorff distance; and vii) average distance error. Overall, the results demonstrate the efficiency of AttUnet and TransAttUnet, with Dice scores of 88.62% and 89.28% and accuracies of 88.25% and 86.30%, respectively. The current results demonstrate the feasibility of deep learning methods for LAA segmentation in 2D ultrasound. Clinical Relevance - Our results prove the clinical potential of deep neural networks for LAA anatomical analysis.
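A minimal sketch of the symmetric Hausdorff distance between predicted and ground-truth LAA contours, using SciPy's directed Hausdorff routine on illustrative contours:

```python
# Sketch: symmetric Hausdorff distance between two 2D contours.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    """contour_a, contour_b: (N, 2) arrays of boundary points."""
    forward, _, _ = directed_hausdorff(contour_a, contour_b)
    backward, _, _ = directed_hausdorff(contour_b, contour_a)
    return max(forward, backward)

# Hypothetical contours (pixel coordinates).
theta = np.linspace(0, 2 * np.pi, 100)
gt = np.stack([50 + 20 * np.cos(theta), 50 + 30 * np.sin(theta)], axis=1)
pred = np.stack([52 + 21 * np.cos(theta), 49 + 29 * np.sin(theta)], axis=1)
print(f"Hausdorff distance = {hausdorff_distance(pred, gt):.2f} px")
```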


Subjects
Atrial Appendage, Deep Learning, Humans, Atrial Appendage/diagnostic imaging, Transesophageal Echocardiography/methods, Ultrasonography, Factual Databases
18.
Article in English | MEDLINE | ID: mdl-38083246

ABSTRACT

Ultrasound (US) imaging is a widely used medical imaging modality for the diagnosis, monitoring, and surgical planning of kidney conditions. Accurate segmentation of the kidney and its internal structures in US images is essential for the assessment of kidney function and the detection of pathological conditions, such as cysts, tumors, and kidney stones. Therefore, there is a need for automated methods that can accurately segment the kidney and its internal structures in US images. Over the years, automatic strategies have been proposed for this purpose, with deep learning methods achieving the current state-of-the-art results. However, these strategies typically ignore the segmentation of the internal structures of the kidney. Moreover, they were evaluated on different private datasets, hampering the direct comparison of results and making it difficult to determine the optimal strategy for this task. In this study, we perform a comparative analysis of 7 deep learning networks for the segmentation of the kidney and its internal structures (Capsule, Central Echogenic Complex (CEC), Cortex, and Medulla) in 2D US images on an open-access multi-class kidney US dataset. The dataset includes 514 images acquired in multiple clinical centers using different US machines and protocols. The dataset contains annotations from two experts, but only the 321 images with complete segmentation of all 4 classes were used. Overall, the results demonstrate that the DeepLabV3+ network outperformed the inter-rater agreement, with a Dice score of 78.0% compared to 75.6% for the inter-rater variability. Specifically, DeepLabV3+ achieved mean Dice scores of 94.2% for the Capsule, 85.8% for the CEC, 62.4% for the Cortex, and 69.6% for the Medulla. These findings suggest the potential of deep learning-based methods for improving the accuracy of kidney segmentation in US images. Clinical Relevance - This study shows the potential of DL for improving the accuracy of kidney segmentation in US, leading to increased diagnostic efficiency and enabling new applications such as computer-aided diagnosis and treatment, ultimately resulting in improved patient outcomes and reduced healthcare costs.
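A minimal sketch of per-class Dice computation from integer label maps, as would be reported for the four kidney structures; the label maps below are random stand-ins:

```python
# Sketch: per-class Dice scores from multi-class label maps.
import numpy as np

def per_class_dice(pred: np.ndarray, target: np.ndarray, class_ids, eps: float = 1e-8) -> dict:
    scores = {}
    for c in class_ids:
        p, t = pred == c, target == c
        scores[c] = float((2.0 * np.logical_and(p, t).sum() + eps) / (p.sum() + t.sum() + eps))
    return scores

# Hypothetical label maps: 0 = background, 1 = Capsule, 2 = CEC, 3 = Cortex, 4 = Medulla.
rng = np.random.default_rng(0)
prediction = rng.integers(0, 5, size=(256, 256))
ground_truth = rng.integers(0, 5, size=(256, 256))
print(per_class_dice(prediction, ground_truth, class_ids=[1, 2, 3, 4]))
```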


Subjects
Deep Learning, Humans, Computer-Assisted Diagnosis/methods, Kidney/diagnostic imaging, Semantics, Datasets as Topic
19.
Med Image Anal; 89: 102888, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37451133

ABSTRACT

Formalizing surgical activities as triplets of the instrument used, the action performed, and the target anatomy is becoming a gold-standard approach for surgical activity modeling. The benefit is that this formalization helps to obtain a more detailed understanding of tool-tissue interaction, which can be used to develop better artificial intelligence assistance for image-guided surgery. Earlier efforts, and the CholecTriplet challenge introduced in 2021, have put together techniques aimed at recognizing these triplets from surgical footage. Also estimating the spatial locations of the triplets would offer more precise intraoperative context-aware decision support for computer-assisted intervention. This paper presents the CholecTriplet2022 challenge, which extends surgical action triplet modeling from recognition to detection. It includes weakly supervised bounding-box localization of every visible surgical instrument (or tool), as the key actor, and the modeling of each tool activity in the form of a triplet. The paper describes a baseline method and 10 new deep learning algorithms presented at the challenge to solve the task. It also provides thorough methodological comparisons of the methods, an in-depth analysis of the obtained results across multiple metrics and across visual and procedural challenges, their significance, and useful insights for future research directions and applications in surgery.
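A minimal sketch of one possible representation of a detected triplet with its instrument box; the field names and values are illustrative, not the challenge's data format:

```python
# Sketch: a data structure for a detected (instrument, verb, target) triplet with its box.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TripletDetection:
    instrument: str                            # key actor, e.g., "grasper"
    verb: str                                  # action performed, e.g., "retract"
    target: str                                # target anatomy, e.g., "gallbladder"
    box: Tuple[float, float, float, float]     # instrument bounding box (x, y, w, h), normalized
    confidence: float

detection = TripletDetection("grasper", "retract", "gallbladder", (0.42, 0.31, 0.18, 0.22), 0.87)
print(detection)
```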


Subjects
Artificial Intelligence, Computer-Assisted Surgery, Humans, Endoscopy, Algorithms, Computer-Assisted Surgery/methods, Surgical Instruments
20.
Med Image Anal; 88: 102833, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37267773

ABSTRACT

In-utero fetal MRI is emerging as an important tool in the diagnosis and analysis of the developing human brain. Automatic segmentation of the developing fetal brain is a vital step in the quantitative analysis of prenatal neurodevelopment in both the research and clinical context. However, manual segmentation of cerebral structures is time-consuming and prone to error and inter-observer variability. Therefore, we organized the Fetal Tissue Annotation (FeTA) Challenge in 2021 in order to encourage the development of automatic segmentation algorithms on an international level. The challenge utilized the FeTA Dataset, an open dataset of fetal brain MRI reconstructions segmented into seven different tissues (external cerebrospinal fluid, gray matter, white matter, ventricles, cerebellum, brainstem, deep gray matter). Twenty international teams participated in this challenge, submitting a total of 21 algorithms for evaluation. In this paper, we provide a detailed analysis of the results from both a technical and a clinical perspective. All participants relied on deep learning methods, mainly U-Nets, with some variability in network architecture, optimization, and image pre- and post-processing. The majority of teams used existing medical imaging deep learning frameworks. The main differences between the submissions were the fine-tuning done during training and the specific pre- and post-processing steps performed. The challenge results showed that almost all submissions performed similarly. Four of the top five teams used ensemble learning methods. However, one team's algorithm performed significantly better than the other submissions and consisted of an asymmetrical U-Net network architecture. This paper provides a first-of-its-kind benchmark for future automatic multi-tissue segmentation algorithms for the developing human brain in utero.
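A minimal sketch of majority-vote ensembling of several models' label maps, a simple form of the ensemble learning most top teams used; label values and array sizes are illustrative:

```python
# Sketch: per-pixel majority vote across several models' multi-class predictions.
import numpy as np

def majority_vote(label_maps: np.ndarray, n_classes: int = 8) -> np.ndarray:
    """label_maps: (n_models, H, W) integer predictions. Returns the most voted label per pixel."""
    one_hot = np.eye(n_classes, dtype=np.int32)[label_maps]      # (n_models, H, W, n_classes)
    votes = one_hot.sum(axis=0)                                  # (H, W, n_classes)
    return votes.argmax(axis=-1)

# Hypothetical predictions from three models on a 2D slice (labels 0-7: background + 7 tissues).
rng = np.random.default_rng(0)
predictions = rng.integers(0, 8, size=(3, 128, 128))
print(majority_vote(predictions).shape)  # (128, 128)
```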


Subjects
Computer-Assisted Image Processing, White Matter, Pregnancy, Female, Humans, Computer-Assisted Image Processing/methods, Brain/diagnostic imaging, Head, Fetus/diagnostic imaging, Algorithms, Magnetic Resonance Imaging/methods