ABSTRACT
Hepatocellular carcinoma (HCC), the most common type of liver cancer, poses significant challenges in detection and diagnosis. Medical imaging, especially computed tomography (CT), is pivotal in non-invasively identifying this disease, but requires substantial expertise for interpretation. This research introduces an innovative strategy that integrates two-dimensional (2D) and three-dimensional (3D) deep learning models within a federated learning (FL) framework for precise segmentation of liver and tumor regions in medical images. The study utilized 131 CT scans from the Liver Tumor Segmentation (LiTS) challenge and demonstrated the superior efficiency and accuracy of the proposed Hybrid-ResUNet model, with a Dice score of 0.9433 and an AUC of 0.9965, compared to ResNet and EfficientNet models. This FL approach is beneficial for conducting large-scale clinical trials while safeguarding patient privacy across healthcare settings. It facilitates active engagement in problem-solving, data collection, model development, and refinement. The study also addresses data imbalances in the FL context, showing resilience and highlighting local models' robust performance. Future research will concentrate on refining federated learning algorithms and their incorporation into continuous integration and deployment (CI/CD) processes in AI system operations, emphasizing the dynamic involvement of clients. We recommend a collaborative human-AI endeavor to enhance feature extraction and knowledge transfer. These improvements are intended to boost equitable and efficient data collaboration across various sectors in practical scenarios, offering a crucial guide for forthcoming research in medical AI.
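The Dice score reported above measures the overlap between a predicted segmentation mask and the ground truth. A minimal sketch of how it is typically computed for binary masks follows; the exact implementation used in the study is not given, so this is a generic illustration on toy 8×8 masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping 4x4 square "tumor" masks (16 pixels each,
# 9 pixels of overlap), so Dice = 2*9 / (16 + 16) = 0.5625.
pred = np.zeros((8, 8)); pred[2:6, 2:6] = 1
target = np.zeros((8, 8)); target[3:7, 3:7] = 1
print(round(dice_score(pred, target), 4))  # 0.5625
```

A Dice score of 0.9433, as reported for the Hybrid-ResUNet model, corresponds to predicted and reference masks that overlap almost completely.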
Subject(s)
Carcinoma, Hepatocellular , Deep Learning , Liver Neoplasms , Portal Vein , Tomography, X-Ray Computed , Humans , Liver Neoplasms/diagnostic imaging , Carcinoma, Hepatocellular/diagnostic imaging , Tomography, X-Ray Computed/methods , Portal Vein/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms
ABSTRACT
BACKGROUND: Preoperative imaging evaluation of liver volume and hepatic steatosis in the donor affects transplantation outcomes. However, computed tomography (CT) for liver volumetry and magnetic resonance spectroscopy (MRS) for hepatic steatosis are time consuming. Therefore, we investigated the correlation of automated 3D multi-echo Dixon sequence magnetic resonance imaging (ME-Dixon MRI) and its derived proton density fat fraction (MRI-PDFF) with CT liver volumetry and MRS hepatic steatosis measurements in living liver donors. METHODS: This retrospective cross-sectional study was conducted from December 2017 to November 2022. We enrolled donors who received a dynamic CT scan and an MRI exam within 2 days. First, CT volumetry was processed semiautomatically using commercial software, and ME-Dixon MRI volumetry was measured automatically using an embedded sequence. Next, the signal intensity of the MRI-PDFF volumetric data was correlated with MRS as the gold standard. RESULTS: We included 165 living donors. The total liver volume on ME-Dixon MRI was significantly correlated with that on CT (r = 0.913, p < 0.001). The fat percentage measured using MRI-PDFF revealed a strong correlation between automatic segmental volume and MRS (r = 0.705, p < 0.001). Furthermore, the hepatic steatosis group (MRS ≥5%) showed a stronger correlation than the non-hepatic steatosis group (MRS <5%) in both the volumetric (r = 0.906 vs. r = 0.887) and fat fraction analyses (r = 0.779 vs. r = 0.338). CONCLUSION: Automated ME-Dixon MRI liver volumetry and MRI-PDFF were strongly correlated with CT liver volumetry and MRS hepatic steatosis measurements, especially in donors with hepatic steatosis.
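The r values above are Pearson correlation coefficients between paired measurements from the two modalities. A minimal sketch of how such a correlation could be computed for paired volumetry readings; the volumes below are made up for illustration and are not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

# Hypothetical paired total liver volumes (mL): CT vs. ME-Dixon MRI.
ct_vol = [1450, 1320, 1610, 1505, 1385]
mri_vol = [1432, 1350, 1580, 1520, 1370]
r = pearson_r(ct_vol, mri_vol)
print(round(r, 3))
```

Values of r near 1, such as the reported r = 0.913 for total liver volume, indicate that one modality's readings track the other almost linearly.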
ABSTRACT
The rapid development of AIoT-related technologies has revolutionized various industries. Advantages such as real-time sensing, low cost, small size, and easy deployment have led to the extensive use of wireless sensor networks in many fields. However, because data are transmitted wirelessly and the built-in power supply is limited, controlling energy consumption and making sensor network applications more efficient remain urgent practical problems. In this study, we formulate this problem as a mathematical model of a tree-structured wireless sensor network that mainly considers QoS and fairness requirements. The study determines the probability of sensor activity, the transmission distance, and the transmitted packet size, and thereby minimizes energy consumption. The Lagrangian Relaxation method is used to find a minimum-energy solution while maintaining the network's transmission efficiency. The experimental results confirm that decision-making speed and energy consumption can be effectively improved.
Subject(s)
Computer Communication Networks , Wireless Technology , Computer Simulation , Algorithms , Models, Theoretical
ABSTRACT
This paper proposes an encoder-decoder architecture for kidney segmentation. A hyperparameter optimization process is implemented, covering model architecture development, the selection of a windowing method and a loss function, and data augmentation. The model, which consists of EfficientNet-B5 as the encoder and a feature pyramid network as the decoder, yields the best performance with a Dice score of 0.969 on the 2019 Kidney and Kidney Tumor Segmentation Challenge dataset. The proposed model is tested with different voxel spacings, anatomical planes, and kidney and tumor volumes. Moreover, case studies are conducted to analyze segmentation outliers. Finally, five-fold cross-validation and the 3D-IRCAD-01 dataset are used to evaluate the developed model in terms of the following evaluation metrics: the Dice score, recall, precision, and the Intersection over Union (IoU) score. This paper thus demonstrates a new development and application of artificial intelligence algorithms for image analysis and interpretation. Overall, our experimental results show that the proposed kidney segmentation solution for CT images can be applied to clinical needs to assist surgeons in surgical planning. It enables calculation of the total kidney volume for kidney function estimation in autosomal dominant polycystic kidney disease (ADPKD) and supports radiologists and doctors in disease diagnosis and the assessment of disease progression.
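The evaluation metrics named above (recall, precision, and Intersection over Union) are all derived from the true-positive, false-positive, and false-negative pixel counts of a binary mask comparison. A generic sketch on toy 2×2 masks, not the study's evaluation code:

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Recall, precision, and Intersection over Union for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # true positives
    fp = np.logical_and(pred, ~target).sum()  # false positives
    fn = np.logical_and(~pred, target).sum()  # false negatives
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    iou = tp / (tp + fp + fn)
    return recall, precision, iou

# Toy 2x2 masks: 2 true positives, 1 false positive, 1 false negative.
target = np.array([[1, 1], [1, 0]])
pred = np.array([[1, 1], [0, 1]])
recall, precision, iou = segmentation_metrics(pred, target)
print(round(recall, 3), round(precision, 3), round(iou, 3))  # 0.667 0.667 0.5
```

Note that IoU penalizes errors more heavily than the Dice score: for the same counts, IoU = tp / (tp + fp + fn) while Dice = 2·tp / (2·tp + fp + fn), so Dice is always at least as large.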
Subject(s)
Deep Learning , Artificial Intelligence , Image Processing, Computer-Assisted/methods , Kidney/diagnostic imaging , Tomography, X-Ray Computed/methods
ABSTRACT
Previously, doctors interpreted computed tomography (CT) images based on their experience when diagnosing kidney diseases. However, with the rapid increase in the number of CT images, such interpretation required considerable time and effort and produced inconsistent results. Several novel neural network models have been proposed to automatically identify kidney or tumor areas in CT images to solve this problem. In most of these models, only the neural network structure was modified to improve accuracy. However, data pre-processing is also a crucial step in improving the results. This study systematically discusses the pre-processing methods required before medical images are fed into a neural network model. The experimental results show that the proposed pre-processing methods and models significantly improve the accuracy rate compared with the case without data pre-processing. Specifically, the Dice score improved from 0.9436 to 0.9648 for kidney segmentation and reached 0.7294 for the detection of all tumor types. This performance is suitable for clinical applications with lower computational resource requirements, based on the proposed medical image processing methods and deep learning models. Cost efficiency and effectiveness are also achieved for accurate automatic kidney volume calculation and tumor detection.
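One standard CT pre-processing step of the kind discussed above is intensity windowing: clipping Hounsfield units (HU) to a range that highlights the tissue of interest and rescaling to [0, 1] before feeding slices to a network. The window bounds below are illustrative soft-tissue values, not the study's exact settings:

```python
import numpy as np

def window_ct(hu_image, hu_min=-200.0, hu_max=300.0):
    """Clip Hounsfield units to a soft-tissue window and scale to [0, 1]."""
    clipped = np.clip(hu_image, hu_min, hu_max)
    return (clipped - hu_min) / (hu_max - hu_min)

# Illustrative HU values: air, window edges, mid-window tissue, dense bone.
slice_hu = np.array([-1000.0, -200.0, 50.0, 300.0, 1200.0])
normalized = window_ct(slice_hu)  # values become 0.0, 0.0, 0.5, 1.0, 1.0
print(normalized)
```

Windowing removes irrelevant intensity variation (air, bone) so the network's input range is dominated by the organ contrast that matters for segmentation.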
Subject(s)
Deep Learning , Neoplasms , Humans , Image Processing, Computer-Assisted/methods , Kidney/diagnostic imaging , Tomography, X-Ray Computed/methods
ABSTRACT
As wireless sensor networks have become more prevalent, data from sensors in daily life are constantly being recorded. Owing to cost and energy consumption considerations, optimization-based approaches are proposed to reduce the number of deployed sensors while yielding results within the error tolerance. A correlation-aware method is also designed as a mathematical model that combines theoretical and practical perspectives. Sensor deployment strategies based on XGBoost, Pearson correlation, and Lagrangian Relaxation (LR) are determined to minimize deployment costs while keeping estimation errors below a given threshold. The results ensure the accuracy of the gathered information while minimizing deployment cost and maximizing the lifetime of the WSN. Furthermore, the proposed solution can be readily applied to sensor distribution problems in various fields.
Subject(s)
Computer Communication Networks , Wireless Technology , Models, Theoretical , Records
ABSTRACT
A combined edge and core cloud computing environment is a novel solution in 5G network slices. Clients' high-availability requirements are a challenge because they limit the admission control possible in front of the edge cloud. This work proposes an orchestrator with a mathematical programming model that takes a global viewpoint to solve resource management problems and satisfy clients' high-availability requirements. A Lagrangian relaxation-based approach is adopted to solve the problems at a near-optimal level and increase system revenue. A promising and straightforward resource management approach and several experimental cases are used to evaluate efficiency and effectiveness. Preliminary results are presented as performance evaluations to verify the proposed approach's suitability for edge and core cloud computing environments. The proposed orchestrator enables network slicing services and efficiently enhances clients' satisfaction with high availability.
ABSTRACT
Network slicing is a promising technology with which network operators can deploy services as slices with heterogeneous quality of service (QoS) requirements. However, an orchestrator for network operation with efficient slice resource provisioning algorithms is essential. This work takes the standpoint of an Internet service provider (ISP) to design an orchestrator that analyzes the critical influencing factors, namely access control, scheduling, and resource migration, to systematically evolve a sustainable network. The scalability and flexibility of resources are jointly considered. The resource management problem is formulated as a mixed-integer programming (MIP) problem, and a solution approach based on Lagrangian relaxation (LR) is proposed for the orchestrator to make decisions that satisfy high-QoS applications. The orchestrator can investigate the resources required for access control within a cost-efficient resource pool and allocate or migrate resources efficiently in each network slice. For high system utilization, the proposed mechanisms are modeled in a pay-as-you-go manner. Furthermore, the experimental results show that the proposed strategies achieve near-optimal system revenue while meeting the QoS requirements.
ABSTRACT
This paper focuses on the use of an attention-based encoder-decoder model for breathing sound segmentation and detection. This study aims to accurately segment the inspiration and expiration phases of patients with pulmonary diseases using the proposed model. Spectrograms of the lung sound signals and labels for every time segment were used to train the model. The model first encodes the spectrogram and then detects inspiratory or expiratory sounds from the encoded image using an attention-based decoder. With the assistance of the attention mechanism, physicians can make a more precise diagnosis based on the more interpretable outputs. The respiratory sounds used for training and testing were recorded from 22 participants using digital stethoscopes or anti-noise microphone sets. Experimental results showed a high accuracy of 92.006% when 0.5-second time segments and ResNet101 as the encoder were applied. Consistent performance of the proposed method was observed in ten-fold cross-validation experiments.
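The spectrogram input described above is a short-time Fourier transform of the audio signal: the recording is cut into overlapping windowed frames, and each frame's magnitude spectrum becomes one time column. A minimal numpy-only sketch on a synthetic tone; the sampling rate, frame length, and hop size are illustrative assumptions, not the study's settings:

```python
import numpy as np

def spectrogram(x, frame_len=256, hop=128):
    """Magnitude spectrogram via a short-time FFT (minimal, numpy-only)."""
    n_frames = 1 + (len(x) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft of a real frame yields frame_len // 2 + 1 frequency bins.
    return np.abs(np.fft.rfft(frames, axis=1)).T  # (freq_bins, time_frames)

fs = 4000                               # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)         # 2 s of audio
audio = np.sin(2 * np.pi * 440 * t)     # stand-in for a lung-sound signal
spec = spectrogram(audio)
print(spec.shape)                       # (129, 61) for this signal length
```

Each 0.5-second stretch of such a spectrogram would then receive one inspiratory/expiratory/silence label for training the encoder-decoder.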
Subject(s)
Respiration , Respiratory Sounds , Attention , Exhalation , Humans , Machine Learning
ABSTRACT
Recent advances in wireless sensor network (WSN) applications such as the Internet of Things (IoT) have attracted a great deal of attention. Sensor nodes must monitor and cooperatively pass their data, such as temperature, sound, and pressure, through the network under constrained physical or environmental conditions. Quality of Service (QoS) is very sensitive to network delays. When resources are constrained and the number of receivers increases rapidly, how the sensor network can provide good QoS (measured as end-to-end delay) becomes a critical problem. In this paper, a solution to the wireless sensor network multicasting problem is proposed in which a mathematical model is constructed to provide services that accommodate delay fairness for each subscriber. Granting equal consideration to both network link capacity assignment and routing strategies for each multicast group guarantees intra-group and inter-group fairness of end-to-end delay. Minimizing delay while achieving fairness is ultimately accomplished through the Lagrangean Relaxation method and the subgradient optimization technique. Test results indicate that the new system runs with greater effectiveness and efficiency.
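Several of the abstracts above use Lagrangean Relaxation with subgradient optimization. The general scheme is always the same: move the coupling constraint into the objective with a multiplier, solve the now-decomposable subproblem exactly, and update the multiplier along the subgradient of the dual function. A generic sketch on a toy covering problem; the costs, coverage values, and step-size rule below are illustrative and unrelated to any of the papers' actual formulations:

```python
import numpy as np

# Toy covering problem: minimize c·x  subject to  a·x >= d,  x in {0,1}^n.
c = np.array([4.0, 3.0, 5.0, 2.0])  # selection costs (illustrative)
a = np.array([2.0, 1.0, 3.0, 1.0])  # coverage contributions (illustrative)
d = 4.0                             # required total coverage

lam = 0.0          # Lagrange multiplier for the relaxed constraint a·x >= d
best_lb = -np.inf  # best lower bound on the optimal cost found so far
for k in range(1, 101):
    # Relaxed problem L(lam) = min_x (c - lam*a)·x + lam*d decomposes per
    # variable: set x_i = 1 exactly when its reduced cost is negative.
    x = (c - lam * a < 0).astype(float)
    lb = (c - lam * a) @ x + lam * d    # dual value: lower bound on optimum
    best_lb = max(best_lb, lb)
    g = d - a @ x                       # subgradient of L at lam
    lam = max(0.0, lam + (1.0 / k) * g) # diminishing step, project to lam >= 0
print(round(best_lb, 3))
```

For this instance the integer optimum is 7 (select the items with costs 5 and 2), and the best Lagrangean lower bound climbs toward that value as the multiplier oscillates around its optimal setting; in the papers above the same loop runs over many multipliers, one per relaxed constraint.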