Results 1 - 20 of 254
1.
BioData Min ; 17(1): 22, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38997749

ABSTRACT

BACKGROUND: The use of machine learning in medical diagnosis and treatment has grown significantly in recent years with the development of computer-aided diagnosis systems, often based on annotated medical radiology images. However, the lack of large annotated image datasets remains a major obstacle, as the annotation process is time-consuming and costly. This study aims to overcome this challenge by proposing an automated method for annotating a large database of medical radiology images based on their semantic similarity. RESULTS: An automated, unsupervised approach is used to create a large annotated dataset of medical radiology images originating from the Clinical Hospital Centre Rijeka, Croatia. The pipeline is built by data-mining three different types of medical data: images, DICOM metadata and narrative diagnoses. The optimal feature extractors are integrated into a multimodal representation, which is then clustered to label a precursor dataset of 1,337,926 medical images into 50 clusters of visually similar images. The quality of the clusters is assessed by examining their homogeneity and mutual information, taking into account the anatomical region and modality representation. CONCLUSIONS: The results indicate that fusing the embeddings of all three data sources provides the best results for unsupervised clustering of large-scale medical data and leads to the most concise clusters. Hence, this work marks the initial step towards building a much larger and more fine-grained annotated dataset of medical radiology images.
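A minimal sketch of the fusion-and-clustering step described above, assuming the image, DICOM-metadata, and narrative-diagnosis embeddings have already been extracted. The array shapes, the random placeholder tags, and the k-means/scoring choices are illustrative assumptions; only the 50-cluster target and the homogeneity/mutual-information evaluation come from the abstract.

```python
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.cluster import KMeans
from sklearn.metrics import homogeneity_score, normalized_mutual_info_score

# Hypothetical pre-computed embeddings for N images (shapes are illustrative).
N = 2_000
img_emb = np.random.rand(N, 512)    # image feature-extractor output
meta_emb = np.random.rand(N, 64)    # DICOM metadata embedding
text_emb = np.random.rand(N, 256)   # narrative-diagnosis embedding

# Fuse the three modalities by L2-normalising and concatenating them.
fused = np.hstack([normalize(img_emb), normalize(meta_emb), normalize(text_emb)])

# Cluster the fused representation into 50 groups of visually similar images.
labels = KMeans(n_clusters=50, n_init=10, random_state=0).fit_predict(fused)

# Assess cluster quality against known modality / anatomical-region tags.
modality_tags = np.random.randint(0, 5, size=N)  # placeholder ground truth
print("homogeneity:", homogeneity_score(modality_tags, labels))
print("NMI:", normalized_mutual_info_score(modality_tags, labels))
```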

2.
J Family Med Prim Care ; 13(5): 1931-1936, 2024 May.
Article in English | MEDLINE | ID: mdl-38948570

ABSTRACT

Background: Artificial intelligence (AI) created a range of opportunities during the COVID-19 pandemic. Numerous applications surfaced in response to the pandemic, while others proved futile. Objectives: The present study aimed to assess the perception and opportunities of AI used during the COVID-19 pandemic and to explore the perception of medical data analysts about the inclusion of AI in medical education. Material and Methods: The study adopted a mixed-method research design, conducted among medical doctors for the quantitative part and among medical data analysts for the qualitative interviews. Results: Nearly 64.8% of professionals were working in high COVID-19 patient-load settings and had significantly higher acceptance of AI tools than others (P < 0.05). Learning barriers, such as engaging with new skills and working under a non-medical hierarchy, led to dissatisfaction among medical data analysts, although their work gained widespread recognition after the COVID-19 pandemic. Conclusion: Although most professionals are aware that public health emergencies place significant strain on doctors, most still work in extremely high-caseload settings that demand solutions. AI applications are still not being integrated into medicine as fast as technology has been advancing. Sensitization workshops can be conducted among specialists to develop interest, encouraging them to identify problem statements in their fields and, together with AI experts, create AI-enabled algorithms to address them. A lack of educational opportunities about AI in the formal medical curriculum was identified.

3.
Sci Rep ; 14(1): 16069, 2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38992054

ABSTRACT

This work proposes a Blockchain-enabled Organ Matching System (BOMS) designed to manage the process of matching, storing, and sharing information. Biological factors are incorporated into matching, and the cross-matching process is implemented in smart contracts. Privacy is guaranteed by using patient-associated blockchain addresses, without transmitting or using patients' personal records in the matching process. The matching algorithm, implemented as a smart contract, is verifiable by any party. Clinical records, process updates, and matching results are also stored on the blockchain, providing tamper resistance for recipients' records and the recipient waiting queue. The system is also capable of handling cases in which a donor has no immediately compatible recipient. The system is implemented on the Ethereum blockchain, and several scenarios were tested. The performance of the proposed system is compared with other existing organ donation systems, and it outperforms existing blockchain-based organ matching systems. BOMS was tested for compatibility with public, private, and consortium blockchain networks, for security vulnerabilities, and for cross-matching efficiency. The implementation code is available online.
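The abstract does not give the matching rules, so the following is only a toy off-chain sketch of the compatibility-plus-queue idea, using ABO blood-group compatibility, waiting time, and blockchain addresses in place of personal records. The rules, field names, and addresses are invented placeholders; the actual BOMS logic lives in Ethereum smart contracts.

```python
# Toy off-chain sketch of compatibility-based matching; the real system
# implements this logic in Ethereum smart contracts (details assumed here).
ABO_COMPATIBLE = {  # donor blood group -> recipient groups that can receive it
    "O": {"O", "A", "B", "AB"},
    "A": {"A", "AB"},
    "B": {"B", "AB"},
    "AB": {"AB"},
}

def match_donor(donor_blood, waiting_queue):
    """Return the blockchain address of the longest-waiting compatible
    recipient, or None if no compatible recipient exists (the donor is
    then kept pending, as BOMS does for donors without immediate matches)."""
    candidates = [r for r in waiting_queue
                  if r["blood"] in ABO_COMPATIBLE[donor_blood]]
    if not candidates:
        return None
    best = max(candidates, key=lambda r: r["days_waiting"])
    return best["address"]

queue = [
    {"address": "0xRecipient1", "blood": "A", "days_waiting": 120},
    {"address": "0xRecipient2", "blood": "AB", "days_waiting": 340},
]
print(match_donor("O", queue))  # -> 0xRecipient2
```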


Subject(s)
Algorithms , Blockchain , Tissue and Organ Procurement , Humans , Tissue and Organ Procurement/methods , Tissue Donors , Computer Security
4.
Artif Intell Med ; 154: 102925, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38968921

ABSTRACT

In this work, we present CodeAR, a medical time series generative model for electronic health record (EHR) synthesis. CodeAR employs autoregressive modeling on discrete tokens obtained using a vector quantized-variational autoencoder (VQ-VAE), which addresses key challenges of accurate distribution modeling and patient privacy preservation in the medical domain. The proposed model is trained with next-token prediction rather than as a regression problem for more accurate distribution modeling, where the autoregressive property of CodeAR is useful for capturing the inherent causality in time series data. In addition, the compressive property of the VQ-VAE prevents CodeAR from memorizing the original training data, which helps preserve patient privacy. Experimental results demonstrate that CodeAR outperforms baseline autoregressive and GAN-based models in terms of maximum mean discrepancy (MMD) and Train on Synthetic, Test on Real tests. Our results highlight the effectiveness of autoregressive modeling on discrete tokens, the utility of CodeAR in causal modeling, and its robustness against data memorization.
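A compact illustration of the second stage described above: once a VQ-VAE has turned each EHR time series into a sequence of discrete code indices, an autoregressive model is trained with plain next-token cross-entropy. The GRU backbone, vocabulary size, and dimensions below are assumptions for illustration, not the CodeAR architecture.

```python
import torch
import torch.nn as nn

VOCAB = 512       # number of VQ-VAE codebook entries (assumed)
EMB, HID = 128, 256

class NextTokenModel(nn.Module):
    """Autoregressive model over discrete VQ-VAE codes (illustrative GRU)."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.head = nn.Linear(HID, VOCAB)

    def forward(self, tokens):            # tokens: (batch, seq_len)
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)               # logits: (batch, seq_len, VOCAB)

model = NextTokenModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

codes = torch.randint(0, VOCAB, (8, 64))  # stand-in for VQ-VAE token sequences
logits = model(codes[:, :-1])             # predict token t+1 from tokens <= t
loss = loss_fn(logits.reshape(-1, VOCAB), codes[:, 1:].reshape(-1))
loss.backward(); opt.step()
```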

5.
Int J Med Inform ; 190: 105545, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-39018708

ABSTRACT

INTRODUCTION: German and international research networks apply different approaches to patient consent. So far, it has been time-consuming to find out to what extent data from these networks can be used for a specific research project. To make the contents of consents queryable, we aimed for a permission-based (opt-in) approach that can map both the granting and the withdrawal of consent contents and make them queryable beyond project boundaries. MATERIALS AND METHODS: The current state of research was analysed in terms of approach and reusability. Selected process models for defining consent policies were abstracted in a next step. On this basis, a standardised semantic terminology for the description of consent policies was developed and initially agreed with experts. In a final step, the resulting code was evaluated with regard to different aspects of applicability. RESULTS: A first, extendable version of a Semantic Consent Code (SCC) based on three axes (CLASS, ACTION, PURPOSE) was developed, consolidated and published. The added value achieved by the SCC was illustrated using real consents from large national research associations (Medical Informatics Initiative and NUM NAPKON/NUKLEUS). The applicability of the SCC was successfully evaluated in terms of the manual semantic mapping of consents by briefly trained personnel and the automated interpretability of consent policies according to the SCC (and vice versa). In addition, a concept for using the SCC to simplify consent queries in heterogeneous research scenarios was presented. CONCLUSIONS: The Semantic Consent Code has already successfully undergone initial evaluations. The published 3-axis SCC is essential preliminary work towards standardising initially diverse consent texts and contents and can be iteratively extended in terms of both content and technical additions; it should be extended in cooperation with the potential user community.
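The paper defines the SCC only as a 3-axis code (CLASS, ACTION, PURPOSE); the policy values and the query below are invented placeholders meant to show how consent and withdrawal statements could be represented and queried across projects under such a scheme.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsentPolicy:
    """One SCC statement: a permission (or withdrawal) along the three axes."""
    data_class: str   # CLASS axis, e.g. "IDAT" (placeholder value)
    action: str       # ACTION axis, e.g. "STORE", "TRANSFER" (placeholders)
    purpose: str      # PURPOSE axis, e.g. "RESEARCH" (placeholder)
    permitted: bool   # True = consent given, False = withdrawn

# Hypothetical consents mapped to the SCC by two different studies.
policies = {
    "patient-001": [ConsentPolicy("IDAT", "TRANSFER", "RESEARCH", True),
                    ConsentPolicy("BIOSAMPLE", "STORE", "RESEARCH", False)],
    "patient-002": [ConsentPolicy("IDAT", "TRANSFER", "RESEARCH", True)],
}

def permits(patient, data_class, action, purpose):
    """Cross-project query: does this patient's consent allow the request?"""
    return any(p.permitted and (p.data_class, p.action, p.purpose)
               == (data_class, action, purpose) for p in policies[patient])

print(permits("patient-001", "IDAT", "TRANSFER", "RESEARCH"))     # True
print(permits("patient-001", "BIOSAMPLE", "STORE", "RESEARCH"))   # False
```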

6.
Med Biol Eng Comput ; 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38874706

ABSTRACT

The work elucidates the importance of accurate Parkinson's disease classification within medical diagnostics and introduces a novel framework for achieving this goal. Specifically, the study focuses on enhancing disease identification accuracy using boosting methods. A standout contribution of this work lies in the use of a light gradient boosting machine (LGBM) coupled with hyperparameter tuning through grid search optimization (GSO) on a Parkinson's disease dataset derived from speech recording signals. In addition, the Synthetic Minority Over-sampling Technique (SMOTE) is employed as a pre-processing step to balance the dataset, enhancing the robustness and reliability of the analysis. The datasets employed in this work include both gender-specific and combined cases, using several distinctive feature subsets including baseline, Mel-frequency cepstral coefficients (MFCC), time-frequency, wavelet transform (WT), vocal fold, and tunable-Q-factor wavelet transform (TQWT) features. Comparative analyses against state-of-the-art boosting methods, such as AdaBoost and XGBoost, reveal the superior performance of our proposed approach across diverse datasets and metrics. Notably, on the male cohort dataset, our method achieves exceptional results, demonstrating an accuracy of 0.98, precision of 1.00, sensitivity of 0.97, F1-score of 0.98, and specificity of 1.00 when utilizing all features with GSO-LGBM. In comparison to AdaBoost and XGBoost, the proposed framework utilizing LGBM demonstrates superior accuracy, achieving an average improvement of 5% in classification accuracy across all feature subsets and datasets. These findings underscore the potential of the proposed methodology to enhance disease identification accuracy and provide valuable insights for further advancements in medical diagnostics.
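A condensed sketch of the described pipeline: SMOTE balancing followed by grid-search tuning of a LightGBM classifier. The synthetic feature matrix, grid values, and scoring choice are illustrative assumptions, not the exact settings from the study.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Stand-in for the speech-feature matrix (MFCC, TQWT, etc.) and labels.
X, y = make_classification(n_samples=600, n_features=50, weights=[0.75],
                           random_state=0)

pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),        # balance the minority class
    ("lgbm", LGBMClassifier(random_state=0)),
])

# Grid-search optimisation (GSO) over a few LightGBM hyperparameters.
grid = {
    "lgbm__n_estimators": [100, 300],
    "lgbm__learning_rate": [0.05, 0.1],
    "lgbm__num_leaves": [15, 31],
}
search = GridSearchCV(pipe, grid, scoring="f1",
                      cv=StratifiedKFold(5, shuffle=True, random_state=0))
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Putting SMOTE inside the imblearn pipeline ensures oversampling is applied only to the training folds during cross-validation, which keeps the reported scores honest.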

7.
Heliyon ; 10(10): e31406, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38826742

ABSTRACT

As healthcare systems transition into an era dominated by quantum technologies, the need to fortify cybersecurity measures to protect sensitive medical data becomes increasingly imperative. This paper navigates the intricate landscape of post-quantum cryptographic approaches and emerging threats specific to the healthcare sector. Delving into encryption protocols such as lattice-based, code-based, hash-based, and multivariate polynomial cryptography, the paper addresses challenges in adoption and compatibility within healthcare systems. The exploration of potential threats posed by quantum attacks and vulnerabilities in existing encryption standards underscores the urgency of a fundamental shift in healthcare data security. The paper provides a detailed roadmap for implementing post-quantum cybersecurity solutions, considering the unique challenges faced by healthcare organizations, including integration issues, budget constraints, and the need for specialized training. Finally, the paper concludes with an emphasis on the importance of timely adoption of post-quantum strategies to ensure the resilience of healthcare data in the face of evolving threats. This roadmap not only offers practical insights into securing medical data but also serves as a guide for future directions in the dynamic landscape of post-quantum healthcare cybersecurity.

8.
Digit Health ; 10: 20552076241259871, 2024.
Article in English | MEDLINE | ID: mdl-38832103

ABSTRACT

Objective: The significance of big data is increasingly acknowledged across all sectors, including medicine. Moreover, the trend of data trading is on the rise, particularly the exchange of other data for medical data to rejuvenate the medical industry. This study aimed to discern the facilitating factors of healthcare data trade. Methods: We assessed five medical data market platforms in October 2022, based on three criteria: (a) clarity in articulating the data for sale; (b) transparency in specifying the data costs; and (c) explicit indication that payment grants data access. This helped identify the traded medical data types. Additionally, we anonymously surveyed 43 representatives from medical device companies about their demand for medical data trading, achieving a response rate of 66%. Results: Of the medical data traded on these platforms, 93.34% was structured, while 5.66% was unstructured, indicating an imbalance. Although there was a higher demand for structured medical data, there was also interest in purchasing unstructured medical data. Conclusion: Unstructured big data are crucial for medical device development, fueling the demand for trading such data. Many stakeholders view the data market as essential and are willing to procure medical data. Consequently, medical device companies will need methods to acquire unstructured medical data for developing innovative and enhanced medical devices.

9.
Cancer Res Treat ; 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38853539

ABSTRACT

Purpose: In 2024, medical researchers in the Republic of Korea were invited to amend the health and medical data utilization guidelines (Government Publications Registration Number: 11-1352000-0052828-14). This study aimed to show the overall impact of the guideline revision, with a focus on clinical genomic data. Materials and Methods: This study amended the pseudonymization of genomic data defined in the previous version through a joint study led by the Ministry of Health and Welfare, the Korea Health Information Service, and the Korea Genome Organization. To develop the previous version, we held three conferences with four main medical research institutes and seven academic societies. We conducted two surveys targeting specialist genome experts in academia, industry, and institutes. Results: We found that cases of pseudonymization in the application of genome data were rare and that there was ambiguity in the terminology used in the previous version of the guidelines. Most experts (roughly 90% or more) agreed that the 'reserved' condition should be eliminated to make genomic data available after pseudonymization. In this study, the scope of genomic data was defined as clinical next-generation sequencing data, including FASTQ, BAM/SAM, VCF, and medical records. Pseudonymization targets genomic sequences and metadata embedding specific elements, such as germline mutations, short tandem repeats, single-nucleotide polymorphisms, and identifiable data (for example, IDs or environmental values). Expression data generated from multi-omics can be used without pseudonymization. Conclusion: This amendment will not only enhance the safe use of healthcare data but also promote advancements in disease prevention, diagnosis, and treatment.
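As a small illustration of the kind of pseudonymization the amended guideline targets (identifiers and embedded metadata, not the expression data itself), the sketch below replaces sample IDs with keyed hashes. The salt handling, ID format, and VCF-header example are assumptions for illustration, not part of the guideline.

```python
import hashlib
import hmac

SECRET_SALT = b"institution-held secret"  # assumed to be kept by a trusted party

def pseudonymize_id(sample_id: str) -> str:
    """Replace an identifying sample ID with a keyed, irreversible hash."""
    digest = hmac.new(SECRET_SALT, sample_id.encode(), hashlib.sha256)
    return "PSN-" + digest.hexdigest()[:16]

# Example: rewrite the sample column names in a VCF header line.
header = "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\tFORMAT\tHOSP-12345"
cols = header.split("\t")
cols[9:] = [pseudonymize_id(c) for c in cols[9:]]   # sample columns start at 10
print("\t".join(cols))
```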

10.
J Med Internet Res ; 26: e56614, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38819879

ABSTRACT

BACKGROUND: Efficient data exchange and health care interoperability are impeded by medical records often being in nonstandardized or unstructured natural language format. Advanced language models, such as large language models (LLMs), may help overcome current challenges in information exchange. OBJECTIVE: This study aims to evaluate the capability of LLMs in transforming and transferring health care data to support interoperability. METHODS: Using data from the Medical Information Mart for Intensive Care III and UK Biobank, the study conducted 3 experiments. Experiment 1 assessed the accuracy of transforming structured laboratory results into unstructured format. Experiment 2 explored the conversion of diagnostic codes between the coding frameworks of ICD-9-CM (International Classification of Diseases, Ninth Revision, Clinical Modification) and Systematized Nomenclature of Medicine Clinical Terms (SNOMED-CT) using a traditional mapping table and a text-based approach facilitated by the LLM ChatGPT. Experiment 3 focused on extracting targeted information from unstructured records that included comprehensive clinical information (discharge notes). RESULTS: The text-based approach showed a high conversion accuracy in transforming laboratory results (experiment 1) and an enhanced consistency in diagnostic code conversion, particularly for frequently used diagnostic names, compared with the traditional mapping approach (experiment 2). In experiment 3, the LLM showed a positive predictive value of 87.2% in extracting generic drug names. CONCLUSIONS: This study highlighted the potential role of LLMs in significantly improving health care data interoperability, demonstrated by their high accuracy and efficiency in data transformation and exchange. LLMs hold vast potential for enhancing medical data exchange without complex standardization for medical terms and data structure.
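A hedged sketch of the text-based conversion idea from experiment 2: prompting a general-purpose LLM to map an ICD-9-CM description to a SNOMED CT concept. The prompt wording, model name, and output handling are illustrative assumptions; the study's exact setup with ChatGPT is not reproduced here.

```python
from openai import OpenAI  # requires the `openai` package and an API key

client = OpenAI()

def icd9_to_snomed(icd9_code: str, icd9_name: str) -> str:
    """Ask an LLM for the closest SNOMED CT concept (illustrative prompt)."""
    prompt = (f"ICD-9-CM code {icd9_code} ({icd9_name}). "
              "Return the single closest SNOMED CT concept as 'ID | term'.")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",          # assumed model; the study used ChatGPT
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()

# Output must still be validated against a terminology server before use.
print(icd9_to_snomed("428.0", "Congestive heart failure, unspecified"))
```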


Subject(s)
Health Information Exchange , Humans , Health Information Exchange/standards , Health Information Interoperability , Electronic Health Records , Natural Language Processing , Systematized Nomenclature of Medicine
11.
Heliyon ; 10(9): e29861, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38707268

ABSTRACT

Probability distributions play a pivotal role in modeling real-life data across many fields, and a series of probability distributions have been introduced and applied for this purpose. This paper contributes a new method for modeling continuous data sets. The proposed family is called the exponent power sine-G family of distributions. Based on the exponent power sine-G method, a new model, the exponent power sine-Weibull model, is studied. Several mathematical properties, such as the quantile function, the identifiability property, and the rth moment, are derived. For the exponent power sine-G method, the maximum likelihood estimators are obtained, and simulation studies are presented. Finally, the suitability of the exponent power sine-Weibull model is shown with two applications from the healthcare sector. Based on seven evaluation criteria, it is demonstrated that the proposed model is the best of the competing distributions for analyzing the considered healthcare data.
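The exact form of the exponent power sine-Weibull model is not given in the abstract, so the sketch below only illustrates the general workflow it relies on: fit a candidate lifetime distribution by maximum likelihood and compare models with information criteria. A plain two-parameter Weibull is used as a stand-in, and the data are synthetic.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.weibull(1.7, size=200) * 30.0   # stand-in for a healthcare data set

def neg_loglik(params):
    """Negative log-likelihood of a two-parameter Weibull (stand-in model)."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    return -np.sum(stats.weibull_min.logpdf(data, c=shape, scale=scale))

res = minimize(neg_loglik, x0=[1.0, np.mean(data)], method="Nelder-Mead")
k, n = len(res.x), len(data)
aic = 2 * k + 2 * res.fun                  # two of the usual evaluation criteria
bic = k * np.log(n) + 2 * res.fun
print("MLEs:", res.x, "AIC:", round(aic, 2), "BIC:", round(bic, 2))
```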

12.
Article in German | MEDLINE | ID: mdl-38748234

ABSTRACT

In order to achieve the goals of the Medical Informatics Initiative (MII), staff with skills in the field of medical informatics and data science are required. Each consortium has established training activities, and cross-consortium activities have emerged. This article describes the concepts, implemented programs, and experiences in the consortia. Fifty-one new professorships have been established and 10 new study programs have been created: one bachelor's program, six consecutive master's programs, and three part-time master's programs. In addition, learning and training opportunities can be used by all MII partners, and certification and recognition opportunities have been created. The educational offers are aimed at target groups with a background in computer science, medicine, nursing, bioinformatics, biology, natural science, and data science. Additional qualifications for physicians in computer science and for computer scientists in medicine appear particularly important; they can lead to higher quality in software development and better support of treatment processes by application systems. Digital learning methods were important in all consortia. They offer flexibility for cross-location and interprofessional training, enabling learning at an individual pace and exchange between professional groups. The success of the MII depends largely on society's acceptance of the multiple use of medical data in both healthcare and research. The information required for this is provided by the MII's public relations work. There is also an enormous need in society for medical and digital literacy.


Subject(s)
Curriculum , Medical Informatics , Humans , Computer Security/standards , Electronic Health Records/standards , Germany , Medical Informatics/education , Professional Competence/standards
13.
Med Decis Making ; : 272989X241248612, 2024 May 13.
Article in English | MEDLINE | ID: mdl-38738479

ABSTRACT

BACKGROUND: Medical diagnosis in practice connects to research through continuous feedback loops: studies of diagnosed cases shape our understanding of disease, which shapes future diagnostic practice. Without accounting for an imperfect and complex diagnostic process in which some cases are more likely to be diagnosed correctly (or diagnosed at all), the feedback loop can inadvertently exacerbate future diagnostic errors and biases. FRAMEWORK: A feedback loop failure occurs if misleading evidence about disease etiology encourages systematic errors that self-perpetuate, compromising future diagnoses and patient care. This article defines scenarios for feedback loop failure in medical diagnosis. DESIGN: Through simulated cases, we characterize how disease incidence, presentation, and risk factors can be misunderstood when observational data are summarized without regard to the biases arising from diagnostic error. A fourth simulation extends to a progressive disease. RESULTS: When severe cases of a disease are diagnosed more readily, less severe cases go undiagnosed, increasingly leading to underestimation of the prevalence and heterogeneity of the disease presentation. Observed differences in incidence and symptoms between demographic groups may be driven by differences in risk, presentation, the diagnostic process itself, or a combination of these. We suggest how perceptions about risk factors and representativeness may drive the likelihood of diagnosis. Differing diagnosis rates between patient groups can feed back into increasingly greater diagnostic errors and disparities in the timing of diagnosis and treatment. CONCLUSIONS: A feedback loop between past data and future medical practice may seem obviously beneficial. However, under plausible scenarios, poorly implemented feedback loops can degrade care. Direct summaries from observational data based on diagnosed individuals may be misleading, especially concerning those symptoms and risk factors that influence the diagnostic process itself. HIGHLIGHTS: Current evidence about a disease can (and should) influence the diagnostic process. A feedback loop failure may occur if biased "evidence" encourages diagnostic errors, leading to future errors in the evidence base. When diagnostic accuracy varies for mild versus severe cases or between demographic groups, incorrect conclusions about disease prevalence and presentation will result without specifically accounting for such variability. Use of demographic characteristics in the diagnostic process should be done with careful justification, in particular avoiding potential cognitive biases and overcorrection.
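A toy simulation in the spirit of the paper's simulated cases (all probabilities below are invented): when mild cases are diagnosed less often than severe ones, the observed prevalence is deflated and the diagnosed population looks more severe than the true case mix.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
true_prevalence = 0.10

has_disease = rng.random(N) < true_prevalence
severe = has_disease & (rng.random(N) < 0.30)        # 30% of cases are severe

# Diagnosis probability depends on severity (assumed values).
p_dx = np.where(severe, 0.90, 0.40)
diagnosed = has_disease & (rng.random(N) < p_dx)

obs_prevalence = diagnosed.mean()
obs_severe_share = severe[diagnosed].mean()
print(f"true prevalence {true_prevalence:.2%}, observed {obs_prevalence:.2%}")
print(f"true severe share {severe[has_disease].mean():.2%}, "
      f"observed among diagnosed {obs_severe_share:.2%}")
```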

14.
Curr Med Sci ; 44(2): 273-280, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38632143

ABSTRACT

The global incidence of infectious diseases has increased in recent years, posing a significant threat to human health. Hospitals typically serve as frontline institutions for detecting infectious diseases. However, accurately identifying warning signals of infectious diseases in a timely manner, especially emerging infectious diseases, can be challenging. Consequently, there is a pressing need to integrate treatment and disease prevention data to conduct comprehensive analyses aimed at preventing and controlling infectious diseases within hospitals. This paper examines the role of medical data in the early identification of infectious diseases, explores early warning technologies for infectious disease recognition, and assesses monitoring and early warning mechanisms for infectious diseases. We propose that hospitals adopt novel multidimensional early warning technologies to mine and analyze medical data from various systems, in compliance with national strategies to integrate clinical treatment and disease prevention. Furthermore, hospitals should establish institution-specific, clinical-based early warning models for infectious diseases to actively monitor early signals and enhance preparedness for infectious disease prevention and control.


Subject(s)
Communicable Diseases , Disease Outbreaks , Humans , Disease Outbreaks/prevention & control , Communicable Diseases/diagnosis , Communicable Diseases/epidemiology , Communicable Diseases/therapy , Hospitals
15.
J Biopharm Stat ; : 1-12, 2024 Apr 14.
Article in English | MEDLINE | ID: mdl-38615346

ABSTRACT

The randomization design employed to gather the data is the basis for the exact distributions of the permutation tests. One design that is frequently used in clinical trials to force balance and remove experimental bias is the truncated binomial design. The exact distribution of the weighted log-rank class of tests for censored cluster medical data under the truncated binomial design is examined in this paper. For p-values in this class, a double saddlepoint approximation is developed under the truncated binomial design. With right-censored cluster data, the saddlepoint approximation's speed and accuracy over the normal asymptotic approximation make it easier to invert the weighted log-rank tests and find nominal 95% confidence intervals for the treatment effect.
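For orientation only, the single-saddlepoint Lugannani-Rice tail formula that such approximations build on is shown below; the paper develops the double (conditional) analogue for the truncated binomial design, which is not reproduced here.

```latex
% Lugannani-Rice single-saddlepoint tail approximation (orientation only):
P(X \ge x) \;\approx\; 1 - \Phi(\hat{w}) + \phi(\hat{w})\left(\frac{1}{\hat{u}} - \frac{1}{\hat{w}}\right),
\qquad
\hat{w} = \operatorname{sgn}(\hat{s})\sqrt{2\,[\hat{s}x - K(\hat{s})]},
\quad
\hat{u} = \hat{s}\sqrt{K''(\hat{s})},
% where K is the cumulant generating function of the statistic and the
% saddlepoint \hat{s} solves K'(\hat{s}) = x.
```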

16.
Front Comput Neurosci ; 18: 1356447, 2024.
Article in English | MEDLINE | ID: mdl-38404511

ABSTRACT

Colorectal polyps are an important early manifestation of colorectal cancer, and their detection is significant for its prevention. Although timely detection and manual intervention can reduce the chance of polyps becoming cancerous, most existing methods ignore the uncertainties and location problems of polyps, causing degraded detection performance. To address these problems, in this paper we propose a novel colorectal image analysis method for polyp diagnosis via PAM-Net. Specifically, a parallel attention module is designed to enhance the analysis of colorectal polyp images and reduce uncertainty in polyp detection. In addition, our method introduces the GWD loss to enhance the accuracy of polyp diagnosis from the perspective of polyp location. Extensive experimental results demonstrate the effectiveness of the proposed method compared with SOTA baselines. This study enhances polyp detection accuracy and contributes to polyp detection in clinical medicine.
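The abstract names a parallel attention module without giving its design, so the block below is only a plausible sketch (channel and spatial attention computed in parallel and combined); the real PAM-Net architecture and the GWD loss are not reproduced here.

```python
import torch
import torch.nn as nn

class ParallelAttention(nn.Module):
    """Channel and spatial attention applied in parallel (illustrative only)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(              # squeeze-and-excitation style
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(              # single-channel spatial mask
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.channel(x) + x * self.spatial(x)

feat = torch.randn(2, 64, 56, 56)          # feature map from a polyp-image encoder
print(ParallelAttention(64)(feat).shape)   # torch.Size([2, 64, 56, 56])
```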

17.
Diagnostics (Basel) ; 14(4)2024 Feb 11.
Article in English | MEDLINE | ID: mdl-38396430

ABSTRACT

In the domain of AI-driven healthcare, deep learning models have markedly advanced pneumonia diagnosis through X-ray image analysis, marking a significant stride in the efficacy of medical decision systems. This paper presents a novel approach utilizing a deep convolutional neural network that effectively amalgamates the strengths of EfficientNetB0 and DenseNet121, enhanced by a suite of attention mechanisms for refined pneumonia image classification. Leveraging pre-trained models, our network employs multi-head self-attention modules for meticulous feature extraction from X-ray images. The model's integration and processing efficiency are further augmented by a channel-attention-based feature fusion strategy, complemented by a residual block and an attention-augmented feature enhancement and dynamic pooling strategy. The dataset, a comprehensive collection of chest X-ray images from both healthy individuals and patients with pneumonia, serves as the foundation for this research. This study delves into the algorithms, architectural details, and operational intricacies of the proposed model. The empirical outcomes are noteworthy, with performance marked by an accuracy of 95.19%, a precision of 98.38%, a recall of 93.84%, an F1 score of 96.06%, a specificity of 97.43%, and an AUC of 0.9564 on the test dataset. These results not only affirm the model's high diagnostic accuracy but also highlight its promising potential for real-world clinical deployment.

18.
Heliyon ; 10(1): e23575, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38169943

ABSTRACT

In the era of big data, the Medical Internet of Things (MIoT) serves as a critical technology for modern medical data collection. Through medical devices and sensors, it enables real-time collection of large amounts of patients' physiological parameters and health data. However, these data are often generated in a high-speed, large-scale, and diverse manner and must be integrated with traditional medical systems, which further exacerbates the scattering and heterogeneity of medical data. Additionally, the privacy and security requirements for the devices and sensor data involved in the MIoT are more stringent. Therefore, when designing a medical data sharing mechanism, its data privacy protection capability must be fully considered. This paper proposes an alliance-chain medical data sharing mechanism based on a dual-chain structure to achieve secure sharing of medical data among entities such as medical institutions, research institutions, and cloud privacy centers, while providing privacy protection functions that balance privacy protection capability with data accessibility. First, ciphertext-policy attribute-based encryption combined with zero-knowledge succinct non-interactive arguments is used, together with the data sharing structure of the alliance chain, to ensure the integrity and privacy protection of medical data. Second, the approach employs certificate-based signing and proxy re-encryption, ensuring that entities can decrypt and verify medical data at the cloud privacy center, thereby addressing the confidentiality concerns surrounding medical data. Third, an efficient and secure identity-based encryption protocol is used to ensure the legitimacy of user identities and improve the security of medical data. Finally, theoretical and practical performance analysis shows that the mechanism is feasible and efficient compared with other existing mechanisms.

19.
Big Data ; 12(2): 141-154, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37074400

ABSTRACT

The reliability of medical data organization and transmission has improved in recent years with advances in information and communication technologies. The growth of digital communication and sharing media imposes the need to optimize the accessibility and transmission of sensitive medical data to end users. In this article, the Preemptive Information Transmission Model (PITM) is introduced for improving the promptness of medical data delivery. The model is designed to require the least communication within an epidemic region while keeping information seamlessly available. The proposed model makes use of a noncyclic connection procedure and preemptive forwarding inside and outside the epidemic region. The former maximizes connections without replication, ensuring better availability of the edge nodes; connection replications are reduced using pruning-tree classifiers based on communication time and a delivery balancing factor. The latter is responsible for reliable forwarding of the acquired data using a conditional selection of the infrastructure units. Together, both processes of PITM improve the delivery of observed medical data through better transmissions, shorter communication time, and fewer delays.


Subject(s)
Communication , Delivery of Health Care , Reproducibility of Results
20.
Phys Med Biol ; 69(1)2023 Dec 26.
Article in English | MEDLINE | ID: mdl-38052076

ABSTRACT

Fusion of multimodal medical data provides multifaceted, disease-relevant information for diagnosis or prognosis prediction modeling. Traditional fusion strategies such as feature concatenation often fail to learn hidden complementary and discriminative manifestations from high-dimensional multimodal data. To this end, we proposed a methodology for integrating multimodal medical data by matching their moments in a latent space, where the hidden, shared information of multimodal data is gradually learned by optimization with multiple feature collinearity and correlation constraints. We first obtained the multimodal hidden representations by learning mappings between the original domain and the shared latent space. Within this shared space, we utilized several relational regularizations, including data attribute preservation, feature collinearity, and feature-task correlation, to encourage learning of the underlying associations inherent in multimodal data. The fused multimodal latent features were finally fed to a logistic regression classifier for diagnostic prediction. Extensive evaluations on three independent clinical datasets demonstrated the effectiveness of the proposed method in fusing multimodal data for medical prediction modeling.
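The regularized moment-matching objective is not specified in the abstract; as a simplified stand-in, the sketch below projects two modalities into a shared latent space with canonical correlation analysis (which also maximizes cross-modal correlation) and feeds the fused latent features to a logistic regression classifier. Data, dimensions, and the CCA substitution are all assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
labels = rng.integers(0, 2, size=n)
modality_a = rng.normal(size=(n, 40)) + labels[:, None] * 0.5  # e.g. imaging
modality_b = rng.normal(size=(n, 25)) + labels[:, None] * 0.5  # e.g. clinical

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Learn mappings into a shared 10-dimensional latent space on the training set.
cca = CCA(n_components=10).fit(modality_a[idx_tr], modality_b[idx_tr])

def fuse(a, b):
    """Concatenate the two projected (latent) views into one feature vector."""
    return np.hstack(cca.transform(a, b))

clf = LogisticRegression(max_iter=1000)
clf.fit(fuse(modality_a[idx_tr], modality_b[idx_tr]), labels[idx_tr])
print(clf.score(fuse(modality_a[idx_te], modality_b[idx_te]), labels[idx_te]))
```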


Subject(s)
Machine Learning , Medical Informatics