Results 1-20 of 11,039
1.
Curr Cardiol Rev ; 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39092649

ABSTRACT

Machine Learning (ML) is increasingly being explored to improve the detection and accurate diagnosis of heart pathologies, driven by the need for more efficient diagnostics and faster delivery of treatment. Several institutions have assessed algorithms aimed at advancing our understanding of atrial fibrillation (AF), a common form of sustained arrhythmia; in practice, artificial intelligence is now applied to electrocardiogram (ECG) data, typically extracted from large patient databases and then used to train and test neural-network-based algorithms. Machine learning has detected atrial fibrillation more accurately than clinical experts, and if adopted in clinical practice it could support earlier diagnosis and management of the condition and thereby reduce its thromboembolic complications. This text reviews the application of machine learning to the analysis and detection of atrial fibrillation, compares the reported outcomes (sensitivity, specificity, and accuracy), and summarizes the framework and methods of the studies conducted.
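To make the pipeline described above concrete, the following is a minimal, hedged sketch (not taken from any study in the review) of a 1D convolutional network that classifies fixed-length single-lead ECG segments as AF or sinus rhythm; the segment length, sampling rate, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ECGAFClassifier(nn.Module):
    """Minimal 1D CNN for binary AF-vs-sinus-rhythm classification.

    Assumes fixed-length, single-lead ECG segments (e.g. 10 s at 300 Hz
    -> 3000 samples); all sizes here are illustrative, not taken from
    any specific study covered by the review.
    """

    def __init__(self, segment_len: int = 3000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, 2)  # logits: [sinus rhythm, AF]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, segment_len)
        return self.classifier(self.features(x).squeeze(-1))

model = ECGAFClassifier()
logits = model(torch.randn(8, 1, 3000))  # 8 dummy ECG segments
```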

2.
Neural Netw ; 179: 106567, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39089155

ABSTRACT

While Graph Neural Networks (GNNs) have demonstrated their effectiveness in processing non-Euclidean structured data, the neighborhood fetching of GNNs is time-consuming and computationally intensive, making them difficult to deploy in low-latency industrial applications. To address the issue, a feasible solution is graph knowledge distillation (KD), which can learn high-performance student Multi-layer Perceptrons (MLPs) to replace GNNs by mimicking the superior output of teacher GNNs. However, state-of-the-art graph knowledge distillation methods are mainly based on distilling deep features from intermediate hidden layers, which leaves the significance of logit-layer distillation greatly overlooked. To provide a novel viewpoint for studying logits-based KD methods, we introduce the idea of decoupling into graph knowledge distillation. Specifically, we first reformulate the classical graph knowledge distillation loss into two parts, i.e., the target class graph distillation (TCGD) loss and the non-target class graph distillation (NCGD) loss. Next, we decouple the negative correlation between the GNN's prediction confidence and the NCGD loss, and eliminate the fixed weight between TCGD and NCGD. We name this logits-based method Decoupled Graph Knowledge Distillation (DGKD). It can flexibly adjust the weights of TCGD and NCGD for different data samples, thereby improving the prediction accuracy of the student MLP. Extensive experiments conducted on public benchmark datasets show the effectiveness of our method. Additionally, DGKD can be incorporated into any existing graph knowledge distillation framework as a plug-and-play loss function, further improving distillation performance. The code is available at https://github.com/xsk160/DGKD.
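As a rough illustration of the logits-level decoupling described above, the sketch below splits a classical softened-logits distillation loss into a target-class term and a non-target-class term with independent weights. The per-sample adaptive weighting that DGKD derives from the teacher GNN's confidence is not reproduced, so this is a generic decoupled-KD sketch rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def decoupled_kd_loss(student_logits, teacher_logits, labels,
                      alpha=1.0, beta=1.0, T=1.0):
    """Hedged sketch of a decoupled logits-distillation loss.

    Splits the classical KD loss into a target-class term (TCGD-like)
    and a non-target-class term (NCGD-like) with independent weights
    alpha and beta; DGKD additionally adapts these weights per sample,
    which is not reproduced here.
    """
    B, C = student_logits.shape
    mask = F.one_hot(labels, num_classes=C).bool()

    p_s = F.softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)

    # Target vs. non-target probability mass (a binary distribution per sample).
    bin_s = torch.stack([p_s[mask], 1.0 - p_s[mask]], dim=1).clamp_min(1e-12)
    bin_t = torch.stack([p_t[mask], 1.0 - p_t[mask]], dim=1)
    tcgd = F.kl_div(bin_s.log(), bin_t, reduction="batchmean") * T * T

    # Distribution over the non-target classes only.
    nt_s = F.log_softmax(student_logits[~mask].reshape(B, C - 1) / T, dim=1)
    nt_t = F.softmax(teacher_logits[~mask].reshape(B, C - 1) / T, dim=1)
    ncgd = F.kl_div(nt_s, nt_t, reduction="batchmean") * T * T

    return alpha * tcgd + beta * ncgd
```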

3.
Neural Netw ; 179: 106564, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39089150

ABSTRACT

This study is centered around the dynamic behaviors observed in a class of fractional-order generalized reaction-diffusion inertial neural networks (FGRDINNs) with time delays. These networks are characterized by differential equations involving two distinct fractional derivatives of the state. The global uniform stability of FGRDINNs with time delays is explored utilizing Lyapunov comparison principles. Furthermore, global synchronization conditions for FGRDINNs with time delays are derived through the Lyapunov direct method, with consideration given to various feedback control strategies and parameter perturbations. The effectiveness of the theoretical findings is demonstrated through three numerical examples, and the impact of controller parameters on the error system is further investigated.
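For orientation, a generic system of this class (two distinct Caputo fractional derivatives of the state, a reaction-diffusion term, and time-varying delays) can be written as follows; this is an assumed illustrative form, not the paper's exact model.

```latex
% Assumed generic FGRDINN form: two Caputo fractional derivatives of the
% state, a reaction-diffusion term, and time-varying delays.
{}^{C}\!D^{\alpha}_{t}\!\left({}^{C}\!D^{\beta}_{t}\,u_i(t,x)\right)
  = d_i \Delta u_i(t,x)
  - a_i\,{}^{C}\!D^{\beta}_{t}\,u_i(t,x)
  - b_i\,u_i(t,x)
  + \sum_{j=1}^{n} c_{ij}\, f_j\bigl(u_j(t,x)\bigr)
  + \sum_{j=1}^{n} e_{ij}\, g_j\bigl(u_j(t-\tau_j(t),x)\bigr)
  + I_i, \qquad i = 1,\dots,n .
```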

4.
Clin Lab Med ; 44(3): 397-408, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39089746

ABSTRACT

A leukocyte differential of peripheral blood can be performed using digital imaging coupled with cellular pre-classification by artificial neural networks; platelet and erythrocyte morphology can also be assessed and counts estimated. Systems from a single vendor have been used in clinical practice for several years, with other vendors' systems in development. These systems perform comparably to traditional manual optical microscopy; however, they are designed and intended to be operated by a trained morphologist. Their benefits include increased standardization, efficiency, and remote-review capability.


Subject(s)
Neural Networks, Computer , Humans , Hematology , Image Processing, Computer-Assisted , Artificial Intelligence
5.
Article in English | MEDLINE | ID: mdl-39090504

ABSTRACT

PURPOSE: The integration of deep learning in image segmentation technology markedly improves the automation capabilities of medical diagnostic systems, reducing the dependence on the clinical expertise of medical professionals. However, the accuracy of image segmentation is still affected by various interference factors encountered during image acquisition. METHODS: To address this challenge, this paper proposes a loss function designed to mine information from specific pixels that change dynamically during the training process. Based on the triplet concept, this dynamic change is leveraged to drive the predicted image boundaries closer to the real boundaries. RESULTS: Extensive experiments on the PH2 and ISIC2017 dermoscopy datasets validate that our proposed loss function overcomes the limitations of traditional triplet loss methods in image segmentation applications. It not only raises the Jaccard indices of neural networks by 2.42% and 2.21% on PH2 and ISIC2017, respectively, but also leads networks trained with it to generally surpass those trained without it in segmentation performance. CONCLUSION: This work proposed a loss function that deeply mines the information of specific pixels without incurring additional training costs, significantly improving the automation of neural networks in image segmentation tasks. The loss function adapts to dermoscopic images of varying quality and demonstrates higher effectiveness and robustness than other boundary loss functions, making it suitable for image segmentation tasks across various neural networks.
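For reference, the classical triplet margin loss that the "triplet concept" refers to is sketched below over sampled pixel embeddings. How the triplets are mined, which is precisely what the proposed dynamic, training-time pixel mining changes, is not reproduced here.

```python
import torch
import torch.nn.functional as F

def classical_triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Classical triplet margin loss, the baseline the paper builds on.

    anchor/positive/negative: (N, D) embeddings of sampled pixels; the
    paper's dynamic mining of specific pixels during training is exactly
    what its proposed loss changes and is not shown here.
    """
    d_ap = F.pairwise_distance(anchor, positive)   # pull anchor toward positive
    d_an = F.pairwise_distance(anchor, negative)   # push anchor away from negative
    return F.relu(d_ap - d_an + margin).mean()
```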

6.
Small Methods ; : e2400620, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39091065

ABSTRACT

The heterogeneous micromechanical properties of biological tissues have profound implications across diverse medical and engineering domains. However, identifying full-field heterogeneous elastic properties of soft materials using traditional engineering approaches is fundamentally challenging due to difficulties in estimating local stress fields. Recently, there has been a growing interest in data-driven models for learning full-field mechanical responses, such as displacement and strain, from experimental or synthetic data. However, research studies on inferring full-field elastic properties of materials, a more challenging problem, are scarce, particularly for large-deformation, hyperelastic materials. Here, a physics-informed machine learning approach is proposed to identify the elasticity map in nonlinear, large-deformation hyperelastic materials. This study reports the prediction accuracies and computational efficiency of physics-informed neural networks (PINNs) in inferring heterogeneous elasticity maps across materials whose structural complexity closely resembles real tissue microstructure, such as brain, tricuspid valve, and breast cancer tissues. Further, the improved architecture is applied to three hyperelastic constitutive models: Neo-Hookean, Mooney-Rivlin, and Gent. The improved network architecture consistently produces accurate estimations of heterogeneous elasticity maps, even when up to 10% noise is present in the training data.
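For context, the commonly used incompressible strain-energy functions for the three constitutive models named above are reproduced below (standard textbook forms; the paper's exact parameterization may differ).

```latex
% Standard incompressible strain-energy functions. I_1, I_2 are the first
% and second invariants of the left Cauchy-Green deformation tensor,
% mu is the shear modulus, C_1, C_2 and J_m are material parameters.
W_{\text{Neo-Hookean}}   = \tfrac{\mu}{2}\,(I_1 - 3), \qquad
W_{\text{Mooney-Rivlin}} = C_1\,(I_1 - 3) + C_2\,(I_2 - 3), \qquad
W_{\text{Gent}}          = -\tfrac{\mu J_m}{2}\,
                           \ln\!\left(1 - \frac{I_1 - 3}{J_m}\right).
```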

7.
Front Vet Sci ; 11: 1436795, 2024.
Article in English | MEDLINE | ID: mdl-39086767

ABSTRACT

Facial expressions are essential for communication and emotional expression across species. Tools like the Horse Grimace Scale (HGS) have improved pain recognition in horses, but their reliance on human identification of characteristic traits brings drawbacks such as subjectivity, training requirements, costs, and potential bias. Even so, the development of facial expression pain scales for animals has been making strides. To address these limitations, Automated Pain Recognition (APR) powered by Artificial Intelligence (AI) offers a promising advancement. Notably, computer vision and machine learning have revolutionized our approach to identifying and addressing pain in non-verbal patients, including animals, with profound implications for both veterinary medicine and animal welfare. By leveraging the capabilities of AI algorithms, we can construct sophisticated models capable of analyzing diverse data inputs, encompassing not only facial expressions but also body language, vocalizations, and physiological signals, to provide precise and objective evaluations of an animal's pain levels. While the advancement of APR holds great promise for improving animal welfare by enabling better pain management, it also brings forth the need to overcome data limitations, ensure ethical practices, and develop robust ground truth measures. This narrative review aimed to provide a comprehensive overview, tracing the journey from the initial application of facial expression recognition for the development of pain scales in animals to the recent application, evolution, and limitations of APR, thereby contributing to understanding this rapidly evolving field.

8.
Article in English | MEDLINE | ID: mdl-39086252

ABSTRACT

Estimation of mental workload from electroencephalogram (EEG) signals aims to accurately measure the cognitive demands placed on an individual during multitasking mental activities. By analyzing the brain activity of the subject, we can determine the level of mental effort required to perform a task and optimize the workload to prevent cognitive overload or underload. This information can be used to enhance performance and productivity in various fields such as healthcare, education, and aviation. In this paper, we propose a method that uses EEG and deep neural networks to estimate the mental workload of human subjects during multitasking mental activities. Notably, our proposed method employs subject-independent classification. We use the "STEW" dataset, which consists of two tasks, namely "No task" and "simultaneous capacity (SIMKAP)-based multitasking activity". We estimate the different workload levels of the two tasks using a composite framework consisting of brain connectivity and deep neural networks. After the initial preprocessing of the EEG signals, an analysis of the relationships between the 14 EEG channels is conducted to evaluate effective brain connectivity. This assessment illustrates the information flow between various brain regions, utilizing the direct Directed Transfer Function (dDTF) method. Then, we propose a deep hybrid model based on pre-trained Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) for the classification of workload levels. The proposed deep model achieved an accuracy of 83.12% under the subject-independent leave-subject-out (LSO) approach. The pre-trained CNN + LSTM approach to EEG data was found to be an accurate method for assessing mental workload.
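As a hedged illustration of the hybrid architecture described above, the sketch below runs a small CNN over per-window connectivity matrices from 14 EEG channels and an LSTM over the resulting sequence. The pre-trained CNN backbones and dDTF connectivity estimation used in the paper are not reproduced, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMWorkload(nn.Module):
    """Toy CNN + LSTM workload classifier over connectivity sequences.

    Assumes the input is a sequence of 14x14 connectivity matrices
    (one per EEG window, 14 channels); layer sizes are illustrative.
    """

    def __init__(self, n_classes: int = 2, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (batch*T, 16)
        )
        self.lstm = nn.LSTM(input_size=16, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, T, 1, 14, 14) -- T connectivity matrices per trial
        b, t = x.shape[:2]
        feats = self.cnn(x.reshape(b * t, *x.shape[2:])).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                        # last time step

logits = CNNLSTMWorkload()(torch.randn(4, 10, 1, 14, 14))
```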

9.
Heliyon ; 10(13): e34146, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39091959

ABSTRACT

This investigation introduces advanced predictive models for estimating axial strains in Carbon Fiber-Reinforced Polymer (CFRP) confined concrete cylinders, addressing critical aspects of structural integrity in seismic environments. Drawing on a substantial dataset of 708 experimental observations, we harness Artificial Neural Networks (ANNs) and General Regression Analysis (GRA) to refine predictive accuracy and reliability. The enhanced models developed through this research demonstrate superior performance, evidenced by an R-squared value of 0.85 and a Root Mean Square Error (RMSE) of 1.42, and significantly advance our understanding of the behavior of CFRP-confined structures under load. Detailed comparisons with existing predictive models reveal the superior capacity of our approach to reproduce and forecast axial strain behaviors accurately, offering essential benefits for designing and reinforcing concrete structures in earthquake-prone areas. Through meticulous analysis and innovative modeling, this investigation sets a new benchmark in the field, providing a robust framework for future engineering applications and research.

10.
BMC Med Imaging ; 24(1): 201, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39095688

ABSTRACT

Skin cancer stands as one of the foremost challenges in oncology, and its early detection is crucial for successful treatment outcomes. Traditional diagnostic methods depend on dermatologist expertise, creating a need for more reliable, automated tools. This study explores deep learning, particularly Convolutional Neural Networks (CNNs), to enhance the accuracy and efficiency of skin cancer diagnosis. Leveraging the HAM10000 dataset, a comprehensive collection of dermatoscopic images encompassing a diverse range of skin lesions, the study introduces a CNN model with an optimized layer configuration tailored to the nuanced task of skin lesion classification. The architecture combines multiple convolutional, pooling, and dense layers aimed at capturing the complex visual features of skin lesions. To address class imbalance within the dataset, a data augmentation strategy is employed, ensuring a balanced representation of each lesion category during training. The model's learning process is optimized using the Adam optimizer, with parameters fine-tuned over 50 epochs and a batch size of 128 to enhance the model's ability to discern subtle patterns in the image data. A Model Checkpoint callback ensures the preservation of the best model iteration for future use. The proposed model achieves an accuracy of 97.78%, a precision of 97.9%, a recall of 97.9%, and an F2 score of 97.8%, underscoring its potential as a robust tool for the early detection and classification of skin cancer, thereby supporting clinical decision-making and contributing to improved patient outcomes in dermatology.
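The training setup named in the abstract (Adam, 50 epochs, batch size 128, a Model Checkpoint callback) maps naturally onto a Keras workflow; the sketch below is illustrative only, since the study's actual layer configuration, input resolution, and augmentation pipeline are not given in the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Hedged sketch of the described training setup; the architecture and
# 28x28 input size are assumptions, not the study's configuration.
NUM_CLASSES = 7  # HAM10000 contains seven lesion categories

model = models.Sequential([
    layers.Input(shape=(28, 28, 3)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

checkpoint = tf.keras.callbacks.ModelCheckpoint(
    "best_model.keras", monitor="val_accuracy", save_best_only=True)

# x_train / y_train are assumed to be prepared (and class-balanced via
# augmentation) beforehand:
# model.fit(x_train, y_train, validation_split=0.1,
#           epochs=50, batch_size=128, callbacks=[checkpoint])
```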


Subject(s)
Deep Learning , Dermoscopy , Neural Networks, Computer , Skin Neoplasms , Humans , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Dermoscopy/methods , Image Interpretation, Computer-Assisted/methods
11.
Ultramicroscopy ; 265: 114020, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39096695

ABSTRACT

Structural and chemical characterization of nanomaterials provides important information for understanding their functional properties. Nanomaterials with characteristic structure sizes in the nanometer range can be characterized by scanning transmission electron microscopy (STEM). In conventional STEM, two-dimensional (2D) projection images of the samples are acquired, and information about the third dimension is lost. This drawback can be overcome by STEM tomography, where the three-dimensional (3D) structure is reconstructed from a series of projection images acquired along various projection directions. However, 3D measurements are expensive with respect to acquisition and evaluation time. Furthermore, the method is hardly applicable to beam-sensitive materials, i.e. samples that degrade under the electron beam. For this reason, it is desirable to know whether sufficient structural and chemical information can be extracted from 2D-projection measurements. In the present work, a comparison between 3D-reconstruction and 2D-projection characterization of structure and mixing in nanoparticle hetero-aggregates is provided. To this end, convolutional neural networks are trained in 2D and 3D to extract particle positions and material types from the simulated or experimental measurements. The results are used to quantitatively evaluate structure, particle size distributions, hetero-aggregate compositions, and mixing of particles, and to answer the question of whether an expensive 3D characterization of this material system is required for future characterizations.

12.
Ann Biomed Eng ; 2024 Aug 03.
Article in English | MEDLINE | ID: mdl-39097542

ABSTRACT

PURPOSE: Estimating loading of the knee joint may be helpful in managing degenerative joint diseases. Contemporary methods to estimate loading involve calculating knee joint contact forces using musculoskeletal modeling and simulation from motion capture (MOCAP) data, which must be collected in a specialized environment and analyzed by a trained expert. To make the estimation of knee joint loading more accessible, simple input predictors should be used to predict knee joint loading with artificial neural networks. METHODS: We trained feedforward artificial neural networks (ANNs) to predict knee joint loading peaks from the mass, height, age, sex, walking speed, and knee flexion angle (KFA) of subjects using their existing MOCAP data. We also collected an independent MOCAP dataset while recording walking with a video camera (VC) and inertial measurement units (IMUs). We quantified the prediction accuracy of the ANNs using walking speed and KFA estimates from (1) MOCAP data, (2) VC data, and (3) IMU data separately (i.e., we quantified three sets of prediction accuracy metrics). RESULTS: Using the portable modalities, we achieved root mean square errors between 0.13 and 0.37 when normalized to the mean of the musculoskeletal-analysis-based reference values. The correlation between the predicted and reference loading peaks varied between 0.65 and 0.91. This was comparable to the prediction accuracy obtained when the predictors were derived from motion capture data. DISCUSSION: The prediction results show that both VCs and IMUs can be used to estimate predictors for knee joint loading outside the motion laboratory. Future studies should investigate the usability of these methods in an out-of-laboratory setting.
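As a minimal sketch of the idea, the following fits a small feedforward network that maps the six simple predictors named above to a knee-loading peak; the layer sizes and the synthetic placeholder data are assumptions, not the study's data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic placeholder data standing in for the six predictors.
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(75, 12, 200),      # mass [kg]
    rng.normal(1.72, 0.09, 200),  # height [m]
    rng.integers(20, 80, 200),    # age [years]
    rng.integers(0, 2, 200),      # sex (0/1)
    rng.normal(1.3, 0.2, 200),    # walking speed [m/s]
    rng.normal(20, 6, 200),       # peak knee flexion angle [deg]
])
y = rng.normal(3.0, 0.5, 200)     # placeholder loading peak [body weight]

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X, y)
print(model.predict(X[:3]))
```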

13.
Sci Rep ; 14(1): 17785, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090261

ABSTRACT

Skin cancer is a lethal disease, and its early detection plays a pivotal role in preventing its spread to other body organs and tissues. Artificial Intelligence (AI)-based automated methods can play a significant role in its early detection. This study presents an AI-based novel approach, termed 'DualAutoELM', for the effective identification of various types of skin cancers. The proposed method leverages a network of autoencoders comprising two distinct autoencoders: the spatial autoencoder and the FFT (Fast Fourier Transform) autoencoder. The spatial autoencoder specializes in learning spatial features within input lesion images, whereas the FFT autoencoder learns to capture textural and distinguishing frequency patterns within transformed input skin lesion images through the reconstruction process. The use of attention modules at various levels within the encoder part of these autoencoders significantly improves their discriminative feature learning capabilities. An Extreme Learning Machine (ELM) with a single feedforward hidden layer is trained to classify skin malignancies using the features recovered from the bottleneck layers of these autoencoders. Two publicly available datasets, 'HAM10000' and 'ISIC-2017', are used to thoroughly assess the suggested approach. The experimental findings demonstrate the accuracy and robustness of the proposed technique, with AUC, precision, and accuracy values of 0.98, 97.68%, and 97.66% for 'HAM10000' and 0.95, 86.75%, and 86.68% for 'ISIC-2017', respectively. This study highlights the potential of the suggested approach for accurate detection of skin cancer.
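For readers unfamiliar with ELMs, the sketch below shows the standard single-hidden-layer variant, in which random hidden weights are frozen and only the output weights are solved in closed form; the autoencoder feature extraction and attention modules described above are not reproduced.

```python
import numpy as np

def train_elm(features, labels, n_hidden=500, seed=0):
    """Standard single-hidden-layer Extreme Learning Machine.

    `features` would be the autoencoder bottleneck features described
    in the paper; here it is just an (N, D) array with integer labels.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((features.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(features @ W + b)             # random, frozen hidden mapping
    T = np.eye(labels.max() + 1)[labels]      # one-hot targets
    beta = np.linalg.pinv(H) @ T              # closed-form output weights
    return W, b, beta

def predict_elm(features, W, b, beta):
    return np.argmax(np.tanh(features @ W + b) @ beta, axis=1)
```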


Subject(s)
Machine Learning , Skin Neoplasms , Humans , Skin Neoplasms/diagnosis , Skin Neoplasms/diagnostic imaging , Early Detection of Cancer/methods , Algorithms , Artificial Intelligence , Image Processing, Computer-Assisted/methods
14.
BMC Oral Health ; 24(1): 772, 2024 Jul 10.
Article in English | MEDLINE | ID: mdl-38987714

ABSTRACT

Integrating artificial intelligence (AI) into medical and dental applications can be challenging due to clinicians' distrust of computer predictions and the potential risks associated with erroneous outputs. We introduce the idea of using AI to trigger second opinions in cases where there is a disagreement between the clinician and the algorithm. By keeping the AI prediction hidden throughout the diagnostic process, we minimize the risks associated with distrust and erroneous predictions, relying solely on human predictions. The experiment involved 3 experienced dentists, 25 dental students, and 290 patients treated for advanced caries across 6 centers. We developed an AI model to predict pulp status following advanced caries treatment. Clinicians were asked to perform the same prediction without the assistance of the AI model. The second opinion framework was tested in a 1000-trial simulation. The average F1-score of the clinicians increased significantly from 0.586 to 0.645.
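The second-opinion framework reduces to a simple disagreement rule, sketched below under assumed interfaces (the function name and encodings are hypothetical); the AI output itself is never shown to the clinician.

```python
def final_diagnosis(clinician_pred, ai_pred, second_clinician):
    """Hedged sketch of the hidden-AI second-opinion rule.

    The AI prediction is never displayed; it only decides whether a
    second human opinion is requested, so every diagnosis remains
    human-made. `second_clinician` is a callable returning another
    human prediction (hypothetical interface, for illustration only).
    """
    if clinician_pred != ai_pred:      # disagreement triggers a review
        return second_clinician()      # second human opinion decides
    return clinician_pred              # agreement: keep the first opinion
```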


Subject(s)
Artificial Intelligence , Dental Caries , Humans , Dental Caries/therapy , Referral and Consultation , Patient Care Planning , Algorithms
15.
Pharm Res ; 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39048879

ABSTRACT

PURPOSE: In biotechnology, microscopic cell imaging is often used to identify and analyze cell morphology and cell state for a variety of applications. For example, microscopy can be used to detect the presence of cytopathic effects (CPE) in cell culture samples to determine virus contamination. Another application of microscopy is to verify clonality during cell line development. Conventionally, inspection of these microscopy images is performed manually by human analysts, which is both tedious and time-consuming. In this paper, we propose using supervised deep learning algorithms to automate the cell detection processes mentioned above. METHODS: The proposed algorithms utilize image processing techniques and convolutional neural networks (CNNs) to detect the presence of CPE and to verify clonality in cell line development. RESULTS: We train and test the algorithms on image data that have been collected and labeled by domain experts. Our experiments have shown promising results in terms of both accuracy and speed. CONCLUSION: The deep learning algorithms achieve high accuracy (more than 95%) on both CPE detection and clonal selection applications, resulting in a highly efficient and cost-effective automation process.

16.
J Oral Biol Craniofac Res ; 14(5): 500-506, 2024.
Article in English | MEDLINE | ID: mdl-39050525

ABSTRACT

Aim: The aim of the questionnaire study was to determine the knowledge, attitude, and perception of orthodontists regarding the role of artificial intelligence in dentistry in general and orthodontics specifically, and to determine the use of artificial intelligence by the orthodontist. Methods: This cross-sectional study was done among the orthodontists of Northern India (clinicians, academicians, and postgraduates) through a web-based electronic survey using Google Forms. The study was designed to obtain information about AI and its basic usage in daily life, in dentistry, and in orthodontics from the participants. The options given were set specifically according to the Likert scale to maintain the correct format. The questionnaire was validated by one AI expert and one orthodontic expert, followed by pretesting in a smaller group of 25 orthodontists 2 weeks before circulation. A total of 100 orthodontists and postgraduate students responded to the pretested online questionnaire link for 31 questions in four sections sent via social media websites in a period of 3 months. Results: The majority of the participants believe that AI could be useful in diagnosis and treatment planning and could revolutionize dentistry in general. 84% of the orthodontic academicians and clinicians, including PG students, consider AI a useful tool for boosting performance and delivering quality care in orthodontics, and 72% see AI as a partner rather than a competitor in the foreseeable future of dentistry. 90% of the participants believe that the incorporation of AI into CBCT analysis can be a valuable addition to diagnosis and treatment planning. 86% of total participants agree that AI can be helpful in decision-making for orthognathic surgery, and 84% find AI useful for bone age assessment. Conclusions: It was observed that academicians are more aware of AI terminologies and usage as compared to PG students and clinicians. There is a consensus that AI is a useful tool for diagnosis and treatment planning, boosting performance and quality care in orthodontics. In spite of these facts, 62.5% of clinicians and 40% of PG students are still not using AI for cephalometric analysis (p = 0.033).

17.
JMIR AI ; 3: e54885, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39052997

ABSTRACT

BACKGROUND: The escalating global prevalence of obesity has necessitated the exploration of novel diagnostic approaches. Recent scientific inquiries have indicated potential alterations in voice characteristics associated with obesity, suggesting the feasibility of using voice as a noninvasive biomarker for obesity detection. OBJECTIVE: This study aims to use deep neural networks to predict obesity status through the analysis of short audio recordings, investigating the relationship between vocal characteristics and obesity. METHODS: A pilot study was conducted with 696 participants, using self-reported BMI to classify individuals into obesity and nonobesity groups. Audio recordings of participants reading a short script were transformed into spectrograms and analyzed using an adapted YOLOv8 model (Ultralytics). The model performance was evaluated using accuracy, recall, precision, and F1-scores. RESULTS: The adapted YOLOv8 model demonstrated a global accuracy of 0.70 and a macro F1-score of 0.65. It was more effective in identifying nonobesity (F1-score of 0.77) than obesity (F1-score of 0.53). This moderate level of accuracy highlights the potential and challenges in using vocal biomarkers for obesity detection. CONCLUSIONS: While the study shows promise in the field of voice-based medical diagnostics for obesity, it faces limitations such as reliance on self-reported BMI data and a small, homogenous sample size. These factors, coupled with variability in recording quality, necessitate further research with more robust methodologies and diverse samples to enhance the validity of this novel approach. The findings lay a foundational step for future investigations in using voice as a noninvasive biomarker for obesity detection.
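As a hedged sketch of the audio-to-spectrogram step, the snippet below converts a recording into a log-mel spectrogram with librosa; the file name, sampling rate, and mel parameters are assumptions, and the subsequent classification with an adapted Ultralytics YOLOv8 model is indicated only as a commented, typical call pattern.

```python
import librosa
import numpy as np

# Convert a short reading recording into a log-scaled mel spectrogram.
# File name, sampling rate, and n_mels are illustrative assumptions.
y, sr = librosa.load("reading_sample.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
mel_db = librosa.power_to_db(mel, ref=np.max)

# The study feeds such spectrogram images to an adapted YOLOv8
# classification model (Ultralytics); a typical call pattern is:
# from ultralytics import YOLO
# model = YOLO("yolov8n-cls.pt")
# model.train(data="spectrogram_dataset/", epochs=50)
```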

18.
Neural Netw ; 178: 106545, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39053198

ABSTRACT

This paper is concerned with the input-to-state stability (ISS) of a class of delayed memristor-based inertial neural networks (DMINNs). Based on nonsmooth analysis and stability theory, novel delay-dependent and delay-independent criteria on the ISS of DMINNs are obtained by constructing different Lyapunov functions. Moreover, in contrast with the reduced-order approach used in previous works, this paper considers the ISS of DMINNs via a non-reduced-order approach. Directly analyzing the DMINN model better preserves its physical background, reduces the complexity of the calculations, and is more rigorous in practical applications. Additionally, the results proposed here on the ISS of DMINNs incorporate and complement existing studies on memristive neural network dynamical systems. Lastly, a numerical example is provided to show that the obtained criteria are reliable.

19.
Neural Netw ; 179: 106553, 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39053303

ABSTRACT

Multi-modal representation learning has received significant attention across diverse research domains due to its ability to model a scenario comprehensively. Learning the cross-modal interactions is essential to combining multi-modal data into a joint representation. However, conventional cross-attention mechanisms can produce noisy and non-meaningful values in the absence of useful cross-modal interactions among input features, thereby introducing uncertainty into the feature representation. These factors have the potential to degrade the performance of downstream tasks. This paper introduces a novel Pre-gating and Contextual Attention Gate (PCAG) module for multi-modal learning comprising two gating mechanisms that operate at distinct information processing levels within the deep learning model. The first gate filters out interactions that lack informativeness for the downstream task, while the second gate reduces the uncertainty introduced by the cross-attention module. Experimental results on eight multi-modal classification tasks spanning various domains show that the multi-modal fusion model with PCAG outperforms state-of-the-art multi-modal fusion models. Additionally, we elucidate how PCAG effectively processes cross-modality interactions.

20.
Neural Netw ; 179: 106537, 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39053299

ABSTRACT

Portfolio management (PM) is a popular financial process that concerns the occasional reallocation of a particular quantity of capital into a portfolio of assets, with the main aim of maximizing profitability conditioned on a certain level of risk. Given the inherent dynamics of stock exchanges and the emphasis on long-term performance, reinforcement learning (RL) has become a dominant solution for solving the portfolio management problem in an automated and efficient manner. Nevertheless, present RL-based PM methods take into account only the variations in the prices of portfolio assets and the implications of those variations, while overlooking the significant relationships among different assets in the market, which are extremely valuable for managerial decisions. To close this gap, this paper introduces a novel deep model that combines two subnetworks: one learns a temporal representation of historical prices using a refined temporal learner, while the other learns the relationships between different stocks in the market using a relation graph learner (RGL). These learners are then integrated into a curriculum RL scheme that formulates PM as a curriculum Markov Decision Process, in which an adaptive curriculum policy enables the agent to adaptively minimize risk and maximize cumulative return. Proof-of-concept experiments are performed on data from three public stock indices (namely S&P500, NYSE, and NASDAQ), and the results demonstrate the efficiency of the proposed framework in improving portfolio management performance over competing RL solutions.
