ABSTRACT
Background: This study investigates different strategies for estimating internal liver tumor motion during radiotherapy based on continuous monitoring of external respiratory motion combined with sparse internal imaging. Methods: Fifteen patients underwent three-fraction stereotactic liver radiotherapy. The 3D internal tumor motion (INT) was monitored by electromagnetic transponders while a camera monitored the external marker block motion (EXT). The ability of four external-internal correlation models (ECMs) to estimate INT as a function of EXT was investigated: a simple linear model (ECM1), an augmented linear model (ECM2), an augmented quadratic model (ECM3), and an extended quadratic model (ECM4). Each ECM was constructed by fitting INT and EXT during the first 60 s of each fraction. The fit accuracy was calculated as the root-mean-square error (RMSE) between ECM-estimated and actual tumor motion. Next, the RMSE of the ECM-estimated tumor motion throughout the fractions was calculated for four simulated ECM update strategies: (A) no update; 0.33 Hz internal sampling with continuous update of either (B) all ECM parameters based on the samples from the last 2 minutes or (C) only the baseline term based on the last 5 samples; and (D) full ECM update every minute using 20 s of continuous internal sampling. Results: The augmented quadratic ECM3 had the best fit accuracy, with mean (± SD) RMSEs of 0.32 ± 0.11 mm (left-right, LR), 0.79 ± 0.30 mm (cranio-caudal, CC), and 0.56 ± 0.31 mm (anterior-posterior, AP). However, the simpler augmented linear ECM2 combined with frequent baseline updates (update strategy C) gave the best motion estimates, with mean RMSEs of 0.41 ± 0.14 mm (LR), 1.02 ± 0.33 mm (CC), and 0.78 ± 0.48 mm (AP). This was significantly better than all other ECM-update strategy combinations for CC motion (Wilcoxon signed rank, p < 0.05).
Conclusion: The augmented linear ECM2 combined with frequent baseline updates provided the best compromise between fit accuracy and robustness towards irregular motion. It allows accurate internal motion monitoring by combining external motion monitoring with sparse 0.33 Hz kV imaging, which is available on conventional linacs.
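The fitting step described above can be sketched in a few lines. The block below is a minimal, hedged illustration, not the study's implementation: it fits an "augmented quadratic" ECM of the assumed form int(t) = a + b*ext(t) + c*ext(t)^2 + d*ext(t - tau) to simulated external/internal motion traces by least squares, then scores the fit with the same RMSE metric. The signal shapes, delay term, and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 0.1)               # 60 s model-building period
ext = 5.0 * np.sin(2 * np.pi * t / 4.0)     # external marker motion (mm)
tau = 3                                     # assumed delay of 3 samples
ext_d = np.roll(ext, tau)                   # delayed external signal
int_true = 1.0 + 2.0 * ext + 0.05 * ext**2 + 0.5 * ext_d
int_meas = int_true + rng.normal(0.0, 0.1, t.size)  # noisy internal motion

# Design matrix for the augmented quadratic ECM, fitted by least squares
X = np.column_stack([np.ones_like(ext), ext, ext**2, ext_d])
coef, *_ = np.linalg.lstsq(X, int_meas, rcond=None)

est = X @ coef
rmse = float(np.sqrt(np.mean((est - int_meas) ** 2)))  # fit accuracy
```

The baseline-only update of strategy C would amount to refitting just `coef[0]` from the latest sparse internal samples while keeping the other terms fixed.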
ABSTRACT
Background: Intraoperative Radiation Therapy (IORT) refers to the delivery of radiation during surgery, and the computed thickness of the target is one of its most significant factors. Objective: This paper aimed to compute the target thickness and design a radiation pattern that distributes the irradiation uniformly throughout the target. Material and Methods: In this simulation study, the Monte Carlo code was used to simulate the experimental setup. The electron flux variations on an electronic board's metallic layer were studied for different thicknesses of the target tissue and validated against experimental data from the electronic board. Results: Based on the electron number for different Poly Methyl Methacrylate (PMMA) phantom thicknesses at various energies, 6 MeV electrons are suitable for determining the target thickness. Uniformity of irradiation and the corresponding time for each target were investigated. The iso-dose and percentage depth dose curves show that higher energies are suitable for treatment and distribute the radiation uniformly throughout the target. Increasing the phantom thickness raises the radiation time corresponding to these energies. The tissue thickness of each section is determined, and the radiation time is managed by scanning the target. Conclusion: After incomplete tumor removal in IORT, the thickness of the remaining tissue and the irradiation time must be calculated for the various remaining tissues. Uniform irradiation of the tissues protects patients from overexposure, since the radiation dose is prescribed and checked by an oncologist.
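The thickness-determination idea above can be sketched as a calibration lookup: if a Monte Carlo calibration maps remaining-tissue thickness to the electron count reaching the detector layer, an unknown thickness is read off by interpolating a measured count against that curve. The calibration values below are fabricated monotone placeholders, not simulated data.

```python
import numpy as np

# Assumed calibration: thicker tissue -> fewer electrons reach the board
thickness_mm = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])
counts = np.array([9.2e5, 7.1e5, 5.0e5, 3.3e5, 2.0e5, 1.1e5])

def thickness_from_count(measured):
    """Invert the monotone decreasing calibration by interpolation."""
    # np.interp requires increasing x, so interpolate on reversed arrays
    return float(np.interp(measured, counts[::-1], thickness_mm[::-1]))

est = thickness_from_count(4.0e5)   # a count between the 15 and 20 mm points
```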
ABSTRACT
Cancer, as identified by the World Health Organization, stands as the second leading cause of death globally. Its intricate nature makes it challenging to study solely on the basis of biological knowledge, often leading to expensive research endeavors. While tremendous strides have been made in understanding cancer, gaps remain, especially in predicting tumor behavior across various stages. The integration of artificial intelligence into oncology research has accelerated our insights into tumor behavior, from its genesis to metastasis. Nevertheless, there is a pressing need for a holistic understanding of the interactions between cancer cells, their microenvironment, and their subsequent interplay with the broader body environment. In this landscape, deep learning emerges as a potent tool with multifaceted applications to diverse scientific challenges. Motivated by this, our study presents a novel approach to modeling cancer tumor growth from a molecular dynamics perspective, harnessing the capabilities of deep-learning cellular automata. This not only facilitates a microscopic examination of tumor behavior and growth but also delves deeper into its overarching behavioral patterns. Our work primarily focused on evaluating the developed tumor growth model through the proposed network, followed by a rigorous compatibility check against traditional mathematical tumor growth models using R and Matlab software. The outcomes aligned notably with the Gompertz growth model, underscoring the robustness of our approach. Our validated model stands out by offering adaptability to diverse tumor growth datasets, positioning itself as a valuable tool for prediction and further research.
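The Gompertz check mentioned above can be illustrated directly. A minimal sketch, with all parameter values assumed: the Gompertz curve V(t) = K * exp(ln(V0/K) * exp(-a*t)) linearises, for known carrying capacity K, as ln(ln(K/V(t))) = ln(ln(K/V0)) - a*t, so the growth rate a can be recovered by a straight-line fit to a candidate growth curve.

```python
import numpy as np

def gompertz(t, v0, k, a):
    """Gompertz tumour volume at time t (v0 initial, k carrying capacity)."""
    return k * np.exp(np.log(v0 / k) * np.exp(-a * t))

t = np.linspace(0.0, 30.0, 60)
v0, k, a = 1.0, 100.0, 0.2            # illustrative parameters
v = gompertz(t, v0, k, a)

# Linearised fit: slope of ln(ln(k/v)) versus t equals -a
y = np.log(np.log(k / v))
slope, intercept = np.polyfit(t, y, 1)
a_fit = -slope
```

A dataset that produces a near-straight line under this transform is Gompertz-compatible; systematic curvature would point towards a different growth law (e.g. logistic).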
Subject(s)
Artificial Intelligence , Cellular Automata , Neoplasms , Humans , Models, Biological , Molecular Dynamics Simulation , Neoplasms/pathology , Tumor Microenvironment , Deep Learning
ABSTRACT
Background: Positron Emission Mammography (PEM) is a nuclear medicine imaging tool that plays a significant role in the diagnosis of patients with breast cancer. Much research has been done in recent years to improve the performance of these systems. Objective: This study aims to propose a new method for optimizing the size of the axial Field of View (FOV) in PEM and improving system performance. Material and Methods: In this analytical study, a conventional Inveon PET scanner is simulated using GATE in order to validate the simulation. For this simulation, the mean relative difference is 2.91%, confirming the precision and correctness of the simulation, which is thereby benchmarked. Next, to design the new optimized detector, several validated simulations are performed to find the best geometry. Results: The best result is obtained with an axial FOV of 101.7 mm, using 1.6×1.6×15 mm3 lutetium yttrium orthosilicate (LYSO) crystals. The detector consists of 6 block rings with 30 detector blocks in each ring. In this paper, the performance of the scanner is improved and the geometry is optimized. The sensitivity and scatter fraction of the designed scanner are 4.65% and 21.2%, respectively, and the noise equivalent count rate (NECR) is 105.442 kcps. Conclusion: The results showed a 1-3% improvement in the sensitivity of this new detector compared with different PEM systems.
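The two count-rate figures of merit quoted above follow standard definitions, sketched here for clarity. Given true (T), scattered (S), and random (R) coincidence rates, the scatter fraction is S/(T+S) and the noise-equivalent count rate is T^2/(T+S+R). The input rates below are made-up numbers, not the paper's simulated data.

```python
def scatter_fraction(trues, scatters):
    """Scatter fraction SF = S / (T + S)."""
    return scatters / (trues + scatters)

def necr(trues, scatters, randoms):
    """Noise-equivalent count rate NECR = T^2 / (T + S + R)."""
    return trues**2 / (trues + scatters + randoms)

# Illustrative coincidence rates (counts per second)
t_rate, s_rate, r_rate = 120_000.0, 32_000.0, 10_000.0
sf = scatter_fraction(t_rate, s_rate)      # ~0.21, comparable to 21.2%
rate = necr(t_rate, s_rate, r_rate)        # NECR in counts per second
```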
ABSTRACT
PURPOSE: This study investigates a new approach for estimating the planning target volume (PTV) margin for moving tumors treated with robotic stereotactic body radiotherapy (SBRT). METHODS: In this new approach, the covariance of modeling and prediction errors was estimated using error propagation and incorporated into the Van Herk formula to form a Modified Van Herk Formula (MVHF). In a retrospective multi-center analysis, the MVHF was studied using 163 patients treated with different system versions of robotic SBRT (G3 version 6.2.3, VSI version 8.5, and VSI version 9.5) and compared with two established PTV margin estimation methods: the original Van Herk Formula (VHF) and the Uncertainty Estimation Method (UEM). RESULTS: Overall, the PTV margins provided by the three formalisms (4-5 mm in the lung region and 4 mm in the abdomen region) are similar to the PTV margins used clinically. Furthermore, when analyzing individual patients, a difference of up to 1 mm was found between the PTV margin estimates from the MVHF and the VHF. The corresponding average discrepancies for the superior-inferior (SI) direction ranged from -0.19 mm to 0.38 mm in CK G3 version 6.2.3, from -0.36 mm to 0.33 mm in CK VSI version 8.5, and from -0.34 mm to 0.40 mm in CK VSI version 9.5. CONCLUSIONS: For the lower left lung, upper left lung, lower right lung, upper right lung, central liver, and upper liver, the effect of covariance between model and prediction errors in the SI direction was around 20%, 30%, 25%, 25%, 25%, and 30%, respectively. Notable covariance effects between model and prediction errors can be accounted for in PTV margin estimation using a modified VHF, which allows more precise target localization in robotic SBRT for moving tumors. Overall, in each of the three directions, the difference between the MVHF and the clinically utilized margins is 0.65 mm in the lung and abdominal regions.
Therefore, to improve the clinical PTV margins with the new approach, it is suggested to use adaptive PTV margins in subsequent fractions.
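The margin arithmetic behind the comparison above can be sketched as follows. The classic Van Herk recipe is M = 2.5*Sigma + 0.7*sigma, combining systematic (Sigma) and random (sigma) components; the modification described adds a covariance term between correlation-model and prediction errors via standard error propagation, sigma_mp^2 = sigma_m^2 + sigma_p^2 + 2*rho*sigma_m*sigma_p. The numbers and the exact composition of the error budget below are illustrative assumptions, not the paper's values.

```python
import math

def van_herk_margin(sigma_sys, sigma_rand):
    """Classic Van Herk PTV margin (mm): 2.5*Sigma + 0.7*sigma."""
    return 2.5 * sigma_sys + 0.7 * sigma_rand

def combined_random_error(sigma_model, sigma_pred, rho):
    """Random error with model/prediction covariance included."""
    var = sigma_model**2 + sigma_pred**2 + 2 * rho * sigma_model * sigma_pred
    return math.sqrt(var)

sigma_sys = 1.2                                     # assumed systematic (mm)
sigma_rand = combined_random_error(1.0, 0.8, 0.3)   # assumed rho = 0.3
margin = van_herk_margin(sigma_sys, sigma_rand)     # ~4 mm, lung-like
```

Setting rho = 0 recovers the plain quadrature sum of the original VHF, so the covariance term is exactly the difference the MVHF quantifies.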
Subject(s)
Neoplasms , Radiosurgery , Humans , Radiotherapy Planning, Computer-Assisted/methods , Feasibility Studies , Lung , Radiosurgery/methods
ABSTRACT
BACKGROUND: In external beam radiotherapy, a prediction model is required to compensate for the temporal system latency that affects the accuracy of radiation dose delivery. This study presents a thorough comparison of seven deep artificial neural networks in order to propose an accurate and reliable prediction model. METHODS: Seven deep predictor models are trained and tested with 800 breathing signals. A nonsequential-correlated hyperparameter optimization algorithm is developed to find the best configuration of parameters for all models. The root mean square error (RMSE), mean absolute error, normalized RMSE, and a statistical F-test are used to evaluate network performance. RESULTS: Overall, tuning the hyperparameters results in a 25%-30% improvement for all models compared to previous studies. The comparison across all models also shows that the gated recurrent unit (GRU), with RMSE = 0.108 ± 0.068 mm, predicts respiratory signals with higher accuracy and better performance. CONCLUSION: Overall, the tuned GRU model yields a better result than the hybrid predictor model used in the CyberKnife VSI system to compensate for its 115 ms system latency. Additionally, it is demonstrated that the tuned hyperparameters have a significant impact on the prediction accuracy of each model.
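The latency-compensation task above can be made concrete with a deliberately simple stand-in for the GRU: a linear autoregressive predictor fitted by least squares that forecasts the respiratory trace a few samples ahead, scored with the same RMSE metric. The sampling rate, AR order, and synthetic signal are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 26.0                                   # Hz, assumed sampling rate
horizon = 3                                 # ~115 ms ahead at 26 Hz
t = np.arange(0.0, 120.0, 1.0 / fs)
sig = 10 * np.sin(2 * np.pi * 0.25 * t) + rng.normal(0.0, 0.02, t.size)

order = 10
# Training pairs: a window of `order` past samples -> sample `horizon` ahead
X = np.array([sig[i:i + order] for i in range(sig.size - order - horizon)])
y = sig[order + horizon:]
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear AR predictor weights

pred = X @ w
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))  # sub-noise-floor is impossible
```

A GRU replaces the fixed linear map with a learned nonlinear recurrence, which is what buys accuracy on irregular breathing; the evaluation pipeline stays the same.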
Subject(s)
Algorithms , Neural Networks, Computer , Humans , Motion , Respiration
ABSTRACT
PURPOSE: Calculating an adequate target margin for real-time tumor tracking with the CyberKnife system is challenging, since different sources of error exist. In this study, clinical log data from the CyberKnife system were analyzed to quantify adequate planning target volume (PTV) margins for tumors located in the lung and abdomen regions. METHODS: In this study, 45 patients treated with the CyberKnife module were examined. Adequate PTV margins were estimated based on the Van Herk formulation and the uncertainty estimation method, considering the impact of errors and uncertainties. A statistical analysis was also performed to investigate the impact of errors and uncertainties on the estimated PTV margins. RESULTS: Our study identified five different sources of error (segmentation, deformation, correlation, prediction, and targeting errors) as the main sources of error in the CyberKnife system. Furthermore, the clinical evaluation of the current study reveals that the two formalisms provided almost identical PTV margin estimates. Additionally, margins of 4-5 mm and 5 mm on average could provide adequate PTV margins for lung and abdomen tumors, respectively, in all three directions. Overall, it was found that the impact of correlation and prediction errors on the PTV margins is very high, while the impact of robotic errors is low. CONCLUSIONS: The current study addresses two limitations of previous research, namely insufficient sample sites and small numbers of patients. A comparison of the present results for the lung and abdomen regions with other studies reveals that the proposed strategy could provide a better reference for selecting PTV margins. To our knowledge, this study is one of the first attempts to estimate the PTV margins in the lung and abdomen regions for a large cohort of patients treated with the CyberKnife system.
Subject(s)
Lung Neoplasms , Radiosurgery , Humans , Lung , Lung Neoplasms/surgery , Margins of Excision , Radiosurgery/methods , Radiotherapy Planning, Computer-Assisted/methods
ABSTRACT
How does the human brain encode visual object categories? Our understanding of this has advanced substantially with the development of multivariate decoding analyses. However, conventional electroencephalography (EEG) decoding predominantly uses the mean neural activation within the analysis window to extract category information. Such temporal averaging overlooks the within-trial neural variability that is suggested to provide an additional channel for encoding information about the complexity and uncertainty of the sensory input. The richness of these temporal variabilities, however, has not been systematically compared with the conventional mean activity. Here we compare the information content of 31 variability-sensitive features against the mean activity, using three independent, highly varied data sets. In whole-trial decoding, the classical event-related potential (ERP) components P2a and P2b provided information comparable to that provided by the original magnitude data (OMD) and wavelet coefficients (WC), the two most informative variability-sensitive features. In time-resolved decoding, the OMD and WC outperformed all the other features (including the mean), which were sensitive only to limited and specific aspects of temporal variability, such as phase or frequency. The information was more pronounced in the theta frequency band, previously suggested to support feedforward visual processing. We conclude that the brain might encode information simultaneously in multiple aspects of neural variability, such as phase, amplitude, and frequency, rather than in the mean per se. In our active categorization data set, we found that more effective decoding of the neural codes corresponds to better prediction of behavioral performance. Therefore, incorporating temporal variabilities in time-resolved decoding can provide additional category information and improved prediction of behavior.
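The central contrast above (mean-activity decoding versus decoding on the full within-trial time course, i.e. an OMD-style feature) can be demonstrated on simulated data. In this hedged sketch, two synthetic "EEG" categories differ only in temporal pattern, not in mean amplitude, so a nearest-centroid decoder fails on the mean feature but succeeds on the whole time course. All signals are simulated; no real EEG is involved.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_time = 100, 50
t = np.linspace(0.0, 1.0, n_time)
pat_a = np.sin(2 * np.pi * 5 * t)            # category A: 5 Hz pattern
pat_b = np.sin(2 * np.pi * 9 * t)            # category B: 9 Hz pattern
A = pat_a + rng.normal(0.0, 0.5, (n_trials, n_time))
B = pat_b + rng.normal(0.0, 0.5, (n_trials, n_time))

def nearest_centroid_accuracy(feat_a, feat_b):
    """Train centroids on the first half of trials, test on the rest."""
    half = feat_a.shape[0] // 2
    ca, cb = feat_a[:half].mean(0), feat_b[:half].mean(0)
    correct = 0
    for x in feat_a[half:]:
        correct += np.linalg.norm(x - ca) < np.linalg.norm(x - cb)
    for x in feat_b[half:]:
        correct += np.linalg.norm(x - cb) < np.linalg.norm(x - ca)
    return correct / (2 * (feat_a.shape[0] - half))

acc_mean = nearest_centroid_accuracy(A.mean(1, keepdims=True),
                                     B.mean(1, keepdims=True))
acc_full = nearest_centroid_accuracy(A, B)   # full time course wins
```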
Subject(s)
Electroencephalography , Visual Perception , Brain , Humans , Multivariate Analysis
ABSTRACT
Cytogenetic biodosimetry is a well-known method for quantifying the absorbed dose based on measuring biological radiation effects. To correlate the induced chromosomal aberrations with the absorbed dose of the individuals, a reliable dose-response calibration curve must be established. This study aimed to use the frequencies and distributions of radiation-induced dicentric chromosome aberrations to develop a standard dose-response calibration curve. Peripheral blood samples taken from six male donors and irradiated by an X-ray generator up to 4 Gy were studied. Three additional blood samples were irradiated with known doses and then scored blindly to verify the proposed calibration curve. Dose estimation was also carried out for three real overexposure cases. The results showed good agreement with other published curves. The constructed dose-response curve provides a reliable tool for biological dosimetry in accidental or occupational radiation exposures.
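Dicentric dose-response curves of the kind described above are conventionally modeled as linear-quadratic, Y = C + alpha*D + beta*D^2. The sketch below fits that form to fabricated (dose, dicentrics-per-cell) points for illustration; a real calibration would be fitted to scored metaphases, typically by Poisson maximum likelihood rather than the simple least squares used here.

```python
import numpy as np

# Fabricated calibration points: dose (Gy) vs dicentrics per cell
doses = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
yields = np.array([0.001, 0.010, 0.025, 0.08, 0.27, 0.56, 0.95])

# Least-squares linear-quadratic fit (polyfit returns highest degree first)
beta, alpha, c = np.polyfit(doses, yields, 2)

def dicentric_yield(d):
    """Expected dicentrics per cell at dose d (Gy) from the fitted curve."""
    return c + alpha * d + beta * d**2
```

Dose estimation for an overexposed case then amounts to inverting this curve at the observed yield.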
Subject(s)
Chromosome Aberrations , Radiometry , Calibration , Dose-Response Relationship, Radiation , Humans , Lymphocytes , Male , X-Rays
ABSTRACT
The diagnosis of cancer by modern computational tools at the very earliest stages is an important problem that has engaged many researchers. Skin cancer in particular has attracted a great deal of research because it affects many people. The purpose of this paper is to introduce an innovative method based on tissue frequency analysis to obtain an accurate, real-time evaluation of skin cancers. According to biological resonance theory, body cells have natural, unique frequencies arising from their biological fluctuations; if the cell's structure, profile, or status changes, its frequency varies accordingly. This theory is taken as the basis for analyzing skin tissue health in the proposed method. Ultrasound waves reflected from tissue were processed and studied using frequency analysis as a new method for early detection and diagnosis of the precise location and type of skin disease. The developed algorithm was evaluated on 400 patients from CRED; its ability to distinguish benign and malignant skin lesions was shown (AUC = 0.959) with clinically comparable precision; at the selected threshold, sensitivity and specificity were 93.8% and 97.3%, respectively. Therefore, this method can detect skin malignancy in an accurate, noninvasive, real-time procedure.
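The frequency-analysis idea above can be sketched minimally: if pathological tissue shifts the spectral content of the reflected echo, a simple discriminant is the dominant frequency of the echo's FFT magnitude spectrum compared against a threshold. The echo model, frequencies, and threshold below are synthetic placeholders, not the paper's clinical pipeline.

```python
import numpy as np

fs = 50e6                                    # 50 MHz sampling, assumed
t = np.arange(0.0, 2e-6, 1.0 / fs)

def echo(freq_hz, rng):
    """Toy reflected-echo model: exponentially damped tone plus noise."""
    return (np.exp(-t / 5e-7) * np.sin(2 * np.pi * freq_hz * t)
            + rng.normal(0.0, 0.05, t.size))

def dominant_frequency(signal):
    """Frequency (Hz) of the largest FFT magnitude bin."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, 1.0 / fs)
    return float(freqs[int(np.argmax(spec))])

rng = np.random.default_rng(3)
f_healthy = dominant_frequency(echo(5e6, rng))   # "healthy" echo
f_lesion = dominant_frequency(echo(7e6, rng))    # "lesion" echo
is_suspicious = f_lesion > 6e6                   # assumed decision threshold
```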
Subject(s)
Algorithms , Early Detection of Cancer/methods , Skin Neoplasms/diagnosis , Ultrasonography/methods , Adolescent , Adult , Aged , Humans , Middle Aged , ROC Curve , Skin Neoplasms/diagnostic imaging , Young Adult
ABSTRACT
BACKGROUND: Given the increasing recognition of the significance of non-motor symptoms in Parkinson's disease, we investigate the optimal use of machine learning methods for the prediction of the Montreal Cognitive Assessment (MoCA) score at year 4 from longitudinal data obtained at years 0 and 1. METHODS: We selected n = 184 PD subjects from the Parkinson's Progression Markers Initiative (PPMI) database (93 features). A range of robust predictor algorithms (with automated machine learning hyperparameter tuning) and feature subset selector algorithms (FSSAs) were selected. We utilized 65%, 5%, and 30% of patients in each arrangement for training, training validation, and final testing, respectively (10 randomized arrangements). For further testing, we enrolled 308 additional patients. RESULTS: First, we employed 10 predictor algorithms provided with all 93 features; an error of 1.83 ± 0.13 was obtained by LASSOLAR (Least Absolute Shrinkage and Selection Operator - Least Angle Regression). Subsequently, we used feature subset selection followed by predictor algorithms. GA (Genetic Algorithm) selected 18 features; LOLIMOT (Local Linear Model Trees) subsequently reached an error of 1.70 ± 0.10. DE (Differential Evolution) also selected 18 features and, coupled with Theil-Sen regression, arrived at a similar performance. NSGAII (Non-dominated Sorting Genetic Algorithm) yielded the best performance: it selected six vital features, which combined with LOLIMOT reached an error of 1.68 ± 0.12. Finally, using this last approach on independent test data, we reached an error of 1.65. CONCLUSION: By employing appropriate optimization tools (including automated hyperparameter tuning), it is possible to improve the prediction of cognitive outcome. Overall, we conclude that optimal utilization of FSSAs and predictor algorithms can produce very good predictions of cognitive outcome in PD patients.
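The two-stage pipeline above (a feature-subset selector wrapped around a predictor, scored on held-out error) can be sketched with deliberately simple stand-ins: greedy forward selection in place of GA/DE/NSGAII, and ordinary least squares in place of LOLIMOT or LASSOLAR. The synthetic data mimic the "93 features, few informative" setting; nothing here reproduces the PPMI analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 200, 93
X = rng.normal(size=(n, p))
true_w = np.zeros(p)
true_w[:6] = rng.normal(size=6)              # only 6 informative features
y = X @ true_w + rng.normal(0.0, 0.5, n)

train, test = slice(0, 140), slice(140, None)

def holdout_mae(cols):
    """Fit OLS on the training split over `cols`; return held-out MAE."""
    w, *_ = np.linalg.lstsq(X[train][:, cols], y[train], rcond=None)
    return float(np.mean(np.abs(X[test][:, cols] @ w - y[test])))

# Greedy forward selection of six features (stand-in for the FSSAs)
selected, remaining = [], list(range(p))
for _ in range(6):
    mae, j = min((holdout_mae(selected + [j]), j) for j in remaining)
    selected.append(j)
    remaining.remove(j)

best_mae = holdout_mae(selected)
mae_all = holdout_mae(list(range(p)))        # baseline: all 93 features
```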
Subject(s)
Machine Learning , Mental Status and Dementia Tests/statistics & numerical data , Parkinson Disease , Adult , Aged , Aged, 80 and over , Algorithms , Disease Progression , Female , Humans , Longitudinal Studies , Male , Middle Aged , Models, Statistical , Parkinson Disease/diagnosis , Parkinson Disease/epidemiology , Parkinson Disease/pathology
ABSTRACT
Speech emotion recognition is a challenging problem in enabling communication between humans and machines. The present study introduces a new model of speech emotion recognition based on the relationship between the human brain and mind. According to this relationship, the proposed model consists of two parts: the brain short-term memory (BSTM) and the mind long-term memory (MLTM). The input to the BSTM is emotional speech signals. This part then passes a copy of the information to the MLTM, because the brain needs to save information as knowledge in a bigger and safer place, similar to the human mind. The proposed model not only provides a computational model of speech emotion recognition based on the relationship between the BSTM and MLTM, but also illustrates a new relationship between brain and mind. The proposed model has been compared with different recognition models. To demonstrate the efficiency of the suggested model, the effect of noise at different noise rates on the input signals has been analyzed in the experimental section. Experimental results show that the proposed algorithm has a powerful capability to identify and explore human emotion even in noisy environments.
Subject(s)
Brain , Emotions , Machine Learning , Neural Networks, Computer , Pattern Recognition, Automated/methods , Speech Perception , Brain/physiology , Computer Simulation , Emotions/physiology , Humans , Learning/physiology , Memory, Long-Term/physiology , Memory, Short-Term/physiology , Models, Neurological , Models, Psychological , Pattern Recognition, Physiological/physiology , Speech Perception/physiology
ABSTRACT
Fingertip-type pulse oximeters are popular, but their inconvenience for long-term monitoring in daily life means that other types of wearable pulse oximeters, such as reflectance pulse oximeters, need to be developed. To develop reflectance pulse oximetry, we have analyzed light propagation in tissue to calculate and estimate the measured intensities of reflected light using the analytical and numerical solutions of the diffusion approximation equation. The reflectance of light from biological tissue is investigated from theoretical and experimental perspectives for light at visible and near-infrared wavelengths. To establish the model, the calculated curves were compared with the analytical solution (AS) of the diffusion approximation equation in biological tissue. The results validated that the diffusion approximation equation can model heterogeneous tissue and that the finite element method (FEM) offers simulation with higher efficiency and accuracy. Our aim has been to demonstrate the power of the FEM and the AS in modeling the steady-state diffusion approximation in a heterogeneous medium. Experimental data and a Monte Carlo model, as a gold standard, were also used to verify the effectiveness of these methods.
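The analytical solution referenced above has a standard closed form for the simplest geometry: for an isotropic point source in an infinite homogeneous medium, the steady-state diffuse fluence is phi(r) = exp(-mu_eff*r)/(4*pi*D*r), with D = 1/(3*(mu_a + mu_s')) and mu_eff = sqrt(3*mu_a*(mu_a + mu_s')). A minimal sketch, with typical near-infrared soft-tissue optical properties assumed for illustration:

```python
import math

def fluence(r_cm, mu_a, mu_s_prime):
    """Diffuse fluence at distance r (cm) from a unit isotropic point source.

    Steady-state diffusion approximation, infinite homogeneous medium.
    mu_a and mu_s_prime are in 1/cm.
    """
    d = 1.0 / (3.0 * (mu_a + mu_s_prime))                 # diffusion coeff.
    mu_eff = math.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))  # effective atten.
    return math.exp(-mu_eff * r_cm) / (4.0 * math.pi * d * r_cm)

mu_a, mu_s_prime = 0.1, 10.0        # assumed NIR tissue values (1/cm)
phi_near = fluence(0.5, mu_a, mu_s_prime)
phi_far = fluence(1.0, mu_a, mu_s_prime)   # decays with distance
```

An FEM solution of the same equation should reproduce this curve in the homogeneous case before being trusted on heterogeneous tissue.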
Subject(s)
Light , Oximetry/methods , Scattering, Radiation , Diffusion , Finite Element Analysis , Models, Biological , Monte Carlo Method
ABSTRACT
This article presents a new approach for estimating the depth, size, and metabolic heat generation rate of a tumour. For this purpose, the surface temperature distribution of a breast thermal image and a dynamic neural network were used. The research consisted of two steps: forward and inverse. For the forward section, a finite element model was created, and the Pennes bio-heat equation was solved to find the surface and depth temperature distributions. Data from the analysis were then used to train the dynamic neural network model (DNN). Results from the DNN training/testing confirmed those of the finite element model. For the inverse section, the trained neural network was applied to estimate the depth temperature distribution (tumour position) from the surface temperature profile extracted from the thermal image. Finally, the tumour parameters were obtained from the depth temperature distribution. Experimental findings (20 patients) were promising in terms of the model's potential for retrieving tumour parameters.
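The forward step above can be sketched in one dimension. The block solves a steady-state Pennes bio-heat balance, k*d2T/dx2 + w_b*c_b*(T_a - T) + q_met = 0, by finite differences on a tissue slab, with an extra metabolic source in a buried "tumour" region; the resulting warming footprint is the signal the inverse (neural network) step works back from. All tissue parameters and boundary conditions are textbook-style assumptions, not the paper's FE model.

```python
import numpy as np

n = 101
length = 0.03                    # 3 cm tissue slab (m)
dx = length / (n - 1)
k = 0.5                          # thermal conductivity, W/(m K), assumed
perf = 2000.0                    # lumped perfusion term w_b*rho_b*c_b, W/(m^3 K)
t_art = 37.0                     # arterial blood temperature (deg C)

def solve(q):
    """Steady-state Pennes FD solve with fixed surface/core temperatures."""
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = 1.0; b[0] = 33.0           # cooled skin surface (Dirichlet)
    A[-1, -1] = 1.0; b[-1] = t_art       # body core at depth
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = k / dx**2
        A[i, i] = -2.0 * k / dx**2 - perf
        b[i] = -q[i] - perf * t_art
    return np.linalg.solve(A, b)

q_base = np.full(n, 420.0)               # baseline metabolic heat (W/m^3)
q_tum = q_base.copy()
q_tum[40:60] = 4200.0                    # elevated metabolism in buried tumour
T_base, T_tum = solve(q_base), solve(q_tum)
delta = T_tum - T_base                   # warming footprint of the tumour
```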
ABSTRACT
BACKGROUND: Alpha-particle irradiation from radon progeny is one of the major natural sources of effective dose in the public population. Oncogenic transformation is one biological effect of radon progeny alpha-particle hits. The biological effects caused by exposure to radon are the result of a complex series of physical, chemical, biological, and physiological interactions. The cellular and molecular mechanisms of radon-induced carcinogenesis are not yet clear. METHODS: Various biological models, including cultured cells and animals, have been found useful for studying the carcinogenic effects of radon progeny alpha particles. In this paper, a Sugarscape cellular automaton is presented for the computational study of the complex biological effects of radon progeny alpha particles in lung bronchial airways. The model includes the mechanism of DNA damage induced by alpha-particle hits and the subsequent transformation of lung cells. Biomarkers are an objective measure or evaluation of normal or abnormal biological processes. In the model, the metabolic rate of cells struck by alpha-particle traversals is followed as a biomarker until oncogenic transformation is reached. RESULTS: The model results were successfully validated against in vitro oncogenic transformation data for C3H 10T1/2 cells. This model provides an opportunity to study the cellular and molecular changes at the various stages of radiation carcinogenesis involving human cells. CONCLUSION: Simulation can be used to investigate complex biomedical systems in situations where traditional methodologies are difficult or too costly to employ.
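A toy automaton in the spirit of the model above can be written in a few lines: each lattice cell carries a metabolic rate, an alpha-particle traversal elevates it, and a cell whose metabolism exceeds a threshold is counted as transformed. The hit probability, increment, and threshold are illustrative parameters, not values fitted to the C3H 10T1/2 data.

```python
import random

random.seed(5)
SIZE = 20                # lattice side length
HIT_PROB = 0.02          # per-cell, per-step chance of an alpha traversal
BOOST = 0.4              # metabolic-rate increase per hit
THRESHOLD = 1.0          # metabolism above which a cell is "transformed"

metabolism = [[0.0] * SIZE for _ in range(SIZE)]
transformed = set()

for step in range(100):
    for i in range(SIZE):
        for j in range(SIZE):
            if random.random() < HIT_PROB:       # stochastic alpha hit
                metabolism[i][j] += BOOST
                if metabolism[i][j] > THRESHOLD:
                    transformed.add((i, j))

fraction_transformed = len(transformed) / SIZE**2
```

A Sugarscape-style extension would additionally let cells move over and consume a resource landscape; the transformation bookkeeping stays the same.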
ABSTRACT
Features such as the size, shape, and volume of red blood cells are important factors in diagnosing related blood disorders such as iron deficiency and anemia. This paper proposes a method to detect abnormality in red blood cells using cell microscopic images. Adaptive local thresholding and bounding-box methods are used to extract the inner and outer diameters of red cells. An adaptive network-based fuzzy inference system (ANFIS) is used to classify blood samples as normal or abnormal. The accuracy of the proposed method and the area under the ROC curve are 96.6% and 0.995, respectively.
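The adaptive local thresholding step can be sketched with a common formulation: a pixel is foreground if it is darker than its local neighbourhood mean minus an offset. The window size, offset, and the toy "image" (a bright field with two dark blobs standing in for cells) are assumptions; the paper's exact parameters are not reproduced.

```python
import numpy as np

def local_mean(img, w):
    """Brute-force mean over a (2w+1)-square window, clipped at borders."""
    out = np.empty_like(img, dtype=float)
    n_rows, n_cols = img.shape
    for i in range(n_rows):
        for j in range(n_cols):
            patch = img[max(0, i - w):i + w + 1, max(0, j - w):j + w + 1]
            out[i, j] = patch.mean()
    return out

def adaptive_threshold(img, w=5, offset=10.0):
    """Mask of pixels darker than their local mean by at least `offset`."""
    return img < local_mean(img, w) - offset

img = np.full((40, 40), 200.0)                 # bright background
yy, xx = np.mgrid[0:40, 0:40]
img[(yy - 12)**2 + (xx - 12)**2 < 16] = 80.0   # dark "cell" 1
img[(yy - 28)**2 + (xx - 28)**2 < 16] = 80.0   # dark "cell" 2
mask = adaptive_threshold(img)                 # picks out both cells
```

In practice the windowed mean would be computed with an integral image rather than the brute-force loop shown here.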
Subject(s)
Diagnosis, Computer-Assisted , Erythrocytes, Abnormal/physiology , Fuzzy Logic , Microscopy , Neural Networks, Computer , Humans , Image Processing, Computer-Assisted/methods
ABSTRACT
BACKGROUND: Mammography is the primary imaging technique for the detection and diagnosis of breast cancer; however, the contrast of a mammogram image is often poor, especially for dense and glandular tissues. In these cases the radiologist may miss some diagnostically important microcalcifications. To improve diagnostic accuracy, image enhancement technology is often used to enhance the image and help radiologists. METHODS: This paper presents a comparative study of digital mammography image enhancement based on four different algorithms: wavelet-based enhancement (asymmetric Daubechies of order 8), Contrast-Limited Adaptive Histogram Equalization (CLAHE), morphological operators, and unsharp masking. These algorithms were tested on 114 clinical digital mammography images. All the proposed image enhancement techniques were compared to find the best technique for enhancing mammogram images for microcalcification detection. RESULTS: The performance of the image enhancement algorithms was evaluated using the Contrast Improvement Index (CII) and the profile intensity surface area distribution curve after each enhancement. The results of this study show that the average CII is about 2.61 for the wavelet method, versus about 2.047, 1.63, and 1.315 for CLAHE, unsharp masking, and the morphological operators, respectively. CONCLUSION: Experimental results strongly suggest that the wavelet transformation can be more effective and can significantly improve overall detection by a Computer-Aided Diagnosis (CAD) system, especially for dense breasts. Compared to other studies, our method achieved a higher CII.
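The CII used above is commonly defined as CII = C_processed / C_original with C = (f - b)/(f + b), where f and b are the mean intensities of a region of interest (e.g. a microcalcification) and its background. A hedged sketch on a toy image with a naive gain-based "enhancement"; the paper's wavelet, CLAHE, and unsharp-mask pipelines are not reproduced here.

```python
import numpy as np

def contrast(img, roi_mask):
    """Michelson-style contrast C = (f - b) / (f + b) for a given ROI."""
    f = img[roi_mask].mean()
    b = img[~roi_mask].mean()
    return (f - b) / (f + b)

rng = np.random.default_rng(6)
img = rng.normal(100.0, 2.0, (32, 32))    # noisy background
roi = np.zeros((32, 32), dtype=bool)
roi[14:18, 14:18] = True
img[roi] += 8.0                           # faint bright spot (the "lesion")

# Naive enhancement: stretch deviations from the global mean by 2.5x
enhanced = img.mean() + 2.5 * (img - img.mean())

cii = contrast(enhanced, roi) / contrast(img, roi)   # > 1 means improvement
```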
ABSTRACT
PURPOSE: We have developed an algorithm for real-time detection and complete correction of patient motion effects during single photon emission computed tomography. The algorithm is based on a linear prediction filter (LPC). MATERIALS AND METHODS: The new prediction of projection data algorithm (PPDA) detects most motions (such as those of the head, legs, and hands) by comparing the predicted and measured frame data. When the data acquisition for a specific frame is completed, the accuracy of the acquired data is evaluated by the PPDA. If patient motion is detected, the scanning procedure is stopped. After the patient returns to his or her true position, data acquisition is repeated only for the corrupted frame and the scanning procedure continues. RESULTS: Various experimental data were used to validate the motion detection algorithm; on the whole, the proposed method was tested on approximately 100 test cases. The PPDA shows promising results. CONCLUSION: Using the PPDA enables us to prevent the scanner from collecting disturbed data during the scan and to replace them with motion-free data by rescanning the corrupted frames in real time. As a result, the effects of patient motion are corrected in real time.
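The PPDA idea can be sketched with the simplest possible linear predictor: extrapolate each new projection frame from the preceding ones and flag motion when the measured frame deviates from the prediction by more than a threshold. The frame statistics, two-point predictor, and threshold below are illustrative assumptions; real SPECT projections and a higher-order LPC filter would be used in practice.

```python
import numpy as np

rng = np.random.default_rng(7)
n_frames, n_bins = 64, 32
angles = np.linspace(0.0, np.pi, n_frames)
# Smoothly varying projection profiles over the orbit (motion-free)
frames = np.array([1000 + 200 * np.sin(a + np.linspace(0, 2, n_bins))
                   for a in angles])
frames += rng.normal(0.0, 2.0, frames.shape)
frames[50:] += 80.0                          # abrupt shift: "motion" at frame 50

def predicted_frame(k):
    """Two-point linear prediction of frame k from frames k-1 and k-2."""
    return 2 * frames[k - 1] - frames[k - 2]

residuals = [float(np.sqrt(np.mean((frames[k] - predicted_frame(k))**2)))
             for k in range(2, n_frames)]
threshold = 20.0                             # assumed residual threshold
motion_frames = [k for k, r in zip(range(2, n_frames), residuals)
                 if r > threshold]           # frames to stop and re-acquire
```

In the PPDA workflow, a flagged frame would halt acquisition and be rescanned once the patient is repositioned, so the reconstruction only ever sees frames that pass the residual test.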