2.
Rev. Círc. Argent. Odontol ; 80(231): 6-13, jul. 2022. ilus, tab, graf
Article in Spanish | LILACS | ID: biblio-1391619

ABSTRACT

The objective of this study was to determine the reliability of the Hellbot Apolo® additive 3D printer (i3D), based on Matrix Digital Light Processing (MDLP), by verifying the dimensional congruence between printed model meshes (MMi) and their corresponding digital source file (MMo) obtained from the Orchestrate 3D® (O3D) orthodontic planning software. To establish its suitability for dentistry and its clinical possibilities, it was compared among five additive-manufacturing i3D: two DLP, two stereolithography (SLA) and one fused deposition modeling (FDM). The choice of the five i3D was based on their market value, trying to cover the widest range available in Argentina. Twenty models were printed with each i3D and scanned with a Carestream 3600® intraoral scanner (IOS) (Cs3600). The 120 MMi were imported into the reverse-engineering program Geomagic® Control X® (Cx) for 3D analysis, consisting of the superimposition of the MMo with each MMi. A qualitative evaluation of the deviation between MMi and MMo was then performed, followed by a careful statistical analysis yielding 3D and 2D comparisons. The metrological matches in the three-dimensional superimposition allowed a comprehensive and easily interpretable analysis through colorimetric maps. In the two-dimensional analysis, dentally referenced planes were defined from the MMo so that measurements could be matched from the same dental starting point. The results were satisfactory and very encouraging: the probability of obtaining variability within +/- 50 µm was 40.35%, and within +/- 100 µm, 71.04%. Therefore, given the clinical demands of dimensional congruence, precision and accuracy to which the dental profession is subjected, clinical problems carried over from dimensional errors in manufacturing (CAM) are avoided. (AU)


Subject(s)
Models, Dental , Printing, Three-Dimensional , Stereolithography , Orthodontics/methods , In Vitro Techniques , Algorithms , Software , Image Interpretation, Computer-Assisted/methods , Data Interpretation, Statistical , Evaluation Studies as Topic
3.
Arch. argent. pediatr ; 120(3): 209-216, junio 2022. tab, ilus
Article in Spanish | LILACS, BINACIS | ID: biblio-1368241

ABSTRACT

The larynx sits at the aerodigestive crossroads; any pathology that involves it will affect breathing, swallowing and/or the voice. It is divided into three regions: the supraglottis (comprising the epiglottis, the ventricular bands and the laryngeal ventricles), the glottis (the space bounded by the vocal cords) and the subglottis (the narrowest area of the pediatric airway and the only point of the larynx completely surrounded by cartilage: the cricoid ring). Laryngeal obstruction can present as a potentially fatal acute condition or as a chronic process. The main symptom is inspiratory or biphasic stridor. The etiology varies widely with age and may be congenital, inflammatory, infectious, traumatic, neoplastic or iatrogenic in origin. We describe the pathologies that most frequently cause laryngeal obstruction or that are important because of their severity, the symptoms that guide the presumptive diagnosis, the complementary studies and the treatment.


Subject(s)
Humans , Child , Pediatrics , Laryngeal Diseases/diagnosis , Laryngeal Diseases/etiology , Airway Obstruction/etiology , Larynx/pathology , Algorithms , Laryngeal Diseases/therapy
4.
Medwave ; 22(3): e002100, 29-04-2022.
Article in English, Spanish | LILACS | ID: biblio-1368124

ABSTRACT

INTRODUCTION: Bogotá has a medical emergency system of public and private ambulances that respond to health incidents. However, it is not known whether the quantity, type and location of the resources deployed are sufficient. OBJECTIVES: Based on data from the medical emergency system of Bogotá, Colombia, we first sought to characterize the prehospital response to cardiac arrest and then, with a model, to determine the minimum number of resources needed to respond within eight minutes, taking into account their location, number and type. METHODS: A database of incidents reported in the administrative records of the Bogotá district health authority (2014 to 2017) was obtained. Based on this information, a hybrid model combining discrete-event simulation and genetic algorithms was designed to establish the quantity, type and geographic location of resources according to the frequency and typology of the events. RESULTS: Bogotá recorded 938,671 ambulance dispatches in the period: 47.4% high priority, 18.9% medium and 33.74% low. Of these, 92% corresponded to 15 of the 43 medical emergency codes. The recorded response times were longer than expected, especially for out-of-hospital cardiac arrest (median 19 minutes). In the proposed model, the best scenario required at least 281 ambulances, medicalized and basic in a 3:1 ratio, to respond within adequate times. CONCLUSIONS: The results suggest the need to increase the resources that respond to these incidents in order to bring response times closer to the needs of the population.


Subject(s)
Humans , Emergency Medical Services , Time Factors , Algorithms , Ambulances , Colombia
5.
Rev. Hosp. Ital. B. Aires (2004) ; 42(1): 12-20, mar. 2022. graf, ilus, tab
Article in Spanish | LILACS, BINACIS, UNISALUD | ID: biblio-1368801

ABSTRACT

Introduction: determining the cause of death of patients hospitalized with cardiovascular disease is of the utmost importance in order to take measures that improve the quality of care and prevent avoidable deaths. Objectives: to determine the main causes of death during hospitalization for cardiovascular disease, and to develop and validate a natural language processing algorithm that automatically classifies deceased patients according to their reason for hospitalization. Design: retrospective exploratory study with development of a natural language processing classification algorithm. Results: of the 6161 patients in our sample who died during hospitalization, 21.3% (1316) had been admitted for cardiovascular causes; cerebrovascular disease accounted for 30.7%, heart failure for 24.9% and ischemic heart disease for 14%. The algorithm for classifying cardiovascular vs. non-cardiovascular reason for admission reached an accuracy of 0.9546 (95% CI: 0.9351 to 0.9696), and the algorithm for the specific cardiovascular cause of admission reached an overall accuracy of 0.9407 (95% CI: 0.8866 to 0.9741). Conclusions: cardiovascular disease accounts for 21.3% of the reasons for hospitalization among patients who die during their hospital stay. The classification algorithms generally performed well, particularly the cardiovascular vs. non-cardiovascular admission classifier and the specific cardiovascular admission cause classifier. (AU)


Subject(s)
Humans , Artificial Intelligence/statistics & numerical data , Cerebrovascular Disorders/mortality , Myocardial Ischemia/mortality , Heart Failure/mortality , Hospitalization , Quality of Health Care , Algorithms , Reproducibility of Results , Factor Analysis, Statistical , Mortality , Cause of Death , Electronic Health Records
6.
Article in English | WPRIM | ID: wpr-928940

ABSTRACT

Computational medicine is an emerging discipline that uses computer models and complex software to simulate the development and treatment of diseases. Advances in computer hardware and software, especially the development of algorithms and graphics processing units (GPUs), have led to the broader application of computers in the medical field. Computer vision based on mathematical biological modelling will revolutionize clinical research and diagnosis and promote the innovative development of Chinese medicine; some biological models have already begun to play a practical role in various types of research. This paper introduces the concepts and characteristics of computational medicine and then reviews the developmental history of the field, including the Digital Human in Chinese medicine. Additionally, this study introduces research progress in computational medicine around the world, lists some specific clinical applications, discusses the key problems and limitations of the research, development and application of computational medicine, and finally looks forward to its developmental prospects, especially in the field of computational Chinese medicine.


Subject(s)
Algorithms , Computer Simulation , Humans
7.
Article in Chinese | WPRIM | ID: wpr-928910

ABSTRACT

With market development and changing demand, the use of adaptive algorithms in medical devices is becoming a likely trend. However, uncertainties inherent in adaptive algorithms pose challenges to the existing supervisory model. This article focuses on the approaches of US agencies to the supervision of artificial intelligence devices and discusses the problems that existing pilot policies may encounter when applied to devices with adaptive algorithms. On this basis, we offer relevant suggestions and hope to discuss them further with other scholars.


Subject(s)
Algorithms , Artificial Intelligence
8.
Article in Chinese | WPRIM | ID: wpr-928898

ABSTRACT

To solve the problem of real-time detection and removal of noise in EEG signals during anesthesia depth monitoring, we proposed an adaptive EEG noise detection and removal method. This method uses the discrete wavelet transform to extract the low-frequency and high-frequency energy of a segment of EEG signal and sets two sets of thresholds for the low-frequency and high-frequency bands. These thresholds are updated adaptively according to the energy of the most recent EEG segments. Finally, the level of signal interference is judged from the range of the low-frequency and high-frequency energy, and the corresponding denoising is performed. The results show that the method can detect and remove noise interference in the EEG signal more accurately and improves the stability of the calculated characteristic parameters.
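For illustration, the thresholding idea described above can be sketched in a few lines of Python; the wavelet family, decomposition level, detection rule and update rate below are assumptions for the sketch, not the parameters used in the study.

```python
# Minimal sketch of adaptive DWT band-energy thresholding for EEG noise detection.
# Assumptions: db4 wavelet, 4-level decomposition, multiplicative detection threshold k,
# and an exponential update of the thresholds from segments judged to be clean.
import numpy as np
import pywt

def band_energies(segment, wavelet="db4", level=4):
    """Return (low_freq_energy, high_freq_energy) of one EEG segment from its DWT."""
    coeffs = pywt.wavedec(segment, wavelet, level=level)
    low = np.sum(coeffs[0] ** 2)                       # approximation = low-frequency band
    high = sum(np.sum(c ** 2) for c in coeffs[1:])     # details = high-frequency bands
    return low, high

def detect_noise(segments, k=3.0, alpha=0.1):
    """Flag segments whose band energies exceed adaptively updated thresholds."""
    flags, low_thr, high_thr = [], None, None
    for seg in segments:
        low, high = band_energies(seg)
        if low_thr is None:                            # initialize from the first segment
            low_thr, high_thr = low, high
        noisy = (low > k * low_thr) or (high > k * high_thr)
        flags.append(noisy)
        if not noisy:                                  # update thresholds only from clean data
            low_thr = (1 - alpha) * low_thr + alpha * low
            high_thr = (1 - alpha) * high_thr + alpha * high
    return flags
```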


Subject(s)
Algorithms , Electroencephalography , Signal Processing, Computer-Assisted , Signal-To-Noise Ratio , Wavelet Analysis
9.
Article in Chinese | WPRIM | ID: wpr-928897

ABSTRACT

Premature delivery is one of the direct factors that affect the early development and safety of infants. Its direct clinical manifestation is a change in the intensity and frequency of uterine contractions. The uterine electrohysterography (EHG) signal collected from the abdomen of pregnant women accurately and effectively reflects uterine contractions and has higher clinical value than invasive monitoring technologies such as intrauterine pressure catheters. Research on EHG-based fetal preterm birth recognition algorithms is therefore particularly important for perinatal fetal monitoring. We proposed an EHG-based fetal preterm birth recognition algorithm using a convolutional neural network (CNN): a deep CNN model was constructed by combining the Gramian angular difference field (GADF) with transfer learning. The model structure was optimized on the clinically measured term-preterm EHG database, achieving a classification accuracy of 94.38% and an F1 value of 97.11%. The experimental results showed that the model constructed in this paper has a certain auxiliary diagnostic value for the clinical prediction of premature delivery.
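The Gramian angular difference field step can be illustrated with a short NumPy sketch that follows the standard GADF definition; the segment length and any subsequent resizing for the CNN input are assumptions here.

```python
# Minimal sketch: encode a 1-D EHG segment as a Gramian angular difference field (GADF).
import numpy as np

def gadf(signal):
    """Standard GADF: rescale to [-1, 1], map to polar angles, take pairwise sine differences."""
    x = np.asarray(signal, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1    # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))             # polar-coordinate angle
    return np.sin(phi[:, None] - phi[None, :])         # G[i, j] = sin(phi_i - phi_j)

image = gadf(np.random.randn(128))                     # a 128 x 128 image to feed a CNN
print(image.shape)
```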


Subject(s)
Algorithms , Electromyography , Female , Humans , Infant, Newborn , Neural Networks, Computer , Pregnancy , Premature Birth/diagnosis , Uterine Contraction
10.
Article in Chinese | WPRIM | ID: wpr-928892

ABSTRACT

Objective: The study aims to investigate the effects of different adaptive statistical iterative reconstruction-V (ASiR-V) levels and convolution kernel parameters on the stability of deep learning-based CT auto-segmentation. Methods: Twenty patients who had received pelvic radiotherapy were selected, and CT image datasets were reconstructed with different parameters. Structures including three soft-tissue organs (bladder, bowel bag, small intestine) and five bone structures (left and right femoral heads, left and right femurs, pelvis) were then segmented automatically by a deep learning neural network. Performance was evaluated by the Dice similarity coefficient (DSC) and the Hausdorff distance, using filtered back projection (FBP) as the reference. Results: Deep learning auto-segmentation is strongly affected by ASiR-V but less affected by the convolution kernel, especially in soft tissues. Conclusion: The stability of auto-segmentation is affected by the choice of reconstruction parameters. In practice, a balance must be found between image quality and segmentation quality, or the segmentation network must be improved to enhance the stability of auto-segmentation.
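The two evaluation metrics named above are standard; a minimal sketch of how they are typically computed on binary masks follows (illustrative only, not the evaluation code used in the study).

```python
# Minimal sketch of the Dice similarity coefficient (DSC) and a symmetric Hausdorff distance.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """DSC between two binary masks of the same shape."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the voxel coordinates of two masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

auto = np.zeros((64, 64, 64), dtype=bool); auto[20:40, 20:40, 20:40] = True   # placeholder masks
ref = np.zeros_like(auto); ref[22:42, 20:40, 20:40] = True
print(dice(auto, ref), hausdorff(auto, ref))
```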


Subject(s)
Algorithms , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer , Radiation Dosage , Tomography, X-Ray Computed
11.
Article in Chinese | WPRIM | ID: wpr-928879

ABSTRACT

Body temperature is an essential physiological parameter, and non-contact, fast and accurate temperature measurement has become increasingly important against the background of COVID-19. This study presents an infrared temperature measurement system based on the thermopile infrared temperature sensor ZTP-135SR. The raw temperature data of the sensor are extracted and then amplified and filtered to ensure the accuracy of the system. In addition, environmental compensation data obtained by polynomial fitting are added to further improve measurement accuracy.
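A minimal sketch of the polynomial environmental-compensation step is given below; the calibration points, reference value and polynomial order are hypothetical and only illustrate the fitting idea, not the system's actual calibration.

```python
# Minimal sketch of ambient-temperature compensation by polynomial fitting (hypothetical data).
import numpy as np

ambient   = np.array([18.0, 21.0, 24.0, 27.0, 30.0])   # ambient temperature during calibration
raw       = np.array([35.1, 35.4, 35.8, 36.2, 36.6])   # uncompensated sensor readings
reference = np.full(5, 36.5)                            # reference contact-thermometer value

# Fit a 2nd-order polynomial mapping ambient temperature to the required correction.
coeffs = np.polyfit(ambient, reference - raw, deg=2)

def compensate(raw_reading, ambient_temp):
    """Add the fitted ambient-dependent correction to a raw sensor reading."""
    return raw_reading + np.polyval(coeffs, ambient_temp)

print(round(compensate(35.6, 22.5), 2))
```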


Subject(s)
Algorithms , Body Temperature , COVID-19 , Humans , Temperature , Thermometers
12.
Article in Chinese | WPRIM | ID: wpr-928871

ABSTRACT

Clinical applications of cone-beam breast CT (CBBCT) are hindered by its relatively high radiation dose and long scan time. This study proposes sparse-view CBBCT, i.e. scanning with a small number of projections, to overcome these bottlenecks. A deep learning method, a conditional generative adversarial network constrained by image edges (ECGAN), is proposed to suppress artifacts in sparse-view CBBCT images reconstructed by filtered backprojection (FBP). The discriminator of the ECGAN combines patchGAN and LSGAN to preserve high-frequency information, with a modified U-net as the generator. To further preserve subtle structures and microcalcifications, which are particularly important for breast cancer screening and diagnosis, edge images of the CBBCT are added to both the generator and the discriminator to guide the learning. The proposed algorithm has been evaluated on 20 clinical raw CBBCT datasets. ECGAN substantially improves the image quality of sparse-view CBBCT, with a performance superior to those of total variation (TV) based iterative reconstruction and FBPConvNet-based post-processing. On one CBBCT case with the projection number reduced from 300 to 100, ECGAN improves the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the FBP reconstruction from 24.26 and 0.812 to 37.78 and 0.963, respectively. These results indicate that ECGAN successfully reduces the radiation dose and scan time of CBBCT to one-third with only small image degradation.
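The reported PSNR and SSIM values correspond to the standard image-quality metrics; a minimal sketch of how such a comparison is typically computed (with placeholder images, not the study's data) is shown below.

```python
# Minimal sketch of PSNR/SSIM evaluation of a sparse-view reconstruction against a reference.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference, reconstruction):
    """Return (PSNR, SSIM) of a reconstruction relative to the full-view reference slice."""
    rng = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=rng)
    ssim = structural_similarity(reference, reconstruction, data_range=rng)
    return psnr, ssim

ref = np.random.rand(256, 256)                          # placeholder slices
rec = ref + 0.05 * np.random.randn(256, 256)
print(evaluate(ref, rec))
```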


Subject(s)
Algorithms , Breast , Cone-Beam Computed Tomography , Humans , Image Processing, Computer-Assisted , Phantoms, Imaging , Tomography, X-Ray Computed
13.
Article in Chinese | WPRIM | ID: wpr-928239

ABSTRACT

Brain-computer interface (BCI) systems based on steady-state visual evoked potential (SSVEP) have become one of the major paradigms in BCI research due to their high signal-to-noise ratio and the short training time required of users. Fast and accurate decoding of SSVEP features is a crucial step in SSVEP-BCI research. However, current research lacks a systematic overview of SSVEP decoding algorithms and an analysis of the connections and differences between them, so it is difficult for researchers to choose the optimal algorithm in different situations. To address this problem, this paper reviews the progress of SSVEP decoding algorithms in recent years and divides them into two categories, trained and non-trained, based on whether training data are needed. This paper also explains the fundamental theory and scope of application of decoding algorithms such as canonical correlation analysis (CCA), task-related component analysis (TRCA) and their extensions, summarizes the commonly used processing strategies for decoding algorithms, and finally discusses the challenges and opportunities in this field.
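As an example of the non-trained family mentioned above, standard CCA-based frequency recognition can be sketched as follows; the sampling rate, number of harmonics and candidate frequencies are assumptions for the sketch.

```python
# Minimal sketch of standard CCA-based SSVEP frequency recognition (a non-trained decoder).
import numpy as np
from sklearn.cross_decomposition import CCA

def reference_signals(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference set at the stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)

def recognize(eeg, freqs, fs=250):
    """Pick the candidate frequency whose references correlate most with the EEG epoch."""
    scores = []
    for f in freqs:
        refs = reference_signals(f, fs, eeg.shape[0])
        u, v = CCA(n_components=1).fit_transform(eeg, refs)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return freqs[int(np.argmax(scores))]

epoch = np.random.randn(500, 8)                         # placeholder: 2 s of 8-channel EEG at 250 Hz
print(recognize(epoch, freqs=[8.0, 10.0, 12.0, 15.0]))
```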


Subject(s)
Algorithms , Brain-Computer Interfaces , Electroencephalography , Evoked Potentials, Visual , Photic Stimulation
14.
Article in Chinese | WPRIM | ID: wpr-928228

ABSTRACT

Early screening based on pulmonary nodule detection in computed tomography (CT) is an important means of reducing lung cancer mortality, and in recent years three-dimensional convolutional neural networks (3D CNN) have achieved success and continuous development in lung nodule detection. We proposed a pulmonary nodule detection algorithm using a 3D CNN based on a multi-scale attention mechanism. To account for the varying sizes and shapes of lung nodules, we designed a multi-scale feature extraction module to extract features at the corresponding scales. Through the attention module, correlations between features were mined from both spatial and channel perspectives to strengthen the features. The extracted features then entered a pyramid-like fusion mechanism so that they contained both deep semantic information and shallow location information, which is more conducive to target localization and bounding-box regression. On the representative LUNA16 dataset, this method significantly improved detection sensitivity compared with other advanced methods and can provide a theoretical reference for clinical medicine.
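The multi-scale feature-extraction idea can be illustrated with a small PyTorch module; the branch kernel sizes, channel split and the squeeze-and-excitation style channel attention below are assumptions, not the architecture used in the paper.

```python
# Minimal sketch of a multi-scale 3D feature-extraction block with channel attention (PyTorch).
import torch
import torch.nn as nn

class MultiScaleBlock3D(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch = out_ch // 3
        self.b1 = nn.Conv3d(in_ch, branch, kernel_size=1)                          # fine scale
        self.b3 = nn.Conv3d(in_ch, branch, kernel_size=3, padding=1)               # medium scale
        self.b5 = nn.Conv3d(in_ch, out_ch - 2 * branch, kernel_size=5, padding=2)  # coarse scale
        self.attn = nn.Sequential(                                                 # channel attention
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(out_ch, out_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch // 4, out_ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        feats = torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1)   # multi-scale features
        return feats * self.attn(feats)                                  # reweight channels

block = MultiScaleBlock3D(1, 24)
print(block(torch.randn(1, 1, 32, 32, 32)).shape)       # torch.Size([1, 24, 32, 32, 32])
```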


Subject(s)
Algorithms , Humans , Lung Neoplasms/diagnostic imaging , Neural Networks, Computer , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods
15.
Article in Chinese | WPRIM | ID: wpr-928226

ABSTRACT

Electrocardiogram (ECG) can visually reflect the physiological electrical activity of the human heart, which is important in the field of arrhythmia detection and classification. To address the negative effect of label imbalance in ECG data on arrhythmia classification, this paper proposes a nested long short-term memory network (NLSTM) model for unbalanced ECG signal classification. The NLSTM is built to learn and memorize the temporal characteristics of complex signals, and the focal loss function is used to reduce the weights of easily identifiable samples. A residual attention mechanism is then used to modify the assigned weights according to the importance of the sample characteristics, addressing the sample imbalance problem. The synthetic minority oversampling technique is also applied to the Massachusetts Institute of Technology-Beth Israel Hospital arrhythmia (MIT-BIH-AR) database as a simple manual oversampling step to further increase the classification accuracy of the model. Finally, the MIT-BIH arrhythmia database is used to verify the above algorithms experimentally. The experimental results show that the proposed method effectively addresses imbalanced samples and unremarkable features in ECG signals, with an overall model accuracy of 98.34%. It also significantly improves the recognition and classification of minority-class samples, providing a feasible new method for ECG-assisted diagnosis with practical application value.
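The focal loss mentioned above is a standard technique; a minimal NumPy sketch of its binary form follows, with the usual default gamma and alpha assumed rather than taken from the paper.

```python
# Minimal sketch of the binary focal loss used to down-weight easily classified heartbeats.
import numpy as np

def focal_loss(probs, labels, gamma=2.0, alpha=0.25, eps=1e-7):
    """(1 - p_t)^gamma shrinks the loss of easy samples; alpha rebalances the two classes."""
    probs = np.clip(probs, eps, 1 - eps)
    p_t = np.where(labels == 1, probs, 1 - probs)        # probability of the true class
    alpha_t = np.where(labels == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))

print(focal_loss(np.array([0.9, 0.6, 0.2]), np.array([1, 1, 0])))
```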


Subject(s)
Algorithms , Arrhythmias, Cardiac/diagnosis , Electrocardiography , Humans , Memory, Short-Term , Neural Networks, Computer , Signal Processing, Computer-Assisted
16.
Article in Chinese | WPRIM | ID: wpr-928225

ABSTRACT

In recent years, epileptic seizure detection based on electroencephalogram (EEG) has attracted widespread academic attention. However, seizure data are difficult to collect, and overfitting easily occurs when training data are scarce. To address this problem, this paper took the CHB-MIT epilepsy EEG dataset from Boston Children's Hospital as the research object and applied wavelet transform for data augmentation by setting different wavelet transform scale factors. In addition, by combining deep learning, ensemble learning, transfer learning and other methods, an epilepsy detection method with high accuracy for specific epilepsy patients was proposed under the condition of insufficient training samples. In the tests, wavelet transform scale factors of 2, 4 and 8 were compared experimentally. When the wavelet scale factor was 8, the average accuracy, sensitivity and specificity were 95.47%, 93.89% and 96.48%, respectively. Comparative experiments with recent related literature verified the advantages of the proposed method. Our results may provide a reference for the clinical application of epilepsy detection.
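The augmentation step can be illustrated with a short PyWavelets sketch that produces one continuous-wavelet view of each EEG segment per scale factor named above (2, 4 and 8); the mother wavelet and segment length are assumptions.

```python
# Minimal sketch of wavelet-based augmentation of EEG segments at scale factors 2, 4 and 8.
import numpy as np
import pywt

def augment(segment, scales=(2, 4, 8), wavelet="morl"):
    """Return one CWT-derived view of the segment per scale factor."""
    coefs, _ = pywt.cwt(segment, scales=list(scales), wavelet=wavelet)
    return [coefs[i] for i in range(len(scales))]        # each view keeps the original length

views = augment(np.random.randn(1024))                   # placeholder 1-D EEG segment
print(len(views), views[0].shape)                        # 3 (1024,)
```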


Subject(s)
Algorithms , Child , Deep Learning , Electroencephalography , Epilepsy/diagnosis , Humans , Seizures/diagnosis , Signal Processing, Computer-Assisted , Wavelet Analysis
17.
Article in Chinese | WPRIM | ID: wpr-928224

ABSTRACT

The diagnosis of hypertrophic cardiomyopathy (HCM) is of great significance for the early risk classification of sudden cardiac death and the screening of familial genetic diseases. This research proposed an automatic HCM detection method based on a convolutional neural network (CNN) model, using single-lead electrocardiogram (ECG) signals as the research object. First, the R-wave peak locations of the single-lead ECG signal were determined, the signal was segmented and resampled in units of heartbeats, and a CNN model was then built to automatically extract deep features from the ECG signal and perform automatic classification and HCM detection. The experimental data were derived from 108 ECG records extracted from three public databases provided by PhysioNet; the database established in this research consists of 14,459 heartbeats, each containing 128 sampling points. The results showed that the optimized CNN model could effectively detect HCM, with accuracy, sensitivity and specificity of 95.98%, 98.03% and 95.79%, respectively. In this research, a deep learning method was introduced for the analysis of single-lead ECGs of HCM patients, which not only overcomes the technical limitations of conventional detection methods based on multi-lead ECG but also has important application value in assisting doctors with fast and convenient large-scale preliminary HCM screening.
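The pre-processing described (R-peak location, beat-wise segmentation, resampling to 128 points) can be sketched as follows; the peak-detection settings and window length are assumptions for the sketch.

```python
# Minimal sketch of heartbeat segmentation: find R peaks, cut a window per beat, resample to 128 points.
import numpy as np
from scipy.signal import find_peaks, resample

def segment_beats(ecg, fs=250, beat_len=128):
    """Cut a fixed window around each detected R peak and resample it to beat_len samples."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=0.5)
    half = int(0.3 * fs)                                 # 300 ms on each side of the R peak
    beats = [resample(ecg[p - half:p + half], beat_len)
             for p in peaks if half <= p < len(ecg) - half]
    return np.array(beats)

beats = segment_beats(np.random.randn(2500))             # placeholder signal: 10 s at 250 Hz
print(beats.shape)                                       # (n_beats, 128)
```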


Subject(s)
Algorithms , Cardiomyopathy, Hypertrophic/diagnosis , Databases, Factual , Electrocardiography , Heart Rate , Humans , Neural Networks, Computer
18.
Article in Chinese | WPRIM | ID: wpr-928221

ABSTRACT

Research shows that personality assessment can be achieved by regression models based on electroencephalogram (EEG). Most existing studies use event-related potentials or power spectral density for personality assessment, which can only represent the information of a single brain region, yet some research shows that human cognition depends more on the interaction between brain regions. In addition, because the distribution of EEG features differs across subjects, a trained regression model cannot produce accurate results for cross-subject personality assessment. To address this problem, this research proposes a personality assessment method based on EEG functional connectivity and domain adaptation. EEG data were collected from 45 normal subjects viewing different emotional pictures (positive, negative and neutral). First, the coherence of 59 channels in 5 frequency bands was taken as the original feature set. Then, feature-based domain adaptation was used to map the features to a new feature space, reducing the distribution difference between the training and test sets and thus between subjects. Finally, a support vector regression model was trained and tested on the transformed feature set with leave-one-out cross-validation. The method was also compared with those used in previous studies. The results show that the proposed method improves the performance of the regression model and yields better personality assessment results, providing a new method for personality assessment.
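The feature and regression stages described above can be sketched briefly; the frequency band, SVR hyperparameters and the placeholder data are assumptions, and the domain-adaptation mapping itself is omitted here for brevity.

```python
# Minimal sketch: channel-pair coherence features + support vector regression with leave-one-out CV.
import numpy as np
from scipy.signal import coherence
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneOut

def coherence_features(eeg, fs=250, band=(8.0, 13.0)):
    """Mean coherence in one band for every channel pair of a (samples x channels) epoch."""
    n_ch = eeg.shape[1]
    feats = []
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            f, c = coherence(eeg[:, i], eeg[:, j], fs=fs)
            feats.append(c[(f >= band[0]) & (f <= band[1])].mean())
    return np.array(feats)

X = np.vstack([coherence_features(np.random.randn(1000, 8)) for _ in range(20)])  # placeholder subjects
y = np.random.uniform(1, 5, size=20)                                              # placeholder trait scores
preds = [SVR(C=1.0).fit(X[tr], y[tr]).predict(X[te])[0] for tr, te in LeaveOneOut().split(X)]
print(np.corrcoef(preds, y)[0, 1])
```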


Subject(s)
Algorithms , Brain , Electroencephalography/methods , Emotions , Humans , Personality Assessment
19.
Article in Chinese | WPRIM | ID: wpr-928214

ABSTRACT

Steady-state visual evoked potential (SSVEP) is one of the most commonly used control signals in brain-computer interface (BCI) systems. SSVEP-based BCIs have the advantages of a high information transfer rate and short user training time, and have become an important branch of BCI research. In this review, the main progress in frequency recognition algorithms for SSVEP over the past five years is summarized from three aspects: unsupervised learning algorithms, supervised learning algorithms and deep learning algorithms. Finally, some frontier topics and potential directions are explored.


Subject(s)
Algorithms , Brain-Computer Interfaces , Electroencephalography/methods , Evoked Potentials, Visual , Photic Stimulation
20.
Article in Chinese | WPRIM | ID: wpr-928211

ABSTRACT

As an important basis for lesion identification and diagnosis, medical image segmentation has become one of the most important and active research fields in biomedicine, and segmentation algorithms based on fully convolutional neural networks and the U-Net architecture have attracted increasing attention from researchers. At present, there are few reports on the application of medical image segmentation algorithms to the diagnosis of rectal cancer, and the accuracy of rectal cancer segmentation results is not high. In this paper, an encoder-decoder convolutional network model combined with image clipping and pre-processing is proposed. Based on U-Net, this model replaces the traditional convolution block with a residual block, which effectively avoids the problem of vanishing gradients. In addition, image enlargement is used to improve the generalization ability of the model. Test results on the dataset provided by the "Teddy Cup" Data Mining Challenge showed that the improved residual-block-based U-Net model proposed in this paper, combined with image clipping and preprocessing, can greatly improve the segmentation accuracy of rectal cancer, reaching a Dice coefficient of 0.97 on the validation set.
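The residual block that replaces the plain U-Net convolution block can be sketched in PyTorch; the channel counts, batch normalization and 1x1 shortcut projection are assumptions for the sketch rather than the paper's exact design.

```python
# Minimal sketch of a residual block used in place of the plain U-Net convolution block (PyTorch).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip connection matches the output channel count
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))     # identity path eases gradient flow

block = ResidualBlock(64, 128)
print(block(torch.randn(1, 64, 96, 96)).shape)           # torch.Size([1, 128, 96, 96])
```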


Subject(s)
Algorithms , Delayed Emergence from Anesthesia , Humans , Image Processing, Computer-Assisted , Rectal Neoplasms/diagnostic imaging , Tomography, X-Ray Computed