Results 1 - 6 of 6
1.
PLoS One; 19(5): e0302590, 2024.
Article in English | MEDLINE | ID: mdl-38758731

ABSTRACT

Automatic Urdu handwritten text recognition is a challenging task in the OCR industry. Unlike printed text, Urdu handwriting lacks a uniform font and structure, which causes data inconsistencies and recognition issues. Diverse writing styles, cursive script, and limited data make Urdu text recognition a complicated task. Major languages, such as English, have seen advances in automated recognition, whereas low-resource languages, such as Urdu, still lag behind. Transformer-based models are promising for automated recognition in both high- and low-resource languages. This paper presents a transformer-based method called ET-Network that integrates self-attention into EfficientNet for feature extraction and uses a transformer for language modeling. The self-attention layers in EfficientNet help extract global and local features that capture long-range dependencies. These features are then fed into a vanilla transformer to generate text, and a prefix beam search is used to select the best output. Three datasets, NUST-UHWR, UPTI2.0, and MMU-OCR-21, are used to train and test ET-Network on handwritten Urdu script. ET-Network improved the character error rate by 4% and the word error rate by 1.55%, establishing a new state-of-the-art character error rate of 5.27% and word error rate of 19.09% for Urdu handwritten text.
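
To make the architecture concrete, here is a minimal PyTorch sketch of the idea, not the authors' implementation: a small convolutional stack stands in for EfficientNet, a single self-attention layer mixes the spatial features, and a vanilla transformer decoder produces character logits. All layer sizes and the vocabulary size are illustrative assumptions, and the prefix beam search decoding step is omitted.

import torch
import torch.nn as nn

class AttnCNNEncoder(nn.Module):
    # CNN backbone with one self-attention layer over spatial positions,
    # standing in for the paper's attention-augmented EfficientNet.
    def __init__(self, d_model=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, images):                    # images: (B, 1, H, W)
        f = self.conv(images)                     # (B, d_model, H/4, W/4)
        seq = f.flatten(2).transpose(1, 2)        # (B, positions, d_model)
        out, _ = self.attn(seq, seq, seq)         # global feature mixing
        return out                                # memory for the decoder

class UrduRecognizer(nn.Module):
    def __init__(self, vocab_size=64, d_model=256):
        super().__init__()
        self.encoder = AttnCNNEncoder(d_model)
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, images, tokens):            # tokens: (B, T) char ids
        T = tokens.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.decoder(self.embed(tokens), self.encoder(images), tgt_mask=causal)
        return self.head(h)                       # (B, T, vocab_size) logits

logits = UrduRecognizer()(torch.randn(2, 1, 64, 256),
                          torch.zeros(2, 10, dtype=torch.long))
print(logits.shape)                               # torch.Size([2, 10, 64])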


Subject(s)
Deep Learning, Handwriting, Humans, Language, Pattern Recognition, Automated/methods, Algorithms
2.
Comput Methods Programs Biomed; 224: 106981, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35863125

ABSTRACT

BACKGROUND AND OBJECTIVE: The ever-mutating COVID-19 virus has infected billions of people worldwide and seriously affected the stability of human society and world economic development. It is therefore essential to make long-term and short-term forecasts for COVID-19. However, the pandemic in different countries and regions may be dominated by different virus variants, and the transmission capacity of different variants varies. There is thus a need for a predictive model that can incorporate mutation information to make reasonable predictions about the current pandemic situation. METHODS: This paper proposes a deep learning prediction framework, VOC-DL, based on Variants Of Concern (VOC). The framework uses a slope-feature method to process the time-series dataset containing VOC variant information, and uses the VOC-LSTM, VOC-GRU, and VOC-BiLSTM prediction models included in the framework to predict daily newly confirmed cases. RESULTS: We analyzed daily newly confirmed cases in Italy, South Korea, Russia, Japan, and India from April 14th, 2021 to July 3rd, 2021. The experimental results show that all VOC-DL models proposed in this paper can accurately predict the pandemic trend in the medium and long term, and that the VOC-LSTM model has the best prediction performance, with the highest average coefficient of determination (R2) of 96.83% across the five nations' datasets. The overall prediction is robust. CONCLUSIONS: The experimental results show that VOC-LSTM is the best predictor for this kind of series and has higher prediction accuracy in the long run. Our VOC-DL framework, which incorporates VOC variant information, also provides a reference for predicting other variants in the future.
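
For intuition, a hand-sketched VOC-LSTM-style predictor might look as follows: an LSTM reads a window of past daily counts together with a variant-information channel and regresses the next day's count. The window length, hidden size, and synthetic feature layout are assumptions for illustration, not the paper's published configuration.

import torch
import torch.nn as nn

class VOCLSTM(nn.Module):
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (B, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predict the next day's count

# Toy input: 14-day windows of [new_cases, variant_share]; a slope
# feature (day-over-day delta) could be appended as a third channel.
window = torch.randn(8, 14, 2)
print(VOCLSTM()(window).shape)         # torch.Size([8, 1])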


Subject(s)
COVID-19, Deep Learning, COVID-19/diagnosis, Forecasting, Humans, India, Pandemics
3.
Comput Methods Programs Biomed; 220: 106821, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35487181

ABSTRACT

BACKGROUND: Due to advances in medical imaging and computer technology, machine intelligence for analyzing clinical image data increases the probability of disease prevention and successful treatment. In diagnosing and detecting heart disease, medical imaging can provide high-resolution scans of every organ or tissue in the heart, and the diagnostic results obtained by imaging are less susceptible to human interference. Such systems can process large amounts of patient information, assist doctors in the early detection of heart disease, support timely intervention and treatment, and are of great significance for improving the understanding of heart disease symptoms and clinical diagnosis. In a computer-aided diagnosis system, accurate segmentation of cardiac scan images is the basis and premise of subsequent thoracic function analysis and 3D image reconstruction. EXISTING TECHNIQUES: This paper systematically reviews automatic methods for cardiac segmentation in radiographic images and some of their difficulties. In light of recent advances in deep learning, the feasibility of using deep learning network models for image segmentation is discussed, and commonly used deep learning frameworks are compared. DEVELOPED INSIGHTS: There are many standard methods for medical image segmentation, such as traditional region- and edge-based methods and deep learning-based methods. Because medical images exhibit non-uniform grayscale, individual differences, artifacts, and noise, these segmentation methods have certain limitations: it is hard to obtain the required sensitivity and accuracy when segmenting the heart. Deep learning models have achieved good results in image segmentation, and accurate segmentation improves the accuracy of disease diagnosis and reduces subsequent irrelevant computation. SUMMARY: Accurate segmentation of radiological images has two requirements. One is to use image segmentation to advance computer-aided diagnosis. The other is to achieve complete segmentation of the heart: when the heart has lesions or deformities, the radiographic images will show abnormalities, and the segmentation algorithm must still segment the heart in its entirety. With the advancement of deep learning and improvements in hardware performance, the amount of computation within a certain range will no longer restrict real-time detection.
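
As a generic illustration of the deep learning segmentation models the review discusses, the sketch below shows a minimal encoder-decoder network with one skip connection (a drastically reduced U-Net-style design); it is not tied to any specific model surveyed in the paper.

import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    # One-level encoder-decoder with a skip connection.
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.out = nn.Conv2d(32, n_classes, 1)    # 32 = 16 skip + 16 upsampled

    def forward(self, x):                          # x: (B, 1, H, W), H, W even
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.out(torch.cat([e, u], dim=1)) # per-pixel class logits

scan = torch.randn(1, 1, 64, 64)                   # placeholder cardiac slice
print(TinySegNet()(scan).shape)                    # torch.Size([1, 2, 64, 64])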


Subject(s)
Deep Learning, Heart Diseases, Heart Diseases/diagnostic imaging, Humans, Image Processing, Computer-Assisted/methods, Imaging, Three-Dimensional, Radiography
4.
PLoS One; 16(3): e0247444, 2021.
Article in English | MEDLINE | ID: mdl-33661985

ABSTRACT

Software defect prediction (SDP) can be used to produce reliable, high-quality software. Current SDP is practiced at coarse program granularities (such as file, class, or function level), which cannot accurately pinpoint failures. To solve this problem, we propose a new framework called DP-AGL, which uses attention-based GRU-LSTM for statement-level defect prediction. Using clang to build an abstract syntax tree (AST), we define a set of 32 statement-level metrics. We label each statement, construct a three-dimensional vector, and feed it to an automated learning model that combines a gated recurrent unit (GRU) with a long short-term memory (LSTM) network. In addition, an attention mechanism is used to generate important features and improve accuracy. To validate our approach, we selected 119,989 C/C++ programs from Code4Bench. The benchmark covers a wide variety of programs and variant sets written by thousands of programmers. Compared with the state-of-the-art method, the recall, precision, accuracy, and F1 measure of our trained DP-AGL under normal conditions increased by 1%, 4%, 5%, and 2%, respectively.
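
A rough sketch of the described pipeline, under stated assumptions: per-statement vectors of the 32 metrics feed a GRU, then an LSTM, and an attention layer produces a context vector that enriches each statement's representation before classification. The attention form, hidden sizes, and per-statement head are my guesses at the architecture, not the authors' code.

import torch
import torch.nn as nn

class DPAGL(nn.Module):
    def __init__(self, n_metrics=32, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_metrics, hidden, batch_first=True)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)        # scores each statement
        self.head = nn.Linear(hidden, 2)        # defective vs. clean

    def forward(self, x):                       # x: (B, n_statements, 32)
        h, _ = self.lstm(self.gru(x)[0])        # (B, S, hidden)
        w = torch.softmax(self.attn(h), dim=1)  # per-statement attention
        ctx = (w * h).sum(dim=1, keepdim=True)  # shared context vector
        return self.head(h + ctx)               # (B, S, 2) statement logits

stmts = torch.randn(4, 50, 32)  # 4 snippets, 50 statements, 32 metrics each
print(DPAGL()(stmts).shape)     # torch.Size([4, 50, 2])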


Subject(s)
Neural Networks, Computer, Software
5.
PLoS One; 15(9): e0238535, 2020.
Article in English | MEDLINE | ID: mdl-32941468

ABSTRACT

Deep multiple kernel learning (DMKL) has attracted wide attention due to its better classification performance than shallow multiple kernel learning. However, existing DMKL methods struggle to find suitable global model parameters that improve classification accuracy across numerous datasets, and they do not take inter-class correlation and intra-class diversity into account. In this paper, we present a group-based local adaptive deep multiple kernel learning (GLDMKL) method with an lp norm. GLDMKL divides samples into multiple groups using the multiple kernel k-means clustering algorithm, and the learning process in each well-formed local space is itself adaptive deep multiple kernel learning. The structure is adaptive, so there is no fixed number of layers; because the learning model in each group is trained independently, the number of layers may differ between groups. In each local space, the model is adapted by alternately optimizing the SVM model parameter α and the local kernel weight β; the local kernel weight changes the proportion of each base kernel in the combined kernel at each layer and is constrained by the lp norm to avoid sparsity among the base kernels. The hyperparameters of the kernels are optimized by grid search. Experiments on the UCI and Caltech 256 datasets demonstrate that the proposed method achieves higher classification accuracy than other deep multiple kernel learning methods, especially on datasets with relatively complex data.
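
A loose sketch of the group-then-learn idea, with heavy simplifications: plain k-means stands in for multiple kernel k-means, the base kernel weights are fixed rather than learned by the alternating α/β optimization, and the deep (layered) kernel structure is omitted.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)           # toy binary labels

def combined_kernel(A, B, weights=(0.5, 0.5), gammas=(0.5, 2.0)):
    # Weighted sum of RBF base kernels; in the paper the weights are
    # learned by alternating optimization under an lp-norm constraint.
    return sum(w * rbf_kernel(A, B, gamma=g) for w, g in zip(weights, gammas))

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # stand-in for
models = {}                                       # multiple kernel k-means
for g in np.unique(km.labels_):
    idx = km.labels_ == g
    clf = SVC(kernel="precomputed").fit(combined_kernel(X[idx], X[idx]), y[idx])
    models[g] = (clf, X[idx])                     # one local SVM per group

x_new = rng.normal(size=(1, 4))
g = km.predict(x_new)[0]                          # route to nearest group
clf, X_g = models[g]
print(clf.predict(combined_kernel(x_new, X_g)))   # local prediction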


Subject(s)
Deep Learning, Algorithms, Cluster Analysis, Datasets as Topic, Pattern Recognition, Automated, Software Design
6.
PLoS One; 15(4): e0231331, 2020.
Article in English | MEDLINE | ID: mdl-32275731

ABSTRACT

Fault localization, a technique for fixing software and ensuring its dependability, is rapidly becoming infeasible due to the increasing scale and complexity of multilingual programs. Compared with other fault localization techniques, slicing can directly narrow the range of code that needs checking by abstracting a program into a reduced one through the deletion of irrelevant parts. Only a minority of slicing methods take into account that different statements have different probabilities of leading to failure, and no existing prioritized slicing technique works on multilingual programs. In this paper, we propose a new technique called weight-prioritized slicing (WP-Slicing), an improved static slicing technique based on constraint logic programming, to help the programmer locate faults quickly and precisely. WP-Slicing first converts the original program into logic facts. It then extracts dependences from the facts, computes the static backward slice, and calculates each statement's weight. Finally, WP-Slicing presents the slice in a suggested checking order by sorting on the weights. By comparing its slicing time and localization effort with three pre-existing slicing techniques on five real-world C projects, we show that WP-Slicing locates faults with less time and effort, i.e., it is more effective.
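
A toy illustration of weight-prioritized backward slicing: starting from a slicing criterion, transitively collect every statement it depends on, then present the slice sorted by a per-statement weight. The dependence graph and weights below are made-up stand-ins for what WP-Slicing would extract from logic facts.

# deps[s] = statements that statement s directly depends on
deps = {5: [3, 4], 4: [2], 3: [1], 2: [1], 1: []}
weight = {1: 0.2, 2: 0.9, 3: 0.5, 4: 0.7, 5: 0.1}  # hypothetical fault likelihoods

def backward_slice(criterion):
    # Transitively follow dependence edges from the slicing criterion.
    seen, stack = set(), [criterion]
    while stack:
        s = stack.pop()
        if s not in seen:
            seen.add(s)
            stack.extend(deps[s])
    return seen

# Suggested check order: slice members, highest weight first
print(sorted(backward_slice(5), key=lambda s: -weight[s]))  # [2, 4, 3, 1, 5]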


Subject(s)
Computing Methodologies, Software/standards