ABSTRACT
Facial expressions vary with different health conditions, making a facial expression recognition (FER) system valuable within a healthcare framework. Achieving accurate recognition of facial expressions is a considerable challenge due to the difficulty of capturing subtle features. This research introduces an ensemble neural random forest method that uses a convolutional neural network (CNN) architecture for feature extraction and an optimized random forest for classification. For feature extraction, four convolutional layers with different numbers of filters and kernel sizes are used, together with max-pooling, batch normalization, and dropout layers to expedite feature extraction and avoid overfitting. The extracted features are passed to the random forest classifier, which is optimized over the number of trees, splitting criterion, maximum tree depth, maximum number of terminal nodes, minimum samples per split, and maximum features per tree. To demonstrate the significance of the proposed model, we conducted a thorough assessment of the proposed neural random forest in an extensive experiment encompassing six publicly available datasets. The remarkable weighted average recognition rate of 97.3% achieved across these diverse datasets highlights the effectiveness of our approach for FER systems.
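A minimal sketch of the pipeline this abstract describes (a CNN used purely as a feature extractor feeding a tuned random forest), assuming TensorFlow/Keras and scikit-learn; the input shape, filter counts, kernel sizes, dropout rate, and forest hyperparameter values are illustrative stand-ins, not the authors' published configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.ensemble import RandomForestClassifier

# Four convolutional blocks with max-pooling, batch normalization,
# and dropout, used only as a feature extractor (no softmax head).
extractor = tf.keras.Sequential([
    layers.Input(shape=(48, 48, 1)),          # assumed grayscale face crops
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(), layers.BatchNormalization(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(), layers.BatchNormalization(),
    layers.Conv2D(128, 5, activation="relu", padding="same"),
    layers.MaxPooling2D(), layers.BatchNormalization(),
    layers.Conv2D(256, 5, activation="relu", padding="same"),
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.25),
])

X = np.random.rand(100, 48, 48, 1).astype("float32")   # stand-in images
y = np.random.randint(0, 7, size=100)                   # 7 expression classes

features = extractor.predict(X, verbose=0)

# Random forest tuned over the hyperparameter families named in the abstract.
rf = RandomForestClassifier(
    n_estimators=300, criterion="gini", max_depth=20,
    max_leaf_nodes=256, min_samples_split=4, max_features="sqrt",
)
rf.fit(features, y)
print(rf.predict(features[:5]))
```

In practice the forest hyperparameters would be selected by a search over these families rather than fixed as above.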
ABSTRACT
The examination of Alzheimer's disease (AD) using adaptive machine learning algorithms has unveiled promising findings. However, achieving substantial credibility in medical contexts requires a combination of notable accuracy, minimal processing time, and generality across diverse populations. We have therefore formulated a hybrid methodology in this study to classify AD using a brain MRI image dataset. In the initial stage, we incorporated an averaging filter during preprocessing to reduce extraneous details. Subsequently, a combined strategy was applied, involving principal component analysis (PCA) in conjunction with stepwise linear discriminant analysis (SWLDA), followed by an artificial neural network (ANN). SWLDA employs a combination of forward and backward recursion to choose a restricted set of features: the forward recursion identifies the most interconnected features based on partial Z-test values, while the backward recursion eliminates the least correlated features from the same feature space. After feature extraction and selection, an optimized ANN was used to differentiate the various classes of AD. To demonstrate the significance of this hybrid approach, we evaluated it on publicly available brain MRI datasets using a 10-fold cross-validation strategy. The proposed method excelled over existing state-of-the-art systems, attaining weighted average recognition rates of 99.35% and 96.66% on the two datasets, respectively.
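The following sketch outlines the hybrid flow (averaging filter, PCA, stepwise selection, ANN), assuming SciPy and scikit-learn. scikit-learn's SequentialFeatureSelector with an LDA estimator stands in for the partial-Z-test-driven SWLDA described above, and the data, dimensions, and network size are placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
images = rng.random((80, 32, 32))          # stand-in MRI slices
labels = rng.integers(0, 3, size=80)       # e.g., stand-in AD classes

# 1) Preprocessing: averaging filter to suppress extraneous detail.
smoothed = np.stack([uniform_filter(im, size=3) for im in images])
X = smoothed.reshape(len(images), -1)

# 2) PCA for initial dimensionality reduction.
X_pca = PCA(n_components=20, random_state=0).fit_transform(X)

# 3) Stand-in for SWLDA: greedy forward selection with an LDA estimator
#    (direction="backward" gives the elimination pass).
sfs = SequentialFeatureSelector(
    LinearDiscriminantAnalysis(), n_features_to_select=8, direction="forward"
)
X_sel = sfs.fit_transform(X_pca, labels)

# 4) Optimized ANN classifier on the selected features.
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
ann.fit(X_sel, labels)
print("training accuracy:", ann.score(X_sel, labels))
```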
ABSTRACT
As the population grows, the number of motorized vehicles on the roads increases, and with it traffic congestion. Traffic lights are used at road junctions, intersections, pedestrian crossings, and other places where traffic must be controlled to avoid chaos. Because of the traffic lights installed throughout a city, queues of vehicles form on the streets for most of the day, causing many problems. One of the most important is that emergency vehicles, such as ambulances, fire engines, and police cars, cannot arrive on time despite their traffic priority. Vehicles dispatched from hospitals, fire stations, and police departments need to reach the scene in a very short time, so the time lost in traffic is a problem that must be addressed. In this study, a solution and a related application were developed so that privileged vehicles such as ambulances, fire engines, and police cars responding to emergencies can reach their target destinations as quickly as possible. A route is determined between the emergency vehicle's current location and its target location, and communication with the traffic lights is provided through a mobile application developed specifically for the vehicle driver. During the vehicle's passage, the person controlling the lights can switch the relevant traffic lights; after the priority vehicles have passed, signaling is returned to normal via the mobile application. This process is repeated until the vehicle reaches its destination.
ABSTRACT
The recent COVID-19 pandemic has hit humanity very hard in ways rarely observed before. In this digitally connected world, the health informatics and investigation domains (both public and private) lack a robust framework to enable rapid investigation and the development of cures. Since the data in the healthcare domain are highly confidential, any framework in this domain must work on real data, be verifiable, and support reproducibility for evidence purposes. In this paper, we propose a health informatics framework that supports data acquisition from various sources in real time, correlates these data both among the sources and with domain-specific terminologies, and supports querying and analyses. The sources include sensory data from wearable sensors, clinical investigation data (for trials and devices) from private and public agencies, personal health records, academic publications in the healthcare domain, and semantic information such as clinical ontologies and the Medical Subject Headings (MeSH) ontology. The linking and correlation of sources includes mapping personal wearable data to health records, clinical ontology terms to clinical trials, and so on. The framework is designed so that the data are Findable, Accessible, Interoperable, and Reusable (FAIR) with proper identity and access mechanisms. In practice, this means tracing and linking each step in the data management lifecycle through discovery, ease of access and exchange, and data reuse. We present a practical use case that correlates various aspects of data relating to a certain medical subject heading from the MeSH ontology and academic publications with clinical investigation data. The proposed architecture supports streaming data acquisition and the servicing and processing of changes throughout the data management lifecycle. This is necessary for certain events, such as when the status of a clinical or other health-related investigation needs to be updated; in such cases, it is required to track and view the outline of those events for the analysis and traceability of the clinical investigation and to define interventions if necessary.
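The linking step can be pictured with a minimal sketch in Python: records from three invented sources are correlated through a shared MeSH heading. All identifiers and fields below are hypothetical placeholders, not the framework's actual data model.

```python
# Invented mini-records from three sources, keyed by a shared MeSH heading.
publications = [{"pmid": "P1", "mesh": "COVID-19", "title": "..."},
                {"pmid": "P2", "mesh": "Diabetes Mellitus", "title": "..."}]
trials = [{"trial_id": "T7", "mesh": "COVID-19", "status": "recruiting"}]
wearables = [{"person": "u42", "mesh": "COVID-19", "heart_rate": 96}]

def correlate(mesh_term):
    """Gather every record, across sources, tagged with one MeSH heading."""
    match = lambda records: [r for r in records if r["mesh"] == mesh_term]
    return {"publications": match(publications),
            "clinical_investigations": match(trials),
            "wearable_observations": match(wearables)}

print(correlate("COVID-19"))
```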
ABSTRACT
Magnetic resonance imaging (MRI) is a noninvasive technique used in medical imaging to diagnose a variety of disorders. The majority of previous systems performed well on MRI datasets with a small number of images, but their performance deteriorated when applied to large MRI datasets; the objective is therefore to develop a quick and trustworthy classification system that sustains the best performance over a comprehensive MRI dataset. This paper presents a robust approach for analyzing and classifying different types of brain diseases using MRI images. First, global histogram equalization is utilized to remove unwanted details from the MRI images. After image enhancement, a symlet wavelet transform-based technique is proposed to extract the best features from the MRI images. On grayscale images, the symlet is a compactly supported wavelet with the least asymmetry and the highest number of vanishing moments for a given support width; because it can accommodate the orthogonal, biorthogonal, and reverse biorthogonal properties of grayscale images, it delivers better classification results. Following feature extraction, linear discriminant analysis (LDA) is employed to reduce the dimensionality of the feature space. The model was trained and evaluated using logistic regression, and it correctly classified several types of brain illnesses from the MRI images. To illustrate the importance of the proposed strategy, standard datasets from Harvard Medical School and the Open Access Series of Imaging Studies (OASIS), which encompass 24 different brain disorders (including normal), are used. The proposed technique achieved the best classification accuracy, 96.6%, when measured against current state-of-the-art systems.
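A hedged sketch of the classification chain (global histogram equalization, symlet features, LDA reduction, logistic regression), assuming PyWavelets with 'sym4' as the symlet; the image sizes, decomposition level, and the use of only the approximation band are illustrative choices.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
images = rng.integers(0, 256, size=(60, 64, 64)).astype(np.uint8)
labels = rng.integers(0, 4, size=60)       # stand-in disease classes

def global_hist_eq(img):
    """Global histogram equalization on an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    return (cdf[img] * 255).astype(np.uint8)

def symlet_features(img, wavelet="sym4", level=2):
    """Flattened approximation coefficients of a 2-D symlet decomposition."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    return coeffs[0].ravel()               # coeffs[0] is the approximation band

X = np.stack([symlet_features(global_hist_eq(im)) for im in images])

# LDA reduces the feature space to at most n_classes - 1 dimensions.
X_lda = LinearDiscriminantAnalysis(n_components=3).fit_transform(X, labels)

clf = LogisticRegression(max_iter=1000).fit(X_lda, labels)
print("training accuracy:", clf.score(X_lda, labels))
```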
ABSTRACT
In today's era, vegetables are considered a very important part of many foods. Although individuals can grow vegetables in a home kitchen garden, among vegetable crops tomatoes are the most popular and are used in almost every kind of food item. Like many other crops, tomato plants are affected by various diseases during their growing season. Typically, 40-60% of the tomato crop may be lost to leaf diseases in the field if cultivators do not focus on control measures, so these diseases can cause great losses in tomato production. A proper mechanism is therefore needed to detect these problems. Researchers have proposed different techniques for detecting these plant diseases, including support vector machines, artificial neural networks, and convolutional neural network (CNN) models; earlier work relied on benchmark feature extraction techniques. In this area of study, we propose a real-time faster region-based convolutional neural network (RTF-RCNN) model for detecting tomato plant diseases using both images and real-time video streaming. For the RTF-RCNN, we used parameters such as precision, accuracy, and recall while comparing it with the AlexNet and CNN models. The final results show that the accuracy of the proposed RTF-RCNN is 97.42%, higher than that of the AlexNet and CNN models, at 96.32% and 92.21%, respectively.
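RTF-RCNN builds on the faster region-based CNN family; as a generic stand-in (not the authors' model), the sketch below runs a pretrained torchvision Faster R-CNN over frames of a video stream. The video path, confidence threshold, and display loop are assumptions, and a real detector would be fine-tuned on labeled tomato leaf images.

```python
import cv2
import torch
import torchvision

# Generic pretrained Faster R-CNN as a stand-in for the proposed RTF-RCNN;
# in practice the head would be fine-tuned on tomato leaf disease classes.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

cap = cv2.VideoCapture("tomato_field.mp4")   # hypothetical video file
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = model([tensor])[0]
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score > 0.8:                      # assumed confidence threshold
            x1, y1, x2, y2 = box.int().tolist()
            cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```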
ABSTRACT
Magnetic resonance imaging (MRI) is an accurate and noninvasive method employed for the diagnosis of various kinds of diseases in medical imaging. Most existing systems show significant performance on small MRI datasets, while their performance decreases on large MRI datasets. Hence, the goal was to design an efficient and robust classification system that sustains a high recognition rate on large MRI datasets. Accordingly, in this study, we propose a novel feature extraction technique that extracts and selects the most prominent features from MRI images of various diseases. The approach discriminates among classes based on recursive statistics such as the partial Z-value, extracting only a small feature set through forward and backward recursion models. The most interrelated features are selected by the forward regression model based on partial Z-test values, while the least interrelated features are removed from the corresponding feature space by the backward model. In both cases, the Z-test values are estimated using the defined disease labels. One benefit of this method is that it efficiently captures localized features. After extracting and selecting the best features, the model is trained using a support vector machine (SVM) to assign predicted labels to the corresponding MRI images. To show the significance of the proposed model, we utilized publicly available standard datasets from Harvard Medical School and the Open Access Series of Imaging Studies (OASIS), which contain 24 brain diseases including normal. The proposed approach achieved the best classification accuracy against existing state-of-the-art systems.
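A simplified sketch of the select-then-classify flow, assuming NumPy and scikit-learn: a two-sample Z statistic per feature serves as a rough proxy for the partial Z-test, with a greedy forward pass that admits the strongest features and a backward pass that prunes the weakest, followed by an SVM. The data, thresholds, and feature counts are placeholders.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 40))             # stand-in MRI feature vectors
y = rng.integers(0, 2, size=120)           # binary disease labels for the sketch

def z_scores(X, y):
    """Two-sample Z statistic per feature, a simple proxy for the
    partial Z-test used in stepwise selection."""
    a, b = X[y == 0], X[y == 1]
    num = a.mean(0) - b.mean(0)
    den = np.sqrt(a.var(0) / len(a) + b.var(0) / len(b)) + 1e-12
    return np.abs(num / den)

# Forward pass: greedily admit the most discriminative features ...
z = z_scores(X, y)
selected = list(np.argsort(z)[::-1][:12])

# ... backward pass: drop the weakest of the admitted features.
selected = [f for f in selected if z[f] > np.median(z[selected])]

svm = SVC(kernel="rbf").fit(X[:, selected], y)
print(len(selected), "features kept; training accuracy:",
      svm.score(X[:, selected], y))
```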
Subjects
Brain; Magnetic Resonance Imaging; Algorithms; Brain/diagnostic imaging; Magnetic Resonance Imaging/methods; Support Vector Machine
ABSTRACT
Image denoising methods are important for diminishing the various kinds of noise that are introduced either while capturing an image or during its transmission. The signal-to-noise ratio (SNR) is one of the main barriers that prevents theoretical results from being achieved in practice. In this study, we apply several filtering operators against three kinds of noise and compare the resulting signal-to-noise ratios on phantom images in the spatial and frequency domains. In the spatial domain, an average filter is used to smooth the image; in the frequency domain, a Gaussian low-pass filter is applied with an empirically determined cutoff frequency. This work has six major steps: applying the average filter, determining the SNR of a region of interest, transforming the image into the frequency domain with the discrete Fourier transform, constructing a Gaussian low-pass filter with a chosen cutoff frequency, multiplying the filter with the transformed image, and carrying out the inverse Fourier transform. These steps are repeated until the SNR of the resulting image is equal to or greater than the spatial-domain SNR. To achieve the goal of this study, we analyzed the proposed approach on several complex phantom images and compared the performance of the filters in terms of signal-to-noise ratio.
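The six steps above can be sketched with NumPy and SciPy on a synthetic noisy disk phantom; the ROI placement, noise level, and the cutoff-frequency schedule are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Synthetic noisy phantom: a bright disk on a dark background.
yy, xx = np.mgrid[:128, :128]
phantom = ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2).astype(float)
noisy = phantom + np.random.default_rng(3).normal(0, 0.2, phantom.shape)

roi = (slice(44, 84), slice(44, 84))       # region of interest over the disk

def snr(img):
    return img[roi].mean() / (img[roi].std() + 1e-12)

# Steps 1-2: spatial-domain average filter and its ROI SNR (the target).
target_snr = snr(uniform_filter(noisy, size=5))

# Steps 3-6: DFT, Gaussian low-pass, multiply, inverse DFT; shrink the
# cutoff frequency until the frequency-domain SNR matches the target.
F = np.fft.fftshift(np.fft.fft2(noisy))
d = np.sqrt((yy - 64) ** 2 + (xx - 64) ** 2)   # distance from DC component
for cutoff in range(60, 2, -2):                # smaller D0 -> more smoothing
    H = np.exp(-(d ** 2) / (2 * cutoff ** 2))  # Gaussian low-pass filter
    restored = np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))
    if snr(restored) >= target_snr:
        print(f"cutoff D0={cutoff}: SNR {snr(restored):.2f} "
              f">= spatial-domain SNR {target_snr:.2f}")
        break
```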
Subjects
Algorithms; Humans; Normal Distribution; Phantoms, Imaging; Signal-To-Noise Ratio
ABSTRACT
Human activity recognition (HAR) is a fascinating and significantly challenging task. Generally, the accuracy of HAR systems relies on extracting the best features from the input frames. Activity frames often suffer from hostile noisy conditions that most existing edge operators cannot handle. In this paper, we design an adaptive feature extraction method based on edge detection for HAR systems. The proposed method calculates the direction of the edges under non-maximum suppression. Its benefits lie in its simplicity, which depends on modest procedures, and in the possibility of extending it to other types of features. It is often practical to extract additional low-level information in the form of features when determining shapes; to obtain the appropriate information, a more sophisticated shape detection procedure can be employed or discarded. Essentially, this method maximizes the product of the signal-to-noise ratio (SNR) and the measures of isolation and localization. When edges in the frames are well modeled as step functions, the proposed approach can give better performance than other operators. The appropriate information is extracted to form a feature vector, which is then fed to a classifier for activity recognition. We assess the performance of the proposed edge-based feature extraction method in a comprehensive experimental setup on a depth dataset containing thirteen different kinds of actions.
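A small sketch of the idea, assuming SciPy: Sobel gradients give edge magnitude and direction, a simple non-maximum suppression keeps only local maxima along the gradient direction, and a magnitude-weighted orientation histogram forms the feature vector. The bin count and the 4-direction quantization are illustrative choices.

```python
import numpy as np
from scipy import ndimage

def edge_direction_features(frame, bins=8):
    """Gradient magnitude/direction with simple non-maximum suppression,
    summarized as an orientation histogram feature vector."""
    gx = ndimage.sobel(frame, axis=1)
    gy = ndimage.sobel(frame, axis=0)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                      # edge direction in radians

    # Quantize direction to 4 neighbor pairs and keep only local maxima
    # along the gradient direction (non-maximum suppression).
    q = (np.round(ang / (np.pi / 4)) % 4).astype(int)
    offsets = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    keep = np.zeros_like(mag, dtype=bool)
    for k, (dy, dx) in offsets.items():
        n1 = np.roll(mag, (dy, dx), axis=(0, 1))
        n2 = np.roll(mag, (-dy, -dx), axis=(0, 1))
        keep |= (q == k) & (mag >= n1) & (mag >= n2)
    mag = np.where(keep, mag, 0.0)

    # Magnitude-weighted orientation histogram as the feature vector.
    hist, _ = np.histogram(ang[mag > 0], bins=bins,
                           range=(-np.pi, np.pi), weights=mag[mag > 0])
    return hist / (hist.sum() + 1e-12)

frame = np.random.default_rng(4).random((64, 64))  # stand-in depth frame
print(edge_direction_features(frame))
```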
Subjects
Human Activities; Noise; Humans
ABSTRACT
Most medical images are low in contrast, so adequate details that may prove vital for decisions are not visible to the naked eye. Because of this low contrast, such images are also hard to segment: there is no significant change between pixel values, the gradient is very small, and hence a contour cannot converge on the edges of an object. In this work, we propose an ensemble spatial method for image enhancement. In this ensemble approach, we first employ the Laplacian filter, which highlights areas of rapid intensity variation, recovers sufficient image detail, and improves features with sharp discontinuities. Then the gradient of the image is determined, using the surrounding pixels in a weighted convolution to diminish noise. In the gradient filter, however, one of the weights is a negative integer: the intensity value of the middle pixel may be subtracted from the surrounding pixels to enlarge the differences between adjacent pixels when calculating the gradients. This is one reason the gradient filter, which may be computed in eight directions, is not entirely reliable on its own. Therefore, an averaging filter, which is effective for image enhancement, is also utilized. Because this filter does not rely on values that differ markedly from typical values in the neighborhood, it retains the details of the image. The proposed approach showed the best performance on various images collected in dynamic environments.
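One plausible reading of this ensemble, sketched with SciPy (the exact combination rule is an assumption, not the authors' formula): Laplacian sharpening recovers detail, a smoothed Sobel gradient acts as an edge mask, and averaging keeps the gradient noise in check.

```python
import numpy as np
from scipy.ndimage import laplace, sobel, uniform_filter

def ensemble_enhance(img):
    """Combined spatial enhancement: Laplacian sharpening, a smoothed
    Sobel gradient as an edge mask, and averaging for noise control."""
    img = img.astype(float)
    sharpened = img - laplace(img)            # Laplacian highlights detail
    grad = np.hypot(sobel(img, 0), sobel(img, 1))
    mask = uniform_filter(grad, size=5)       # averaging tames gradient noise
    mask = mask / (mask.max() + 1e-12)
    return img + sharpened * mask             # boost detail mainly near edges

image = np.random.default_rng(5).random((64, 64))
print(ensemble_enhance(image).shape)
```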
Subjects
Algorithms; Image Enhancement; Delivery of Health Care; Humans; Image Enhancement/methods
ABSTRACT
Breast cancer forms in breast cells and is a very common type of cancer in women; after lung cancer, it is also one of the most life-threatening diseases in women. A convolutional neural network (CNN) method is proposed in this study to improve the automatic identification of breast cancer by analyzing hostile ductal carcinoma tissue zones in whole-slide images (WSIs). The paper investigates a system that uses various CNN architectures to automatically detect breast cancer and compares the results with those of machine learning (ML) algorithms. All architectures were trained on a large dataset of about 275,000 RGB image patches of 50 × 50 pixels. Validation tests were performed for quantitative results using standard performance measures for each methodology. The proposed system achieves 87% accuracy, which could reduce human error in the diagnosis process, and exceeds the 78% accuracy of the ML algorithms, an improvement of 9 percentage points.
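A minimal Keras sketch of a patch-level CNN of the kind evaluated here, assuming 50 × 50 RGB inputs and binary labels; the architecture and training settings are illustrative, and random arrays stand in for the whole-slide image patches.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Stand-in for the ~275,000-patch dataset: random 50x50 RGB patches with
# binary labels (carcinoma-positive vs. negative).
X = np.random.rand(256, 50, 50, 3).astype("float32")
y = np.random.randint(0, 2, size=256)

model = tf.keras.Sequential([
    layers.Input(shape=(50, 50, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),    # patch-level probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```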
Subjects
Breast Neoplasms; Algorithms; Breast/diagnostic imaging; Breast Neoplasms/diagnostic imaging; Female; Humans; Machine Learning; Neural Networks, Computer
ABSTRACT
Heart angiography is a test in which a medical specialist identifies abnormalities in the heart vessels; this type of diagnosis takes the physician a great deal of time. In our proposed method, the regions of interest in the heart vessels are segmented and then classified. Segmentation and classification of heart angiography provide significant information for both the physician and the patient. In manual practice, however, the task is error-prone, time-consuming, and challenging for the physician (heart specialist). Automatic segmentation and classification of heart blood vessel descriptions can improve accuracy and speed up the diagnosis of heart diseases. In this work, we present a computer-assisted decision system for the localization of human heart blood vessels in angiographic images using a multiclass ensemble classification mechanism. In the proposed framework, the heart blood vessels are first segmented, and the most accurate features are extracted, including low-level texture, statistical, and geometrical features. Finally, the heart blood vessels are categorized into four classes: normal, blocked, narrowed, and blood-flow-reduced vessels. The proposed approach achieved the best results, providing a useful, easy, accurate, and time-saving environment for cardiologists to diagnose heart-related diseases.
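A hedged sketch of the multiclass ensemble stage, assuming scikit-learn: a soft-voting ensemble over a random forest, an SVM, and k-NN classifies stand-in low-level descriptors into the four vessel classes. The member models and feature dimensionality are assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Stand-in low-level descriptors (texture / statistical / geometrical)
# extracted from segmented vessel regions.
X = rng.normal(size=(200, 12))
y = rng.integers(0, 4, size=200)   # normal / blocked / narrowed / flow-reduced

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="soft",                  # average predicted class probabilities
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```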
Subjects
Heart Diseases; Machine Learning; Algorithms; Heart; Humans; Image Processing, Computer-Assisted/methods
ABSTRACT
Oral cancer is a complex disorder; its initiation and spread are due to the interaction of several proteins and genes in different biological pathways. Many high-throughput methods have been used to study biological pathways, but efforts to merge data found at separate levels of biological pathways and interlinkage networks remain elusive. In this work, we propose a protein-protein interaction (PPI) network technique for analyzing and exploring the genes involved in oral cancer disorders. Previous studies have not fully analyzed the proteins and genes involved in oral cancer; our proposed technique is fully interactive and analyzes oral cancer data more accurately and efficiently. The methods used here enabled us to observe a wide network consisting of one major component comprising 208 nodes connected by 1572 edges, together with various detached small networks. In our study, TP53 occupied an important position in the network, with a degree of 113 and a betweenness centrality value of 0.03881821, indicating that TP53 is centrally localized and is a significant bottleneck protein in the oral cancer PPI network. These findings suggest that the pathogenesis of oral cancer variation is organized through an integrated PPI network centered on TP53. Furthermore, our analysis shows that TP53 is the key role-playing protein in the oral cancer network, and its significance in the body's cellular networks is established as well. As TP53 (tumor protein 53) is a vital player in the cell division process, ensuring that cells do not grow or divide in a disorderly way, it fulfills the function of at least one of the gene groups implicated in oral cancer. Whatever the pace of later progress in this area, the intention of developing these networks is to transform our picture of core disease development, prognosis, and treatment.
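The degree and betweenness-centrality figures quoted above can be computed with NetworkX as sketched below; the toy edge list is invented for illustration, whereas the actual analysis would load the full 208-node, 1572-edge oral cancer network.

```python
import networkx as nx

# Toy PPI edge list; a real analysis would load the 208-node oral cancer
# network instead of these made-up interactions.
edges = [("TP53", "MDM2"), ("TP53", "EGFR"), ("TP53", "CDKN1A"),
         ("EGFR", "ERBB2"), ("MDM2", "CDKN1A"), ("TP53", "ATM")]
G = nx.Graph(edges)

degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)   # bottleneck score per protein

for protein in sorted(G, key=betweenness.get, reverse=True):
    print(f"{protein}: degree={degree[protein]}, "
          f"betweenness={betweenness[protein]:.4f}")
```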
Subjects
Computational Biology; Mouth Neoplasms; Humans; Mouth Neoplasms/genetics; Prognosis; Protein Interaction Maps; Tumor Suppressor Protein p53
ABSTRACT
Accurate detection of traffic accidents as well as condition analysis are essential to effectively restoring traffic flow and reducing serious injuries and fatalities. This goal can be achieved using an advanced data classification model with a rich source of traffic information. Several systems based on sensors and social networking platforms have been presented recently to detect traffic events and monitor traffic conditions. However, sensor-based systems provide limited information and may fail owing to long detection times and high false-alarm rates, while social networking data are unstructured, unpredictable, and contain idioms, jargon, and dynamic topics; the machine learning algorithms typically used for traffic event detection may not extract valuable information from such data. In this paper, a social network-based, real-time monitoring framework is proposed for traffic accident detection and condition analysis using ontology and latent Dirichlet allocation (OLDA) and bidirectional long short-term memory (Bi-LSTM). First, a query-based search engine collects traffic information from social networks, and the data preprocessing module transforms it into structured form. Second, the proposed OLDA-based topic modeling method automatically labels each sentence (e.g., traffic or non-traffic) to identify the exact traffic information, and the ontology-based event recognition approach detects traffic events from traffic-related data. Next, a sentiment analysis technique identifies the polarity of traffic events from users' opinions, which helps determine the accurate conditions of traffic events. Finally, the FastText model and Bi-LSTM with softmax regression are trained for traffic event detection and condition analysis. The proposed framework is evaluated using traffic-related data, comparing OLDA and Bi-LSTM with existing topic modeling methods and traditional classifiers using word embedding models, respectively. Our system outperforms state-of-the-art methods, achieving an accuracy of 97%. This finding demonstrates that the proposed system is more efficient for traffic event detection and condition analysis than other existing systems.
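A minimal Keras sketch of the Bi-LSTM-with-softmax classifier at the end of this pipeline; the vocabulary size, sequence length, class set, and the randomly initialized embedding (which the full framework would seed from pretrained FastText vectors) are all assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len, n_classes = 5000, 40, 3   # assumed sizes
X = np.random.randint(1, vocab_size, size=(512, seq_len))  # token ids
y = np.random.randint(0, n_classes, size=512)  # stand-in event/condition labels

model = tf.keras.Sequential([
    layers.Input(shape=(seq_len,)),
    # In the full framework the embedding would be initialized from
    # pretrained FastText vectors rather than learned from scratch.
    layers.Embedding(vocab_size, 100),
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dense(n_classes, activation="softmax"),  # softmax regression head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=1, batch_size=64, verbose=0)
```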
Subjects
Accidents, Traffic; Neural Networks, Computer; Algorithms; Humans; Machine Learning; Social Networking
ABSTRACT
In healthcare, the analysis of patients' activities is one of the important factors in providing adequate information for better management of their illnesses. Most human activity recognition (HAR) systems rely entirely on the recognition module/stage, and the motivation for improving this stage is the lack of enhancement in the learning method. In this study, we propose using hidden conditional random fields (HCRFs) for the human activity recognition problem. Moreover, we contend that the existing HCRF model is limited by its independence assumptions, which may reduce classification accuracy. Therefore, we utilize a new algorithm that relaxes this assumption, allowing our model to use full-covariance distributions. We also show that, computationally, our method has much lower complexity than existing methods. For the experiments, we used four publicly available standard datasets and a 10-fold cross-validation scheme to train, assess, and compare the proposed model with a conditional learning method, the hidden Markov model (HMM), and the existing HCRF model, which can only use diagonal-covariance Gaussian distributions. The experiments show that the proposed model achieves a substantial improvement in classification accuracy (p value ≤ 0.2).
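The relaxed assumption can be illustrated numerically: on correlated features, a full-covariance Gaussian fits the data visibly better than a diagonal-covariance one, which is the gain the modified HCRF exploits. The sketch below uses SciPy with invented 2-D data, not the paper's activity features.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(7)
# Correlated 2-D feature stream, e.g., two sensor axes within one activity.
cov_true = np.array([[1.0, 0.8], [0.8, 1.0]])
X = rng.multivariate_normal([0, 0], cov_true, size=500)

mu = X.mean(axis=0)
full_cov = np.cov(X.T)                         # keeps cross-feature terms
diag_cov = np.diag(np.diag(full_cov))          # independence assumption

ll_full = multivariate_normal(mu, full_cov).logpdf(X).sum()
ll_diag = multivariate_normal(mu, diag_cov).logpdf(X).sum()
print(f"full-covariance log-likelihood: {ll_full:.1f}")
print(f"diagonal-covariance log-likelihood: {ll_diag:.1f}")  # noticeably lower
```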
Subjects
Accelerometry/methods; Algorithms; Pattern Recognition, Automated/methods; Actigraphy; Humans; Markov Chains; Motor Activity; Normal Distribution
ABSTRACT
Research into video-based FER systems has exploded in the past decade. However, most previous methods work well only when trained and tested on the same dataset, since illumination settings, image resolution, camera angle, and the physical characteristics of the subjects differ from one dataset to another; considering a single dataset keeps the variance resulting from these differences to a minimum. A robust FER system that can work across several datasets is thus highly desirable. The aim of this work is to design, implement, and validate such a system using different datasets. The major contribution is made in the recognition module, which uses a maximum entropy Markov model (MEMM) for expression recognition. In this model, the states of the human expressions are modeled as the states of an MEMM, with the video-sensor observations as the observations of the MEMM. A modified Viterbi algorithm is utilized to generate the most probable expression state sequence based on these observations. Lastly, an algorithm is designed that predicts the expression state from the generated state sequence. Performance is compared against several existing state-of-the-art FER systems on six publicly available datasets, achieving a weighted average accuracy of 97% across all datasets.
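A compact sketch of Viterbi decoding in an MEMM-style setting, where each step's transition scores are already conditioned on the observation; the scoring tensors here are random stand-ins for a trained model's outputs, and the dummy start state is an assumption.

```python
import numpy as np

def viterbi(log_scores, log_init):
    """Most probable state sequence given MEMM-style per-step scores.

    log_scores[t, i, j] is the log-probability of moving from state i to
    state j at step t, already conditioned on the observation at step t.
    """
    T, S, _ = log_scores.shape
    delta = log_init + log_scores[0, 0]        # assume a dummy start state 0
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = delta[:, None] + log_scores[t]  # S x S candidate scores
        back[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0)
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(8)
T, S = 10, 6                                   # 10 frames, 6 expression states
log_scores = np.log(rng.dirichlet(np.ones(S), size=(T, S)))
print(viterbi(log_scores, np.log(np.full(S, 1.0 / S))))
```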
Subjects
Entropy; Facial Expression; Facial Recognition; Markov Chains; Models, Theoretical; Humans
ABSTRACT
A wellness system provides wellbeing recommendations to support experts in promoting a healthier lifestyle and inducing individuals to adopt healthy habits. Adopting physical activity effectively promotes a healthier lifestyle, and a physical activity recommendation system assists users in adopting daily routines that form best practices of life by involving them in healthy physical activities. Traditional physical activity recommendation systems focus on general recommendations applicable to a community of users rather than to specific individuals. Such recommendations fit the community at a certain level, but they are not relevant to every individual's specific requirements and personal interests. To cover this aspect, we propose a multimodal hybrid reasoning methodology (HRM) that generates personalized physical activity recommendations according to the user's specific needs and personal interests. The methodology integrates rule-based reasoning (RBR), case-based reasoning (CBR), and preference-based reasoning (PBR) in a linear combination that enables personalization of recommendations. RBR uses explicit knowledge rules from physical activity guidelines, CBR uses implicit knowledge from experts' past experiences, and PBR uses users' personal interests and preferences. To validate the methodology, a weight management scenario is considered and experimented with: the RBR part generates goal, weight status, and plan recommendations; the CBR part suggests the top three relevant physical activities for executing the recommended plan; and the PBR part filters out irrelevant recommendations using the user's personal preferences and interests. To evaluate the methodology, a baseline-RBR system is developed, improved first using ranged rules and ultimately using a hybrid-CBR. A comparison of the results of these systems shows that hybrid-CBR outperforms the modified-RBR and baseline-RBR systems, yielding a recall of 0.94, a precision of 0.97, an F-score of 0.95, and low Type I and Type II errors.
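A toy sketch of the three reasoning stages chained together; the rules, cases, and preference sets are invented for illustration, and the real HRM combines RBR, CBR, and PBR in a weighted linear scheme rather than this simple pipeline.

```python
# Minimal sketch of the hybrid reasoning flow under invented rules, cases,
# and preferences.
user = {"bmi": 31.0, "dislikes": {"running"}}

# RBR: explicit guideline-style rules produce goal/plan recommendations.
def rbr(u):
    if u["bmi"] >= 30:
        return {"goal": "lose 5-10% body weight", "plan": "moderate aerobic"}
    return {"goal": "maintain weight", "plan": "light activity"}

# CBR: pick the top-3 activities from the past case most similar to the user.
cases = [({"bmi": 30.5}, ["walking", "cycling", "swimming", "running"]),
         ({"bmi": 24.0}, ["yoga", "jogging", "stretching"])]
def cbr(u, k=3):
    best = min(cases, key=lambda c: abs(c[0]["bmi"] - u["bmi"]))
    return best[1][:k]

# PBR: filter suggestions against personal preferences.
def pbr(u, activities):
    return [a for a in activities if a not in u["dislikes"]]

plan = rbr(user)
activities = pbr(user, cbr(user))
print(plan, activities)
```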
Subjects
Artificial Intelligence; Decision Making, Computer-Assisted; Motor Activity; Humans
ABSTRACT
Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity, as real-time and streaming data in a variety of formats, characteristics that give rise to challenges in modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses a distributed file system (DFS) for Big Data. Current implementations of MR support execution of only a single algorithm per job in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach, using the available computing resources by dynamically managing task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew-mitigation strategies. The performance study of the proposed system shows that it is time-, I/O-, and memory-efficient compared with the default MapReduce, improving execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analyses show a significant performance improvement.
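The core MRPack idea, one scan of the input serving several related algorithms via composite intermediate keys, can be pictured in plain Python; this is a simulation of the scheme, not Hadoop code, and the two toy algorithms are invented for illustration.

```python
from collections import defaultdict

# Single-pass "map" applying a set of related algorithms per record,
# emitting composite (algorithm_id, key) pairs so one job serves all.
records = ["the quick brown fox", "the lazy dog", "quick quick dog"]
algorithms = {
    "wordcount": lambda rec: [(w, 1) for w in rec.split()],
    "charcount": lambda rec: [(c, 1) for c in rec if c != " "],
}

intermediate = defaultdict(list)
for rec in records:                      # one scan of the input data
    for algo_id, mapper in algorithms.items():
        for key, value in mapper(rec):
            intermediate[(algo_id, key)].append(value)   # multi-key scheme

# "Reduce": aggregate per composite key; results stay separable by algorithm.
results = {k: sum(v) for k, v in intermediate.items()}
print(results[("wordcount", "quick")], results[("charcount", "q")])
```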
Subjects
Models, Theoretical; Software; Algorithms
ABSTRACT
A wide array of biomedical data is generated and made available to healthcare experts. However, due to the diverse nature of these data, it is difficult to predict outcomes from them, so it is necessary to combine the diverse data sources into a single unified dataset. This paper proposes a global unified data model (GUDM) that provides a global unified data structure for all data sources and generates a unified dataset through a "data modeler" tool. The proposed tool implements a user-centric, priority-based approach that can easily resolve the problems of unified data modeling and overlapping attributes across multiple datasets. The tool is illustrated using sample diabetes mellitus data: the diverse sources used to generate the unified dataset include clinical trial information, a social media interaction dataset, and physical activity data collected using different sensors. To demonstrate the significance of the unified dataset, we adopted a well-known rough set theory based rule-creation process to create rules from it. Evaluation of the tool on six different sets of locally created diverse datasets shows that it reduces the time effort of experts and knowledge engineers by 94.1% on average while creating unified datasets.
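A small pandas sketch of priority-based resolution of overlapping attributes across sources; the sources, columns, and priority order below are invented for illustration, not the GUDM schema.

```python
import pandas as pd

# Overlapping attributes across sources: resolve by user-assigned priority
# (clinical trial data first, then sensor data, then social media).
clinical = pd.DataFrame({"patient_id": [1, 2], "glucose": [140, None],
                         "weight_kg": [82, 75]}).set_index("patient_id")
sensors = pd.DataFrame({"patient_id": [1, 2], "glucose": [150, 132],
                        "steps": [4200, 8900]}).set_index("patient_id")
social = pd.DataFrame({"patient_id": [1, 2],
                       "mood": ["low", "ok"]}).set_index("patient_id")

unified = clinical
for source in (sensors, social):         # priority order, highest first
    unified = unified.combine_first(source)   # fill only missing attributes
print(unified)
```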
Subjects
Database Management Systems; Information Storage and Retrieval/methods; Medical Informatics Applications; Clinical Trials as Topic; Humans; Social Media
ABSTRACT
Diabetes is a chronic disease characterized by a high blood glucose level resulting either from a deficiency of insulin produced by the body or from the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians improve the diagnosis, prognosis, and treatment of different diseases. Although numerous models have been proposed to address the diagnosis and management of diabetes, they have the following drawbacks: (1) they are restricted to one type of diabetes; (2) they lack understandability and explanatory power in their techniques and decisions; (3) they are limited either to prediction or to management over structured contents; and (4) they cannot cope with the dimensionality and vagueness of patient data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, were used. First, the data were transformed into structured format and then used to mine prediction rules. Rough set theory (RST) based techniques and algorithms were used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians in managing diabetes using knowledge extracted from online diabetes guidelines, with correlation-based trend analysis techniques used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies.
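A toy illustration of applying mined prediction rules to a new patient during the online phase; the conditions and thresholds below are invented stand-ins, not rules actually produced by the RST mining step, and they are not clinically validated.

```python
# Toy rules in the spirit of rough-set mining output; conditions and
# thresholds are invented for illustration only.
rules = [
    ({"fasting_glucose": lambda v: v >= 126, "age": lambda v: v < 30,
      "autoantibodies": lambda v: v == "present"}, "T1DM"),
    ({"fasting_glucose": lambda v: v >= 126, "bmi": lambda v: v >= 30},
     "T2DM"),
]

def predict(patient):
    """Return the label of the first rule whose conditions all hold."""
    for conditions, label in rules:
        if all(cond(patient[attr]) for attr, cond in conditions.items()):
            return label
    return "undetermined"

patient = {"fasting_glucose": 131, "age": 52, "bmi": 33,
           "autoantibodies": "absent"}
print(predict(patient))   # -> T2DM
```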