Results 1 - 9 of 9
1.
Comput Biol Med ; 164: 107295, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37557053

ABSTRACT

Machine learning facilitates the early diagnosis and personalised treatment of diseases. Data quality affects diagnosis because medical data are usually sparse, imbalanced, and contain irrelevant attributes, resulting in suboptimal diagnosis. To address these data challenges, improve resource allocation, and achieve better health outcomes, a novel visual learning approach is proposed. This study contributes to the visual learning approach by determining whether less or more synthetic data are required to improve the quality of a dataset (for example, the number of observations and features), according to the intended personalised treatment and early diagnosis. In addition, numerous visualisation experiments are conducted, using statistical characteristics, cumulative sums, histograms, correlation matrices, root mean square error, and principal component analysis to visualise both original and synthetic data and so address the data challenges. Real medical datasets for cancer, heart disease, diabetes, cryotherapy and immunotherapy are selected as case studies. Several models, such as k-Nearest Neighbours and Random Forest, are implemented as benchmarks for classification comparison in terms of accuracy, sensitivity, and specificity. A Generative Adversarial Network is used to create and manipulate synthetic data, whilst a Random Forest is implemented to classify the data. An amendable and adaptable system is constructed by combining the Generative Adversarial Network and Random Forest models; the system model is presented through its working steps, an overview and a flowchart. Experiments reveal that the majority of data-enhancement scenarios allow visual learning to be applied in the first stage of data analysis as a novel approach.
To achieve a meaningful, adaptable synergy between appropriately high-quality data and optimal classification performance while maintaining statistical characteristics, visual learning provides researchers and practitioners with practical human-in-the-loop machine learning visualisation tools. Prior to implementing algorithms, the visual learning approach can be used to realise early and personalised diagnosis. For the immunotherapy data, the Random Forest performed best, with precision, recall, f-measure, accuracy, sensitivity, and specificity of 81%, 82%, 81%, 88%, 95%, and 60% on the original data, as opposed to 91%, 96%, 93%, 93%, 96%, and 73% on the synthetic data, respectively. Future studies might examine optimal strategies for balancing the quantity and quality of medical data.
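The metrics reported above are all derived from a binary confusion matrix. As a minimal illustrative sketch (not the paper's code; the toy labels and predictions below are invented), they can be computed as follows:

```python
def binary_metrics(y_true, y_pred):
    """Compute precision, recall/sensitivity, F1, accuracy and
    specificity from binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0  # recall = sensitivity
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    accuracy = (tp + tn) / len(y_true)
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return {"precision": precision, "recall": recall, "f1": f1,
            "accuracy": accuracy, "sensitivity": recall,
            "specificity": specificity}

# Invented toy data: 1 = responds to treatment, 0 = does not.
y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
print(binary_metrics(y_true, y_pred))
```

The same definitions apply whether the classifier was trained on original or on synthetically augmented data, which is what makes the two sets of percentages above directly comparable.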


Subject(s)
Early Detection of Cancer , Precision Medicine , Humans , Algorithms , Machine Learning , Delivery of Health Care
2.
Health Policy ; 132: 104827, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37099856

ABSTRACT

Effective strategic workforce planning for integrated and co-ordinated health and social care is essential if future services are to be resourced such that skill mix, clinical practice and productivity meet population health and social care needs in timely, safe and accessible ways globally. This review presents international literature to illustrate how strategic workforce planning in health and social care has been undertaken around the world, with examples of planning frameworks, models and modelling approaches. The databases Business Source Premier, CINAHL, Embase, Health Management Information Consortium, Medline and Scopus were searched for full texts, from 2005 to 2022, detailing empirical research, models or methodologies to explain how strategic workforce planning (with at least a one-year horizon) in health and/or social care has been undertaken, ultimately yielding 101 included references. The supply and demand of a differentiated medical workforce was discussed in 25 references. Nursing and midwifery were characterised as undifferentiated labour, requiring urgent growth to meet demand. Unregistered workers were poorly represented, as was the social care workforce. One reference considered planning for both health and social care workers. Workforce modelling was illustrated in 66 references, with a predilection for quantifiable projections. Increasingly, needs-based approaches were called for, to better consider demographic and epidemiological impacts. This review's findings advocate whole-system, needs-based approaches that consider the ecology of a co-produced health and social care workforce.


Subject(s)
Health Personnel , Health Services Needs and Demand , Humans , Workforce , Forecasting
3.
Dementia (London) ; 18(3): 1060-1074, 2019 Apr.
Article in English | MEDLINE | ID: mdl-28358268

ABSTRACT

The reuse of existing datasets to identify mechanisms for improving healthcare quality has been widely encouraged, but there has been limited application within dementia care. Dementia Care Mapping is an observational tool in widespread use, predominantly to assess and improve quality of care in single organisations. Dementia Care Mapping data have the potential to be used for secondary purposes to improve quality of care; however, their suitability for such use requires careful evaluation. This study conducted in-depth interviews with 29 Dementia Care Mapping users to identify issues, concerns and challenges regarding the secondary use of Dementia Care Mapping data. Data were analysed using modified Grounded Theory. Major themes identified included the need to collect complementary contextual data in addition to Dementia Care Mapping data, the need to reassure users regarding ethical issues associated with the storage and reuse of care-related data, and the need to assess and specify data quality for any data that might be made available for secondary analysis.


Subject(s)
Datasets as Topic , Dementia/therapy , Patient-Centered Care , Quality Improvement , Computer Security , Female , Grounded Theory , Humans , Interviews as Topic , Quality of Health Care
4.
Toxicol Res (Camb) ; 6(1): 42-53, 2017 Jan 01.
Article in English | MEDLINE | ID: mdl-28261444

ABSTRACT

Two approaches are presented for predicting which of two vehicles will result in lower toxicity for anticancer agents. Machine-learning models are developed using decision tree, random forest and partial least squares methodologies, and statistical evidence is presented to demonstrate that they represent valid models. Separately, a clustering method is presented that allows vehicles to be ordered by the toxicity they show for chemically related compounds.

5.
J Cheminform ; 5(1): 16, 2013 Mar 22.
Article in English | MEDLINE | ID: mdl-23517649

ABSTRACT

Predictive toxicology is concerned with the development of models that are able to predict the toxicity of chemicals. A reliable prediction of the toxic effects of chemicals in living systems is highly desirable in cosmetics, drug design and food protection, to speed up the process of chemical compound discovery while reducing the need for lab tests. There is an extensive literature on best practice for model generation and data integration, but the management and automated identification of relevant models from available collections of models is still an open problem. Currently, the decision on which model should be used for a new chemical compound is left to users. This paper intends to initiate a discussion on automated model identification. We present an algorithm, based on Pareto optimality, which mines model collections and identifies a model that offers a reliable prediction for a new chemical compound. The performance of this new approach is verified for two endpoints: IGC50 and LogP. The results show great potential for automated model identification methods in predictive toxicology.
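The Pareto-optimality idea can be sketched in a few lines (a hypothetical simplification, not the authors' algorithm; the model names, scores and the choice of the two objectives are invented). Each candidate model is scored on two objectives for the query compound, for example historical accuracy and similarity of the compound to the model's training domain, and only non-dominated models are retained:

```python
def pareto_front(models):
    """Return names of models not dominated on (accuracy, similarity).
    A model dominates another if it is >= on both objectives and
    strictly > on at least one."""
    front = []
    for name, acc, sim in models:
        dominated = any(
            a >= acc and s >= sim and (a > acc or s > sim)
            for n, a, s in models if n != name
        )
        if not dominated:
            front.append(name)
    return front

# Hypothetical model collection: (name, accuracy, similarity to query compound).
models = [("M1", 0.90, 0.40), ("M2", 0.80, 0.70),
          ("M3", 0.70, 0.90), ("M4", 0.60, 0.50)]
print(pareto_front(models))  # M4 is dominated by M2, so it is dropped
```

A final selection rule (e.g. preferring the front member most similar to the query compound) would then pick one model from the surviving set.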

6.
Mol Inform ; 32(1): 65-78, 2013 Jan.
Article in English | MEDLINE | ID: mdl-27481024

ABSTRACT

Quality assessment (QA) requires high levels of domain-specific experience and knowledge. QA tasks for toxicological data are usually performed manually by human experts, although a number of quality evaluation schemes have been proposed in the literature. For instance, the most widely utilised Klimisch scheme defines four data quality categories in order to tag data instances with respect to their quality; ToxRTool is an extension of the Klimisch approach aiming to increase its transparency and harmonisation. Note that the processes of QA in many other areas have been automated by employing expert systems. Briefly, an expert system is a computer program that uses a knowledge base built upon human expertise, and an inference engine that mimics the reasoning processes of human experts to infer new statements from incoming data. In particular, expert systems have been extended to deal with the uncertainty of information by representing uncertain information (such as linguistic terms) as fuzzy sets under the framework of fuzzy set theory, and performing inferences upon fuzzy sets according to fuzzy arithmetic. This paper presents an experimental fuzzy expert system for toxicological data QA, developed on the basis of the Klimisch approach and ToxRTool, in an effort to illustrate the power of expert systems to toxicologists and to examine whether fuzzy expert systems are a viable solution for the QA of toxicological data. This direction still faces great difficulties due to the well-known challenge of toxicological data QA that "five toxicologists may have six opinions". At the same time, this challenge may offer an opportunity for expert systems, because the construction and refinement of the knowledge base could be a converging process for different opinions, which is of significant importance for regulatory policy making under REACH, though a consensus may never be reached.
Also, in order to facilitate the implementation of the Weight of Evidence approaches and in silico modelling proposed by REACH, numerical quality values hold greater appeal than nominal (categorical) ones, and here the proposed fuzzy expert system could help. Most importantly, the processes by which such quality values are derived are fully transparent, and thus comprehensible, to final users, which is another vital point for the policy making specified in REACH. Case studies have been conducted, and this report not only shows the promise of the approach but also demonstrates its difficulties, thus indicating areas for future development.
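One basic building block of such a fuzzy expert system is the membership function. The sketch below is an illustration only: the 0-10 score range and the category names are invented for the example and are not the Klimisch categories as implemented by the authors. It shows how a single numerical quality score can belong, to varying degrees, to overlapping quality categories:

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def quality_memberships(score):
    """Map a numerical quality score in [0, 10] to fuzzy membership
    degrees in three illustrative (invented) categories."""
    return {
        "unreliable": triangular(score, -1, 0, 5),
        "reliable_with_restrictions": triangular(score, 2, 5, 8),
        "reliable": triangular(score, 5, 10, 11),
    }

print(quality_memberships(6.0))
```

A fuzzy inference engine would combine such membership degrees with rules from the knowledge base and defuzzify the result back into a single numerical quality value, which is exactly the kind of transparent, numerical output the abstract argues REACH favours over categorical tags.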

7.
J Cheminform ; 3(1): 24, 2011 Jul 13.
Article in English | MEDLINE | ID: mdl-21752279

ABSTRACT

BACKGROUND: Due to recent advances in data storage and sharing for further data processing in predictive toxicology, there is an increasing need for flexible data representations, secure and consistent data curation, and automated data quality checking. Toxicity prediction involves multidisciplinary data. There are hundreds of collections of chemical, biological and toxicological data that are widely dispersed, mostly in the open literature, professional research bodies and commercial companies. In order to better manage and make full use of such a large amount of toxicity data, there is a trend towards developing functionalities for data governance in predictive toxicology, to formalise a set of processes that guarantee high data quality and better data management. In this paper, data quality mainly refers to quality in a data-storage sense (e.g. accuracy, completeness and integrity) and not in a toxicological sense (e.g. the quality of experimental results). RESULTS: This paper reviews seven widely used predictive toxicology data sources and applications, with a particular focus on their data governance aspects, including data accuracy, data completeness, data integrity, metadata and its management, data availability and data authorisation. This review reveals the current problems (e.g. a lack of systematic and standard measures of data quality) and desirable needs (e.g. better management and further use of captured metadata, and the development of flexible multi-level user access authorisation schemas) of predictive toxicology data source development. The analytical results will help to address a significant gap in toxicology data quality assessment and lead to the development of novel frameworks for predictive toxicology data and model governance. CONCLUSIONS: While the public data sources discussed are well developed, there nevertheless remain some gaps in the development of a data governance framework to support predictive toxicology.
In this paper, data governance is identified as the new challenge in predictive toxicology, and good use of it may provide a promising framework for developing high-quality and easily accessible toxicity data repositories. This paper also identifies important research directions that require further investigation in this area.
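The storage-level notion of data quality used here (accuracy, completeness, integrity) lends itself to automated checks. The following is a minimal sketch only; the field names and validation rules are invented for illustration and are not drawn from the reviewed data sources:

```python
# Hypothetical required fields for a toxicity record.
REQUIRED_FIELDS = {"compound_id", "smiles", "endpoint", "value", "units"}

def completeness(record):
    """Fraction of required fields that are present and non-empty."""
    present = sum(1 for f in REQUIRED_FIELDS
                  if record.get(f) not in (None, ""))
    return present / len(REQUIRED_FIELDS)

def integrity_ok(record):
    """Basic integrity check: the stored toxicity value must be a
    non-negative number (a storage-sense check, not a judgement on
    the underlying experiment)."""
    v = record.get("value")
    return isinstance(v, (int, float)) and v >= 0

record = {"compound_id": "C001", "smiles": "c1ccccc1",
          "endpoint": "LC50", "value": 5.4, "units": ""}
print(completeness(record), integrity_ok(record))
```

Systematic, standard measures of this kind are precisely what the review finds lacking across the surveyed data sources.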

8.
Altern Lab Anim ; 35(1): 25-32, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17411348

ABSTRACT

This paper reports the results of a comparative study of widely used machine learning algorithms applied to predictive toxicology data mining. The machine learning algorithms were chosen for their representativeness and diversity, and were extensively evaluated on seven toxicity data sets taken from real-world applications. Some results based on visual analysis of the correlations of different descriptors with the class values of chemical compounds, and on the relationship between the range of chosen descriptors and the performance of machine learning algorithms, are emphasised from our experiments. Some interesting findings relating to the data and the quality of the models are presented; for example, no specific algorithm appears best for all seven toxicity data sets, and up to five descriptors are sufficient for creating classification models with good accuracy for each toxicity data set. We suggest that, for a specific data set, model accuracy is affected by the feature selection method and the model development technique. Models built with too many or too few descriptors are undesirable, and finding the optimal feature subset appears to be at least as important as selecting appropriate algorithms with which to build a final model.
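The finding that a handful of well-chosen descriptors can suffice can be illustrated with a simple correlation-based descriptor ranking. This is a generic sketch, not one of the feature selection methods evaluated in the paper; the descriptor names and values below are invented:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient; 0.0 if either input is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def top_descriptors(descriptors, y, k=5):
    """Rank descriptors by |correlation| with the class value and keep
    the best k (the study found up to five sufficient)."""
    ranked = sorted(descriptors,
                    key=lambda d: abs(pearson(descriptors[d], y)),
                    reverse=True)
    return ranked[:k]

# Invented toy descriptor table for six compounds (y = toxicity class).
y = [0, 0, 1, 1, 1, 0]
descriptors = {
    "logP":   [1.0, 1.2, 3.1, 2.9, 3.3, 1.1],  # tracks y closely
    "weight": [180, 250, 210, 190, 300, 260],  # weakly related
    "hbd":    [2, 2, 2, 2, 2, 2],              # constant, no signal
}
print(top_descriptors(descriptors, y, k=2))
```

In practice a wrapper or filter method would be cross-validated per data set, which is consistent with the paper's point that the feature selection method itself affects model accuracy.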


Subject(s)
Algorithms , Artificial Intelligence , Databases, Factual , Toxicity Tests/methods , Animals , Bees , Daphnia , Data Interpretation, Statistical , Phenols/toxicity , Predictive Value of Tests , Quail , Reproducibility of Results , Trout
9.
J Chem Inf Comput Sci ; 42(5): 1250-5, 2002.
Article in English | MEDLINE | ID: mdl-12377016

ABSTRACT

While mining a data set of 554 chemicals in order to extract information on their toxicity values, we faced the problem of scaling all the data. There are numerous different approaches to this procedure, and in most cases the choice greatly influences the results. The aim of this paper is twofold. First, we propose a universal scaling procedure for acute toxicity in fish according to Directive 92/32/EEC. Second, we look at how expert preprocessing of the data affects the performance of the quantitative structure-activity relationship (QSAR) approach to toxicity prediction.
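A generic min-max scaling of log-transformed acute toxicity values gives the flavour of the problem. This is an illustrative sketch only, not the Directive 92/32/EEC procedure proposed in the paper; the LC50 values are invented:

```python
import math

def scale_toxicity(lc50_values):
    """Min-max scale -log10(LC50) to [0, 1], so that higher scaled
    values correspond to higher toxicity (lower LC50)."""
    logs = [-math.log10(v) for v in lc50_values]
    lo, hi = min(logs), max(logs)
    return [(x - lo) / (hi - lo) for x in logs]

# Invented LC50 values in mg/L for four chemicals.
print(scale_toxicity([0.1, 1.0, 10.0, 100.0]))
```

The choice of transform and range here is arbitrary, which is exactly the paper's point: different scaling choices change the inputs a QSAR model sees, and hence its predictions.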


Subject(s)
Drug-Related Side Effects and Adverse Reactions , Animals , Computer Simulation , Cyprinidae , Drug Evaluation, Preclinical/statistics & numerical data , Neural Networks, Computer , Pharmaceutical Preparations/chemistry , Quantitative Structure-Activity Relationship