1.
Clin Chem Lab Med; 52(7): 951-8, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24622792

ABSTRACT

Quality indicators (QIs) are fundamental tools that enable users to quantify the quality of all operational processes by comparing them against defined criteria. QI data should be collected over time to identify, correct, and continuously monitor defects and to improve performance and patient safety through the identification and implementation of effective interventions. According to the international standard for medical laboratory accreditation, the laboratory shall establish and periodically review QIs to monitor and evaluate performance throughout critical aspects of the pre-, intra-, and post-analytical processes. However, although some interesting programs on indicators in the total testing process have been developed in some countries, there is no consensus on joint recommendations focusing on the adoption of universal QIs and common terminology across the total testing process. A preliminary agreement was achieved at a Consensus Conference organized in Padua in 2013, after revision of the model of quality indicators (MQI) developed by the Working Group on "Laboratory Errors and Patient Safety" of the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC). The consensually accepted list of QIs, which takes into consideration both their importance and applicability, should be tested by all potentially interested clinical laboratories to identify further steps in the harmonization project.
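The abstract does not specify how individual QIs are calculated; purely as an illustration, the Python sketch below computes a generic pre-analytical indicator as a defect percentage and converts it to a Six Sigma value, a presentation commonly used for laboratory QIs. The indicator name, the counts, and the 1.5-sigma shift convention are assumptions, not details from the paper.

```python
"""Illustrative sketch only: a pre-analytical quality indicator expressed as a
defect percentage, defects per million, and a short-term sigma value. The
indicator, counts, and 1.5-sigma shift are assumptions, not from the abstract."""
from statistics import NormalDist

def quality_indicator(defects: int, total: int) -> dict:
    """Return the defect percentage, defects per million, and sigma metric."""
    rate = defects / total
    dpm = rate * 1_000_000
    # Conventional short-term sigma: z-value of the yield plus a 1.5 shift.
    sigma = NormalDist().inv_cdf(1 - rate) + 1.5
    return {"percent_defective": 100 * rate, "dpm": dpm, "sigma": sigma}

# Example: 42 misidentified samples out of 100,000 received (hypothetical figures).
print(quality_indicator(42, 100_000))
```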


Subject(s)
Clinical Laboratory Techniques/standards; Clinical Medicine/standards; Quality Indicators, Health Care/standards; Humans
2.
Am J Clin Pathol; 129(6): 959-62, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18480014

ABSTRACT

Competency assessment is critical for laboratory operations and is mandated by the Clinical Laboratory Improvement Amendments of 1988. However, no previous reports describe methods for assessing competency in patient safety. We developed and implemented a Web-based tool to assess the patient safety performance of 875 laboratory staff from 29 laboratories. Question categories included workplace culture, categorizing error, prioritization of patient safety interventions, strength of specific interventions, and general patient safety concepts. The mean score was 85.0%, with individual scores ranging from 56% to 100% and scores by category from 81.3% to 88.6%. Of the most difficult questions (<72% correct), 6 were about intervention strength, 3 about categorizing error, 1 about workplace culture, and 1 about prioritization of interventions. Of the 13 questions about intervention strength, 6 (46%) were in the lowest quartile, suggesting that this may be a difficult topic for laboratory technologists. Computer-based competency assessments help laboratories identify topics for continuing education in patient safety.
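As an illustration of the kind of scoring such a Web-based tool might perform, here is a minimal sketch (not the authors' implementation) that computes an overall percentage score and per-category scores from a hypothetical set of graded responses.

```python
"""Minimal sketch (not the authors' tool): scoring a web-based competency
assessment overall and by question category. Field names and data are hypothetical."""
from collections import defaultdict

def score_by_category(responses):
    """responses: iterable of (category, is_correct) pairs."""
    totals, correct = defaultdict(int), defaultdict(int)
    for category, is_correct in responses:
        totals[category] += 1
        correct[category] += int(is_correct)
    overall = 100 * sum(correct.values()) / sum(totals.values())
    by_cat = {c: 100 * correct[c] / totals[c] for c in totals}
    return overall, by_cat

answers = [("workplace culture", True), ("categorizing error", False),
           ("intervention strength", True), ("intervention strength", False)]
overall, by_category = score_by_category(answers)
print(f"overall {overall:.1f}%", by_category)
```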


Subject(s)
Clinical Competence/standards; Laboratories, Hospital/standards; Medical Errors/prevention & control; Medical Laboratory Science/standards; Safety Management; Humans; Medical Errors/statistics & numerical data; Surveys and Questionnaires; United States
3.
J Appl Lab Med; 2(2): 259-268, 2017 Sep 01.
Article in English | MEDLINE | ID: mdl-32630981

ABSTRACT

Appropriate utilization of clinical laboratory services is important for patient care and requires institutional stewardship. Clinical laboratory stewardship programs are dedicated to improving the ordering, retrieval, and interpretation of appropriate laboratory tests. In addition, these programs focus on developing, maintaining, and improving systems to provide proper financial coverage for medically necessary testing. Overall, clinical laboratory stewardship programs help clinicians improve the quality of patient care while reducing costs to patients, hospitals, and health systems. This document, which was created by a new multi-institutional committee interested in promoting and formalizing laboratory stewardship, summarizes the core elements of successful hospital-based clinical laboratory stewardship programs. The core elements will also be helpful for independent commercial clinical laboratories.

4.
Am J Clin Pathol; 125(1): 28-33, 2006 Jan.
Article in English | MEDLINE | ID: mdl-16482988

ABSTRACT

We used a computer-based competency assessment tool for Gram stain interpretation to assess the performance of 278 laboratory staff from 40 laboratories on 40 multiple-choice questions. We report test reliability, mean scores, median, item difficulty, discrimination, and analysis of the highest- and lowest-scoring questions. The questions were reliable (KR-20 coefficient, 0.80). Overall mean score was 88% (range, 63%-98%). When categorized by cell type, the means were host cells, 93%; other cells (eg, yeast), 92%; gram-positive, 90%; and gram-negative, 88%. When categorized by type of interpretation, the means were other (eg, underdecolorization), 92%; identify by structure (eg, bacterial morphologic features), 91%; and identify by name (eg, genus and species), 87%. Of the 6 highest-scoring questions (mean scores ≥99%), 5 were identify by structure and 1 was identify by name. Of the 6 lowest-scoring questions (mean scores <75%), 5 were gram-negative and 1 was host cells. By type of interpretation, 2 were identify by structure and 4 were identify by name. Computer-based Gram stain competency assessment examinations are reliable. Our analysis helps laboratories identify areas for continuing education in Gram stain interpretation and will direct future revisions of the tests.
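The KR-20 reliability coefficient cited above has a standard closed form; the sketch below computes it from a hypothetical matrix of dichotomous (correct/incorrect) item responses. The toy data are invented and are not from the study.

```python
"""Sketch of the Kuder-Richardson 20 (KR-20) reliability coefficient, computed
from a hypothetical matrix of dichotomous item scores (rows = examinees,
columns = items). The data below are toy values, not the study's responses."""
from statistics import pvariance

def kr20(scores):
    n_items = len(scores[0])
    # p_i = proportion correct on item i; q_i = 1 - p_i
    p = [sum(row[i] for row in scores) / len(scores) for i in range(n_items)]
    pq_sum = sum(pi * (1 - pi) for pi in p)
    total_var = pvariance([sum(row) for row in scores])
    return (n_items / (n_items - 1)) * (1 - pq_sum / total_var)

# Toy data: 4 examinees answering 5 items (1 = correct, 0 = incorrect).
matrix = [[1, 1, 1, 0, 1],
          [1, 0, 1, 0, 0],
          [1, 1, 1, 1, 1],
          [0, 0, 1, 0, 0]]
print(round(kr20(matrix), 2))
```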


Subject(s)
Computer-Assisted Instruction/methods; Gentian Violet; Medical Laboratory Science/standards; Phenazines; Professional Competence/standards; Staining and Labeling/standards; Gram-Negative Bacteria/isolation & purification; Gram-Positive Bacteria/isolation & purification; Humans; Laboratories/standards; Medical Laboratory Science/education; Surveys and Questionnaires/standards; United States
5.
Am J Clin Pathol; 146(2): 221-6, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27473740

ABSTRACT

OBJECTIVES: To characterize error rates for genetic test orders between medical specialties and in different settings by examining detailed order information. METHODS: We performed a retrospective analysis of a detailed utilization management case database, comprising 2.5 years of data and almost 1,400 genetic test orders. After review by multiple reviewers, we categorized order modifications and cancellations, quantified rates of positive results and order errors, and compared genetics with nongenetics providers and inpatient with outpatient orders. RESULTS: High cost or problems with preauthorization were the most common reasons for modification and cancellation, respectively. The cancellation rate for nongenetics providers was three times the rate for geneticists, but abnormal result rates were similar between the two groups. The approval rate for inpatient orders was not significantly lower than outpatient orders, and abnormal result rates were similar for these two groups as well. Order error rates were approximately 8% among tests recommended by genetics providers in the inpatient setting, and tests ordered or recommended by nongeneticists had error rates near 5% in both inpatient and outpatient settings. CONCLUSIONS: Clinicians without specialty training in genetics make genetic test order errors at a significantly higher rate than geneticists. A laboratory utilization management program prevents these order errors from becoming diagnostic errors and reaching the patient.
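The abstract reports group comparisons without raw counts; purely as an illustration of one way such rates could be compared, here is a two-proportion z-test sketch with made-up numbers (the authors' actual statistical methods are not described here).

```python
"""Illustrative two-proportion z-test of the kind that could compare order-error
or cancellation rates between geneticists and nongenetics providers. The counts
below are invented; the abstract does not give raw numbers."""
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Return the z statistic and two-sided p-value for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 30/400 erroneous orders from nongeneticists vs 10/400 from geneticists.
z, p = two_proportion_z(30, 400, 10, 400)
print(f"z = {z:.2f}, p = {p:.4f}")
```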


Subject(s)
Genetic Techniques; Medical Errors/prevention & control; Quality Assurance, Health Care/methods; Humans; Medical Order Entry Systems; Retrospective Studies
8.
Am J Clin Pathol; 118(4): 494-500, 2002 Oct.
Article in English | MEDLINE | ID: mdl-12375634

ABSTRACT

The University of Washington, Seattle, has developed educational software for clinical laboratories. We used a 32-question survey to study software implementation. Of the 106 clinical laboratories that purchased the software and completed the survey (response rate, 60%), the 89 (84%) that reported using the software formed the basis for the study. The most common software users were laboratory personnel, followed by medical technologist or medical laboratory technician students, residents, and medical students; the mean (SD) number of personnel categories using the software per laboratory was 1.8 (0.8). The most common reasons for use were initial instruction, cross-training, and competency assessment. The most frequent setting for software use was an area where laboratory testing occurred, followed by a dedicated training location, a location chosen by the employee, a classroom, and a distance learning mode. On a scale of 1 (poor) to 5 (excellent), the average satisfaction rating was 4.4 as an instructional tool and 4.2 as a competency assessment tool. Compared with laboratories in hospitals with 400 beds or fewer, laboratories in hospitals with more than 400 beds used the software for more categories of users (P = .008), were more likely to use it for residents (P = .003), and were more likely to have dedicated training areas (P = .02).


Subject(s)
Clinical Laboratory Techniques; Computer-Assisted Instruction; Medical Laboratory Science/education; Pathology, Clinical/education; Software; Humans; Medical Laboratory Science/instrumentation; Medical Laboratory Science/methods; Surveys and Questionnaires
9.
Am J Clin Pathol; 120(1): 18-26, 2003 Jul.
Article in English | MEDLINE | ID: mdl-12866368

ABSTRACT

We developed a laboratory incident report classification system that can guide reduction of actual and potential adverse events. The system was applied retrospectively to 129 incident reports occurring during a 16-month period. Incidents were classified by type of adverse event (actual or potential), specific and potential patient impact, nature of laboratory involvement, testing phase, and preventability. Of 129 incidents, 95% were potential adverse events. The most common specific impact was delay in receiving test results (85%). The average potential impact was 2.9 (SD, 1.0; median, 3; scale, 1-5). The laboratory alone was responsible for 60% of the incidents; 21% were due solely to problems outside the laboratory's authority. The laboratory function most frequently implicated in incidents was specimen processing (31%). The preanalytic testing phase was involved in 71% of incidents, the analytic in 18%, and the postanalytic in 11%. The most common preanalytic problem was specimen transportation (16%). The average preventability score was 4.0 (range, 1-5; median, 4; scale, 1-5), and 94 incidents (73%) were preventable (score, 3 or more). Of the 94 preventable incidents, 30% involved cognitive errors, defined as incorrect choices caused by insufficient knowledge, and 73% involved noncognitive errors, defined as inadvertent or unconscious lapses in expected automatic behavior.
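As a sketch of how such an incident classification might be represented and summarized, the following hypothetical record structure tallies the fraction of incidents judged preventable (score of 3 or more, as in the abstract). The field names and values are assumptions, not the authors' schema.

```python
"""Sketch of the kind of record an incident-report classification system might
hold, and how the preventable fraction (preventability score >= 3) could be
tallied. Field names and example data are assumptions, not the authors' schema."""
from dataclasses import dataclass

@dataclass
class Incident:
    adverse_event: str        # "actual" or "potential"
    testing_phase: str        # "preanalytic", "analytic", or "postanalytic"
    potential_impact: int     # 1 (negligible) to 5 (severe)
    preventability: int       # 1 (not preventable) to 5 (clearly preventable)

def preventable_fraction(incidents, threshold=3):
    preventable = sum(1 for i in incidents if i.preventability >= threshold)
    return preventable / len(incidents)

reports = [Incident("potential", "preanalytic", 3, 4),
           Incident("potential", "analytic", 2, 2),
           Incident("actual", "preanalytic", 4, 5)]
print(f"{100 * preventable_fraction(reports):.0f}% preventable")
```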


Subject(s)
Hospital Information Systems; Laboratories, Hospital/standards; Patient Care; Risk Management/classification; Safety; Hospitals, University; Humans; Medical Errors/prevention & control; Retrospective Studies; Risk Management/methods; Specimen Handling
10.
Clin Chim Acta; 434: 1-5, 2014 Jul 01.
Article in English | MEDLINE | ID: mdl-24685573

ABSTRACT

OBJECTIVE: Errors associated with laboratory testing can cause significant patient harm. Sendout testing refers to tests sent by a primary lab to a reference lab when testing is unavailable at the primary lab. Sendout testing carries a particularly high risk of patient harm, due to many factors, including increased hand-offs, manual processes, and the complexity associated with rare, low-volume tests. No published prospective tools exist for sendout risk assessment. METHODS: A novel prospective tool was developed to assess the risk of diagnostic errors involving laboratory sendout testing. This tool was successfully piloted at nine sites. RESULTS: Marked diversity was noted among survey respondents, particularly in the sections on quality metrics and utilization management. Of note, most sites had committees that managed rules for test ordering, but few reported enforcing these rules. Only one site claimed to routinely measure the frequency with which clinicians failed to retrieve test results. An evaluation of the tool indicated that it was both useful and easy to use. CONCLUSIONS: This tool could be used by other laboratories to identify the areas of highest risk to patients, which in turn may guide them in focusing their quality improvement efforts and resources.


Subject(s)
Clinical Laboratory Services/standards; Clinical Laboratory Techniques/standards; Laboratories/standards; Medical Errors/prevention & control; Quality Assurance, Health Care/methods; Data Collection; Quality Control; Risk Assessment; United States
11.
Arch Pathol Lab Med; 138(1): 110-3, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24377818

ABSTRACT

CONTEXT: Tests performed outside of the ordering institution (send-out tests) represent an area of risk to patients because of the complexity of the send-out process. Risks related to send-out tests include an increased number of handoffs, ordering the wrong or unnecessary test, specimen delays, data entry errors, preventable delays in reporting and acknowledging results, and excess financial liability. Many of the most expensive and most misunderstood tests are send-out genetic tests. OBJECTIVE: To design and develop an active utilization management program to reduce the risk to patients and improve the value of genetic send-out tests. DESIGN: Send-out test requests that met defined criteria were reviewed by a rotating team of doctoral-level consultants and a genetic counselor in a pediatric tertiary care center. RESULTS: Two hundred fifty-one cases were reviewed during an 8-month period. After review, nearly one-quarter of genetic test requests were modified downward, saving a total of 2% of the entire send-out bill and 19% of the cost of the test requests under management. Ultimately, these savings were passed on to patients. CONCLUSIONS: Implementing an active utilization strategy for expensive send-out tests can be achieved with minimal technical resources and results in improved value of testing to patients.


Subject(s)
Genetic Testing/economics; Genetic Testing/statistics & numerical data; Laboratories/economics; Laboratories/statistics & numerical data; Humans
12.
Am J Clin Pathol; 139(1): 118-23, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23270907

ABSTRACT

The FilmArray respiratory virus panel detects 15 viral agents in respiratory specimens using polymerase chain reaction. We performed FilmArray respiratory viral testing in a core laboratory at a regional children's hospital that provides service 24 hours a day, 7 days a week. The average and median turnaround times were 1.6 and 1.4 hours, respectively, in contrast to the 7 and 6.5 hours documented 1 year previously at an on-site reference laboratory using a direct fluorescence assay (DFA) that detected 8 viral agents. During the study period, rhinovirus was detected in 20% and coronavirus in 6% of samples using FilmArray; these viruses would not have been detected with DFA. We followed 97 patients with influenza A or influenza B who received care in the emergency department (ED). Overall, 79 patients (81%) were given oseltamivir in a timely manner, defined as receiving the drug in the ED, a prescription in the ED, or a prescription within 3 hours of ED discharge. Our results demonstrate that molecular technology can be successfully deployed in a nonspecialty, high-volume, multidisciplinary core laboratory.
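Turnaround time here is the interval from specimen receipt to result, summarized as a mean and median; a minimal sketch of that calculation, with invented timestamps, is shown below.

```python
"""Minimal sketch of the turnaround-time summary described in the abstract:
hours from specimen receipt to result, reported as mean and median. The
timestamps below are invented for illustration."""
from datetime import datetime
from statistics import mean, median

def turnaround_hours(received, resulted):
    return (resulted - received).total_seconds() / 3600

pairs = [
    (datetime(2024, 1, 5, 8, 10), datetime(2024, 1, 5, 9, 40)),
    (datetime(2024, 1, 5, 22, 0), datetime(2024, 1, 6, 0, 5)),
    (datetime(2024, 1, 6, 14, 30), datetime(2024, 1, 6, 15, 55)),
]
tats = [turnaround_hours(r, d) for r, d in pairs]
print(f"mean {mean(tats):.1f} h, median {median(tats):.1f} h")
```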


Subject(s)
RNA Viruses/isolation & purification; Respiratory Tract Infections/diagnosis; Virology/methods; Virus Diseases/diagnosis; Adolescent; Antigens, Viral/analysis; Child; Child, Preschool; Coronavirus/genetics; Coronavirus/immunology; Coronavirus/isolation & purification; Early Diagnosis; Humans; Infant; Influenza A virus/genetics; Influenza A virus/immunology; Influenza A virus/isolation & purification; Influenza B virus/genetics; Influenza B virus/immunology; Influenza B virus/isolation & purification; Molecular Diagnostic Techniques; Multiplex Polymerase Chain Reaction; RNA Viruses/genetics; RNA Viruses/immunology; RNA, Viral/isolation & purification; Respiratory Tract Infections/virology; Rhinovirus/genetics; Rhinovirus/immunology; Rhinovirus/isolation & purification; Time Factors; Virus Diseases/virology; Young Adult
13.
Am J Clin Pathol; 135(5): 760-5, 2011 May.
Article in English | MEDLINE | ID: mdl-21502431

ABSTRACT

Physicians are urged to communicate more openly following medical errors, but little is known about pathologists' attitudes about reporting errors to their institution and disclosing them to patients. We undertook a survey to characterize pathologists' and laboratory medical directors' attitudes and experience regarding the communication of errors with hospitals, treating physicians, and affected patients. We invited 260 practicing pathologists and 81 academic hospital laboratory medical directors to participate in a self-administered survey. This survey included questions regarding estimated error rates and barriers to and experience with error disclosure. The majority of respondents (~95%) reported having been involved with an error, and respondents expressed near unanimous belief that errors should be disclosed to hospitals, colleagues, and patients; however, only about 48% thought that current error reporting systems were adequate. In addition, pathologists expressed discomfort with their communication skills in regard to error disclosure. Improving error reporting systems and developing robust disclosure training could help prevent future errors, improving patient safety and trust.


Subject(s)
Medical Errors; Pathology, Clinical; Attitude of Health Personnel; Clinical Laboratory Techniques; Hospitals; Humans; Physician Executives; Physician-Patient Relations; Physicians; Surveys and Questionnaires; Truth Disclosure
15.
Clin Chem; 53(1): 134-7, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17040954

ABSTRACT

BACKGROUND: Training of clinical pathologists is evolving and must now address the 6 core competencies described by the Accreditation Council for Graduate Medical Education (ACGME), which include patient care. A substantial portion of the patient care performed by the clinical pathology resident takes place while the resident is on call for the laboratory, a practice that provides the resident with clinical experience and assists the laboratory in providing quality service to clinicians in the hospital and surrounding community. Documenting the educational value of these on-call experiences and providing evidence of competence is difficult for residency directors. An online database of these calls, entered by residents and reviewed by faculty, would provide a mechanism for documenting and improving the education of clinical pathology residents. METHODS: Using Microsoft Access, we developed an online database that uses Active Server Pages and Secure Sockets Layer encryption to document calls to the clinical pathology resident. Using the data collected, we evaluated the efficacy of 3 interventions aimed at improving resident education. RESULTS: The database facilitated the documentation of more than 4,700 calls in the first 21 months it was online, provided archived resident-generated data to assist in serving clients, and demonstrated that 2 interventions aimed at improving resident education were successful. CONCLUSIONS: We have developed a secure online database, accessible from any computer with Internet access, that can be used to easily document clinical pathology resident education and competency.
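The authors' database was built in Microsoft Access behind Active Server Pages; as a loose, hypothetical analogue only, this sqlite3 sketch shows the kind of call-log table such a system might contain. The column names are assumptions, not their schema.

```python
"""Hypothetical analogue of the authors' Access/ASP call log, sketched with
sqlite3. Table and column names are assumptions, not the published schema."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE resident_calls (
        id           INTEGER PRIMARY KEY,
        call_time    TEXT NOT NULL,       -- ISO 8601 timestamp
        resident     TEXT NOT NULL,
        service      TEXT,                -- e.g. transfusion, chemistry, micro
        question     TEXT,
        resolution   TEXT,
        reviewed_by  TEXT                 -- faculty reviewer, filled in later
    )
""")
conn.execute(
    "INSERT INTO resident_calls (call_time, resident, service, question) "
    "VALUES (?, ?, ?, ?)",
    ("2024-03-01T02:15", "resident_a", "transfusion", "massive transfusion protocol"),
)
for row in conn.execute("SELECT resident, service, question FROM resident_calls"):
    print(row)
```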


Subject(s)
Curriculum; Databases, Factual; Internship and Residency; Online Systems; Pathology, Clinical/education; Clinical Competence; Humans; Internet; Personnel Staffing and Scheduling
16.
Teach Learn Med; 19(2): 106-14, 2007.
Article in English | MEDLINE | ID: mdl-17564537

ABSTRACT

BACKGROUND: Little is known about strategies for developing teaching cases or for identifying design features that optimize a learner's interactions with Web-based cases. PURPOSES: We examined design features in Web cases that facilitated interactive and engaging learning. METHODS: Nine collaborators reviewed selected Web cases and documented the presence of features that facilitate interactive learning, including opportunities for information gathering, decision making, and receiving feedback. RESULTS: Eighteen Web sites offered cases. The cases were mainly narrated from biomedical information, without patient voices, and were organized in a linear structure from patient presentation to follow-up. Many sites presented only a single case. We found little use of features for augmenting a learner's interaction with cases, and only a handful of cases generated feedback on the basis of the learners' responses. CONCLUSION: Our study suggests ways to improve the development of Web cases and can inform future research on testing cases for educational effectiveness.


Subject(s)
Education, Medical; Internet/statistics & numerical data; Teaching; User-Computer Interface; Humans; Washington
17.
J Clin Microbiol; 43(5): 2188-93, 2005 May.
Article in English | MEDLINE | ID: mdl-15872240

ABSTRACT

We developed a strategy to determine the clinical impact associated with errors in clinical microbiology testing. Over a 9-month period, we used a sequential three-stage method to prospectively evaluate 480 consecutive corrected microbiology laboratory reports. The three stages were physician review of the corrected report, medical record review, and interview with the clinician(s) taking care of the patient. Of the 480 corrected reports, 301 (62.7%) were ruled out for significant clinical impact by physician review and an additional 25 cases (5.2%) were ruled out for clinical impact by medical record review. This left 154 cases (32.1%) that required clinician interview to determine clinical impact. The clinician interview revealed that 32 (6.7%) of the corrected reports were associated with adverse clinical impact. Of these 32 cases, 19 (59.4%) involved delayed therapy, 8 (25.0%) involved unnecessary therapy, 8 (25.0%) were associated with inappropriate therapy, and 4 (12.5%) were associated with an increased level of care. The laboratory was entirely responsible for the error in 28 (87.5%) of the 32 cases and partially responsible in the other 4 cases (12.5%). Twenty-six (81.3%) of the 32 cases involved potentially preventable analytic errors that were due to lack of knowledge (cognitive error). In summary, we used evaluation of corrected reports to identify laboratory errors with adverse clinical impact, and most of the errors were amenable to laboratory-based interventions. Our method has the potential to be implemented in other laboratory settings to identify and characterize errors that impact patient safety.
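The sequential three-stage review can be thought of as a short-circuiting pipeline: each corrected report stops at the first stage able to rule clinical impact in or out. The sketch below illustrates that control flow with placeholder stage logic, not the authors' actual review criteria.

```python
"""Sketch of the sequential three-stage review described in the abstract:
physician review, then chart review, then clinician interview, with each report
stopping at the first stage that reaches a decision. Stage logic is placeholder."""

def triage(report, stages):
    """Return (stage_name, verdict) from the first stage that reaches a decision."""
    for name, assess in stages:
        verdict = assess(report)          # "no impact", "impact", or None (undecided)
        if verdict is not None:
            return name, verdict
    return "unresolved", None

stages = [
    ("physician review", lambda r: "no impact" if r.get("trivial") else None),
    ("chart review", lambda r: "no impact" if r.get("no_change_in_care") else None),
    ("clinician interview", lambda r: r.get("interview_verdict")),
]
print(triage({"trivial": False, "no_change_in_care": False,
              "interview_verdict": "impact"}, stages))
```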


Subject(s)
Bacteriological Techniques; Microbiology/standards; Bacteria/classification; Bacteria/isolation & purification; Bacterial Infections/diagnosis; Bacteriological Techniques/standards; Humans; Reproducibility of Results; Sensitivity and Specificity
18.
Clin Chem Lab Med; 41(5): 711-3, 2003 May.
Article in English | MEDLINE | ID: mdl-12812272

ABSTRACT

This study explores the feasibility of using computer tutorials to train laboratory personnel in Nepal. Training incorporated three software programs that teach microscope-based laboratory tests (peripheral blood smears, urinalysis, Gram stains). Forty-seven participants attended training sessions and completed a questionnaire. The participants' overall perception was: 1) the software was superior to formal lectures for learning image-based laboratory tests (43 participants, 92%); 2) the software would enhance job performance (43 participants, 92%); 3) more subjects should be taught using software (40 participants, 85%); and 4) the software helped participants learn new materials (38 participants, 81%). Considering that 79% of the participants were novice computer users, it is noteworthy that 38 (81%) participants thought the method of instruction was easy to understand. Factors contributing to learning included: 1) the resemblance of the computer images to actual microscope images derived from patient samples (37 participants, 68%); 2) the use of multiple examples of cells and other microscopic structures (28 participants, 60%); 3) the ability to interact with images and animations (23 participants, 49%); 4) the step-by-step explanation of laboratory techniques (21 participants, 45%); and 5) the self-pacing of the tutorial (12 participants, 26%). Overall, the pilot study suggests that educational software could help train clinical laboratory personnel in developing countries.


Subject(s)
Clinical Laboratory Techniques; Computer-Assisted Instruction/methods; Developing Countries; Image Interpretation, Computer-Assisted; Medical Laboratory Personnel/education; Software; Adolescent; Adult; Female; Humans; Male; Nepal; Pilot Projects; Teaching/methods