Results 1 - 20 of 52
2.
Int J Med Inform ; 130: 103938, 2019 10.
Article in English | MEDLINE | ID: mdl-31442847

ABSTRACT

OBJECTIVE: To assess the role of speech recognition (SR) technology in clinicians' documentation workflows by examining use of, experience with, and opinions about this technology. MATERIALS AND METHODS: We distributed a survey in 2016-2017 to 1731 clinician SR users at two large medical centers in Boston, Massachusetts, and Aurora, Colorado. The survey asked about demographic and clinical characteristics; SR use and preferences; perceived accuracy, efficiency, and usability of SR; and overall satisfaction. Associations between outcomes (e.g., satisfaction) and factors (e.g., error prevalence) were measured using ordinal logistic regression. RESULTS: Most respondents (65.3%) had used their SR system for under one year. 75.5% of respondents estimated seeing 10 or fewer errors per dictation, but 19.6% estimated that half or more of those errors were clinically significant. Although 29.4% of respondents did not include SR among their preferred documentation methods, 78.8% were satisfied with SR, and 77.2% agreed that SR improves efficiency. Satisfaction was associated positively with efficiency and negatively with error prevalence and editing time. Respondents were interested in further training about using SR effectively but expressed concerns regarding software reliability, editing, and workflow. DISCUSSION: Compared to other documentation methods (e.g., scribes, templates, typing, traditional dictation), SR has emerged as an effective solution, overcoming limitations inherent in other options and potentially improving efficiency while preserving documentation quality. CONCLUSION: While concerns about SR usability and accuracy persist, clinicians expressed positive opinions about its impact on workflow and efficiency. Faster and better approaches are needed for clinical documentation, and SR is likely to play an important role going forward.


Subject(s)
Documentation/methods , Electronic Health Records/statistics & numerical data , Electronic Health Records/standards , Health Personnel/statistics & numerical data , Medical Errors/statistics & numerical data , Speech Recognition Software/statistics & numerical data , Speech/physiology , Adult , Aged , Boston , Female , Humans , Male , Middle Aged , Perception , Surveys and Questionnaires , Workflow
3.
BMC Res Notes ; 11(1): 690, 2018 Oct 01.
Article in English | MEDLINE | ID: mdl-30285818

ABSTRACT

OBJECTIVE: The purpose of this paper is to extend a previous study by evaluating the use of speech recognition software in a clinical psychiatry milieu. Physicians (n = 55) at a psychiatric hospital participated in a limited implementation and were provided with training, licenses, and relevant devices. Post-implementation usage data were collected via the software. Additionally, a post-implementation survey was distributed 5 months after the technology was introduced. RESULTS: In the first month, 45 out of 51 (88%) physicians were active users of the technology; however, after the full evaluation period only 53% were still active. The average active-user minutes and lines dictated per month remained consistent throughout the evaluation. The use of speech recognition software within a psychiatric setting is of value to some physicians. Our results indicate a post-implementation reduction in adoption, with stable usage among physicians who remained active users. Future studies identifying characteristics of users and/or technology that contribute to ongoing use would be of value.


Subject(s)
Documentation , Hospitals, Psychiatric/statistics & numerical data , Medical Staff, Hospital/statistics & numerical data , Speech Recognition Software/statistics & numerical data , Adult , Humans
4.
JAMA Netw Open ; 1(3): e180530, 2018 07.
Article in English | MEDLINE | ID: mdl-30370424

ABSTRACT

IMPORTANCE: Accurate clinical documentation is critical to health care quality and safety. Dictation services supported by speech recognition (SR) technology and professional medical transcriptionists are widely used by US clinicians. However, the quality of SR-assisted documentation has not been thoroughly studied. OBJECTIVE: To identify and analyze errors at each stage of the SR-assisted dictation process. DESIGN SETTING AND PARTICIPANTS: This cross-sectional study collected a stratified random sample of 217 notes (83 office notes, 75 discharge summaries, and 59 operative notes) dictated by 144 physicians between January 1 and December 31, 2016, at 2 health care organizations using Dragon Medical 360 | eScription (Nuance). Errors were annotated in the SR engine-generated document (SR), the medical transcriptionist-edited document (MT), and the physician's signed note (SN). Each document was compared with a criterion standard created from the original audio recordings and medical record review. MAIN OUTCOMES AND MEASURES: Error rate; mean errors per document; error frequency by general type (eg, deletion), semantic type (eg, medication), and clinical significance; and variations by physician characteristics, note type, and institution. RESULTS: Among the 217 notes, there were 144 unique dictating physicians: 44 female (30.6%) and 10 unknown sex (6.9%). Mean (SD) physician age was 52 (12.5) years (median [range] age, 54 [28-80] years). Among 121 physicians for whom specialty information was available (84.0%), 35 specialties were represented, including 45 surgeons (37.2%), 30 internists (24.8%), and 46 others (38.0%). The error rate in SR notes was 7.4% (ie, 7.4 errors per 100 words). It decreased to 0.4% after transcriptionist review and 0.3% in SNs. Overall, 96.3% of SR notes, 58.1% of MT notes, and 42.4% of SNs contained errors. Deletions were most common (34.7%), then insertions (27.0%). 
Among errors at the SR, MT, and SN stages, 15.8%, 26.9%, and 25.9%, respectively, involved clinical information, and 5.7%, 8.9%, and 6.4%, respectively, were clinically significant. Discharge summaries had higher mean SR error rates than other types (8.9% vs 6.6%; difference, 2.3%; 95% CI, 1.0%-3.6%; P < .001). Surgeons' SR notes had lower mean error rates than other physicians' (6.0% vs 8.1%; difference, 2.2%; 95% CI, 0.8%-3.5%; P = .002). One institution had a higher mean SR error rate (7.6% vs 6.6%; difference, 1.0%; 95% CI, -0.2% to 2.8%; P = .10) but lower mean MT and SN error rates (0.3% vs 0.7%; difference, -0.3%; 95% CI, -0.63% to -0.04%; P = .03 and 0.2% vs 0.6%; difference, -0.4%; 95% CI, -0.7% to -0.2%; P = .003). CONCLUSIONS AND RELEVANCE: Seven in 100 words in SR-generated documents contain errors; many errors involve clinical information. That most errors are corrected before notes are signed demonstrates the importance of manual review, quality assurance, and auditing.
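The error-rate metric above is simply errors per 100 words, recomputed at each stage of the pipeline (SR output, transcriptionist-edited note, signed note). A minimal sketch of that arithmetic; the function name is ours and the counts are illustrative values chosen to reproduce the reported rates, not the study's data:

```python
def error_rate(num_errors, num_words):
    """Errors per 100 words, the metric used to compare SR, MT, and SN notes."""
    return 100.0 * num_errors / num_words

# Illustrative error counts for a hypothetical 1000-word note at each stage
stage_errors = {"SR": 74, "MT": 4, "SN": 3}
rates = {stage: error_rate(n, 1000) for stage, n in stage_errors.items()}
print(rates)  # {'SR': 7.4, 'MT': 0.4, 'SN': 0.3}
```

Normalizing by word count rather than by note is what lets rates be compared across documents of different lengths, such as brief office notes and long discharge summaries.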


Subject(s)
Medical Errors/statistics & numerical data , Medical Records/statistics & numerical data , Medical Records/standards , Speech Recognition Software/statistics & numerical data , Speech Recognition Software/standards , Adult , Aged , Aged, 80 and over , Boston , Clinical Audit , Colorado , Cross-Sectional Studies , Female , Humans , Male , Medical Records Systems, Computerized , Middle Aged , Physicians
6.
BMJ Open ; 7(8): e015597, 2017 08 11.
Article in English | MEDLINE | ID: mdl-28801402

ABSTRACT

OBJECTIVES: This study explored the reasons for patients' non-adherence to cardiometabolic medications and tested the acceptability of interactive voice response (IVR) as a way to address these reasons and support patients between primary care consultations. DESIGN, METHOD, PARTICIPANTS AND SETTING: The study included face-to-face interviews with 19 patients with hypertension and/or type 2 diabetes mellitus, selected from primary care databases and presumed to be non-adherent. Thirteen of these patients pretested elements of the IVR intervention a few months later, using a think-aloud protocol. Five practice nurses were interviewed. Data were analysed using multiperspective and longitudinal thematic analysis. RESULTS: Negative beliefs about taking medications, the complexity of prescribed medication regimens, and a limited ability to cope with the underlying affective state within challenging contexts were mentioned as important reasons for non-adherence. Nurses reported time constraints in addressing each patient's different reasons for non-adherence and a limited ability to support patients between primary care consultations. Patients gave positive experiential feedback about the IVR messages as a way to support them in taking their medicines, and provided recommendations for intervention content and delivery mode. Specifically, they liked the voice delivering the messages and the voice recognition software. For intervention content, they preferred tailored messages that included 'information about health consequences', 'action plans', or simple reminders for performing the behaviour. CONCLUSIONS: Patients with hypertension and/or type 2 diabetes, and practice nurses, suggested messages tailored to each patient's reasons for non-adherence. Participants recommended IVR as an acceptable platform to support adherence to cardiometabolic medications between primary care consultations. Future studies could usefully test the acceptability and feasibility of tailored IVR interventions to support medication adherence as an adjunct to primary care.


Subject(s)
Diabetes Mellitus, Type 2/drug therapy , Hypertension/drug therapy , Medication Adherence/statistics & numerical data , Reminder Systems/instrumentation , Speech Recognition Software/statistics & numerical data , Adult , Aged , Aged, 80 and over , Diabetes Mellitus, Type 2/psychology , Feasibility Studies , Female , Humans , Hypertension/psychology , Male , Medication Therapy Management , Middle Aged , Patient Acceptance of Health Care/statistics & numerical data , Prescription Drugs/therapeutic use , Qualitative Research , Surveys and Questionnaires , United Kingdom
7.
Int Rev Neurobiol ; 134: 1189-1205, 2017.
Article in English | MEDLINE | ID: mdl-28805569

ABSTRACT

Communication changes are an important feature of Parkinson's and include both motor and nonmotor features. This chapter will briefly cover the motor features affecting speech production and voice function before focusing on the nonmotor aspects. A description of the difficulties experienced by people with Parkinson's when trying to communicate effectively is presented, along with some of the assessment tools and therapists' treatment options. The idea of the clinical heterogeneity of PD and of subtyping patients with different communication problems is explored, and suggestions are made on how this may influence clinicians' treatment methods and choices so as to provide personalized therapy programmes. The importance of encouraging and supporting people to maintain social networks, employment, and leisure activities is identified as the key to achieving sustainability. Finally, looking to the future, the emergence of new technologies is seen as providing further possibilities to support therapists in the goal of helping people with Parkinson's to maintain good communication skills throughout the course of the disease.


Subject(s)
Communication , Dysarthria/physiopathology , Parkinson Disease/physiopathology , Speech/physiology , Voice/physiology , Dysarthria/epidemiology , Dysarthria/therapy , Humans , Parkinson Disease/epidemiology , Parkinson Disease/therapy , Speech Disorders/epidemiology , Speech Disorders/physiopathology , Speech Disorders/therapy , Speech Recognition Software/statistics & numerical data , Speech Therapy/methods
8.
J Gen Intern Med ; 32(9): 1005-1013, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28616847

ABSTRACT

BACKGROUND: Hospitalization offers smokers an opportunity to quit smoking. Starting cessation treatment in hospital is effective, but sustaining treatment after discharge is a challenge. Automated telephone calls with interactive voice response (IVR) technology could support treatment continuance after discharge. OBJECTIVE: To assess smokers' use of and satisfaction with an IVR-facilitated intervention and to test the relationship between intervention dose and smoking cessation. DESIGN: Analysis of pooled quantitative and qualitative data from the intervention groups of two similar randomized controlled trials with 6-month follow-up. PARTICIPANTS: A total of 878 smokers admitted to three hospitals. All received cessation counseling in hospital and planned to stop smoking after discharge. INTERVENTION: After discharge, participants received free cessation medication and five automated IVR calls over 3 months. Calls delivered messages promoting smoking cessation and medication adherence, offered medication refills, and triaged smokers to additional telephone counseling. MAIN MEASURES: Number of IVR calls answered, patient satisfaction, biochemically validated tobacco abstinence 6 months after discharge. KEY RESULTS: Participants answered a median of three of five IVR calls; 70% rated the calls as helpful, citing the social support, access to counseling and medication, and reminders to quit as positive factors. Older smokers (OR 1.36, 95% CI 1.20-1.54 per decade) and smokers hospitalized for a smoking-related disease (OR 1.65, 95% CI 1.21-2.23) completed more calls. Smokers who completed more calls had higher quit rates at 6-month follow-up (OR 1.49, 95% CI 1.30-1.70, for each additional call) after multivariable adjustment for age, sex, education, discharge diagnosis, nicotine dependence, duration of medication use, and perceived importance of and confidence in quitting. 
CONCLUSIONS: Automated IVR calls to support smoking cessation after hospital discharge were viewed favorably by patients. Higher IVR utilization was associated with higher odds of tobacco abstinence at 6-month follow-up. IVR technology offers health care systems a potentially scalable means of sustaining tobacco cessation interventions after hospital discharge. CLINICAL TRIAL REGISTRATION: ClinicalTrials.gov Identifiers NCT01177176, NCT01714323.
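The adjusted model above reports an odds ratio of 1.49 for each additional completed call. Under the usual logistic-regression interpretation (an assumption on our part; the abstract does not spell this out), a per-unit odds ratio compounds multiplicatively, so a small sketch can show the implied odds multiplier for answering all five calls versus none:

```python
def cumulative_odds_ratio(or_per_call, calls):
    """Implied odds multiplier after `calls` extra completed calls,
    assuming the per-call odds ratio applies multiplicatively."""
    return or_per_call ** calls

# Reported OR of 1.49 per additional call, compounded over all 5 calls
print(round(cumulative_odds_ratio(1.49, 5), 2))  # 7.34
```

This extrapolates a single adjusted coefficient; it says nothing about causality, which the authors likewise do not claim.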


Subject(s)
Reminder Systems , Smoking Cessation/methods , Smoking Cessation/statistics & numerical data , Smoking Prevention/methods , Adult , Female , Humans , Length of Stay/statistics & numerical data , Medication Adherence , Middle Aged , Nicotinic Agonists/therapeutic use , Patient Discharge , Qualitative Research , Smoking/epidemiology , Smoking/psychology , Speech Recognition Software/statistics & numerical data , Telephone , Tobacco Use Cessation Devices/statistics & numerical data
9.
Accid Anal Prev ; 106: 31-43, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28554063

ABSTRACT

Driver distraction is a growing and pervasive issue that requires multiple solutions. Voice-recognition (V-R) systems may decrease the visual-manual (V-M) demands of a wide range of in-vehicle system and smartphone interactions. However, the degree to which V-R systems integrated into vehicles or available in mobile phone applications affect driver distraction is incompletely understood. A comprehensive meta-analysis of experimental studies was conducted to address this knowledge gap. To meet study inclusion criteria, drivers had to interact with a V-R system while driving, performing everyday V-R tasks such as dialing, initiating a call, texting, emailing, destination entry, or music selection. Coded dependent variables included detection, reaction time, lateral position, speed, and headway. Comparisons of V-R systems with baseline driving and/or a V-M condition were also coded. Of 817 identified citations, 43 studies involving 2000 drivers and 183 effect sizes (r) were analyzed in the meta-analysis. Compared to baseline, driving while interacting with a V-R system is associated with increases in reaction time and lane positioning, and decreases in detection. When V-M systems were compared to V-R systems, drivers had slightly better performance with the latter on reaction time, lane positioning, and headway. Although V-R systems have some driving-performance advantages over V-M systems, they carry a distraction cost relative to driving without any system at all. The pattern of results indicates that V-R systems impose moderate distraction costs on driving. In addition, drivers minimally engage in compensatory performance adjustments, such as reducing speed and increasing headway, while using V-R systems. Implications of the results for theory, design guidelines, and future research are discussed.


Subject(s)
Distracted Driving/statistics & numerical data , Reaction Time/physiology , Speech Recognition Software/statistics & numerical data , Accidents, Traffic/prevention & control , Distracted Driving/prevention & control , Female , Humans , Male , Smartphone/statistics & numerical data
11.
Methods Inf Med ; 56(3): 248-260, 2017 May 18.
Article in English | MEDLINE | ID: mdl-28220929

ABSTRACT

BACKGROUND: Radiology reports are commonly written as free text using voice recognition devices. Structured reports (SR) have high potential, but they are usually considered more difficult to fill in, so their adoption in clinical practice leads to lower efficiency. However, some studies have demonstrated that in some cases producing SRs may require less time than plain-text reports. This work focuses on the definition and demonstration of a methodology to evaluate the productivity of software tools for producing radiology reports. A set of SRs for breast cancer diagnosis based on BI-RADS was developed using this method, and their efficiency with respect to free-text reports was analysed. MATERIAL AND METHODS: The proposed methodology compares the Elapsed Time (ET) on a set of radiological reports. Free-text reports were produced with the speech recognition devices used in clinical practice; structured reports were generated using a web application built with the TRENCADIS framework. A team of six radiologists with three different levels of experience in breast cancer diagnosis was recruited. These radiologists performed the evaluation, each introducing 50 reports for mammography, 50 for ultrasound, and 50 for MRI using both approaches. The Relative Efficiency (REF) was also computed for each report by dividing the ETs of the two methods. We applied Student's t-test to compare the ETs and ANOVA to compare the REFs; both tests were computed using SPSS. RESULTS: The study produced three DICOM-SR templates for breast cancer diagnosis on mammography, ultrasound, and MRI, using RadLex terms based on the BI-RADS 5th edition. The t-test on radiologists with a high or intermediate profile showed that the difference in ET was statistically significant only for mammography and ultrasound.
ANOVA grouping the REF by modality indicated no significant differences between mammography and ultrasound, but both differed significantly from MRI. ANOVA of the REF within each modality indicated significant differences only in mammography (p = 0.024) and ultrasound (p = 0.008). ANOVA by radiologist profile indicated significant differences for the high (p = 0.028) and medium (p = 0.045) profiles. CONCLUSIONS: In this work, we have defined and demonstrated a methodology to evaluate the productivity of software tools for producing radiology reports in breast cancer. We found that adopting structured reporting for mammography and ultrasound studies in breast cancer diagnosis improves the performance of report production.
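As described in the methods, each report's Relative Efficiency (REF) is the ratio of the two Elapsed Times. A minimal sketch, assuming the free-text ET in the numerator (the abstract leaves the direction of the division implicit) and using made-up times rather than the study's data:

```python
def relative_efficiency(et_free_text, et_structured):
    """REF for one report: free-text ET divided by structured-report ET.
    REF > 1 means the structured report was faster to produce."""
    return et_free_text / et_structured

# Hypothetical elapsed times in seconds per report: (free-text, structured)
pairs = [(120, 90), (100, 110), (150, 100)]
refs = [relative_efficiency(ft, st) for ft, st in pairs]
mean_ref = sum(refs) / len(refs)
```

Computing a per-report ratio, rather than comparing aggregate times, is what allows the REF values to be grouped by modality or radiologist profile and compared with ANOVA.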


Subject(s)
Breast Neoplasms/diagnostic imaging , Diagnostic Imaging/classification , Efficiency, Organizational/statistics & numerical data , Information Storage and Retrieval/statistics & numerical data , Radiology Information Systems/statistics & numerical data , Workload/statistics & numerical data , Breast Neoplasms/classification , Diagnostic Imaging/statistics & numerical data , Electronic Health Records/statistics & numerical data , Humans , Radiology/statistics & numerical data , Spain , Speech Recognition Software/statistics & numerical data , Time and Motion Studies , Workflow
12.
Health Informatics J ; 23(1): 3-13, 2017 03.
Article in English | MEDLINE | ID: mdl-26635322

ABSTRACT

Speech recognition software can increase the frequency of errors in radiology reports, which may affect patient care. We retrieved 213,977 speech recognition software-generated reports from 147 different radiologists and proofread them for errors. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialty, and time periods. In all, 20,759 reports (9.7%) contained errors, of which 3992 (1.9%) were material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (p < .001). Proportion of errors and fraction of material errors varied significantly among radiologists and between imaging subspecialties (p < .001). Errors were more common in cross-sectional reports, reports reinterpreting results of outside examinations, and procedural studies (all p < .001). Error rate decreased over time (p < .001), which suggests that a quality control program with regular feedback may reduce errors.


Subject(s)
Radiology Information Systems/standards , Research Design/statistics & numerical data , Research Report/standards , Semantics , Speech Recognition Software/standards , Cross-Sectional Studies , Documentation/methods , Documentation/standards , Documentation/statistics & numerical data , Humans , Radiologists/standards , Radiologists/statistics & numerical data , Radiology Information Systems/statistics & numerical data , Retrospective Studies , Speech Recognition Software/statistics & numerical data
13.
Stud Health Technol Inform ; 225: 649-53, 2016.
Article in English | MEDLINE | ID: mdl-27332294

ABSTRACT

Adoption of new health information technology is known to be challenging. However, the degree to which a new technology will be adopted can be predicted by measures of usefulness and ease of use. This work focuses on these key determining factors in the design of a wound documentation tool. In the context of wound care at home, and consistent with evidence from similar settings in the literature, the use of Speech Recognition Technology (SRT) for patient documentation has shown promise. To achieve a user-centred design, the results of ethnographic fieldwork are used to inform SRT features; furthermore, exploratory prototyping is used to collect feedback about the wound documentation tool from home care nurses. During this study, measures developed for healthcare applications of the Technology Acceptance Model will be used to identify SRT features that improve usefulness (e.g. increased accuracy, time savings) or ease of use (e.g. lower mental/physical effort, easy-to-remember tasks). The identified features will be used to create a low-fidelity prototype to be evaluated in future experiments.


Subject(s)
Community Health Nursing/statistics & numerical data , Information Storage and Retrieval/statistics & numerical data , Nurses, Community Health/statistics & numerical data , Nursing Records/statistics & numerical data , Speech Recognition Software/statistics & numerical data , Wounds and Injuries/nursing , Attitude of Health Personnel , Canada , Humans , Practice Patterns, Nurses'/statistics & numerical data , Technology Assessment, Biomedical/methods , Utilization Review
14.
Health Informatics J ; 22(3): 768-78, 2016 09.
Article in English | MEDLINE | ID: mdl-26187989

ABSTRACT

A replication survey of physicians' expectations of and experience with speech recognition technology was conducted before and after its implementation. The expectations survey was administered to emergency medicine physicians prior to training with the speech recognition system. The experience survey, consisting of similar items, was administered after physicians had gained experience with the technology. In this study, 82 percent of the physicians were initially optimistic that the use of speech recognition technology with the electronic medical record was a good idea. After using the technology for 6 months, 87 percent of the physicians agreed that speech recognition technology was a good idea. In addition, 72 percent of the physicians in this study expected that the use of speech recognition technology would save time. After use in the clinical environment, 51 percent of the participants reported time savings. The increased acceptance of speech recognition technology by physicians in this study was attributed to improvements in the technology and the electronic medical record.


Subject(s)
Electronic Health Records , Physicians/psychology , Speech Recognition Software/statistics & numerical data , Attitude of Health Personnel , Attitude to Computers , Emergency Medicine , Health Care Surveys , Humans , Surveys and Questionnaires , Time Factors
15.
J Med Internet Res ; 17(11): e247, 2015 Nov 03.
Article in English | MEDLINE | ID: mdl-26531850

ABSTRACT

BACKGROUND: Clinical documentation has changed with the adoption of electronic health records. The core element is to capture clinical findings and document therapy electronically. Health care personnel spend a significant portion of their time at the computer. Alternatives to self-typing, such as speech recognition, are currently believed to increase documentation efficiency and quality, as well as the satisfaction of health professionals completing clinical documentation, but few studies in this area have been published to date. OBJECTIVE: This study describes the effects of using a Web-based medical speech recognition system for clinical documentation in a university hospital on (1) documentation speed, (2) document length, and (3) physician satisfaction. METHODS: Reports of 28 physicians were randomized to be created with (intervention) or without (control) the assistance of a Web-based system for medical automatic speech recognition (ASR) in the German language. The documentation was entered into a browser's text area, and the time to complete the documentation including all necessary corrections, the correction effort, the number of characters, and the mood of the participant were stored in a database. The underlying time comprised text entry, text correction, and finalization of the documentation event. Participants self-assessed their moods on a scale of 1-3 (1=good, 2=moderate, 3=bad). Statistical analysis was done using permutation tests. RESULTS: A total of 1455 clinical reports were eligible for further analysis. Of these, 718 (49.35%) were assisted by ASR and 737 (50.65%) were not. Average documentation speed without ASR was 173 (SD 101) characters per minute, compared with 217 (SD 120) characters per minute using ASR. The overall increase in documentation speed through Web-based ASR assistance was 26% (P=.04).
Participants documented an average of 356 (SD 388) characters per report when not assisted by ASR and 649 (SD 561) characters per report when assisted by ASR. Participants' average mood rating was 1.3 (SD 0.6) using ASR assistance compared to 1.6 (SD 0.7) without ASR assistance (P<.001). CONCLUSIONS: We conclude that medical documentation with the assistance of Web-based speech recognition leads to an increase in documentation speed, document length, and participant mood when compared to self-typing. Speech recognition is a meaningful and effective tool for the clinical documentation process.
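The reported speed gain follows directly from the two mean documentation speeds; checking the arithmetic with the published (rounded) means gives approximately the same figure. A quick sketch, with a helper name of our own choosing:

```python
def percent_increase(baseline, improved):
    """Percentage gain of `improved` over `baseline`."""
    return 100.0 * (improved - baseline) / baseline

# Mean documentation speeds in characters per minute, as reported
gain = percent_increase(173, 217)
print(round(gain, 1))  # 25.4 -- consistent with the reported 26%,
                       # which presumably used the unrounded means
```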


Subject(s)
Documentation/methods , Electronic Health Records/statistics & numerical data , Internet/statistics & numerical data , Speech Recognition Software/statistics & numerical data , Speech , Humans
16.
South Med J ; 108(7): 445-51, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26192944

ABSTRACT

PURPOSE: To evaluate physician utilization of speech recognition technology (SRT) for medical documentation in two hospitals. METHODS: A quantitative survey was used to collect data in the areas of practice, electronic equipment used for documentation, documentation created after providing care, and overall thoughts about and satisfaction with the SRT. The survey sample was drawn from one rural and one urban facility in central Missouri. In addition, qualitative interviews were conducted with a chief medical officer and a physician champion regarding implementation issues, training, choice of SRT, and outcomes from their perspective. RESULTS: Seventy-one (60%) of the anticipated 125 surveys were returned. A total of 16 (23%) participants were practicing in internal medicine and 9 (13%) in family medicine. Fifty-six (79%) participants used a desktop computer and 14 (20%) used a laptop. SRT products from Nuance were dominant, used by 59 participants (83%). The Windows operating system (Microsoft, Redmond, WA) was used by 58 (82%) of the survey respondents. With regard to user experience, 42 (59%) participants experienced spelling and grammatical errors, 15 (21%) encountered clinical inaccuracy, 9 (13%) experienced word substitution, and 4 (6%) encountered misleading medical information. CONCLUSIONS: This study reveals critical issues of inconsistency, unreliability, and dissatisfaction in the functionality and usability of SRT. These merit further attention to improve the functionality and usability of SRT for better adoption across varying healthcare settings.


Subject(s)
Delivery of Health Care/methods , Documentation/methods , Medical Records Systems, Computerized/instrumentation , Physicians/psychology , Speech Recognition Software , Consumer Behavior , Data Collection , Humans , Missouri , Needs Assessment , Professional Practice/standards , Speech Recognition Software/standards , Speech Recognition Software/statistics & numerical data , Surveys and Questionnaires , User-Computer Interface
17.
Brain Inj ; 29(7-8): 888-97, 2015.
Article in English | MEDLINE | ID: mdl-25955116

ABSTRACT

OBJECTIVE: This study's purpose was two-fold: (a) to confirm differences in silent reading rates of individuals with and without traumatic brain injury (TBI) and (b) to determine the effect of text-to-speech (TTS) on reading comprehension and efficiency by individuals with TBI. DESIGN AND METHODS: Ten adults with severe TBI answered comprehension questions about written passages presented in three conditions: reading only (RO), listening to TTS presentation only (LO) or reading and listening to TTS simultaneously (RL). The researchers compared reading rate, comprehension accuracy and comprehension rate (efficiency) across conditions. RESULTS: Analysis revealed significantly slower silent reading rates for the participants with TBI than for readers without TBI (n = 75). Also, participants with TBI achieved higher comprehension accuracy for factual than inferential questions; however, no significant main effect for comprehension accuracy emerged across reading conditions. In contrast, using comprehension rate as the dependent measure, analysis confirmed a significant main effect for reading condition and question type; post-hoc pairwise comparisons revealed that the RL condition yielded higher comprehension rate scores than the RO condition. CONCLUSIONS: As a group, adults with TBI appear to benefit in reading efficiency when simultaneously listening to and reading written passages; however, differences exist that reinforce the importance of individualizing treatment.


Subject(s)
Brain Injuries/rehabilitation , Communication Aids for Disabled , Comprehension , Reading , Adult , Auditory Perception , Brain Injuries/complications , Brain Injuries/physiopathology , Communication Aids for Disabled/statistics & numerical data , Female , Humans , Male , Nebraska , Precision Medicine , Reproducibility of Results , Speech Perception , Speech Recognition Software/statistics & numerical data
18.
BMC Med Imaging ; 15: 8, 2015 Mar 04.
Article in English | MEDLINE | ID: mdl-25879906

ABSTRACT

BACKGROUND: Speech recognition (SR) technology, the process whereby spoken words are converted to digital text, has been used in radiology reporting since 1981. It was initially anticipated that SR would dominate radiology reporting, with claims of up to 99% accuracy, reduced turnaround times and significant cost savings. However, expectations have not yet been realised. The limited data available suggest SR reports have significantly higher levels of inaccuracy than traditional dictation transcription (DT) reports, as well as incurring greater aggregate costs. There has been little work on the clinical significance of such errors, however, and little is known of the impact of reporter seniority on the generation of errors, or the influence of system familiarity on reducing error rates. Furthermore, there have been conflicting findings on the accuracy of SR amongst users with English as first- and second-language respectively. METHODS: The aim of the study was to compare the accuracy of SR and DT reports in a resource-limited setting. The first 300 SR and the first 300 DT reports generated during March 2010 were retrieved from the hospital's PACS and reviewed by a single observer. Text errors were identified, then classified as either clinically significant or insignificant based on their potential impact on patient management. In addition, a follow-up analysis was conducted exactly 4 years later. RESULTS: Of the original 300 SR reports analysed, 25.6% contained errors, with 9.6% being clinically significant. Only 9.3% of the DT reports contained errors, 2.3% having potential clinical impact. Both the overall difference in SR and DT error rates, and the difference in 'clinically significant' error rates (9.6% vs. 2.3%), were statistically significant. In the follow-up study, the overall SR error rate was strikingly similar at 24.3%, 6% being clinically significant. Radiologists with second-language English were more likely to generate reports containing errors, but level of seniority had no bearing. CONCLUSION: SR technology consistently increased inaccuracies in Tygerberg Hospital (TBH) radiology reports, thereby potentially compromising patient care. Awareness of increased error rates in SR reports, particularly amongst those transcribing in a second language, is important for effective implementation of SR in a multilingual healthcare environment.
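The reported significance of the SR-vs-DT error-rate difference can be checked with a standard two-proportion chi-square test. The sketch below is not the authors' analysis (their test is not specified); the counts are back-calculated from the reported percentages (25.6% of 300 SR reports ≈ 77; 9.3% of 300 DT reports ≈ 28):

```python
import math

def two_proportion_chi2(a_hits: int, a_n: int, b_hits: int, b_n: int):
    """Pearson chi-square (1 df, no continuity correction) for two proportions.
    For 1 df, the survival function satisfies sf(x) = erfc(sqrt(x / 2)),
    so the p-value needs only the standard library."""
    pooled = (a_hits + b_hits) / (a_n + b_n)
    observed = [a_hits, a_n - a_hits, b_hits, b_n - b_hits]
    expected = [a_n * pooled, a_n * (1 - pooled),
                b_n * pooled, b_n * (1 - pooled)]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Approximate counts reconstructed from the abstract's percentages.
chi2, p = two_proportion_chi2(77, 300, 28, 300)
assert p < 0.05  # consistent with the reported statistical significance
```

With these counts the difference is highly significant, in line with the abstract's conclusion.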


Subject(s)
Hospitals, Teaching/statistics & numerical data , Meaningful Use/statistics & numerical data , Medical Records Systems, Computerized/statistics & numerical data , Radiology Information Systems/statistics & numerical data , Speech Recognition Software/statistics & numerical data , Translating , Reproducibility of Results , Sensitivity and Specificity , South Africa
19.
Dan Med J ; 62(2)2015 Feb.
Article in English | MEDLINE | ID: mdl-25634503

ABSTRACT

INTRODUCTION: Dictation of scientific articles has been recognised as an efficient method for producing high-quality first article drafts. However, a standardised transcription service by a secretary may not be available to all researchers, and voice recognition software (VRS) may therefore be an alternative. The purpose of this study was to evaluate the out-of-the-box accuracy of VRS. METHODS: Eleven young researchers without dictation experience dictated the first draft of their own scientific article after thorough preparation according to a pre-defined schedule. The dictation transcribed by VRS was compared with the same dictation transcribed by an experienced research secretary, and the effect of adding words to the vocabulary of the VRS was investigated. The number of errors per hundred words was used as the outcome. Furthermore, three experienced researchers assessed subjective readability on a Likert scale (0-10). Dragon Nuance Premium version 12.5 was used as the VRS. RESULTS: The median number of errors per hundred words was 18 (range: 8.5-24.3), which improved when 15,000 words were added to the vocabulary. Subjective readability assessment showed that the texts were understandable, with a median score of five (range: 3-9), which improved with the addition of 5,000 words. CONCLUSION: The out-of-the-box performance of VRS was acceptable and improved after additional words were added. Further studies are needed to investigate the effect of additional software accuracy training.
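The study's outcome measure, errors per hundred words, is a simple normalised rate. A minimal sketch, with a hypothetical manuscript length chosen for illustration:

```python
def errors_per_hundred_words(error_count: int, word_count: int) -> float:
    """Transcription errors normalised per 100 words of dictated text."""
    return error_count / word_count * 100.0

# Hypothetical example: 450 errors in a 2,500-word first draft
# gives a rate equal to the study's reported median of 18 errors/100 words.
rate = errors_per_hundred_words(450, 2500)
assert rate == 18.0
```

Normalising by length lets drafts of different sizes be compared on the same scale, which is why the study reports this rate rather than raw error counts.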


Subject(s)
Research Report , Speech Recognition Software/statistics & numerical data , Comprehension , Humans , Medical Secretaries , Vocabulary, Controlled