Assessing human factors during simulation: The development and preliminary validation of the rescue assessment tool
Unsworth, John, Melling, Andrew, Allan, Jaden, Tucker, Guy and Kelleher, Michael (2014) Assessing human factors during simulation: The development and preliminary validation of the rescue assessment tool. Journal of Nursing Education and Practice, 4 (5). pp. 52-63. ISSN 1925-4040
Item Type: Article
Abstract
Background: Failure to rescue the deteriorating patient is a concern for all healthcare providers. In response to this problem, providers have introduced a range of interventions to promote timely rescue. Human factors and non-technical skills play a part both in the recognition of ill patients and in the delivery of the interventions associated with their successful rescue. Given the risks to patient safety that failure to rescue poses, simulation provides a vehicle for staff training and development in both technical and non-technical skills. This paper describes the development and preliminary validation of a human factors rating tool specifically designed to assess the non-technical skills associated with the recognition and rescue of the deteriorating patient.

Methods: Using high-fidelity simulation scenarios related to patient deterioration, faculty independently rated student performance. Scoring took place using video footage of the students' performance. Data were analyzed to establish the validity of the tool, the internal consistency between categories and elements, and inter-rater reliability.

Results: Content validity was established through a process of review and by checking for duplicate or redundant items. The internal consistency of the tool was acceptable, with a Cronbach's alpha of 0.84. Factor analysis suggested that the tool assessed only two components rather than the three hypothesized during tool development; these were labelled recognizing and responding, and leading and reassuring. Inter-rater reliability was initially poor at 0.21, but following training of raters it rose to above 0.8 for two videos related to the same scenario, one of which had been used during training. However, when the scenario changed, reliability dropped to 0.5.

Conclusions: Rescue appears to be a well-structured tool with good levels of inter-rater reliability following intensive training related to the specific scenario being scored. Further work is required to establish all aspects of construct validity and to ensure test-retest reliability.
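The abstract reports internal consistency (Cronbach's alpha of 0.84) and inter-rater reliability figures for the rating tool. As an illustration only, the sketch below shows how such statistics might be computed from a matrix of scores; the data, item counts, and the use of a simple correlation as an agreement measure are assumptions for demonstration and are not taken from the paper.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) score matrix."""
    k = scores.shape[1]                         # number of items/elements
    item_var = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Hypothetical example: 6 students rated on 4 tool elements (not the study's data).
ratings = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 3, 2, 3],
    [5, 4, 5, 4],
    [2, 3, 2, 2],
])
print(f"Cronbach's alpha: {cronbach_alpha(ratings):.2f}")

# A simple view of inter-rater agreement: correlation between two raters'
# total scores for the same set of videos (again, illustrative data only).
rater_a = np.array([14, 9, 17, 11, 18, 9])
rater_b = np.array([13, 10, 16, 12, 17, 8])
print(f"Inter-rater correlation: {np.corrcoef(rater_a, rater_b)[0, 1]:.2f}")
```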
PDF: 3894-14081-1-PB.pdf - Published Version (159kB)
More Information
Depositing User: John Unsworth
Identifiers
Item ID: 9011
Identification Number: https://doi.org/10.5430/jnep.v4n5p52
ISSN: 1925-4040
URI: http://sure.sunderland.ac.uk/id/eprint/9011
Official URL: http://www.sciedu.ca/journal/index.php/jnep/articl...
Catalogue record
Date Deposited: 26 Mar 2018 13:16
Last Modified: 25 Nov 2020 12:19
Author: John Unsworth
Author: Andrew Melling
Author: Jaden Allan
Author: Guy Tucker
Author: Michael Kelleher
University Divisions
Faculty of Health Sciences and Wellbeing > School of Nursing and Health Sciences
Subjects
Sciences > Nursing