Assessing human factors during simulation: The development and preliminary validation of the rescue assessment tool

Unsworth, John, Melling, Andrew, Allan, Jaden, Tucker, Guy and Kelleher, Michael (2014) Assessing human factors during simulation: The development and preliminary validation of the rescue assessment tool. Journal of Nursing Education and Practice, 4 (5). pp. 52-63. ISSN 1925-4040


Abstract

Background: Failure to rescue the deteriorating patient is a concern for all healthcare providers. In response, providers have introduced a range of interventions to promote timely rescue. Human factors and non-technical skills play a part both in the recognition of deteriorating patients and in the delivery of the interventions associated with their successful rescue. Given the risks to patient safety that failure to rescue poses, simulation provides a vehicle for staff training and development in both technical and non-technical skills. This paper describes the development and preliminary validation of a human factors rating tool specifically designed to assess the non-technical skills associated with the recognition and rescue of the deteriorating patient.

Methods: Using high-fidelity simulation scenarios related to patient deterioration, faculty independently rated student performance. Scoring was carried out using video footage of the students' performance. Data were analyzed to establish the validity of the tool, the internal consistency between categories and elements, and inter-rater reliability.

Results: Content validity was established through a process of review and by checking for duplicate or redundant items. The internal consistency of the tool was acceptable, with a Cronbach's alpha of 0.84. Factor analysis suggested that the tool assessed only two components rather than the three hypothesized during tool development; these were labelled "recognizing and responding" and "leading and reassuring". Inter-rater reliability was initially poor at 0.21, but following rater training it rose above 0.8 for two videos related to the same scenario, one of which had been used during training. However, when the scenario changed, reliability dropped to 0.5.

Conclusions: The rescue assessment tool appears to be well structured, with good levels of inter-rater reliability following intensive training related to the specific scenario being scored. Further work is required to establish all aspects of construct validity and to ensure test-retest reliability.
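The internal-consistency statistic reported above (Cronbach's alpha of 0.84) follows the standard formula alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), where k is the number of items. A minimal sketch of that calculation; the rating matrix below is purely hypothetical illustration data, not the study's data:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a rating matrix.

    scores: list of raters/respondents, each a list of item scores.
    """
    n_items = len(scores[0])

    def sample_variance(xs):
        # Unbiased sample variance (n - 1 denominator).
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    # Variance of each item (column) across respondents.
    item_vars = [sample_variance([row[i] for row in scores])
                 for i in range(n_items)]
    # Variance of each respondent's total score.
    total_var = sample_variance([sum(row) for row in scores])

    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical example: four students scored on three elements.
ratings = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
]
print(round(cronbach_alpha(ratings), 2))  # high alpha: items move together
```

In practice a validation study would compute this over all categories and elements of the tool for every rated performance; a low alpha would suggest that some elements do not measure the same underlying construct.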

Item Type: Article
Subjects: Sciences > Nursing
Divisions: Faculty of Health Sciences and Wellbeing
Depositing User: John Unsworth
Date Deposited: 26 Mar 2018 13:16
Last Modified: 26 Mar 2018 14:40
URI: http://sure.sunderland.ac.uk/id/eprint/9011
