The Practicum Institute will present two studies at the AMEE 2019 conference in Vienna: one on a project applying Practicum Script with medical students, and the other on a recently published article in Medical Education on script concordance testing. The first will be presented as a poster in the medical education category, and the second as a brief oral presentation in the category of assessment methodologies and continuing professional development. The studies will be led, respectively, by Dr. Amir H. Sam, scientific director of EBMA, and Dr. Matthew Lineberry, director of simulation research at the University of Kansas (USA).
WE WILL BE PRESENTING ONE POSTER AND ONE ORAL PRESENTATION
With a strong presence in Latin America, the Practicum Institute and its main asset, the Practicum Script clinical reasoning training simulator, are beginning to establish themselves in Europe. This expansion is reflected in two studies to be presented at the 2019 conference of the Association for Medical Education in Europe (AMEE), to be held from August 24 to 28 in Vienna; in early July, the Institute also presented at the Association for the Study of Medical Education (ASME) conference in Glasgow. Both studies focus on the assessment of applied knowledge, the improvement of clinical reasoning, and the management of uncertainty in clinical practice.
“Piloting Practicum Script, a clinical reasoning simulator, in a multi-centre European study” is the title of the poster authored by Dr. Amir H. Sam together with Drs. Eduardo Pleguezuelos, Carlos F. Collares, Eduardo Hornos, Adrian Freeman, and Cees van der Vleuten. This multicenter pilot study, coordinated by the European Board of Medical Assessors (EBMA), aims to investigate the effectiveness of Practicum Script as a tool for teaching and assessing clinical reasoning. The online simulation platform will be tested at a minimum of five British universities (Imperial College London, Oxford, Exeter, Plymouth, and Newcastle) and more than a dozen centers in other countries.
The cases, 20 Internal Medicine clinical vignettes aimed at senior students in clinical rotations, have already been created by an editorial team from Imperial College London led by Dr. Amir H. Sam, and are currently being validated by a panel of experts from most of the medical schools that have joined the project. As the study progresses, Rebecca Scott and Anjali Amin, specialists in Internal Medicine at Imperial College London, will compile the material and provide supporting references from the relevant literature. The objective is to perform a psychometric analysis of the students’ responses to the items in each case.
For each clinical scenario, medical students will be asked to generate hypotheses in free-text format and justify them by identifying the positive and negative findings in the case. Next, across five further clinical scenarios, they must state whether new data change their initial judgments. For feedback, students will be able to discuss the cases among themselves, consult a tutor, and check the agreement between their answers and those of the panel of validating experts, presented alongside the underlying scientific clinical evidence. As an added value, student satisfaction with the educational model will also be measured.
Validity of the response process
Practicum Script can be a valuable educational resource for evaluating medical students’ ability to solve problems in contexts of uncertainty. The continuing professional development cycle, in particular, has been running successfully for about 10 years. In February 2019, a group of researchers comprising Drs. Matthew Lineberry, Eduardo Hornos, Eduardo Pleguezuelos, José Mella, Carlos Brailovsky, and Georges Bordage published a study highlighting the divergence that naturally occurs among experts when making decisions in complex situations, as well as the effectiveness of this pedagogical approach in medical education.
Ten Argentine gastroenterologists formed the expert panel for an existing script concordance test (SCT), solving 15 clinical cases over 9 months. After submitting their individual answers, the same experts compared their responses with those of their peers. The results showed that the experts’ response processes were not consistent with the classical interpretation of SCT scores, revealing gaps in that interpretation. The study suggests shifting the focus of learning away from a single correct answer and toward a better assimilation of controversy: “Even when participants do not change their beliefs, knowing other points of view allows them to gauge their certainty and adopt more open thinking in the future.”