Introduction

A student engagement approach in western higher education (HE) directly involves students in active decision-making through partnership to enhance educational experiences and outcomes (Lowe & El Hakim, 2020). Interest in HE student engagement practices, policies, and partnership models has increased sharply over the last decade (Sum, 2020). According to Cook-Sather et al. (2014), the benefits of this approach include increased engagement and motivation, meta-cognitive awareness and sense of identity, and enhanced teaching and learning. Staff within HE organizations should have the opportunity to develop the skills and knowledge needed to understand the complexity of student engagement and to locate the definition and values of this approach in their own contexts.

The Postgraduate Certificate (PGCert) and Master of Arts (MA) Student Engagement in Higher Education course, delivered at the University of Winchester, England, utilized sector-leading practices and research to explore student engagement in contemporary HE. While studying, the students were employed in roles that involved engaging students within the HE sector. The long-term goals of the program were to provide learning that amplified student engagement as an emergent professional/practitioner field while also positively impacting participants’ personal and professional trajectories and their organizations of employment.

The evaluation described in this report began when the course had been running for several years and was partway through its fourth cohort. It aimed to gather evidence of effective processes and of the impact of the PGCert/MA Student Engagement in Higher Education. As the taught content and the teaching and learning environment purposely demonstrated effective student engagement practices, there was a commitment to replicating this ethos; a participatory approach to the evaluation design was therefore adopted.

Exploring Participatory Evaluation

Early definitions of participatory evaluation (PE), or the stakeholder model of evaluation, tend to focus on the partnership between an expert evaluator and practice-based decision makers (Cousins & Earl, 1992). PE is typically described as either practical or transformative: the former broadens “decision-making and problem-solving through systematic inquiry”, while the latter reallocates “power in the production of knowledge” and promotes social change (Cousins & Whitmore, 1998, p. 5). Theoretically linked to Freire’s (1978) notion of democratized knowledge production, transformative PE focuses on engaging target populations in meaningful partnership, dialogue, and decision-making.

A range of approaches to PE has been used in HE to evaluate the impact of taught courses, drawing on related traditions such as community-based research and action research, of which PE is seen as an extension. For example, Benson et al. (2009) evaluated an e-learning academic development course using PE, and Curtis et al. (2021) facilitated a student-led PE in their Masters program in Sustainability. Furthermore, the HE student engagement literature discusses staff-student partnership in co-design or co-creation activities as an encouraged approach to contemporary educational development (Bovill & Woolmer, 2020). Generally, the evaluators in PE are the course delivery teams working alongside their current cohort of students. PE is also seen as subverting the culture of accountability, regulation, and quality assurance in UK HE, which can induce performative evaluation, data gathering that lacks meaning, and continuous reflection that does not lead to learning (Austen et al., 2023).

The PE approach adopted in this evaluation valued the engagement of students and moved beyond their role as data subjects. Therefore, the approach was “geared towards planning and conducting the research process with those people whose life-world and meaningful actions are under study” (Bergold & Thomas, 2012, p. 1). Stakeholders were engaged at various stages of the impact evaluation process, including design, data collection, analysis, and reporting (Guijt, 2014). Like the participatory program logic model of Sucha et al. (2021), this evaluation co-constructed a Theory of Change (ToC) to clarify assumptions, develop shared meaning, and iteratively test the relationship between the course delivery and proposed outcomes. Data gathering approaches used in PE should be appropriate to the participants, which means their perspectives and experiences must inform this decision-making process (Bergold & Thomas, 2012). Methods tend not to be standardized, and a variety of approaches is encouraged to address the evaluation questions (Guijt, 2014).

The evaluators were known to the program team but were independent of the course and institution, a position which can provide a lens of objectivity but lacks immersion in the context (Moores et al., 2023). The PE approach was therefore essential for bridging any contextual gaps. Working alongside students as “critical friends” enabled potential bias and assumptions to be challenged (Fleming, 2018).

Step 1: Ethical Approval

Ethical approval for this evaluation was prepared and submitted when the evaluation was commissioned, but before the data gathering approach had been detailed, as this was to be guided by an Advisory Group. Previous literature has noted the challenges of obtaining ethical approval for participatory research, as “research is usually designed in advance, with methods identified and a timescale of milestones to be achieved”, and has placed emphasis on an “ethics of care” for participants (Banks & Brydon-Miller, 2018, p. 23). In addition to evidencing knowledge of PE, the application included a clear rationale, identification of the benefits of taking part, and a justification for the flexibility of the design. Approval was granted by the evaluators’ institution and the University of Winchester with no changes required.

Step 2: Construction of an Advisory Group

An Advisory Group of current students and graduates from the PGCert/MA Student Engagement in Higher Education was recruited to work collaboratively with the evaluators to co-design the evaluation. Advisory Group members were recruited using stratified purposeful sampling.

The program team at Winchester emailed all students who were studying or had previously studied the course, which amounted to 66 individuals, to raise awareness of the project and the opportunities for participation. After reading a participant information sheet, individuals were asked to submit an online form and share details about their affiliation with the course, year of study or graduation, current professional role, and current employer. Input from the program team identified these as factors that may influence the perspectives of the Advisory Group. This information was also used to ensure a spread of representation among the Group. Sixteen students applied and ten were selected. Of those selected, seven were current students, and three were graduates.

It was anticipated that the Advisory Group would benefit from the opportunity to apply their learning directly to a real project and to acquire an understanding of the value and influence of their course. Financial incentives were provided at £10 for every hour contributed to meetings or preparatory work. Members of the Advisory Group could also be participants in the data gathering activities.

Ethical Considerations for the Advisory Group

Members received an information sheet and a consent form emphasizing the voluntary nature of their participation and the confidential nature of discussions. A process of “checking in” (Petrova et al., 2014) ensured that consent was continually sought during each interaction and that ethical practices were adhered to. Individuals were unable to withdraw their data from group discussions, as their contributions to co-construction would be impossible to extricate (Sim & Waterfield, 2019). A condition of participation was not sharing information about others, to protect their identities (Petrova et al., 2014). Individual members were offered the opportunity to be formally recognized for their contribution or to remain anonymous (Banks & Armstrong, 2012); six of the ten members opted to be acknowledged in the final report.

Step 3: Designing Participatory Activities

Advisory Group members were engaged in a range of activities over a 3-month period which aimed to: 1) build knowledge about evaluation; and 2) apply this learning to evaluation co-design. Developing the evaluative knowledge and skills of the Advisory Group and “demystifying evaluation” (Lennie, 2006) was necessary to support their participation in co-design and any future application (Guijt, 2014). The activities were designed and delivered through a mix of synchronous and asynchronous online formats, mirroring the approach to teaching and learning adopted in the course being evaluated; this approach was familiar to members and enabled engagement. The activities included:

 
Timeframe  Activity (purpose)
Week 1     Introductory online video (knowledge building); online interactive whiteboard (co-design: Theory of Change); reading (knowledge building)
Week 2     Online Advisory Group meeting 1 (knowledge building; co-design: Theory of Change); video (knowledge building); online interactive whiteboard (co-design: Theory of Change)
Week 3     Online Advisory Group meeting 2 (knowledge building; co-design: Theory of Change and method)
Week 9     Reading (knowledge building); online Advisory Group meeting 3 (co-design: Theory of Change review and recommendations); online interactive whiteboard (co-design: recommendations)

Step 4: Building Participant Knowledge of Evaluation

To build evaluative knowledge, Advisory Group members received taught content via a 20-minute introductory video on PE, covering its potential benefits and the rationale for its use, together with an introduction to ToC that included a definition, an overview of the process, and a guided example.

The first two-hour online meeting further developed members’ knowledge through a discussion of principles for effective evaluation research design and a more detailed introduction to ToC. Members had the opportunity to ask questions and discuss the content.

To develop project immersion, this first online meeting also discussed findings from the evaluators’ literature review which had been shared in advance. As there were gaps in the existing literature about professional development programs in student engagement, the review focused on the area of academic development, which included various models for evaluating the field, previously used indicators of impact, and gaps in the evidence base. A video summary of the discussions from meeting 1 was shared with members so there was a record of this foundational knowledge building.

In the second two-hour Advisory Group meeting, members were introduced to different types of evaluation and standards of evidence, a stakeholder analysis template to identify and differentiate groups by their level of influence and interest, and a range of options for data gathering.

Step 5: Facilitating Theory of Change Co-Design

The introductory video included content on the PGCert/MA evaluation aims, research questions, and objectives. At this stage, the expectations of Advisory Group membership, such as its role and means of engaging (Guijt, 2014), were discussed.

Before the first meeting, an online, interactive whiteboard (Padlet) was used. Members were asked to describe who they were, their current professional role, and what they hoped to gain by taking part in the evaluation, which provided insight into their motivations for participating (Guijt, 2014). This activity also introduced three questions to explore the context of the provision being evaluated: 1) What is the problem/issue that your course is trying to address?; 2) What does your course aim to do?; and 3) Why does it aim to do this?

These appreciatively framed reflections (Cooperrider & Whitney, 2005) were used by the evaluators to draft the first stages of the ToC (situation, context, rationale, aims, and assumptions), which were shared and discussed with the Advisory Group in the first online meeting.

In addition to this facilitated discussion of responses to the whiteboard activity, in the first meeting members were also asked: 1) What would the perfect course look like? 2) What would the impact of this perfect course be?

The responses to these appreciative questions provided content for the proposed outcomes of the course. Following the first meeting, the evaluation team cross-referenced these outcomes with the literature review, which helped to determine the timescales (short-, medium-, and long-term) and the connections between outcomes. The outcome synthesis was shared with the Advisory Group via an online interactive whiteboard to provide asynchronous feedback.

In the second meeting, the co-constructed ToC model was presented and critiqued by the group. For example, members commented that the short-term outcome “Practitioners develop confidence with being an academic (e.g., academic writing)” should be made broader to reflect the professional services career context of many students. The outcome was revised to “Practitioners develop confidence in practice and other skills and competencies.” The discussion then turned to how outcomes could be measured/understood and the most suitable data-gathering methods.

The Co-Constructed Theory of Change[1]

The ToC used evidence from the range of sources and interactions described above. As with all taught HE courses, the PGCert/MA Student Engagement in Higher Education had learning outcomes identifying the knowledge and skills that students could obtain. These were pre-classified as short-term outcomes. Traditional course evaluations (student evaluations of teaching) and assessment grades tend to focus on these short-term outcomes. In this evaluation, it was the medium- and longer-term outcomes associated with course completion that required investigation. It was assumed that students would engage with all components and complete the course for the medium- and longer-term outcomes to be achieved. Existing course documentation provided evidence of the assumed process outcomes of teaching delivery. These included a blended learning context and a student-centered pedagogy, underpinned by an ethos of collaboration and the assumption that course learning would be applied to the working lives of the students (Lave & Wenger, 1991).

The ToC model was edited after each interaction until it was finally presented as a visual diagram and a narrative overview. Relevant student engagement literature has highlighted the multi-faceted, individualistic, and evolving nature of outcomes in HE (Picton et al., 2018), with Pownall (2020) writing that “Being successful means different things to different people” (p. 248). The complexities of outcomes in HE contexts were also acknowledged in the educational development literature, where outcomes are “not only unpredictable but often unknown” (Miller-Young & Poth, 2021, p. 2). There was an understanding among the Advisory Group that outcomes suggested by individual members might not reflect every member’s own experience but could still be authentic; this enabled both unique and shared outcomes to be accounted for within the ToC model. Consensus was reached with the Advisory Group before the co-design of the data gathering methods began. At this stage, the confirmed ToC model was also shared with the program team for their input, particularly regarding the design and processes of the course.

Step 6: Facilitating the Co-Design of Methods

The evaluators explained that a “post-test design” to evaluate the processes and impact of the course would be necessary. This unalterable aspect of the evaluation was made explicit in early interactions. As part of the discussion about data gathering in the second Advisory Group meeting, a stakeholder analysis template was used to identify the groups from which data needed to be collected and to differentiate them by their level of influence and interest: current students; graduates; students who started but did not complete the course; and the program team. A discussion regarding existing data and how to address any gaps was also facilitated.

The Advisory Group was keen to adopt a qualitative method that embodied the ethos of the evaluated course and aligned with its theoretical and epistemological underpinnings; engagement and empowerment were therefore central. An approach that shifted the balance of power in data gathering to the participants was designed (Schäfer & Yarwood, 2008), and a peer-led interview for students and graduates was suggested by the Advisory Group. The evaluators then applied components of the “Listening Rooms” method to the overall design (Heron, 2020; Parkin & Heron, 2022). The Listening Rooms approach, which has been used in HE to research student experience and engagement, engages friendship pairs in a prompted conversation with no researcher facilitation. The designed peer-led interviews invited two student or graduate participants to attend a one-hour online interview using the Zoom video conferencing platform. In contrast to “paired interviews”, where the researcher/evaluator collects data from two people simultaneously (Wilson et al., 2016), the peer-led interview facilitated a conversation between participants with prompts and evaluator presence, but no evaluator participation. The evaluator’s role was to provide an introduction and ethical overview, share their screen to present the questions, and take notes on the conversation. The evaluator turned their camera off and remained silent for the duration of the interview to enable a participant-led conversation.

The interview utilized “Most Significant Change” (MSC) questioning (Davies & Dart, 2005), which is a “participatory, story-based method involving the collection and selection of significant change stories” (TASO, n.d.), along with direct reflections on the pre-determined outcomes. MSC techniques align with PE in providing opportunities for participants to control and share their own narratives to make sense of their own experiences (McClintock, 2004).

Mirroring the co-design activities, an online interactive whiteboard was also set up to gather anonymous reflections from the student and graduate samples and from the program team.

Step 7: Participation in Data Gathering

For parity with Advisory Group members, student and graduate participants in the data gathering were offered a financial incentive. A gift voucher worth £10 was given to each participant who took part in a peer-led interview, while those who engaged with a reflective whiteboard received a £5 voucher.

Due to the small size of the student and graduate population on the course, there was overlap between those involved in co-design and those participating in data gathering: four of the ten Advisory Group members were also participants in the interviews. This overlap was justified as necessary for an effective impact evaluation, as the insights of the Advisory Group were needed to address gaps in the existing literature about potentially relevant outcomes of student engagement professional development programs.

As this overlap in participation could call into question the overarching trustworthiness of the findings, it is important to outline the mitigations. Participatory research has been commended for a high level of validity in data gathering; for example, “a respectful and trusting rapport between researchers and participants is assumed to make accounts more truthful, and therefore provide more accurate data” (van der Riet, 2008, p. 559). The same author also wrote that another “key assumption is that co-ownership leads to vested interest in the process, and therefore participants are unlikely to give false or misleading judgments” (van der Riet, 2008, p. 559). This evaluation also used multiple sources of data and multiple methods of participation, supported collaborative critical reflexivity, and engaged the Advisory Group in a critical review of the preliminary data analysis, in line with the suggestions of Lennie (2006) for ensuring trustworthiness in PE.

Furthermore, one Advisory Group member who also took part in the data gathering commented on the strengths of their dual role: “At times it felt like I was repeating myself … however as the interview took place after the discussions with the Advisory Group, I felt I was able to reflect more critically and consider the impact of the course from various perspectives. Therefore, my comments during the interview were better informed by this self-reflection.”

Step 8: Analysis and Recommendations

The evaluation team was responsible for undertaking the analysis and triangulating the primary evidence with secondary data sources, such as course documentation, previous summative evaluations, and the views of the program team responsible for course design and delivery. In advance of the third and final Advisory Group meeting, a summary of anonymized emerging findings was shared with members, who were not provided with access to the raw data in order to safeguard confidentiality. After reflecting on the evidence prior to and during the meeting, the Advisory Group reviewed the ToC outcome statements and co-created the recommendations and actions (Guijt, 2014), which were subsequently added to an online whiteboard for further critique by members. An executive summary and a full report were written by the evaluators for senior leaders at the University of Winchester and these were also shared with those who contributed to the evaluation. It was agreed that the executive summary and ToC diagram would also be published to enable sector learning (Donnelly & Austen, 2022).

Conclusions

This report has outlined an approach to PE, the embedded activities, and interactions with an Advisory Group. The participatory activities aimed to develop members’ knowledge and skills and co-design a successful evaluation. This evaluation found that student engagement practitioners on the PGCert/MA Student Engagement in Higher Education, and their organizations of employment, have been positively impacted by their learning experience. The key mechanism of change was the course’s blended delivery, which enabled individuals to study alongside their professional roles.

Consistent with Puma et al. (2009) and Guijt (2014), participants’ contributions to decisions about outcomes, to the design of the evaluation and its data gathering methods, and to the synthesis and reporting of findings were integral components of PE. The Advisory Group provided unique insights into the course and shaped the focus of the evaluation, its outcomes, and the methodology adopted. Collaboration from the outset enabled an appreciation of context (Bamber & Stefani, 2016) and a diversity of experiences that would not have been realized without embedded participation.

The Advisory Group’s suggestion of peer-led interviews introduced an innovative method that had not been foreseen. The exchange of stories between participants in the interviews became a learning experience for all involved. Advisory Group members commended the evaluators’ positioning of participants as “the experts in being a student” and an enabling process which moved beyond feedback to meaningful and constructive collaboration. In addition to exploring and reporting this impact, Advisory Group members were empowered to recognize their expertise and develop their roles as evaluators; members reported transformations in their own evaluation knowledge and skills that they subsequently applied to their ongoing assessments.


  1. The complete Theory of Change (Donnelly & Austen, 2022) can be viewed at https://shura.shu.ac.uk/30905/