The National Student Survey
The National Student Survey (NSS) is an official UK-wide survey in which final-year university students express their opinions on their experience at their institution.
The Office for Students (OfS) states that the purpose of the NSS is to “gather students’ opinions on the quality of their courses which helps to:
- inform prospective students’ choices
- provide data that supports universities and colleges to improve the student experience
- support public accountability.”
Universities constantly try new initiatives to improve the student experience. To know whether these changes make a difference, many universities track the results of the NSS. It not only facilitates comparisons between different universities, but also internal comparisons against an institution's own historical data.
Most of the previous analysis around the NSS has focused on the final agree/disagree question: "Overall, I am satisfied with the quality of the course". This question provides a litmus test for the rest of the survey. While a lot of weight is put on this question (many university guides use it as a key metric), guides also draw on other satisfaction scores to inform prospective students. For example, the Guardian University Guide uses three metrics from the NSS: course satisfaction, teaching satisfaction, and feedback satisfaction.
We are particularly interested in improving feedback satisfaction scores since Graide is a specially developed tool to improve the speed and quality of feedback students receive for their work. Ensuring they get high quality feedback that supports learning and development means students are more likely to praise their institutions when surveyed.
Assessment & Feedback in More Detail
There are five key questions relating to assessment and feedback in historical NSS data. The scores over time for England are shown below - a cursory look at the rest of the UK shows a similar trend.
We see a general upward trend, with the lowest-scoring questions improving the most over the same period. From 2017 onwards, progress stagnated, and impressions of fair assessment and of knowing criteria in advance actually dropped that year. It is worth noting that one of the questions was removed in 2017, but the wording of the remaining questions is largely identical.
The sudden dip in scores in 2021 can be attributed to the pandemic: the move to a blended approach to teaching and learning caused significant dissatisfaction amongst students. Looking forward, universities need to take the best aspects of digital learning and focus on ensuring assessment and learning meet students' needs.
Improving all aspects of assessment and feedback
Graide, the AI-powered assessment and feedback platform, addresses many of these issues. Graide facilitates the delivery and grading of paper-based or digital assignments. The grading interface for both, depicted below, uses a rubric built by educators.
The criteria used in marking have been clear in advance.
Why this is hard: Sharing criteria in advance is often overlooked due to the time constraints put on educators at universities. However, this is one of the more straightforward questions to improve: take your rubrics, remove sensitive or overly detailed information about the assessment, and share them with students before they are assessed.
How Graide addresses this: Graide has a digital rubric that you build and edit as you mark. While this does not explicitly help you make the marking criteria clear in advance the first time an assessment is run, if the same assessment runs again in subsequent years, the rubric can be used as a starting point for explaining the marking criteria.
Assessment arrangements and marking have been fair.
Why this is hard: There are often hundreds of scripts per assignment, and the huge volume of work needs to be tackled carefully. This process naturally causes inconsistencies: how one marks often drifts slightly over time, requiring the marker to revisit earlier scripts to stay retroactively consistent. If you spread the workload across multiple markers, you add the extra challenge of keeping each marker consistent with the others.
How Graide addresses this: Graide has a digital rubric that you build and edit as you mark. Any feedback you add is tailored to your student’s work, and any edits are retroactively applied to all students with the same feedback. Finally, these rubrics are shared amongst the people marking the work to add another layer of consistency.
Feedback on my work has been prompt/timely.
Why this is hard: Marking hundreds of students' scripts takes time and requires management overheads. If it takes 10 minutes/script and you have 100 students, that's over 16 hours of marking work. These papers also have to be collated, distributed, marked, and distributed again, adding days to the turnaround time.
How Graide addresses this: Graide is an end-to-end assessment and feedback platform. It directly addresses the management overheads by creating one place where students can submit their work, teachers can mark it, and students can receive their feedback. Graide also uses AI to speed up the marking process, and it has been shown to reduce marking times by up to 74%. You can read more about this here.
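The turnaround arithmetic above can be sketched as a quick back-of-envelope calculation. This is purely illustrative, using the figures from the text (10 minutes per script, 100 students, and the reported up-to-74% reduction); real marking times will vary.

```python
# Back-of-envelope estimate of marking time, using the figures quoted above.
MINUTES_PER_SCRIPT = 10   # assumed time to mark one script
NUM_STUDENTS = 100        # assumed cohort size
REDUCTION = 0.74          # reported "up to 74%" reduction in marking time

manual_hours = MINUTES_PER_SCRIPT * NUM_STUDENTS / 60
assisted_hours = manual_hours * (1 - REDUCTION)

print(f"Manual marking:        {manual_hours:.1f} hours")   # ~16.7 hours
print(f"With a 74% reduction:  {assisted_hours:.1f} hours") # ~4.3 hours
```

On these assumptions, over 16 hours of marking shrinks to roughly four and a half, before even counting the days saved on collating and redistributing papers.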
I have received detailed comments on my work.
Why this is hard: Marking is repetitive. Detailed comments take time to write because you have to say the same thing over and over, which further delays feedback.
How Graide addresses this: Graide’s marking interface allows you to build a detailed rubric. Each feedback item can be reused, which means you can write once and use it multiple times. If multiple students make the same error, this saves a huge amount of time.
Feedback on my work has helped me clarify things I did not understand.
Why this is hard: Due to the time constraints and repetitiveness of marking, markers are often left deciding which mistakes to prioritise. Often, markers just circle where the error is and move on to the next script. Again, being more detailed requires time, something markers simply don't have.
How Graide addresses this: Graide’s marking interface empowers you to be more detailed with your feedback because feedback is stored for re-use - the time saved by this can be used to write more useful feedback for the students. Additionally, the feedback is directed to where it matters most.
Prospective students use NSS scores as a key metric when seeking out a top institution. Graide directly addresses the issues raised by the questions in the assessment and feedback category, meaning universities can attract high-achieving students looking for a quality education experience.
If you want to learn more about using Graide to improve your university’s NSS scores, book a meeting today.