A guide for crushing student survey participation rates

StudentPulse Team
March 27, 2023

The topic of raising student survey participation rates is one that is covered in a lot of articles and blog posts. So it's no surprise that you and all of your peers are trying to develop new and creative ways to increase response rates on your surveys. The only problem is that you're just like everyone else, so you probably aren't having much success. Fortunately, we're here to save the day!

This guide is intended to be a comprehensive resource for schools and universities. It includes everything you need to know to achieve the highest possible survey response rates.

But not response rates alone; they are only the first, critical step toward driving student engagement and creating an outstanding student experience.

We also provide sample survey questions, best practices, and other valuable information to help you maximize your response rates.

This guide also includes information on how technology can maximize survey response rates. It covers methods for collecting responses on students' phones and for embedding surveys in your course website using a tool like Qualtrics or StudentPulse.

Getting the right mindset about collecting feedback (data) from students

Gartner calls this the "Voice of the Customer," but to us, it's just feedback -- plain and simple. But without understanding what the feedback means, it can easily be misinterpreted. This leads us to the first tip -- we need to stop calling it feedback and start calling it data. Data-driven decisions help institutions align resources to meet student needs through evidence-based measures. Research tells us that the sooner we get data from students, the more accurate and valuable it will be.

And evidence is vital: according to one study, half of Americans rely on their "gut" when making important decisions, even when confronted with evidence that contradicts it. This is why data collection is so vital to student success and retention.

So this leads us to the first conclusion: if the concept of data-driven decision-making (DDDM) is not adopted, the consequences can be dire, including demotivation, poor performance, and even increased dropout rates. Institutions need to embrace a data-driven approach from top to bottom because it's the best way to understand the student experience and help improve programs and services.

Making students participate in surveys

Students need to be educated about the value of conducting surveys. They need to know that their voices matter and that they have an opportunity to impact their education and future. We will later cover the basics of how to collect and analyze survey data. But for now, let's dive deeper into the three main areas where you can immediately have an impact to ensure your students stay engaged and contribute meaningful information to your evaluation efforts.

Communicate, communicate, communicate

Effective communication is key to increasing student participation rates in survey feedback. It's important to convey the significance of the feedback and how it can help in making informed decisions that impact students' present and future experiences. Make sure your message is consistent and aligned with the principles of data-driven decision-making (DDDM), as discussed in section one.

To encourage participation, students need to understand the value of their feedback and how it will be used to bring about positive changes in the institution. It is essential to communicate the immediate impact of their feedback, not just its impact on future cohorts. By doing so, students will feel heard, valued, and more willing to participate in future surveys. It's crucial to avoid any perception that their opinions are ignored or dismissed: let them know that their feedback is highly valued and will contribute to meaningful changes in the institution.

Make it short and relevant

Time is precious, and students are busy. To increase participation rates in survey feedback, it's crucial to keep surveys short and relevant. A survey should ideally take no more than five minutes to complete, so avoid overwhelming students with lengthy questionnaires. Limit each survey to 5-10 questions to keep the time commitment manageable.

Consider using micro-surveys to gather feedback. Instead of asking for an overall satisfaction rating, ask specific questions about the areas that matter most to your students. This approach not only enables prompt action but also demonstrates that you value and hear their opinions. Bite-sized surveys are easier for students to take, and they allow you to gather more detailed information about the areas that matter most to them.

Make it personal

To show you what you should do if you want a 1% response rate, I'm borrowing a worst-case NPS survey from Growth.Design.

Asking the right questions

Asking the right questions is crucial to any survey campaign's success. Below, I share a best-practice framework for evaluating the quality of a survey question and how it relates to a given goal.

The three dimensions of a good student survey question

Every question should be evaluated against these key dimensions to ensure it hits the mark (a scoring sketch follows the list):

Student Perspective:

  1. Student importance: It has a negative/positive impact on my mood if the score is low/high.
  2. Student context: It's obvious to me when this question is most relevant.
  3. Student easiness: The question is easy for me to understand.

Educator Fit:

  1. Inst. importance: It's important for me/us to have a high score on this criterion.
  2. Inst. action: If the score is low, I/we can act on it fast.
  3. Inst. responsibility: If there is an issue, it is clear who is responsible for fixing it.

Tech Fit (if your survey platform allows it!):

If your survey platform supports self-directed actions, the following elements must be included:

  1. Connect: If the score is low (or high), there is an obvious online solution we can direct the student to immediately.
  2. Identification: If the score is low, it is important to be able to identify the student so we can take action.
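
To make the framework concrete, here is a minimal sketch of how these dimensions could be encoded and aggregated. It assumes each dimension is rated 0-10 and that the overall score is a plain average; the aggregation rule, class, and field names are illustrative assumptions, not StudentPulse's actual implementation.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class QuestionScore:
    """Ratings (0-10) for one survey question on the dimensions above."""
    student: dict       # importance, context, easiness
    institution: dict   # importance, action, responsibility
    tech: dict = field(default_factory=dict)  # connect, identification (optional)

    def overall(self) -> float:
        """Assumed aggregation: a plain average across all rated dimensions."""
        ratings = [*self.student.values(), *self.institution.values(), *self.tech.values()]
        return round(mean(ratings), 1)

# Example: a question that scores well everywhere except institutional action.
q = QuestionScore(student={"importance": 8, "context": 7, "easiness": 9},
                  institution={"importance": 8, "action": 3, "responsibility": 6})
print(q.overall())  # 6.8
```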

Exemplifying the different dimensions of a student feedback question

The three dimensions above are based on micro-surveying with more than 50 educational institutions in 2022. And speaking of that: while most institutions we've worked with are good at judging the importance of a question (all from an institutional perspective, most from a student perspective), a typical pitfall is that very few have considered what happens before and after the question itself is asked.

By before, I mean the student context: the student's journey and experience leading up to the point in time when the question is asked. By after, I mean the institution's work with the feedback to the question. Let me elaborate with a couple of examples.

Scoring the question in the context of point in time

For simplicity, let's use a typical question as an example, one asked by the majority of institutions we work with (yes, we know, there are just as many formulations of this question as there are educational institutions… though all serve the purpose of uncovering whether a student understands what will happen, and what is required, at any given exam):

  • “Was it clear to you what was expected of you at the exam?”

A fine question, and an important one. However, notice the wording, and consider when the question is asked: after the exam, the approach many institutions prefer. Let's try to score it on the student and institutional dimensions, assuming it is indeed an "after-exam" question:

So, what does that yellow 5.5-score indicate? In our scoring matrix (ranging from 0 to 10) it means the question can work, but also that it isn't a great question, for two reasons in particular. First, from the student perspective: time has passed since the student actually sat there, before the exam, preparing. On the one hand, that makes the experience difficult to remember (decreasing validity); on the other, it makes the question less important to the student, since the exam is over. Second, from the institutional perspective: because the exam is over, there is not much to be done now to improve the exam communication and student preparation. The feedback can still be used to improve the experience in the future, but for this particular exam it will be a "next semester thing", never reaching the students who actually answered the question. So even though responsibilities are clear and it is obvious what to do, from an actionability perspective the question fails.

The good thing is, not much is needed to change the question for the better. Let's rephrase it slightly, now assuming it is asked before the exam:

  • “It is clear to me what is expected of me at the exam.”

Now that's another matter. The subject changes from "you" to "I" (making it more personal), and the tense changes from past to present (making it more relevant). Let's look at the scoring:

A 9.0 score! Why? Simply because of three things: asking the student before the exam makes the question more important to the student, makes the context highly topical, and makes it possible for the institution to act now. And speaking of action: if your survey platform allows it, the third dimension, tech fit, could easily be brought into play on a question like this, either connecting students lacking information or clarity with the right online exam preparation material, or identifying students in need of personal support.
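
To make the two scores concrete, here is an illustrative calculation under the same simple-average assumption as the earlier sketch. The per-dimension ratings below are hypothetical numbers chosen to land on the 5.5 and 9.0 averages discussed, not values taken from our actual matrix.

```python
from statistics import mean

# Hypothetical ratings (0-10) per dimension; student first, then institution.
after_exam  = {"importance": 4, "context": 3, "easiness": 9,            # student
               "inst_importance": 8, "action": 2, "responsibility": 7}  # institution
before_exam = {"importance": 9, "context": 10, "easiness": 9,
               "inst_importance": 9, "action": 9, "responsibility": 8}

print(round(mean(after_exam.values()), 1))   # 5.5 -- workable, but weak on context and action
print(round(mean(before_exam.values()), 1))  # 9.0 -- topical, personal, actionable now
```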

No matter what, bearing in mind the point in time, both the student and institutional perspectives, and the expected action will help you build questions, and surveys, whose results are more likely to move from the desk to actual improvements, without putting more work on institutional stakeholders or students.

Asking at the right time - are micro-surveys a big deal?

"When should you ask students for feedback?" can be answered in two parts: When is the best time to survey the students during the semester or course? And how often should you survey them to get the most accurate data? Here are a few things to consider when deciding on the best time to conduct your surveys.

Asking questions that are actually relevant for your students will encourage them to take the survey seriously.

Take a look at an example of a well-tried framework that will increase response rates while delivering more impact and meaningful insights!

First of all, it's important to have an overview of the student journey, and to use it to determine the best times to survey students, so that the data you collect is relevant to that moment and you can act on the responses in time.

Here is an example of a framework in which the student's journey through a semester is broken into four impact points (a sketch encoding this schedule follows the list):

1. Start of the semester

- Ask whether reality has lived up to students' expectations, and gather their thoughts on social inclusion, educational information, and motivation.

2. Two-three weeks into the semester

- Follow up on students' motivation for their studies, social inclusion, and fulfillment of expectations. Also ask about their experience of student-teacher interaction and didactics, and their awareness of the institutional guidance available to them.

3. Before midterms

- Focus on students' stress levels ahead of the midterm exams, along with student-teacher interaction and teaching didactics, while still following up on motivation.

4. Before final exams

- Focus on students' stress levels ahead of the final exams, their expectations for the exams, the knowledge and skills they have obtained, and their overall satisfaction with the course.
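
As referenced above, the schedule can also be encoded as data so a survey tool can dispatch the right themes at the right moment. The sketch below is illustrative; in particular, the week numbers are assumptions, since the framework specifies timing only loosely.

```python
# Illustrative encoding of the four impact points; themes follow the list above,
# and week numbers are assumed placements within a ~14-week semester.
SEMESTER_IMPACT_POINTS = [
    {"point": "start_of_semester", "week": 1,
     "themes": ["expectations vs. reality", "social inclusion",
                "educational information", "motivation"]},
    {"point": "early_follow_up", "week": 3,
     "themes": ["motivation", "social inclusion", "expectations",
                "student-teacher interaction", "guidance availability"]},
    {"point": "before_midterms", "week": 7,
     "themes": ["stress levels", "student-teacher interaction",
                "didactics", "motivation"]},
    {"point": "before_finals", "week": 13,
     "themes": ["stress levels", "exam expectations",
                "knowledge and skills", "overall satisfaction"]},
]

def themes_for_week(week: int) -> list:
    """Return the themes of the most recent impact point at or before `week`."""
    due = [p for p in SEMESTER_IMPACT_POINTS if p["week"] <= week]
    return due[-1]["themes"] if due else []

print(themes_for_week(7))  # the 'before_midterms' themes
```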

It's important to know the purpose of your surveys before determining the best impact points along the student journey. A focus on learning quality and a focus on student well-being can have different, if partly overlapping, best impact points. The purpose and focus of the surveys are therefore crucial to choosing the impact points and asking the right questions at the right time.

Live-collecting the feedback is where the magic happens

E-mailing surveys to students will generally yield a 10-15% response rate at best. Collecting the feedback live, in class, with targeted reminders can increase the response rate to upwards of 80%!

Live collection with a pin-code has several benefits over traditional email for collecting and managing student feedback.

First and foremost, live collection with a pin-code ensures anonymity for students. Many students may be hesitant to share their thoughts and opinions about their education or instructors if they fear retribution or discrimination. By using a live-collection system with a pin-code, students can share their feedback without fear of being identified. This can lead to more honest and accurate feedback, as students are more likely to share their true thoughts and feelings.

Live-collection with a pin-code also allows for real-time feedback. With traditional email, students may have to wait for a response or for their feedback to be reviewed before any action is taken. This can lead to frustration and a sense of disconnection for students. With live collection, students can see their feedback being collected and aggregated in real-time, which can give them a sense of immediacy and connection to the process.

Live-collection with a pin-code also allows for easier management and analysis of student feedback. Traditional email can be difficult to organize and track, especially if there is a large volume of feedback. Live-collection systems can provide easy-to-read summaries and graphs that allow educators to quickly understand and act on student feedback.
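
Here is a minimal in-memory sketch of the mechanics described above: one session pin shared on the lecture-hall screen, submissions that carry no student identity, and a live aggregate everyone can watch. The names are illustrative, and a production system would persist responses and validate input.

```python
import secrets
from collections import defaultdict
from statistics import mean

class LiveSession:
    """Pin-code feedback collection: responses are stored without any student identity."""

    def __init__(self) -> None:
        self.pin = f"{secrets.randbelow(10**6):06d}"  # short code shown in class
        self.responses = defaultdict(list)            # question -> list of scores

    def submit(self, pin: str, question: str, score: int) -> bool:
        """Accept a response only when the pin matches; nothing identifying is kept."""
        if pin != self.pin:
            return False
        self.responses[question].append(score)
        return True

    def summary(self) -> dict:
        """Real-time aggregate the class can watch fill in."""
        return {q: round(mean(s), 1) for q, s in self.responses.items()}

session = LiveSession()
session.submit(session.pin, "It is clear to me what is expected of me at the exam.", 8)
session.submit(session.pin, "It is clear to me what is expected of me at the exam.", 6)
print(session.summary())  # {'It is clear to me what is expected of me at the exam.': 7.0}
```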

Making the questionnaire as easy as possible to access, for example directly from a smartphone, streamlines feedback collection further, because frictionless access saves time that educators and students often don't have.

Overall, live collection with a pin-code offers a more anonymous, real-time, and easily manageable solution for collecting and managing student feedback. It can lead to more honest and accurate feedback, as well as a greater sense of connection and engagement for students.

Integration with Canvas, school apps, and other available systems

Integrating surveys into learning management systems (LMS) or school apps can increase response rates because it allows educators to utilize the notification system on students' phones. When a survey is made available through an LMS or school app, students can receive a notification on their phone reminding them to complete the survey. This can be an effective way to increase participation and ensure that students are aware of and have the opportunity to complete the survey.

In addition, integrating surveys into LMS or school apps can make it more convenient for students to access and complete the survey. Rather than having to search for a link or log in to a separate website, students can complete the survey directly through the app or LMS that they are already using. This can make it more likely that students will take the time to participate in the survey.
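
As one concrete illustration with Canvas: posting a course announcement through Canvas's REST API triggers the usual mobile push notification for students who have the Canvas app, which is a convenient place to drop a survey link. The base URL and token below are placeholders, and you should verify the endpoint and parameters against your own instance's API documentation.

```python
import requests

CANVAS_BASE = "https://your-institution.instructure.com"  # placeholder instance URL
TOKEN = "YOUR_API_TOKEN"                                  # placeholder access token

def announce_survey(course_id: int, survey_url: str) -> None:
    """Post a course announcement carrying the survey link."""
    resp = requests.post(
        f"{CANVAS_BASE}/api/v1/courses/{course_id}/discussion_topics",
        headers={"Authorization": f"Bearer {TOKEN}"},
        data={
            "title": "2-minute course pulse survey",
            "message": f"Your feedback shapes this course now, not next year: {survey_url}",
            "is_announcement": True,
        },
    )
    resp.raise_for_status()

# announce_survey(12345, "https://example.com/survey/abc")  # illustrative call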

Acting, part one: let the respondents help themselves

Surveys with integrated action cards or opportunities to act are a more effective way to gather and address student feedback because they allow for immediate action and follow-up. For example, if a student is feeling overwhelmed, they can use an action card to request a meeting with a counselor. Or, if a student is feeling isolated on campus, they can use an action card to see an event overview from the student council and find ways to get involved in campus life. These personalized and targeted solutions address specific issues and concerns, and can lead to a greater sense of engagement and ownership among students. Overall, integrating action cards or opportunities to act into surveys can provide a more effective and personalized approach to addressing student feedback.
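
A hypothetical sketch of the routing logic behind such action cards: a low score on a theme immediately maps to a relevant self-help resource. The themes, threshold, and URLs are illustrative assumptions.

```python
from typing import Optional

# Illustrative mapping from a feedback theme to a self-help action card.
ACTION_CARDS = {
    "stress": {"label": "Book a meeting with a counselor",
               "url": "https://example.edu/counseling/book"},   # placeholder URL
    "social inclusion": {"label": "See the student council's event overview",
                         "url": "https://example.edu/events"},  # placeholder URL
}

def action_card_for(theme: str, score: int, threshold: int = 5) -> Optional[dict]:
    """Offer a card right after submission when the score falls below the threshold."""
    return ACTION_CARDS.get(theme) if score < threshold else None

print(action_card_for("stress", 3))  # the counselor card
print(action_card_for("stress", 8))  # None: no intervention needed
```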

Analyse: know what the problem really is

Collecting student feedback through surveys is only useful if the data is actually used to make improvements. If a school or teacher collects data but doesn't take any action based on the feedback, it can be even worse than not collecting the data at all.

Students may feel like their time, opinions and concerns are not being taken seriously if they see no changes being made based on their feedback. This can lead to a sense of disengagement and frustration, and may even cause students to lose trust in the school or teacher.

One of the challenges with the survey systems currently used in education is that it can take time to move from data collection and analysis to taking action based on the results. This is often due to the complexity of the data and the need to distill it into clear, actionable insights.

However, there are ways in which artificial intelligence (AI) and good visual overviews can help to address this challenge and make it easier for educators to take action based on student feedback.

One of the main benefits of using AI in survey systems is that it can help to analyze and interpret large amounts of data in a more efficient and effective manner. By using machine learning algorithms, AI can identify patterns and trends in the data that may not be immediately apparent to humans. This can provide clear and actionable insights that can be used to inform improvements in the learning experience.
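
As a small illustration of the idea, open-text comments can be grouped into rough themes automatically. The sketch below uses scikit-learn's TF-IDF vectorizer and k-means clustering; the tooling choice, sample comments, and cluster count are assumptions for illustration, not a prescribed pipeline.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Illustrative open-text feedback from students.
comments = [
    "The exam requirements were unclear until the last week.",
    "I never understood what the exam would cover.",
    "Group work helped me feel included.",
    "More social events would help new students feel welcome.",
]

# Turn comments into TF-IDF vectors and cluster them into rough themes.
vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    print(f"Theme {cluster}:")
    for comment, label in zip(comments, labels):
        if label == cluster:
            print("  -", comment)
```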

In addition, good visual overviews can help to make the data more accessible and understandable for educators. By presenting the data in a clear and visually appealing way, educators can more easily see trends and patterns and understand how to use the data to inform their actions.

Overall, AI and good visual overviews can help address the challenge of moving from data collection and analysis to action in education survey systems. By providing clear, actionable insights and making the data more accessible and understandable, these tools help educators use student feedback effectively to improve the learning experience, while their clarity and simplicity save time and enable immediate action.

Act on the feedback

Acting on student feedback is the most important step, because it can have a significant impact on student engagement, satisfaction, and achievement. Research has shown that students who feel heard and valued, and whose feedback is used to inform improvements in the learning environment, are more likely to be engaged and motivated to learn.

One study found that students who participated in a survey that was used to inform changes in their school reported higher levels of satisfaction and engagement compared to those who did not participate in the survey (Bryk & Schneider, 2002). Another study found that students who felt that their opinions were valued and that their feedback was used to inform changes in their school had higher academic achievement (Hoy & Woolfolk, 1993).

Additionally, research has shown that students who have a sense of agency and ownership over their learning experience are more likely to be engaged and motivated to learn (Fredricks, Blumenfeld, & Paris, 2004). By acting on student feedback, educators can help students to feel more connected and invested in their own learning, which can lead to improved outcomes.

Overall, the research supports the importance of acting on student feedback in surveys. By using student feedback to inform improvements in the learning environment, educators can increase student engagement, satisfaction, and achievement.

Bonus: Will incentives do the trick?

Incentives can increase response rates in education surveys, but there are pros and cons to consider when using them. Research has shown that offering incentives can be effective in increasing participation in surveys, particularly in cases where response rates are low (Dillman, Smyth, & Christian, 2014). One study found that offering a small cash prize increased response rates in a school survey from 21% to 42% (Schuman & Presser, 1981).

However, it's important to be aware of the potential downsides of using incentives. Some research has found that offering incentives can lead to less honest and accurate responses, as students may be more motivated by the incentive than by the opportunity to provide feedback (Dillman et al., 2014). Additionally, offering incentives can create a bias in the sample, as some students may be more likely to participate in the survey if they stand to gain something from it (Gosling, Vazire, Srivastava, & John, 2004).

Overall, while incentives can be a useful tool for increasing response rates in education surveys, it's important to carefully consider the pros and cons and to use them in a way that minimizes potential biases and encourages honest and accurate responses.