The Teaching Effectiveness Taskforce offers the following Guiding Principles, which informed its development of the common set of survey questions. These principles will also assist departments that would like to add to the UMSL Student Feedback Survey. The Supplemental Questions List will include nationally recognized and validated survey questions that you may find helpful; questions not on that list should follow the guidance here.
The fewer questions there are, the more responses you are likely to receive. In addition to the common question set, departments can add up to four questions, drawing on a range of closed-response and open-response formats. See the question pool here.
Focus on relevant and actionable concepts. Students taking a full load of 5 courses may receive as many as 150 questions in a semester. Making sure that questions are focused and relevant increases the likelihood that students will answer them.
Also consider whether an end-of-course survey is the best place to ask your question. Questions about specific content, assignments, etc. might be better addressed early in the semester or at mid-semester, when adjustments can still be made.
Each item should address one construct. Double-barreled items, those that ask about two things at once (e.g., “The lectures and readings were engaging”), do not meet psychometric standards for producing accurate results.
UMSL has decided to move away from items that ask for an overall rating. These types of questions are difficult to anchor in specific, relevant elements of the course experience and as a result often introduce substantial bias into the rating.
Items that ask broad questions not anchored in specific course details have been found to be subject to bias. They should be avoided, and new items should be worded in ways that limit bias. For example, avoid broad questions such as “I would recommend this instructor”, “I enjoyed this course”, or “I would take this class again.” Instead, focus on specific issues, topics, or assignments, e.g., “The required readings contributed to my learning.”
Students rarely have sufficient knowledge to rate the teaching skill of their faculty or the quality and appropriateness of the course content. Therefore, questions asking students to assess these areas of faculty performance and course content should be avoided. Instead, focus on areas where students can provide valuable feedback: their perceptions and lived experience in the course. For example, instead of “The instructor was an excellent teacher”, a better question would be “My instructor’s explanations of course concepts were clear and organized.” We value student voices and their feedback.
The Taskforce and campus have worked to develop a definition of Teaching Effectiveness. The most helpful questions, those that best allow you to reflect on your own teaching, are anchored in, and can easily be tied back to, the key elements of this definition.
Students are hesitant to complete surveys if they feel they may be identified by their responses. Avoid asking demographic questions of student participants to protect their anonymity. For example, a student asked to indicate their “level” or “year” could be identified if they are the only graduate student or undergraduate senior in a course.
Avoid asking questions that imply a specific answer or that can be answered with a simple yes or no. For example, instead of asking, “Did you learn a great amount from this course?”, a better question would be “To what extent do you feel you mastered the content in this course?” This allows students to use a scale to indicate their learning and does not lead them to a specific answer.
Survey research methods suggest that to maximize accuracy and minimize bias, survey questions and response options should be designed to place as little cognitive load* on participants as possible. Two ways to lower cognitive load are to use standardized Likert scales (e.g., strongly agree to strongly disagree) and to keep the scale consistent throughout a survey.
*"Cognitive load" relates to the amount of information that working memory can hold at one time.