Dear Hopeless About Evaluations

Dear Ms. Scholar, I’ve just received my student evaluations, which were not as kind as I would like. How much should I trust them? I’m feeling hopeless about the whole process.

Ms. Scholar at work.

Dear Hopeless About Evaluations, Student evaluations – and peer evaluations for that matter – can be difficult. Most of us don’t like receiving feedback that is less than positive. Some of us have difficulty reading either positive or negative feedback, fearing that it may be less positive than we’d wish.

At my university, our student evaluations consist of a series of questions rated on a Likert scale, as well as several open-ended questions (e.g., What did the student like? What didn’t the student like?). Evaluation, tenure, and promotion committees receive the responses to the Likert questions, but not to the open-ended questions. These ratings, along with peer observations, play significant – and often unpleasant – roles in the tenure and promotion process.

Because student evaluations at our university are not normed, it’s not clear what counts as a “good enough” response. Within a department, senior faculty may see student evaluations from courses similar to their own and develop a gut-level feeling for whether they’ve done well. Most new faculty, however, don’t have that information. In fact, Ms. Scholar has a colleague who has very respectable evaluations, yet believed they were weak. Regardless, it seems very likely that some kinds of courses will draw much lower evaluations than others: required lower-level mathematics courses, for example, as opposed to upper-level electives within the major.

There’s a lot of debate and discussion about student evaluations, some of it supported by evidence, some not. There’s also a lot of anxiety among faculty. When Ms. Scholar was new to the faculty, one belief was that men earned higher evaluations than women, and that men wearing ties fared better, as did women wearing dresses. Basow and Martin (2012) support those suspicions, suggesting that people using evaluations for promotion and tenure decisions should be aware of their inherent bias (see also Boring, Ottoboni, & Stark, 2016). Ambady and Rosenthal’s (1993) classic research, which used “thin slices” of nonverbal behavior from early in the semester to predict end-of-semester evaluations, has been interpreted to mean that student evaluations carry little more information than momentary first impressions – and the prejudices and stereotypes that drive those impressions. Conducting optional mid-term evaluations probably yields better evaluations at semester’s end – and better teaching (Wilson & Ryan, 2012) – as doing so seems to signal to students that faculty care about their learning. Finally, there is the age-old fear that faculty will be dinged if they are perceived as tough (Greenwald & Gillmore, 1997), a fear that has probably contributed to the well-documented grade inflation of the last 70 years. Unfortunately, Greenwald and Gillmore concluded that tough graders do, on average, earn somewhat poorer student evaluations.

When she visited here in January 2016, Barbara Walvoord argued that faculty can be demanding and still receive very positive student evaluations. How? Walvoord suggested that we be clear, prepared, well-organized, enthusiastic, and ready to help our students learn. Other research suggests that faculty warmth also helps (Best & Addison, 2000).

Ms. Scholar was recently talking with a colleague who had taught elsewhere before coming here and who described a very different process for using student evaluations. That university had departmental and university norms for its instrument. His department looked at changes in an instructor’s average evaluations over time as an indicator of improving teaching effectiveness. The faculty member and a mentor then considered that teaching together and discussed how it could become stronger. While our student evaluations are summative in nature, that department adopted a formative emphasis that Ms. Scholar finds both anxiety-provoking and exciting. Consistent with the literature, they used student evaluations as only one tool among many for assessing teaching effectiveness (McCarthy, 2012).

Just because we are excited about something doesn’t mean we are approaching it effectively. On the other hand, students disliking an assignment or an approach is not, by itself, sufficient reason to remove it. Instead, after reading evaluations, faculty might well decide to retain the objectionable aspects of a course but change how they discuss them, perhaps placing greater emphasis on why they matter. For example:

Analyzing one’s writing and making multiple revisions, while often perceived as onerous, are among the skills my students and alumni report as the most important they learned in this course. Remember that you’re not alone: we are working together to help you become a stronger writer.

What does all this research suggest? Don’t assume that what you perceive as poor evaluations means you did a poor job teaching a particular course, but do treat evaluations as one tool for strengthening your teaching. Use them as an opportunity to grow in your teaching effectiveness: consider what you are doing well and where you could be more effective. As you read them, adopt a growth mindset (Dweck, 2006): learn from this set of evaluations and from the other data you gather over the course of the term.

Use your evaluations to help you reflect, but don’t let them discourage you. Become a stronger teacher next semester. Ms. Scholar knows you can. – Ms. Scholar

References

Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology, 64, 431-441.

Basow, S. A., & Martin, J. L. (2012). Bias in student evaluations. Effective evaluation of teaching: A guide for faculty and administrators. Retrieved from Society for the Teaching of Psychology: http://teachpsych.org/ebooks/evals2012/index.php

Best, J. B., & Addison, W. E. (2000). A preliminary study of perceived warmth of professor and student evaluations. Teaching of Psychology, 27, 60-62.

Boring, A., Ottoboni, K., & Stark, P. B. (2016). Student evaluations of teaching (mostly) do not measure teaching effectiveness. ScienceOpen Research. Retrieved from https://www.scienceopen.com/document_file/25ff22be-8a1b-4c97-9d88-084c8d98187a/ScienceOpen/3507_XE6680747344554310733.pdf

Dweck, C. S. (2006). Mindset: The new psychology of success. New York, NY: Ballantine.

Greenwald, A. G., & Gillmore, G. M. (1997). Grading leniency is a removable contaminant of student ratings. American Psychologist, 52, 1209-1217.

McCarthy, M. E. (2012). Using student feedback as one measure of faculty teaching effectiveness. Effective evaluation of teaching: A guide for faculty and administrators. Retrieved from Society for the Teaching of Psychology: http://teachpsych.org/ebooks/evals2012/index.php

Wilson, J. H., & Ryan, R. G. (2012). Formative teaching evaluations: Is student input useful? Effective evaluation of teaching: A guide for faculty and administrators. Retrieved from Society for the Teaching of Psychology: http://teachpsych.org/ebooks/evals2012/index.php


If you have questions regarding teaching, student/faculty issues, or other comments/suggestions, please write to: Ms. Scholar c/o MsScholarCU@gmail.com
