General Education survey: What’s most important?

— Jeanne M. Slattery, Mark Mitchell, Randy Potter, Melissa K. Downes

Figure 1. Faculty and student responses to the General Education Disposition/Skills Survey about where students most need to improve.

Recently, the Institutional and Student Learning Assessment Committee (ISLAC) reported the results of the General Education Disposition/Skills Survey. Briefly, faculty respondents believe our students most need to improve their writing skills, while student respondents believe they most need to improve their critical thinking skills (see Figure 1); students surveyed ranked writing fifth (at 12%). Faculty and students also diverged in their perceptions of students’ oral communication skills: 19% of students identified this as their most important problem, but only 0.76% of faculty identified oral communication as the skill needing the most attention.

We strongly believe it is worthwhile to investigate faculty and student perceptions of skills, teaching, and learning, and appreciate ISLAC’s efforts to do so; however, as we will outline below, doing so can be quite difficult. The concerns we describe are not meant as personal attacks on the people who have taken leadership roles in moving our assessment process to the next level, but are our efforts to help us develop stronger assessment strategies in the future. As members of this university, we have the responsibility to practice the critical thinking skills our students say they want to gain.

Figure 2. Ordinal data — like ISLAC’s — do not help us identify whether there is a problem or how serious it is.

1. How weak is weak? Suppose we ask Rosa what her worst subject is. She says “Math.” Assuming she is correct, to what extent should we intervene to help her with math? Obviously, that depends on how much worse she does in math than in her other subjects.

To illustrate, imagine three different Rosas and their grades in English, Logic, and Mathematics. See Figure 2. Obviously, our diagnosis and intervention would be different depending on which Rosa we studied. Unfortunately, ISLAC didn’t get either grades or ratings of perceived performance from their sample, so we do not know whether we have a scenario comparable to Rosa 1, 2, or 3.
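Because ISLAC collected neither grades nor ratings, any numbers here must be invented. The short Python sketch below uses hypothetical grades for the three Rosas to show how the identical ordinal answer (“Math is my worst subject”) can hide very different underlying situations.

```python
# Hypothetical grades for three "Rosas" -- invented for illustration,
# since ISLAC's survey collected no grades or ratings.
rosas = {
    "Rosa 1": {"English": 95, "Logic": 94, "Math": 93},  # trivial gap
    "Rosa 2": {"English": 95, "Logic": 94, "Math": 75},  # one real weakness
    "Rosa 3": {"English": 65, "Logic": 64, "Math": 60},  # weak across the board
}

for name, grades in rosas.items():
    worst = min(grades, key=grades.get)          # the only datum an ordinal survey yields
    gap = max(grades.values()) - grades[worst]   # the magnitude the survey cannot see
    print(f"{name}: worst subject = {worst}, gap below best subject = {gap}")
```

All three Rosas give the same survey response, yet only Rosa 2 has a problem that is both sizable and specific to mathematics.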

Keeping to a simple design, ISLAC could have asked students and faculty to rank order their top three concerns. This would also have addressed our concern about being forced to identify only a single problem, a choice we ourselves made only with great difficulty. How do we choose among these skills, each of which we see as important?

Figure 3. Ordinal data are particularly problematic when we attempt to combine scores from many people.

2. Some things don’t add up. Although there are problems with looking at Rosa’s data alone, those problems are multiplied when we try to combine her data with those from other respondents. If we know only students’ worst subject (shown in red in Figure 3), we would conclude that mathematics is the subject students need to work on (two of three students rate it as their worst subject), even though their average grade is much lower in English.
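A small, hedged illustration in the spirit of Figure 3, with invented grades: two of three students name mathematics as their worst subject, yet the group’s average grade is far lower in English.

```python
# Invented grades for three students, echoing Figure 3.
students = [
    {"English": 60, "Math": 55},   # worst subject: Math (by 5 points)
    {"English": 62, "Math": 58},   # worst subject: Math (by 4 points)
    {"English": 40, "Math": 90},   # worst subject: English (by 50 points)
]

# Tally "worst subject" votes -- the only information an ordinal survey captures.
worst_counts = {"English": 0, "Math": 0}
for s in students:
    worst_counts[min(s, key=s.get)] += 1

# Compute average grades -- the information the survey throws away.
averages = {subj: sum(s[subj] for s in students) / len(students)
            for subj in ("English", "Math")}

print(worst_counts)  # {'English': 1, 'Math': 2}  -> "fix Math"
print(averages)      # {'English': 54.0, 'Math': 67.7 (approx.)}  -> English is weaker
```

The tally points at mathematics; the averages point at English. Aggregating ranks and aggregating performance can give opposite answers.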

The ISLAC data could be problematic even if individual responses are accurate. Faculty data may be misleading because: (a) faculty may more easily recall poor performance in one area than in another, even when the less memorable area is actually weaker; (b) faculty may see weaknesses in “important” skills as needing more attention than bigger weaknesses in areas they consider less important; or (c) the order of the response options may have influenced rankings.

1. Availability heuristic. If you ask faculty what students are weakest at, faculty who assign many papers will be quite conscious of the weak writing they encounter. Grading a poorly written paper is time-consuming, painful, and memorable. Professors are also likely to complain about weak writing, which not only raises their own awareness of the problem but also raises the awareness of colleagues hearing those complaints. That writing-intensive classes are required (but critical thinking classes are not) makes faculty even more aware of the writing problem. Similarly, we suspect that the 12% of faculty identifying Quantitative Reasoning as the most important problem our students face is overrepresented by those teaching mathematics, statistics, and mathematically dependent courses, who thus regularly see the (poor) use of quantitative reasoning.

A koan: Is a problem a problem if we don’t see it?

2. Values. Of course, what you identify as the most serious problem depends on your values. In the case of this survey, faculty may value writing (and reading good writing), whereas students may believe we live in a post-literate age.

3. Order effects. Even on topics where people have strong preferences for one option over another, the order of the options matters. In this survey, where respondents may have had difficulty choosing among several vital skills, the order of the options may have had an enormous effect.

If faculty data should be interpreted cautiously, student data should also be studied with care. All of the problems afflicting the faculty data afflict the student data: the availability bias (e.g., noticing problems in oral communication because students talk to each other more often than they compare quantitative reasoning skills), conflating importance and competence (e.g., perhaps ranking critical reasoning as a more serious weakness than writing because they see critical reasoning as more important than writing), and choosing a response option based on the order of those options. Student data also suffer from two further problems:

  • Not all students have the metacognitive skills to make these judgments. This survey asked students to assess their critical reasoning skills when they may not know what critical reasoning entails. This problem may be even more true of ethical reasoning and intercultural competence. Further, the survey asked students to assess their writing skills, when one problem weak writers have is that they have difficulty recognizing problems with their writing, which results in their failing to revise poorly written sections (Sitko, 1998). Ironically, it often requires some level of competence to recognize one’s lack of competence.
  • Even if students have the metacognitive skills to evaluate their abilities, we do not know how students decide that they are better at writing than at oral communication. To do so, they probably need to compare themselves with others, which leads to two problems. First, they are probably not poring over each other’s math homework and critical reading assignments, so how do they know how well others are doing in those areas? Second, a student’s rating will depend on their competence relative to the others they compare themselves to: if everyone in their comparison group has poor mathematical reasoning, a student who is weak at mathematical reasoning is “average”; if everyone else has strong math skills, a student with adequate math skills is “below average” (see the sketch after this list). Probably for this reason, some cross-cultural studies have reported a negative correlation between students’ perceived math skills and their actual skills (Wang & Lin, 2008).
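A toy sketch of this reference-group effect (the numbers and the simple above/below-average rule are our own invention, not Wang and Lin’s model): the same student, with the same actual skill, reports a different self-assessment depending on her peers.

```python
def self_rating(own_skill, peer_skills):
    """Rate yourself relative to the peers you happen to observe."""
    peer_mean = sum(peer_skills) / len(peer_skills)
    if own_skill > peer_mean:
        return "above average"
    if own_skill < peer_mean:
        return "below average"
    return "average"

skill = 70  # the student's actual math skill (0-100 scale), held constant

weak_peers = [50, 55, 60]    # comparison group with poor math skills
strong_peers = [85, 90, 95]  # comparison group with strong math skills

print(self_rating(skill, weak_peers))    # above average
print(self_rating(skill, strong_peers))  # below average
```

Self-assessments built this way track the comparison group as much as the skill itself, which is one route to the negative correlations Wang and Lin (2008) describe.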

* * * * *

Despite the problems we’ve identified with this study, we can use these data to engage our students in a variety of ways. Each of us advocates discussing general education skills (and this study) in our classrooms. For example, many students have misperceptions of who faculty are and what they expect. We can use this study as a springboard to a fruitful discussion of possible differences between teacher and student beliefs, perceptions, and expectations.

When I (JMS) teach our capstone course, for example, I regularly talk about the education process and ask my students to consider their experiences in school and in their learning. They also work on a research project over the course of the semester. As I present and integrate these two themes, I plan to give them these data. How might they interpret these data? Why might faculty and students perceive their skills so differently? What problems with this study do they see? How might they frame these questions more usefully?

We encourage all faculty to find ways to reflect on the critical skills that students need in order to be engaged, thoughtful, and ethical students, professionals, and citizens. We encourage faculty to help their students reflect on these skills as part of an intentional strategy for meeting our goals — and theirs.

References

Sitko, B. M. (1998). Knowing how to write: Metacognition and writing instruction. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in educational theory and practice (pp. 93–115). Mahwah, NJ: Lawrence Erlbaum.

Wang, J., & Lin, E. (2008). An alternative interpretation of the relationship between self-concept and mathematics achievement: Comparison of Chinese and US students as a context. Evaluation and Research in Education, 21, 154–174.

We thank ISLAC for raising these issues and making us think about these and related questions.


Jeanne M. Slattery is a professor of psychology at Clarion University. She is interested in thinking about what makes teaching and learning successful, and generally describes herself as a learner-centered teacher. She has written two books, Counseling diverse clients: Bringing context into therapy and Empathic counseling: Meaning, context, ethics, and skill (with C. Park), and is writing Trauma, meaning, and spirituality: Research and clinical perspectives. She can be contacted at jslattery@clarion.edu

Mark L. Mitchell is professor and chair of psychology at Clarion University. He has written several books, including Research design explained (now in its 8th edition), Writing for psychology (in its 4th edition), and Lifespan development: A topical approach. He can be contacted at mmitchell@clarion.edu

Randall M. Potter is a professor of psychology at Clarion University. He is currently obsessed with learning R, exploring all kinds of Mac software, beer making, and bicycling. He can be contacted at rpotter@clarion.edu

Melissa K. Downes is an associate professor of English at Clarion University. She loves teaching. She is interested in talking about how people teach and enjoys sharing how she teaches. She is an 18th-century specialist, an Anglophile, a cat lover, and a poet. She can be contacted at mdownes@clarion.edu
