I am a member of the higher education assessment police force. Only I don’t have a badge, uniform or authority. My official title is Assessment Facilitator. But my unofficial title is pain in the ass. We are a small but mighty force of faculty who care about measuring learning outcomes, or making sure students learn what they are supposed to learn in a particular course, program or degree. Our force is only mighty in our sense of responsibility, our duty to make sure the institution of higher education remains a place where learning is distributed equitably. With access and success in mind, we peer at learning data and extrapolate meaningful conclusions for the purpose of improving educational systems’ impact on learning.
If this job sounds noble—you have never worked in higher education.
A friend recently sent me a link to this NYT opinion piece complaining about the trend of requiring faculty to engage in ‘official’ learning assessment. I read the first few lines of the article, The Misguided Drive to Measure ‘Learning Outcomes,’ and then replied: I don’t need to read this. I live this.
The opinion championed in the op-ed might as well be made into a theme song, perhaps set to the worn-out rhythm of Für Elise. Familiar, pervasive, unifying in its commonality of tone and pitch—the author, Molly Worthen, echoes the familiar banter muttered in faculty lounges all across America.
In fact, I heard the abridged version of this article last week in an Assessment Committee meeting. “Nobody wants to be here,” a tenured faculty member declared as the meeting adjourned. “This is the worst assignment you can get.”
Another agreed, citing the commonly shared sentiment: “Everyone hates assessment.”
Universally deplored and yet—according to Worthen’s excellent linked source, the National Institute for Learning Outcomes Assessment (NILOA) report, beyond a few rogue outliers—universally required. The erection of these “compliance”-based (Sputnik-inspired?) systems began in the 1960s with the birth of national accreditation systems. Accreditation agencies like WASC first required faculty to develop measurable learning outcomes for all of their courses and programs; these metrics also had to align with broad university learning goals, and therefore required a tremendous amount of work to construct. By dangling the threat of accreditation, these agencies mostly succeeded in pushing administrators to push college faculty to tackle this gargantuan feat throughout the 80s. And, of course, once that work was done—then the work of measuring began. If one course has 10 learning outcomes and 500 students, then, well, that means a whole lot of measuring. But measuring is no longer adequate—now it’s: How does this data drive change? What are you going to do with the data? And on…and on…
So a tremendous amount of faculty labor drove this resentment-fueled assessment beast onto university campuses in the first place, and that’s exactly how faculty continue to view assessment today: a beastly chore that they resent being forced to do.
One of the commenters to Worthen’s article writes: Wait. I’m confused, what exactly is assessment?
Isn’t that what grades do?
It’s my job to answer that reasonable question. And the answer is: nope.
Grades can’t be used for assessment—according to WASC. Outcomes are measurable, but grades do not necessarily equate to outcomes and therefore grades are not valuable data (according to accreditors). This argument causes that internalized resentment beast to paw at the ground in rage.
Faculty: So let me get this straight—what you are saying is that what I was hired to do—teach and build assignment-based assessments, then review, evaluate and score these assignments—is suddenly not good enough? You want me to measure some arbitrary thing—that you developed and deemed an appropriate thing that I should be assessing and then you want me to work even harder to build some sort of device that accurately measures that thing that you are saying is what my class should teach and then you want me to again collect student work to measure this thing, even though I already collected, assessed and GRADED—you want me to do that again? Is that right?
Assessment Facilitator: Yes.
Faculty never verbalize this to me directly, but the sideways glance says it all: Screw you.
Worthen does an excellent job of putting this sentiment into Beethoven-like cadence, with just enough academic bravado to inspire everyone round the committee table to chant: Hear, hear!
And yet the assessment beast continues to rampage across campuses. It requires meetings, data collection and analysis, quantitative and qualitative measurements, metrics, baselines, critical thinking, reasoning, curiosity, creativity and all of the other ‘ings’ and ‘itys’ that higher education so frequently touts that it cultivates—but instead assessment demands these traits from faculty. And the old saying proves true: teachers make the worst students.
To make matters worse, a large portion of this process is mind-boggling, requiring a high degree of awareness about learning science, cognition and reflection—concepts that faculty have never been trained in. Assessment plans resemble Gantt charts. Reports read like technical documents—each highly specialized and unique. In fact, assessment often takes faculty so far outside of their area of expertise that many, rightly, complain: I am not a bean counter! I am a professor of … and that’s what I am focusing on in my classroom, and if you wanted me to ‘bean-count’ for you, then we should have talked about bean-counting during the exhaustive interview process that I just survived. No one mentioned bean counting on the hiring committee. This feels like the ultimate bait and switch—hire me as an expert in my field and then lock me in a room counting beans.
And so we weary Assessment Facilitators soldier on. All the while the NILOA report portrays a picturesque view of assessment, and recommends MORE work from faculty (albeit cleverly disguised as support): “Professional development could be more meaningfully integrated with assessment efforts, supporting faculty use of results, technology implementation, and integration of efforts across an institution.” This is like saying let’s make the beast dance too.
Oh yes, I think the beast will be happier if we just give it some dancing shoes and make it dance.
Can I just tell you from the front lines to all of you who think this is possible: If you have a beast, you can’t make it dance, no matter what tune you play or what shoe you put out there. The beast ain’t gonna dance. The idea of enforcing professional development “with assessment efforts” overlooks the fact that higher education faculty don’t want to be developed in this way—in fact, they want to enact a full-on retreat from this beast.
So what’s the solution? There is no easy answer. I do know this: Increasing faculty workload (AKA “professional development”) related to assessment—the act that created the beast in the first place—is not the answer. I also know that if administrators care about accreditation (which obviously they should), then administrators need to take on more of the burden of assessment by designing and implementing meaningful and authentic assessment systems that better support faculty. “Compliance”-based assessment systems need to go the way of Sputnik.
Take it from the beat cop without a badge, uniform or authority—we’ve got to do more to support faculty, not the other way around.