Fourth Sector Ecosystem

As any start-up entrepreneur knows, there’s a whole world to figure out between idea and profit.

Who knew there was a name for this bewildering state of discovery? Apparently, “for-benefit” organizations (that’s me) need a more cohesive support structure. I agree! Thanks for this visualization that illustrates all the elements that for-benefit business models need to tackle.

Screen Shot 2019-11-09 at 9.23.54 AM.png
Click Here for More

Training Review

A review for my M.S. class…

Serious eLearning Manifesto

For the Student’s Choice Assignment, I completed an online training developed by the National Center on Safe Supportive Learning Environments. This online, interactive training was quite extensive, with three separate modules: Understanding Trauma and Its Impacts, Building Trauma-Sensitive Schools, and Leading Trauma-Sensitive Schools. Each module took about an hour to complete and included extensive performance support: checklists, handouts, and leader’s guides.

Screen Shot 2019-07-15 at 10.03.53 PM.png
The graphic elements and images balanced the text and narration throughout the training.

All three modules adhere to most of the principles outlined in the Serious eLearning Manifesto, including Provide Realistic Practice, Motivate Meaningful Involvement, Provide Guidance and Feedback, and Use Interactivity to Promote Engagement. However, Provide Support for Post-Training Follow-Through and Provide Realistic Practice could have been strengthened to better Target Improved Performance. These principles were only marginally addressed in the training, and without more practice and post-training support, the principle of Targeting Improved Performance is unlikely to be met.

The purpose of the training is to create trauma-informed school systems. The authors outline the following required elements of a trauma-sensitive school system:

■ Educating all school staff about trauma and its effects

■ Promoting physical and emotional safety in relationships and in the environment

■ Reducing trauma-related triggers in the school environment and eliminating potentially retraumatizing practices, such as harsh or punitive responses

■ Considering trauma in all assessment protocols and behavior plans

■ Ensuring youth and family voice, choice, and empowerment

■ Addressing the secondary effects on educators that can occur when working with trauma survivors

The audience for the training is explained as follows: “Although it was prepared for school and district administrators and staff, the Trauma-Sensitive Schools Training Package includes recommendations for involving students and families” (Implementation Guide, p. 1). Federal or state school district funding appears to support the dissemination of this material through professional development trainings. The training completed a pilot test in 2018 and includes extensive Level 1 assessment surveys in the training materials.

The training allows for self-paced review with guided narration and next and back buttons in an interactive module similar to Adobe Captivate. The training created interactivity with the audience in several effective ways. For example, buttons let the audience “compare and contrast the differences between a traditional school to a trauma-sensitive” school. A short scenario was provided, and learners chose whether they had just viewed a ‘traditional’ or a ‘trauma-sensitive’ approach to the scenario; this selection prompted feedback explaining the differences.

Another effective interactivity was a set of scenarios that profiled a teacher interacting with a student. Learners needed to choose which aspects of her interaction were ‘trauma-sensitive’ and which were not. Both wrong and right answers received feedback. These practice elements were helpful, but they did not extend into asking the learner to practice the skills presented in the material; doing so would have better aligned with the transfer elements of the Serious eLearning Manifesto, such as Target Performance Improvement and Aim for Long-Term Impact.

One effective way the training supported the Provide Realistic Practice principle was through discussion prompts that ask learners to discuss how to apply the information in the previous slides. These slides also support the Motivate Meaningful Involvement principle; two to three “How-To” discussion slides were included in each of the three training modules, asking learners to “Pause and Reflect” with “your team” on the previous topic. The slides used a consistent visual pattern, a red octagon with a hand in the middle, to signal this activity. This visual cue broke up the extensive narration and information and asked the audience to think collectively about applying what they had just learned. However, if the training is taken individually and self-paced, the group discussion cannot happen and the authentic practice environment is lost on the learner.

Overall, the training was highly polished and professional. The left side menu-bar with clickable table of contents made the navigation very easy and encouraged learners to review and re-click areas of interest. Despite a large amount of information in the product, cognitive overload was kept to a minimum with a variety of thoughtfully placed interactive elements, scenarios, and user-choice buttons. The combination of narration, text and graphic images made for a well-paced and interesting training.

As mentioned previously, the least developed principle was the opportunity for learners to practice some of the recommended strategies, and there did not seem to be any post-training follow-through or communication. From the perspective of the Serious eLearning Manifesto, the training was highly successful and supported each principle at least in part, but it could be improved by creating a community of knowledge or other communication-based strategies. Doing so would both increase the opportunities for authentic practice and improve the follow-up, thereby sharpening the Manifesto’s “Aim for Long Term Impact.”

Thank you for a great learning experience. This was an interesting and informative assignment. Great to see all of the Principles in action in this training.


Link to Implementation Guide

Link to All Training Materials

Screen Shot 2019-07-15 at 10.02.45 PM.png
This slide shows the interactive delivery. The buttons along the top allow the learner to choose the material by category of interest. These buttons allow learners to move in multiple directions through the information.

Making Change

In honor of the New Year, I hereby declare my new 2019 mantra: Change for good. 

After all, there are many changes that are not so good. Take aging, for example. One can succumb, and simply accept the inevitability of aging as though life were nothing more than a half-eaten loaf of bread forgotten at the bottom of the basket—destined to become moldy and undesirable. Or aging can bring a resurrection of the self—equipped with a new perspective, an all-seeing view like the one you can only get from the top of the mountain—far reaching, expansive.

So which do you choose? Moldy? Or open to the 360-degree panorama of possibility presented by the first day of the rest of your life?

Change is really two parts then: 1) accepting change into your heart and 2) consciously deciding to change for good.

This two-step solution gets us only so far when we are talking about changing systems, however. Since we all operate in a system, that system can be a formidable foe to our change-for-good mantra. We may decide to embrace change, but the residents of the system in which we reside may resist change, fear change, fight change.

So where does that built in blockade of resistance leave a change-maker?

In a battleground.

This is where the classic New Year’s resolution falls short—because change requires three steps, not just the two listed above.

Step three: Change-makers need to equip themselves with a solid plan. Because change is not just about a conscious decision, but rather about a series of new actions. Things that are new are harder; the learning curve is steep. And by the way, that view from the top—the expansive one—goes away as you drop down into the long slog through the valley of real change. Times can seem rough, and doubt will fall like long shadows darkening the way.

Any change seeker needs a map out of that gloom—a reminder of the route. For without a serious plan, the change-maker risks becoming lost along the way and that, my friend, is the death of change-for-good.

So make sure you don’t get lost—develop your battle plan—the how-to change manual, a guide from point A (I am here) to point B (I will be there). Spend some time on that mental work—until you can clearly see your way through the valley and back up to the mountain top. Best of luck to all my fellow change-for-gooders in 2019!

Here are my favorite systems-approach, change-maker graphics:

human performance technology model.png
design develop process.png

Chart Source: Reiser and Dempsey (2017), Trends and Issues in Instructional Design

Change requires more than just management, however. Change often involves deep learning of new systems and processes. So how do we design online environments that teach complex skills? ADDIE and Gagné are the big go-tos for instructional design, but a lesser-known model for developing learning systems was developed by Kirschner and van Merriënboer (2007). This model is designed to fit inside the A and D segments of the ADDIE model. Well suited to complex task transfer, it proposes a holistic view of the system, then a deep dive into “the analysis of a to-be-trained complex skill or professional competency in an integrated process of task and content analysis and the conversion of the results of this analysis into a training blueprint that is ready for development and implementation” (p. 252).


Link to article:

To aid in this process, the authors developed four components and ten steps, as shown in the table taken from p. 246 of the article. This blueprint improves on former theories, according to the authors, because of its focus on successful transfer of skillsets. Kirschner and van Merriënboer (2007) explain, “Instructional design (ID) theory needs to support the design and development of programs that will help students acquire and transfer professional competencies or complex cognitive skills to an increasingly varied set of real-world contexts and settings.” Their goal in proposing this “blueprint,” then, was not to develop a complete training system, but rather to zero in on the requisite skillsets for complex tasks that must be transferred to the learner in order to participate effectively in the larger system.

Blueprint Components of 4C-ID: 10 Steps to Complex Learning

Learning Tasks
1. Design Learning Task
2. Sequence Task Classes
3. Set Performance Objectives

Supportive Information
4. Design Supportive Information
5. Analyze Cognitive Strategies
6. Analyze Mental Models

Procedural Information
7. Design Procedural Information
8. Analyze Cognitive Rules
9. Analyze Prerequisite Knowledge

Part-Task Practice
10. Design Part-Task Practice
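As a quick way to operationalize the blueprint, the table above can be encoded as a simple data structure. The sketch below is illustrative Python; the structure and helper names are my own assumptions, not part of Kirschner and van Merriënboer’s article. It flattens the four components into the numbered ten-step sequence a design team could walk through as a checklist:

```python
# A minimal sketch encoding the 4C-ID blueprint from the table above.
# The dict layout and function name are illustrative assumptions.

FOUR_CID_BLUEPRINT = {
    "Learning Tasks": [
        "Design Learning Task",
        "Sequence Task Classes",
        "Set Performance Objectives",
    ],
    "Supportive Information": [
        "Design Supportive Information",
        "Analyze Cognitive Strategies",
        "Analyze Mental Models",
    ],
    "Procedural Information": [
        "Design Procedural Information",
        "Analyze Cognitive Rules",
        "Analyze Prerequisite Knowledge",
    ],
    "Part-Task Practice": [
        "Design Part-Task Practice",
    ],
}

def numbered_steps(blueprint=FOUR_CID_BLUEPRINT):
    """Flatten the four components into the numbered 10-step sequence."""
    steps = []
    for component, component_steps in blueprint.items():
        for step in component_steps:
            # Number steps consecutively across all four components.
            steps.append((len(steps) + 1, component, step))
    return steps
```

A designer could then iterate over `numbered_steps()` to track which steps of a training blueprint have been completed.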
The authors explain:
“There are many examples of theoretical design models that have been developed to promote complex learning: cognitive apprenticeship (Collins, Brown, & Newman, 1989), 4-Mat (McCarthy, 1996), instructional episodes (Andre, 1997), collaborative problem solving (Nelson, 1999), constructivism and constructivist learning environments (Jonassen, 1999), learning by doing (Schank, Berman, & MacPerson, 1999), multiple approaches to understanding (Gardner, 1999), star legacy (Schwartz, Lin, Brophy, & Bransford, 1999), as well as the subject of this contribution, the Four-Component Instructional Design model (van Merriënboer, 1997; van Merriënboer, Clark, & de Croock, 2002). These approaches all focus on authentic learning tasks as the driving force for teaching and learning because such tasks are instrumental in helping learners to integrate knowledge, skills, and attitudes (often referred to as competences), stimulate the coordination of skills constituent to solving problems or carrying out tasks, and facilitate the transfer of what has been learned to new and often unique tasks and problem situations (Merrill, 2002b; van Merriënboer, 2007; van Merriënboer & Kirschner, 2001).”

Becoming Trauma-Informed

What a journey! After 16 weeks of reading and research, I have learned so much about Trauma-Informed educational models and Social Emotional Learning this semester! And yet if I were to estimate how much I have read compared to the amount of information out there—it would probably be about .001% of the total body of knowledge created by researchers and clinicians over the last couple of decades. The word burgeoning comes to mind when considering all the brain and learning science that has grown out of…well, technology, really. Because researchers before technology (BT) couldn’t measure the brain, map the brain, or observe the inner workings of the brain in action.

Despite all this empirical energy, enthusiasm, expertise, and technology, however, the revolution needed to transform educational models into trauma-informed (AKA Enlightened) models has yet to occur.

Why is this the case, you ask?

It certainly isn’t that the need for these systems has decreased; there are no signs of traumatic events in American culture decreasing anytime soon. Perhaps it’s that people have reached a higher plane of enlightenment through advanced practice of social emotional skills on their own, and are now better able to cope with trauma and get on with their lives, freed from the bursts of emotion-based behavior that disrupt not only their own lives but the lives of everyone around them, to the point of becoming toxic to self and others?

Currently trending Twitter feeds prove otherwise.

And so do the facts. According to Michael Moe et al. (2018), in an A2Apple News “Beary Merry Christmas” post: “In the United States, suicide rates are up 30% over the past twenty years. Opioid deaths increased 45% to 75,000 casualties last year alone. That’s more than the number of people who died in traffic accidents. Add it up, and life expectancy for U.S. citizens actually fell last year.”

The bottom line: Kids who have experienced trauma will continue to be in our classrooms for a long time to come. These kids may grow up to be well-adjusted adults capable of managing the impacts of trauma—because they have adequate family support systems, healthy community-based relationships, and enough emotional and financial resources to get back on their feet—or not. Some may become high-functioning, productive, valued members of society—that is for sure. Others may live on the brink of the abyss, never certain what the next day may bring.

And what happens when you add climate refugees to our long list of traumatic woes? Alaska, Texas, Florida, North Carolina, California—all know the trauma that inevitably accompanies natural disasters. Never forget Paradise, California. Home to 26,000 people, Paradise burned to the ground in less time than most people spend on air travel. How can resilience thrive when your whole town is erased?

All of this indicates that our educational systems need to address the impacts of trauma, and in order to do this we need to 1) acknowledge the widespread impacts of trauma and 2) help educators build emotional safety, relationships and self-regulatory skills in the classroom.

Volumes of learning science research show that the emotional regulatory capacity of the learner may be one of the greatest predictors of success—and one of the key tools we have to improve equity in our educational systems. So why do so few educational systems focus on building these self-regulatory skills?

Simple answer: we put too much of the burden on teachers. Even if we want to classify teachers as saints, there are practical pedagogical limits to what they can do in the classroom. I hate to beat the same drum here, but instead of asking educators to do more in the classroom, we need to build better systems that will help educators help students.

These systems should require only quick micro-training, or no training at all. The technology must be interactive and user-friendly. Think iPhone. Think television. Plug, then play. Gaming environments, mobile devices, and iPads can all deliver what we know teachers need. Teachers should benefit from these tools as much as students.

Like this one:

This is an example of a practical “help educators, help students” tool that provides a research-based, trauma-informed toolkit—only you would never know it, because it does something useful! It provides the framework that all kids need to build planning and self-regulation skills, without calling attention to the other benefits embedded in its research-based instructional design. Love Sown to Grow! Hurray!

Or this one:

Love the awareness and inclusion built into this tool! Instead of handing a child a one-way ticket to the principal’s office, KidConnect helps connect the behavior to the emotion behind it, allowing the teacher to better intervene and the child to start self-regulating for learning. A win-win!

This one’s a bit pricey and extravagant for young kids, but older students and teachers would like this real-time self-regulation gadget:

Heart Math

Did you know the heart sends more messages to the brain than the other way around? Hearts are the first responders when it comes to emotions, apparently—so happy hearts lead the way to learning. Heart rate is a key indicator of dysregulation, according to researchers, so planning heart-happy activities in the first few minutes of class (instead of a quiz!) can get the brain ready for learning. What makes your heart happy? Movement, music, deep breathing, visualizations, visual imagery, massage, stretching, positive relationships, healthy conversations, animals—rabbits, turtles, fish, dogs, cats, horses (!)—all make our hearts happy. Once our hearts feel calm and regulated, we are ready to learn.

Ultimately, only a systems approach can address the widespread impacts of trauma in education, so leaving economics out of the equation simply won’t work. Financial stability equals freedom, and all the emotional skills in the world will only support half of a career in a global, knowledge-based economy. So apprenticeships are a vital part of the trauma-informed equation. Kids need to see a path upward toward freedom, and that motivation will drive change. Which is why programs like those featured in Doc Maker’s

Job Centered Learning:

are so important. Learning science confirms motivation and resilience are linked to learning. The power of knowing that your path forward promises financial stability cannot be overlooked.

Wrong Do It Again!

So proud of my collaboration with the amazingly smart Team One in the MIST Program @CSUMB. We had less than a week to read multiple theoretical research articles and formulate an opinion on behaviorist theory in education. We hammered out this collaboratively written op-ed in less than three days despite hectic schedules. We require that you listen to The Wall while reading, however.

By Karin Pederson, Sondre Hammer Fossness, Shwetha Prahlad, Russell Fleming and Stacey Knapp

“Wrong, Do it again!”
“If you don’t eat yer meat, you can’t have any pudding. How can you
have any pudding if you don’t eat yer meat?”
“You! Yes, you behind the bikesheds, stand still laddy!”
Lyrics from The Wall, Pink Floyd (1979)

Pink Floyd’s lyrics memorialized a behavioristic educational perspective in this description of an English boarding school: “Wrong, Do it Again! If you don’t eat your meat, you can’t have any pudding.” Early educational interpretations of the behaviorist model went as far as to include physical punishment as a behavioral deterrent—whether “stand still laddy” in Floyd’s lyrics implies a schoolyard paddling or not is unclear, but what is clear now is that, in its most extreme form—physical punishment—the behaviorist model no longer has a place in American educational systems. Despite this controversial past, instructional designers should take a second look at this historical framework in order to understand the powerful impacts and implications of the “conditioned response,” a central tenet of behaviorism. Without this understanding, educational technology products and instructional design could inadvertently have a deleterious effect on learning.

While the 21st-century educational landscape has erased physically “unpleasant” consequences, the behaviorist model is alive and well, as demonstrated by the rampancy of meritocracy throughout our educational landscape. Skinner’s (1938) premise that an individual makes an association between a particular behavior and its consequence, and that behaviors followed by unpleasant consequences are not likely to be repeated (Thorndike, 1898), continues to hold value today. So if we leave a trail of positive feedback, the learner will follow the path paved by rewards (and badges!) and avoid the pathways that lead to failure.

Not so fast! Designers should consider a few key behavioristic concepts, especially when creating merit-based learning environments:

Rewarding Laziness
The quiz is nothing new to classrooms, but the instant-feedback customizations possible in online educational environments require a deeper consideration of behaviorism than the pen-and-paper quizzes of yesterday. Learners answer wrong, and then what happens? For example, many online quizzing systems give students a chance to correct their own answer immediately after their first response. In practice, this means that it is fairly easy to click through a test and achieve high scores. From a behavioristic perspective, the student will experience positive reinforcement in the form of a good grade regardless of their preparation. (Why read, if I get an A without reading?) As a result, the punishing consequence (the bad grade) is weakened and the potential for positive reinforcement of under-preparation is strengthened.

Skinner introduced the principle of “operant conditioning,” which built on Thorndike’s “Law of Effect”: a behavior followed by pleasing consequences is likely to be repeated. So what if incorrect behavior leads to pleasing consequences? When all punishing consequences are removed, it can be argued that, from a behavioristic perspective, the design encourages a “try and fail” method rather than rereading the material to make sure the submitted answer is correct. In other words, students get pleasant consequences from lazy behavior.
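One way to address this “rewarding laziness” problem is a retry-aware scoring policy: feedback stays immediate (a behaviorist strength), but each additional attempt shrinks the reward, so clicking through wrong answers is no longer fully reinforced. The sketch below is illustrative Python; the function name and the 50% penalty are my own assumptions, not features of any real quizzing platform.

```python
# Hypothetical retry-aware scoring policy for one online quiz item.
# The 50% retry penalty is an illustrative choice, not a recommendation
# drawn from the behaviorist literature.

def score_item(correct_on_attempt: int, max_points: float = 1.0,
               retry_penalty: float = 0.5) -> float:
    """Award full credit for a first-try answer; each retry halves the reward.

    correct_on_attempt: the attempt number (starting at 1) on which the
    learner finally submitted the correct answer.
    """
    if correct_on_attempt < 1:
        raise ValueError("attempt numbers start at 1")
    return max_points * (retry_penalty ** (correct_on_attempt - 1))
```

Under this policy a first-try answer earns full credit and a second-try answer half credit, preserving the chance to self-correct while keeping a mild aversive consequence attached to guessing.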

Not a One-Size-Fits-All Solution
Behaviorism in practice will ultimately be influenced by learners’ intrinsic motivation, and identifying positive rewards in context-specific learning scenarios can be challenging. (While first-grade students may find a trip to the candy jar or the reading bean bag motivating, what motivates a high school student?) Therefore, the learning outcome achieved may vary greatly from student to student depending on intrinsic motivation. Such an uncertain variable makes behaviorism fall short as an all-encompassing tool for learning, either in the classroom or online.

Another important consideration for instructional designers is whether receiving extrinsic rewards or punishments might become the rule of life for students. Researchers point out that students may come to require validation for every task, or expect positive reinforcement even for minor tasks that do not come with a reward. In this situation, a student might stop caring or feel unmotivated to finish homework if he or she does not get a reward.

Undesirable Rewards
Morrison (2007) explains that an individual may not be particularly interested in certain kinds of positive reinforcement. If “candies” are used as rewards for every correct response, and the student is not “particularly interested in candies” (p. 211), they may not be the best motivation for the student to strive for (and, ideally, obtain) correct answers. The author further argues that unless the student “could be given the choice between a number of different reinforcements so that they could choose one that was desirable for them,” using particular positive reinforcements might not produce the intended result (Morrison, 2007, p. 211).

Pink Floyd’s famous refrain “We don’t need no education” was an unintended consequence of behaviorism in Britain’s 20th-century educational system, and the album reached number 1 on the U.S. Billboard chart in 1980, eventually becoming one of the top five best-selling albums of all time. Instructional designers need to take a second look at behaviorist theory and consider unintended consequences when designing merit-based systems, or risk becoming, as Floyd’s lyrics warn, “just another brick in the wall.”

Pink Floyd. (1979). The Wall [Album]. Los Angeles: EMI.

Reiser, R. A., & Dempsey, J. V. (2017). Trends and Issues in Instructional Design and Technology (4th ed.). New York: Pearson.
Thorndike, E. L. (1898). Animal intelligence: An experimental study of the associative processes in animals. Psychological Monographs: General and Applied, 2(4), i-109.

Morrison, A. (2007). The relative effects of positive reinforcement, response-cost, and a combination procedure on task performance with variable task difficulty. The Huron University College Journal of Learning and Motivation, 45(1), Article 12.

Full Stack Apprenticeship Programs

So long ago, Bloom’s (1984) research demonstrated the efficacy of tutoring + mastery instructional design. At the time, the only way to connect students with tutors was in face-to-face sessions—possible at some small schools, but too expensive for large public schools. By the 90s, tutoring was mostly outsourced to parents—wealthy parents—and the rest of us just did our best. Technologically “smart” tools can fill some of this void, but are educators willing to use them? Teachers know better than most that we cannot offload “personalized instruction” to machines, because the human-to-human relationship is at the heart of teaching and learning, and most educators view claims to the contrary as dubious.

One way to integrate tutoring (and these vital human-to-human relationships) is to launch more peer-to-peer mentoring and tutoring opportunities on college campuses. A recent article by Ryan Craig explains how the full-stack model connects employers with human capital on college campuses. But why don’t colleges follow this model to develop mentoring and tutoring jobs that fit inside the existing college infrastructure? Craig’s investment company supports start-ups that couple entry-level staffing needs with broader talent acquisition systems for employers, but post-secondary institutions can also adopt this “full stack” model by investing in programs designed to address high-need “gap” skills through campus-based employment opportunities, using existing financial-aid-based Work Study programs as a basis for creating skills training, paid jobs, and certifications. These platforms can also help showcase student talent developed inside the institution to outside employers. Perhaps even the federal government could chip in for the development of these training modules.

Large public institutions provide especially fertile terrain for this full-stack apprenticeship system, as these institutions routinely vet hundreds of entry-level job applicants every year. Because of this constant supply and demand, many higher education institutions in California have recently invested in staffing software platforms, like Handshake, that provide enough technological enhancement to ensure a steady supply of talent for developing a workforce inside the university.

By semi-automating job training for internal postings and adding a certification incentive, the university can streamline its own human capital resources and build a student population better equipped to tackle the realities of a 21st-century workforce. To do this, universities should partner with regional employers and create certification tracks that provide training and allow students to showcase their work history, certifications, and accomplishments—with the goal of raising future employers’ confidence in recent graduates, and students’ confidence in post-graduate full-time employment prospects.

Programs that increase diversity, decrease equity barriers, and provide much-needed support—through classroom-based mentoring, tutoring, and lab assistant positions, technical support call centers, and teaching assistant positions—could be up and running and scaled without much structural change. In addition to a regular paycheck, hires benefit by building connections on campus through networking with hiring cohorts, as well as by directly supporting peers in a variety of educational settings. One metric to watch in assessing a pilot would be whether this collection of skills training, financial incentive, and relationship building improves success rates for first- and second-year students as they matriculate through GE courses—a sinkhole from which many never emerge and in which others wallow for years. Another must-watch metric: time to full workforce employment post-graduation. This full-stack view could prove to be a win-win for students and for universities struggling to improve four-year graduation rates. If students can easily see the financial incentive that full-time careers provide after graduation—and employers can observe a proven track record of student success in a set of high-demand “gap” skills—everyone wins.

Creating staffing and talent pools in-house (on campus), in programs specifically designed to provide students with paying jobs while they learn essential high-need “gap” skills such as communication and technology, may be the missing ingredient in existing “student success” campaigns. By adding a certificate of completion aligned with employer-identified “gap” skills—such as basic web and software programming, corporate technical skills (Excel, SaaS), and essential “soft skills” like communication and conflict resolution—universities go the extra mile to set their graduates up for immediate success in today’s competitive workforce.

MORE HERE: Click to see an overview of LinkedIn’s Skills Gap

How do we address “skills gap” from inside the classroom?

These skills can also be embedded in teaching curriculum.

Flipping the curriculum (by assigning instructional modules outside of the classroom) and asking students to apply their learning inside the classroom may bring us closer to the high outcomes associated with Bloom’s one-to-one tutoring model—while addressing employers’ complaints about high-need skills gaps in technology and communication.

One simple solution is to have students solve problems in teams during class time. Not only does team problem solving leverage Vygotsky’s Zone of Proximal Development during collaboration, but adding reporting and a whole-class Q&A feedback loop ensures broad peer-to-peer learning. Peers working toward a clear objective during class closely simulates a classic tutoring model without the associated expenses of hiring and training qualified tutors. However, this model of personalized learning puts faculty in the position of facilitator rather than leader, and this sidelined position doesn’t come naturally in the world of higher education.

A natural fit for this instructional design can be found within the existing infrastructure of the lab. Since this place has already been carved out of the bedrock that is higher education curriculum in most STEM courses, the question becomes: how can we transform the classic lab to better emulate the tutoring-plus-mastery model described in Bloom’s research?

“Flipping” the instructional model gets us pretty darn close, as the “lecture” becomes a self-paced tool outside the class, freeing up the instructor to support hands-on learning inside the lab. There, the instructor provides on-demand, one-on-one “tutoring” support as problems crop up in the problem solving. Add a few challenging student-centered projects to the mix and you’ve got a pretty inexpensive, built-in tutoring-plus-mastery model.

Assessment Blues

I am a member of the higher education assessment police force. Only I don’t have a badge, uniform or authority. My official title is Assessment Facilitator. But my unofficial title is pain in the ass. We are a small but mighty force of faculty who care about measuring learning outcomes, or making sure students learn what they are supposed to learn in a particular course, program or degree. Our force is only mighty in our sense of responsibility, our duty to make sure the institution of higher education remains a place where learning is distributed equitably. With access and success in mind, we peer at learning data and extrapolate meaningful conclusions for the purpose of improving educational systems’ impact on learning.

If this job sounds noble—you have never worked in higher education.

A friend recently sent me a link to this NYT opinion piece complaining about the trend of requiring faculty to engage in ‘official’ learning assessment. I read the first few lines of the article, The Misguided Drive to Measure ‘Learning Outcomes,’ and then replied: I don’t need to read this. I live this.

The opinion championed in the op-ed might as well be made into a theme song, perhaps set to the worn-out rhythm of Für Elise. Familiar, pervasive, unifying in its commonality of tone and pitch—the author, Molly Worthen, echoes the banter muttered in faculty lounges all across America.

In fact, I heard the abridged version of this article last week in an Assessment Committee meeting. “Nobody wants to be here,” a tenured faculty member declared as the meeting adjourned. “This is the worst assignment you can get.”

Another agreed, citing the commonly shared sentiment: “Everyone hates assessment.”

Universally deplored and yet—according to Worthen’s excellent linked source, a report from the National Institute for Learning Outcomes Assessment (NILOA), beyond a few rogue outliers—universally required. The erection of these “compliance”-based—Sputnik-inspired?—systems began in the 1960s with the birth of accreditation systems. Accreditors like WASC first required faculty to develop measurable learning outcomes for all of their courses and programs; these metrics also had to align with broad university learning goals and therefore required a tremendous amount of work to construct. By dangling the threat of accreditation, these agencies mostly succeeded in pushing administrators to push college faculty to tackle this gargantuan feat throughout the ’80s. And, of course, once that work was done—then the work of measuring began. If one course has 10 learning outcomes and 500 students, then, well, that means a whole lot of measuring. But measuring is no longer adequate—now it’s: How does this data drive change? What are you going to do with the data? And on…and on…
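The scale of that measuring is easy to sketch. Here is a back-of-the-envelope tally, using the hypothetical figures from the paragraph above (10 outcomes, 500 students) plus an assumed department size that is mine, not the source’s:

```python
# Back-of-the-envelope tally of assessment data points for one course.
# The per-course figures are the hypothetical ones from the text:
# 10 learning outcomes, 500 students, one data point per student per outcome.
outcomes_per_course = 10
students = 500

data_points = outcomes_per_course * students
print(f"{data_points} measurements for a single course")  # 5000

# Scale to a department teaching, say, 40 such courses a year (an assumed figure).
courses = 40
print(f"{data_points * courses} measurements per year")  # 200000
```

Even with generous simplifications, one mid-sized department is looking at hundreds of thousands of data points a year before any analysis begins.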

So a tremendous amount of faculty labor drove this resentment-fueled assessment beast onto university campuses in the first place, and that’s exactly how faculty continue to view assessment today: a beastly chore that they resent being forced to do.

One of the commenters on Worthen’s article writes: Wait. I’m confused, what exactly is assessment?

Isn’t that what grades do?

It’s my job to answer that reasonable question. And the answer is: nope.

Grades can’t be used for assessment—according to WASC. Outcomes are measurable, but grades do not necessarily equate to outcomes and therefore grades are not valuable data (according to accreditors). This argument causes that internalized resentment beast to paw at the ground in rage.

Faculty: So let me get this straight—what you are saying is that what I was hired to do—teach and build assignment-based assessments, then review, evaluate and score these assignments—is suddenly not good enough? You want me to measure some arbitrary thing—that you developed and deemed an appropriate thing that I should be assessing and then you want me to work even harder to build some sort of device that accurately measures that thing that you are saying is what my class should teach and then you want me to again collect student work to measure this thing, even though I already collected, assessed and GRADED—you want me to do that again? Is that right?

Assessment Facilitator: Yes.

Faculty never verbalize this to me directly, but the sideways glance says it all: Screw you.

Worthen does an excellent job of putting this sentiment into Beethoven-like cadence, with just enough academic bravado to inspire everyone around the committee table to chant: hear! hear!

And yet the assessment beast continues to rampage across campuses, requiring meetings, data collection and analysis, quantitative and qualitative measurements, metrics, baselines, critical thinking, reasoning, curiosity, creativity and all of the other ‘ings’ and ‘itys’ that higher education so frequently touts that it cultivates—only now assessment demands these traits from faculty. And the old saying proves true: teachers make the worst students.

To make matters worse, a large portion of this process is mind-boggling, requiring a high degree of awareness of learning science, cognition and reflection—work that faculty have not been trained to undertake. Assessment plans resemble Gantt charts. Reports read like technical documents—each highly specialized and unique. In fact, assessment often takes faculty so far outside their area of expertise that many, rightly, complain: I am not a bean counter! I am a professor of … and that’s what I am focusing on in my classroom, and if you wanted me to ‘bean-count’ for you, then we should have talked about bean-counting during the exhaustive interview process that I just survived. No one mentioned bean counting on the hiring committee. This feels like the ultimate bait and switch—hire me as an expert in my field and then lock me in a room counting beans.

Screw this.

And so we weary Assessment Facilitators soldier on. All the while the NILOA report portrays a picturesque view of assessment and recommends MORE work from faculty (albeit cleverly disguised as support): “Professional development could be more meaningfully integrated with assessment efforts, supporting faculty use of results, technology implementation, and integration of efforts across an institution.” This is like saying let’s make the beast dance too.

Oh yes, I think the beast will be happier if we just give it some dancing shoes and make it dance.

Can I just tell you, from the front lines, to all of you who think this is possible: if you have a beast, you can’t make it dance, no matter what tune you play or what shoe you put out there. The beast ain’t gonna dance. The idea of enforcing professional development “with assessment efforts” overlooks the fact that higher education faculty don’t want to be developed in this way—in fact, they want to enact a full-on retreat from this beast.

So what’s the solution? There is no easy answer. I do know this: increasing faculty workload (AKA “professional development”) related to assessment—the act that created the beast in the first place—is not the answer. I also know that if administrators care about accreditation (which obviously they should), then administrators need to take on more of the burden of assessment by designing and implementing meaningful, authentic assessment systems that better support faculty. “Compliance”-based assessment systems need to go the way of Sputnik.

Strebel’s (1996) article on personal compacts does a good job of explaining one of the key elements of “change management” that could have an impact here. Aligning assessment goals at the time of hiring, building personal and professional obligations into the contract itself, would be a great place to start, before new faculty encounter the influence of the “laggards” who remain the dominant cultural force on our campus with regard to assessment. If the personal compacts of these new hires were clearly articulated by upper management, their efforts in this arena championed, and Human Performance Technology (HPT) supported the effort with targeted training (such as: how do I create the data visualizations required by these assessment reports?), then we could begin the much-needed forward momentum. Follow this up with excellent communication support systems, like Slack, that help build a community of assessment and showcase the efforts of these new faculty in this area—and we are well on our way to a much-needed change, change that is possible, at a deliberative pace, even in large government-run institutions.
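On the targeted-training point, that kind of support can be concrete and small. Here is a minimal sketch of the sort of chart an assessment report might call for, using matplotlib; the outcome names, attainment percentages and the 75% benchmark are all invented for illustration, not drawn from any real report:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Hypothetical percent of students meeting each learning outcome.
outcomes = ["SLO 1", "SLO 2", "SLO 3", "SLO 4", "SLO 5"]
attainment = [82, 74, 91, 66, 78]  # invented figures for illustration
target = 75                        # assumed program benchmark

fig, ax = plt.subplots(figsize=(6, 4))
ax.bar(outcomes, attainment, color="steelblue")
ax.axhline(target, color="firebrick", linestyle="--", label=f"Target ({target}%)")
ax.set_ylabel("Students meeting outcome (%)")
ax.set_title("Outcome attainment (hypothetical course)")
ax.legend()
fig.tight_layout()
fig.savefig("outcome_attainment.png")
```

A dozen lines like these, walked through in a short workshop, would do more for a new hire’s first assessment report than another policy memo.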

Take it from the beat cop without a badge, uniform or authority—we’ve got to do more change-management in higher education and better support those early adopters or we will be singing the assessment blues for a long time to come.