Blog

Making Change

In honor of the New Year, I hereby declare my new 2019 mantra: Change for good. 

After all, there are many changes that are not so good. Take aging, for example. One can succumb, and simply accept the inevitability of aging as though life were nothing more than a half-eaten loaf of bread forgotten at the bottom of the basket, destined to become moldy and undesirable. Or aging can bring a resurrection of the self, equipped with a new perspective, an all-seeing view like the one you can only get from the top of the mountain: far-reaching, expansive.

So which do you choose? Moldy? Or open to the 360-degree panorama of possibility presented by the first day of the rest of your life?

Change is really two parts then: 1) accepting change into your heart and 2) consciously deciding to change for good.

This two-step solution gets us only so far when we are talking about changing systems, however. We all operate in a system, and that system can be a formidable foe to our change-for-good mantra. We may decide to embrace change, but the residents of the system in which we reside may resist change, fear change, fight change.

So where does that built-in blockade of resistance leave a change-maker?

In a battleground.

This is where the classic New Year’s resolution falls short—because change requires three steps, not just the two listed above.

Step three: Change-makers need to equip themselves with a solid plan. Because change is not just about a conscious decision, but rather about a series of new actions. Things that are new are harder; the learning curve is steep. And by the way, that view from the top, the expansive one, goes away as you drop down into the long slog through the valley of real change. Times can seem rough, and doubt will fall like long shadows darkening the way.

Any change seeker needs a map out of that gloom—a reminder of the route. For without a serious plan, the change-maker risks becoming lost along the way and that, my friend, is the death of change-for-good.

So make sure you don’t get lost—develop your battle plan—the how-to change manual, a guide from point A (I am here) to point B (I will be there). Spend some time on that mental work, until you can clearly see your way through the valley and back up to the mountaintop. Best of luck to all my fellow change-for-gooders in 2019!

Here are my favorite systems-approach, change-maker graphics:

[Graphics: Human Performance Technology model; Design/Develop Process]

Chart Source: Reiser and Dempsey (2017), Trends and Issues in Instructional Design and Technology

Change requires more than just management, however. Change often involves deep learning of new systems and processes. So how do we design online environments that teach complex skills? ADDIE and Gagné are the big go-tos for instructional design, but a lesser-known model for developing learning systems was developed by Kirschner and van Merriënboer (2007). This model is designed to fit inside the A and D segments of the ADDIE model. Great for complex task transfer, the authors propose a holistic view of the system, then a deep dive into “the analysis of a to-be-trained complex skill or professional competency in an integrated process of task and content analysis and the conversion of the results of this analysis into a training blueprint that is ready for development and implementation” (p. 252).

 

Link to article: https://pdfs.semanticscholar.org/8972/359c6b192ab479e81416cd725918babf4df4.pdf

To aid in this process, the authors developed Four Components and 10 Steps, as shown in the table taken from p. 246 of the article. This blueprint improves on former theories, according to the authors, because of its focus on successful transfer of skillsets. Kirschner and van Merriënboer (2007) explain, “Instructional design (ID) theory needs to support the design and development of programs that will help students acquire and transfer professional competencies or complex cognitive skills to an increasingly varied set of real-world contexts and settings.” Their goal in proposing this “blueprint,” then, was not to develop a complete training system, but rather to zero in on the skillsets required for complex tasks, skillsets that must be transferred to the learner so that he or she can participate effectively in the larger system.

Blueprint Components of 4C-ID, with the corresponding 10 Steps to Complex Learning:

Learning Tasks: 1. Design Learning Task; 2. Sequence Task Classes; 3. Set Performance Objectives

Supportive Information: 4. Design Supportive Information; 5. Analyze Cognitive Strategies; 6. Analyze Mental Models

Procedural Information: 7. Design Procedural Information; 8. Analyze Cognitive Rules; 9. Analyze Prerequisite Knowledge

Part-Task Practice: 10. Design Part-Task Practice
The authors explain:
“There are many examples of theoretical design models that have been developed to promote complex learning: cognitive apprenticeship (Collins, Brown, & Newman, 1989), 4-Mat (McCarthy, 1996), instructional episodes (Andre, 1997), collaborative problem solving (Nelson, 1999), constructivism and constructivist learning environments (Jonassen, 1999), learning by doing (Schank, Berman, & MacPherson, 1999), multiple approaches to understanding (Gardner, 1999), star legacy (Schwartz, Lin, Brophy, & Bransford, 1999), as well as the subject of this contribution, the Four-Component Instructional Design model (van Merriënboer, 1997; van Merriënboer, Clark, & de Croock, 2002). These approaches all focus on authentic learning tasks as the driving force for teaching and learning because such tasks are instrumental in helping learners to integrate knowledge, skills, and attitudes (often referred to as competences), stimulate the coordination of skills constituent to solving problems or carrying out tasks, and facilitate the transfer of what has been learned to new and often unique tasks and problem situations (Merrill, 2002b; van Merriënboer, 2007; van Merriënboer & Kirschner, 2001).”

Becoming Trauma-Informed

What a journey! After 16 weeks of reading and research, I have learned so much about Trauma-Informed educational models and Social Emotional Learning this semester! And yet if I were to estimate how much I have read compared to the amount of information out there, it would probably be about 0.001% of the total body of knowledge created by researchers and clinicians over the last couple of decades. The word burgeoning comes to mind when considering all the brain and learning science that has grown out of…well, technology, really. Because researchers before technology (BT) couldn’t measure the brain, map the brain or observe the inner workings of the brain in action.

Despite all this empirical energy, enthusiasm, expertise, and technology, however, the revolution needed to transform educational models into trauma-informed (AKA Enlightened) models has yet to occur.

Why is this the case, you ask?

It certainly isn’t that the need for these systems has decreased; there are no signs of traumatic events in American culture decreasing anytime soon. Perhaps it’s that people have reached a higher plane of enlightenment through advanced practice of social emotional skills on their own, and so they are now better able to cope with trauma and get on with their lives, freed from the bursts of emotion-based behavior that disrupt not only their own lives but the lives of everyone around them, to the point of becoming toxic to self and others?

Currently trending Twitter feeds prove otherwise.

And so do the facts. According to Michael Moe et al. (2018), in the A 2 Apple post “Beary Merry Christmas”: “In the United States, suicide rates are up 30% over the past twenty years. Opioid deaths increased 45% to 75,000 casualties last year alone. That’s more than the number of people who died in traffic accidents. Add it up, and life expectancy for U.S. citizens actually fell last year.”

The bottom line: Kids who have experienced trauma will continue to be in our classrooms for a long time to come. These kids may grow up to be well-adjusted adults capable of managing the impacts of trauma because they have adequate family support systems, healthy community-based relationships and enough emotional and financial resources to get back on their feet—or not. Some may get to the point of being high-functioning, productive, valued members of society—that is for sure. Others may live on the brink of the abyss, never certain what the next day may bring.

And what happens when you add climate refugees to our long list of traumatic woes? Alaska, Texas, Florida, North Carolina, California—all know the trauma that inevitably accompanies natural disasters. Never forget Paradise, California. Home to 26,000 people, Paradise burned to the ground in less time than most people spend on air travel. How can resilience thrive when your whole town is erased?

All of this indicates that our educational systems need to address the impacts of trauma, and in order to do this we need to 1) acknowledge the widespread impacts of trauma and 2) help educators build emotional safety, relationships and self-regulatory skills in the classroom.

Volumes of learning science research show that the emotional regulatory capacity of the learner may be one of the greatest predictors of success—and also one of the key tools we have to improve equity in our educational systems. So why do so few educational systems focus on building these self-regulatory skills?

Simple answer: we put too much of the burden on teachers. Even if we want to classify teachers as saints, there are practical pedagogical limits to what they can do in the classroom. I hate to beat the same drum here, but instead of asking educators to do more in the classroom, we need to build better systems: help educators, help students.

These systems should require only quick micro-training or no training at all. The technology must be interactive and user-friendly. Think iPhone. Think television. Plug, then play. Gaming environments, mobile devices and iPads can all deliver what we know teachers need. Teachers should benefit from these tools as much as students.

Like this one:

https://www.sowntogrow.com/

This is an example of a practical “help educators, help students” tool that provides a research-based, trauma-informed toolkit—only you would never know it, because it does something useful! It provides the framework that all kids need to build planning and self-regulation skills without calling attention to the other benefits embedded in its research-based instructional design. Love Sown to Grow! Hurray!

Or this one:

https://www.teachemotionalregulation.com/kidconnect

Love the awareness and inclusion built into this tool! Instead of issuing a one-way ticket to the principal’s office, KidConnect helps connect the behavior to the emotion behind it, allowing the teacher to better intervene and the child to start self-regulating for learning. A win-win!

This one’s a bit pricey and extravagant for young kids, but older students and teachers would like this real-time self-regulation gadget:

Heart Math

Did you know the heart sends more messages to the brain than the other way around? Hearts are the first responders when it comes to emotions, apparently—so happy hearts lead the way to learning. Heart rate is a key indicator of dysregulation, according to researchers, so planning heart-happy activities for the first few minutes of class (instead of a quiz!) can get the brain ready for learning. What makes your heart happy? Movement, music, deep breathing, visualizations, visual imagery, massage, stretching, positive relationships, healthy conversations, animals—rabbits, turtles, fish, dogs, cats, horses (!)—all make our hearts happy. Once our hearts feel calm and regulated, then we are ready to learn.

Ultimately, only a systems approach can address the widespread impacts of trauma in education, so leaving economics out of the equation simply won’t work. Financial stability equals freedom, and all the emotional skills in the world will only support half of a career in a global, knowledge-based economy. Apprenticeships are therefore a vital part of the trauma-informed equation. Kids need to see a path upward toward freedom, and that motivation will drive change. That is why programs like those featured in Doc Maker’s

Job Centered Learning: http://docmakeronline.com/job_centered_learning.html

are so important. Learning science confirms motivation and resilience are linked to learning. The power of knowing that your path forward promises financial stability cannot be overlooked.

Wrong, Do It Again!

So proud of my collaboration with the amazingly smart Team One in the MIST Program @CSUMB. We had less than a week to read multiple theoretical research articles and formulate an opinion on behaviorist theory in education. We hammered out this collaboratively written op-ed in less than three days despite hectic schedules. We require that you listen to The Wall while reading, however.

By Karin Pederson, Sondre Hammer Fossness, Shwetha Prahlad, Russell Fleming and Stacey Knapp

“Wrong, Do it again!”
“If you don’t eat yer meat, you can’t have any pudding. How can you
have any pudding if you don’t eat yer meat?”
“You! Yes, you behind the bikesheds, stand still laddy!”
Lyrics from The Wall, Pink Floyd (1979)

Pink Floyd’s lyrics memorialized a behavioristic educational perspective in this description of an English boarding school: “Wrong, Do it Again! If you don’t eat your meat, you can’t have any pudding.” Early educational interpretations of the behaviorist model went as far as to include physical punishment as a behavioral deterrent. Whether “stand still laddy” in Floyd’s lyrics implies a schoolyard paddling or not is unclear, but what is clear now is that, in its most extreme form—physical punishment—the behaviorist model no longer has a place in American educational systems. Despite this controversial past, instructional designers should take a second look at this historical framework in order to understand the powerful impacts and implications of the “conditioned response,” a central tenet of behaviorism. Without this understanding, educational technology products and instructional design could inadvertently deliver a deleterious effect on learning.

While the 21st-century educational landscape has erased physically “unpleasant” consequences, the behaviorist model is alive and well, as demonstrated by the rampant meritocracy throughout our educational landscape. Skinner’s (1938) premise that an individual makes an association between a particular behavior and a consequence, and that behaviors followed by unpleasant consequences are not likely to be repeated (Thorndike, 1898), continues to hold value today. So if we leave a trail of positive feedback, the learner will follow the path paved by rewards (and badges!) and avoid the pathways that lead to failure.

Not so fast! Designers should consider a few key behavioristic concepts, especially when creating merit-based learning environments:

Rewarding Laziness
The quiz is nothing new to classrooms, but the instant-feedback customizations possible in online educational environments require a deeper consideration of behaviorism than the pen-and-paper quizzes of yesterday. Learners answer wrong, and then what happens? For example, many online quizzing systems give students a chance to correct their own answer immediately after their first response. In practice, this means that it is fairly easy to click through a test and achieve a high score. From a behavioristic perspective, the student experiences positive reinforcement in the form of a good grade regardless of their preparation. (Why read, if I get an A without reading?) As a result, the deterrent effect of a bad grade is weakened and the potential for positive reinforcement of under-preparation is strengthened.
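To make the flaw concrete, here is a minimal sketch (hypothetical code, not modeled on any particular quiz platform) of a multiple-choice item that allows unlimited immediate retries. Both the prepared and the unprepared learner end up recording a correct answer; only the attempt count differs, and most gradebooks never see it.

```python
import random

def answer_until_correct(options, correct, knows_material):
    """One multiple-choice item that allows unlimited immediate retries."""
    attempts = 0
    remaining = list(options)
    while True:
        attempts += 1
        # A prepared learner picks the right answer; an unprepared one guesses.
        guess = correct if knows_material else random.choice(remaining)
        if guess == correct:
            return attempts
        remaining.remove(guess)  # eliminate the wrong choice and try again

# With retries allowed, both learners eventually record a correct response,
# so the grade no longer distinguishes preparation from guessing.
prepared = answer_until_correct(["a", "b", "c", "d"], "c", knows_material=True)
unprepared = answer_until_correct(["a", "b", "c", "d"], "c", knows_material=False)
print(f"prepared: {prepared} attempt(s); unprepared: {unprepared} attempt(s); both get full credit")
```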

Skinner introduced the principle of “operant conditioning,” which was based on Thorndike’s “Law of Effect”: a behavior followed by pleasing consequences is likely to be repeated. So what if incorrect behavior leads to pleasing consequences? By removing all unpleasant consequences, it can be argued that, from a behavioristic perspective, the design encourages the try-and-fail method instead of making sure the submitted answer is correct by reading over the material once more. In other words, students get pleasant consequences from lazy behavior.

Not a One-Size-Fits-All Solution
Behaviorism in practice will ultimately be influenced by learners’ intrinsic motivation, and identifying positive rewards in context-specific learning scenarios can be challenging. (While first-grade students may find a trip to the candy jar or the reading beanbag motivating, what motivates a high school student?) The learning outcomes achieved may therefore vary greatly from student to student depending on intrinsic motivation. Such an uncertain variable makes behaviorism fall short as an all-encompassing tool for learning, whether in the classroom or online.

Another important consideration for instructional designers is whether receiving extrinsic rewards or punishments might become the rule of life for students. Researchers point out that students may come to require validation for every task, or expect positive reinforcement even for minor tasks that do not come with a reward. In this situation, a student might stop caring or feel unmotivated to finish the homework if he or she does not get a reward.

Undesirable Rewards
Morrison (2007) explains that an individual may not be particularly interested in certain kinds of positive reinforcement. If “candies” are used as rewards for every correct response but the student is not “particularly interested in candies” (p. 211), the reward offers little motivation to strive for (and, ideally, obtain) correct answers. The author further argues that unless students “could be given the choice between a number of different reinforcements so that they could choose one that was desirable for them,” using a particular positive reinforcement might not produce the intended result (Morrison, 2007, p. 211).

Pink Floyd’s famous refrain “We don’t need no education” was an unintended consequence of behaviorism in Britain’s 20th-century educational system; the album reached number one on the U.S. Billboard chart in 1980, eventually becoming one of the top five best-selling albums of all time. Instructional designers need to take a second look at behaviorist theory and consider unintended consequences when designing merit-based systems, or risk becoming, as Floyd’s lyrics warn, “just another brick in the wall.”

References
Pink Floyd. (1979). The Wall [Album]. EMI.

Reiser, R. A., & Dempsey, J. V. (2017). Trends and Issues in Instructional Design and Technology (4th ed.). New York: Pearson.
Thorndike, E. L. (1898). Animal intelligence: An experimental study of the associative processes in animals. Psychological Monographs: General and Applied, 2(4), i-109.

Morrison, A. (2007). The Relative Effects of Positive Reinforcement, Response-Cost, and a Combination Procedure on Task Performance with Variable Task Difficulty. The Huron University College Journal of Learning and Motivation, 45(1), Article 12.

Full Stack Apprenticeship Programs

So long ago, Bloom’s (1984) research demonstrated the efficacy of the tutoring + mastery instructional design. At the time, the only way to connect students with tutors was in face-to-face sessions—possible at some small schools, but too expensive for large public schools. By the 90s, tutoring was mostly outsourced to parents (wealthy parents), and the rest of us just did our best. Technologically “smart” tools can fill some of this void, but are educators willing to use them? Teachers know better than most that we cannot offload “personalized instruction” to machines, because the human-to-human relationship is at the heart of teaching and learning, and most educators view claims to the contrary as dubious.

One way to integrate tutoring (and these vital human-to-human relationships) is to launch more peer-to-peer mentoring/tutoring opportunities on college campuses. A recent article by Ryan Craig explains how the full stack model connects employers with human capital on college campuses. But why don’t colleges follow this model to develop mentoring/tutoring jobs that fit inside the existing college infrastructure? Craig’s investment company supports start-ups that couple entry-level staffing needs with broader talent acquisition systems for employers, but post-secondary institutions can also adopt this “full stack” model by investing in programs designed to address high-need “gap” skills through campus-based employment opportunities, using existing financial-aid-based “Work Study” programs as a basis for creating skills training, paid jobs and certifications. These platforms can also help showcase student talent developed inside the institution to outside employers. Perhaps even the federal government could chip in for the development of these training modules.

Large public institutions provide especially fertile terrain for this full-stack apprenticeship system, as these institutions routinely vet hundreds of entry-level job applicants every year. Because of this constant supply and demand, many higher education institutions in California have recently invested in staffing software platforms, like Handshake, that provide enough technological enhancement to ensure a steady supply of talent for developing a workforce inside the university.

By semi-automating job training for internal postings and adding a certification incentive, the university can streamline its own human capital resources and build a student population better equipped to tackle the realities of a 21st-century workforce. To do this, universities should partner with regional employers and create certification tracks that provide training and allow students to showcase their work history, certifications and accomplishments, with the goal of raising future employers’ confidence in recent graduates and students’ confidence in post-graduate full-time employment prospects.

Programs that increase diversity, decrease equity barriers and provide much-needed support—classroom-based mentoring, tutoring and lab assistant positions, technical support call centers, and teaching assistant positions—could be up and running, and scaled, without much structural change. In addition to a regular paycheck, hires benefit by building connections on campus through networking with hiring cohorts as well as by directly supporting peers in a variety of educational settings. One metric to watch in assessing such a pilot would be whether this combination of skills training, financial incentive and relationship building improves success rates for first- and second-year students as they matriculate through GE courses—a sinkhole from which many never emerge and in which others wallow for years. Another must-watch metric: time to full workforce employment post-graduation. This full stack view could prove to be a win-win for students and for universities struggling to improve four-year graduation rates. If students can easily see the financial incentive that full-time careers provide after graduation—and employers can observe a proven track record of student success in a set of high-demand “gap” skills—everyone wins.

Creating staffing and talent pools in-house (on campus), in programs specifically designed to provide students with paying jobs while they learn essential high-need “gap” skills such as communication and technology, may be the missing ingredient in existing “student success” campaigns. By adding a certificate of completion aligned with employer-identified “gap” skills—such as basic web and software programming, corporate technical skills (Excel, SaaS) and essential “soft skills” like communication and conflict resolution—universities go the extra mile to set their graduates up for immediate success in today’s competitive workforce.

MORE HERE: Click to see an overview of LinkedIn’s Skills Gap

How do we address the “skills gap” from inside the classroom?

These skills can also be embedded in teaching curriculum.

Flipping the curriculum (by assigning instructional modules outside of the classroom) and asking students to apply their learning inside the classroom may bring us closer to the high outcomes associated with Bloom’s one-to-one tutoring model—while addressing employers’ complaints about high-need skills gaps in technology and communication.

One simple solution is to have students solve problems in teams during class time. Not only does team problem solving leverage Vygotsky’s Zone of Proximal Development during collaboration, but adding a reporting and whole-class Q&A feedback loop ensures broad peer-to-peer learning. Peers working toward a clear objective during class closely simulate a classic tutoring model without the associated expenses of hiring and training qualified tutors. However, this model of personalized learning puts faculty in the position of facilitator rather than leader, and this sidelined position doesn’t come naturally in the world of higher education.

A natural fit for this instructional design can be found within the existing infrastructure of the lab. Since this space has already been carved out of the bedrock of higher education curriculum in most STEM courses, the question becomes: how can we transform the classic lab to better emulate the tutoring + mastery model described in Bloom’s research?

“Flipping” the instructional model gets us pretty darn close, as the “lecture” becomes a self-paced tool outside the class, freeing up the instructor to support hands-on learning inside the lab. In the lab, the instructor provides on-demand, one-on-one “tutoring” support as problems crop up in the problem solving. Add a few challenging student-centered projects to the mix and you’ve got a pretty inexpensive, built-in tutoring + mastery model.

Assessment Blues

I am a member of the higher education assessment police force. Only I don’t have a badge, uniform or authority. My official title is Assessment Facilitator. But my unofficial title is pain in the ass. We are a small but mighty force of faculty who care about measuring learning outcomes, or making sure students learn what they are supposed to learn in a particular course, program or degree. Our force is only mighty in our sense of responsibility, our duty to make sure the institution of higher education remains a place where learning is distributed equitably. With access and success in mind, we peer at learning data and extrapolate meaningful conclusions for the purpose of improving educational systems’ impact on learning.

If this job sounds noble—you have never worked in higher education.

A friend recently sent me a link to this NYT opinion piece complaining about the trend of requiring faculty to engage in ‘official’ learning assessment. I read the first few lines of the article, The Misguided Drive to Measure ‘Learning Outcomes,’ and then replied: I don’t need to read this. I live this.

The opinion championed in the op-ed might as well be made into a theme song, perhaps set to the worn-out rhythm of Für Elise. Familiar, pervasive, unifying in its commonality of tone and pitch—the author, Molly Worthen, utters the familiar banter muttered in faculty lounges all across America.

In fact, I heard the abridged version of this article last week in an Assessment Committee meeting. “Nobody wants to be here,” a tenured faculty member declared as the meeting adjourned. “This is the worst assignment you can get.”

Another agreed, citing the commonly shared sentiment: “Everyone hates assessment.”

Universally deplored and yet—according to Worthen’s excellent linked source, the National Institute for Learning Outcomes Assessment (NILOA) report—universally required, beyond a few rogue outliers. These “compliance”-based (Sputnik-inspired?) systems began to rise in the 1960s with the birth of national accreditation systems. Accreditation agencies like WASC first required faculty to develop measurable learning outcomes for all of their courses and programs; these metrics also had to align with broad university learning goals, and therefore required a tremendous amount of work to construct. By dangling the threat of accreditation, these agencies mostly succeeded in pushing administrators to push college faculty to tackle this gargantuan feat throughout the 80s. And, of course, once that work was done, the work of measuring began. If one course has 10 learning outcomes and 500 students, that means a whole lot of measuring: 5,000 data points for a single course. But measuring is no longer adequate—now it’s how does this data drive change? What are you going to do with the data? And on…and on…

So a tremendous amount of faculty labor drove this resentment-fueled assessment beast onto university campuses in the first place, and that’s exactly how faculty continue to view assessment today: a beastly chore that they resent being forced to do.

One of the commenters to Worthen’s article writes: Wait. I’m confused, what exactly is assessment?

Isn’t that what grades do?

It’s my job to answer that reasonable question. And the answer is: nope.

Grades can’t be used for assessment—according to WASC. Outcomes are measurable, but grades do not necessarily equate to outcomes and therefore grades are not valuable data (according to accreditors). This argument causes that internalized resentment beast to paw at the ground in rage.

Faculty: So let me get this straight—what you are saying is that what I was hired to do—teach and build assignment-based assessments, then review, evaluate and score these assignments—is suddenly not good enough? You want me to measure some arbitrary thing—that you developed and deemed an appropriate thing that I should be assessing and then you want me to work even harder to build some sort of device that accurately measures that thing that you are saying is what my class should teach and then you want me to again collect student work to measure this thing, even though I already collected, assessed and GRADED—you want me to do that again? Is that right?

Assessment Facilitator: Yes.

Faculty never verbalize this to me directly, but the sideways glance says it all: Screw you.

Worthen does an excellent job of putting this sentiment into a Beethoven-like cadence, with just enough academic bravado to inspire everyone around the committee table to chant: hear! hear!

And yet the assessment beast continues to rampage across campuses, requiring meetings, data collection and analysis, quantitative and qualitative measurements, metrics, baselines, critical thinking, reasoning, curiosity, creativity and all of the other ‘ings’ and ‘itys’ that higher education so frequently touts as things it cultivates—only here, assessment demands these traits from faculty. And the old saying proves true: teachers make the worst students.

To make matters worse, a large portion of this process is mind-boggling, requiring a high degree of awareness about learning science, cognition and reflection—work that faculty have not been trained to undertake. Assessment plans resemble Gantt charts. Reports read like technical documents, each highly specialized and unique. In fact, assessment often takes faculty so far outside their areas of expertise that many, rightly, complain: I am not a bean counter! I am a professor of … and that’s what I am focusing on in my classroom, and if you wanted me to ‘bean-count’ for you, then we should have talked about bean counting during the exhaustive interview process that I just survived. No one mentioned bean counting on the hiring committee. This feels like the ultimate bait and switch: hire me as an expert in my field and then lock me in a room counting beans.

Screw this.

And so we weary Assessment Facilitators soldier on. All the while, the NILOA report portrays a picturesque view of assessment and recommends MORE work from faculty (albeit cleverly disguised as support): “Professional development could be more meaningfully integrated with assessment efforts, supporting faculty use of results, technology implementation, and integration of efforts across an institution.” This is like saying let’s make the beast dance, too.

Oh yes, I think the beast will be happier if we just give it some dancing shoes and make it dance.

Can I just tell you, from the front lines, to all of you who think this is possible: if you have a beast, you can’t make it dance, no matter what tune you play or what shoes you put out there. The beast ain’t gonna dance. The idea of enforcing professional development “with assessment efforts” overlooks the fact that higher education faculty don’t want to be developed in this way—in fact, they want to beat a full-on retreat from this beast.

So what’s the solution? There is no easy answer. I do know this: Increasing faculty workload (AKA “professional development”) related to assessment—the act that created the beast in the first place—is not the only answer. I also know that if Administrators care about accreditation (which obviously they should), then Administrators need to take on more of the burden of assessment by designing and implementing meaningful and authentic assessment systems that better support faculty. Therefore, “compliance” based assessment systems need to go the way of Sputnik.

Strebel’s (1996) personal compacts article does a good job of explaining one of the key elements of “change management” that could have an impact here. Aligning assessment goals at the time of hiring, building personal and professional contractual obligations into the contract stage, would be a great place to start—before new faculty encounter the influence of the “laggards” who remain the dominant cultural influence on our campus with regard to assessment. If the personal compacts of these new hires were clearly articulated by upper management, if their efforts in this arena were championed, and if Human Performance Technology (HPT) supported the effort with targeted training (such as: how do I create the data visualizations required by these assessment reports?), then we could begin the much-needed forward momentum. Follow this up with excellent communication support systems, like Slack, that help build a community of assessment and showcase the efforts of these new faculty, and we are well on our way to much-needed change—change that is possible, at a deliberate pace, even in large government-run institutions.

Take it from the beat cop without a badge, uniform or authority: we’ve got to do more change management in higher education and better support those early adopters, or we will be singing the assessment blues for a long time to come.

Inside Out

Flipped instruction is essentially inside-out teaching, reversed instruction. Homework takes place inside class time, and lectures go outside.

Guiding principles: cognitive learning science.

One of the advantages of this model is that problem-solving during class time provides instant feedback. Whether they are working on a team or not, allowing students to interact and ask questions as they problem-solve provides multiple support systems. Instead of just the instructor, peers become valuable resources. In my flipped classrooms, even when students are working individually, they always have the option to ask their peers questions. I am not the only valuable source of information in the class.

However, I do offer my guidance directly during this time by circling the room.

Listening or looking over students’ shoulders, and commenting directly on their work, provides individualized instruction during class time. This direction occurs during production, asking students to apply targeted learning in context, and thus may lead to a higher likelihood of success.

This connection—between teacher and student, and between student and student—in the act of solving a problem is an irreplaceable and valuable resource, perhaps the greatest resource in face-to-face education. Master-apprentice models date back to the ancients, for good reason: they work. This model can also be enacted in fully online courses, but it requires much more effort and intention in the instructional design of the course—making it quite likely that, in some disciplines, mastering the technical aspects of this modality may exceed justifiable investments of money, time and/or expertise.

I use Bloom’s Taxonomy as a guide to decide what goes outside and what goes inside the classroom.

[Image: Bloom’s Taxonomy pyramid] The top half of the pyramid goes inside and the bottom half goes outside.

Step One

Assign Homework

Watch lecture; take notes (Remember, Understand). Or watch lecture and complete an activity, such as a diagram, quiz or reflection assignment (Remember, Understand, Apply).

Read textbook; complete online “adaptive” quiz series (Remember, Understand, Apply).

Step Two

Facilitate Problem-Solving Classroom

Design lessons wherein students Analyze, Evaluate and/or Create with the material covered in the homework.

Step Three

Repeat

Step Four

Create Checkpoints (Graded but Low-Stakes)

Have students complete a low-stakes assessment. This can be done either outside or inside of class, but it must be a chance to practice and receive feedback prior to the formal assessment. Homework assignments work great for this, but these formative assessments can also be part of the active learning taking place in class.

Step Five

Administer Formal Assessment

This assessment should have a rubric that clearly aligns with the learning outcomes associated with the assignment. Unlike the formative assessment, where feedback is crucial, the formal assessment should be light on manual instructor feedback, because the rubric score should send a clear message to students about where they excelled and where they need to improve. Frequent communication with instructors should still be encouraged: if students have questions about these scores, instructors should review the work with them during class time, via one-on-one video conference, or in an in-person office-hours setting. This feedback loop is one of the most valuable aspects of instruction and cannot be duplicated by machines—so affording time for this discussion should be a top faculty priority, and flipped environments afford faculty this time.
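As an illustration only (a minimal, hypothetical sketch, not a format any rubric tool prescribes), an outcome-aligned rubric can be represented so that each criterion points at a learning outcome, letting the score report itself tell students where they stand on each outcome:

```python
from collections import defaultdict

# Hypothetical rubric: each criterion maps to a learning outcome and a 0-4 score.
rubric = {
    "Thesis clarity": {"outcome": "LO1: Communicate a clear argument", "score": 4},
    "Evidence use": {"outcome": "LO2: Support claims with sources", "score": 2},
    "Organization": {"outcome": "LO1: Communicate a clear argument", "score": 3},
}

# Average the criterion scores per outcome so the numbers, rather than
# manual comments, carry the "where you excelled / where to improve" message.
by_outcome = defaultdict(list)
for criterion, row in rubric.items():
    by_outcome[row["outcome"]].append(row["score"])

for outcome, scores in sorted(by_outcome.items()):
    print(f"{outcome}: {sum(scores) / len(scores):.1f} / 4")
```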

 

Adaptive Learning Study

Just read an interesting SRI Education study on adaptive learning in higher education. (Yes, I admit it: I am actively avoiding the depressing, Twitter-infested news cycle.)

I conducted an informal analysis of student outcomes with adaptive learning during my first year of implementing McGraw-Hill’s Connect, and found results similar to SRI’s: no significant difference in learning outcomes, course completion or course grades between the control group and those using the “machine.” So there is no magic bullet here. In fact, in the case of ASU’s adoption of these tools for a basic-skills math course, the adaptive product slightly lowered performance compared to a “blended” course control group. However, after reading this study and reconsidering the results of my own informal study, I do believe that a longitudinal, controlled study with properly calibrated metrics may, over time, show better results for some adaptive technology products.
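For flavor, an informal comparison like the one I describe above can be as simple as the following sketch (hypothetical data and column names; assumes pandas and SciPy are available), testing whether final grades differ between the adaptive and control sections:

```python
import pandas as pd
from scipy import stats

# Hypothetical gradebook export: one row per student, with section type and final grade.
df = pd.DataFrame({
    "group": ["adaptive"] * 5 + ["control"] * 5,
    "final_grade": [78, 85, 90, 72, 88, 80, 84, 91, 70, 86],
})

adaptive = df.loc[df["group"] == "adaptive", "final_grade"]
control = df.loc[df["group"] == "control", "final_grade"]

# Welch's t-test; a large p-value is consistent with "no significant difference."
t_stat, p_value = stats.ttest_ind(adaptive, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```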

Which is why I continue to use the adaptive learning software in my classes despite studies like these. Why? There are several reasons for my persistence—and persistence is key here, because there has been struggle all along the way, from technology to sales reps. These products are not plug and play—someday, maybe, but not today.

Reason #1 for Adopting Adaptive Technology

There is nothing worse than asking a classroom full of students a question that was thoroughly covered in the required reading—and seeing that collective, vacant stare. Or worse, seeing the eyes drop from view with that classic “please don’t call on me” panic. In the years before I tried adaptive technology, it had become very clear to me that more and more students were getting away with not reading the text, and many were not even buying it.

The dashboard that tracks students’ progress through the material proved valuable enough to keep me hooked. Working as my “reading police,” this feature allows me to ‘see’ where students are, identify at-risk students, and align classroom plans with progress. According to the SRI study, higher education faculty agreed with me that this feature is highly valuable. In the age of “just Google it,” educators need to hold students accountable for required course reading, and this tool puts the advantage back in favor of faculty. As I tell students, you can find any answer you want on the internet; it just might not be the right answer. We carefully select college textbooks for a reason, and holding students accountable for reading them is the number one reason for adopting textbooks with progress-tracking dashboard features.
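As a sketch of what that “reading police” work amounts to (hypothetical export format and threshold; real dashboards such as Connect provide their own reports), flagging at-risk students from a progress export takes only a few lines:

```python
import pandas as pd

# Hypothetical progress export: percent of assigned reading modules each student completed.
progress = pd.DataFrame({
    "student": ["Ana", "Ben", "Chris", "Dee"],
    "modules_completed_pct": [95, 40, 72, 15],
})

# Flag anyone below a chosen threshold as at-risk, to follow up with before class.
AT_RISK_THRESHOLD = 50
at_risk = progress[progress["modules_completed_pct"] < AT_RISK_THRESHOLD]
print(at_risk)  # Ben and Dee would be flagged here
```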

Reason #2 for Adopting Adaptive Technology

The study also points out that these technologies were implemented, in several cases, alongside a push to reformulate the traditional lecture model into a student-centered pedagogical model. The SRI authors write, “Both bachelor’s-degree-granting institutions were seeking to reduce the amount of lecture and increase active student learning in the classroom, so they used the adaptive courseware to support self-study of basic material usually presented in class lectures.” However, what the SRI researchers found was that, despite the addition of adaptive learning, lecture and presentation time did not decrease significantly.

Decreasing time spent lecturing on the textbook material helped me immensely—and continues to be a motivating factor as I work through the myriad issues involved in implementing this technology. Adaptive learning helps me focus on higher-level concepts and active learning in the classroom and leave the lower-level “information gathering” to the reading police. But if educators don’t make this shift—more active engagement, less lecture—then adopting this technology seems pointless. The shift requires extra work, as faculty must take a holistic view of the course and adjust accordingly—and that hurdle probably explains the results of the SRI study. Humans don’t automatically adjust to the presence of machines; it’s a bit like putting together a jigsaw puzzle. Until the big picture becomes clear, connecting all the little pieces takes time.

Reason #3 for Giving Adaptive Technology a Chance

I agree with the study authors’ recommendation for the next wave of research on these products: “The ALMAP evaluation focused on time devoted to lecture and presentations, but future work should examine (1) how adaptive courseware affects the relative balance between low-level and high-level content interactions between instructors and students and (2) how the automated dashboards in adaptive courseware affect instructors’ sensitivity to individual and whole-class learning needs.” Both of these examinations look into important adjustments that faculty make in response to the machine. Again, these adjustments don’t happen automatically; educators make them happen, and that takes time and effort.

Final Musings…

These technologies are disruptive—which I believe is a good thing. Thinking about the impact these technologies have on 21st-century instructional design and pedagogy really is the new mental space we all need to rent. How do I adjust my lecture, teaching, activities and time to take advantage of what the machine can do for me? How can the machine help me better serve the needs of ALL students? What value can I place on these tools given my particular challenges in the classroom? How can the machine serve my needs and therefore better serve students’ needs? Entering this new space where we deeply consider emerging technologies can be daunting but also invigorating—familiar territory for 21st-century pedagogical pioneers.
