Blog

Becoming Trauma-Informed

What a journey! After 16 weeks of reading and research, I have learned so much about Trauma-Informed educational models and Social Emotional Learning this semester! And yet if I were to estimate how much I have read compared to the amount of information out there—it would probably be about .001% of the total body of knowledge created by researchers and clinicians over the last couple of decades. The word burgeoning comes to mind when considering all the learning science that has grown out of…well, technology, really. Because researchers before technology (BT) couldn’t measure the brain, map the brain, or observe the inner workings of the brain in action.

Despite all this empirical energy, enthusiasm, expertise, and technology, the revolution needed to transform educational models into trauma-informed models has yet to occur.

Why is this the case, you ask?

It certainly isn’t that the need for these systems has decreased; there are no signs of traumatic events in American culture decreasing anytime soon. Perhaps it’s that people have reached a higher plane of enlightenment through advanced practice of social emotional skills, and so they are now better able to cope with trauma and get on with their lives, freed from the bursts of emotion-based behavior that disrupt not only their own lives but the lives of everyone around them, to the point of becoming toxic to self and others. Naaa. We’re stuck with that for a while, I’m afraid.

The bottom line: Kids who have experienced trauma will continue to be in our classrooms for a long time to come. In fact, we can now add a growing number of climate refugees to our long list of woes. Never forget Paradise, California. Home to 27,000 people, Paradise burned to the ground in less time than most people spend on domestic air travel.

So if volumes of learning science research show that the emotional regulatory capacity of the learner may be one of the greatest predictors of success—and also one of the key tools that we have to improve equity in our educational systems—then why do we see so few educational systems actually applying these models effectively?

Simple answer: the solutions proposed by these models put too much of the burden on teachers. Even if we want to classify teachers as saints, there are practical pedagogical limits to what they can do in the classroom. I hate to beat the same drum here, but instead of asking educators to do more, we need to build better tools: help educators, help students.

These systems should require only quick micro-training or no training at all. Think iPhone. Think television. Plug, then play. Gaming environments, mobile devices, and iPads can all deliver what we know teachers need. Not as a lesson that teachers need to learn—I love how everyone’s an expert on what teachers need to learn—not! We don’t need more training, we need more tools.

Like this one:

https://www.sowntogrow.com/

This is an example of a practical “help educators, help students” tool that provides a research-based trauma-informed toolkit—only you would never know it. Because it does something useful! It provides the framework that all kids need to build planning and self-regulation skills without calling attention to the other benefits embedded in its research-based instructional design. Love Sown to Grow! Hurray!

Wrong, Do It Again!

So proud of my collaboration with amazingly smart Team One in the MIST Program @CSUMB. We had less than a week to read multiple theoretical research articles and formulate an opinion on Behaviorism theory in Education. We hammered out this collaboratively written op-ed in less than three days despite hectic schedules. We require that you listen to The Wall while reading, however.

By Karin Pederson, Sondre Hammer Fossness, Shwetha Prahlad, Russell Fleming and Stacey Knapp

“Wrong, Do it again!”
“If you don’t eat yer meat, you can’t have any pudding. How can you
have any pudding if you don’t eat yer meat?”
“You! Yes, you behind the bikesheds, stand still laddy!”
Lyrics from The Wall, Pink Floyd (1979)

Pink Floyd’s lyrics memorialized a behavioristic educational perspective in this description of an English boarding school: “Wrong, Do it Again! If you don’t eat your meat, you can’t have any pudding.” Early educational interpretations of the behaviorist model went as far as to include physical punishment as a behavioral deterrent—whether “stand still laddy” in Floyd’s lyrics implies a schoolyard paddling or not is unclear, but what is clear now is that, in its most extreme form—physical punishment—the behaviorist model no longer has a place in American educational systems. Despite this controversial past, instructional designers should take a second look at this historical framework in order to understand the powerful impacts and implications of the “conditioned response,” a central tenet of behaviorism. Without this understanding, educational technology products and instructional design could inadvertently deliver a deleterious effect on learning.

While the 21st-century educational landscape has erased physically “unpleasant” consequences, the behaviorist model is alive and well, as demonstrated by the rampant meritocracy throughout our educational landscape. Skinner’s (1938) premise that an individual makes an association between a particular behavior and a consequence, and that behaviors followed by unpleasant consequences are not likely to be repeated (Thorndike, 1898), continues to hold value today. So if we leave a trail of positive feedback, then the learner will follow the path paved by rewards (and badges!) and avoid the pathways that lead to failure.

Not so fast! Designers should consider a few key behavioristic concepts, especially when creating merit-based learning environments:

Rewarding Laziness
The quiz is nothing new to classrooms, but the instant-feedback customizations possible in online educational environments require a deeper consideration of behaviorism than the pen-and-paper quizzes of yesterday. A learner answers wrong, and then what happens? For example, many online quizzing systems give students a chance to correct their own answer immediately after their first response. In practice, this means that it is fairly easy to click through a test and achieve high scores. From a behavioristic perspective, the student will experience positive reinforcement in the form of a good grade regardless of their preparation. (Why read, if I get an A without reading?) As a result, the deterrent effect of a bad grade is weakened and the potential for positive reinforcement of under-preparation is strengthened.

Skinner introduced the principle of “operant conditioning,” which built on Thorndike’s “Law of Effect”: a behavior followed by pleasing consequences is likely to be repeated. So what if incorrect behavior leads to pleasing consequences? By removing all negative consequences, it can be argued that, from a behavioristic perspective, the design encourages a try-and-fail method instead of rereading the material to make sure the submitted answer is correct. In other words, students get pleasant consequences from lazy behavior.
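To make the reinforcement problem concrete, here is a minimal sketch (in Python) contrasting two scoring policies for an instant-feedback quiz. The function names and the 10% penalty are illustrative assumptions, not the behavior of any particular quizzing platform:

```python
# A minimal sketch of two quiz-scoring policies, illustrating the reinforcement
# problem described above. The names and the 10% penalty are illustrative
# assumptions, not the behavior of any real quizzing product.

def score_unlimited_retries(attempts_until_correct: int) -> float:
    """Full credit no matter how many guesses it took.
    Guessing is positively reinforced: the grade is identical
    whether the student prepared or simply clicked through."""
    return 1.0

def score_with_attempt_penalty(attempts_until_correct: int, penalty: float = 0.1) -> float:
    """Deduct a small amount per extra attempt, so under-preparation
    still carries a (mild) negative consequence."""
    return max(0.0, 1.0 - penalty * (attempts_until_correct - 1))

if __name__ == "__main__":
    for attempts in (1, 2, 5):
        print(attempts,
              score_unlimited_retries(attempts),
              score_with_attempt_penalty(attempts))
```

Under the first policy the reward is identical whether the student read or guessed; the second keeps instant feedback but preserves some consequence for clicking through.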

Not a One-Size-Fits-All Solution
Behaviorism in practice will ultimately be influenced by learners’ intrinsic motivation, and identifying positive rewards in context-specific learning scenarios can be challenging. (While first-grade students may find a trip to the candy jar or the reading bean bag motivating, what motivates a high school student?) Therefore, the learning outcome achieved may vary greatly from student to student depending on intrinsic motivation. Such an uncertain variable makes behaviorism fall short as an all-encompassing tool for learning, whether in the classroom or online.

Another important consideration for instructional designers is whether receiving extrinsic rewards or punishments might become the rule of life for students. Researchers point out that students may come to require validation for every task, or expect positive reinforcement even for minor tasks that do not come with a reward. In this situation, a student might stop caring or feel unmotivated to finish the homework if they do not get a reward.

Undesirable Rewards
Morrison (2007) explains that an individual may not be particularly interested in certain kinds of positive reinforcement. If “candies” are used as rewards for every correct response but the student is not “particularly interested in candies” (p. 211), the reward may not be the best motivation for students to strive for (and, ideally, obtain) correct answers. The author further argues that unless the student “could be given the choice between a number of different reinforcements so that they could choose one that was desirable for them,” using particular positive reinforcements might not produce the intended result (Morrison, 2007, p. 211).

Pink Floyd’s famous refrain “We don’t need no education” was an unintended consequence of behaviorism in Britain’s 20th-century educational system, and the album reached number one on the U.S. Billboard chart in 1980, eventually becoming one of the best-selling albums of all time. Instructional designers need to take a second look at behaviorist theory and consider unintended consequences when designing merit-based systems, or risk becoming, as Floyd’s lyrics warn, “just another brick in the wall.”

References
Pink Floyd. (1979). The Wall [Album]. EMI.

Reiser, R. A., & Dempsey, J. V. (2017). Trends and issues in instructional design and technology (4th ed.). New York: Pearson.
Thorndike, E. L. (1898). Animal intelligence: An experimental study of the associative processes in animals. Psychological Monographs: General and Applied, 2(4), i-109.

Morrison, A. (2007). The relative effects of positive reinforcement, response-cost, and a combination procedure on task performance with variable task difficulty. The Huron University College Journal of Learning and Motivation, 45(1), Article 12.

Full Stack Apprenticeship Programs

So long ago, Bloom’s (1984) research demonstrated the efficacy of tutoring-plus-mastery instructional design. At the time, the only way to connect students with tutors was in face-to-face sessions—possible at some small schools—but too expensive for large public schools. By the ’90s, tutoring was mostly outsourced to parents (wealthy parents), and the rest of us just did our best. Technologically “smart” tools can fill some of this void, but are educators willing to use them? Teachers know better than most that we cannot offload “personalized instruction” to machines, because it is the human-to-human relationship that is at the heart of teaching and learning, and most educators view claims to the contrary as dubious.

One way to integrate tutoring (and these vital human-to-human relationships) is to launch more peer-to-peer mentoring/tutoring opportunities on college campuses. A recent article by Ryan Craig explains how the full stack model connects employers with human capital on college campuses. But why don’t colleges follow this model to develop mentoring/tutoring jobs that fit inside the existing college infrastructure? Craig’s investment company supports start-ups that couple entry-level staffing needs with broader talent acquisition systems for employers, but post-secondary institutions can also adopt this “full stack” model by investing in programs designed to address high-need “gap” skills through campus-based employment opportunities, using existing financial-aid-based “Work Study” programs as a basis for creating skills training, paid jobs, and certifications. These platforms can also help showcase student talent developed inside the institution to outside employers. Perhaps even the federal government could chip in for the development of these training modules.

Large public institutions provide especially fertile terrain for this full-stack apprenticeship system, as these institutions routinely vet hundreds of entry-level job applicants every year. Because of this constant supply and demand, many higher education institutions in California have recently invested in staffing software platforms, like Handshake, that provide enough technological enhancement to ensure a steady supply of talent for developing a workforce inside the university.

By semi-automating job training for internal postings and adding a certification incentive, the university can streamline its own human capital resources and build a student population better equipped to tackle the realities of a 21st-century workforce. To do this, universities should partner with regional employers and create certification tracks that provide training and allow students to showcase their work history, certifications, and accomplishments, with the goal of raising future employers’ confidence in recent graduates and students’ confidence in post-graduate full-time employment prospects.

Programs that increase diversity, decrease equity barriers, and provide much-needed support through classroom-based mentoring, tutoring and lab assistant positions, technical support call centers, and teaching assistant positions could be up and running and scaled without much structural change. In addition to a regular paycheck, hires benefit by building connections on campus through networking with hiring cohorts as well as by directly supporting peers in a variety of educational settings. One metric to watch in assessing this pilot would be whether this combination of skills training, financial incentive, and relationship building improves success rates for first- and second-year students as they matriculate through GE courses—a sinkhole from which many never emerge and in which others wallow for years. Another must-watch metric: time to full workforce employment after graduation. This full stack view could prove to be a win-win for students and universities struggling to improve four-year graduation rates. If students can easily see the financial incentive that full-time careers provide after graduation—and employers can observe a proven track record of student success in a set of high-demand “gap” skills—everyone wins.

Creating staffing and talent pools in-house (on campus), in programs specifically designed to provide students with paying jobs while they learn essential high-need “gap” skills such as communication and technology, may be the missing ingredient in existing “student success” campaigns. By adding a certificate of completion aligned with employer-identified “gap” skills, such as basic web/software programming, corporate technical skills (Excel, SaaS), and essential “soft skills” such as communication and conflict resolution, universities go the extra mile to set their graduates up for immediate success in today’s competitive workforce.

MORE HERE: Click to see an overview of LinkedIn’s Skills Gap

How do we address the “skills gap” from inside the classroom?

These skills can also be embedded in teaching curriculum.

Flipping the curriculum (by assigning instructional modules outside of the classroom) and asking students to apply their learning inside the classroom may bring us closer to the high outcomes associated with Bloom’s one-to-one tutoring model—while addressing employers’ complaints about high-need skills gaps in technology and communication.

One simple solution is to have students solve problems in teams during class time. Not only does team problem solving leverage Vygotsky’s Zone of Proximal Development during collaboration, but adding a reporting and whole-class Q&A feedback loop ensures broad peer-to-peer learning. Peers working toward a clear objective during class closely simulates a classic tutoring model without the associated expenses of hiring and training qualified tutors. However, this model of personalized learning puts faculty in the position of facilitator rather than leader, and this sidelined position doesn’t come naturally in the world of higher education.

A natural fit for this instructional design can be found within the existing infrastructure of the lab. Since this place has already been carved out of the bedrock that is higher education curriculum in most STEM courses, the question becomes: how can we transform the classic lab to better emulate the tutoring-plus-mastery model described in Bloom’s research?

“Flipping” the instructional model gets us pretty darn close, as the “lecture” becomes a self-paced tool outside the class, freeing up the instructor to support hands-on learning inside the lab. In the lab, the instructor provides on-demand, one-on-one “tutoring” support as problems crop up during problem solving. Add a few challenging student-centered projects to the mix and you’ve got a pretty inexpensive, built-in tutoring-plus-mastery model.

Assessment Blues

I am a member of the higher education assessment police force. Only I don’t have a badge, uniform or authority. My official title is Assessment Facilitator. But my unofficial title is pain in the ass. We are a small but mighty force of faculty who care about measuring learning outcomes, or making sure students learn what they are supposed to learn in a particular course, program or degree. Our force is only mighty in our sense of responsibility, our duty to make sure the institution of higher education remains a place where learning is distributed equitably. With access and success in mind, we peer at learning data and extrapolate meaningful conclusions for the purpose of improving educational systems’ impact on learning.

If this job sounds noble—you have never worked in higher education.

A friend recently sent me a link to this NYT opinion piece complaining about the trend of requiring faculty to engage in ‘official’ learning assessment. I read the first few lines of the article, The Misguided Drive to Measure ‘Learning Outcomes,’ and then replied: I don’t need to read this. I live this.

The opinion championed in the op-ed might as well be made into a theme song, perhaps set to the worn-out rhythm of Für Elise. Familiar, pervasive, unifying in the commonality of tone and pitch—the author, Molly Worthen, utters the familiar banter muttered in faculty lounges all across America.

In fact, I heard the abridged version of this article last week in an Assessment Committee meeting. “Nobody wants to be here,” a tenured faculty member declared as the meeting adjourned. “This is the worst assignment you can get.”

Another agreed, citing the commonly shared sentiment: “Everyone hates assessment.”

Universally deplored and yet—according to Worthen’s excellent linked source, the National Institute for Learning Outcomes Assessment (NILOA) report, beyond a few rogue outliers—universally required. Erecting these “compliance”-based—Sputnik-inspired?—systems began in the 1960s with the birth of national accreditation systems. Accreditation systems like WASC first required faculty to develop measurable learning outcomes for all of their courses and programs; these metrics also had to align with broad university learning goals and therefore required a tremendous amount of work to construct. By dangling the threat of accreditation, these agencies mostly succeeded in pushing administrators to push college faculty to tackle this gargantuan feat throughout the ’80s. And, of course, once that work was done—then the work of measuring began. If one course has 10 learning outcomes and 500 students, then, well, that means a whole lot of measuring. But measuring is no longer adequate—now it’s: how does this data drive change? What are you going to do with the data? And on…and on…

So a tremendous amount of faculty labor drove this resentment-fueled assessment beast onto university campuses in the first place, and that’s exactly how faculty continue to view assessment today: a beastly chore that they resent being forced to do.

One of the commenters on Worthen’s article writes: Wait. I’m confused, what exactly is assessment?

Isn’t that what grades do?

It’s my job to answer that reasonable question. And the answer is: nope.

Grades can’t be used for assessment—according to WASC. Outcomes are measurable, but grades do not necessarily equate to outcomes and therefore grades are not valuable data (according to accreditors). This argument causes that internalized resentment beast to paw at the ground in rage.

Faculty: So let me get this straight—what you are saying is that what I was hired to do—teach and build assignment-based assessments, then review, evaluate and score these assignments—is suddenly not good enough? You want me to measure some arbitrary thing—that you developed and deemed an appropriate thing that I should be assessing and then you want me to work even harder to build some sort of device that accurately measures that thing that you are saying is what my class should teach and then you want me to again collect student work to measure this thing, even though I already collected, assessed and GRADED—you want me to do that again? Is that right?

Assessment Facilitator: Yes.

Faculty never verbalize this to me directly, but the sideways glance says it all: Screw you.

Worthen does an excellent job of putting this sentiment into a Beethoven-like cadence, with just enough academic bravado to inspire everyone around the committee table to chant: hear, hear!

And yet the assessment beast continues to rampage across campuses. It requires meetings, data collection and analysis, quantitative and qualitative measurements, metrics, baselines, critical thinking, reasoning, curiosity, creativity, and all of the other ‘ings’ and ‘itys’ that higher education so frequently touts that it cultivates—only now assessment demands these traits from faculty. And the old saying proves true: teachers make the worst students.

To make matters worse, a large portion of this process is mind-boggling, requiring a high degree of awareness about learning science, cognition, and reflection—concepts that faculty have not been trained to apply. Assessment plans resemble Gantt charts. Reports read like technical documents—each highly specialized and unique. In fact, assessment often takes faculty so far outside their area of expertise that many, rightly, complain: I am not a bean counter! I am a professor of … and that’s what I am focusing on in my classroom, and if you wanted me to ‘bean-count’ for you, then we should have talked about bean-counting during the exhaustive interview process that I just survived. No one mentioned bean counting on the hiring committee. This feels like the ultimate bait and switch—hire me as an expert in my field and then lock me in a room counting beans.

Screw this.

And so we weary Assessment Facilitators soldier on. All the while, the NILOA report portrays a picturesque view of assessment and recommends MORE work from faculty (albeit cleverly disguised as support): “Professional development could be more meaningfully integrated with assessment efforts, supporting faculty use of results, technology implementation, and integration of efforts across an institution.” This is like saying let’s make the beast dance too.

Oh yes, I think the beast will be happier if we just give it some dancing shoes and make it dance.

Can I just tell you, from the front lines, to all of you who think this is possible: If you have a beast, you can’t make it dance, no matter what tune you play or what shoe you put out there. The beast ain’t gonna dance. The idea of enforcing professional development “with assessment efforts” overlooks the fact that higher education faculty don’t want to be developed in this way—in fact, they want to enact a full-on retreat from this beast.

So what’s the solution? There is no easy answer. I do know this: Increasing faculty workload (AKA “professional development”) related to assessment—the act that created the beast in the first place—is not the only answer. I also know that if administrators care about accreditation (which obviously they should), then administrators need to take on more of the burden of assessment by designing and implementing meaningful and authentic assessment systems that better support faculty. Therefore, “compliance”-based assessment systems need to go the way of Sputnik.

Strebel’s (1996) personal compacts article does a good job of explaining one of the key elements of “change management” that could have an impact here. Aligning assessment goals at the time of hiring, in order to build in personal and professional contractual obligations, would be a great place to start, before these new faculty encounter the influence of “laggards,” who remain the dominant cultural influence on our campus with regard to assessment. If the personal compacts of these new hires were clearly articulated by “upper management,” if the efforts of these new hires in this arena were championed, and if Human Performance Technology (HPT) supported this effort with targeted training (such as: how do I create the data visualizations required by these assessment reports?), then we could begin the much-needed forward momentum. Follow this up with excellent communication support systems, like Slack, that help build a community of assessment and showcase the efforts of these new faculty in this area—and we are well on our way to a much-needed change, change that is possible, at a deliberate pace, even in large government-run institutions.

Take it from the beat cop without a badge, uniform or authority—we’ve got to do more change-management in higher education and better support those early adopters or we will be singing the assessment blues for a long time to come.

Inside Out

Flipped instruction is essentially inside-out teaching: reversed instruction. Homework takes place inside class time, and lectures go outside.

Guiding principles: cognitive learning science.

One of the advantages of this model is that problem solving during class time provides instant feedback. Whether they are working on a team or not, allowing students to interact and ask questions as they problem-solve provides multiple support systems. Instead of just the instructor, peers become valuable resources. In my flipped classrooms, even when students are working individually, they always have the option to ask questions of their peers. I am not the only valuable source of information in the class.

However, I do offer my guidance directly during this time by circling the room.

Listening, or looking over students’ shoulders and commenting directly on their work, provides individualized instruction during class time. This direction occurs during production, asking students to apply targeted learning in context, and thus may lead to a higher likelihood of success.

This connection—between teacher and student—in the act of solving a problem is an irreplaceable and valuable resource, perhaps the greatest resource in face-to-face education. Master-apprentice models date back to the ancients, for good reason. They work. This model can also be enacted in fully online courses, but with much more effort and intention in the instructional design of the course, so much so that, in some disciplines, mastering the technical aspects of this modality may exceed justifiable investments of money, time, and/or expertise.

I use Bloom’s Taxonomy as a guide to decide what goes outside and what goes inside the classroom.

The top half of the pyramid goes inside the classroom and the bottom half goes outside.

Step One

Assign Homework

Watch Lecture; Take Notes (Remember, Understand), or Watch Lecture and complete an activity, such as a diagram, quiz, or reflection assignment (Remember, Understand, Apply)

Read Textbook; Complete online “Adaptive” quiz series—Remember, Understand, Apply.

Step Two

Facilitate Problem-Solving Classroom

Design lessons wherein students Analyze, Evaluate and/or Create with the material covered in the homework.

Step Three

Repeat

Step Four

Create Checkpoints (Graded but Low Stakes)

Have students complete a Low-Stakes Assessment. This can be done either outside or inside of class, but must be a chance to practice and receive feedback prior to formal assessment. Homework assignments work great for this, but these formative assessments can also be part of the active learning taking place in the class.

Step Five

Administer Formal Assessment

This assessment should have a rubric that clearly aligns with the learning outcomes associated with the assignment. Unlike the formative assessment, where feedback is crucial, the formal assessment should be light on manual instructor feedback, because the rubric and score should send a clear message to students about where they excelled and where they need to improve. Frequent communication with instructors should still be encouraged. If students have questions about these scores, instructors should review student models with them during class time, via one-on-one video conferences, or in an in-person office-hours setting. This feedback loop is one of the most valuable aspects of instruction and cannot be duplicated by machines, so affording time for this discussion should be a top faculty priority, and flipped environments afford faculty this time.

 

Adaptive Learning Study

Just read an interesting study on Adaptive Learning conducted by SRI Education in higher education. (Yes, I admit it, I am actively avoiding the depressing Twitter-infested news cycle.)

I conducted an informal analysis of student outcomes using adaptive learning during my first year of implementing McGraw Hill’s Connect, and found results similar to SRI’s: no significant difference in learning outcomes, course completion, or course grades between the control group and those using the “machine.” So there is no magic bullet here. In fact, in the case of ASU’s adoption of these tools for a basic skills math course, the adaptive product slightly lowered performance compared to a “blended” course control group. However, after reading this study and reconsidering the results of my own informal study, I do believe that a longitudinal, controlled study with properly calibrated metrics may, over time, show better results for some adaptive technology products.
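For the curious, here is a minimal sketch of what that kind of informal comparison can look like in code; the grade values are placeholders, not my actual class data, and a Welch’s t-test via SciPy is just one reasonable choice for comparing two sections. A real analysis would also need to account for course completion and confounds like prior preparation.

```python
# A minimal sketch of an informal outcomes comparison: final grades for a
# section using adaptive courseware vs. a control section.
# The grade values below are placeholders, not real student data.
from scipy import stats

adaptive = [78, 85, 92, 70, 88, 81, 76, 90, 84, 79]
control  = [80, 83, 89, 72, 86, 77, 75, 91, 82, 80]

# Welch's t-test does not assume equal variances between the two sections.
t_stat, p_value = stats.ttest_ind(adaptive, control, equal_var=False)

print(f"mean (adaptive) = {sum(adaptive) / len(adaptive):.1f}")
print(f"mean (control)  = {sum(control) / len(control):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value here is consistent with "no significant difference."
```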

Which is why I continue to use the adaptive learning software in my classes despite studies like these. Why? There are several reasons for my persistence—and persistence is key here, because there has been struggle all along the way, from technology to sales reps. These products are not plug and play—someday, maybe, but not today.

Reason #1 for Adopting Adaptive Technology

There is nothing worse than asking a classroom full of students a question that was thoroughly covered in the required reading—and seeing that collective, vacant stare. Or worse, seeing the eyes drop from view with the classic “please don’t call on me” panic. For years before I tried adaptive technology, it had become very clear to me that more and more students were getting away with not reading the text, and many were not even buying the text.

The dashboard that tracks students’ progress through the material proved valuable enough to keep me hooked. Working as my “reading police,” this feature allows me to ‘see’ where students are, identify at-risk students, and align classroom plans with progress. According to the SRI study, higher education faculty agreed with me that this feature is highly valuable. In the age of “just Google it,” educators need to hold students accountable for required course reading—this tool puts the advantage back in favor of faculty. As I tell students, you can find any answer you want on the internet—it just might not be the right answer. We carefully select college textbooks for a reason, and holding students accountable for reading them is the number one reason for adopting textbooks with progress-tracking dashboard features.
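To give a sense of what that “reading police” workflow can look like, here is a minimal sketch that flags students whose module completion falls below a threshold. The data shape, student names, and the 60% cutoff are illustrative assumptions, not fields from Connect’s actual dashboard export.

```python
# A minimal sketch of the "reading police" workflow described above: flag
# students whose progress through the assigned modules falls below a threshold.
# The data structure, names, and 60% threshold are illustrative assumptions,
# not the schema of any particular courseware dashboard.

AT_RISK_THRESHOLD = 0.60  # flag anyone below 60% of assigned modules completed

progress = {
    "Student A": 0.95,
    "Student B": 0.40,
    "Student C": 0.72,
}

def flag_at_risk(progress: dict[str, float], threshold: float = AT_RISK_THRESHOLD) -> list[str]:
    """Return students whose completion rate is below the threshold,
    sorted so the least-prepared students surface first."""
    flagged = [name for name, rate in progress.items() if rate < threshold]
    return sorted(flagged, key=lambda name: progress[name])

if __name__ == "__main__":
    print(flag_at_risk(progress))  # -> ['Student B']
```

The point is not the code itself but the habit it supports: checking the list before class and adjusting the day’s plan accordingly.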

Reason #2 for Adopting Adaptive Technology

The study also points out that these technologies were implemented, in several cases, along with a push to reformulate the traditional lecture model into a student-centered pedagogical model. The SRI authors write, “Both bachelor’s-degree-granting institutions were seeking to reduce the amount of lecture and increase active student learning in the classroom, so they used the adaptive courseware to support self-study of basic material usually presented in class lectures.” However, what the SRI researchers found was that, despite the addition of adaptive learning, lecture and presentation time did not decrease significantly.

Decreasing the time spent lecturing on textbook material helped me immensely—and continues to be a motivating factor as I work through the myriad issues involved in implementing this technology. The adaptive learning helps me focus on higher-level concepts and active learning in the classroom and leave the lower-level “information gathering” to the reading police. But if educators don’t make this shift—more active engagement, less lecture—then the adoption of this technology seems pointless. This shift requires extra work, as faculty must take a holistic view of the course and adjust accordingly—and that hurdle probably explains the results of the SRI study. Humans don’t automatically adjust to the presence of machines—it’s a bit like putting together a jigsaw puzzle. Until the big picture becomes clear, connecting all the little pieces takes time.

Reason #3 for Giving Adaptive Technology a Chance

I agree with the study authors’ recommendation for the next wave of research on these products: “The ALMAP evaluation focused on time devoted to lecture and presentations, but future work should examine (1) how adaptive courseware affects the relative balance between low-level and high-level content interactions between instructors and students and (2) how the automated dashboards in adaptive courseware affect instructors’ sensitivity to individual and whole-class learning needs.” Both of these examinations look into important adjustments that faculty make in response to the machine. Again, these adjustments don’t happen automatically; educators make them happen, and that takes time and effort.

Final Musings…

These technologies are disruptive—which I believe is a good thing. Thinking about the impact these technologies have on 21st-century instructional design and pedagogy really is the new mental space we all need to rent. How do I adjust my lecture/teaching/activities—my time—to take advantage of what the machine can do for me? How can the machine help me better serve the needs of ALL students? What value can I place on these tools given my particular challenges in the classroom? How can the machine serve my needs and therefore better serve students’ needs? Entering this new space where we deeply consider emerging technologies can be daunting but also invigorating—familiar territory for 21st-century pedagogical pioneers.


Formulating Teams

Huddled on the floor with scissors, I used to cut student names from the roster and shuffle them around like tea leaves, hoping to see a bright future—productivity, friendship, personal growth and fulfillment! After years of practicing alchemy, I decided to get real, so I turned to math. By creating numeric teams—for example, counting off by fives—students were at least in equitably sized groups, and a reasonably sized team IS easier to manage. What I finally learned about creating teams harkens back to that familiar Hallmark greeting card slogan: When you love something, set it free…

That’s right: chalk one up for democracy when it comes to successful team creation. For major, graded collaborative projects, I have found that student choice—not alchemy, not size, but the simple act of letting students choose—produces the best results. However, as with most choices, wise decisions stem from a clear understanding of what you are getting yourself into, so clear objectives are crucial here. And successful democracies depend on a framework of law and order, so the way these choices are set up in the classroom, with clear parameters and guidelines—sound instructional design—is really the hidden key to success.

Student-Centered Choice

Now, I give students a couple of weeks to come up with a great idea for their final project, and they pitch their idea during a whole-class networking session. Students decide which idea they support and join that team.

Sounds simple, right? It is, but there are a few tips I will share after a couple of years of conducting this process and learning from my mistakes. I know from my own experience of working on teams that even the best need an infrastructure that supports productivity. Supportive infrastructure helps me too, because I want my focus to be on reviewing the actual projects—not on team dynamics, so here again student-centered pedagogy reigns supreme. However, effective student-centered instructional design doesn’t leave students in the dark without any guiding lights. It provides students with explicit instruction on the strategies they need and therefore equips them with essential tools that will help them navigate their way to success.

What is a networking session?

A networking session simply means that students circulate the room, meeting and greeting their peers. Prior to the networking session, students view a short video on “the elevator pitch,” complete some reading designed to build knowledge in the subject, and write out a three- to five-sentence project idea pitch. All of this takes place online before they come to the physical classroom on networking day.

There is a skill to networking that I explain is similar to navigating a family gathering—you don’t want to get cornered talking to Aunt May for too long, but you do want to at least greet Aunt May because you don’t want to be rude, so you need to say just enough and then enact your exit strategy. What is an Exit Strategy? I ask students. They all know the answer in this context: it’s a polite way to move along without offending Aunt May.

I ask students to share suggested Exit Strategies, and I suggest phrases like “Thanks for your time,” “It’s been great talking with you,” or “Sounds like an interesting idea, thanks for sharing.” I explain to students that a networking session means you circulate the room; it is customary to move along, and therefore awkward when people don’t move along—so don’t be an Aunt May and corner people, keep moving. The goal is to meet and greet EVERY student and hear every idea because this is a big deal—”remember, you are choosing your partners for the next 10-12 weeks.” That’s about all the direction these 20-year-olds need to hear in order to start networking, and the majority seem to really enjoy the opportunity to chat with everyone in the class—even the introverts do a good job of at least faking enjoyment—mission accomplished!

How are teams created?

After shaking hands and pitching to every student in the class, students write down their top three choices—by name and idea—on a sheet of paper.

While they watch a 20-minute writing strategy video, I sort students by first choice. A handful may get their second choice. Then I call each group into an area of the room for their first stand-up meeting. These meetings precede the field trip to the library, so students can brainstorm possible areas for their research using the campus databases.
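For the curious, the sorting step amounts to something like the following sketch: place each student on the highest-ranked pitch that still has room. The team-size cap and the sample names are illustrative assumptions, not a formal matching algorithm.

```python
# A minimal sketch of the sorting step described above: assign each student to
# their first-choice pitch, falling back to a lower choice when a team fills up.
# The cap of five per team and the sample data are illustrative assumptions.

from collections import defaultdict

MAX_TEAM_SIZE = 5  # assumed cap; adjust to class size

# Each student lists their top three pitch choices, in order.
preferences = {
    "Ana": ["Campus Composting", "Bike Share", "Food Pantry"],
    "Ben": ["Bike Share", "Campus Composting", "Food Pantry"],
    "Caro": ["Campus Composting", "Food Pantry", "Bike Share"],
}

def sort_into_teams(preferences: dict[str, list[str]],
                    max_size: int = MAX_TEAM_SIZE) -> dict[str, list[str]]:
    """Place each student on the highest-ranked team that still has room."""
    teams: dict[str, list[str]] = defaultdict(list)
    for student, choices in preferences.items():
        for choice in choices:
            if len(teams[choice]) < max_size:
                teams[choice].append(student)
                break
    return dict(teams)

if __name__ == "__main__":
    print(sort_into_teams(preferences))
```

In practice I do this by hand with the slips of paper, but the logic is the same: first choice when possible, second choice when a team is full.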

Can students switch teams?

I allow students to change teams until the Planning Memorandum is due, but after that, I explain, they are locked in by contract.

How is mutual accountability built into the team structure?

Not only do students utilize a table to identify who is doing what in the planning memorandum, but this document also asks teams to create and list their own deadlines for all of the required components of the final project.

In the case of the research-based proposal, these assignments include:

1) Collaborative Annotated Bibliography (two sources per person minimum), 2) Individual Draft Deadline, 3) Collaborative Draft Peer Review, 4) Individual Presentation Script (for the final presentation), and 5) Final Proposal Draft Deadline.

How are students held accountable for team deadlines?

Just after the draft deadlines, I ask team members to individually submit a progress report on how well they met their draft and peer review deadlines (self-reflection), but other than that I do not “police” students’ deadlines except when I score their Planning Memo. I wait to grade this document until after the draft deadlines pass. If the students have met their deadlines, they earn more points. I make a comment on each Planning Memo either commending the team or noting that points were lost because drafts were not submitted by the team’s deadline.

How do the team jobs reinforce productivity?

Since I added the team jobs video that clarifies the role facilitators play in getting drafts collected and in communicating to scribes when deadline dates need to be updated—in other words, if facilitators know teammates are going to miss a deadline, they need to advise the scribe to reset it to accommodate teammates’ needs—all teams have done a much better job of actually paying attention to their Planning Memorandum, which prevents students from waiting until the last moment to try to complete their proposal section. Hence the dramatic decrease I have observed in the number of panicked emails I receive the week of the final deadline.

Procrastination and Busy Schedules Exist

Even though students chose to work on the idea, selected the team, and may even be highly interested in succeeding on the project, the disease of procrastination is nowhere near cured. Many have a long history of waiting until the last minute—but creating and enforcing draft deadlines, clear communication, and knowing that the team is counting on each individual in order to succeed go a long way toward breaking this cycle. In this case, peer pressure is a good thing.
