Blog

Full Stack Apprenticeship Programs

So long ago, Bloom’s (1984) University of Chicago research demonstrated the efficacy of tutoring-plus-mastery instructional design. At the time, the only way to connect students with tutors was in face-to-face sessions, which was possible at some small schools but too expensive for large public schools. By the 90s tutoring was mostly outsourced to parents (wealthy parents, that is), and the rest of us just did our best. Technologically “smart” tools can fill some of this void, but are educators willing to use them? Teachers know better than most that we cannot offload “personalized instruction” to machines; the human-to-human relationship is at the heart of teaching and learning, and most educators view claims to the contrary as dubious.

One way to integrate tutoring (and these vital human-to-human relationships) is to launch more peer-to-peer mentoring/tutoring opportunities on college campuses. A recent article by Ryan Craig explains how the full stack model connects employers with human capital on college campuses. But why don’t colleges follow this model to develop mentoring/tutoring jobs that fit inside the existing college infrastructure? Craig’s investment company supports start-ups that couple entry-level staffing needs with broader talent acquisition systems for employers. Post-secondary institutions can also adopt this “full stack” model by investing in programs designed to address high-need “gap” skills through campus-based employment opportunities, using existing financial-aid-based “Work Study” programs as a basis for creating skills training, paid jobs and certifications. These platforms can also help showcase student talent developed inside the institution to outside employers. Perhaps even the federal government could chip in for the development of these training modules.

Large public institutions provide especially fertile terrain for this full-stack apprenticeship system, as these institutions routinely vet hundreds of entry-level job applicants every year. Because of this constant supply and demand, many higher education institutions in California have recently invested in staffing software platforms, like Handshake, that provide enough technological enhancements to ensure a steady supply of talent for developing a workforce inside the university.

By semi-automating job training for internal postings and adding a certification incentive, the university can streamline its own human capital resources and build a student population better equipped to tackle the realities of the 21st-century workforce. To do this, universities should partner with regional employers and create certification tracks that provide training and allow students to showcase their work history, certifications and accomplishments, with the goal of raising future employers’ confidence in recent graduates and students’ confidence in post-graduate full-time employment prospects.

Classroom-based mentoring, tutoring and lab assistant positions, technical support call centers, and teaching assistant positions (roles that increase diversity, decrease equity barriers and provide much-needed support) could be up and running and scaled without much structural change. In addition to a regular paycheck, hires benefit by building connections on campus through networking with hiring cohorts as well as by directly supporting peers in a variety of educational settings. One metric to watch in assessing this pilot would be whether this combination of skills training, financial incentive and relationship building improves success rates for first- and second-year students as they matriculate through GE courses, a sinkhole from which many never emerge and in which others wallow for years. Another must-watch metric: time to full workforce employment post-graduation. This full stack view could prove to be a win-win for students and universities struggling to improve four-year graduation rates. If students can easily see the financial incentive that full-time careers provide after graduation, and employers can observe a proven track record of student success in a set of high-demand “gap” skills, everyone wins.

Creating staffing and talent pools in-house (on campus), in programs specifically designed to provide students with paying jobs while they learn essential high-need “gap” skills such as communication and technology, may be the missing ingredient in existing “student success” campaigns. By adding a certificate of completion aligned with employer-identified “gap” skills, such as basic web/software programming, corporate technical skills (Excel, SaaS) and essential “soft skills” such as communication and conflict resolution, universities go the extra mile to set their graduates up for immediate success in today’s competitive workforce.

MORE HERE: Click to see an overview of LinkedIn’s skills gap.

How do we address “skills gap” from inside the classroom?

These skills can also be embedded in teaching curriculum.

Flipping the curriculum (by assigning instructional modules outside of the classroom) and asking students to apply their learning inside the classroom may bring us closer to the high outcomes associated with Bloom’s one-to-one tutoring model—while addressing employers’ complaints about high-need skills gaps in technology and communication.

One simple solution is to have students solve problems in teams during class time. Not only does team problem solving leverage Vygotsky’s Zone of Proximal Development during collaboration, but adding a reporting and whole-class Q&A feedback loop ensures broad peer-to-peer learning. Peers working toward a clear objective during class closely simulates a classic tutoring model without the associated expenses of hiring and training qualified tutors. However, this model of personalized learning puts faculty in the position of facilitator rather than leader, and this sidelined position doesn’t come naturally in the world of higher education.

A natural fit for this instructional design can be found within the existing infrastructure of the lab. Since this place has already been carved out of the bedrock that is higher education curriculum in most STEM courses, the question then becomes how can we transform the classic lab to better emulate the tutoring + mastery model as explained in Bloom’s research?

“Flipping” the instructional model gets us pretty darn close, as the “lecture” becomes a self-paced tool outside the class, freeing up the instructor to support hands-on learning inside the lab. In the lab, the instructor provides the on-demand, one-on-one “tutoring” support as problems crop up during problem solving. Add a few challenging student-centered projects to the mix and you’ve got a pretty inexpensive, built-in tutoring-plus-mastery model.

Assessment Blues

I am a member of the higher education assessment police force. Only I don’t have a badge, uniform or authority. My official title is Assessment Facilitator. But my unofficial title is pain in the ass. We are a small but mighty force of faculty who care about measuring learning outcomes, or making sure students learn what they are supposed to learn in a particular course, program or degree. Our force is only mighty in our sense of responsibility, our duty to make sure the institution of higher education remains a place where learning is distributed equitably. With access and success in mind, we peer at learning data and extrapolate meaningful conclusions for the purpose of improving educational systems’ impact on learning.

If this job sounds noble—you have never worked in higher education.

A friend recently sent me a link to this NYT opinion piece complaining about the trend of requiring faculty to engage in ‘official’ learning assessment. I read the first few lines of the article, The Misguided Drive to Measure ‘Learning Outcomes,’ and then replied: I don’t need to read this. I live this.

The opinion championed in the op-ed might as well be made into a theme song, perhaps set to the worn-out rhythm of Für Elise. Familiar, pervasive, unifying in the commonality of tone and pitch—the author, Molly Worthen, utters the familiar banter muttered in faculty lounges all across America.

In fact, I heard the abridged version of this article last week in an Assessment Committee meeting. “Nobody wants to be here,” a tenured faculty member declared as the meeting adjourned. “This is the worst assignment you can get.”

Another agreed, citing the commonly shared sentiment: “Everyone hates assessment.”

Universally deplored and yet—according to Worthen’s excellent linked source, the National Institute for Learning Outcomes Assessment (NILOA) report, beyond a few rogue outliers—universally required. The erection of these “compliance”-based (Sputnik-inspired?) systems began in the 1960s with the birth of national accreditation systems. Agencies like WASC first required faculty to develop measurable learning outcomes for all of their courses and programs; these metrics also had to align with broad university learning goals and therefore required a tremendous amount of work to construct. By dangling the threat of accreditation, these agencies mostly succeeded in pushing administrators to push college faculty to tackle this gargantuan feat throughout the 80s. And, of course, once that work was done, the work of measuring began. If one course has 10 learning outcomes and 500 students, then, well, that means a whole lot of measuring. But measuring is no longer adequate—now it’s: how does this data drive change? What are you going to do with the data? And on…and on…

So a tremendous amount of faculty labor drove this resentment-fueled assessment beast onto university campuses in the first place, and that’s exactly how faculty continue to view assessment today: a beastly chore that they resent being forced to do.

One of the commenters to Worthen’s article writes: Wait. I’m confused, what exactly is assessment?

Isn’t that what grades do?

It’s my job to answer that reasonable question. And the answer is: nope.

Grades can’t be used for assessment—according to WASC. Outcomes are measurable, but grades do not necessarily equate to outcomes and therefore grades are not valuable data (according to accreditors). This argument causes that internalized resentment beast to paw at the ground in rage.

Faculty: So let me get this straight—what you are saying is that what I was hired to do—teach and build assignment-based assessments, then review, evaluate and score these assignments—is suddenly not good enough? You want me to measure some arbitrary thing—that you developed and deemed an appropriate thing that I should be assessing and then you want me to work even harder to build some sort of device that accurately measures that thing that you are saying is what my class should teach and then you want me to again collect student work to measure this thing, even though I already collected, assessed and GRADED—you want me to do that again? Is that right?

Assessment Facilitator: Yes.

Faculty never verbalize this to me directly, but the sideways glance says it all: Screw you.

Worthen does an excellent job of putting this sentiment into Beethoven-like cadence, with just enough academic bravado to inspire everyone around the committee table to chant: Hear, hear!

And yet the assessment beast continues to rampage across campuses, requiring meetings, data collection and analysis, quantitative and qualitative measurements, metrics, baselines, critical thinking, reasoning, curiosity, creativity and all of the other ‘ings’ and ‘itys’ that higher education so frequently touts that it cultivates—only now assessment demands these traits from faculty. And the old saying proves true: teachers make the worst students.

To make matters worse, a large portion of this process is mind-boggling, requiring a high degree of awareness about learning science, cognition and reflection—tasks that faculty have not been trained to undertake. Assessment plans resemble Gantt charts. Reports read like technical documents—each highly specialized and unique. In fact, assessment often takes faculty so far outside of their area of expertise that many, rightly, complain: I am not a bean counter! I am a professor of … and that’s what I am focusing on in my classroom, and if you wanted me to ‘bean-count’ for you, then we should have talked about bean-counting during the exhaustive interview process that I just survived. No one mentioned bean counting on the hiring committee. This feels like the ultimate bait and switch—hire me as an expert in my field and then lock me in a room counting beans.

Screw this.

And so we weary Assessment Facilitators soldier on. All the while the NILOA report portrays a picturesque view of assessment, and recommends MORE work from faculty (albeit cleverly disguised as support): “Professional development could be more meaningfully integrated with assessment efforts, supporting faculty use of results, technology implementation, and integration of efforts across an institution.” This is like saying let’s make the beast dance too.

Oh yes, I think the beast will be happier if we just give it some dancing shoes and make it dance.

Can I just tell you, from the front lines, to all of you who think this is possible: if you have a beast, you can’t make it dance, no matter what tune you play or what shoes you put out there. The beast ain’t gonna dance. The idea of enforcing professional development “with assessment efforts” overlooks the fact that higher education faculty don’t want to be developed in this way—in fact they want to enact a full-on retreat from this beast.

So what’s the solution? There is no easy answer. I do know this: increasing faculty workload (AKA “professional development”) related to assessment—the act that created the beast in the first place—is not the answer. I also know that if administrators care about accreditation (which obviously they should), then administrators need to take on more of the burden of assessment by designing and implementing meaningful and authentic assessment systems that better support faculty. Therefore, “compliance”-based assessment systems need to go the way of Sputnik.

Take it from the beat cop without a badge, uniform or authority—we’ve got to do more to support faculty, not the other way around.

Inside Out

Flipped instruction is essentially inside-out teaching: reversed instruction. Homework takes place inside class time, and lectures go outside.

Guiding principles: cognitive learning science.

One of the advantages of this model is that problem-solving during class time provides instant feedback. Whether they are working on a team or not, allowing students to interact and ask questions as they problem-solve provides multiple support systems. Instead of just the instructor, peers become valuable resources. In my flipped classrooms, even when students are working individually, they always have the option to ask questions of their peers. I am not the only valuable source of information in the class.

However, I do offer my guidance directly during this time by circling the room.

Listening or looking over students’ shoulders, and commenting directly on their work, provides individualized instruction during class time. This direction occurs during production and therefore asks students to apply targeted learning in context, which may lead to a higher likelihood of success.

This connection—between teacher and student in the act of solving a problem—is an irreplaceable and valuable resource, perhaps the greatest resource in face-to-face education. Master-apprentice models date back to the ancients, for good reason: they work. This model can also be enacted in fully online courses, but with much more effort and intention in the instructional design of the course, making it quite likely that, in some disciplines, mastering the technical aspects of this modality may exceed justifiable limits in investments of money, time and/or expertise.

I use Bloom’s Taxonomy as a guide to decide what goes outside and inside the classroom.

The top half of the pyramid goes inside and the bottom half goes outside.

Step One

Assign Homework

Watch Lecture; Take Notes. (Remember, Understand) or Watch Lecture and complete activity, such as a diagram, quiz or reflection assignment (Remember, Understand, Apply)

Read Textbook; Complete online “Adaptive” quiz series—Remember, Understand, Apply.

Step Two

Facilitate Problem-Solving Classroom

Design lessons wherein students Analyze, Evaluate and/or Create with the material covered in the homework.

Step Three

Repeat

Step Four

Create Checkpoints (Graded but Low-Stakes)

Have students complete a Low-Stakes Assessment. This can be done either outside or inside of class, but must be a chance to practice and receive feedback prior to formal assessment. Homework assignments work great for this, but these formative assessments can also be part of the active learning taking place in the class.

Step Five

Administer Formal Assessment

This assessment should have a rubric that clearly aligns with the learning outcomes associated with the assignment. Unlike the formative assessment, where feedback is crucial, the formal assessment should be light on manual instructor feedback because the rubric/score should send a clear message to students as to where they excelled and where they need to improve. Frequent communication with instructors should be encouraged if students have questions about these scores, but these conversations should not take class time and should be diverted to either a virtual or in-person office hour setting.


Adaptive Learning Study

Just read an interesting study on adaptive learning conducted by SRI Education in higher education. (Yes, I admit it: I am actively avoiding the depressing Twitter-infested news cycle.)

I conducted an informal analysis of student outcomes using adaptive learning during my first year of implementing McGraw Hill’s Connect, and found similar results to SRI’s study: no significant difference in learning outcomes, course completion and course grades between the control group and those using the “machine.” So there is no magic bullet here. In fact, in the case of ASU’s adoption of these tools for a basic-skills math course, the adaptive product slightly lowered performance compared to a “blended” course control group. However, after reading this study and reconsidering the results of my own informal study, I do believe that a longitudinal, controlled study with properly calibrated metrics may, over time, show better results for some adaptive technology products.

This is why I continue to use adaptive learning software in my classes despite studies like these. Why? There are several reasons for my persistence—and persistence is key here, because there has been struggle all along the way, from technology to sales reps. These products are not plug and play. Some day, maybe, but not today.

Reason #1 for Adopting Adaptive Technology

There is nothing worse than asking a classroom full of students a question that was thoroughly covered in the required reading—and seeing that collective, vacant stare. Or worse, seeing the eyes drop from view with the classic “please don’t call on me” panic. For years before trying adaptive technology, it had become very clear to me that more and more students were getting away with not reading the text, and many were not even buying the text.

The dashboard that tracks students’ progress through the material proved to be valuable enough to keep me hooked. Working as my “reading police,” this feature allows me to ‘see’ where students are, identify at-risk students, and align classroom plans with progress. According to the SRI study, higher education faculty agreed with me that this feature is highly valuable. In the age of “just Google it,” educators need to hold students accountable for required course reading—this tool puts the advantage back in favor of faculty. As I tell students, you can find any answer you want on the internet—it just might not be the right answer. We carefully select college textbooks for a reason, and holding students accountable for reading them is the number one reason for adopting textbooks with progress-tracking dashboard features.

Reason #2 for Adopting Adaptive Technology

The study also points out that these technologies were implemented, in several cases, along with the push to reformulate the traditional lecture model into a student-centered pedagogical model. SRI authors write, “Both bachelor’s-degree-granting institutions were seeking to reduce the amount of lecture and increase active student learning in the classroom, so they used the adaptive courseware to support self-study of basic material usually presented in class lectures.” However, what SRI researchers found was that despite the addition of adaptive learning, lecture and presentation times did not decrease significantly.

Decreasing time spent lecturing on the textbook material helped me immensely—and continues to be a motivating factor as I continue to work through myriad issues involved with implementing this technology. The adaptive learning helps me to focus on the higher-level concepts and active learning in the classroom and leave the lower-level “information gathering” to the reading police.  But if educators don’t make this shift—more active engagement, less lecture—then the adoption of this technology seems pointless. This shift requires extra work as faculty must take a holistic view of the course and adjust accordingly—and that hurdle probably explains the results of the SRI study. Humans don’t automatically adjust to the presence of machines—it’s a bit like putting together a jigsaw puzzle. Until the big picture becomes clear, connecting all the little pieces takes time.

Reason #3 for Giving Adaptive Technology a Chance

I agree with the study authors’ recommendation for the next wave of research on these products: “The ALMAP evaluation focused on time devoted to lecture and presentations, but future work should examine (1) how adaptive courseware affects the relative balance between low-level and high-level content interactions between instructors and students and (2) how the automated dashboards in adaptive courseware affect instructors’ sensitivity to individual and whole-class learning needs.” Both of these examinations look into important adjustments that faculty make in response to the machine. Again, these adjustments don’t automatically happen, educators make them happen and this takes time and effort.

Final Musings…

These technologies are disruptive—which I believe is a good thing. Thinking about the impact these technologies have on 21st century instructional design and pedagogy really is the new mental space we all need to rent. How do I adjust my lecture/teaching/activities—time— to take advantage of what the machine can do for me? How can the machine help me better serve the needs of ALL students? What value can I place on these tools given my particular challenges in the classroom? How can the machine serve my needs and therefore better serve students’ needs? Entering this new space where we deeply consider emerging technologies can be daunting but also invigorating—familiar territory for 21st century pedagogical pioneers.


Formulating Teams

Huddled on the floor with scissors, I used to cut student names from the roster and shuffle them around like tea leaves, hoping to see a bright future: productivity, friendship, personal growth and fulfillment! After years of practicing alchemy, I decided to get real, so I turned to math. By creating numeric teams (for example, counting off by 5), students were at least in equitably sized groups, and a reasonably sized team IS easier to manage. What I finally learned about creating teams harkens back to that familiar Hallmark greeting card slogan: When you love something, set it free…
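For the record, the counting-off arithmetic is trivial to sketch (a hypothetical illustration with made-up names, not a tool I actually use in class):

```python
# Hypothetical sketch: "counting off" a roster into equitably sized teams.
# Students count off 1..team_count, and everyone who said the same number
# forms a team, so team sizes differ by at most one student.
def count_off(roster, team_count):
    teams = [[] for _ in range(team_count)]
    for i, student in enumerate(roster):
        teams[i % team_count].append(student)
    return teams

groups = count_off(["Ana", "Ben", "Cal", "Dee", "Eli", "Fay", "Gus"], 3)
```

Equitable size is the only thing this method guarantees, which is exactly why it was an improvement over tea-leaf shuffling and exactly why it wasn’t enough.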

That’s right: chalk one up for democracy when it comes to successful team creation. For major, graded collaborative projects, I have found that student choice—not alchemy, not size, but the simple act of letting students choose—produces the best results. However, like most choices, wise decisions stem from a clear understanding of what you are getting yourself into, so clear objectives are crucial here. And successful democracies depend on a framework of law and order, so the way these choices are set up in the classroom with clear parameters and guidelines—sound instructional design—is really the hidden key to success.

Student-Centered Choice

Now, I give students a couple of weeks to come up with a great idea for their final project, and they pitch their idea during a whole class networking session. They decide which idea they support and join that team.

Sounds simple, right? It is, but there are a few tips I will share after a couple of years of conducting this process and learning from my mistakes. I know from my own experience of working on teams that even the best need an infrastructure that supports productivity. Supportive infrastructure helps me too, because I want my focus to be on reviewing the actual projects, not on team dynamics, so here again student-centered pedagogy reigns supreme. However, effective student-centered instructional design doesn’t leave students in the dark without any guiding lights. Effective instructional design provides students with explicit instruction on the strategies they need and therefore equips them with essential tools that will help them navigate their way to success.

What is a networking session?

A networking session simply means that students circulate the room, meeting and greeting their peers. Prior to arriving at the networking session, students view a short video on “the elevator pitch,” complete some reading designed to build knowledge in the subject, and then write out a three-to-five-sentence project idea pitch. All of this takes place online prior to coming to the physical classroom on networking day.

There is a skill to networking that I explain is similar to navigating a family gathering—you don’t want to get cornered talking to Aunt May for too long, but you do want to at least greet Aunt May because you don’t want to be rude, so you need to say just enough and then enact your exit strategy. What is an Exit Strategy? I ask students. They all know the answer in this context: it’s a polite way to move along without offending Aunt May.

I ask students to share suggested Exit Strategies, and I suggest phrases like, “Thanks for your time” and “It’s been great talking with you” or “Sounds like an interesting idea, thanks for sharing.” I explain to students that a networking session means that you circulate the room, and it is customary to move along—and therefore awkward when people don’t—so don’t be an Aunt May and corner people; keep moving along. The goal is to meet and greet EVERY student and hear every idea, because this is a big deal—“remember, you are choosing your partners for the next 10-12 weeks.” That’s about all the direction these 20-year-olds need to hear in order to start networking, and the majority seem to really enjoy the opportunity to chat with everyone in the class—even the introverts do a good job of at least faking enjoyment—mission accomplished!

How are teams created?

After shaking hands and pitching to every student in the class, they write down their top three choices—by name and idea— on a sheet of paper.

While they watch a 20-minute writing-strategy video, I sort students by first choice. A handful may get their second choice. Then I call each group into an area of the room for their first stand-up meeting. These meetings precede the field trip to the library, so students can brainstorm possible areas for their research using the campus databases.
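For the curious, the sorting step amounts to a single ranked-choice pass. Here is a hypothetical sketch (the function, parameter names and pitch titles are all invented for illustration; in reality I do this by hand with the paper slips):

```python
# Hypothetical sketch of sorting students into teams by ranked pitch choice.
# preferences maps each student to their ranked pitch choices (best first);
# capacity caps team size so one popular pitch doesn't absorb the class.
def assign_teams(preferences, capacity):
    teams = {}
    for student, ranked in preferences.items():
        for pitch in ranked:
            team = teams.setdefault(pitch, [])
            if len(team) < capacity:
                team.append(student)
                break
        # A student whose listed choices are all full would need a manual pass.
    return teams

prefs = {"Ana": ["Recycling App", "Campus Garden"],
         "Ben": ["Recycling App", "Campus Garden"],
         "Cal": ["Recycling App", "Campus Garden"]}
teams = assign_teams(prefs, 2)
```

Most students land on their first choice; a few spill over to their second when a pitch fills up, which mirrors exactly what happens when I sort the slips by hand.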

Can students switch teams?

I allow students to change teams until the Planning Memorandum, but after that, I explain, they are locked into the contract.

How is mutual accountability built into the team structure?

Not only do students utilize a table to identify who is doing what in the planning memorandum, but this document also asks teams to create and list their own deadlines for all of the required components of the final project.

In the case of the research-based proposal, these assignments include:

1) Collaborative Annotated Bibliography (two sources per person minimum), 2) Individual Draft Deadline, 3) Collaborative Draft Peer Review, 4) Individual Presentation Script (for final presentation) and 5) Final Proposal Draft Deadline.

How are students held accountable for team deadlines?

Just after the draft deadlines, I ask team members to individually submit a progress report on how well they met their draft and peer review deadlines (self-reflection), but other than that I do not “police” students’ deadlines except when I score their Planning Memo. I wait to grade this document until after the draft deadlines pass. If the students have met the deadline, they earn more points. I make a comment on each Planning Memo either commending the team or noting that points were lost because drafts were not submitted by the team deadline.

How do the team jobs reinforce productivity?

Since I added the team jobs video, which clarifies the role facilitators play in getting drafts collected and in telling scribes when deadline dates need to be updated (in other words, if facilitators know teammates are going to miss a deadline, they need to advise the scribe to reset it to accommodate teammates’ needs), all teams have done a much better job of actually paying attention to their Planning Memorandum, which prevents students from waiting until the last moment to try to complete their proposal section. Hence the dramatic decrease I have observed in the number of panicked emails I receive the week of the final deadline.

Procrastination and Busy Schedules Exist

Even though students chose to work on the idea, selected the team, and may even be highly interested in succeeding on the project, the disease of procrastination is nowhere near cured. Many have a long history of waiting until the last minute, but creating and enforcing draft deadlines, clear communication, and knowing that the team is counting on each individual in order to succeed go a long way toward breaking this cycle. In this case, peer pressure is a good thing.


Team building exercises

When students engage in authentic exploration of effective communication strategies in an environment that supports reflection and collaboration, the collective benefit is that they don’t want to let each other down. They try harder. When individuals who are part of teams self-report, self-reflect, and engage in learning activities that build leadership knowledge, they perform at a higher level. They take more responsibility—they become real stakeholders in the greater good of the team. I would also argue that they learn more from each other and this, ultimately, is what improves outcomes. I remember learning about Vygotsky’s Zone of Proximal Development in graduate school, but seeing it in action is quite inspiring. I have found this result to be the norm for every class that engages in at least five team building activities.

At a minimum, I embed the following five activity threads:

  1. Online/Video•Reading•Quiz on Collective Intelligence and Leadership articles (10 minute mini-lessons totaling 90 min)
  2. Online/Metacognitive Reflections (self-monitoring) (3 X 5-10 min each)
  3. In-Person or Online/ Agile Scrum Meeting (15-30 min)
  4. In-Person/Whole Class Active Listening of Case Study (30-45 min)
  5. In-Person/Planning Memorandum with Sketch, “Three Common Traits” activity and Expert Team Meetings (90 minutes)

Expert Groups

An essential component of Team Building is forming Expert Groups, which builds mutual accountability and applies the concept of collective intelligence. Collective intelligence is a key term defined in thread #1.

Students first practice the concept of “expert groups” during the “jigsaw” activity on the second day of class, when they “teach” the five required assignments to their “row teams.”

Team Jobs

Click here for Micro-Lecture defining team jobs.

The concept of the “expert groups” continues throughout the semester with students adopting these four roles/jobs:

  1. Scribe—transcribes the team answer/response; manages the document; sets up Google Docs and PPT.
  2. Facilitator—coordinates team communication; responsible for collecting contacts and disseminating information; keeps track of time, including deadlines; uses a calendar, text messages, and/or a Google Doc.
  3. Editor—makes sure the document follows the appropriate style (APA) and helps individual writers with editing; uses a highlighter tool to focus revision on documents.
  4. Spokesperson—manages Q and A and informal presentations for the team; choreographs the final presentation, including flow and organization, and plans the dress rehearsal.

These jobs are assigned on the Project Planning Memorandum, which becomes the team contract—adding value to this standardized assignment, documenting the team’s terms, and building mutual accountability. Project planning requires clearly defined goals, and when students take ownership of specific tasks and are held accountable for them, the project moves steadily toward success.

Team Sketch

Another important aspect of thread #5 is the discussion that takes place within the team during the sketch: how does the team visualize the project under a time constraint? How well does the team communicate to create a clear idea?

Three Favorites: General To Specific Lesson

A quick community-building activity that also teaches writing is the “Three Favorite Things” activity. Students have 10 minutes to brainstorm three favorites—going to the movies, trips to the beach, tacos (favorite places to go, favorite foods). But the challenge is that the team must agree on three favorites and then attempt to be as specific as possible. The example I model is asking a friend to get pizza. I say: your friend may be thinking frozen DiGiorno’s pepperoni and mushroom at home on the couch watching television, while you may be thinking Pizza My Heart in downtown San Jose. So your job is to get as specific as possible. When I ask a team to share their most specific favorite, I ask the class to listen and raise their hands if they can get the team to be MORE specific. The lesson: when writers use general concepts, the audience may have questions or need clarification; when no hands go up, the team did a great job of being very specific. I teach that both strategies are useful in writing. In a summary, for example, the writer must balance general context that draws the reader into the topic with enough specific details to illustrate the particulars.

This activity is a great icebreaker for teams because the audience participation is fun, and surprising! I learn lots about youth culture during this session, as students’ favorites reveal much about each particular class. But the lesson occurs on a day when the learning experience is especially valuable: students must then present a summary of their final project idea to the class “advisory board,” and we again test the general-to-specific concept by raising hands if more clarification of the project description is required. One of the most challenging aspects of technical writing is putting complicated systems into simple, understandable terms using a blend of general and specific.

Language-based Pedagogy

Language-based pedagogy centers terms and definitions and asks students to apply those terms in authentic contexts.

Struggle Expected

Don’t get me wrong—I expect struggle within teams. That’s part of what I teach: challenges are inherent in any collaboration. But if students have tools that work to overcome these problems, then teamwork really can be dreamwork.

Team Building

Every semester I struggle to fit everything in. The flipped model removes the lecture from the classroom, but giving students time during class to practice with instructor feedback is a fundamental principle of “flipped” instruction, so class time becomes a balancing act between 1) time spent discussing key concepts taught outside of class and 2) time spent applying that knowledge through hands-on practice. Both activities require ample time, so adding a third element to this already busy class schedule often seems impossible.

However, in today’s project-based classroom environments, a significant percentage of students’ grades stems from collaborative assignments, so building functional teams benefits everyone. If we are going to tether students’ grades together, then time spent teaching communication and mutual accountability is time well spent.

One of the challenges of incorporating team building into busy classrooms is developing effective activities that take minimal time. Another is that the benefits of team building aren’t always immediately obvious. At first, I struggled with both. Over time, though, I learned how to cut lessons down to a handful of short, generative strategies designed to fill a “team toolkit” that students can carry with them to their next project.

I also learned to appreciate delayed gratification. While the long-term payoff of incorporating team building is unknowable, the short-term payoff becomes obvious during crunch time—the days and hours before the collaborative project deadline, when I used to count on receiving panicked student emails about missing teammates or requests for deadline extensions.

Since incorporating the team toolkit a couple of years ago, I have yet to receive one of these emails. Perhaps the most important result was how dramatically the students’ experience improved, as documented in individual progress reports. Read student testimonials here. Yup, that’s right: 100% reported a positive team experience, even on this 35+-page, research-based collaborative Proposal project worth more than 20% of their grade. Whereas students had plenty of negative things to say about previous team experiences, after incorporating the team toolkit, their reports shifted to the positive.

Most surprising, however, was that team writing also showed remarkable improvement—not just a handful of student writers improved, but the majority of writers who engaged in the team building experience submitted improved writing. How do I know this? I collect individual and collaborative drafts of the Proposal. This practice allows me to track progress, word-count requirements (mutual accountability on my part), and gaps in the proposal narrative. Occasionally, I require a rewrite of particularly egregious errors or missteps, and access to individual drafts helps me target that feedback to a specific individual—because mutual accountability goes both ways. I need to hold individuals accountable just as the team does. I need to clarify objectives (clear communication) just as the team does. These simple steps lead to team success. Overall, students understand that this framework holds them accountable and supports the team’s efforts, which improves attitude—and that’s half the battle.