## Disrupt This: My Challenge to Silicon Valley

Over the past few months, Audrey Watters, Dan Meyer, and Keith Devlin have been critical of Silicon Valley, edtech startups, and iPad textbooks which hope to “disrupt” education. In my opinion, the real stumbling block to meaningful change is students’ formal reasoning skills — analytical thinking that cannot be cultivated by pausing and rewinding video or playing Math Blasters.

Here are my 5 points:

1. Many of our students are transitioning from concrete to formal reasoning.
2. A significant barrier to learning for understanding is students’ own formal reasoning skills.
3. Formal reasoning skills (and thus learning for understanding) can be developed when instruction is structured around the Learning Cycle.
4. Silicon Valley and edtech startups have been focusing on (often inappropriately) just a small fraction of the learning cycle.
5. My Challenge to Silicon Valley: Help students learn for understanding by innovating around the rest of the learning cycle.

1. Many of our students are transitioning from concrete to formal reasoning.

Below are 3 reasoning puzzles, each followed by a video of college students attempting to solve the puzzle while explaining and discussing their logic. It’s a highly illuminating look at students’ reasoning processes.

I. The Algae Puzzle (Combinatorial Reasoning)

II. The Frog Puzzle (Proportional Reasoning)

III. The Mealworm Puzzle (Scientific Reasoning)
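The puzzle statements themselves aren’t reproduced here, but the kind of systematic thinking the Algae Puzzle rewards (exhaustively listing every combination without missing or double-counting any) can be sketched in a few lines of Python. The four algae labels below are hypothetical placeholders, not the actual puzzle items:

```python
from itertools import combinations

algae = ["A", "B", "C", "D"]  # hypothetical labels, for illustration only

# Enumerate every way to pick two different kinds of algae.
# Students with formal combinatorial reasoning generate such lists
# systematically; students still reasoning concretely tend to list
# haphazardly and miss cases or repeat them.
pairs = list(combinations(algae, 2))
for pair in pairs:
    print(pair)
print(len(pairs))  # prints 6, i.e. "4 choose 2"
```

The machine enumerates effortlessly; the point of the puzzle is whether the student can construct that exhaustive, non-redundant listing in their own head.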

2. A significant barrier to learning for understanding is students’ own formal reasoning skills.

You’re probably thinking, “So, what? Just because Johnny can’t figure out all the possible combinations of algae doesn’t mean he can’t learn physics.” But the research strongly suggests that it does, even in interactive engagement classes.

In a previous post, I presented this graph from Hake’s famous six thousand student study:

As you can see, interactive engagement courses outperformed traditional courses in learning gains as measured by the Force Concept Inventory (FCI). The FCI is the most widely used test of physics understanding. But why is there such a wide range of FCI gains among the IE courses and (not shown) among the individual students within a particular course? A study entitled “Why You Should Measure Your Students’ Reasoning Ability” (Coletta, Phillips, and Steiner) suggests reasoning ability is strongly correlated with physics success.
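The “gain” Hake plots is the normalized gain: the fraction of the possible pre-to-post improvement a class actually achieves. A minimal sketch (the example scores are invented for illustration):

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain g = (post - pre) / (100 - pre):
    the fraction of the available room for improvement that a
    class (or student) actually captured on the FCI."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Invented example: a class averages 30% before instruction
# and 65% after, capturing half of the possible improvement.
g = normalized_gain(30, 65)
print(g)  # prints 0.5
```

Because it is normalized by the room left to improve, a class that starts at 30% and one that starts at 60% can be compared on the same scale.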

In the study, several different physics courses administered both the FCI (to measure physics gains) and the Lawson Test of Classroom Reasoning Skills (to measure formal reasoning ability). The Lawson test contains several items very similar to the three puzzles above. Here’s what they found:

The data were split into quartiles based on the Lawson scores. The light green bars represent the average Lawson test score for each quartile and the dark green bars represent the average FCI gain for each quartile. There is clear correlation between reasoning ability and learning gains in physics. I’d wager this correlation extends to other subjects as well.
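The quartile procedure described above is simple to mimic. The data below are randomly generated stand-ins (the study’s actual scores aren’t reproduced in this post); the point is only the method: bucket students into quartiles by Lawson score, then average each quartile’s Lawson score and FCI gain.

```python
import numpy as np

rng = np.random.default_rng(0)
lawson = rng.uniform(20, 100, size=200)              # stand-in Lawson scores (%)
gain = 0.004 * lawson + rng.normal(0.15, 0.05, 200)  # stand-in FCI gains, loosely correlated

# Quartile edges on the Lawson scores, then a quartile index (0-3) per student.
edges = np.quantile(lawson, [0.25, 0.5, 0.75])
quartile = np.digitize(lawson, edges)

# Average Lawson score and average FCI gain within each quartile,
# i.e. the two bars plotted per quartile in the study's figure.
for q in range(4):
    mask = quartile == q
    print(f"Q{q + 1}: Lawson {lawson[mask].mean():.1f}%, gain {gain[mask].mean():.2f}")
```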

3. Formal reasoning skills (and thus learning for understanding) can be developed when instruction is structured around the Learning Cycle.

According to Piaget, intellectual growth happens through self-regulation — a process in which a person actively searches for relationships and patterns to resolve contradictions and to bring coherence to a new set of experiences.

In order to get students to experience self-regulation and further develop their reasoning skills, classroom experiences should be constructed around the Karplus learning cycle, which contains the stages of EXPLORATION, INVENTION, and APPLICATION. From Karplus’s workshop materials on the learning cycle:

EXPLORATION: The students learn through their own actions and reactions in a new situation. In this phase they explore new materials and new ideas with minimal guidance or expectation of specific accomplishments. The new experience should raise questions that they cannot answer with their accustomed patterns of reasoning. Having made an effort that was not completely successful, the students will be ready for self-regulation.

INVENTION: Starts with the invention of a new concept or principle that leads the students to apply new patterns of reasoning to their experiences. The concept can be invented in class discussion, based on the exploration activity and later re-emphasized by the teacher, the textbook, a film, or another medium. This step, which aids in self-regulation, should always follow EXPLORATION and relate to the EXPLORATION activities.  Students should be encouraged to develop as much of a new reasoning pattern as possible before it is explained to the class.

APPLICATION: The students apply the new concept and/or reasoning pattern to additional examples. The APPLICATION phase is necessary to extend the range of applicability of the new concept. APPLICATION provides additional time and experiences for self-regulation and stabilizing the new reasoning patterns. Without a number and variety of APPLICATIONs, the concept’s meaning will remain restricted to the examples used during its definition. Many students may fail to abstract it from its concrete examples or generalize it to other situations. In addition, APPLICATION activities aid students whose conceptual reorganization takes place more slowly than average, or who did not adequately relate the teacher’s original explanation to their experiences. Individual conferences with these students to help identify and resolve their difficulties are especially helpful.

4. Silicon Valley and edtech startups have been focusing on (often inappropriately) just a small fraction of the learning cycle.

Unfortunately, Silicon Valley has been dumping its disruptive dollars almost solely into the INVENTION phase and on the tail-end of the phase at that. It views education purely as a content consumption process and ignores the development of formal thinking and reasoning.

Remember, in the invention phase, “The concept can be invented in class discussion, based on the exploration activity and later re-emphasized by the teacher, the textbook, a film, or another medium.” That’s Khan Academy videos, flipclass videos, iBooks, and similar technologies designed to present content via direct instruction. However, “Students should be encouraged to develop as much of a new reasoning pattern as possible before it is explained to the class.” Which means this type of direct instruction should be as minimal as possible, because it robs kids of reasoning and making meaning. In other words, Silicon Valley is putting its energy into the portion of the invention phase that should be as small as possible!

Now let’s look at the application phase. There has been some development here as well, most notably in apps and exercise software which seek to gamify the classroom. But the application phase isn’t about getting 10 right answers in a row or solving problems to shoot aliens. Remember: “Without a number and variety of APPLICATIONs, the concept’s meaning will remain restricted to the examples used during its definition.” Real learning with understanding means students can reason about the concepts well enough to use them in new and unique contexts (aka transfer). Applications should require students to examine their own thinking, make comparisons, and raise questions. Great application examples include open-ended problems, problems which present a paradox, and student reflection on both successful and unsuccessful problem-solving methods. Deep learning does not end when the Application phase begins.

5. My Challenge to Silicon Valley: Help students learn for understanding by innovating around the rest of the learning cycle.

Real disruption isn’t going to come from skill and drill apps, self-paced learning, badges, YouTube videos, socially-infused learning management systems, or electronic textbooks. Students must be continuously engaged in the learning cycle. We need to equip our students with the reasoning skills to learn how to learn anything. Focus on experiences in the exploration phase, meaningful sense making in the invention phase, and worthy problems in the application phase.

But, in reality, we only have ourselves to blame. It shouldn’t come as a surprise to us when students can’t think — the status quo in education has been to spend most of our time on content delivery while robbing students of opportunities to explore and reason. And current edtech trends aren’t fixing this problem; rather, they are making it easier to make the problem worse.

To be fair, a few “good disruptions” have occurred in the other phases of the learning cycle. Motion detectors allow students to “walk a graph” so they can easily explore position-time and velocity-time graphs. GeoGebra allows students to explore and play with geometry and functions quickly and easily. PhET simulations allow students to conduct open-ended planetary orbit experiments that would be impossible in real life. And VPython programming gets students to apply what they learned to write their own simulations and visualizations.

So when presented with the next great edtech “disruption,” ask yourself: has this innovation actually changed how students think about math and science concepts? Or has it just allowed students to get a few more questions correct on the state exam?

The next two articles:

• “Promoting Intellectual Development Through Science Teaching” (Renner and Lawson)
• “Physics Problems and the Process of Self-Regulation” (Lawson and Wollman)

are found here: Module 11: Suggested Reading (Workshop Materials for Physics Teaching and the Development of Reasoning)

## You Khan’t Ignore How Students Learn

From Harvard EdCast’s “The Celebrity Math Tutor” (transcript below)

Buffy Cushman-Patz: What efforts do you take to ensure that your pedagogy is consistent with what education research shows about how people learn, especially how people learn math and science?

It’s unfortunate that “The Teacher to the World” was only able to mention one study about how students learn. A study which he then dismisses. And since he doesn’t describe any other efforts to keep his pedagogy consistent with research, his real answer to Buffy’s question is: “I don’t.”

Let’s look at Khan’s response in more detail:

“Now we are getting pretty deep on our own analytics on our website.”

I don’t see how statistics about how many times students have watched/rewound each video or how many times students miss a question in the exercises tells us anything about how effective his videos are. I don’t see how he could use that data to refine his future videos in the same way a teacher would reflect and refine lessons from year-to-year.

“…you can’t come up with these rules the way, all teaching has to be done like this.”

He’s right. There is no one rule, no one formula, for teaching. The Physics Education Research User Guide website contains 51 different research-based teaching methods. The website can filter these methods by type, instructional setting, course level, coverage, topic, instructor effort, etc. And while 51 different methods may seem overwhelming, they all have one important characteristic in common: interactive engagement (IE).

So what is interactive engagement? Hake defines IE as methods “designed at least in part to promote conceptual understanding through interactive engagement of students in heads-on (always) and hands-on (usually) activities which yield immediate feedback through discussion with peers and/or instructors.”

A video lecture is not interactive engagement.

“…maybe you explain once and you reemphasize that this goes against misconception A, B, C, or D.”

Khan (along with most of the general public, in my opinion) has this naive notion that teaching is really just explaining. And that the way to be a better teacher is to improve your explanations. Not so! Teaching is really about creating experiences that allow students to construct meaning.

“And I think frankly, the best way to do it is you put stuff out there and you see how people react to it…”

This is flawed. People’s reactions are not indicators of effectiveness. Pre/post testing is needed to indicate effectiveness. Ah, but perhaps there is a relationship between people’s reactions and effectiveness? The research indicates otherwise. In the very research study that Khan says is valid (and then dismisses), students actually did better after watching the videos they described as confusing, and made no gains after watching the videos they described as easy to understand. Additional research indicates that when an instructor switches over to IE methods, course evaluations from students tend to be more negative than the previous year, despite gains from students going up. (Don’t worry, a few years after the switch to IE, the evaluations go back to pre-IE levels.)

“You see, the comments they put, they’ll ask questions based on… Every time I put a YouTube video up, I look at the comments — at least the first 20, 30, 40 comments that go up — and I can normally see a theme: that look, a lot of people kind of got the wrong idea here. Or maybe some people did, and then I’ll usually make another video saying “Hey, look after the last video, I read some of the comments and a lot of y’all are saying this is not what we’re talking about it’s completely different.” So that means I am attacking the misconceptions.”

Again, it’s not about crafting better explanations. It’s about helping students wrestle with their conceptions and guiding them.

“But I think if you had a formula in place, and you do that every time, I think once again the learner will say, ‘This guy’s not thinking through it and he’s not teaching us his sensibilities, his thought processes. He’s just trying to meet some formula on what apparently is good video practice.’”

Another naive notion of teaching. The goal is not for the teacher to teach the students his sensibilities and thought processes. The goal is for the teacher to have the students use their sensibilities and thought processes to reason through the concepts. Empower the student to think for themselves, rather than consuming the teacher’s ideas.

“And I’ll go the other way: you can dot all the “i”s and cross all the “t”s on some research-based idea about how a video should be made, but if your voice is condescending, if you’re not thinking things through, if it’s a scripted lecture, I can guarantee you it’s not going to appeal with students.”

Yet there are plenty of people who prefer to watch Walter Lewin’s highly-scripted performance lectures to Khan’s off-the-cuff style lectures. (Though remember that preference has nothing to do with effectiveness. In fact, Lewin’s showstopping lectures were no more effective than the mundane professors before him.)

“…and I think in general, people would be doing a disservice if they trump what one research study does and there’s a million variables there: who was the instructor, what were they teaching, what was the form factor, how did they use to produce it? You’d be doing yourself a disservice if you just take the apparent conclusions from a research study and try to blanket them onto what is really more of an art. It’s like saying that there’s a research study on what makes a nice painting and always making your painting according to that research study that would obviously be a mistake.”

Here is the most damning piece of evidence, from Hake’s famous six thousand student study:

The six thousand students in Hake’s study were not in a single class. They were in 62 different courses, from high school to university, taught by a variety of instructors with different personalities and expertise. And yet ALL the IE courses made greater gains (the slope of the graph — between 0.34 and 0.69) than the traditionally taught courses (average 0.23). It should also be noted that the green IE courses above were NOT identical and did not follow some magic teaching formula. They only had to conform to Hake’s broad definition of IE given above. So you see, those “million variables” that Khan mentions don’t matter. METHOD trumps all those other variables.

But surely teacher expertise matters, right?

Yes and no.

NO: As seen in Hake’s study above, when comparing IE teachers to traditional teachers, expertise doesn’t matter because IE always trumps traditional.

NO: Note the small spread of the red-colored traditional classes shown above, which hover around an average gain of 0.23. Traditional methods produce very similar results no matter the level of the course or instructor.

YES: When comparing IE teachers to other IE teachers, expertise does matter. IE gains ranged from 0.34 to 0.69. As instructors get more comfortable using IE methods, gain increases. See, for example, this graph about the effectiveness of modeling instruction:

Expert modelers had higher gains than novice modelers.

But surely there is a place for lectures, right?

Yes, BUT students must be “primed” for the lecture. According to the PER User’s Guide FAQ:

It is possible for students to learn from a lecture if they are prepared to engage with it.  For example, Schwartz et al. found that if students work to solve a problem on their own before hearing a lecture with the correct explanation, they learn more from the lecture.  (For a short summary of this article aimed at physics instructors, see these posts – part 1 and part 2 – on the sciencegeekgirl blog.) Schwartz and Bransford argue that lectures can be effective “when students enter a learning situation with a wealth of background knowledge and a clear sense of the problems for which they seek solutions.”

If you are a physics teacher, be sure to get these discipline-specific books about how students learn physics:

And just in case you think I’m an armchair critic with nothing to contribute, I want you to know I’ve opened up my classroom to the whole world on my Noschese 180 blog, where I’ve been sharing a picture and a reflection from each school day. It’s not quite the Noschese Academy, but I hope you find it worth reading and commenting, as we journey through teaching together.

## A Demonstration of the Ineffectiveness of Traditional Instruction

A student in a lab holds a brick of weight W in her outstretched horizontal palm and lifts the brick vertically upward at a constant speed. The force of the student’s hand on the brick is:
A. constant in time and equal to zero.
B. constant in time, greater than zero, but less than W.
C. constant in time and equal to W.
D. constant in time and greater than W.
E. decreasing in time but always greater than W.

Now watch this video. Feel free to pause, rewind, and rewatch as needed.

A student in a lab holds a brick of weight W in her outstretched horizontal palm and lifts the brick vertically upward at a constant speed. The force of the student’s hand on the brick is:
A. constant in time and equal to zero.
B. constant in time, greater than zero, but less than W.
C. constant in time and equal to W.
D. constant in time and greater than W.
E. decreasing in time but always greater than W.

Believe it or not, the concept needed to reach the correct answer is given in Khan’s video. Highlight below to reveal:
C. constant in time and equal to W. Why? Since the brick moves at a constant velocity, the forces on the brick (your hand and gravity) must be balanced.
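In symbols, the hidden answer is a one-line application of Newton’s second law:

```latex
\text{constant velocity} \;\Rightarrow\; a = 0
\;\Rightarrow\; F_{\text{hand}} - W = ma = 0
\;\Rightarrow\; F_{\text{hand}} = W
```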

## Khan’s School of the Future

From Hacked Education:

Khan Academy announced this morning that it has raised \$5 million from the O’Sullivan Foundation (a foundation created by Irish engineer and investor Sean O’Sullivan). The money is earmarked for several initiatives: expanding the Khan Academy faculty, creating a content management system so that others can use the program’s learning analytics system, and building an actual brick-and-mortar school, beginning with a summer camp program.

“Teachers don’t scale,” I remember Sal Khan saying to me when I interviewed him last year. What can scale, he argues, is the infrastructure for content delivery. And that means you just need a handful of good lecturers to record their lessons; the Internet will take care of the rest.

But online instruction clearly isn’t enough, and as “blended learning” becomes the latest buzzword — that is, a blend of offline and computer-mediated/online instruction — Khan Academy is now eyeing building its own school. The money from the O’Sullivan Foundation will go towards developing a “testbed for physical programs and K-12 curricula,” including an actual physical Khan Academy school.

What might Khan’s “school of the future” look like?

In his video interview with GOOD Magazine, Khan said:

As far as the future of learning is concerned, the school is going to be one or two really big classrooms, and because everyone can work at their own pace, we are going to see the best be a higher bar and you’re going to see everyone having access to that and they can move up with the best.

One or two large classrooms where everyone works at their own pace? That sounds a lot like Rick Ogston’s Carpe Diem school:

Matt Lander reports the following from his visit to Carpe Diem:

Carpe Diem is a hybrid model school, rotating kids between self-paced instruction on the computer and classroom instruction. Their building is laid out with one large computer lab, with classroom space in the back. They had 240 students working on computers when I walked in, and you could have heard a pin drop.

Carpe Diem has successfully substituted technology for labor. With seven grade levels and 240 students they have only 1 math teacher and one aide who focuses on math. [emphasis mine]

Carpe Diem also touts that it gets great results with less per-pupil spending. How? Well, as implied above, they have fewer teachers and staff. Also, take a look at the Carpe Diem Parent/Student handbook and you can see why: they have NO nurse and NO food service. Other ways I bet they save money, compared to a regular public school: there is very little equipment to buy (aside from computers and furniture) — no art supplies, no science labs, no physical education equipment, etc. It seems Carpe Diem also lacks special programs: no special education, no athletics, and no performing arts. We also have the problem of using standardized test scores to measure success. I think what is more important is: How many are successful in college? How many stay on past freshman year?

Carpe Diem is really an online school that also has a few brick and mortar campuses. The curriculum they use for both their virtual and physical schools is called e2020. From e2020’s website:

e2020 then designs each lesson with student-centered objectives that maximize the use of Bloom’s Taxonomy of Learning Domains. Lessons are designed in order to provide the student with an optimal learning experience that is unique for each course.  Students progress through the lesson with a series of activities such as, direct instruction videos by certified teachers; vocabulary instruction: interactive lab simulations; journals and essay writing; 21st century skills; activities that include projects, design proposals, case studies, on-line content reading; and homework/practice before being formatively assessed with a quiz. Topic test and cumulative exam reviews are provided to reinforce mastery prior to students’ taking summative assessments.

So the kids work through the modules at their cubicles and can seek out extra help at “workshops.” You can test some of the modules if you register here. For the science modules I tested, there are periodic multiple choice assessments. The in-module labs are all simulations — no manipulation of any physical equipment. It seems kids can pass the state exams based on their module work, but I wager they will be severely ill-equipped for college or the real world, especially in STEM fields.

My issues with this blended/hybrid model of school:

• The conception of learning seems to be isolated, rather than group.
• It appears to teach/assess mostly low-order practices.
• I can’t see how physics and chemistry could be done well, and thus contribute to developing the STEM workforce.
• How can ONE teacher be versed in pedagogically appropriate ways of helping students across SEVEN grade levels?

Blended learning schools like Carpe Diem pale in comparison to what schools like High Tech High are doing:

Where would you rather go to school?

Exhibit A:

Exhibit B:

Discuss.

## The Poison of Points

Exhibit A: Clickers and Points

Exhibit B: Cramster and Homework Points

Exhibit C: Khan Academy and Points

I think all of this cheating and gaming is great. Why? Because it forces us to improve our practice. (Or would you rather wear yourself out playing “To Catch a Cheater?”)

If students do homework and go to class solely because of points, there is a larger systemic issue that needs to be addressed.

To which you say, “But if I don’t give points, they won’t do it.”

So where does it stop? Why do we let ourselves become willing participants in this game for points? We need a culture shift.

## Interview on NSTA’s Lab Out Loud Podcast

In which I talk with the hosts of Lab Out Loud, science teachers Dale Basler and Brian Bartel, about blogging, active student engagement, flipped classrooms, pseudoteaching, and the Khan Academy:

Episode 66 – But Are They Really Learning?

## Same Planet, Different Worlds

What is the future of learning?

Vision #1: Doing Old Things in New Ways

Vision #2: Doing New Things in New Ways

(Thanks to David Smith of the Da Vinci Discovery Center of Science & Technology for bringing the Cyberlearning video to my attention)

## Interview with MSNBC.com

A few weeks ago I was interviewed for MSNBC.com’s “Future of Technology” series for a story on Khan Academy and online lectures. I appear in these two videos:

View it on MSNBC.com: Khan Academy sparks education reform debate

View it on MSNBC.com: Teaching with technology: What works in class

I am grateful to the show’s producers, Matt Rivera and Wilson Rothman, for giving me the opportunity to share my work in the classroom and for staying true to my main criticisms. And extra kudos to Matt for what I fear is now commonplace in journalism: he was a one-man show — he brought and set up all the equipment (camera, lights, and sound) AND conducted the interview. Thanks!

## Khan vs. Karplus: Elevator Edition

Exhibit A: Sal Khan on elevators

Exhibit B: My students on elevators
Framed around the Karplus learning cycle (Exploration, Invention, and Application), my students construct the conceptual and mathematical models themselves.

1. Exploration Phase:

2. Invention Phase:

• Draw a motion diagram for the object attached to the scale when the scale is stationary, then being pulled up and then stops.
• Draw a force diagram for the object attached to the scale when the scale is stationary, then being pulled up and then stops. Decide whether the force diagram is consistent with the motion diagram. How is the force diagram related to the reading of the scale?
• Use the force diagram and the idea under test to make a prediction of the relative readings of the scale.
• Observe the experiment and reconcile the outcome with your prediction.

(Video and questions for this phase taken from Eugenia Etkina’s awesome site Physics Teaching Technology Resource which has many more video experiments.)

3. Application Phase:

Instead of showing our students a better lecture, let’s get them doing something better than lecture.

UPDATE: Welcome New York Times readers! Other recommended posts: