Category Archives: Uncategorized

Flappy Bird Physics Is Real Life?

If you don’t already know, Flappy Bird is the hot new mobile game right now. The premise is simple: navigate the bird through the gaps between the green pipes. Tapping the screen gives a slight upward impulse to the bird. Stop tapping and the bird plummets to the ground. Timing and reflexes are the key to Flappy Bird success.

This game is HARD. It took me at least 10 minutes before I even made it past the first pair of pipes. And it’s not just me who finds the game difficult. Other folks have taken to Twitter to complain about Flappy Bird. They say the game is so difficult that the physics must be WRONG.

 

So, is the physics unrealistic in Flappy Bird?

Sounds like a job for Logger Pro video analysis! I used my phone to take a video of Flappy Bird on my iPad. To keep the phone steady, I placed it on top of a ring stand with the iPad underneath.

[Photo: phone mounted on a ring stand above the iPad]

(I’ve uploaded several of the videos here if you’d like to use them yourself or with students: Flappy Bird Videos.)

Then I imported the videos into Logger Pro and did a typical video analysis by tracking Flappy’s vertical position in the video. Sure enough, the upside-down parabolic curves indicate Flappy is undergoing constant downward acceleration.

[Graph: Flappy Bird vertical position vs. time]

But do the numerical values represent normal Earth-like gravity or insanely hard Jupiter gravity? To find out, we need to (1) set a scale in the video so that Logger Pro knows how big each pixel is in real life and (2) determine the slope of Flappy’s velocity-time graph while in free fall, which equals the gravitational acceleration.

The only thing we could realistically assume is the size of Flappy Bird. If we assume he’s as long as a robin (24 cm), then the slope of the velocity-time graph is 9.75 m/s/s, which is really close to Earth’s gravitational acceleration of 9.8 m/s/s. Flappy Bird is REAL LIFE.

[Graph: Flappy Bird velocity vs. time]
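If you’d like to replicate the scale-and-slope calculation without Logger Pro, here’s a minimal Python sketch of the same idea. This isn’t what I ran; the sprite length in pixels and the position data are placeholders (synthesized here for a 9.8 m/s/s fall so the script runs on its own), whereas a real analysis would use positions clicked frame by frame from the video.

```python
# Minimal sketch of the scale-and-slope method (not the actual Logger Pro analysis).
# All numbers below are placeholders: in a real analysis, t and y_px come from
# tracking Flappy's position frame by frame in the video.
import numpy as np

BIRD_LENGTH_M = 0.24                    # assume Flappy is robin-sized (24 cm)
bird_length_px = 52.0                   # hypothetical measured sprite length, in pixels
meters_per_px = BIRD_LENGTH_M / bird_length_px

t = np.arange(0, 0.5, 1 / 30)                           # 30 fps frame times (s)
y_px = 300 + 0.5 * (9.8 / meters_per_px) * t ** 2       # synthetic free-fall positions (px)

y = y_px * meters_per_px                         # convert pixels to meters
v = np.gradient(y, t, edge_order=2)              # numerical velocity (m/s)
slope, _ = np.polyfit(t, v, 1)                   # slope of the velocity-time graph
print(f"estimated acceleration: {slope:.2f} m/s/s")
```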

So then why is everyone complaining that the game is unrealistic when, in fact, it is very realistic? I blame Angry Birds and lots of other video games. Repeating the same video analysis on Angry Birds and assuming the red bird is the size of a robin (24 cm), we get a gravitational acceleration of 2.5 m/s/s, which is only about 25% of Earth’s gravitational pull.

[Graph: Angry Birds velocity vs. time (bird scaled to robin size)]

In order to make Angry Birds more fun to play, the programmers had to make the physics less realistic. People have gotten used to it, and when a game like Flappy Bird comes along with realistic physics, people exclaim that it must be wrong. As one of my students notes:

 

UPDATE 31 Jan 2014:
Inspired by a tweet from John Burk,

we made a video showing Flappy Bird falling at the same rate as a basketball:

Here’s what I did: We determined from the analysis above that Flappy Bird is about 24 cm across. Conveniently, basketballs are also about 24 cm across. So I had my physics teacher colleague Dan Longhurst drop a basketball so I could video it with my iPad. Dan just needed to be the right distance from the camera so that the basketball on the iPad screen appeared the same size as Flappy Bird on the screen (1.5 cm). Next, I played the basketball drop video and Flappy Bird on side-by-side iPads and recorded that with my phone’s camera. Once I got the timing right, I uploaded the video to YouTube, trimmed it, made a slow-motion version in the YouTube editor, then stitched the real-time and slow-motion versions together to create the final video you see above.

UPDATE 1 Feb 2014: While the gravitational acceleration in Flappy Bird is realistic, the impulses provided by the taps are NOT realistic. Here’s a velocity-time graph showing many taps. When a tap happens, the velocity graph rises upward:

[Graph: Flappy Bird velocity vs. time across many taps, showing a constant post-tap velocity]

As you can see, no matter what the pre-tap velocity is (the velocity right before the graph rises up), the post-tap velocity is always the same (a bit more than 2 m/s on this scale). This means the impulses are not constant. In real life, equal taps should produce equal impulses, which means the difference between pre- and post-tap velocity would be the same for every tap.
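To make the distinction concrete, here’s a tiny sketch contrasting the two behaviors. The pre-tap velocities are made-up examples; the 2.2 m/s post-tap value is roughly what the graph above shows, and the fixed 2.5 m/s kick is an arbitrary stand-in for a constant real-world impulse.

```python
# Contrast the game's behavior with a constant real-world impulse.
# Pre-tap velocities are made-up examples; POST_TAP_V is roughly the value read
# off the velocity-time graph above, and DELTA_V is an arbitrary fixed kick.
pre_tap_velocities = [-3.1, -1.4, -2.6, -0.8]   # m/s, just before each tap
POST_TAP_V = 2.2                                # m/s: the game resets to this every tap
DELTA_V = 2.5                                   # m/s: what an equal impulse would add

for v_pre in pre_tap_velocities:
    game_post = POST_TAP_V          # same post-tap velocity, so the impulse varies
    real_post = v_pre + DELTA_V     # same change in velocity, so the impulse is constant
    print(f"pre {v_pre:+.1f} m/s -> game {game_post:+.1f} m/s "
          f"(dv {game_post - v_pre:+.1f}), constant impulse {real_post:+.1f} m/s")
```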

TL;DR: Is the physics in Flappy Bird realistic? Yes AND no.
YES: The gravitational pull is constant, producing a constant downward acceleration of 9.8 m/s/s (if we scale the bird to the size of a robin).
NO: The impulse provided by each tap is variable in order to produce the same post-tap velocity. In real life, the impulse from each tap would be constant and produce the same change in velocity.

UPDATE 1 Feb 2014 (2): Fellow physics teacher Jared Keester did his own independent analysis and shares his findings in this video:

 

What Happened When I Gave Them the Answers

Frank Noschese:

Reblogging today’s 180 blog post to Action-Reaction in order to try to get more feedback from folks. Click through to read more and leave comments over there. Thanks!

Originally posted on Noschese 180:

[TL;DR - Not as much as I had hoped.]

College-Prep Physics: Students came to class with the following question completed for homework:

You are on a sleigh ride in Central Park one brisk winter evening. The mass of the sleigh with everyone in it is 250 kg, and the horses are pulling the sled with a combined horizontal force of 500 N. The sled moves at a constant speed of 3.33 m/s.
(a) What is the force of kinetic friction on the sleigh?
(b) What is the coefficient of kinetic friction between the sleigh and the ground?

I asked everyone to whiteboard their answers. I heard some students say they didn’t get it. Several other students came up to me — worksheet in hand — to ask if their answer was right.

“I’m going to give you the answers,” I said. “Here they are.”

[Photo: the posted answers]
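For readers who can’t make out the photo, a quick worked version of those answers (assuming g ≈ 9.8 m/s/s): since the sleigh moves at constant speed, the net horizontal force is zero, so the friction force balances the horses’ pull.

(a) f_k = F_pull = 500 N

(b) μ_k = f_k / (mg) = 500 N / [(250 kg)(9.8 m/s/s)] ≈ 0.20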

“Now on your whiteboards, I want…


Quizzes vs. Projects (Mass & Weight Edition)

Tests are evil, let them do projects.

That type of rhetoric frequently appears in my Twitter stream. My gut reaction is hell yeah. But some recent quiz results have gotten me thinking ….

Take for example, this learning objective:
The student understands the difference between mass and weight.

Here’s a student project (not mine) which clearly addresses the objective.

Here’s another project (also not mine). This one is very creative and totally adorable.

But those two projects are really just rehashes of the traditional explanation of the difference between mass and weight: “mass is the amount of stuff an object has and doesn’t change, while weight is the gravitational pull on an object and can change depending on location.” I wonder what would happen if those two students encountered quiz questions like the ones below. Would they make the same mistakes as several of my students did? I feel that even though my students can parrot back the difference between mass and weight (like in the above videos), they don’t really understand that difference if they miss these types of quiz questions:

[Scans: three quiz questions on mass vs. weight]

I did find one project where a student (again, not mine) gives a thorough explanation and uses several examples. I predict that this student should be able to answer those quiz questions.

What I’m trying to say is that I feel that teacher-generated questions and experiences (quizzes, labs, whiteboard problems, etc.) are important because they challenge students to think and apply in ways they probably wouldn’t if we just left them to their own devices.

But I also get that projects let students be creative and allow them to demonstrate their understanding in ways that quizzes simply can’t.

Perhaps the answer is just “all things in moderation.” Or perhaps the project parameters need improvement so students aren’t simply reciting Wikipedia definitions from a Powerpoint? Or something else?

What are your thoughts?

Edtech PR Tips

I’m not a PR guy. I’m just a teacher. But they say that if you want to be a disruptor, the best experience is no experience. So here goes…

1. It’s not about the technology. It’s about what students are empowered to do because of your technology. Show us how you take students beyond what they could do previously. Show student work (“Hey, look what this kid can do!”). Stop focusing on checkmarks, badges, data, dashboards, and slick UI.


2. Learning is social. Show students interacting with each other, questioning, helping, constructing — all as a result of using your technology. Don’t show kids glued to screens, headphones on, working en masse and in isolation. It’s creepy.

[Photo: the Learning Lab at a Rocketship school]

The Learning Lab at a Rocketship school, where students spend 2 hours each day.

3. Don’t use phrases that signal you have simplistic views about teaching and learning. In particular: learning styles, digital natives, individualized instruction, and content delivery.


4. Practices are just as important as content. Show how you enable students to engage and grow in the core practices in math, science, and ELA.

[Diagram: Venn diagram of practices shared across math, science, and ELA]

Credit: Tina Cheuk, tcheuk@stanford.edu [PDF (scroll to bottom)]

5. Show how you implement/complement research-based practices about how students learn. Study up on these characteristics of effective teaching methods. Otherwise…


6. Run controlled, peer-reviewed experiments that use conceptual diagnostic tests to measure growth. We know almost anything works better than (or as well as) passive lecture instruction. But how does your technology stack up against other evidence-based teaching methods? And be sure to use conceptual diagnostic tests, not final exams, standardized tests, or failure rates. CDTs have been painstakingly researched and designed to measure true conceptual understanding rather than algorithm memorization. Without strong evidence, we’re just skeptical of your claims.

[Chart: Hake’s normalized gain data]

Hake’s analysis of 62 different physics courses as determined by gain on a physics conceptual diagnostic test.
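For anyone who wants to run this comparison on their own pre/post data: the metric behind Hake’s plot is the average normalized gain, the fraction of the possible improvement a class actually achieves. A minimal sketch follows; the example scores are illustrative, not taken from Hake’s data.

```python
# Hake's average normalized gain: <g> = (post% - pre%) / (100% - pre%),
# i.e. the fraction of the possible improvement actually achieved.
def normalized_gain(pre_percent: float, post_percent: float) -> float:
    """Normalized gain from class-average pre- and post-test scores (0-100)."""
    return (post_percent - pre_percent) / (100.0 - pre_percent)

# Illustrative class averages (not real data):
print(f"{normalized_gain(45, 58):.2f}")  # ~0.24, in the range Hake found for traditional lecture courses
print(f"{normalized_gain(45, 72):.2f}")  # ~0.49, in the range Hake found for interactive-engagement courses
```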

7. Don’t contradict yourself. Your words should match your actions.


8. Show feedback and testimonials from students. In particular, have students demonstrate the deeper understanding and expert thinking your product has enabled. Or perhaps your technology has decreased student anxiety and contributed to a positive classroom climate. However, don’t have students talk about shallow things such as raising grades and doing well on tests.

[Screenshot: MyEconLab student testimonials]

Testimonials from Pearson/Knewton’s MyEconLab

9. There’s nothing revolutionary about old wine in new bottles. A digital textbook is still a textbook. A video lecture is still a lecture.


10. Read everything Audrey Watters writes. Everything.

Do you have any more edtech PR tips to share? Any more examples of bad PR? Any good examples? Thanks!

Convincing Reluctant Teachers

This question was posted to Twitter today:

Question: how do you convince teachers who are ADAMANT that they teach to the rigor required by CCSS that they really don’t?

(CCSS means Common Core State Standards)

This is a great question. I think it applies to a wide range of situations. You can replace “CCSS” with the Next Generation Science Standards, the new AP Physics 1 and 2 courses, or any curriculum du jour. It all boils down to showing these teachers that traditional teaching methods do not lead students to a deeper understanding of the concepts.

Some folks may suggest showing the reluctant teachers sample test questions from the new assessments. I say stay far away from that. These teachers will likely look for tricks to game the assessments so students can be successful without the in-depth understanding these teachers think they are teaching.

My suggestion is to have the reluctant teachers administer a basic conceptual diagnostic test to their students. The questions are so basic, so easy, the teachers will say “Of course my students can ace this!”

And then wait for the results to come in.

In all likelihood, the students (on average) will do poorly. Amazingly poorly. Even worse than if they had simply guessed randomly.

To which the reluctant teacher responds, “What happened? They should have known all this!”

Now’s your chance. I think they’ll now be more receptive to what you have to say about how students learn math and science and why interactive-engagement techniques work.

***

Here’s Eric Mazur (Harvard physics professor) explaining what happened when he gave his students a conceptual diagnostic test:

(The video is an excerpt from Mazur’s longer “Confessions of a Converted Lecturer” talk.)

***

Extensive lists of concept inventories can be found at FLAG and NC State. Remember, many of these tests have been painstakingly developed and refined by researchers. Be sure to abide by the developers’ rules when administering the tests to students. You should not post them to the internet or discuss the answers with students.

My Google Reader Alternatives

Google Reader will be ending on July 1st. After searching through several apps and services, I’ve finally settled on a few alternatives I like.

First, I imported my Google Reader feeds into Feedly. This can be done in just one click if you allow Feedly access to your Google Reader account.

However, I hardly ever read posts in Feedly because:

    • I can’t star/favorite posts. (However, I can “bookmark for later.”)
    • I was having issues with the Feedly Android and iOS apps: my feeds wouldn’t sync, and posts I had read would reappear as unread. The apps also lack the option to star/favorite posts.
    • The Feedly Android and iOS apps do not allow for offline reading.
    • The Feedly apps didn’t show videos (not even thumbnails) embedded in posts. (Update: Feedly has since corrected this problem. Thanks, @Alby!)

Thankfully, there are iOS and Android apps that sync with Feedly and solve all the problems above. Here are my two favorites:

iOS: Newsify (free, no ads)

[Screenshot: Newsify on iOS]

Newsify also shows embedded YouTube videos:

[Screenshot: Newsify displaying an embedded YouTube video]

Android: gReader (free with ads, $4.99 no ads)

[Screenshots: gReader on Android]

gReader also shows embedded YouTube videos:

[Screenshot: gReader displaying an embedded YouTube video]

Plus, gReader lets you easily subscribe to new blogs via the share option in the browser:

[Screenshot: subscribing to a blog via gReader’s browser share option]

Most importantly, Newsify and gReader play well with each other and stay synced so I can easily read, star, and share posts from both my Android phone and iPad.

Be sure to move your feeds to Feedly before July 1st! Good luck!

Project Work: Group or Individual?

Thanks to Chija Bauer for prompting me to write this post:

For the last several years, I’ve allowed students to work together in groups on their end-of-year projects (self-designed lab investigations). The rationale was that students would be able to do much more complicated experimental designs with two, three, or four people than with just one. But in the end, I was never satisfied with how it worked out. Often the experiments were simple enough that they could have easily been carried out solo. Or two students actually did the project and then added the name of a non-contributing friend (or two) to the report.

One solution I’ve tried is to require individual reports. This usually ends up with group members submitting identical “individual” reports. Which leads to phone calls, discipline, cries of “I didn’t know we couldn’t do that.” etc., etc. It’s a battle I don’t enjoy fighting, so I don’t find this solution to be successful for me (though your mileage may vary).

This year, each student must do their own unique investigation. All students are now fully immersed in the experimental design process. Sure, some of the experiments require an extra pair of hands, but students have been enthusiastically helping each other out. Jack might be the cameraman for Jill’s terminal velocity experiment. And then Jill might release the cart at the top of the ramp for Jack’s conservation of energy experiment.

Some students have stated that if they work together to collect data, then they should both be able to analyze that data for their projects. My response to this is that they must have unique data sets. Take Jill’s terminal velocity experiment. She’s looking at the effect of mass on terminal velocity by dropping nested coffee filters. Jack is using a camera to film the falling filters so Jill can analyze the videos in LoggerPro. Now Jack is not allowed to use Jill’s data, but Jack could investigate the effect of surface area on terminal velocity or simply repeat Jill’s experiment using jumbo coffee filters or cupcake wrappers instead. And in the end, Jill and Jack can compare conclusions and come up with a mega-conclusion that ties together both experiments.

[Photo: nested coffee filters]

Sometimes, however, the project work must be done as a group because that’s the only feasible way. I had to do this in my Conceptual Physics class this year for our model defibrillator circuit project and our modified bike light generator project. I did not have enough equipment (or storage!) for each student to have their own circuit kit or bicycle.

Both of these projects came from the Physics That Works curriculum, and I used their solution to this problem of group project vs. individual work. The solution is that the project has two parts: a group component and an individual component. For example, for one project, each group had to modify a bike light generator so that the headlights would light even when the rider wasn’t pedaling, yet wouldn’t add more batteries to the landfill. For the group portion of the project, students worked in groups to design and build such a circuit for their group’s bicycle. And everyone in the group received the same grade for that part (25% of the overall project grade).

[Photo: bike light generator project]

For the individual portion, each person had to submit an annotated circuit diagram (25% of the project grade) and give a mini-presentation to the class (50% of the project grade). I’ve posted my rubrics below:

Even the way the mini-presentations are handled by the authors of Physics That Works is genius. Students are given several choices of topics for their mini-presentations, but the caveat is that, within a group, no two students can do the same mini-presentation, two of the presentations must come from the two required topics, and the rest must come from the elective topics. For example, these are the options for the bike light presentations:

[Image: bike light mini-presentation topic options]

Ideally, the mini-presentations would be tied together in one large presentation for the whole group, but each student would only be graded on their contribution.

[Photo: group presentation]

What are your solutions to the group project vs. individual work dilemma?

Labs, Notebooks, and Reports: For What Purpose?

Today was Senior Seminar: a day-long school event where seniors get breakfast, a BBQ lunch, and yearbooks, and attend workshops about upcoming college life. So all my seniors were out of class today, which gave me some time to reflect. I was thinking about how best to use lab notebooks and lab reports next year.

You see, this year in college-prep physics, students recorded lab work in spiral-bound graph-paper notebooks. They taped a rubric next to each lab. I collected their notebooks, lugged them around, marked their rubrics, and returned their notebooks. All 51 of them. For each lab. (I could have simply collected one notebook from each lab group, since the other notebooks in the group were usually identical — right down to the conclusion, awkward sentences and all.)

Ugh.

I’ve gone through various other incarnations of notebooks, reports, whiteboards, packets, etc. in my 15 years of teaching. My handwritten reflections on what to do next year are below. I think they capture the best of all those previous systems while still maintaining a reasonable workload.

[Photo: handwritten reflection notes]

  1. I stamp the lab notebooks during class as evidence that the student was present in lab and participating — brief design, measured data, calculations, and graphs. These are the things that will be identical from notebook to notebook anyway. I won’t be picky about proper format because I’d rather have them spend most of their time taking and analyzing data than worrying about the notebook looking picture-perfect. Also, students who are absent would be required to come in during a free period or after school to perform the lab. (I’ve never done that before. It could be overwhelming. But I also think letting a student simply copy the data from a partner sends the wrong message.)
  2. Students write a post-lab reflection. After we’ve had our post-lab class discussion to tease out the concepts, ideas, models, relationships, etc. from the lab, I’d ask students to summarize what they’ve learned, what questions they still have, and what they found to be (in)effective about the lab. I wouldn’t grade this either, but I think taking the time for solo sense-making and summarizing is important. This could be done on an exit ticket, in the notebook, or online.
  3. Students write a formal lab report. I think that effective communication of a scientific experiment is important. My failure this year was trying to do it simultaneously in the notebook. How to make a table and graph and put them into a report is an important skill. How to best represent the data is an important skill. How to make a scientific argument based on evidence is an important skill. But reading 50 lab reports about 6 times per quarter is awful. So I’m taking a cue from my freshman writing professor, who set up a rotating schedule in which just a few students submitted an essay each week, based on one of the books we had read. Doing it this way would leave me with fewer reports to read each week, allowing me to give more effective feedback. Plus, I’d have fewer copied reports, since just one student from each group would turn in the report. So if there are 3 students in each lab group (A, B, and C), then all the As would turn in a report one week, all the Bs the following week, and so on (a quick sketch of this rotation is below the list). Hopefully the schedule will allow for 2 write-ups per student each quarter in order to show growth.
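Here’s roughly what that rotation looks like, sketched in a few lines of Python. The names and the number of groups are made up; the point is just that each week one member per group owes a report, and over a six-week cycle everyone writes twice.

```python
# Sketch of the rotating report schedule: one member of each lab group submits
# a formal report each week, cycling through the A/B/C roles.
# Group rosters and the six-week window are made-up examples.
from itertools import cycle

groups = {
    "Group 1": ["Ava", "Ben", "Cam"],
    "Group 2": ["Dana", "Eli", "Fern"],
}

roles = cycle(range(3))   # 0 = A, 1 = B, 2 = C, then repeat
for week, role in zip(range(1, 7), roles):
    writers = [members[role] for members in groups.values()]
    print(f"Week {week}: reports due from {', '.join(writers)}")
```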

What’s your system for lab work?

An “I-hit-publish-too-early” update: Of course, none of this directly addresses what I feel is the most important issue with lab work: how to assess the scientific inquiry process. I’m reminded of AAPT’s Goals of the Introductory Physics Laboratory and Eugenia Etkina’s Scientific Abilities.

“Is it getting hot in here?”

The other day in class, we were having a discussion about stars and color and temperature. But since most of the kids were looking silently at their laps, I knew their interest was fading fast. (Which is surprising, since they voted by a landslide to study astronomy in the 4th quarter.)

So to get the kids’ attention, I got up on the teacher desk at the front of the room. Then I stood on my hands and farted fire. No, I didn’t merely light one on fire. I literally farted fire. (Lucky for me, I keep a change of clothes at school — just in case.)

And the biggest reaction I got was from a student who, without even looking up from his lap, said, “Is it getting hot in here?”

And then another said, “What’s with the sudden breeze? Can someone close the window?”

*sigh*

It seems that, at this time of year, any attempt at whole-class discussion is a recipe for failure. Any advice?

Learning Analytics

MOM: Billy! Billy, I got an email today from your computer-based math class. It’s your Learning Analytics Progress Report. Please come inside, dear.

BILLY: Uh oh.

MOM: Let’s see. It says here: you pick choice C too often; you spend more time working on even-numbered problems than odd ones; you watched 3 videos all the way through and rewound portions of 5 others. And last, it says your answer patterns most closely match those of women over age 25 who live in Canada and prefer One Direction over Justin Bieber.

BILLY: Does it say why I’m struggling with algebra?

MOM: (shrugs)


Also: The Soaring Promise of Big Data in Math Education by Dan Meyer