Michael Mayrath, Jody Clarke-Midura, et al. at Harvard
Money is being spent on STEM education, yet the U.S. remains 25th in STEM. Harvard has built virtual performance assessments (VPAs) as supplements to standardized tests, since standardized tests aren't legitimate measures of the scientific process. Builds on prior River City research from Chris Dede and Jody.
What is a VPA? Simulations of authentic, real-world ecosystems; more of a simulation than a game.
<insert demo here> <insert tech problems here>
Contextual feedback --> Zelda style: a button at the top right changes depending on the local environment.
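The mechanic above can be sketched as a lookup from the player's surroundings to a single button label. This is a minimal illustration, not the VPA's actual implementation; all names (`HINTS`, `contextual_hint`, the environment strings) are invented for the example.

```python
# Sketch of Zelda-style contextual feedback: one top-right button whose
# label is chosen by the player's current local environment.
# Environments and labels are hypothetical, not from the actual VPA.

HINTS = {
    "pond": "Sample the water",
    "lab": "Run a test on your sample",
    "village": "Interview a resident",
}

def contextual_hint(environment: str) -> str:
    """Return the button label for the player's local environment."""
    return HINTS.get(environment, "Explore")

print(contextual_hint("pond"))    # -> Sample the water
print(contextual_hint("desert"))  # -> Explore (unknown environment falls back)
```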
[same demo as at NARST so far]
Working with Bob Mislevy, Val Shute, and Cathy Kennedy, using evidence-centered design (ECD).
A table maps each KSA (the thing being assessed) to observable variables, which come from:
- Stage / Quests/ Tasks
- Quest Giver
- Action to Complete
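The ECD table above can be sketched as a nested mapping from each KSA to its observables and the task that elicits them. This is illustrative only; the KSA, observables, and task fields below are made up, not taken from the Harvard VPA.

```python
# Sketch of an ECD-style table: KSA -> observable variables + the
# stage/quest/task structure that produces the evidence.
# All entries are hypothetical examples, not real VPA content.

ecd_table = {
    "interprets water-quality data": {  # KSA being assessed
        "observables": ["samples collected", "correct inference stated"],
        "task": {
            "stage": "pond ecosystem",
            "quest_giver": "park ranger NPC",
            "action_to_complete": "collect and compare three water samples",
        },
    },
}

for ksa, spec in ecd_table.items():
    print(ksa, "->", spec["observables"])
```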
After a study, they found they needed to scaffold, needed performance palettes (students compare with other assessments, *not* GTA), and needed to consider graphics.
Angela Shelton: Effects of a Reading While Listening (RWL)... [in a Virtual Environment (VE)]
SAVE Science (SS) provides students opportunities to solve science problems in virtual environments. Angie's research adds an RWL component to SS.
<no audio in the room... stupid tech issues>
Theory: Universal Design for Learning; Situated Cognition
Exploratory case study: 31 students with RWL vs. 41 without; 4-point scoring rubric.
<insert screenshot of SAVE Science weather module> [same one as at NARST]
Preliminary findings: low scores in general; RWL students did score higher, but not significantly. Testing on more students this year.
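The "higher but not significant" finding boils down to a two-group comparison on rubric scores. A minimal sketch, using made-up scores (not the study's data) and a stdlib-only Welch's t-statistic:

```python
# Sketch of comparing RWL vs. non-RWL rubric scores (4-point rubric).
# The score lists are hypothetical placeholders, NOT the study's data.
from math import sqrt
from statistics import mean, variance

rwl    = [2, 3, 1, 2, 3, 2, 1, 3]  # hypothetical RWL group scores
no_rwl = [1, 2, 2, 1, 3, 1, 2, 2]  # hypothetical non-RWL group scores

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    se = sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(rwl, no_rwl)
# RWL mean is higher (2.125 vs. 1.75), but t is well below a typical
# critical value (~2.1 at these sample sizes), i.e. not significant.
print(f"t = {t:.2f}")
```

With larger samples (as planned for this year), the same mean difference could reach significance, since the standard error shrinks.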
Patrick Pettyjohn, Sasha Barab: Scaling Disruptive Technologies: the responsibility of the learning sciences
Learning scientist:
- what is <--> what could be
- observation <--> design
- understand <--> change
Making claims about the same thing you're altering. DBR (design-based research) --> design, theory, and problem all iterate.
Theory: Transformative Play (projection into role):
- recruited into partly fantastical problem
- must apply conceptual understandings
- transform context
- transform self
Repositions students, content, and context
Plague World for persuasive writing: a dark, scary world based on Frankenstein. Students choose a thesis, collect data, go grave-robbing, and experience consequences.
Intentionally positions the student as a reporter. Dialog trees give students different choices and reasons to back up those choices. [But doesn't this bias results? (e.g., GWB: great president or greatest president?) Or would a student just pick what seemed like the stronger argument, which is different from spontaneously coming up with their own? Ultimately, it's testing persuasive writing ability, so maybe this is moot.]
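A dialog tree of the kind described can be sketched as nodes whose choices are paired with backing reasons. The structure and text below are invented for illustration, not actual Plague World content.

```python
# Sketch of a dialog tree where each choice comes with a reason the
# student must commit to. Node names and text are hypothetical.

dialog_tree = {
    "start": {
        "prompt": "What is your thesis about the plague?",
        "choices": {
            "The plague is a punishment": "cite the town's rumors",
            "The plague has a natural cause": "cite the grave-site evidence",
        },
    },
}

def options(node):
    """List each choice at a node, paired with its backing reason."""
    return [f"{c} ({r})" for c, r in dialog_tree[node]["choices"].items()]

for line in options("start"):
    print(line)
```

Offering fixed choice/reason pairs is what raises the bias worry in the bracketed aside: the tree supplies candidate arguments rather than eliciting the student's own.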
The study showed that students who used Plague World had learning gains and were engaged, and they showed more agency in persuasive writing.
Matt Gaydos, Kurt Squire, Ben Devane
What are the possibilities for making assessments about what people know and do in a game? Make educational games as gamey as possible.
Game: Citizen Science (with Filament Games): investigate freshwater problems in Wisconsin. [Need to get this game (filamentgames.com)... looks totally awesome, with many different variables and robust evidence collection and argumentation processes.]
Can we correlate in-game behavior with civic science expertise? Do experts do better in this game than non-experts?
Study 1: Novices (high school students) blew through the game; experts had problems playing it. Therefore videogame literacy is really important if we're thinking about assessment within videogames. (Which genres? How much experience? etc.)
Study 2: high school classroom, 3-day curriculum. The teacher pre-played and came up with the curriculum on his own, to take ownership of the project. Pre- and post-interviews assessed previous gaming experience, etc. Students rated the game better than class, but not as good as other games. In class, students helped each other out a lot.
The more gamey the game, the more the youth liked it, and the more the experts got lost.