Monday, December 31, 2012

Where Test Insiders and Data Wonks Meet

Standardized Tests: Rotten at the Core

First, from test insider Todd Farley, who has worked for testing companies:
http://dianeravitch.net/2012/12/27/11990/#comment-75986
"there’s also the fundamental problem with getting twenty people to score a hundred thousand tests in any standardized way, especially in some two-week time frame. It just can’t happen. The students’ responses are too varied, unusual, unique."

Then, from the "data wonks" at a recent MIT conference, as reported by the New York Times:
http://www.nytimes.com/2012/12/30/technology/big-data-is-great-but-dont-forget-intuition.html?_r=0
"The problem is that a math model, like a metaphor, is a simplification. This type of modeling came out of the sciences, where the behavior of particles in a fluid, for example, is predictable according to the laws of physics.
In so many Big Data applications, a math model attaches a crisp number to human behavior, interests and preferences. The peril of that approach, as in finance, was the subject of a recent book by Emanuel Derman, a former quant at Goldman Sachs and now a professor at Columbia University. Its title is “Models. Behaving. Badly.”
*********************************************************************************
The other posts on this blog offer plenty of supporting information. But the two quotes above show how two distinct groups, working independently, arrived at the same basic fallacy underlying standardized assessment:
"Humans don't behave that way!"
