WHICHEVER way it goes, I hope the accountability question will also cover providing a reliable way of measuring improvement in student achievement, a concern I raised in this previous post.
I am part of an e-group that includes high-ranking officials from the DepEd and NEDA, as well as from the country's leading academic institutions. I raised much the same concern there more than a month ago. The only response came from a UNICEF representative, who said:
You may argue that the BEIS [Basic Education Information System] or the NAT [National Achievement Test] results have a lot of "noise" which makes them questionable and not credible. But I wonder which database is totally free of any such "noise." Even data supposedly gathered by graduate students themselves may not be totally free of such problems. But I think this is where people in the academe and those in the research field can work more closely with DepED so that there is pressure to improve the integrity of the data. Rather than treating the DepED database as something not to be touched with a 10-foot pole, and questioning its accuracy, we could make ourselves its main users and use the whole research exercise to help uncover the kinks in the system, if any, and to validate and help strengthen the database.

In reply, I said I could only agree, and proposed that we find ways of (a) estimating the size of that "noise"; (b) sifting the "noise" out from the valid and reliable data; and (c) ensuring that the "noise" is kept to a minimum in subsequent NATs.
Because unless we are able to remove or at least minimize the "noise," we will be saddled with bad data, and bad data will lead to bad decisions. And this Time article will certainly keep reading like déjà vu.