by laudablepuss 03/28/2013, 3:13pm PDT
I'm the researcher who presented the study, and I'm sorry that Metacritic appears to be upset. We were careful to stress that the data presented were *modeled*; that is, they were an attempt to explain observed data in the absence of full internal knowledge of the system. All models are more or less inaccurate, but if ours was "more," rather than "less," we'd love for Metacritic to tell us how so we can improve upon it. I should point out that our internal checking and validation (which we also presented at the talk) showed our model was accurate in most cases to within a few tenths of a point. That being said, we'd love to see the actual weights to use for comparison. We got into this mostly as an intellectual exercise in statistical modeling, and we'd be interested to see how close we got (or didn't, if that's the case).
Adams Greenwood-Ericksen, PhD
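For the curious, the kind of modeling the study describes can be sketched as a least-squares fit: given each game's individual outlet scores and its published aggregate score, solve for the per-outlet weights that best reproduce the observed aggregates. This is only an illustrative sketch of the general technique, not the study's actual method; the outlets, scores, and weights below are all made up.

```python
import numpy as np

# Hypothetical data: rows are games, columns are review outlets.
# Each entry is that outlet's score for that game (0-100 scale).
scores = np.array([
    [90, 85, 88],
    [70, 75, 72],
    [95, 92, 96],
    [60, 65, 58],
    [80, 78, 84],
], dtype=float)

# Observed aggregate scores, generated here from "secret" weights so
# the recovery below has a known answer to check against.
secret_weights = np.array([0.5, 0.3, 0.2])
metascores = scores @ secret_weights

# Recover the weights by ordinary least squares: find w minimizing
# ||scores @ w - metascores||^2.
weights, *_ = np.linalg.lstsq(scores, metascores, rcond=None)

# Normalize so the recovered weights sum to 1, as mixing weights should.
weights = weights / weights.sum()
print(np.round(weights, 3))  # recovers [0.5, 0.3, 0.2]
```

With real data the fit would not be exact (aggregators round, weights drift over time, and not every outlet reviews every game), which is why the validation the researcher mentions reports accuracy "to within a few tenths of a point" rather than an exact match.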
No testing = "our internal checking". I want a mail-in PhD too.
a guy in the comments wrote:
As a student of Full Sail I can honestly say I'm not that surprised at the inaccuracy. I remember finding the questions (word for word) from a statistics test on Google; they had been taken from another university's program. There are also classes that consist entirely of making students watch Buzz 3D tutorial videos (which are available for free) and then turning in the results of following along with the videos as the final assignment. That's right, they are charging students thousands of dollars to go watch freely available YouTube videos.
Of course, I have no idea which particular mail-in program Dr. Greenwood-Ericksen took. Just the one he works at.