10 Comments
SteveWright:

Thanks for this absorbing read.

I've set this as a reading on a critical literature review course for clinical education - partly as a fantastic example of a critical review, and for setting out the stages of that. However, I have also included it as a discussion paper to explore not only the challenge of bad science (where your forensic dismantling of fake references and AI text is exemplary) but also to consider the limits of the criticality of research within this.

I'd argue that some of your final five points are problematic; to me those are numbers 1, 2, 3 and 5. I'd further suggest that these denote a *very* narrow view of what educational research can, could and should be. Prioritising only what can be (often spuriously) measured seems to reduce this to an instrumental view of what education is and of what appropriate educational research is, i.e. only test scores count as legitimate educational achievement or as a focus for research. Yet that stands at odds with point 4.

As such it not only dismisses (in a very pejorative, albeit humorous, way) entire classes of research, but also seems to miss that in a quasi-experimental study there are so many confounding variables that asserting the experimental variable is the "cause" is to ignore all those rich, complex social elements that aren't so easily measured. For example, in a study that gives one group ChatGPT while the control group goes without, what else comes with that "intervention" or variable? E.g. additional attention, training, membership of the "we're using the shiny thing" group, and the motivation that comes from that. What are the students doing with/without it? Answering that would require observation and qualitative insights into experience... however, by the logic here, any insights or findings from that sort of study would be dismissed as obvious / flogging a dead horse / not (on a very narrow view of) science-y enough.

On "obviousness" that's oftena spurious / straman argument - would SDTRONGLY recommend a read of:

Gage, N. L. (1991). The Obviousness of Social and Educational Research Results. Educational Researcher, 20(1), 10–16. https://doi.org/10.3102/0013189X020001010

The issues about AI also seem to stem from "AI" being fundamentally a marketing/buzzword claim, as endemic in science as it is in tech hype.

Cf. pages 5-8 of "The AI Con" by Emily Bender and Alex Hanna; preview at https://www.penguin.co.uk/books/468070/the-ai-con-by-hanna-emily-m-bender-and-alex/9781847928610

Bender and Hanna do a great job with a taxonomy of what gets called AI, which is broader than the subset of large language models, or, within that, the example of ChatGPT.

More broadly, given the comments in response as well, it feels like a lot of work has been forgotten: work that explored and deconstructed previous calls for what counts as evidence in educational research to inform evidence-based policy making.

That's collected together in:

Hammersley, M. (2007). Educational research and evidence-based practice. SAGE Publications.

So to address: "WTF is everybody doing? Is everybody pretending to do science all day long? What's the deal? I do not understand."

Here are two key debates that explored that in SERIOUS detail. If you do want to understand (not just decry), there's a lot to learn here :-)

Elliott, J. (2001). Making Evidence-based Practice Educational. British Educational Research Journal, 27(5), 555–574. https://doi.org/10.1080/01411920120095735

Oakley, A. (2001). Making Evidence-based Practice Educational: A rejoinder to John Elliott. British Educational Research Journal, 27(5), 575–576. https://doi.org/10.1080/01411920120095744

and the more involved, policy-focussed discussion:

Hargreaves, D. (1996). Teaching as a research-based profession: possibilities and prospects. Teacher Training Agency Annual Lecture. London: Teacher Training Agency.

Hammersley, M. (1997). Educational research and teaching: a response to David Hargreaves's TTA Lecture. British Educational Research Journal, 23, 141–161.

Hargreaves, D. (1997). In defence of research for evidence-based teaching: a rejoinder to Martyn Hammersley. British Educational Research Journal, 23, 405–419.

Hargreaves, D. (1999). Revitalizing educational research: lessons from the past and proposals for the future. Cambridge Journal of Education, 29, 239–249.

wess trabelsi:

Thanks for the homework and the thoughtful reply. I agree that my conclusion may have come off as narrow, likely a reflection of my limited background in formal research. Since publishing the article, others have also pointed out that quantitative, controlled studies aren't the end-all-be-all in education, and I've been hearing more about alternative approaches that I find really interesting. I've also been made aware of two papers I'd like to read but haven't gotten access to yet: "What Is Research, Anyway?" by Douglas B. Reeves, and "So You Want to Get Serious About Evidence-Based Practice?" by Frederick M. Hess. I'll add your links to my MUST READ list. I do want to educate myself more on research. I'm working on an idea that partnerships between secondary educators and researchers might yield more interesting results, because something else most studies have in common is that they're not making judicious use of genAI in their instructional design, which often makes their results predictable (see my article on the MIT paper). I would love to test this intuition by collaborating with researchers to design a study, or several, together. Next school year I'll be part of a sort of field research initiative, albeit an informal one; that's all I have to go with for now. Thanks again!

Mike Trucano:

Thanks so much for sharing this analysis, Wess. Helping people who want to be 'evidence-based' in their decisions better understand the evidence -- and the 'evidence' -- is often not a very glamorous task. When you do this, you don't get sexy publication credits or citations, but you do get admiration from many people who really care about separating fact from fiction -- and how to tell the difference. This is messy stuff; thanks for taking the time to sweep your broom in public like this!

wess trabelsi:

Thx Mike. While doing it I was indeed perplexed that none of the few lit reviews I included called any study out.

Benjamin Riley:

This is very well done and accords with what I'm seeing too (see link below). Thanks for doing the hard work of looking into all this shoddy research.

https://buildcognitiveresonance.substack.com/p/something-rotten-in-ai-research

wess trabelsi:

Right back at you!

Alan Davison:

Thank you Wess! It has taken me until now to totally absorb your work! Excellent helpful job!

Rob Nelson:

Bless you for doing this work so I don't have to. You just provided my first stop when people push back on my claims that "evidence-based" approaches to AI in education are mostly bullshit, at least so far. Such claims only make sense if the evidence is based in research methods that inquire after truth. That's just not happening much, and you just provided some evidence. Thanks!

wess trabelsi:

What one can notice from the table is that most "researchers" were very laconic about how exactly AI was used, or about any training provided. I suspect not much happened there, which leaves the door open to experiment with careful and purposeful integration of LLMs in instruction.

Cathy:

Bad science is everywhere, used as "PROOF" of whatever agenda is currently popular. Most studies are biased and lean heavily toward the outcomes that funding bodies want to find, and these studies are used to justify Education Department policies and directives. The bottom line is that, like everything, AI used well is great and used poorly is crap. The TRUTH is, it is out there: it isn't a matter of whether we should use it in EDUCATION, it is already being used in EDUCATION. It is already used every day by everyone, in every social media app, in cars, in most fridges, etc. It IS OUT THERE. So you better damn well learn to use it effectively, and if EDUCATION doesn't prepare kids to use it, then EDUCATORS are failing.

Check out my podcast on this topic: DEEP DIVE Podcast - AI for GOOD or EVIL.

https://youtu.be/-OlwnCwmiHg?si=AX-cTnKY8lapJZv8
