Some Reflections on AI and Higher Education, Part 1

There is a fresh round of AI cheating hysteria following a recent NY Magazine article, along with another round of articles reporting on professors using AI to give feedback on assignments, which has sparked outrage from a few students. I think articles like these are not helpful at all in thinking through the current moment in higher education and the problems that AI chatbots pose for our work. Rather, all they do is sensationalize the issues and, maybe worse, contribute to an arms race pitting students against faculty that ultimately benefits no one except the companies who make this technology and then work to push it into the project of higher education - not because they genuinely think these technologies are useful, but because it benefits their bottom line. The news media that eats stories like these up, over and over again decrying AI as putting an ‘end’ to education as we know it, is itself playing a role in the destruction not only of the vital trust relationship between students and professors that is necessary for teaching and learning, but also of the larger project of education it seems so worried about.

I do get it: the creep of LLMs into higher ed needs considering, and these technologies can certainly be used to undermine some methods that have been core to higher ed, like the classroom essay. This creep has made me change some things in my classrooms, and we know that certain processes, like drafting and peer review, can be protective against AI displacing students' intellectual labor. But I also think - and I will keep beating this drum - that if we talk to our students about it (which we should! They need to understand these technologies, and so do we), we find out that they recognize AI slop faster than we do. And they mostly hate it as much as we do - hence the shock of the students in the recent article about the professor who is using it. Student cheating has always been around, and it always will be for some students. I do not see that fundamentally changing because of any of this.

This is not to say that we should ignore the ways these technologies make it easier (in some ways) for students to do less intellectual labor, especially in the humanities and other spaces in higher ed (see the point above about methods that may be protective). But we should be clear about what is and is not going on here.

The problem is larger than the technology. That technology, and the companies that created it and market it to students and our institutions, capitalizes on a decades-long devaluing of higher ed, and especially of humanities work - a devaluation that involves much more than technological advancements like LLMs. As those of us in higher ed know, that devaluing has happened at many levels. Think of the long drumbeat of claims that higher ed should be about vocational training: since the humanistic disciplines do not create ‘value’ or ‘return on investment’ for students the way the STEM disciplines do, the humanities are made out to be useless, or add-ons, or part of basic requirements that give little more than lip service to the concept of a well-rounded liberal education. This devaluing and defunding is both internal and external to the university system. The humanities have been underfunded and adjunctified, and administrators and university boards of trustees - the same folks who are now cheerleading the “AI revolution” - are part and parcel of this process. All of this makes it hard in many corners of the academy to employ pedagogies that might be protective against AI encroachment: if you are teaching at multiple institutions to make ends meet and have large class sizes, the idea that you can do the kind of educative work mentioned above is a joke.

The message to our students (and to many of us, frankly) from all of this is loud and clear: yes, you may have to take a few courses that require you to think humanistically and do some writing, but the university does not really care about those - and the larger society thinks they are valueless too, so you should not care much about those classes either. And look! We now have a way for you to automate your work in those classes. University administrators seem to be doing very little to counter any of this. These are the conditions we find ourselves in.

There has long been blood in the water and AI companies and AI-captured university administrators smell it.

AI companies are shoving the technology into universities and other academic spaces as one of many would-be guaranteed revenue streams, because they need all the revenue they can get: they keep burning through money, and their products keep getting worse in ways that even the companies themselves do not understand.

This is what we should fight tooth and nail: the companies that are doing this and the administrators who are buying into it. And we should do this, as far as possible, not by seeing our students as the enemy, as many in the media fantasize that we do (or that we should), but by enlisting them in this fight - teaching them about AI, not how to use it, but what it means for society, for the conditions of their education, and for the ways that AI technologies are simply part of the larger process of devaluing not only education but students themselves as learners, turning them into mere consumers of that education rather than the whole persons that they are and that we want them to be.

Maybe AI technologies are useful in some small subset of worklife spaces - I am not qualified to say much about that - but they are not universally useful. And they certainly do nothing useful for teaching and learning broadly.

But we should not blame our students. They are not the real problem here and we cannot let the forces trying to push us to do that win.
