TLDR: Most of her students have delegated doing their homework to ChatGPT.
Well I agree with her: fuck that. She makes a good point that writing is not some busywork consisting of transcribing thoughts; it is thinking. I can certainly understand the frustration of correcting LLM slop for days. If the student can’t be bothered to write it, why would the teacher be bothered with correcting it? Just ask ChatGPT to correct “your” homework and put “AI Prompter” on your resume. Apparently it pays really well.
…OK, I’m not being nice; I’ll step back a little. The article touches on an interesting concept:
Using ChatGPT to complete assignments is like bringing a forklift into the weight room
Why are students bringing the forklift into the weight room though? Is it because they don’t give a shit? Or are the stakes too high and they don’t trust their own abilities? Do they have the time to even try between their work shifts?
Is it because they don’t give a shit? Or are the stakes too high and they don’t trust their own abilities? Do they have the time to even try between their work shifts?
Likely a mix of all these factors and more. I think the author fails to critically examine how much skill is necessary for the average person, and sets a bar of mastery which many of her students are clearly uninterested in clearing.
While I don’t say this as a criticism of the author, it is worth pointing out that she has also failed to adapt to the new technologies. She talks about how teachers will need to adapt to the new tools, but ultimately places the blame on the students rather than reconsidering who her audience is. I’m guessing these are not individuals honestly pursuing a career in writing; those individuals would likely be much more engaged with the subject and willing to grow their skills (unless it’s purely a means to an end: the acquisition of any degree). A tool which obscures stylistic choices may be “good enough” for these individuals, and accommodating its use effectively would require a shift in teaching style, one that gets them asking questions of the output. She recognizes this, but rather than questioning her teaching style, she writes it off as a failure of the students’ ability to withstand the ‘temporary discomfort of not knowing’.
While I don’t say this as a criticism of the author, it is worth pointing out that she’s also failed to adapt to the new technologies. She talks about how teachers will need to adapt to the new tools but ultimately places the blame on the students rather than reconsidering who her audience is.
How would you propose adapting to this? Do you believe it’s the teacher’s responsibility to enact this change rather than (for example) a principal or board of directors?
The average teacher does not have the luxury of choosing their audience. Ideally you’d only teach students who want to learn, but in reality teachers are given a class of students and ordered to teach them. If enough students fail their exams, or if the teacher gives up on the ones who don’t care, the teacher is assumed to be at fault and gets fired.
You can theoretically change your exams so that chatbot-dependent students will fail, or lower your bar because chatbots are “good enough” for everyday life. But thanks to standardized testing, most teachers do not have the power to change their success metrics in either direction.
This article is about PhD students coasting through their technical writing courses using chatbots. This is an environment where the product (writing a paper) is secondary to the process (critical analysis), so being able to use a chatbot is missing the point. Even if it weren’t, cancelling your technical writing class to replace it with an AI-wrangling class is not a curriculum modification but an abdication. Doing that can get your program canceled, and could even get a tenured professor fired.
The author was really stuck between a rock and a hard place. Re-evaluating the systemic circumstances that incentivize cheating is crucially important – on that we absolutely agree – but it’s a responsibility that should be directed at those with actual power over that system.
How would you propose adapting to this? Do you believe it’s the teacher’s responsibility to enact this change rather than (for example) a principal or board of directors?
To be clear, I’m not blaming anyone here. I think it’s a tough problem and frankly, I’m not a professional educator. I don’t think it’s the teacher’s responsibility and I don’t blame them for a second for deciding that nah, this isn’t worth my time.
This article is about PhD students coasting through their technical writing courses using chatbots. This is an environment/application where the product (writing a paper) is secondary to the process (critical analysis), so being able to use a chatbot is missing the point.
Completely agreed here. I would have just failed the students for cheating if it were me. But to be clear, I was talking more in the abstract, since the article is written more about the conundrum and the pattern than about a solution. The author decided to quit rather than tackle the problem, and I was interested in hearing them follow that thread a bit further, as they’re the real expert here.