
When Experts Attack!


AI is an elephant in the classroom (transcript)

Dec 4, 2023

BRENDAN LYNCH, HOST: We live in a time where nothing is true. An era where reality and hoax look the same on the internet. Whoa, wait a second. There are people who actually know what they're talking about. Dangerous people. We call them experts. We're giving these experts a megaphone to drop some truth bombs, if you can handle the truth. I'm Brendan Lynch, and I'm the host of “When Experts Attack!” For Kathryn Conrad, artificial intelligence is the elephant in the classroom, one that can no longer be safely ignored. It's better, she believes, to try to establish some parameters for its use. That's why, just before the school year started, the University of Kansas English professor published a blueprint for an AI Bill of Rights in education in the new scholarly journal Critical AI. For Conrad, it's an attempt to establish guideposts on a wild and woolly frontier. While her scholarly expertise in modernism lately centers on 19th-century Irish writers like Oscar Wilde, she tells “When Experts Attack!” correspondent Rick Hellman that her AI proposal comes from months of study and discussion of the challenges posed by the disruptive technology.

 

THEME MUSIC

 

RICK HELLMAN, CORRESPONDENT: You noted in your article that we haven't even reached the one-year anniversary of the day a private company called OpenAI introduced ChatGPT and thereby kicked off a national debate about so-called generative AI. And we were just discussing that futurists have already raced ahead to imagine artificial intelligence as sentient robots and the like, which we've already seen in sci-fi movies. And yet, it's real life. It's creeping into every facet of our lives today, and a lot of people don't know what it is, what it means, what it implies. Let's talk about some of the major terms and entities that are out there in the field of artificial intelligence, and hopefully people can see their relevance to the field of education, where your expertise lies. For instance, what is a large language model like ChatGPT? Is it a chat bot? What can it do, and what can't it do?

 

KATHRYN CONRAD: That's a great question. I love that you started with the robots, because of course that's our first thought when we hear AI. There's HAL and there's Skynet. We've seen a lot of alarmist language around AI, especially since last year. I think it's important to note there's debate about whether what we have right now — things like ChatGPT, large language models and other kinds of generative AI — actually has anything to do with those fantasies of a sentient robot future. I would say that OpenAI — and I can say this because I've read their self-descriptions in more detail than Anthropic's and Google's Bard and some of the other models — is hoping for something like AGI. That's artificial general intelligence. It's not clear what the relationship is between that dreamed-up future and what we have right now. You asked what ChatGPT and large language models are. They are text-generating models. Generative AI includes other kinds of generation models, but to your question: yes, it is a chat bot. That's important to know, and I'll try to remember to come back to that issue. What they are are text-generating models that use what's called a transformer architecture. That's a kind of processing algorithm that considers the context and weighs the elements in the sequence of tokens, or words, in order to create a natural-sounding answer. Chat bots have actually been around for a long time. I was teaching with some chat bots back in 2015, just to talk about how we interact with technology. Around 2017, there was a real shift. This transformer architecture really made the difference between something that was kind of quirky and something much more natural, with the kind of impressive outputs we get from something like ChatGPT or Bard or Claude. Chat bots are an interface; that's how we interact with the architecture. If you've ever messed around with ChatGPT, experimented with it, you'll see that it often refers to itself as “I.” It talks about understanding. Sometimes it will use language that makes it sound like an intelligence. That's a deliberate choice. It's a deliberate choice to have us engage with something that seems like a personality. They didn't have to use that kind of model. Chat bots are an interesting choice for companies that have artificial general intelligence as their arc, what they hope to get to. If you want to pretend that you're interacting with Jarvis or with Data from “Star Trek,” you can, because it presents itself that way. But there are other ways you could use artificial intelligence that wouldn't do that. It's a deliberate choice to create something that produces what's known as the Eliza effect. That's when you interact with a computer and you just sort of assume an intelligence; you interact with it as if it were human.
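
To make the "weighs the elements in the sequence of tokens" idea a bit more concrete, here is a minimal sketch (not from the episode) of scaled dot-product attention, the core operation in a transformer. It is written in Python with NumPy; the toy token list, the tiny embedding size and the random projection matrices are made-up stand-ins, for illustration only.

```python
# Illustrative sketch of scaled dot-product attention, the core of the
# transformer architecture described above: each token "looks at" every
# other token and weighs its relevance before producing an output.
# Tokens, embedding size and weights here are toy values.
import numpy as np

rng = np.random.default_rng(0)
tokens = ["the", "cat", "sat"]         # a toy sequence of tokens
d = 8                                   # toy embedding dimension
X = rng.normal(size=(len(tokens), d))   # pretend token embeddings

# Learned projection matrices (random stand-ins here) map embeddings
# to queries, keys and values.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

# Each token's query is scored against every token's key; softmax turns
# those scores into weights, i.e. how much each piece of context matters.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

# Each token's output is a weighted mix of all the value vectors.
output = weights @ V
print(np.round(weights, 2))  # each row sums to 1: that token's attention over the sequence
```

Real models stack many such layers with weights learned from huge datasets; the sketch only shows the weighting step that makes the outputs sound context-aware.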

 

HELLMAN: But they want us to think that it's a sci-fi robot.

 

CONRAD: I would just say, if you go to image-generating models and enter “artificial intelligence” into something like Midjourney or DALL-E 2, it's pretty likely you're going to get a human face, usually female, with some wires coming out of its head. That's definitely part of the image. It's part of the cultural imagination that they were trained on. They're trained on large datasets. That's a really important other part that we'll probably get to. They're trained on large datasets, and then they're tweaked. They're trained and tweaked by human data workers. There are definitely humans behind the veil throughout the process. That's important to remember, even though when you're interacting with it there's not a person right behind the screen fixing things. They do fix outputs, for sure, and guardrail and reinforce and direct.

 

HELLMAN: Interesting. Well, I think another thing that people think about when they think about artificial intelligence these days — and certainly in the field of education — is its ability to plagiarize and cheat. This gets to some of the ethical issues with AI that we'll be getting to later. You write that there is some truth to the media's obsessive focus on plagiarism or cheating, namely the ease with which students can generate ostensibly passable work on a range of assignments. You say teachers have already been forced to adapt to this. How so?

 

CONRAD: Pretty quickly it was clear that this would have potential impacts on what students might produce. As teachers, we were used to adapting. We adapted to the internet. We adapted to Wikipedia. We adapted to the pandemic most recently. We're used to pivots. But this was dropped in our laps, fully formed, without any consideration for its impact on education. Teachers have been figuring out whether and how to work with it, how to change policies, how to change assessments, so that we ultimately get what we want. If there are any students listening: what we're looking for when we give you assignments is not so much the right answer as an opportunity for you to learn. If you're giving me something that was generated by ChatGPT, then I need to reconsider that assignment. I don't want a perfect paper. I want you to learn how to write a paper. I don't want you to give me perfect code. I want you to learn how to code. One of the things teachers have had to deal with, from K-12 through graduate school, is how to create assessments that allow us to give students the kinds of competencies, as well as content, that we want them to have. That's how we've been adapting.

 

HELLMAN:  What are you seeing in the college classroom today in regard to the use of AI, should we say chat bots, by students?

 

CONRAD: That's a good question. I mean, I've talked to a lot of students over the last 10 months. Certainly there are some teachers who are working with it. There are some teachers who've said students shouldn't use it in their classes. And I think a lot of teachers haven't said anything yet, because we're still trying to figure out whether there are ethical uses of it and how we might use these tools in the classroom. What I do sense — again, from students both in and outside my own classrooms and from talking to other educators — is that students are sort of at sea. They don't necessarily know, if there's no policy, whether that means they can use it or they can't. That's part of the reason I think it's really important to help people understand, to give them opportunities to think about policy and about principles for policy. That helps to protect students as well as teachers.

 

HELLMAN: Well, that's why you wrote this article about a Bill of Rights for AI in education, right? Let's talk about some of the things that you raised in the article. You start by saying AI entails a host of ethical problems, from the scraping of data to amplifying stereotypes and bias to surveillance. Can you talk about some of those basic, root-cause issues? And then we'll get on to your Bill of Rights itself.

 

CONRAD: There are a whole lot of ethical issues, and if people are interested in following the conversations about it on social media, the hashtag is usually #AIethics. One of the main issues is that the datasets used to train these models, whether they're visual media generators or textual ones, include data that was not specifically consented to by the creators. When I say data, I'd like to remind people that data doesn't just mean your Social Security number or your medical records. It means poems. It means pictures you may have posted on Instagram or on an art website that you hold the copyright over. Because they're publicly available, they were often scraped. When we talk about scraping, that's what we're talking about: taking that material as data to train on. And the implications for artists are quite profound. Several models have been shown to reproduce artwork that's very close to the original, even with the signature, sort of garbled, in the corner. That's part of the ethical question: Is that fair use? Most artists, I would argue, at least from my research over the last year or so, are not consenting to that. They feel that they should be remunerated for that training data. So that's one of the ethical questions: Is there consent?

 

HELLMAN: There's an analogous issue with regard to written work, is there not?

 

CONRAD: Absolutely. It's been clear to a lot of us who've been experimenting with it for the last several months that you can get responses that include copyrighted text, for sure. But now there have been people who've done deeper probing that makes it really clear that some very large datasets of pirated works have been used to train these models, which is kind of significant.

 

HELLMAN: Not a little bit creepy.

 

CONRAD: Yeah, for sure. And so you've asked about a couple of other ones: bias. It's important to recognize that the dataset is what's available on the web, what's available in big data collections. On the one hand, it's a whole lot of data that's been scraped. On the other hand, it's limited to what can be on the internet. There are definitely places where there are data gaps, and that reinforces a worldview based on what's available on the internet. That's one thing that reinforces bias. The other is the people who train the models to make sure that they're aligned. That's a very charged word, “alignment,” so I'm not going to use it anymore. But I will say the models are tuned so that the outputs look like what the people asking the questions want on the other end. There are lots of embedded biases, and there are a lot of people who have written about algorithmic bias in really important ways. I'll just mention a couple of names if people want to have some fun reading over the holidays. Safiya Noble's “Algorithms of Oppression” is one. Cathy O'Neil's “Weapons of Math Destruction.” You heard that — math. I love that one. It's a good dad joke. And Joy Buolamwini, who is the founder of the Algorithmic Justice League. She has a book coming out called “Unmasking AI” at the end of this month. Joy, for instance, talks about how facial recognition software was trained primarily on white faces. When she was a grad student, she was working on some of this software, and when she was testing it, she put her own face in front of it on her screen. It wouldn't read her face. As a Black woman, she was like, “Huh, I wonder what's going on here.” So she brought in one of her roommates and put her roommate in front of it, and it read her face fine. She found that it would only read her own face when she put on a white Halloween mask, and she was like, “Yeah, this is a problem.” These systems are used for policing. They're used for identification. And here I'm also gesturing toward AI beyond generative AI. AI is a big tent, and it's involved in everything from generating marketing copy to surveillant policing. It's important to consider these all as part of a larger network of technologies.

 

HELLMAN: Next, I wanted to go to the meat of your article itself — your proposed bill of rights for education. And you say that you were impressed by, and modeled it after, the blueprint for an AI Bill of Rights that the Biden administration's Office of Science and Technology Policy released in 2022. Can you talk about that and some of the main points that you think teachers and students need to be aware of or empowered to apply to their lives?

 

CONRAD: Absolutely. I started talking to people on social media about what our responsibilities were as educators, trying to figure out a way to start a conversation that we could all be part of, and I curated an AI feed. Now everything on my social media seems to be about AI. Somebody had mentioned earlier in the spring this “Blueprint for an AI Bill of Rights” — that's what it's called — that the White House had put forward last year. I was like, “Oh, that's great. Let me look at that.” So I'll just read the main points of it. There's more detail if you go to the website — it's still online — and you can read some of their examples and a fuller description. I think it's important for people to know that this is out there. The first principle is safe and effective systems: You should be protected from unsafe or ineffective systems — probably infected systems, too. Then algorithmic discrimination protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way. That gets to some of the things we were just talking about. Data privacy: You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used. Notice and explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. And human alternatives, consideration and fallback: You should be able to opt out where appropriate and have access to a person who can quickly consider and remedy problems you encounter. Those are great. I loved those, and then I clicked through, and there are all of these caveats on a second page, hidden, saying, “You know, this isn't law. We may not have to use these when national security is at stake,” and so forth. Ultimately, it is a blueprint. It's guidelines, and they're great principles, and I think the Office of Science and Technology Policy that came up with it was really committed to these principles. But I was disappointed to learn, for instance, that Congress had already paid for ChatGPT licenses and that the White House had invited a lot of the big tech giants in to talk about how to regulate themselves, which is, I think, what we call regulatory capture, and I was a little disappointed about that. But I still think the principles are a great place to start. So that was a good starting point and a framework, and that's where the title of my “Blueprint for an AI Bill of Rights for Education” comes from. Although I don't love it, because it sounds like we're giving AI rights, and that's a whole different…

 

HELLMAN: Let's talk about how you divided it into rights for educators and rights for students.

 

CONRAD: First I wanted to talk about our role as teachers and protecting that role. One of the things that I wanted to protect was our own input on purchasing and implementation. I will say that when I first got to KU — and this was back before Blackboard, back before other learning management systems — we were consulted. Faculty were consulted on which systems might make more sense, and I'd like to get back to a more active sort of consultation with faculty around ed tech. Maybe it's just that I'm not the one being consulted, but I do think it's important, since our responsibility as educators is over curriculum. I mean, that's part of what faculty governance is for. It's really important to have domain experts who are able to say, “Hey, this technology is or is not appropriate for use.” And I will say that while KU has tended to be open about this, I do know a lot of K-12 educators are frustrated, because they don't necessarily have that input. That's what I was trying to build in.

 

HELLMAN: There's not a lot of guidance from above yet. Is that what you're saying?

 

CONRAD: There's not a lot of guidance, or the guidance is sort of like, “Stop. Don't use it at all.” This is in K-12. I would say I don't really want to see a blanket policy anywhere. I want blanket guidance and protections. That's what I would say. It's really important, and it's way past time to discuss it. As I say, educators have been pivoting. That's what we do. But is that good? Educators — whether they're eighth grade teachers or college and university faculty — have so many responsibilities and so many expectations placed on them. Building in some time and space and room at the table to have these conversations is important. That's what the next two things in my list are about: input on policy, and also professional development, which at KU has been great. We're talking in October, and just last week the IDRH held its Digital Jumpstart workshops, where I spent an hour and a half talking to people about critical AI literacy in the classroom. I think KU has had maybe seven such sessions that I've been involved in, and I know there's more conversation coming. There's certainly interest, and there's certainly room, but it's also about incentivizing this for people who are busy and have so many things on their plate: giving us opportunities to talk to each other, but also making it valuable when you only have so many waking hours. You have to figure out how to get people who are definitely plenty busy to consider these issues.

 

HELLMAN: It's jumped up on the priority scale, has it not?

 

CONRAD: I would say yeah, for sure. The other thing: I think people are frustrated because AI developments are changing all the time. But I think we're at a place now — this fall especially — where we have some sense of its impacts. We have people who are trying to do research on it, so we do have a little bit more information than we had, say, last spring. It's sort of like the difference between the spring of the pandemic and the fall of the pandemic. In the spring we were just trying to put Band-Aids over the situation; now we can actually build more actively, and there are plenty of great people out there trying to do that. I'm thinking of my colleague Anna Mills in California, Maha Bali in Egypt, lots of other folks who are having conversations about this nationally and internationally and talking to each other. We've got some things to build on. The goal is to build something, across various fields, that will be useful for students for sure.

 

HELLMAN: Have we covered all the Bill of Rights for educators?

 

CONRAD: The last one is autonomy. Ultimately, I may have a very different opinion about how to use AI in my classroom, from class to class even, than somebody else in another field. I think as long as we're protecting student rights — and that's big — autonomy is important and, I'd say, inherent. You should be able to decide whether and how to use automated or generative systems in your courses. But that comes with what we're moving on to next, which is student rights and the real importance of protecting those rights if we're going to use these systems in our classrooms.

 

HELLMAN: Students are wanting direction, it sounds like.

 

CONRAD: That's my sense, and that's the first right I list: You should be able to expect clear guidance from your instructor on whether and how automated and/or generative systems are being used in any of your work for a course. And I don't just say AI; I don't just say ChatGPT. One of the things I think is really important is that people recognize the difference between a specific model, like ChatGPT, and a class of things, like LLMs or visual generators, or even AI. AI is a big term, as we've just suggested. At some level, Siri is AI. At some level, spellcheck is a kind of AI, right? You want to be really clear about that so students feel confident. Part of this is giving them the confidence to learn and build skills. Clear delineations (not punitive ones, just explanatory ones) are really important. Students need to be able to ask. I sort of thought it was obvious that students could ask if there was no guidance provided, but I've gotten a sense again over the last year, and have had specific students say, “Listen, I don't feel comfortable asking, because if I say something…”

 

HELLMAN: Why? They're afraid their reputation will be impugned by the very notion that they might use AI to write something?

 

CONRAD: They look to us. They feel vulnerable, especially if they care or if they're in a precarious position. This is really important to remember: There are going to be students who are, say, in college on a scholarship that they're very concerned about losing, and they wouldn't be there otherwise. There are students across all the different levels who are non-native speakers of English, for whom AI detection software actually performs really poorly and is much more likely to accuse them falsely of using AI. There's a lot of anxiety. And students can't even agree about whether no policy means they can use it or that they can't. Or what can they use? Take Grammarly, for instance —  a very popular system I used to encourage students to use, but now it has a plugin or a component that is generative, which ticks me off to no end, because I really would just like to be able to keep recommending it. I liked Grammarly until this, because it really confuses things. So I'm trying to be very clear in my syllabi, and in my policies I actively encourage students to ask, tell them that it's OK to ask if they don't know, and tell them that I will not assume the worst. I think we need to do that now, especially at this early stage when things are still so up in the air, to really make sure students know that they can ask, and to make that really clear. Don't just assume that they will ask if they don't know for sure.

 

HELLMAN: What are some of the other points you wanted to make? What rights do you think students have with regard to AI?

 

CONRAD: I've heard plenty of people say, “Well, students give away their data all the time. We all do when we're on the internet.” I'm just going to generalize and say most of us are probably not as attentive to privacy policies and Terms of Use as we should be. We didn't read the fine print; I think maybe most of us didn't read the fine print. If you're on any social media, your data is out there. I feel flayed open. I used to work in surveillance studies a little bit, so I'm very much aware of what happens to our data. One of the rights I suggest here is privacy and creative control. They're not necessarily the same thing, but both have to do with sharing data consensually. It's about making sure that if you put something into the system, you know it may be used to train that system, that you're not getting money back for it, and that it's your choice. And it's about making sure that students know to read the Terms of Use. I've even done an assignment where students did that, which was lots of fun. It surprised all of us, actually. I was expecting it to be, “Yeah, yeah, they give this away.” But we were digging in, and it was pretty — I'd say entertaining, if it weren't so scary. Also, privacy. Ultimately, you can choose to give your data away. You can choose to give your information away. We do it a lot, but it is not my role as a teacher to give away my students' data. It's legally suspect, because we do have FERPA, the Family Educational Rights and Privacy Act, which we actually are beholden to. More to the point, I'll use this metaphor: Just because my students may have had their apartment robbed doesn't mean I'm going to assign them leaving their door open. It's not my role to say, “Well, you put yourself at risk, so I'm going to make you be at risk for an assignment.” I do think it's important to have opt-out options as well for using generative AI, because generative AI is also — to use some surveillance studies lingo — a leaky container. You can put stuff in there that will usually come out disaggregated and mushed up, and you don't have to worry about it, but sometimes it will come back out looking a lot like it did when it went in. That's what we call a leaky container: a data container that isn't safe. It's like when you put your credit card number in online: you really want it to be encrypted. Well, guess what? These models are a lot leakier than your credit card encryption when you're buying something on the internet.

 

HELLMAN: Anything else that students have a right to expect?

 

CONRAD: Ultimately, appeal is a big one. I've got two major ones besides a general catchall, which is that your legal rights should always be respected, regardless. One is the right to appeal. People need to realize that AI detection software is itself AI, and it isn't perfect. It isn't even terribly good. There are structural reasons why it probably will never be perfect. Even people who are incredibly enthusiastic about using AI in the classroom have been coming out strongly against AI detection software because of the false positive rate. On the one hand, it's not super easy, but it is definitely possible, and sometimes easy, to work around it so that submitted AI-generated work looks human-written. On the other hand, there's a Stanford study in which researchers took a bunch of TOEFL essays and other essays by non-native speakers of English, written before ChatGPT existed, and ran them through this detection software. It was astounding how many were identified as AI-generated. That's really, really awful. For equity reasons, that matters more to me than missing some cheaters. If you miss some, that's bad enough, because it creates an environment in which students who want to cheat think, “Oh, my teacher's just relying on this, and I found a workaround.” That's not good. People have always cheated, right? We're not going to fix that, but you don't want to incentivize cheating. On the other hand, I think it's incredibly important that students not be falsely accused. The right to appeal is really about being able to have a conversation, and that's tied to the last major right I listed: You should know if an automated system is judging or assessing you, and you should always be able to ask for a human to make those calls.
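
As a back-of-the-envelope illustration of why the false positive rate matters at classroom scale, here is a tiny sketch in Python; the detector's error rate and the number of essays are hypothetical numbers, not figures from the episode or from the Stanford study.

```python
# Hypothetical numbers: even a detector with a seemingly small false positive
# rate flags a meaningful number of honest students when used at scale.
human_written_essays = 500   # essays actually written by students (assumed)
false_positive_rate = 0.02   # 2% of human work misflagged as AI (assumed)

expected_false_accusations = human_written_essays * false_positive_rate
print(f"Expected false accusations: {expected_false_accusations:.0f} of {human_written_essays}")
# -> Expected false accusations: 10 of 500
```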

 

HELLMAN: Almost none of these things we've been discussing are yet U.S. law or university policy. You're trying to give guidelines, right? So let's end this on a positive note, because in the article you do say that you think universities can lead the way to a better, more ethical AI. Can you explain what you mean by that, please?

 

CONRAD: The problems we have with AI right now have to do with the particular companies, the big tech companies, that created these models, with the particular landscape in which they emerged, and with the economic and social and ethical landscapes in which they've embedded themselves. That doesn't mean that AI as a technology doesn't have great potential. It's obviously awesome. I started to experiment with it last fall, in part because I've talked with chat bots before. I've talked with them in order to help students and myself figure out what the nature of our relationship to technology is, less about the technology and more about us. I started to experiment with these models because I was interested in what their capabilities are, what their affordances are, and there's a lot of interesting potential there. One of those things is user design. We talked a little bit earlier about chat bots. There are other ways to do this, and there are certainly scholars and researchers out there working with technologists to create better interfaces that are more purpose-built for education. There's also the possibility — and this may be the pie-in-the-sky part — of not building off these big tech models but training on ethically obtained datasets that have been vetted. One of the things I didn't mention was how many human crowd workers were behind the scenes, trying to scrub the data to take out the horrible things that people put into the datasets and that the AI might generate as a result. I won't describe them, but they are really horrible things that give a lot of folks who work in these fields, many of them in the gig economy, PTSD. Literal PTSD. Just think about what drone operators deal with, and it's like that, because those are the kinds of images and texts being generated by some of these systems, or part of the data that's going in. So ultimately, it's about trying to create datasets and outputs ethically. This is the great thing about higher education. We have researchers in computer science. We have researchers in philosophy, anthropology, English. We have scientists. There are lots of people doing great work who are trying to think about these things ethically in this larger context, and that's where I think the potential lies for the future of AI: in systems that are made in consultation with all the stakeholders and with a broader perspective — maybe a few heads that are outside of the Silicon Valley environment — that can think more broadly about impacts.

 

HOST: We've come to the end of this glorious episode of “When Experts Attack!” If you like what you hear, subscribe to our humble podcast and tell a friend about us. We'd love to know what you think, so if you have questions, comments or ideas for future episodes, let us know. You can reach us by dropping a line to whenexpertsattack@ku.edu. We're a co-production of the University of Kansas News Service and Kansas Public Radio. Music was provided by Sack of Thunder. Until next time, this is Brendan Lynch, signing off.

 

THEME MUSIC

 

Transcribed by https://otter.ai and edited for clarity.