
CSUSB Advising Podcast
Welcome to the CSUSB Advising Podcast! Join co-hosts Matt Markin and Olga Valdivia as they bring you the latest advising updates at California State University, San Bernardino! Each episode is specifically made for you, the CSUSB students and parents. Matt and Olga provide you with advising tips and interviews with both CSUSB campus resources and those in academic advising. Sit back and enjoy. Go Yotes!
Ep. 102 - Faculty and AI: A Professor’s Thoughts on AI Tools
In Ep. 102 of the CSUSB Advising Podcast, host Matt Markin sits down with Dr. Daniel MacDonald, Professor and Chair of the Economics Department at CSUSB, to explore the hot topic of AI in education. 🤖📚
Dr. MacDonald breaks down what generative AI really is, how it’s changing the academic landscape, and why it’s crucial for students to understand the fine line between using AI as a tool and crossing into plagiarism territory. He offers practical tips for students on how to navigate AI responsibly—especially in research.
Plus, he shares why we need to evolve alongside this technology and how critical thinking still reigns supreme in the age of AI.
Don’t miss this timely conversation on how to use AI smartly—without compromising academic integrity!
Subscribe to the CSUSB Advising Podcast on Apple, Spotify, and more!
Follow us on social media:
Instagram - @csusbadvising
TikTok - @csusbadvising
YouTube - @csusbadvising
https://csusbadvising.buzzsprout.com/
Matt Markin
Hello, and welcome back to another episode of the CSUSB Advising Podcast. This is Matt Markin, an academic advisor here at CSUSB, and I am thrilled to welcome back Dr. Daniel MacDonald, Professor and Department Chair of Economics, to talk to all of us about the hot topic at the moment, and that is AI. We have students that will ask, should you use it or not? One professor says it's okay. Another might say no. And if it is okay, how should it be used? So we thought it'd be interesting to have a faculty perspective on it. So hey, Dr. MacDonald, welcome back.
Dr. MacDonald
Hey Matt, thanks for having me.
Matt Markin
So it's been a while since you were last on the podcast, I think it was episode 26, so I was wondering if you could share a little bit about yourself.
Dr. MacDonald
Sure, yeah, it seems like it's been ages ago since then, and, well, it kind of has been. I kind of joke with people: I've been a department chair now for about four years, and four chair years is about maybe 28 faculty years, if you know what I mean. So I definitely feel like it's been a while, but it's really good to be back. And yeah, our department has really been doing great since the last time we chatted. Our scholarship program is really growing. We've been able to increase the amounts that we're giving to our majors who win the scholarships. We've got a new club called the Economic Impact Network, or EIN, a great club that a couple of the students have started to engage with the community with their work, and really give back to the community with all their knowledge about economics. So that's been really exciting on the student front and the department front. Personally, I've been doing well. My research is going well. I think last time we chatted, I was telling you a little bit about some of my hobbies. I like to lift weights, and I'm still doing that, so still staying in the gym, keeping healthy. I was actually on a New York Times podcast back in September, the "This Is Not a Beauty Podcast," talking about the lipstick effect, which is a really cool phenomenon that happens sometimes during recessions. So, yeah, I've just been out there, talking to people about economics, spreading the good news, spreading the love. And yeah, it's really nice to be back and see you and have this chat about AI.
Matt Markin
Yeah, you are a busy person. I see a lot of your posts on LinkedIn, a lot of the research that you're doing, so you're always working on something, always keeping busy, which is great. Now, of course, this episode we're talking about AI. So I was wondering, from your perspective as a professor, as a faculty member, as a department chair, all of those: how do you explain to a student what AI is? Because there are always these broad definitions, but students are always wondering, well, what is it? How should I be using it? So how do you talk to them about it?
Dr. MacDonald
So AI means artificial intelligence, right? And then generative AI means artificial intelligence that generates new images or ideas or just words. Sometimes that's all it looks like; sometimes it's just words. But it's important to realize that it's generating things for you. ChatGPT is generative AI; it is helping you complete ideas, it's helping you complete tasks. And for this reason, the way that I usually speak about it to students is that it is kind of helping you with your work, and you have to realize that if you then go off and represent that as your own work, in whatever form, it does technically violate academic honesty codes. There are obviously a lot of positive aspects of AI, and I use generative AI as well, but on a basic level, that's how I explain it to our students: it's generating ideas and work for you. If you're using it to represent your own ideas, this is something that's been created by someone else, and so technically it falls under a definition of plagiarism. So that's how I speak about it with my students. But again, I want everybody to take away the fact that generative AI is obviously something that you can borrow from. For example, as a faculty member, I would never plagiarize, right? I would never take something from a book or another paper and copy and paste it into my work. But I do use generative AI in my coding and my research to help me do tasks around the office, and so it is a little bit different. It does have a little bit of a nuanced feel to it.
Matt Markin
Now, I guess from a professor's standpoint, when you're teaching your classes and you're building out your syllabus, do you include anything regarding AI, like a blurb in your syllabus?
Dr. MacDonald
Yeah. I mean, communication is always important with students, right? The more that you can communicate, the more that you can talk about it, the more it helps everybody get a sense of what everybody means. And so I have included something in my syllabus. In fact, we have a department policy, because a lot of the faculty in our department have been concerned about the use of AI in our classes. Because, as I was saying, a lot of us think about it as plagiarism, although later we'll get into more of the nuance there, because it's not exactly that, right? So we've created a department policy, and the reason we've done that is because some faculty feel a little more comfortable speaking about these kinds of things than other professors; all professors are different. And so what we wanted to do is create a policy where it's like, okay, our department together, we're taking this position. And the position is basically that the use of AI to represent your own work constitutes plagiarism. But we go a little bit further. It's not just the negatives; we really try to emphasize that in your coursework, you should be using the readings and the ideas that are part of the course. Because really, what we're trying to do here is not just punish people, or say, oh, that's a bad thing, don't do this, and set all these boundaries. It's really about clarifying the purpose of a college course. The college course is not just for a generative AI to solve the problems. It's about thinking about issues; more broadly, it's about how to think, not what to think. AI will give you exactly what to think, but most professors' classes, I think, are about how to think. And so our policy does make that point very well known: using AI to complete assignments is plagiarism. But we also try to bring that other side in and say, when you're doing the work for the class, you should be using the concepts and methods and tools that you've learned about over the term of the course. So that's our approach, and yes, it is in our syllabi. We even had a little handout that we were giving out at the beginning of the semester this semester, just to really clarify the department's stance and to make sure students know that this is not just something Professor MacDonald thinks, this is not just something Professor Sundal thinks, but this is something that, together, we believe in.
Matt Markin
And I guess, because of that, do you think other departments should have something similar?
Dr. MacDonald
I think it's really all about the department, right? This is another thing with AI that makes it so interesting and so nuanced: the problems that we might have with AI in our courses are going to be different than the problems that a history department has with AI in their courses, or, to go further afield, kinesiology or art. Every department is going to have its own concerns when it comes to the use of AI, what's acceptable, and what kinds of issues to look out for. But in economics, we're trying to solve problems. Economics is about problem solving, and if you're trying to learn how to solve problems, you have to solve some problems, right? If AI is solving them, then you're not learning that very crucial skill that's part of being an economist. For a historian, it's about how you think widely or broadly about historical events, the nature of causality, the interconnectedness of different things in society, and how they contributed to some event. So they might have a different kind of policy or concern, or they might not. It's not just discipline specific, though; it's also about character. Some faculty might want to take more of a hands-off approach: oh well, actually, I don't mind it so much, or I have other things going on. So I'm not going to say that every department should have a policy, but I do think that departments should be talking about it and figuring out a way to give a message, so that students are clear on what constitutes misuse of it in a particular class.
Matt Markin
Well, I think that's a good segue into this next question. You've talked a little bit about some of the factors, in terms of putting a blurb about AI in the syllabus and professors making sure students are aware of it, but what could be some factors that might make faculty nervous about AI?
Dr. MacDonald
Oh, sure, yeah, you're right. We are so nervous, right? And I think students can definitely tell we're nervous, like something's going on here, like we're worried about things. We might keep it to ourselves, but that's kind of what faculty do; we're not always very loud about our beliefs. So it's true, we are nervous about AI. And I think it's ultimately because, as professors, we're constantly engaged in the thinking process. That's why I got a PhD. I love to think, I just love to think about things. And we all want our students to think as well. We want our students to have that same kind of ability, but also the confidence, to think about things. And so I'll assign a problem set question, and students will get AI to maybe try to generate the code to obtain the answer, but the point of the question was to get the student to think about the answer, and to think about it based on what we've learned in class. And so that's something that makes me nervous about AI, because I just feel like it's substituting for that thought process that students should be going through. I guess the other thing I would say is, I'll fully admit, even the best professors out there will not always assign the best exams, or they'll ask a question that really could just be handed to ChatGPT to solve in a minute. But even though we are imperfect, we are just trying to do the best for our students, and really to get them to think and to learn how to ask the best questions. And so I think that's another reason why faculty are so nervous about all of this: maybe it's kind of admitting our own faults. We don't always ask the best exam questions, and maybe it's true that something can be just looked up and solved in a minute. But really, we are trying to get our students to learn how to think, not what to think, and that's ultimately, I think, part of what's making us so nervous about everything.
Matt Markin
I mean, AI is here, it's not going away. And I would imagine many students use it. I know when I meet with students in advising appointments, I'll ask, and many will share that, yeah, they've used it, and some have found it useful, others maybe not so much. But I think they'll also say that, well, some professors said it's okay, and with others they really don't know if they should use it or not. But I guess, in your opinion, from the student point of view, what might be some pros, as well as some cons, of a student using AI, such as ChatGPT?
Dr. MacDonald
Sure, yeah. And again, going back to what we were saying earlier, it really depends on the class, and you should always check the syllabus, right? That's the most important thing, and I'm sure you're telling all of your students: look at the syllabus, because the syllabus is that document, the main form of communication between the professor and the student. And this goes back to what we were talking about earlier; there's a lot of nuance here. I don't want to give the impression that all faculty, all professors, are just trying to draw lines and boundaries, because there are a lot of pros to using generative AI. Students can learn new techniques for formalizing and solving problems. Students can learn how to frame debates. One of the things I love to do with ChatGPT is say, I really think this; give me the three best arguments against it. And then I'll have a little argument with ChatGPT, and who knows where that evolves. But that kind of critical thinking is hard to get. Even as a professor, sometimes it can be hard for me to imagine the best argument against what I'm trying to say, or to have conversations about what an argument against it might be. Students should also absolutely be looking to automate tasks outside of the classroom but still involving school, or looking for jobs. ChatGPT can help design a database where maybe you can enter jobs; it can search through websites and help you figure out the job qualifications, or how to write a cover letter for a position or internship, or look for internships. You can give very specific prompts to ChatGPT, and it can really help you. So there are a lot of pros to using generative AI. In terms of cons, I think students know the first one already: it's when they use it as a replacement for learning the material. Another con is that students can actually get caught giving something that's incorrect, because, as we know, generative AI does make mistakes, and I will tell you that even when I use it for my research, I find sometimes that it doesn't give me the most efficient answer, or it gives me something that's just downright wrong. It still has a lot of problems with things like symbolic algebra, which is something that I use. I assign calculus problems in one of my upper-division courses, and if you try to use ChatGPT, you might literally just get the wrong answer. So yeah, I think those would be the two big cons: using it as a replacement for learning, so that you don't actually learn, and literally just ending up giving the wrong answer on an assignment, which is not good, of course.
Matt Markin
Yeah, I know we've talked offline before about ChatGPT and how it can even be like a study buddy, in a sense, for a student. I'll meet with students who might not have the best time management because they have so many classes and work, and I'll suggest that sometimes you can even ask ChatGPT what suggestions it might have to better your time management. But I tell them to try to be as specific as possible if they're going to ask ChatGPT a question. And just like you said, it can be wrong. For example, Google's NotebookLM platform has a little blurb at the bottom of the page that says it's not 100% accurate. So something else for students is to always double check. It might be a helpful tool, but who knows if it's giving you the wrong answer.
Dr. MacDonald
That's definitely true. It can't be a replacement, I think, but it can definitely help you. I use AI a lot, and it can be an assistant, basically, and it can be a very good one. So, yeah, I think that's great.
Matt Markin
And I guess for you, what would you share with a student if they asked, should I use AI or not? What could be some things students should consider before they use some sort of AI or generative AI?
Dr. MacDonald
Well, I would say, first of all, in a class you should make sure you're checking the syllabus and making sure it's okay. I think the other thing you should do is think about whether the AI is going to help you do something that you have an idea about, or whether it's just going to replace your thinking on an issue. So basically, you should be aware of its capabilities, but also think about how it's entering into your workflow. One of the buzzwords around AI these days is "human in the loop." Maybe you've heard that phrase; it's basically the idea that if you have a work process now, a lot of it might be governed by AI, but where's the human in that process? Where's the human coming in and keeping that loop going? And so I would recommend that students definitely think about it for research opportunities, for helping with any kind of research that they might want to do. They're still generating the writing and probably the main ideas by themselves, but in the research process they can say, well, I've got this data and I found these results, what do you think of this? Or, was my model specified correctly? That's how I use AI. I would advise strongly against using it in, say, an introductory course, like a principles course, where the professor is still trying to get you to learn how to think about these issues. In an introductory economics course or a statistics course, you're just learning how to think about probability or economics, so I would advise very strongly against it in those kinds of courses where you're still trying to learn. And the good thing is that those courses are usually pretty easy anyway. But then for the upper-level courses where there's some kind of research component, use it as that buddy, a study buddy, or even something a little bit more, like a coach. I used it a couple of weeks ago. I had two weeks to get something done, and it was a big task. I was feeling really intimidated by what I had to get done, and I told ChatGPT, schedule me out a two-week task list for this project. And it put something together, and I just followed it. I didn't question it, I just followed it and got it done. So for projects and things like that, it can be really quite useful. But even in those cases, you still want to double check all the information and make sure you're doing your due diligence. I think research opportunities are really great. These days, there's a lot of discussion about where the university is going, and do we really need a four-year degree? Well, you do if the four-year degree is teaching you the right skills: teaching you how to research, teaching you how to be an independent, critical thinker. And generative AI can be extremely powerful. It can take you to another level if you use it in those ways.
Matt Markin
My last question for you would be: we're recording this in March of 2025, and this will be published in April 2025. We know the Cal State system is going to have ChatGPT, not only for faculty and staff but also for students at some point, and by the time someone listens to this, maybe it's already out there, but students will have access to ChatGPT for the upcoming school year. General thoughts on that?
Dr. MacDonald
Oh, I just think, like you said earlier, Matt, it's here to stay. It's something that we have to learn, and I think what it does, as any kind of technological moment does, is push us to figure out what our place is in this new environment. So everybody's doing a lot of thinking. Probably one thing that you maybe don't hear from students, but that I definitely get the sense of just from speaking with them, is that maybe they're a little scared about it. ChatGPT is really exciting, but it is pretty scary, right? If I were graduating college right now, and I knew that there's some AI out there that can solve my econometrics problem sets in two seconds, or solve high-level math problems, like the AIME or the USAMO, and I think they even had it do the Putnam exam, which is like the top theoretical pure math exam in the US, maybe it's not at that level yet, but basically, if it can do these things, where is my place? But you have a place, right? You have a place in this new world. But you have to take ownership of yourself and your abilities, grow those abilities, and then figure out how AI can complement them, because that's the answer here, that's what's going to happen. I teach economic history, and throughout history we've had events almost just like AI, obviously not exactly like AI, but pretty close to it, and people have been scared at every point. Go back to the 1800s; we had events like AI then. But every time, what happens is, to use a phrase that actually comes from an economist, Erik Brynjolfsson: race with the machine. Not Rage Against the Machine, but race with the machine. And that's ultimately what we've done. I think that's something I would tell students: learn how to take advantage of it in a good way, because it's definitely there for you, and it can definitely help you. And don't be scared, I guess I would say, too.
Matt Markin
Yeah, what I've learned is just to play around with AI and test it out and see where it takes you, how you can take advantage of it in a useful, positive way and not a malicious way, but also what kind of limitations it might have right now. And whatever limitations it has right now, six months from now it may not have them anymore. So it's always developing. It's here to stay, and we just have to go along with it, adapt with it. Yeah, awesome. Well, fascinating conversation. I wish we could talk longer about it. Dr. MacDonald, I appreciate you being on the podcast again, and especially talking about this topic.
Dr. MacDonald
Yeah, of course, Matt, it was a pleasure. And if anybody wants to reach out or talk about how they think economics can work with AI or whatever, I'd be happy to chat with anybody. So, yeah, very happy to be on, and thanks for having me.
Transcribed by https://otter.ai