
CSUSB Advising Podcast
Welcome to the CSUSB Advising Podcast! Join co-hosts Matt Markin and Olga Valdivia as they bring you the latest advising updates at California State University, San Bernardino! Each episode is made specifically for you, the CSUSB students and parents. Matt and Olga provide you with advising tips and interviews with both CSUSB campus resources and academic advising professionals. Sit back and enjoy. Go Yotes!
EP. 105 - Integrating ChatGPT EDU at CSUSB: Ethics, Access, and Innovation
🎙️ In this episode of the CSUSB Advising Podcast, Matt Markin and Olga Valdivia dive into the bold rollout of ChatGPT Edu across campus by interviewing Dr. Fadi Muheidat, Associate Professor and Director of the Teaching Resource Center! Dr. Muheidat breaks down how this AI tool is now seamlessly integrated into the MyCoyote portal, offering secure, student-friendly access.
Discover how you, as a student, can critically evaluate responses from ChatGPT, explore possible uses depending on class assignments, and learn how CSUSB promotes ethical and equitable use of AI, boosting productivity and preparing you for an AI-powered world.
Tune in to hear how CSUSB is turning AI into a powerful ally—and why active use and lifelong learning are the keys to staying ahead in today’s tech-driven future.
*Note: ChatGPT EDU at CSUSB blocks any personal information from being sent to OpenAI.
Subscribe to the CSUSB Advising Podcast on Apple, Spotify, and more!
Follow us on social media:
Instagram - @csusbadvising
TikTok - @csusbadvising
YouTube - @csusbadvising
https://csusbadvising.buzzsprout.com/
Matt Markin
Welcome to the CSUSB Advising Podcast. This is Matt Markin, an academic advisor in the ASUA academic advising office, and I am joined by my esteemed colleague...
Olga Valdivia
Hi. My name is Olga Valdivia. Happy to be here.
Matt Markin
I know we're both excited for this interview. There have been various initiatives and support offered to students, and one of the newest is providing ChatGPT, or ChatGPT Edu, to all faculty and staff, but also to all students at CSUSB. Olga and I have many questions about how this impacts you, the CSUSB student, and we have a special guest to help dive into all of this: Dr. Fadi Muheidat, also known as Dr. M, Associate Professor of Computer Science and Engineering and Director of the Teaching Resource Center. Dr. M, welcome.
Dr. Fadi Muheidat
Well, thank you, Matt, thank you, Olga, for having me. I'm very excited about this.
Matt Markin
Yeah, again, we're both very excited as well. I'm going to start with the first question, and Olga and I will kind of take turns asking more about this initiative. So to give us the broad picture to start this interview: what's the background behind CSUSB implementing ChatGPT for faculty, staff, and students?
Dr. Fadi Muheidat
Well, this is really a wonderful opportunity that came from the Cal State Chancellor's Office: providing access for all CSU students, faculty, and staff to the latest AI technology, which is ChatGPT Edu, basically ChatGPT for education. That initiative was meant to provide equitable access to all students. The technology is there, it has proven to be useful, and there are multiple purposes beyond just providing equitable access. They also believe, as you mentioned at the beginning, that this technology is here to stay. So we need to make sure our students are really prepared: they know how to use the tools, they know how to interact with the tools, and in some majors, how to develop the tools, because we have different majors across computer science and engineering and other disciplines, STEM and non-STEM, so we have different interests in the tool. It was a great opportunity for our students to have access to this wonderful and very powerful tool, since it can change people's lives in terms of time management, productivity, tutoring support, you name it. I think that's where the initiative came from, and we are grateful for the opportunity.
Olga Valdivia
I'll continue with the next question. So how can students access ChatGPT?
Dr. Fadi Muheidat
Yeah, the good thing about ChatGPT Edu, or ChatGPT Education, which is the real name of this initiative, is that it's integrated into our MyCoyote portal. Students can use single sign-on to access it; it's a tab on their portal, and it's secure and safe. That's another level of assurance for our students: it's not just something accessed from any browser. It's protected behind our firewalls, behind the security measures the university provides, and it's at their fingertips. You just click that MyCoyote tab and you'll have access to it. And not only do students have access to the tool, they have access to everything the tool provides. Students see the same screen as faculty and staff, and that's very important: we are not offering tiered levels of access; everyone has access to the same tools. That ensures that whatever we use as faculty in our classrooms, and whatever assignments we design, we design them for tools our students have access to.
Olga Valdivia
Awesome, thank you for that. Now, do we have any tools available for students to better understand how to use it?
Dr. Fadi Muheidat
Yeah, actually, thank you for this question, because this is what kept us busy this spring. I was kind of joking that I was waiting for the semester to end, and then the rollout came in mid-semester and we needed to get ready for it. From the faculty perspective, as the Faculty Center for Excellence, or TRC, together with the instructional designers and academic technology team, we worked as a team here to make sure the transition was handled in a very smooth way. As you might know, the rollout started with a pilot, a cohort of faculty who had early access, to assess the tool and help us by doing the workshops. So we started with workshops as our first offering for faculty, and after doing the workshops we learned a lot from their experience and their needs. Then we opened it to all faculty, the whole community, and ran another series of workshops, more advanced and more focused, and now students have access. Our motivation, our goal, is to equip our faculty with the tools they need so they can also help us provide what we call AI literacy to our students. If I'm a faculty member embracing the use of AI in my class, then I have a ton of resources: our website has general generative AI resources about any tool, as well as specific pages and resources for ChatGPT Edu, and anyone can access that; it's public. We are also developing more bite-size trainings, similar to what we do for other technology, like how to use Canvas. That's where we are going next. In addition to the existing resources, slides, workshops, and recordings, we are doing those bite-size workshops and videos to help people get on board quickly.
Olga Valdivia
Okay, wonderful. I have one more follow up to that. So what measures are in place to ensure ethical and equitable use among students with different levels of digital literacy?
Dr. Fadi Muheidat
That's a very big question, and I'm not sure we have the full or accurate answer at this time. As I said, we are really still navigating this domain. It is changing very often, very frequently. I always say, if you are creating a workshop and you finalize your slides two days before it, go check them again, because in the morning something new will be there. So with that come more challenges. When we first had ChatGPT, about two and a half years ago, it was version 3.5, and there were a lot of issues with it. We had to be careful about issues like hallucination, incorrect information, and outdated information. And, as I've seen on your wonderful podcast, you've talked about this with other guests: it's trained on a corpus of data, whatever is available online, which can represent different points of view and may include incorrect sources. One obstacle for training those large language models, or ChatGPT in general, is that they don't have access to resources behind paywalls, the actual scholarly literature that we believe is authentic because it's peer reviewed and refereed. So 3.5 was a very challenging model. Then 4.0, or 4o, came in, and everybody was excited about it, because it reduced some of the hallucinations and incorrect information, and it had what we call content moderation built into the model itself, to provide more safety and more culturally appropriate responses. So thinking about ethical and equitable use, the first thing I would think of, and our university did a great job on that, is policy. We came up with, or at least started, a policy, FAM 803.5. Basically, it is up to the faculty, at the discretion of the instructor, whether to use generative AI, because before we talk about how students are going to use it, we need to know whether our faculty are ready for that, right? The policy was kept flexible to tell faculty it's okay if you want to prohibit the use of ChatGPT, permit it with certain limitations, or fully integrate generative AI into your courses. That's what we designed at this time, and it gives faculty that freedom. I just want to make sure that when I design my syllabus or my course, I'm clear with students: this is who I am and this is how we're going to run my course. For me, Fadi, for example, in certain courses I might say I'm going to adapt it, integrate it, go for it. In another course I might say I will prohibit it or limit access, meaning that I need to be ready and prepared to educate my students about my policy and about how to use the tool. Because, again, it's not only about ethical use, it's also about how I can integrate it as a faculty member in my classroom, and that's something we are honestly focusing on in our workshops: okay, it's here to stay.
If I'm going to embrace it and go for it, that means I cannot keep using my old notes. I have to go to the next level. I have to make sure my classes, my courses, my assignments, my activities are AI-friendly, as they call it. That means you have to set clear policies, right? That's one of the steps toward critical and equitable use of AI. Once we pass that, the instruction piece is that I now need to provide guidance to my students. So I have a clear policy that I believe in, and I'm adopting it given that the university's general policy allows me to do so. Now I need to make sure I have clear policies in my syllabus and that I provide guidelines. I also need to provide my students the digital literacy training they need, correct? Some of us might, back to what I said about the bite-size trainings, think about creating a module in our classes, like a boot camp for students on how to use the tool. And we have to acknowledge that every discipline is different, right? If I'm in communication studies, what works for me may not be the same as in STEM, computer engineering, biology, or social and behavioral sciences. So it differs. Some people are hardliners against the use of ChatGPT because it's completely against the purpose of what they are teaching, thinking about writing assignments, for example, even though some faculty have come up with new ideas like scaffolded writing assignments. But again, it's all pressure on faculty to educate students, provide them the digital literacy they need, and provide the support they need. Either the faculty member is able to provide the support students need, or students can be directed toward the student support services on our campus. Another important challenge I see, and part of the ethics piece, is bias mitigation. Those tools, as I said before, are trained on a large corpus of material representing different opinions. So I need to help my students navigate the tool and its output, maybe by being intentional and designing certain prompts that will push a large language model like ChatGPT to create potentially biased output, correct or incorrect output, or what I'll call harmful output, in the sense of culturally acceptable or unacceptable output. For us, that becomes an opportunity to build critical thinking in our students. That's why, when I was talking about assignments, you can redesign your assignment in a way that meets your needs. If I'm worried about critical thinking, and most faculty are worried about critical thinking, they feel that, if you talk about Bloom's taxonomy, our students will be stuck at the bottom of Bloom's taxonomy, because ChatGPT is going to give them all the information and basic knowledge, and there will be less of the synthesis and analysis at the top of the pyramid. That's very important for us. So we might use that biased or incorrect output as an opportunity to develop critical thinking. Another way faculty might think about it: you know think-pair-share, a common technique in active learning? Here we can say think, pair with ChatGPT, and share.
That way you can interact with the tool, critique its output, and provide your own response, similar to ranking the responses. So to summarize: yes, I have to have a clear policy in my syllabus, and for each assignment I might even specify the level of ChatGPT use allowed in that assignment. I also need to make sure students understand the code of conduct: if I am allowing it, how to cite it properly, so that you're not in the territory of cheating, if we introduce that term. And we need to acknowledge, as faculty and students, that ChatGPT or AI detectors are not valid; they are not accurate and can produce incorrect results. Students who are doing a great job in their own voice and their own tone may be flagged as AI-generated content, and that would be a problem. So this is a huge challenge, and we have to be careful how we navigate it and how we deliver it to our students. Educating our students about everything I've mentioned is very important and essential before even using the tool in the classroom.
Matt Markin
And I like what you were saying about AI being here and not going away. When Olga and I have talked with our students, they've asked, should I use AI or not? Should I use generative AI? And what we tell them is: well, now we have ChatGPT Edu for you, but always check with your professor. Look at the syllabus, and of course, if there are any questions, talk to your professors and see what they tell you about it. You were mentioning assignments, and I guess that's a good segue into this question: can you share examples of assignments where ChatGPT might be helpful for a student to use?
Dr. Fadi Muheidat
Yeah, to be honest with you, when we talk about ChatGPT and we put in place policies, prohibitions, integrations, and all that, we are using it as faculty too, right? For me, Fadi, I use it in different settings. Brainstorming is a wonderful one, and the way we do it, we can also teach our students to do it. It's basically generating ideas. Think about it: I'm computer science faculty and I teach an engineering design course, so I know you could put the whole project into ChatGPT and have it solve it for you, and a year-long course would be done in half an hour, or maybe two hours of prompting, correct? But we also tell the students it's not just about the outcome, whether it works or doesn't work. It's a whole process of team-based work, leadership, going through the engineering design process. Still, it will be helpful for them to navigate and generate ideas. And remember, when we design something, and I'm giving this specific example to engineering students, we don't design it only for people in Southern California. If I design a project for an app, or hardware, or whatever setting, I need to make sure I include the whole world. I'm a global citizen, and I always advocate for that. So you need to make sure you cover all aspects of that design, unless it is specific to certain contexts. If you want to be completely culturally relevant, you might need to understand how people in the Middle East, for example, would receive it, whether it's acceptable in their culture or not, or people in South America, or Northern or Southern California. So that's idea generation, when they start putting this in and thinking about it with, again, proper prompting. We always say ChatGPT is a kind of probability model. You might ask me a question and I'll provide you an answer, but it's not what you want. Then you ask a follow-up question, and I say, oh, I got it, and I give you another answer. Then you come back and say, actually, I'd like to put it this way, what do you think? Now I finally get the right question, and you hear the right answer from me. That's exactly how ChatGPT works. So we have to teach our students how to properly craft that prompt. We call it prompt engineering, or prompt crafting, basically to maximize the output of ChatGPT. So idea generation is one great use. Time management is another: think about saving time writing emails, or writing a short abstract or biography. I'm keeping the code of conduct in mind here, avoiding copyright issues and using other people's work; I won't go into that, because we have the tool, we are using it in an ethical way, and we teach others to use it in an ethical way. One time a student mentioned to me, "I was taking a statistics class and I was interacting with ChatGPT, and I finished the class with an A." It was very helpful, because it was a good study aid, a good study partner. However, we give students a warning.
It doesn't work the same for every discipline. Certain disciplines, like statistics, are well known and well established, with set criteria and mathematical proofs; there's a baseline for it, so there's not as much hallucination there that could give you wrong information. You might have different case scenarios, but it's still a good study guide. In other disciplines you have to be careful. Another thing, for myself: English is my second language, so I can use it for language support, for understanding some concepts. I've been in the US almost 17 years, but I still struggle with some idioms and expressions you all use sometimes. So I go there and ask, what do you mean by this? That's very helpful. So again, there are different ways our students can interact with the tool. Computer science students can use it for coding. And if you are a biology student and you have a project, and you don't know how to write Python but you have a wonderful data set to work with, ChatGPT will be a good tool for you, because the tools are embedded there; you just communicate with it in natural language. So there are different aspects and examples of how you can use it. Again, back to all these points, we have to make sure it's used in an ethical way; we have to train them, teach them, and proceed with that.
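[To picture that last example: below is a minimal sketch of the kind of Python script a biology student might ask ChatGPT Edu to draft for a small data set. The file name and column names are hypothetical placeholders, not anything specific to CSUSB, and any generated code should be verified against your own data and your instructor's policy before you rely on it.]

import pandas as pd

# Hypothetical file and column names -- swap in your own data set.
data = pd.read_csv("enzyme_assay.csv")

# Quick checks before trusting any analysis a chatbot suggests.
print(data.head())        # preview the first few rows
print(data.describe())    # summary statistics for the numeric columns

# Example question: average enzyme activity at each temperature.
summary = data.groupby("temperature")["activity"].mean()
print(summary)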
Olga Valdivia
Thank you so much for that response. So the next question is: what's a way students can critically evaluate the responses they receive back from ChatGPT Edu?
Dr. Fadi Muheidat
Yeah, this is again a very excellent question, and also a very challenging one. As they say, it takes a village to achieve something, and that's exactly my point here: it's not enough to ask students to read this, watch this YouTube video, go to this website. It's not enough for faculty to just talk about it in the classroom. We have to go back to the people who can fact-check this content. For me, my best friend at this time is the library. Librarians are a great resource, and they are doing a great job on our campus with their own website as well, providing AI literacy for our students. I don't talk much about it, but we are also working on an initiative with them on how to provide that AI literacy, and one of the focal points is how we can ensure the accuracy and relevancy of the outputs. As I said before, it's an opportunity for faculty to enforce critical thinking and extra research, possibly by cross-referencing some of the information. For example, if you go to ChatGPT and ask for something, or do deep research with ChatGPT, it will give you wonderful output, maybe a ten-page paper with sources. It's so impressive that I said, oh my God, where have you been? However, you might find that a statement has been quoted from a certain reference, and when you click on it, in previous models you don't find it; the model just came up with that reference from nowhere. In newer models that has improved, and this is the challenge I mean when I say it keeps changing: they are improving those models in ways that make whatever instruction you previously gave students invalid, so we have to update the instruction. In newer models the references can be accurate: click on the link and it goes to Nature, to PubMed, to Google Scholar, and gives you access. However, the challenge is when the content being cited doesn't actually exist in that reference. One faculty member mentioned, when we were in a training, that it cited her work, but she had never said that in her paper. That was eye-opening for us. She's a faculty member and the author of that paper, so she could catch it, but students don't have that knowledge, right? That's where the library's role comes in. We have library access, so we need to tell our students: this is the access, and you have it free on campus. Maybe we can be intentional about some of these assignments and have students learn how to go and do that. Understand that cross-referencing and going to the authentic sources can be time-consuming, by the way; I'm saying it with ease here, but it takes time. For some subjects you don't have to worry as much: math, computer science, the well-established sciences, there's less of a problem. But in others you have to be careful. We have to teach students to understand the limitations of ChatGPT: how it's trained, what kind of hallucination is possible, what the issues are, and that maybe this output is not accurate. You give them an example, you let them use their common sense, and sometimes, that's why I say think, pair, and share.
So you can use team-based learning, or pair and group work, so students go do the work and start fact-checking each other's outputs based on what they are reading and asking follow-up questions. That's the beauty of large language models: the moment you start asking follow-up questions and saying, "but I didn't say that." Like one time it told me I had authored a book, and I said, I didn't author that, and it replied, "Oh, you are right, sorry, I made a mistake." So if you keep pushing, and make some of your prompts ones that don't make sense, you'll notice it will bring back information regardless. Just a disclaimer here: it is getting better at that; it's getting better at saying no, there is no reference that supports your point, or that doesn't exist. And always, the faculty member is your subject matter expert, so it's always a good idea to contact your faculty about that. As I said, verification is important: how to verify different sources, being aware that there are biases. Another thing some of us have a hard time with is the context. Remember what I said about prompt engineering? Part of the techniques or frameworks of prompt engineering is giving the context: "You are a large language model. You are a high school teacher who teaches biology, and you are talking about the nucleus, or whatever, DNA sequencing. Design an assignment for ninth-grade students who will be interested in understanding the different types of DNA." See the way I'm saying it? I'm providing the context, the role, the task, the expected output. That way students will be able to verify: "Wait, I'm talking about high school, and you gave me something at maybe graduate level," or "you gave me something not really related to DNA sequencing; it became a math topic rather than a biology topic." So I think that's how, again, it's challenging. I'm not saying it's going to be an easy task. This is very challenging, but doable.
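[As an illustration of the role, context, task, and expected-output framing Dr. M describes, here is a minimal sketch in Python that simply assembles such a prompt as text. The biology scenario and wording are hypothetical; you would paste the resulting prompt into ChatGPT Edu and then critique what comes back against the role and audience you specified.]

# Assemble a prompt using the role / context / task / audience / output framing.
role = "You are a high school biology teacher."
context = "The class is studying the nucleus and the basics of DNA sequencing."
task = "Design a short assignment on the different types of DNA sequencing."
audience = "Ninth-grade students seeing this topic for the first time."
expected_output = "List three questions, each labeled with the skill it practices."

prompt = "\n".join([role, context, task, audience, expected_output])
print(prompt)  # paste into ChatGPT Edu, then check the response against the stated role and audience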
Matt Markin
Yeah, I mean, it's essentially about testing it out but not just believing what you're told, in a sense, and double-checking. A few months ago, I asked ChatGPT about architecture majors, and its response made it seem like we have an architecture major at CSUSB. I asked a couple of follow-up questions, and it was trying to direct me to contact the Art and Design department. When I eventually said, well, we actually don't have an architecture major, then it was like, oh, true, but here are some other schools that have it. So, right, it's always these follow-up questions, but yeah, always double-checking. Now, you were mentioning policies with AI, and maybe there being a blurb in the syllabus, or the department having a policy. Do you know, at the institutional level for CSUSB, does CSUSB have a policy that departments and faculty then use as an umbrella, or is it vice versa, where faculty and departments create it and then the university has an overarching policy?
Dr. Fadi Muheidat
Yeah, actually, the policy the university developed and adopted has the three categories, completely prohibit, permit with limitations, or fully integrate, and it leaves it to the faculty's discretion, right? So again, it's just a policy that I can use, and we also assist faculty with language they can possibly use in their syllabi, and they have control over all of that. Maybe some departments do more; again, I'm not sure. At least in my department, we didn't talk much about having a department-wide stance, like, this is exactly how we need to handle it. For example, in computer science and engineering, some might say you shouldn't allow it, because it's the core of what we are teaching our students to do. It's difficult for me to assign a lab project on programming in Python or C++, because you go to ChatGPT and in five minutes it's done, right? Even though we have solutions for this, back to the instructional design we want to do. But at this time it's left to each department and each unit to decide the best way to fit their needs; there's no obligation that you have to use it.
Olga Valdivia
Awesome. So my last question is, is there an evaluation plan in place to track the impact of ChatGPT Edu on student learning?
Dr. Fadi Muheidat
What I know is that the university did a baseline survey of students and faculty so we can learn more about their experience, and I'm pretty sure this will be a pre-and-post design. I don't have really solid knowledge of the full plan, but I know some faculty have initiatives to create their own performance metrics so they can see how our students are using it, or maybe ethical audits to see how students adhere to the ethical guidelines set in the syllabus or in their AI literacy materials. That could be for research purposes, for other proposals, or to improve future offerings of their courses. As I said, the thing I know, and I've seen it in MyCoyote, is the baseline survey the university did with the rollout of ChatGPT Edu. From the faculty development perspective, we did survey our faculty as a needs assessment: how do you use it, what do you use it for, and what kind of support do you need from us, to gauge our trainings. I think we will see more of this going to students, especially since ChatGPT Edu was rolled out to students, I think on April 14, maybe last week or so. And again, I'm not on the steering committee for the ChatGPT rollout, but I know they are working on it; it's part of their plan to do so.
Matt Markin
Well, I know it's an exciting new initiative; it's going to be an adventure. I'm glad you were able to talk to Olga and me about ChatGPT Edu, its benefits, and what to look out for. And I'm very happy that when we meet with our students, we get to tell them about it. I know we've had some students, I've seen the emails, who have seen it on their MyCoyote account and are testing it out as well. So it's going to be an ongoing conversation, and we'll see how things go over the year. But thank you so much for chatting with us about ChatGPT Edu.
Dr. Fadi Muheidat
All right, thank you so much for having me. Really, it's wonderful to talk about it. Again, this is ongoing and always improving and updating. If I can give one piece of advice to our students: please go learn it, embrace it, and use it actively, not just in a passive way. Integrate it and see how it can provide a workflow for you, especially because when you go into industry you will be more competitive. I always like something José Bowen said: we are not losing our jobs because of AI, but our jobs are going to change. Basically, we need to change the way we apply for jobs, because skills are changing. Some things are going to be automated with generative AI, which is good for us to learn about, and at the same time, when something is automated with AI, it opens other opportunities and jobs for you to develop those tools. That's why we are really excited about this initiative: not just using it for homework or whatever it is, but making sure students are using it and understanding it, so they can be better users and career ready. Let's call it that.
Matt Markin
Yeah, I know Olga and I look at it as a sandbox: you can play with it and see where it takes you.
Dr. Fadi Muheidat
Correct, correct.