Without Boundaries, AI Makes Us Complacent

AI now has a significant presence in student academics, but the students who use it may not realize how over-reliant on it they have become.
Chatbots and AI models have improved dramatically over the past two years. Students have developed a reflex to turn to AI whenever they can’t solve a homework question, need a second opinion on a text, or want to come up with an idea for a project. With all these applications, I began to think that while we are given rules about how much we can use AI, we have not yet been taught to balance working out a solution ourselves against asking AI to generate one. I had several conversations with computer science majors from different backgrounds to see just how much influence AI has on their studies.
The first was a first-year software engineering major. Given student speculation about AI’s flaws, she and I touched on the limits of trusting AI as an external source of judgment.
“Let’s say I am working on one project [and] reviewing someone’s code. Sometimes I don’t know what something means, so I put it into ChatGPT and ask, ‘What does this variable mean,’ or ‘What is this function doing?’”
“So you don’t always trust it, and that’s why you only use it about once a week?” I asked.
“Yeah, I think it’s not 100% reliable sometimes.”
“How so?” I replied.
“It might make up things that maybe aren’t [real] code, or for things that are missing, it doesn’t fill in the blanks, so I don’t think it’s completely reliable, and you kind of have to do your own research before you ask.”
Every student should have this mindset: treat AI as a source of quick, concise ideas and answers rather than as a final authority. Media representation has painted AI as an all-knowing source of accurate knowledge, which is part of why people like to use it. However, even as technology companies update their AI models and platforms each year, people should be aware that AI has not been proven foolproof; it still makes mistakes.
On the subject of AI as a learning tool, I asked a second-year computer science major and finance minor for his insight on why AI should only be used to obtain simplified facts that help us work out small solutions to academic questions and problems.
“I think getting access to high-quality resources is one of AI’s main tasks, which I am actively taking advantage of. Let’s say I am struggling in any subject or coding lesson, I can easily ask them, ‘Hey, how can I learn this’ or ‘How can I do this’ for a decent explanation. We could do that in Google, for sure, and we could have gotten the same answers, but AI basically saves us tons of time, [a lot] of effort, and gives us compressed information at the same time.”
“Given that, what makes AI’s help better than a human perspective?” I added.
“It’s free, easy, accessible, and works 24/7, but I would say you aren’t gonna get the [same] face-to-face experience that you get from a professor or tutor. In general, [there could be a] good balance: if you can afford a tutor, of course, get a tutor, but if you cannot afford it, then you can go for AI. One thing I want to mention is that no matter how smart the tutor is, AI always exceeds them, and AI knows more information than any professor.”
Given that AI is simultaneously helpful and untrustworthy, like two sides of a coin, I believe there is room for students to develop a healthy usage of AI by setting boundaries and accepting the limits of its knowledge.
Another second-year computer science major I talked to addressed how he thinks AI can damage our thinking processes if we overuse it.
“In terms of academics, I use AI mostly for explaining and breaking down complex topics, usually when [something can’t] be easily explained or I can’t understand it. I found myself using it a little too much, and I don’t want to become complacent, because when you overuse it, you start to not really digest the material you are learning, and it can lead you to not really understanding the concepts. As long as you use it in a way that promotes learning for yourself, I think it’s acceptable.”
“Do you think students should set boundaries for themselves when using AI?” I asked.
“If you tend to use it [too much, you] kind of turn your brain off. It’s like building a bad habit: you’re training your brain not to think, and you’re training your brain to rely on AI. You want to use it as a resource and not the end [goal], just something for you to build onto.”
“What could help students better trust AI in the future as it evolves?” I added.
“I definitely feel like AI could pull from actual resources or actual links because currently it’s known for making stuff up or pulling false information. Adding filters and quality checks to AI results would improve the quality and trustworthiness in the first place.”
These are ideas we should keep in the back of our minds as we continue to use AI to assist us in academic matters. It’s on us to control how much we allow AI to influence our thinking and our answers as we complete assignments and draw up project ideas. All of my conversations have led me to the same conclusion: until AI is proven far more reliable, we should set boundaries on how much we use it to help us solve academic problems.