Are AI Robots Really the Future of Education?
By Sameer Kumar
The coronavirus pandemic has changed the norms of teaching and education over the past few years. The transition from handwritten to typed essays and assignments is just one of many examples. Some organizations, including the College Board, have even switched to administering their exams digitally, such as the SAT (originally the Scholastic Assessment Test). The chaotic year of 2020 also opened the door to some effective ideas in learning, such as remote synchronous classes through Zoom or Google Meet.
Students were learning to submit their assignments digitally via software like Canvas and Anthology, and came to expect their grades the same way. Above all, education became more tech-driven than ever imagined. Fast forward two years to something that sparked enormous momentum in 2022: the introduction of ChatGPT, an AI-powered chatbot built on a ‘human-like’ language model.
And everyone was talking about it. The worst part was that even though the platform was brand new to the industry, the public took full advantage of it despite many features still being in beta testing. The excitement of receiving instant responses to anything and just about everything is what fueled its alarming fame. Users could ask the language model to decide between multiple given choices, and it would. It could summarize news articles, short stories, and novels, create lists, and even write an essay from a prompt. Here’s the big question: can it do all that accurately? When ChatGPT first launched, one of the main issues was that many people relied heavily on it simply because it could accomplish so many tasks.
While all users should have been thinking about how precise those outputs actually were, most weren’t. Some students were simply ‘hyped’ about the chatbot’s ability to write a six-page essay within seconds. This is what caused the most problems. And yes, to this day, while developers have spent considerable time improving the system and fixing some of its major errors, artificial intelligence simply isn’t equipped to replace human teachers and take on the role of ensuring students learn adequately, at least not yet.
Let’s dive into the psychological side of what machine-learning platforms take away. In a traditional in-person class, when teachers and professors encourage students to engage, interact, and help each other, they truly want everyone to make the most of the immersive experience. They don’t let students simply come to class, stare at the projector, take the exam, and go home. Digital models like ChatGPT, on the other hand, don’t allow for the same practice. Communicating with a bot is nothing like exchanging ideas with another human.
Often, those necessary skills aren’t learned properly. For example, if a user communicates in sentence fragments and informal language, bots like ChatGPT or Google Gemini can usually still interpret what is meant. Can they output the correct response? Maybe not. But the machines can generally understand what is typed in; the conflict usually lies in how the bots process what users want them to do. And that casual style of conversation is hardly ideal when speaking with teachers or professors. The guidance received from an AI bot is not the same as human-to-human mutual support.
With constant exposure, teachers can expertly sense when kids are having trouble understanding something. Students may be shy about raising their hand to ask for help, but their puzzled, stressed looks cue teachers to adjust their teaching style. AI, unfortunately, cannot yet read facial expressions. In addition, when it comes to grading assignments and exams, AI can indeed score work against specific guidelines, such as when a chatbot is asked to look for certain points.
But ChatGPT-like tools typically offer one solution for all and don’t do well at explaining concepts in different ways for each user. They have an even harder time giving the user clues toward a question’s final answer; instead, they simply output the final response (which, again, may or may not be accurate).
This doesn’t allow users to develop their problem-solving and imaginative abilities, which raises concerns for when students take their exams on paper in class. Sometimes the techniques ChatGPT uses to approach a problem aren’t the ones being taught in the classroom. That means students won’t be prepared for success if they neglect their instructor’s support and use AI as their primary source of learning.
Let’s face it: relying on such technology now may even get students through their computer programming course assignments. Leaning that heavily on these bots is tough, but with enough trial and error it may be possible. Will it help them in the long run when they enter their careers, though? Not quite. College students who use ChatGPT for their Programming I or intro-to-coding assignments are most likely computer science, information technology, or other engineering-related majors. These students have dreams of becoming software developers, web developers, network administrators, and so on.
They might get away with using AI for their world history homework, but strong coding skills are obviously essential to have any chance at those jobs. Employers look for experience in their applicants, and using ChatGPT or Google Gemini is not “the experience” they want. The same goes for any other field. In fact, managers expect employees to build on what they know and plan solutions to problems that may seem impossible to solve. This requires creativity and thinking outside the box. Using machine learning models for the most basic tasks signals one thing to a hiring team: the applicant is not qualified.
During the pandemic, many students attended school remotely. Between the sudden problems caused by the rapid spread of the coronavirus and the digital divide in internet connectivity, teachers may have become more lenient in their grading policies. Some schools adopted policies where students received full credit just for submitting an assignment on time. This created uncertainty about how strictly academic integrity was being enforced. Students who spent hours on their homework and truly wanted to learn were getting the same grades as those who finished in five minutes by cheating. It was unfair. Despite schools enacting stricter rules after 2020, some students who were juniors in high school, or even graduating, had an unfair advantage when applying to colleges, especially if they did exceptionally well because of a year of cheating.
Many colleges dropped their standardized testing and cumulative grade point average requirements, resulting in higher acceptance rates. There was less competition between a 3.7 and a 4.0 GPA. This had both pros and cons. More students had the chance to attend a university of their choice; at the same time, those who took their academics very seriously and set high goals for success felt their hard work no longer had the same impact.
College admissions staff couldn’t know whether straight A’s on a transcript came from a diligent work ethic or from copying whatever answers AI provided. As discussed previously, some high school teachers weren’t required to check students’ work for plagiarism or dishonesty, purely because the transition from paper to digital submissions was so new. Many students and teachers were encountering tools like Google Classroom for the very first time. It was, indeed, a forced change.
Another area where AI has been introduced is exam grading. Typically, a paper exam is graded by the teacher, who can comment on whether an answer was fully correct or whether something was missing that cost the student points. AI, on the other hand, might grade multiple choice questions reasonably accurately by indicating whether the chosen answer is right or wrong. But it falls short when it comes to explaining why something was scored the way it was. Students can’t rely on AI for valuable feedback on why an answer was right or wrong, so they won’t learn the reasoning behind it.
Like many other technologies, artificial intelligence has made great strides in every area where it is used. We’ve seen it implemented in transportation, smart devices, retail and e-commerce, and even healthcare. While there is still room for development, AI has changed vastly since it was first established. It can indeed serve as a useful tool to guide teaching and learning, but we can’t fully depend on it just yet. No matter how much AI improves over the next decade, it won’t be able to substitute for schooling as we know it. There is nothing like the environment and genuine emotion brought by a full classroom of students eager to learn from their teacher, fully engaged and deeply inspired.