Schools and universities will start this academic year cautiously regarding AI content-generation platforms, without a generally accepted guide for their use, but with far more tolerance towards them than during the previous, turbulent academic year.
This is the first academic year that begins with ChatGPT and other content-generation platforms as a legacy to be addressed rather than a novelty to get acquainted with.
Are they the enemies of education, professional training, critical thinking, creativity and knowledge or just another new tool that the education system continues to adapt to?
Schools and universities do not have a shared answer to these questions, and anyone about to start classes can expect an approach to ChatGPT and other platforms tailored to their particular school or university.
There has not been much time to adjust: ChatGPT was launched only last November as a revolutionary content-creation engine.
All the approaches tried so far will carry over into this first "regular" academic year with AI tools as a legacy: outright bans, permission to use them only under new regulations, and case-by-case review of chatbots pending a final decision.
The ban proved to be ineffective
The first wave of banning the use of chatbot platforms has already passed on the eve of the new academic year.
This was a reflex reaction of educational administrations to a technological innovation that seemed poised to wreck the educational system through massive abuse and elusive plagiarism.
Just two months after the appearance of ChatGPT, the largest school district in the US - New York City Public Schools - banned its use in order to prevent cheating and plagiarism in student assignments.
It was immediately followed by the second-largest school district in the US, Los Angeles Unified, which blocked access to ChatGPT, and the ban soon spread to other parts of the country.
However, the ban was lifted in New York public schools last May, and other US school districts followed.
"While initial caution was justified, it has now evolved into an exploration and careful examination of this new technology's power and risks," said David Banks, the chancellor of New York City Public Schools.
Many schools and universities around the world have gone through a similar evolution in recent months: first banning the tools, then, after many conversations with education experts and technology companies, softening their decisions - mostly on the same reasoning Mr Banks gave.
Accepting AI tools cautiously
At this moment, right before the start of the new academic year, the dominant trend seems to be the cautious acceptance of chatbot platforms, with an effort to limit the field for their abuse.
The ban has become more or less meaningless because those who impose it lack the technological means to enforce it. ChatGPT is not the only platform that can be abused for writing student papers or solving tests; today there are hundreds of them.
Also, blocking access over a university's Wi-Fi is easy to bypass (via a VPN, for example), and it is impossible to impose such restrictions when students work from home.
That is why the widely accepted approach is to adapt to the new reality instead of prohibiting it.
This is founded on the idea that a technological innovation like chatbot platforms has numerous benefits and should be employed in the educational process, while means are sought to reduce the possibility of abuse.
The last academic year, spent largely in institutional anxiety over a possible flood of cheating and plagiarism, showed that students did not misuse AI tools to a worrying extent.
Only 17% of Stanford University students used ChatGPT for final exams, according to an informal student survey published last January, at the peak of interest in chatbots.
The vast majority of those (about 60%) used it only for exam preparation, brainstorming or formulating ideas - the pattern that is becoming mainstream: using the platforms to support the learning process, not to subvert it.
Change of pedagogical approaches and codes
In the short term, schools and universities will be forced to change some of their current pedagogical practices to accommodate AI tools, particularly how tests, essays and other work are evaluated and supervised.
Additionally, since AI use will now have a specific, recognised role, institutions need to change their policies on plagiarism and other forms of dishonourable academic conduct.
Many have already done so in preparation for a new and challenging academic year. The solutions differ, but most promote the positive sides of AI platforms - their potential to stimulate discussion, for example, or to improve the scientific and linguistic neatness of student papers.
The acceptance of AI tools as an inevitability follows both the demands for their "decriminalisation" and an admission that human control over all the possibilities of their abuse cannot be established.
"Beyond a desire to encourage responsible experimentation ... an important factor in taking this position is that detection of AI-generated content is unlikely to be feasible," stated Monash University in Melbourne in its contribution to the parliamentary debate on the use of AI in education.
Adaptation to AI technologies in education will be watched closely during the coming academic year, because no one expects maximum benefit with minimal abuse in the short term.
Those who ban chatbots and those who, willingly or not, embrace them agree on one thing: everyone involved in education has a huge job ahead regarding AI literacy.
We cannot have AI-literate students and teachers without AI in schools.