Johannes Cronjé is a Professor at Cape Peninsula University of Technology in South Africa, where he teaches and researches digital teaching and learning in the Information Technology Department. He has supervised more than 140 Masters’ and Doctoral students and published more than 65 peer-reviewed papers on topics such as mobile and blended learning, academic writing, and instructional design. He is a popular keynote speaker at conferences and a visiting professor at several universities around the world. As an advocate for open education, he offers a free online doctoral program on his website: https://sites.google.com/site/johannescronje/doctor-doctor
As an academic writing coach, Johannes Cronjé was instantly intrigued by the potential of generative AI – in his own words, ‘I am completely blown away by how well ChatGPT writes’. He demonstrated the potential in a video in December of last year: How I wrote a conference call in one minute with ChatGPT Artificial Intelligence. It did not end with the call for papers: in February of this year, Prof. Cronjé organized the ‘Teaching and Learning with Artificial Intelligence Symposium’ at CPUT.
He is also a pioneer in triangulating the affordances, limitations and boundaries of AI in writing. As an example, he instructed his students to write a structured literature review using ChatGPT, and used one of the papers to demonstrate how important it is for the author to remain in control of the process: ChatGPT and Literature Reviews.
I am intrigued by the transformative challenges and questions AI poses for the Humanities. In 1935, Walter Benjamin wrote his famous essay about the work of art in the age of mechanical reproduction. Benjamin argued that with reproducibility, both uniqueness and distance are erased; an artwork becomes viewable and possessable at any time and in any place. Since then, we have seen the Internet, non-fungible token (NFT) art by Beeple, and now the generative power of Dall-E, Midjourney and Bing. What does this mean for the “aura” of art, and what does it mean for the integrity of writing and research?
I think AI is the next technology to help level the playing field. In the same way as the etching press made copies of art available, generative AI “art” is making “original” art available to anyone. Where techniques of mass production have made good art and good design available to almost anyone, AI now makes unique pieces available to anyone. As far as the “aura” of art is concerned, I think we must distinguish between the emotional and economic value of art. Somehow the value of Van Gogh’s “Sunflowers” is determined not by market forces, but by a system of galleries and collectors operating in the extremely high-value market. But the art that sparks joy, that you just must hang on your wall – it will spark joy regardless of how it was produced. The Marxist in me has a problem with the “aura” anyway – if it speaks of exclusivity. I live in a country that has an almost four-hundred-year history of exclusivity. I am about inclusivity. In the same way as large breweries have made beer cheaply available to the masses while opening the way to the craft-beer market for people who want something exclusive, AI-generated art has the potential to make more people aware of the aesthetics of their visual environment – and when more people put AI images on their walls, more people may want to buy hand-crafted art.
As a pioneer could you share some practical insights from your experiment where students wrote structured literature reviews using ChatGPT? How do you strike a balance between leveraging AI tools and maintaining the author’s control in the writing process?
Yes, it was a fascinating experiment. I constructed a series of exercises whereby students searched for appropriate literature using Research Rabbit. I then got them to group their results into themes and make mind maps. Finally, they had to convert those into fully fledged literature surveys – and to list the prompts that they used. The results were revealing. Some students used a single prompt: “Write me a literature survey about X and include 20 citations in Harvard format”. For them, ChatGPT produced vacuous text conforming to the format of a structured literature review and containing numerous fake references. Others went through the process step by step, using prompts such as “Give me the proper outline of a literature survey”, “Write an introduction for a literature survey on X”, “Summarize this article”, “Rephrase this paragraph” and “Convert these references into Harvard style”. Interestingly enough, the grading rubric held up. Students who used simple prompts produced essays that showed no rigour, no originality and, most importantly, no internal consistency: the introduction promised things that were not realized in the body or the conclusions, and the citations were fake. Students who followed the process and used ChatGPT to help them write from the sources they had found in Research Rabbit fared much better. They stuck to their mind-map structures, organized their thoughts themselves, and used ChatGPT to improve their language and to create introductions and conclusions that followed the sequence of the main text. Such students scored highly on the rubric. Here is a short video showing how I graded one paper: https://youtu.be/d8xp9dugeOU
For me, the main value of the exercise was to show that my students, many of whom speak English as a third, or “another”, language, were suddenly able to produce coherent text in which they expressed their own ideas and insights with the assistance of ChatGPT – in the same way as they would have done had they worked with the close assistance of a consultant at the university language center. In this way, ChatGPT liberated them from the cultural and historical constraints placed upon them.
In your video demonstration of using ChatGPT to write a conference call in one minute, you showcased the capabilities of AI in generating content extremely quickly. How do you see AI shaping the future of academic writing? More specifically, how will it change peer review and journal policies?
I expect that we will begin in the same way as with the introduction of the motor car: we will have a man with a red flag walking ahead of it. We have already seen this with some institutions trying to ban AI-generated text altogether. And then we saw the immediate rise of AI-detection software, coupled with creative ways of bluffing it by chaining a series of paraphrasers. In grading my students’ work I learnt very quickly that I did not need software to detect AI – my ordinary ability to detect actual stupidity worked well enough. In working with my own graduate students on academic papers and even theses, I have found that AI tools are excellent at suggesting titles, writing abstracts, and re-formulating convoluted paragraphs. The end products of such mundane tasks are usually not flagged by AI-writing detectors – mainly because the content is fresh and original; it is just the syntax and grammar that have been improved.
My take on the ethics of using AI in the production of academic research output is that you should be able to ask AI to do anything that you ask a research assistant to do. If ethically you would be required to acknowledge the work of the assistant, then you need to acknowledge the AI.
So, I believe that in peer-reviewed journals it will reduce the number of times that “reviewer 2” calls for the work to be corrected by “a first-language speaker of English”. It will also put more pressure on reviewers to check for originality of thought, since it is possible to ask AI to reproduce a complete paper using another style of writing, different organizing principles, and so on. In short – you can plagiarize an entire paper, have it peer-reviewed, checked for plagiarism, etc., and get away with it…
As you organized the ‘Teaching and Learning with Artificial Intelligence Symposium,’ could you share some key takeaways and discussions that emerged during the event?
Some of the key issues:
- AI Lies. It lies a lot, and it lies convincingly. Students are bad lie detectors.
- Students struggle to develop good prompts.
- The best way to deal with AI is to follow a constructivist approach whereby students work with AI to produce outcomes.
Video recordings and slideshows presented are available for free from the conference website at https://sites.google.com/site/capelearning/
What AI tools are open in your browser right now and what do you use them for?
Hahah. I have quite a strict protocol about my browser, and there is nothing open now. However, my quick-access menu points to ChatGPT, and Bing opens automatically should I use Edge. I use ChatGPT as a general assistant: “Please change this from passive to active”, “Please change this from 1st person singular to third person feminine”, “Please rewrite this passage removing redundancies and grouping similar concepts together”; or even “Please draft a multiple-choice test from this text. Indicate the correct answers with *”. I have the free versions of QuillBot and Writefull installed in MS Word, Google Assistant hangs around in the background, and I use Dall-E to make pictures for slideshows or websites when required.
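The kinds of editing prompts listed above can also be scripted rather than typed into a chat window. The sketch below is purely illustrative and not from the interview: it assumes the OpenAI Python SDK (`pip install openai`, with an `OPENAI_API_KEY` in the environment), and the task names, prompt wording, and model name are all my own placeholders.

```python
# Hypothetical sketch: wrapping a few of the editing prompts mentioned in the
# interview (voice changes, redundancy removal, quiz drafting) behind one helper.

TASKS = {
    "active_voice": "Please change this from passive to active voice:\n\n{text}",
    "remove_redundancy": (
        "Please rewrite this passage removing redundancies and grouping "
        "similar concepts together:\n\n{text}"
    ),
    "mcq": (
        "Please draft a multiple-choice test from this text. "
        "Indicate the correct answers with *:\n\n{text}"
    ),
}

def build_messages(task: str, text: str) -> list:
    """Turn a named editing task into a chat-completion message list."""
    if task not in TASKS:
        raise ValueError(f"unknown task: {task}")
    return [
        {"role": "system", "content": "You are a careful academic-writing assistant."},
        {"role": "user", "content": TASKS[task].format(text=text)},
    ]

def edit_text(task: str, text: str, model: str = "gpt-4o-mini") -> str:
    """Send one editing task to the API and return the rewritten text."""
    from openai import OpenAI  # deferred import so build_messages stays testable offline
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model, messages=build_messages(task, text)
    )
    return response.choices[0].message.content
```

For example, `edit_text("active_voice", "The ball was thrown by the boy.")` would return a rewritten sentence; the point is only that each routine prompt becomes a one-line call.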
As a supervisor of graduate students I tend not to use much software myself, but I encourage them to use Elicit and Lateral in their literature searches and VOSviewer for visualizing data, and generally to explore such tools and report back to their peers on their usefulness.
How do you observe the adoption and integration of AI in higher education across different regions? Are there any notable trends or disparities that you’ve noticed?
I think it is one of the technologies that has most equitably distributed itself in the shortest time. My colleagues at universities in India, Ghana, Ethiopia, Norway, Finland, and Canada all use it equally well and with equal healthy skepticism. My colleagues in the US, though, are a little more conservative and are still trying to get it banned.
Overall, do you think artificial intelligence will increase or decrease opportunities for universities in South Africa and similar emerging economy countries?
I once heard it said that it is not AI that will take our jobs, but people working with AI. I think there is much truth in this. After all, I would not have laser treatment of my eyes if the doctor did it by hand. Yes, the equalizing effect will work in both emerging and mature economies, but I worry about instances where AI might learn, and therefore amplify, our existing biases.
For graduate students who are unsure how to use AI effectively, are there some resources you can recommend?
My collection of resources is here: https://www.dropbox.com/sh/xqob3f9wee75cc0/AAB6eqRo0VnlFgtxZewBy5yNa?dl=0
I also recommend Dr Andrew Stapleton’s YouTube channel: https://www.youtube.com/@DrAndyStapleton
Finally, what are your next endeavors in the field of AI and academic writing?
I am working with a graduate student to develop an automated research-proposal writer that will run on the ChatGPT API. The app will begin by asking some coaching questions about what motivates the student, then move on to the field of specialty, then ask the student to reflect on their current reality and help them find the best possible match between their situation and a possible research project. Once the match has been established, the AI will assist the student to write the proposal using appropriate literature and the correct format and style.
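The staged flow described above (motivation, then specialty, then current reality, then a matched proposal) could be sketched roughly as follows. Everything here is hypothetical: the stage names and question wording are my own, and the real app would send each stage through the ChatGPT API rather than just collecting answers locally.

```python
# Hypothetical sketch of the coaching flow: ask staged questions, collect
# answers, then assemble one prompt asking the model to match the student's
# situation to a research project and draft a proposal outline.

STAGES = [
    ("motivation", "What motivates you to pursue doctoral research?"),
    ("specialty", "Which field of specialty are you working in?"),
    ("reality", "Describe your current reality: time, resources, access to data."),
]

def run_interview(answer_fn) -> dict:
    """Walk the coaching stages in order, collecting one answer per stage.

    `answer_fn` maps a question string to the student's answer; in a real app
    it would be an interactive chat turn rather than a plain function."""
    return {name: answer_fn(question) for name, question in STAGES}

def proposal_prompt(answers: dict) -> str:
    """Assemble a single prompt matching the student's situation to a project."""
    profile = "\n".join(f"- {key}: {value}" for key, value in answers.items())
    return (
        "Given this student profile:\n"
        f"{profile}\n"
        "Suggest the best-matching research project and draft a proposal outline "
        "in the required format and style."
    )
```

The design choice worth noting is the separation of the interview loop from prompt assembly, so each coaching stage can later be replaced by its own API-driven dialogue without changing the final proposal step.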
Here are ChatGPT’s answers to questions 1 and 2 (actually much better than mine…):
Question 1: The advent of AI and digital technologies has indeed brought about transformative challenges and questions for the Humanities, particularly in the realms of art, writing, and research. The concept of “aura” in art, as famously discussed by Walter Benjamin in his 1935 essay, has undergone significant changes in the age of mechanical reproduction, and now AI-driven generative technologies further alter our understanding of it.
Benjamin argued that the “aura” of an artwork is the unique and authentic presence that is tied to its originality and historical context. With mechanical reproduction, such as through photography or printing, an artwork’s uniqueness and distance from the viewer are diminished because multiple copies can be easily created and disseminated. This mass reproducibility detached the art object from its original context, changing the way it was perceived and experienced.
The internet and the recent developments in AI art, represented by non-fungible token (NFT) art by artists like Beeple and generative models like Dall-E, Midjourney, and Bing, further challenge the traditional notions of aura. NFT art, which uses blockchain technology to create unique tokens for digital art pieces, attempts to restore some level of uniqueness and scarcity in the digital realm. On the other hand, AI generative models create new artworks that often challenge the idea of authorship, as they are not created by human hands but by algorithms.
In the context of writing and research, AI poses both opportunities and challenges. AI language models like GPT-3, for instance, can assist researchers and writers by generating content, providing insights, and even automating certain aspects of the writing process. However, this raises questions about the integrity of research and the role of human creativity and critical thinking.
The challenges lie in ensuring that AI-generated content is accurate, unbiased, and ethically sound. Plagiarism and the potential to manipulate information are concerns that need to be addressed. Additionally, the human touch in writing and research, which involves emotions, lived experiences, and subjective insights, might be lost in the process of relying solely on AI-generated content.
In conclusion, the rise of AI-driven technologies in the fields of art, writing, and research has profound implications for the concept of “aura” in art and the integrity of scholarly work. While these technologies offer exciting possibilities, it is essential to approach them thoughtfully, critically, and ethically to strike a balance between the convenience they provide and the preservation of authenticity and human creativity. As the technology continues to evolve, we must continually evaluate its impact on our understanding of art, knowledge, and humanity itself.
Question 2: As an AI language model, I don’t have personal experiences or the ability to conduct experiments. However, I can provide insights based on how AI tools like ChatGPT can be utilized in the context of structured literature reviews and maintaining author control.
Preparing the research question: Before using AI tools like ChatGPT, it’s crucial to have a well-defined research question or topic. Clearly outline the scope and objectives of the literature review to ensure that the AI-generated content aligns with the author’s intentions.
Using AI as an assistant: AI tools like ChatGPT can be valuable assistants in the literature review process. They can help researchers find relevant sources, suggest potential subtopics, and even provide summaries of existing research. However, it’s essential to treat AI-generated content as a starting point rather than a finished piece.
Setting clear guidelines and constraints: Authors should provide clear guidelines and constraints to the AI model. This can include specifying the preferred style, tone, and depth of analysis. By doing so, authors retain control over the direction and presentation of the literature review.
Reviewing and editing: While AI-generated content can be a time-saver, it’s crucial for authors to review and edit the material thoroughly. AI may not always understand the nuances of the topic, and human oversight is essential to ensure accuracy and coherence.
Citing sources and plagiarism checks: Plagiarism is a significant concern when using AI tools for writing. Authors should ensure that proper citations are included for all AI-generated content and use plagiarism detection software to verify originality.
Integrating human insights: AI can process vast amounts of information, but it lacks the depth of human understanding and insights. Authors should incorporate their own critical analysis, interpretations, and personal perspective to enrich the literature review.
Ethical considerations: Researchers must be mindful of the ethical implications of using AI in scholarly work. It’s essential to acknowledge the contributions of AI tools appropriately and be transparent about their usage in the research process.
Limitations of AI: AI language models like ChatGPT are powerful, but they have limitations. Authors should be aware of these limitations and not rely solely on AI-generated content. Using AI as one of several tools in the writing process can lead to more robust and well-rounded literature reviews.
In summary, leveraging AI tools like ChatGPT in writing structured literature reviews can be beneficial for researchers, but it requires a balanced approach. Authors should use AI as an assistant, set clear guidelines, review and edit the content, and integrate human insights to maintain control over the writing process and produce high-quality, original work.
Let’s agree to disagree on that last point. I prefer the human answers, that never end with ‘In summary’, and are more poignant and fun! Thank you!
Johannes Cronjé is a professor of Digital Teaching and Learning in the Department of Information Technology at the Cape Peninsula University of Technology. Prior to that, he was the Dean of Informatics and Design. He has supervised more than 140 Masters’ and Doctoral students and published more than 65 peer-reviewed papers. He is a sought-after international keynote speaker and has been a visiting professor at seven universities internationally. Full CV