AI in Education: An Interview with EdTech Pioneer Inge de Waard

I recently had the honor of welcoming Dr. Inge de Waard (Learning Strategist at EIT InnoEnergy) to my online design thinking classroom at the Asian University for Women. For my students, Inge connected the advance of generative AI back to Skinner’s early teaching machines, the rise of e-learning, big data, learning analytics, and the ‘deep learning revolution’, which she describes as ‘machines teaching machines how to interpret data’. ‘AI in Education means that learning and machine learning are coming together,’ Inge explained. She anchored the debate on dystopian versus utopian outcomes of AI by positioning the technological innovation within the Gartner hype cycle.

Inge described the tremendous creative potential of AI: “We can generate new designs based upon existing designs. We can generate new voices based upon the voices that we all have. We can even create an environment created by AI that will help us with specific tasks”. To exemplify these points, she shared a fascinating example with the students that uses Resemble.ai, Midjourney, D-ID, and Wondershare Filmora, with paintings, look-alikes, and pictures as the sole source material to create a talking video segment.

In the interview for AACE Review, we follow up on some of the questions raised during the classroom discussion.


When discussing the impact of AI on society, the students were concerned about far-ranging changes to the labor market. You pointed out that in machine learning, humans still conduct quality control of data outputs. However, this does not create high-earning jobs. As you put it when talking to the students at AUW: “The ones who check the data only get paid peanuts”. I wonder how you think about ‘data dignity’, a concept put forth by Jaron Lanier?

As AI infiltrates society on various levels, a transparent understanding of AI becomes more urgent (e.g. specific sectors using AI tools to take on basic tasks like legal references, or AI being used to create personal browsing assistants, as the new Bing is doing). While AI has always been the subject of passionate debates on ethics, it is now growing into a system that influences all of us without us even knowing or understanding how it impacts our daily lives. I fully agree with Jaron Lanier’s point that we need to ensure data dignity on all levels and for everyone. Some initiatives have started; one of these is Data.org, whose goal is to use data to move life towards a better world (https://data.org/). My inspirational colleague Ronda Zelezny Green can provide you with additional background information. If technology is adopted, we need to make sure it aligns with the values we want to strive for. To me, that would be a world where all people can live a peaceful life, enjoy learning, and be actively involved in a society aimed at empowering people to live active, full lives. This is one of the reasons that embedding ethics in any curriculum is so important: realizing the impact on society through an ethical lens creates more awareness for all. But not everyone has the same goal, and AI is so complex that it can easily be used to change the course of values. Let me give some examples.

Although AI has no cognitive values of its own to take up or reject, it does absorb the moral values that are dominant in our society. The more information AI gathers on a subject, the more weight it gives to that information, which distorts the equality and diversity of information. To show how this can negatively affect the results that come out of AI, I offer two examples:

One is Cambridge Analytica (https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Analytica_data_scandal), where the individual data of real people were analysed to see whether a person could be manipulated towards a different political stance (psychological targeting). This meant that freedom of thought was corrupted and manipulated, sometimes with fake argumentation that felt like real-life fact. This started in 2010; by now, AI can be much more nuanced in its use for psychological targeting.

The other is AI bias, which has affected hiring procedures, correctional sanctions, healthcare, and depictions of what is considered professional or not (try it yourself: type ‘CEO’ into an image generator and look at the ‘diversity’ of the images to get an idea). Article of interest: https://towardsdatascience.com/real-life-examples-of-discriminating-artificial-intelligence-cae395a90070
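A toy sketch can make this weighting effect concrete. The corpus below is invented and deliberately imbalanced, and the “model” is nothing more than a frequency counter, but it shows the mechanism in miniature: whatever pattern dominates the training data dominates the output.

```python
from collections import Counter

# Hypothetical, deliberately imbalanced "training data":
# pairs of (role, person depicted), skewed the way scraped web data often is.
corpus = [
    ("CEO", "man"), ("CEO", "man"), ("CEO", "man"), ("CEO", "man"),
    ("CEO", "woman"),
    ("nurse", "woman"), ("nurse", "woman"), ("nurse", "woman"),
    ("nurse", "man"),
]

counts = Counter(corpus)  # how often each (role, person) pair was "seen"

def most_likely(role: str) -> str:
    """Return the completion the model has seen most often for `role`."""
    candidates = {who: n for (r, who), n in counts.items() if r == role}
    return max(candidates, key=candidates.get)

# The "model" simply echoes whatever dominates its data:
print(most_likely("CEO"))    # -> man
print(most_likely("nurse"))  # -> woman
```

Real generative models are vastly more complex, but the principle carries over: skewed inputs become skewed, confidently delivered outputs.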

You pointed out the potential of AI for delegating routine tasks and supporting decision-making based on large datasets. This will obviously change our field: instructional design, edtech research, IT development, even teaching and learning support. One of the jobs you mentioned that would change significantly is professional technical proposal writing. Are there other examples that come to mind?

All innovations have an impact on society. Just think of the electric light bulb, the car, or photovoltaic solar panels. Once an innovation stands the test of time, it influences professional jobs as well. With the emergence of the internet, more people had access to different types of information, and the paper encyclopedia fell out of fashion; with green energy coming to fruition, engineers need to be reskilled to work in the sustainable energy sector, as jobs in fossil fuels decline. In a sense, this is nothing new; it has happened throughout history (cars replaced horse carriages, so drivers needed to become chauffeurs). With AI you see a similar trend. Report writing can be replaced up to a point, as GPT-4 gives rise to complex, useful writing tools. However, people are still needed to check the content and revise where necessary. Other affected jobs include live translators (think of automated captions taking over translated subtitles), telemarketers (think of chatbots for specific sales), graphic and web designers (partly replaced by AI tools like DALL-E and Midjourney), and legal preparation (e.g. Luminance, but also ChatGPT – https://www.wired.com/story/chatgpt-generative-ai-is-coming-for-the-lawyers/). Healthcare is impacted too, both for professionals and for patients (see this article by McKinsey: https://www.mckinsey.com/industries/healthcare/our-insights/transforming-healthcare-with-ai).

But like all disruptive innovations, AI will not only reduce specific societal roles; it will give rise to new applications as well. AI is now increasingly embedded in courses and curricula, as more people need to work with AI to make it suitable for our societal needs. This means new jobs are created too. As an example, I refer to a Data Science summer course that EIT InnoEnergy has set up for all interested Master School students, where AI tools are used to visualize data and do forecasting in sustainable energy. The simple fact that we built such a course shows how AI provides new job opportunities as well: teachers take up new course topics, and students are prepared for new jobs. And for those whose jobs are influenced by AI (like teachers), we offer continued professional learning in the form of a teaching staff newsletter and so-called ‘Teacher Webinars’: short webinars of less than 30 minutes on AI topics.

I loved that you went back to explain that, in the end, these are algorithms. You seemed a little less interested in prompt engineering, stating ‘Natural Language Processing algorithms process keywords. But keywords no longer sound fancy, so within AI tools they are now called prompts’. Are there good learning resources you can recommend to students who want to understand the technology behind GPT models?

There are theoretical as well as practical approaches, so let me quickly list a bit of both. If you are looking for theoretical knowledge, some courses might enlighten you. There are quite a few good basic AI courses out there, for instance this course on Udemy that covers ChatGPT with good real-life examples (it costs $15): https://www.udemy.com/course/artificial-intelligence-az/

If you are interested in applying AI tools for specific purposes, then I suggest testing out the trial versions of some of the tools and… using YouTube! If you find a good tool, chances are there are already users sharing their findings on YouTube, which is still my main go-to if I want to learn anything. If you want to create a visual character using AI tools, look at this 7-minute video: https://youtu.be/LwlUuUrAUzk

And if you are wondering what prompts are, look at the difference that different prompts make in visual AI tools; here is the documentation on prompts in Midjourney: https://docs.midjourney.com/docs/prompts

BTW: a strong AI tool has great documentation!
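To see what a prompt is under the hood, here is a minimal sketch. It assumes the open-source tiktoken library (the tokenizer used by GPT-4-era models): the text of a prompt is simply encoded into a sequence of tokens before the model processes it, which is why prompts and the keywords of earlier NLP systems are close cousins.

```python
# Minimal sketch: a "prompt" reaches the model as a sequence of tokens,
# much like the keywords earlier NLP systems processed.
# Assumes the open-source tiktoken library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

prompt = "a watercolor painting of a lighthouse at dusk"
tokens = enc.encode(prompt)

print(tokens)              # the token ids the model actually consumes
print(enc.decode(tokens))  # decoding round-trips back to the prompt text
```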

What specific privacy concerns are raised by generative AI? When talking to the students, you mentioned the amount of personally identifiable information shared on social media. Does AI add another layer to this problem, and if so, how?

Yes, there are certainly privacy concerns. An in-depth overview of the potential privacy pitfalls of AI tools and their consequences can be found in the documentary ‘Coded Bias’ (https://www.youtube.com/watch?v=jZl55PsfZJQ), which investigates bias in algorithms after M.I.T. Media Lab researcher Joy Buolamwini uncovered flaws in facial recognition technology.

Another option is the documentary ‘The Social Dilemma’ (trailer: https://www.youtube.com/watch?v=uaaC57tcci0). Although it focuses mainly on big data and the fact that algorithms use this data to produce specific outputs, it gives a clear look behind the scenes of how algorithms are engineered. I feel it takes a less critical stance than Coded Bias.

Basically, AI makes it even less transparent which ‘rules’ govern the results we are all served when our data passes through the cloud. In theory, just as AI tools make up content when they cannot find an answer, we might be getting search results that do not actually exist: fake information. If we consider that some people have malicious intentions, they can manipulate data results towards their own goals. This is in line with Jaron Lanier’s quest for more dignified data use.

Another dark side of AI is that dominant societies are feeding the information models. Can you elaborate on this point and on how it might lead to a dominance of information in the English language?

Most AI tools based on Natural Language Processing are English-oriented at the start. This has grown historically and is also a direct result of English being one of the few globally spoken languages. As AI programmes like deep learning were developed in English-speaking parts of the world, the English language is often used to test new releases.

Could you also see the opposite come true based on advanced translation capabilities?

There are indeed more linguistic options: as translation tools become more proficient, more languages are taken up. But even then, more locally spoken languages are still underrepresented in what is on offer. To me, this also means that AI is adding a linguistic digital divide on top of the digital divide currently growing between those who understand and are able to use AI and those who are not (yet) familiar with AI features.

When discussing AI-giarism (AI-assisted plagiarism), you made an interesting point: ‘It will lead to process over product’. What pedagogical opportunities do you see when we no longer check for originality and instead focus on reflection about the generative process?

To me, thinking is, to a large extent, copying others. We mimic those we admire; on many occasions we share their ideas as if they were our own. As we learn, most of what we learn is a reverberation of what has come before us. So, what is originality? What is a ‘correct result’? Frequently our assessment methods are aimed at rehashing what we have learned, so even there a result is just mimicry of what we heard earlier. If that is the basis, then surely AI simply produces results very similar to students’. But of course, if students do not come up with a result themselves, they have not gone through the learning and thinking process of getting to that result.

In terms of assessment, we need to use different approaches. Our learning strategy moves more towards collaborative and innovative pedagogies, for instance Challenge Based Learning assessments, where coming up with new ideas while collaborating to solve sustainable energy challenges emphasizes not only engineering or content skills but also communication skills and creative thinking. Or moonshot thinking, where big-impact technologies (such as AI) are taken into account when trying to solve ‘impossible’ challenges that could improve the world.

So apart from the more traditional ways of testing students (multiple choice, essays…), our assessment methods must also focus on moments within the learning process. It is also meaningful to integrate what AI can do, so students get familiar with it. For instance, in group work you can ask everyone to use ChatGPT to construct a text for a particular problem or paper. The students in a team then need to fine-tune and review what the AI has come up with: What would they rewrite? Are the facts delivered by the GPT-4 tool correct? Where does it have flaws? Another option is to ask students to build a course using AI tools wisely: how can you reduce costs when building a course with AI, what can be done with AI, and what needs to be done manually? There are benefits (letting depictions of historical figures tell part of their story) but also downsides (how much of a course can rely on AI-avatar talking heads before it becomes boring?). It all comes down to understanding the new medium, realizing how to embed it, knowing what to watch out for, and… staying on top of the innovation’s options.

What I liked about your presentation was that you take an agnostic position towards AI in education. While you are certainly excited about the opportunities, you also see the problems as manifold. You stated that “as an educator, you need to play with it enough to understand what is happening”. If a teacher has 10 minutes today to explore AI, what do you suggest they do?

When an EdTech innovation comes along, I tend to look for pioneering curators within the field. These pioneering curators of content are part of my network, which means I can see what they are doing in my news feeds (e.g. LinkedIn, Twitter, YouTube, as these are more content-driven in my opinion). A great curator will immediately test the innovation, try it, look for the educational benefits and risks, and get the word out so that others can learn from it as well. In the case of AI for education, I gladly refer to Alec Couros, who made a fabulous synthesis of AI for teachers and shared it in his open slide deck here: https://ctl.uregina.ca/lets-talk-about-ai-in-teaching-learning-by-dr-alec-couros (great resource!).

Thank you very much for the classroom visit and this interview!

It was a real pleasure to meet up with the AUW students! They were active and engaged, which really energized me. Thank you for this great opportunity, and good luck to all the students; I look forward to hearing from them in the future!

About

Dr. Inge de Waard is the learning strategist at EIT InnoEnergy. She is a longtime researcher, activist, award-winning learning innovator, and (e)learning coordinator. She has developed multiple online and hybrid courses, co-designed AI tools, and embedded learning innovations. Inge coaches and co-creates international, blended curricula with engineers and teachers, and explores innovative learning formats. Her expertise is recognized by peers, resulting in co-authored papers, invited talks, and keynotes at both academic and professional conferences, workshops, and seminars. She recently founded the Flamboyant Grays initiative to spotlight role models who changed their lives at any age beyond 50 (FlamboyantGrays.com). Most of all, she likes to connect with people and share stories.
