Grasping Artificial Intelligence and ChatGPT
BY MELISSA ULSAKER MAAS ’76
All cartoons were written by ChatGPT and illustrated by Bing’s AI image creator, unless otherwise credited.
Many of the world’s greatest innovations were met with a mixture of excitement and trepidation. U.S. President Benjamin Harrison reportedly had the White House staff turn the lights off and on because he was afraid of electrocution. There was a fear that radio would turn people away from reading or conversing with each other, this fear amplified with the introduction of television. The New York Times attacked the telephone, suggesting it would only be used to invade people’s privacy and make society lazy and antisocial. The fear of computers was so severe that “computerphobia” became an actual term. And now—just as we have learned how to navigate and live with the pros and cons of the Internet—we have to evaluate the best and the worst of Artificial Intelligence (AI) and ChatGPT in particular. Feelings are mixed and AI is developing and changing at lightning speed. Is it the tool of the future, or a devil in disguise
What is an AI Chatbot?
British mathematician Alan Turing was instrumental during World War II in creating an electromechanical machine that could break the German Enigma codes, and he later explored the mathematical possibility of artificial intelligence in the 1950s. AI, the foundation for chatbots, has progressed since that time to include intelligent supercomputers such as IBM Watson. You may have seen Watson beat top human champions Brad Rutter and Ken Jennings on "Jeopardy!" in 2011.
At the most basic level, a chatbot is a computer program that simulates and processes human conversation (written or spoken), allowing humans to interact with digital devices as if they were communicating with a real person. Whether you know it or not, you've probably interacted with a chatbot. Those helpful, or pesky, little windows that pop up on websites asking if you need help are one example. Chatbots can be as simple as rudimentary programs that answer a simple query with a single-line response, or as sophisticated as digital assistants that learn and evolve to deliver increasing levels of personalization as they gather and process information. More advanced examples of chatbots include Apple's Siri, Amazon's Alexa, and Google Assistant. Other examples of AI we encounter on a regular basis include facial recognition, predictive text, vehicle navigation, social media algorithms, spam filters, and the fraud detection used in banking.
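For the technically curious, the very simplest kind of chatbot described above, one that matches a keyword and returns a single canned reply, can be sketched in a few lines of Python. (The keywords and replies here are invented purely for illustration.)

```python
# A deliberately minimal, rule-based chatbot: it scans the user's message
# for a known keyword and returns one pre-written reply. Real assistants
# like Siri or Alexa are vastly more sophisticated.

RULES = {
    "hours": "We are open 9 a.m. to 5 p.m., Monday through Friday.",
    "price": "Basic plans start at $10 per month.",
    "hello": "Hi there! How can I help you today?",
}

def simple_chatbot(message: str) -> str:
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:          # first matching rule wins
            return reply
    return "Sorry, I don't understand. Could you rephrase that?"

print(simple_chatbot("Hello!"))               # prints the greeting reply
print(simple_chatbot("What are your hours?")) # prints the hours reply
```

The gulf between this toy and a generative model like ChatGPT is exactly the point: a rule-based bot can only say what a programmer typed in ahead of time.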
Generative AI systems are capable of creating new and original content that was not explicitly programmed into the system, such as text, images, music, speech (voiceovers), 3D models, and games. While a multitude of AI sites now exist, the most talked-about powerful free chatbot, ChatGPT, was released by artificial intelligence company OpenAI in November 2022. According to OpenAI CEO Sam Altman, within one week of its release ChatGPT helped more than one million different users obtain unique answers to their prompts and questions. ChatGPT is built on GPT-3.5, a refinement of the third-generation Generative Pre-trained Transformer (GPT-3) model. It is capable of answering just about any question, writing cogent essays, assisting with research, helping someone learn a foreign language, and much, much more. As a generative AI language model, ChatGPT can understand natural language inputs and generate human-like responses based on the input provided, creating new and original content.
Why did the neural network refuse to go on a date?
Because it was too busy training itself to recognize red flags!
How Does ChatGPT Work?
ChatGPT works using a deep neural network architecture that is based on a transformer model and is designed to process sequential data such as language, making it particularly well-suited for natural language processing tasks. ChatGPT is pre-trained on a large corpus of text data to learn the relationships between different words and phrases in the English language. This pre-training process allows ChatGPT to generate coherent responses to a wide variety of inputs.
When a user inputs text into ChatGPT, the system analyzes the input and generates a response based on the patterns it learned during pre-training. At its core, the model is trained to predict the next word: given everything written so far, it repeatedly chooses a likely next word, building its answer one word at a time. It also employs a mechanism called "self-attention" that helps it weigh which parts of the input matter most for understanding context. The system can also generate open-ended text that is not an answer to a specific question, such as creative writing or poetry. But it's important to note that the quality of ChatGPT's responses depends on the quality of the pre-training data and the sophistication of the neural network architecture. The more specific and detailed the question, the better the response.
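For readers who want to peek under the hood, the "self-attention" idea can be sketched in a few lines of Python: each word's vector is replaced by a weighted blend of all the word vectors around it, with the weights based on how similar the words are. (The toy four-number "word vectors" below are invented for illustration; real models use thousands of learned dimensions and many stacked attention layers.)

```python
import numpy as np

def softmax(x, axis=-1):
    """Turn raw similarity scores into weights that sum to 1."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Simplified self-attention: blend each word vector with its context,
    weighted by how similar each pair of words is."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)        # similarity of every word to every other
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ X                   # context-aware blend of the vectors

# Three toy "word vectors," four numbers each
X = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0]])
out = self_attention(X)
print(out.shape)  # (3, 4): one context-aware vector per word
```

In a real transformer, these blended vectors feed into the next-word prediction described above, which is how the model "understands" that the same word can mean different things in different sentences.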
Be Aware—AI Can Hallucinate
An AI hallucination refers to a situation where the system produces output that sounds plausible but is inaccurate or invented, an artifact of its training data and programming. These hallucinations can be caused by a variety of factors, including incomplete or biased training data, errors in the algorithms used to process the data, or limitations in the AI system's ability to generalize beyond the data it has been trained on.
The Internet is teeming with useful, accurate information, but it is also packed with untruths, hate speech, and misinformation. Because chatbots learn skills on their own by pinpointing statistical patterns in enormous amounts of text, they absorb it all, including explicit and implicit bias. And because of the surprising way they mix and match what they've learned to generate entirely new text, they often create convincing language that is flat-out wrong or describes things that do not exist in their training data.
To address these issues, researchers are constantly developing new techniques and methods for improving AI accuracy and reducing the risk of AI hallucinations, but just like humans, AI generation isn’t perfect.
Those hallucinations can be pretty trippy.
Why did the teacher use ChatGPT in class?
To help her students learn about AI and give them
a handy “GPT-ool” for answering tough questions!
AI and Education
Technology innovation isn’t slowing down, it’s only accelerating. OpenAI released a new subscription version in March, GPT-4, and is offering an A.P.I (application programming interface), that other tech companies can use to plug GPT-4 into their apps and products. And it has created a series of plug-ins from companies like Instacart, Expedia and Wolfram Alpha that expand ChatGPT’s abilities. OpenAI, Google, and Meta are building systems that let you instantly generate images and videos simply by describing what you want to see. Microsoft and Google have announced plans to incorporate A.I. technologies into their products. Users will be able to use AI to write a rough draft of an email, automatically summarize a meeting, and pull off many other cool tricks.
Initially, educators were concerned that students would use a chatbot to cheat. ChatGPT can generate slick, well-structured essays several thousand words long on just about any topic queried, from quantum gravity to Shakespeare. Even scarier, each essay it produces is unique, even when it is given the same prompt again, and it's nearly impossible to tell who wrote it. In response, a number of U.S. school districts quickly made the decision to block student access to AI sites on school computers and networks. But attempting to ban a program like ChatGPT is not only impossible, it also fails to provide students with the support they need to make good decisions, learn how to use new technology ethically and appropriately, and be prepared for a constantly changing world. A critical part of an educator's job is to help students fully understand the tools available to them.
Now, far from seeing it as just a dream machine for cheaters, many teachers are changing their tune, thinking ChatGPT could actually help make education better. AI is an amazing tool that can't be ignored, and fully informed teachers can guide their students to the most age-appropriate and approved tools through lessons, projects, and discussions. Since ChatGPT came on the scene, a plethora of online webinars and workshops on using AI and ChatGPT have been designed specifically for educators. It's now evident that generative AI is a teaching aid that can offer personalized tutoring, unlock student creativity, and better prepare students to work with the ever-increasing number of AI systems they will encounter in the workplace as adults.
The SSSAS MERIT (Making Education Relevant and Interactive with Technology) teams have designed a Digital Citizenship Program to help students navigate digital media, the Internet, and programs like ChatGPT. The ethical considerations for using ChatGPT include bias, privacy, accountability, transparency, and the need for human interaction. By taking these considerations into account, educators can ensure that ChatGPT is used in a responsible and ethical manner to enhance the learning experience for students. ChatGPT is designed to mimic human communication, but it’s still an artificial tool. Therefore, it’s important to be transparent about its limitations and capabilities and to establish clear guidelines for appropriate interactions with the programs.
“The Future of AI”
written by ChatGPT
The future of AI, what will it bring?
A world of wonder, or a frightening thing?
Will machines surpass the human mind,
Or will they simply be a helpful kind?
We’ve come so far in just a few short years,
With machines that learn and wipe away our tears,
They’re smart, they’re fast, and they never tire,
But will they ever feel our human desire?
Perhaps they’ll cure diseases that we cannot,
Or help us solve the problems we’ve forgot,
They’ll drive our cars and fly our planes,
And make our lives simpler without pains.
But what of fears that fill our minds,
Of machines that rule, and leave us behind?
Will they be kind, or will they be cruel,
Will they respect the laws that we’ve made as rule?
The future of AI is yet to be known,
As we build and teach and watch it grow,
But one thing’s for sure, it’s up to us,
To guide and shape it, and gain our trust.
So let us strive for a world that’s bright,
Where AI and humans can both unite,
With cooperation and understanding,
A future of hope and promise landing.
The Middle and Upper School digital citizenship advisories have been focusing on AI education, helping students to use the tool ethically and making sure they understand the pros and cons of using ChatGPT and other programs. Lower School students are learning through integration projects using the online design and publishing program Canva. Canva has an AI image generator based on Stable Diffusion, an open-source text-to-image model that allows the user to generate photorealistic images. One project introducing students to AI involves asking them to use Canva to create an image to go with a story they have written.
Colleen sees many positive ways ChatGPT and programs like it can be used, but she is also aware of the concerns. Asking ChatGPT to write a paper is clearly plagiarism. However, one recommended program, the Hemingway app, helps users tighten the prose of a paper they have written themselves. It highlights run-on and complex sentences and common errors in yellow or red, adverbs and weakening phrases in blue, and overuse of the passive voice in green. But rather than telling users how to fix the highlighted words and sentences, it makes them take a stab at correcting the problems. Each attempt is evaluated, and users can continue to make changes until the highlight disappears, learning in the process.
Generative AI has the potential to be an incredibly effective tool for teachers. Teachers can use AI to create personalized student learning experiences, seek out new activities and engaging project ideas, streamline administrative demands, and create educational content such as lesson plans and assessments. For example, Lower School Teacher Michelle Bruch has used ChatGPT to plan lessons and help elaborate on ideas she has; to improve her writing of difficult emails and weekly newsletters to families; to edit her report card narratives for grammar and clarity; and to create presentations in conjunction with Canva. Lower School Technology Coordinator Kay Ossio has used it to help with the weekly Lower School newscast script. The Lower School Merit team asked ChatGPT to help rewrite the Technology Responsible Use Policy form to make it understandable to a Lower School student.
At the Upper School, Computer Science Teacher Tom Johnson says ChatGPT not only writes code, but does it really well. So well, in fact, that he has shifted the way he teaches. Instead of assigning code as homework, he is asking his students to write more of it in class. Tom is also holding more quizzes, presentations, and discussions in class, along with frequent check-ins to talk with students about their work and really gauge where they are.
To a rap beat…
Dionysus, the god of wine and fun,
He can teach us lessons, even when we run,
So don’t mess with him, or you’ll be sorry,
Respect and kindness, that’s the true glory.
AI can also be very helpful with language learning, generating interactive conversations and practice exercises, and even creating new dialogues based on a student's interests and level of proficiency. Upper School Latin Teacher Kevin Jefferson has challenged ChatGPT in a number of ways. He discovered ChatGPT is able to write fairly well in Latin. He input the students' new vocabulary words and asked ChatGPT to write a story using them. "After some light editing—its grammar was impressively sound—I had a passage that offered students key repetitions for their new vocabulary and practice for crucial grammar concepts," Kevin says. He also heard that ChatGPT was able to tell "story x in style y," so his class asked the chatbot to tell the story of Dionysus and the pirates in the form of rap lyrics. "We had a blast comparing the ChatGPT version to what we had read in the Homeric Hymns."
As we use chatbots, it’s imperative to remain skeptical, and see them for what they really are.
In Kevin’s Ancient Mythology and Modern World class they watched “Percy Jackson and the Lightning Thief,” and his students had prepared responses to a few discussion questions. After discussing a question thoroughly, Kevin would ask ChatGPT the same question. “The students judged each of its responses, with an eye towards factual accuracy to the movie and the myths, as well as the depth of its analysis,” Kevin says. “I think the consensus was that we were impressed with its generally cogent responses, but acknowledged it was inconsistently reliable.”
Kevin isn’t totally sold on Chat GPT, and his thoughts mirror those of many educators. “It’s been making me think more about Socrates and his insistence that people often claim to know more than they actually do,” Kevin reflects. “The output of ChatGPT often sounds so plausible and convincing, hiding inaccuracies in plain sight. With the heightened awareness of misinformation in our society, I hope that I can instill a healthy sense of skepticism in my students. The more I learn about AI, the more I ponder how my next steps can use AI in class to practice critical thinking skills and prepare for the new world we will soon be living in.”
Clearly, there’s no hiding from artificial intelligence and no ignoring its impact. With a solid awareness of the abilities and limitations of AI, it can be very useful at work, at school, and at home. It’s mind boggling to think about everything AI is capable of, but the largest issue may be grasping how these systems will affect our world before they become even more powerful. To do that we must learn everything we can about AI and educate future generations to utilize it in ways that will enhance their future and make it a dynamic tool for good and making the world a better place.
Why did the person cross-examine ChatGPT?
Because they were skeptical of its answers
and thought it might be a “questionable witness!”