Classroom AI

The recent explosion of artificial intelligence tools has the potential to radically reshape education. Whatever your feelings on this disruptive new technology, it demands your attention. Here’s your chance to get up to speed and pick up some key tips to stay ahead of the AI curve.

The promise and pitfalls of a new technological frontier

Tracking the evolution of artificial intelligence (AI) is like tracking the social media habits of preteens – by the time any news reaches you, it’s already obsolete. To say AI is progressing at breakneck speed is a woeful understatement. Since the language-processing AI tool ChatGPT exploded on the scene last November, thousands of new tools – each more refined and more niche than the last – have followed. And that’s just the ones we’ve heard about.

AI has been operating in the shadows for years – suggesting search results, personalizing ads, learning our driving habits, shaping our social media feeds, and influencing how we think and act in ways big and small. It’s only recently that the technology has erupted into public consciousness as a tool for public use – a tool that is already rapidly reshaping entire industries. For those in the education field, the implications are enormous.

Friend or foe?

Predictions about the future of AI in the classroom run the gamut. On one side are alarmists who foresee a bleak landscape of widespread cheating and completely computerized instruction that replaces humans altogether. On the other side are the futurists who embrace rapid AI integration as an essential step to prepare students for a workforce that will grow ever more dependent on this technology. Most likely, the reality lies somewhere in between. But one thing is certain: AI is here to stay.

“Right now, we’re reporting that over 49 percent of American businesses have already integrated ChatGPT into their daily workings, and 30 percent are expecting to within the next year or two,” said Daniel Woleslagle, an educational technology specialist in the Williamsport Area School District who has delved heavily into the research on this subject. “So, you’re talking about something that is possibly changing 79 percent of American businesses within the next year or so.”

As someone who trains educators on classroom technologies and assists them in the best practices of integrating them into their daily lessons, Woleslagle is keenly aware of both the risks and rewards of an AI-driven future. In his district, as in many others, teachers are already using AI to plan their lessons, create rubrics, generate ideas for small-group brainstorming, and more. On the flip side, they’re looking at every piece of work a student submits and wondering how much of it was created with AI.

While tools exist to detect plagiarism and other forms of academic dishonesty newly unleashed by the AI revolution, they’re not perfect. That’s because the current generation of AI produces a unique written response each time it’s asked a question. And it’s only getting better. Essays composed by even early versions of ChatGPT have passed for average, or better, work by a typical high school or college student. The most recent version of ChatGPT generates responses to bar exam questions of significantly higher quality than those of the average law school graduate.

It’s certainly no wonder why a free tool this powerful has been adopted so quickly and eagerly. For students, ChatGPT can offer creative writing prompts and provide useful feedback, help with vocabulary, foreign language translation, math and computer coding challenges, and so much more. For teachers, it can help grade papers, compose lesson plans, create study guides, analyze writing, and on and on.

But maybe technological shortcuts won’t necessarily lead to better outcomes in education. After all, giving everyone a high-powered pocket computer with every bit of human knowledge at their fingertips at all times hasn’t exactly ushered in a golden age of enlightenment.

“Sometimes, overcoming struggles in trying to learn something is more important than the knowledge you actually gained,” said Mike Soskil, a lifelong educator and former Pennsylvania Teacher of the Year who currently teaches elementary STEM in the Wallenpaupack Area School District. “By making learning too easy and by taking away the relationship aspect of education, we can sometimes stunt student growth in areas that we don’t want to.”

Holding out for humanity

In 2017, Soskil co-authored a book titled “Teaching in the Fourth Industrial Revolution: Standing at the Precipice,” which dealt with, among other things, the idea of machines one day replacing teachers entirely.

“The case we made in that book was that in order for us to have prosperity in the future as a society, we would need to balance our humanity and empathy with technological innovation in our schools,” he said.

As a STEM teacher, Soskil is a strong proponent of keeping our classrooms technologically relevant, but he advocates for a more deliberate, slowed-down approach to AI integration instead of using our students as test subjects. He thinks more conversations should be happening now among legislators, administrators, teachers, and students about how to bring this into schools not as a destructive force but in a way that builds relationships and reinforces learning.

“If I was advising a teacher who was looking at this for the first time and wondering ‘How do I bring this into my classroom?’ I would say that the first thing you should be doing is having conversations with your students about it. Embrace the complexity. Because it is a complex topic and there’s not a simple answer here.”

Daniel Woleslagle agrees, although he’s skeptical of some of the more dire predictions about the technology that he’s read.

“When I was in middle school, my math teacher forbade the use of a calculator because, ‘You won’t have a calculator in your pocket at all times.’ But I do,” he said. “It was still very beneficial for me to master my math facts mentally, but having an accessible calculator on my person at all times didn’t break math as we know it.”

Woleslagle remains optimistic about the future of AI in the classroom because he’s convinced that no matter how sophisticated digital sources become, they can never fully replace humans in an educational setting, especially for students with social, emotional, and economic disadvantages. That was one inescapable lesson the COVID-19 pandemic taught us.

“Countless studies have been done following the COVID-19 pandemic, and those results have shown us time and time again that human interaction, connection, and collaboration need to be at the forefront of learning,” Woleslagle said. “While learning can certainly be enhanced using technology, it cannot be a replacement for highly trained educators.”

Putting AI in its place

By keeping a human-centered approach to AI implementation – which means using supportive AI for reinforcement of learning and not foundational teaching – it’s possible to use this technology to our advantage instead of letting it drag us into an uncertain dystopia that many fear. The question is whether that approach can be maintained in the face of a technology that’s advancing faster than our own brains can process it.

Perhaps the ultimate solution is to let machines do what they do best, and let humans do what they do best. We’ll never out-compute a computer. But expecting an algorithm to apply the full scope of empathetic reasoning humans are capable of is equally unrealistic – at least for the foreseeable future.

Woleslagle pointed out that the amount of total data and information created, captured, copied, and consumed worldwide was 2 zettabytes (2 trillion gigabytes) in 2010. Just 10 years later, in 2020, that amount had increased 3,110 percent to a staggering 64.2 zettabytes per year. By 2025, that number is expected to jump to an incredible 181 zettabytes per year. That’s equivalent to 45 trillion full-length movies worth of data in a single year. And it shows no signs of slowing down.
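The figures above hold up to a quick back-of-the-envelope check. Here is a short calculation (assuming 1 zettabyte equals 1 trillion gigabytes, as the article states) that verifies the growth percentage and what the movie comparison implies about file size:

```python
# Sanity check of the data-growth figures cited above.
# Assumption: 1 zettabyte (ZB) = 1 trillion gigabytes.

zb_2010 = 2.0      # ZB created/consumed worldwide in 2010
zb_2020 = 64.2     # ZB in 2020
zb_2025 = 181.0    # projected ZB per year by 2025

percent_increase = (zb_2020 - zb_2010) / zb_2010 * 100
print(f"2010 -> 2020 growth: {percent_increase:.0f}%")  # 3110%

# 181 ZB spread across 45 trillion movies implies the size per movie:
gb_per_movie = (zb_2025 * 1e12) / 45e12
print(f"Implied size per movie: {gb_per_movie:.1f} GB")
```

The implied size works out to roughly 4 GB per film, a plausible figure for a full-length HD movie, so the comparison is consistent with the cited totals.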

With this sheer amount of information at our disposal growing at an exponential rate, the days of teaching rote memorization seem to be coming to an end. To get ahead in the world of the future, students need to understand how to access and apply that information. In other words, how to be creative. By leaving the data processing to the machines, we can free both students and teachers up to explore new frontiers of learning.

That will require our education system to change, including how we measure student achievement. It also has the potential to be a very positive development. Instead of test scores, we’ll need to focus even more on human interactions and creative output. Teaching students how, not what, to think has always been the goal of a good educator. In the AI-driven world of the future, that goal will be more important than ever.

Tips for successful AI integration in the classroom

For fun

We asked ChatGPT

... to recommend strategies to discourage its use in academically dishonest ways.

Here is what it suggested:
● Look for signs of AI-generated content such as inconsistencies in writing style across a student’s assignments, repeated writing patterns, or the citation of unusual sources.
● Create unique assignments that require critical thinking and original analysis, synthesis, and evaluation rather than simply summarizing information found online.
● Encourage students to communicate when they are struggling with an assignment. Create a classroom culture that values honesty and integrity and makes it clear that you are there to support students’ learning.
● Engage in classroom discussions about ethical issues surrounding AI-generated content and academic integrity. Encourage students to think about the implications of using AI and how they can use AI responsibly and ethically in their academic work.

Academic integrity

One of the main concerns educators have

... is the impact of AI on academic integrity. Educators may employ several strategies to encourage students to produce original work.

For example:
● Talk to students about how student work will be monitored for AI misuse.
● Require students to properly attribute all AI-generated content.
● Communicate why learning to write independently is important.

AI-proof assignments

In addition to the precautions listed above, educators should be aware that ChatGPT finds it difficult to respond to writing prompts that are extremely broad, unspecific, personal, or timely.

Consequently, AI-proof assignments may include one or more of the following strategies:

● Ask students to write about something deeply personal like a favorite place or an exciting day.
● Center a writing assignment around an issue specific to the local community and/or a very recent news event such as a local construction project or school board agenda item.
● Create writing assignments that require the use of multiple, specific, high-quality citations.
● When appropriate, assign students to handwrite essays in class or use software that only allows students to have one tab open while typing text.
● Run writing prompts through ChatGPT before assigning them to students to learn if ChatGPT generates high-quality responses.

Finally, educators should always follow any applicable district policies related to academic integrity and the use of AI technology. If there are no applicable policies, PSEA members should reach out to their local associations so that the local can engage with administration as appropriate.

Key AI resources for educators:

AI Educator Tools from The AI Educator

A repository of AI tools in education, curated for educators.

OpenAI’s (makers of ChatGPT) “Teaching with AI”

A guide for teachers interested in exploring ChatGPT that covers suggested prompts, how ChatGPT works, its limitations and biases, and the efficacy of AI detectors.

ISTE’s (International Society for Technology in Education) AI Hub

Classroom guides (elementary, secondary, elective) for teachers with innovative resources about teaching with AI.

Dr. Chris Clayton’s PortaPortal for Educators

PSEA Assistant Director for Education Services Dr. Chris Clayton provides a comprehensive collection of resources on this topic on his website.

The AiEDU (AI in Education) Toolkit for Teaching with AI

A toolkit with “everything you need to teach AI with ease.”

Article and Video: “100 Prompts for Teachers to Ask ChatGPT” by Alice Keeler

A fantastic resource for educators to leverage ChatGPT to their benefit.

Maybe you’ve been curious about all the AI buzz but have yet to delve into what it is and how it works.

While there are many tools that fall under the “AI” or “machine learning” category, ChatGPT by OpenAI is the darling of the moment, and the one piece of AI tech every educator should understand. GPT stands for “Generative Pre-trained Transformer,” a computer model that uses neural networking (a system that mimics how neurons speak to one another in the human brain) to complete complex tasks quickly.

It “learns” through a process of trial and error similar to our brains, forming connections that let it cluster and process information quickly. GPT3, which debuted in 2020 and later powered the free ChatGPT demo released globally in November 2022, is a text-to-text, large language model that generates text by analyzing language patterns.
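The model behind ChatGPT is vastly more sophisticated, but the core idea of generating text from language patterns can be sketched with a toy next-word predictor. This is an illustration only, not OpenAI’s actual method; a real large language model learns from billions of examples rather than a single sentence:

```python
# Toy illustration of next-word prediction from language patterns.
# (Not how ChatGPT actually works - just the core idea, in miniature.)
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it followed "the" twice, "mat" once
```

Scaled up enormously, with many layers of context instead of a single preceding word, this pattern-completion idea is what lets a large language model produce fluent, original-seeming text.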

It can create written responses (that’s where the “Chat” part comes in) and stories, link to research, and even write computer code. Its successor, GPT4, which debuted in March 2023, is also a text model, but is also capable of understanding visual input. This means that it can make sense of photographs, videos, or drawings and not just text.

Additionally, ChatGPT4 has greater processing capabilities and is far more multilingual than its predecessor. Both ChatGPT3 and ChatGPT4 are fluent in human language and multiple programming languages, which broadens their usefulness. Version 4 can write computer code and even help you deploy it and create websites.

GPT5 is only the stuff of rumors at this point, but it’s safe to expect with its release – probably in the next year or two – that we’ll see another algorithmic leap in capability that will have far-reaching implications for nearly every sector of society.

  • 1843 – Ada Lovelace, working with Charles Babbage, publishes what is considered the first computer algorithm, written for Babbage’s Analytical Engine. Lovelace is often called the first computer programmer.
  • 1939 – At Iowa State University, inventor and physicist John Vincent Atanasoff and his graduate student Clifford Berry develop the Atanasoff-Berry Computer (ABC), an early electronic digital computer. The machine weighed more than 700 pounds and could solve up to 29 simultaneous linear equations.
  • 1950 – Alan Turing publishes “Computing Machinery and Intelligence” and proposes the idea of “the imitation game” (later renamed “The Turing Test”).
  • 1952 – Arthur Samuel develops the first computer checkers-playing program and one of the first programs able to learn on its own.
  • 1957 – Frank Rosenblatt develops the Perceptron, an early artificial neural network enabling pattern recognition based on a two-layer computer learning network.
  • 1965 – Joseph Weizenbaum develops ELIZA as an interactive program that carries on a dialogue in the English language on any topic.
  • 1988 – Rollo Carpenter develops the chatbot Jabberwacky to “simulate natural human chat in an interesting, entertaining and humorous manner.” This was one of the earliest attempts at creating artificial intelligence through human interaction.
  • 1997 – Deep Blue becomes the first computer chess-playing program to beat a reigning world chess champion.
  • 2011 – IBM’s Watson, a natural language question-answering computer, competes on Jeopardy! and defeats champions Ken Jennings and Brad Rutter. The televised game marked AI’s remarkable move to the center of human conversations.
  • 2012 – Google researchers Jeff Dean and Andrew Ng report on an experiment in which a very large neural network running on 16,000 processors learned to detect cat images – without any background information – from 10 million unlabeled images randomly taken from YouTube videos.
  • 2016 – Hanson Robotics introduces Sophia, a humanoid robot, as the first “robot citizen.” With her similarity to an actual human being, ability to see, make facial expressions, and communicate with the help of AI, Sophia was different from those that came before her.
  • 2018 – Alibaba develops an AI model that scores better than humans in a Stanford University reading and comprehension test. On a set of 100,000 questions, the AI model scored 82.44 against 82.30 by humans.
  • May 2020 – OpenAI’s GPT3 is first introduced, and beta testing begins the following month. GPT3 is a language model that generates text using pre-trained algorithms.
  • November 2022 – ChatGPT is released as a free demo that can converse in human-style conversation and generate answers autonomously, drawing on large amounts of information from the Internet.
  • March 2023 – ChatGPT4 adds multimodal capability, including image recognition, among other improvements.