Discussion about this post

regier@comcast.net

Thanks, Matt, for this thoughtful piece. My knowledge of ChatGPT is very minimal, so I found the story, information, and suggestions you provided for teachers and leaders--around smartly navigating ChatGPT--very helpful for increasing my understanding of this tool.

Terry Underwood

Your experience with your daughter is interesting. I want to push back a bit on the structure of this experience to make a core point, which I believe is an obstacle to getting people to entertain using GPT for serious purposes. I believe people who have resisted GPT until now are building more and more strength in the conviction that GPT is dangerous because it facilitates cheating in two ways: 1) deceptive use of text (fraud, plagiarism) and 2) short-circuiting learning. As the conviction strengthens, the likelihood of regaining an open mind diminishes.

GPT is a specialized digital tool. One common misconception is that it reads and writes. It’s easy to believe it can and does read a book when you ask it to tell you about a book using a title in the prompt. But it does not--cannot--read the actual text. Instead, it scans mathematical vectors to locate patterns of words it calculated in its unsupervised training mode. You used words like essay, three themes, book, write, and the title of the book; it used a syntactic parser to identify the verb and object and thereby isolated the parameters of the task. It then scanned those vectors for the most probable patterns of words or word particles associated with the title, the object of the task. These word patterns were assembled during training from book reviews, blog posts, online plot summaries, and the like, which provided nodes in other mathematical vectors. Not once did it read the book.

The word “book” is likely to occur in patterns of meaning where words like “theme” and “characters” appear. These are coded digitally as nodes on a digital frame that statistically interact with nodes like “book awards” and “book reviews.” The training involves semantic patterns that occur often in the vicinity of particular letter strings, so two or more words together can identify a node. When GPT misidentified the protagonist, it was not “confused.” Confusion is a human emotion or state. It selected Miguel by way of a signal linking “protagonist” to a prominent character it had coded from a New York Times book review. Miguel was named somewhere as an important character and, in the moment of computation, crossed a threshold of probability. The machine plays the odds, like in Vegas. Knowing that means you have to fact-check the bot. It’s not free from statistical miscalculations. This has nothing to do with confusion. In fact, GPT did name an actual character from the right book. One time the bot told me Jane Eyre appears in the second act of Hamlet.
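To make that concrete, here is a toy sketch in Python--purely illustrative, not how GPT is actually built--of what “playing the odds” means: the candidate with the highest score wins, whether or not it is the right answer. The names and scores here are made up.

```python
# Toy illustration only: picking the statistically most likely continuation,
# not "reading" or "understanding" the book.

# Hypothetical association scores between the book's title and candidate
# character names, as might accumulate from reviews and plot summaries.
candidate_scores = {
    "Miguel": 0.42,                  # prominent in a review, so it scores high
    "the actual protagonist": 0.38,  # correct, but slightly less probable here
    "a minor character": 0.20,
}

def most_probable(scores):
    """Return the candidate with the highest score -- 'playing the odds'."""
    return max(scores, key=scores.get)

print(most_probable(candidate_scores))  # -> "Miguel": plausible, confident, wrong
```

The point of the sketch is only that a statistically prominent but wrong answer can beat the correct one, which is why fact-checking the output matters.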

I personally would not ask a person who has never used a bot before to just use it. Like any tool, you use it according to your purpose. I’m not sure your daughter knew that by asking the bot to write an essay about a book she was actually formulating a command for a calculator to execute. Having a legitimate, clear intellectual task with a good reason to think GPT can help is a prerequisite for beginning to understand how to use the tool.

A friend, for example, who has a fear of technology and is dead against the bot, gave me a chance to apply the bot to a real question she had: “Is it safer to use chemically based sunscreen or mineral based?” She was amazed. Of course, with the bot’s information now in her cognitive space, she dug deeper to verify. She went back to Google armed with some ideas to accept or reject. Another friend, who had been promising me for months to try it, finally sat down with me. He came up with a question of immediate interest to him. As a former Green Beret, he wanted background on the phenomenon of Beret imposters--people who lie about their status to collect higher pensions.

Predictably, he also wanted to verify the information because it was a serious matter to him. I don’t think playing around with fictional or vague problems or tasks works well. PD for teachers and the bot ought to be carried out via a carefully planned, systemwide strategy, with experts in using the tool leading orienting workshops. I’m seeing a lot of heads in the sand, forcing teachers and administrators to come up with local schoolhouse strategies on their own. Administrators need to become forceful advocates for organized and appropriate PD.
