Is the Most Creative Act a Human Can Engage in the Formation of a Good Question?

Wise, Kevin Kelly is.

Today I’d like to ponder something Kevin Kelly – a fellow co-founding editor of Wired – said to me roughly 30 years ago. During one editorial conversation or another, Kevin said – and I’m paraphrasing here – “The most creative act a human can engage in is forming a good question.”

That idea has stuck with me ever since, and informed a lot of my career. I’m likely guilty of turning Kevin into a Yoda-like figure – he was a mentor to me in the early years of the digital revolution. But the idea rings true – and it lies at the heart of the debate around artificial intelligence and its purported impact on our commonly held beliefs around literacy.

I’ve spent a lot of the last few decades as an interlocutor on stage or as a reporter on the ground, and I find that preparing for interviews requires not just a ton of research, but a rather formal process of interrogation of the facts prior to any actual dialog. It starts with naive, even ignorant queries, and each response yields fresh questions, each of which becomes more subtle, specific, and pointed. The question is the tool: it can be wielded like a spade in sand, a pickaxe against stone, a paintbrush, a hex key, a hammer, a pen. It may well be the most human expression we have – our core differentiator from the stochastic parrots we can’t help but create.

All of this came rushing back to me when I read Jeff Jarvis’ post on the impact of ChatGPT on literacy. In “Writing and Exclusion,” penned before New York City’s public schools banned ChatGPT, Jarvis writes, “It occurs to me that we will probably soon be teaching the skill of prompt writing: how to get what you want out of a machine.” Indeed. I’d argue we’ve already been in dialog with a semi-intelligent machine for decades – ever since the dawn of search, and certainly since the rise of Google, where every interaction is considered a “query” and every response a “result.”

Back in 2005 I suggested that our schools start teaching what I then called “search literacy” – formal coursework to help kids understand how to ask intelligent questions of what was, at the time, a novel technology:

In an age where the knowledge of humankind is increasingly at our fingertips through the services of Internet search, we must teach our children critical thinking. One can never have all the answers, but if prepared, one can always ask the right question, and from that creative act, learn to find his or her own answer.

Instead, we have leaders that believe that questions have one answer, and they already know what it is. Their mission, then, is to evangelize that answer. That, to me, is a dangerous course. Reversing it by teaching our children to learn, rather than to answer, seems to me to be a noble cause.

I’m not sure any academic institution ever took me up on that call (driven, as I recall, by the Bush administration’s fixation on test scores), but Jarvis has issued an updated version of it:

…writing a prompt for the machine — being able to exactly and clearly communicate one’s desires for the text, image, or code to be produced — is itself a new way to teach self-expression.

In an age of DALL-E, ChatGPT, and large language models augmenting and/or becoming our lawyers, our lobbyists, and our programmers, perhaps it’s time to once again demand our schools teach our children how to ask interesting questions. That’s something I doubt AI will ever get right.

PS – I wanted to ask ChatGPT the question in this post’s title, but it’s clear the service is overwhelmed at the moment….
