AI and the Ragged Boundary Problem

At the recent edition of The Learning Ideas Conference (June 12-14, 2024), my co-presenter, Jan Greenberg, and I spoke briefly about “The Alignment Problem.” In his book by that name, Brian Christian describes it as ensuring that AI models “capture our norms and values, understand what we mean or intend, and, above all, do what we want them to do.” We addressed the alignment problem as we aimed to answer the question “Is AI appropriate for young children?”

Surely, misalignment in AI models that are used to create early childhood curricula or are applied to children’s media devices and games raises great concern. As Ethan Mollick states in his book Co-Intelligence: Living and Working with AI, “…we have an AI whose capabilities are unclear, both to our own intuitions and to the creators of the systems. One that sometimes exceeds our expectations and at other times disappoints us with fabrications. One that is capable of learning, but often misremembers vital information. In short, we have an AI that acts very much like a person, but in ways that aren’t quite human. Something that can seem sentient but isn’t (as far as we can tell). We have invented a kind of alien mind. But how do we ensure the alien is friendly?”

Mollick’s advice for addressing AI misalignment is to discover where an LLM’s ragged boundaries lie: where it is accurate and where it fabricates. In so many of the AI-centered talks at TLIC2024, including ours, the problem of “responsible use” reared its head repeatedly, with the alignment problem front and center. Issues such as organizational policies around the use of GenAI, personal decisions to use or avoid it, legal responsibility for AI-generated accidents, and health and safety concerns most often reflect how conservatively we address uncertainty vis-à-vis the alignment problem. But such policies and decisions seem to dance around the problem rather than aim to demystify it. To be sure, the machine learning behind these models is hardly transparent, any more than OpenAI, Google, et al. are open (source). But we on the receiving end can nonetheless address the problem.

If no one else has yet done so, let me coin “The Ragged Boundary Problem” of GenAI, with due credit to Ethan Mollick for his advice on this topic in Co-Intelligence. It is only through persistent dialogue with your favorite LLM agent that sufficient context is created such that (1) it returns meaningful and accurate knowledge in its interactions with you, and (2) you are able to make sound judgements as to its worth. As when conversing with another person to solve a problem, you are conversing with a sort of alien intelligence and making judgements along the way. Your task, ultimately, is to know where in the LLM the alien is friendly and trustworthy, and where it is not. As the human in the interaction, your ultimate responsibility is to identify that ragged boundary, and doing so is not an event but a continuous process.

Mollick suggests following these four principles: 

  1. Always invite AI to the table. It’s ubiquitous. Deal with it.

  2. Be the human in the loop. You are the arbiter. Take the role seriously.

  3. Treat AI like a person (but tell it what kind of person to be). Not to anthropomorphize, but you have to tell your AI agent what it is, to provide context, focus, and better knowledge outcomes.

  4. Assume this is the worst AI you will ever use. It’s changing so fast. The next iteration will be exponentially better, and so forth.

The notion (or to some, the misnomer) “prompt engineering” is really just learning how to converse with the alien. Shame on any human who would throw an AI agent a single sentence fragment and apply its response without scrutiny and further dialogue. I think perhaps when Michael Corleone (of The Godfather movies) said, “Keep your friends close, but your enemies closer,” he was being prescient about AI and Mollick’s four principles. After all, you need to know which is friend and which is enemy.

It has been argued that one of the greatest concerns about AI is how its application to learning and performance will ultimately change us, and what that means for the future of humanity. While AI is not yet AGI (artificial general intelligence), it has the potential to overtake humanity, not through intelligent robots, but through complacent humans who allow it to redefine what we regard as intelligence and who cede control to it, explicitly or tacitly. In this sense, “The Ragged Boundary Problem” is about humans knowing as explicitly as possible where we can trust the fast-evolving alien.

As Dr. Chris Dede explained in his keynote address, there are two kinds of wisdom: Calculative and Practical. The former is called Reckoning and the latter Judgement. The AI of today (and arguably for some time to come) is that of reckoning. Judgement remains in the human domain. For this reason I have always preferred IA (intelligence amplification or intelligence augmentation) to AI (artificial intelligence), at least until it actually becomes intelligent. In the absence of conceptual thinking and judgement, we have today’s GenAI, with all its alignment issues and ragged boundaries.

At some point in its evolution, the android Data of Star Trek acquired an emotion chip. Perhaps that is the first step toward transforming reckoning to judgement in our machines, with many, many more steps to go before achieving human judgement and, ultimately, benevolence. To this end I add one more principle to Mollick’s: (5) Enjoy the ride.

Gary J. Dickelman, EPSScentral

Gary J. Dickelman is a thought leader, strategist, and solution provider for the knowledge ecosystem, including online learning, learning management, learning and performance analytics, knowledge management, and performance support. His latest passion is around evolving a learning and performance science for taming the content beast.
