Traditionally, education has focused on preparing learners for a foreseeable future, equipping them with established knowledge and skills believed necessary for navigating predictable paths such as advancing to the next grade, entering college, or joining the workforce. In an era marked by rapid change and profound uncertainty, the fundamental question “what kind of education do we owe our students?” takes on added urgency.
As I discuss in my book Uncertainty x Design, education is inherently future-oriented, but the future itself is unmade. And as we navigate increasingly complex challenges and disruptive events, from pandemics to the rise of artificial intelligence, it's becoming clear that we must expand our efforts to embrace educating for possible futures, empowering learners not just to meet the future, but to actively shape it.
Too often, we trade the uncertainty of preparing students for possible futures for the comfort and predictability of educating them for a foreseeable one. This bargain seems reasonable. After all, who wants to experience the discomfort of deep uncertainty? So we focus on mastering existing domain knowledge, which is, of course, essential for competence in any field. But this focus comes at a cost.
Domain knowledge isn't static; it's dynamic and constantly evolving, which creates pressure to cover an ever-increasing amount of academic content. That pressure, combined with the rise of generative AI, is turning education into a race for efficiency.
When AI tools like ChatGPT are treated as glorified search engines or answer machines, we risk reducing humans to digital puppets. Students might use AI to complete assignments for them, and educators might use it to grade those assignments. This creates a bizarre, dystopian loop in which AI speaks through us, bypassing genuine learning and human engagement.
Using AI primarily to fetch answers or complete tasks leaves learners unprepared for the unexpected, and focusing only on a foreseeable future reinforces this reductive use of AI. As a result, students lose the chance to imagine and try out new possibilities, and they miss the opportunity to develop their capacity to navigate uncertainty and shape their own futures.
Educating for possible futures, by contrast, gives learners experiences in the here and now that help them work with uncertainty and become the authors of their own futures. Central to this approach is possibility thinking: a dynamic process of moving beyond what is and toward imagining and shaping what could and should be.
Possibility thinking thrives on three key elements:
Uncertainty: the catalyst that pushes us beyond the known, driving exploration and innovation.
Difference: diverse perspectives that activate our imagination, challenge our assumptions, and offer new ways of thinking.
Constraints: structure that makes possibilities actionable and feasible.
Instead, we can take a human-centered approach, positioning AI as a partner in possibility thinking rather than an answer machine.
The AI Possibility Lab, which I developed for Arizona State University's Mary Lou Fulton Teachers College, exemplifies this shift. It serves as a space where educators, students, researchers, and community partners can come together to identify actionable possibilities and design innovative solutions to ill-defined problems. The lab features custom AI tools specifically designed to support possibility thinking, moving beyond the limitations of standard AI chatbots.
These tools are distinct because they are:
Augmented with specialized knowledge. They draw on principles and processes from the research literature in possibility studies.
Dialogic and structured. They engage users through structured dialogue, asking clarifying questions and seeking input rather than simply providing answers.
Perspective-rich. They are designed to offer a range of perspectives, including contrarian ones, to foster deeper reflection.
Question-based. They offer question-based feedback, prompting user reflection, critical thinking, and decision-making.
Agency-preserving. They focus on encouraging users to take ownership and responsibility for the possibilities they develop.
Some tools help users make the unfamiliar familiar by drawing connections to known concepts. Others work in the opposite direction by challenging assumptions and introducing diverse perspectives. Additional tools support the development and exploration of narrative scenarios, such as envisioning best-case, worst-case, and alternative futures. Some also assist with action and assessment planning and help users anticipate setbacks.
Participants can use these custom, human-centered AI tools and take part in coursework and workshops designed for people with no prior coding experience and little to no experience with AI. These experiences help them build their own custom AI tools tailored to specific challenges and infused with relevant knowledge and strategies.
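To make this concrete, here is a minimal sketch of what one such dialogic, question-based tool could look like under the hood. It assumes the OpenAI Python SDK and a placeholder model name, and it illustrates the design principles described above; it is not the Lab's actual implementation.

```python
# A minimal sketch, not the AI Possibility Lab's actual implementation.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

# The system prompt encodes the design principles described above:
# ask clarifying questions, surface contrasting perspectives, and keep
# ownership of decisions with the learner rather than handing out answers.
POSSIBILITY_COACH_PROMPT = """You are a possibility-thinking coach.
Do not give the user finished answers. In every reply:
1. Ask one clarifying question about their goal or an assumption they hold.
2. Offer at least one contrasting perspective to consider.
3. Close by asking what the user will decide or try next."""


def coach_turn(history: list[dict], user_message: str) -> str:
    """Send one user turn to the model and return the coach's reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model would do
        messages=[{"role": "system", "content": POSSIBILITY_COACH_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply


if __name__ == "__main__":
    history: list[dict] = []
    print(coach_turn(history, "How should I redesign my unit on ecosystems?"))
```

The point of the sketch is simply that much of the shift from answer machine to thinking partner lives in how the interaction is framed, rather than in any exotic technology.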
Embracing possibility thinking, supported by a more human-centered use of AI, allows us to prepare students to navigate the uncertainties ahead. This approach helps them realize more positive futures for themselves and the world.
——
Ronald A. Beghetto
Professor, Mary Lou Fulton College for Teaching and Learning Innovation, Arizona State University