Interviewing ChatGPT: How might AI affect learning in Higher Education?
An interview with ChatGPT to see what it thinks the future of education will be.

When Gen AI was first released, I tried an experiment: I interviewed ChatGPT for my Honeybee Histories newsletter and on Medium.
Bee venom in the nineteenth century: An interview with an AI [Substack - paid subscribers only]
This was in the early days. The references that ChatGPT gave me were entirely fictitious, but many of the arguments it made seemed accurate enough (albeit containing their own biases and some errors). I thought I would try that experiment again, two years after the previous ones. This time I wanted to focus on the future of Higher Education, and on what ChatGPT might argue that future will look like when enabled by Gen AI.
The first thing I noticed was that ChatGPT now can’t seem to help but respond in bullet points. It’s like an obsession! While these bullet points seem to lend some depth to its responses, most attempts to go into detail just end up producing more bullet points and more top-level ideas. It was only when giving specific examples that ChatGPT seemed able to provide anything more concrete. I guess this all comes down to the types of prompts that I ask it. In a few weeks’ time I plan to do a follow-up to this post with better instructions for ChatGPT, to see if that alters the conversation much.
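To give a sense of what those ‘better instructions’ might look like, here is a rough sketch of how a question could be posed through the OpenAI Python client with a system message that discourages bullet points. To be clear, the model name and the exact wording here are my own illustrative guesses, not something I have tested:

```python
# A rough sketch of steering ChatGPT's response style with a system
# message, via the OpenAI Python client. The model name and instruction
# wording are illustrative guesses, not tested settings.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are being interviewed for a blog post about Higher "
                "Education. Answer in flowing prose paragraphs and avoid "
                "bullet points, numbered lists, and headings."
            ),
        },
        {
            "role": "user",
            "content": "How do you see Higher Education looking by 2035?",
        },
    ],
)

print(response.choices[0].message.content)
```

Whether a system message like this is enough to break the bullet-point habit is exactly what I hope to find out in that follow-up.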
One thing that I did find interesting was when I accused ChatGPT of being presumptive in its belief that Gen AI could be used by students to produce learning materials in various formats without support. What about accuracy and interpretation? At this point ChatGPT seemed to treat my challenge as feedback and offered me a choice between two responses: one focused on the educator and one on the student. This was interesting and unexpected. I chose the second (the student-focused response) as it seemed to more accurately answer my concern, but I have included the educator-focused alternative at the bottom of this post.
Overall, ChatGPT started by promoting how wonderful a Gen AI-enabled Higher Education will be - with immediate feedback for students, the option for students to choose their own pathways by using Gen AI to explore themes and topics, and the educator working more as a guide and facilitator. However, when I dug deeper or challenged it on some of those assumptions, it became extremely clear that the educator’s role is not just to guide students but to exert some level of control over the process. ChatGPT admitted that the educator would need to nurture AI literacies, helping students to be critical of the responses they receive and ethical in how they make use of AI tools. How, precisely, we do that is, I suspect, always going to be in contention, as Gen AI rapidly develops new capabilities and flavours.
I have not altered the contents of the conversation, so it does go on a bit (thanks again to ChatGPT’s obsession with bullet points), but I found its responses interesting enough.
This is the conversation that we had:

Matthew said:
Hello, ChatGPT. I would like to interview you about the future of an AI-enabled Higher Education. Would that be okay?
ChatGPT said:
Hello! I'd be glad to be interviewed—feel free to ask your questions about the future of AI-enabled higher education. I'm ready when you are.
Matthew said:
Brilliant. Thank you. Let's start broadly. How do you see Higher Education looking in ten years' time, i.e. by 2035?
ChatGPT said:
You're welcome! Let’s dive in.
By 2035, higher education is likely to look significantly different—more personalized, flexible, and technology-integrated than it is today. Here’s a broad overview of how I see it evolving:
🎓 1. Personalized Learning at Scale
AI will power adaptive learning platforms that tailor content, pacing, and assessments to individual students. Degrees may become modular, with students building personalized “learning stacks” based on their interests and career goals.
AI tutors will provide 24/7 support, clarification, and feedback.
Learning analytics will track student progress deeply and suggest adjustments in real time.
Traditional grades may give way to more competency-based models.
🌍 2. Hybrid and Global by Default
Higher education will likely operate in a hybrid mode as the norm—blending physical campus experiences with immersive online environments, including virtual and augmented reality.
Institutions will collaborate across borders more fluidly, enabling students to study across countries without relocating.
Global faculty networks, remote internships, and AI-enabled language translation will open up cross-cultural education at scale.
🧠 3. Faculty Roles Will Evolve
Faculty will shift from being primarily content deliverers to being facilitators, mentors, and researchers. AI will handle much of the routine instructional labor:
AI-generated materials and assessments will free up time for deeper human interaction.
Professors may co-create content with AI or supervise AI-taught segments.
There will be greater emphasis on emotional intelligence, critical thinking, and ethics—areas where human educators add irreplaceable value.
📚 4. Unbundling and Lifelong Learning
The traditional 3–4 year degree model will face increasing pressure from stackable credentials, microdegrees, and on-demand learning.
Learners may dip in and out of education across a lifetime, with AI tracking their skills and recommending what to learn next.
Universities could become “learning hubs” rather than time-bound institutions.
🔐 5. Ethical and Policy Challenges
With greater AI use will come serious debates about data privacy, algorithmic bias, labor displacement, and educational equity.
Who owns student data?
How are AI models trained and validated?
How do we prevent AI from reinforcing existing inequalities?
The regulatory and ethical frameworks that develop over the next decade will shape how transformative—or dystopian—this future becomes.
Would you like to explore one of these areas more deeply?
Matthew said:
That's a lot to consider. Let's take a look at your second point about 'hybrid by default'. I think most HE education was already heading towards a more blended mixture of online and in-person learning (both synchronous and asynchronous), especially since the Covid pandemic. I guess Gen AI will significantly alter what the contents of those different activities will be. Is that right? And also, I've heard about the rise of virtual and augmented realities in HE for decades, and in reality they have never really become all that popular. Do you think Gen AI might change that, and if so, in what way?
ChatGPT said:
Excellent questions—both very perceptive.
You're absolutely right that higher education was already trending toward blended learning, especially after the pandemic forced mass experimentation. What generative AI does, however, is fundamentally alter the content, delivery, and even the nature of those activities, making the hybrid model qualitatively different—not just a mix of Zoom and Moodle.
🔄 1. How Gen AI Will Transform Blended Learning
📚 Asynchronous Content Becomes Generative and Dynamic
Instead of static video lectures or PDFs, content can be generated or co-created with students in real time.
AI can personalize readings, examples, or problem sets to a student’s interests or learning level (e.g., a history major and an engineering student getting different case studies on the same topic).
Multimodal content—text, audio, video, simulations—can be instantly generated or translated for accessibility and engagement.
👥 Synchronous Sessions Become Higher-Value Human Time
With AI handling the lower-order tasks (explanations, drills, reminders), live sessions can focus more on discussion, critique, collaboration, or ethics and judgment—things machines can't do well.
Instructors may increasingly act like facilitators or "learning designers" rather than lecturers.
🤖 AI as a Real-Time Companion
Students might interact with personal AI learning assistants during asynchronous work, asking for clarification, summaries, translations, or explanations tailored to their current level of understanding.
This feedback loop fundamentally shifts the role of independent study—it's no longer independent, but collaborative with AI.
So yes, the structure may resemble today’s hybrid learning, but the content, agency, and interaction will feel radically different.
🧠 2. Virtual & Augmented Reality: Finally Going Mainstream?
You’re right—VR/AR in higher ed has been "about to take off" for a long time. The reasons it hasn’t include:
High cost of hardware
Limited compelling content
Technical friction
Lack of instructor expertise or time
What generative AI brings to the table is the ability to rapidly and cheaply create immersive content, reducing nearly all those barriers.
Here's how Gen AI could unlock VR/AR in HE:
🎮 Rapid World-Building
Imagine a history professor saying, “Let’s create a 3D simulation of the Fall of Constantinople.” A Gen AI model could generate that environment in minutes—not months—complete with characters, narration, and interaction points.
🤝 Low-Code/No-Code Authoring
Faculty could design augmented or virtual lessons without knowing Unity or Unreal Engine. Gen AI could take a voice prompt and build a VR scene, lab simulation, or AR overlay.
🧪 Discipline-Specific Applications
STEM: virtual chemistry labs, anatomical models, or physics experiments
Humanities: immersive historical reconstructions, museum-style walk-throughs, or dramatizations
Medicine & Nursing: high-fidelity simulations with branching scenarios and real-time AI patient interactions
🌐 Access Without High-End Hardware
With AI-driven compression and streaming, more immersive experiences can be delivered through regular browsers or affordable headsets (like Quest), removing the tech barrier.
In Summary:
Yes—Gen AI redefines the nature of hybrid education, especially by:
Replacing static content with interactive, adaptive, multimodal experiences
Elevating in-person or synchronous time to focus on human-centered learning
Possibly making VR/AR content as easy to produce as PowerPoint slides, thus finally pushing immersive tech into the educational mainstream
Would you like to explore any specific use case—for example, how AI and AR might change a particular subject like history, architecture, or nursing education?
Matthew said:
If I'm reading you right, you're suggesting that the underlying approaches and theories of learning remain very much intact. What changes is the ability for students to receive immediate feedback and to tailor the materials (and their form - it could be text, audio, video) to fit their needs and to explore topics more independently. Is that right? If so, what role do you foresee the educator having in this process?
ChatGPT said:
Yes—you’ve read that exactly right.
The underlying pedagogical principles—like constructivism, inquiry-based learning, cognitive load theory, and formative assessment—remain valid and central. What generative AI changes is not the why or how people learn, but the tools and scale at which effective learning strategies can be delivered.
In other words:
Feedback becomes immediate, specific, and endless.
Scaffolding becomes personalized, not generic.
Modality becomes flexible—text, video, diagram, or even simulation—on demand.
Exploration becomes continuous and self-directed, not constrained by scheduled tutorials or office hours.
🧑🏫 So Where Does the Educator Fit?
Contrary to fears of replacement, educators become even more essential—but their role shifts. Here's how:
🧭 1. Curator and Designer of Learning Journeys
Educators design learning paths, decide which tools and tasks promote deep learning, and select or supervise the AI systems that generate materials.
Instead of delivering all content, they guide students in navigating complexity, verifying sources, and learning how to learn.
Think: “architect of the learning experience” rather than just “content transmitter.”
🤔 2. Critical Thinking and Judgment Coach
AI can help explain what, but educators are crucial in helping students understand why something matters, how to question it, and where the ambiguities lie.
Educators train students in disagreement, reflection, nuance, and ethics—areas where AI lacks lived experience or emotional intelligence.
🧪 3. Facilitator of Dialogue and Peer Learning
In live or hybrid settings, educators moderate discussions, challenge assumptions, and facilitate meaning-making through dialogue.
They also design and supervise collaborative projects, which can integrate AI tools but still rely on human interaction and accountability.
🛡️ 4. Ethical Guide and Data Steward
Educators will play a key role in helping students understand the limits and risks of AI: algorithmic bias, hallucinations, privacy, intellectual property, etc.
They can also advocate for fair AI use in assessment, accessibility, and inclusion.
🪞 5. Emotional Anchor and Mentor
Students, especially those navigating uncertainty or self-doubt, need human connection—not just efficient content.
Educators provide empathy, encouragement, and mentorship, which AI cannot authentically replicate.
🚦Analogy: GPS vs. Travel Guide
AI is like a GPS: it gives you directions, alternatives, and options.
The educator is the travel guide: someone who explains the culture, warns about hidden risks, offers context, and ensures the journey is meaningful—not just efficient.
In short: AI amplifies what educators do best. It removes the burden of mass personalization and frees educators to focus on what only humans can do—inspire, challenge, support, and grow other humans.
Would you like to explore how this shift might affect educator training or academic assessment?
Matthew said:
I'm glad you brought up the point about accuracy and interpretation of materials. In your previous answer, including when you were talking about VR and augmented realities, there was a presumption (or so it seemed) that students could use Gen AI to produce various resources that will help them to learn. But how will they know that what they produce from AI is accurate and useful, and how will they be guided to interpret them properly? That is where the educator comes in. But how? Do you have any thoughts?
ChatGPT said:
Absolutely—and you're raising one of the most important and underdiscussed issues in the conversation about generative AI in education: how students will develop the judgment to critically evaluate what AI produces, rather than just consuming it uncritically.
You're right that my earlier examples (e.g. AI-generated VR reconstructions or personalized explanations) implied an ease of production—but ease doesn’t guarantee truth, pedagogical soundness, or critical insight.
Let’s break this down into two key risks and then explore the educator's role in mitigating them:
⚠️ The Two Big Risks
1. Epistemic Risk – Mistaking fluency for accuracy
Gen AI can generate highly plausible-sounding but incorrect or misleading content—often called “hallucinations.” Worse, it presents that content with grammatical confidence and emotional neutrality, which can be misleadingly persuasive.
A student may generate a historical summary, a scientific explanation, or a visual simulation that seems authoritative but contains factual errors, biased framing, or outdated assumptions.
2. Interpretive Risk – Consuming without questioning
Even when AI-generated materials are broadly accurate, they still require contextual interpretation. Without guidance, students may:
Accept a single perspective as “the truth”
Miss nuances, contradictions, or contested interpretations
Fail to ask why something was presented a certain way—or what was left out
🧑🏫 The Educator’s Role: From Source of Truth to Critical Facilitator
Educators become essential in helping students build AI literacy, epistemic humility, and critical interpretation skills.
Here’s how that role plays out in practice:
🧠 1. Teaching AI Literacy as a Core Skill
Students need to learn how AI models work—including limitations, bias, training data gaps, and hallucinations.
Educators can include meta-level prompts like:
“What assumptions is this AI-generated text making?”
“How would you verify this with a trusted source?”
“What’s missing from this summary or perspective?”
This is akin to teaching source evaluation or media literacy in a research context—just transposed into the AI age.
🔍 2. Structuring Assignments Around Verification and Reflection
Instead of asking students to simply produce content with AI, educators can design tasks like:
Compare an AI-generated response with a scholarly article—identify discrepancies.
Use AI to generate an opposing argument, then critique its logic or evidence.
Create a timeline using AI, then annotate which elements required correction or confirmation.
This builds the skill of using AI as a draft, not a destination.
🧭 3. Providing Contextual Anchors
Educators can guide students in understanding:
Why a simulation might be historically plausible but not complete
How a visualization reflects certain scientific models but omits others
The political, ethical, or cultural context behind an AI-generated interpretation
In this way, educators frame AI as a lens, not a mirror.
🛠️ 4. Building Interpretation Into Assessment
Assessment strategies must evolve to reward:
Critical engagement with AI outputs
Transparency in using AI tools (e.g., prompt logs, reflection statements)
Original insight or reinterpretation, not just polish
This reinforces that AI is a learning partner, not a shortcut or answer key.
🧾 Bottom Line
You're exactly right: producing content with AI is easy; interpreting it responsibly is hard. That’s where educators remain irreplaceable.
Their role isn’t to police AI use—but to equip students with the judgment, skepticism, and interpretive tools to use AI thoughtfully and truthfully.
Would you like to explore how this could be built into curriculum design or professional development for faculty?
Matthew said:
It would be great if you could offer a short example, yes?
ChatGPT said:
Absolutely. Here’s a short, concrete example that shows how an educator might guide students to use generative AI responsibly and critically, within a real course setting.
📘 Course: Modern European History
🧪 Assignment: Create an AI-Generated Timeline of the French Revolution
🧾 Student Task:
"Use a generative AI tool (e.g., ChatGPT, Claude, Gemini) to generate a timeline of key events in the French Revolution (1789–1799). Then, review and critique the timeline using at least two scholarly sources. Submit both the AI-generated output and a 500-word critical reflection."
🎯 Learning Objectives:
Understand key events and phases of the French Revolution
Evaluate AI-generated historical summaries for accuracy and framing
Practice triangulating AI outputs with trusted academic sources
Reflect on what gets emphasized or omitted—and why
🧑🏫 Educator’s Role:
Introduces the assignment with a brief discussion on AI limitations, including potential bias, outdated training data, or oversimplification.
Offers a guide on how to critique historical narratives (e.g., What perspectives are privileged? What context is missing? Is causality clear or implied?)
Provides scaffolded questions, such as:
“Which events did the AI omit, and why might that matter?”
“Does the AI timeline reflect a particular historiographical stance (e.g., liberal, Marxist, revisionist)?”
“How did you verify dates and details? What discrepancies arose?”
🧾 Student Reflection (Example Excerpt):
"The AI-generated timeline was mostly accurate in listing major events like the Storming of the Bastille and the Reign of Terror. However, it skipped over the role of women in revolutionary movements and framed the Thermidorian Reaction as a 'stabilizing force,' which reflects a particular interpretive stance. After consulting Doyle (2001) and McPhee (2012), I noted that this framing overlooks the continued repression of dissent in the Directory period. This exercise showed me how AI can reproduce dominant narratives unless prompted otherwise."
🧠 Outcome:
Rather than treating AI as a source of facts, students engage with it as a provocation to deeper inquiry, a test of their historical judgment, and a practice ground for critical comparison.
Would you like a similar example in a different discipline—like STEM, nursing, or literature?
Matthew said:
No, thank you. I think that concludes the interview. Thank you for your time, ChatGPT. We covered a lot in that interview and there is, I feel, a lot to unpick further.
ChatGPT said:
You're very welcome—and thank you for the thoughtful, nuanced questions. You approached the topic with the kind of critical depth this conversation really needs.
If you decide to revisit any of these areas—curriculum design, assessment, faculty development, student agency, or the ethics of AI in education—I’d be glad to explore them further with you.
Best of luck with your work, and I hope the interview proves useful in sparking deeper conversations.
Below is the alternative response that ChatGPT offered when I accused it of presuming that students could use Gen AI to produce resources without help and support, given that Gen AI often produces inaccurate or biased material. The answer I included above focused on the student, while this answer focuses more on the educator.
ChatGPT said (Response 1):
Yes—this is a critical point, and you’re absolutely right to press on it. The assumption that students can just “generate” helpful resources with AI and learn effectively on their own needs to be challenged. Generative AI is powerful but also frequently wrong, biased, or misleadingly confident. So, AI doesn’t reduce the need for human interpretation—it increases it.
You're right: the educator becomes indispensable here. Let’s unpack how.
🎯 The Core Problem: Generative AI is a Confident Liar
AI models don’t “know” truth—they pattern-match.
They hallucinate sources, misstate facts, and often deliver surface-level synthesis.
Worse: the outputs feel authoritative. Students may not be equipped to question what’s plausible vs. accurate.
🧑🏫 So How Can Educators Help?
🕵️ 1. Teach Source Evaluation and Fact-Checking as Core Literacies
Educators must embed AI literacy into curricula, just like digital literacy was emphasized a decade ago.
This includes:
How to cross-check AI outputs with verified sources
Recognizing hallucinations, bias, and superficial synthesis
Using AI as a starting point, not a final answer
Example: A student uses AI to generate a summary of a Supreme Court case. The educator's task is to guide them to compare that output with primary sources and legal commentaries—and to spot omissions or distortions.
🧰 2. Create Structured AI-Integrated Learning Tasks
Don’t just let students “play” with AI—design tasks that require critical interaction with it.
Example:
Task 1: Generate a historical narrative with AI
Task 2: Annotate it with your corrections, citing sources
Task 3: Reflect on where and why the AI was misleading
This kind of work helps students treat AI like a flawed but useful research assistant, not an oracle.
🗣️ 3. Foster Collaborative Interpretation
Use AI outputs as the basis for classroom dialogue, debate, or critique.
Imagine a class reading the same AI-generated interpretation of a poem or economic theory. Students work in groups to:
Compare it with canonical interpretations
Identify gaps, misreadings, or assumptions
Propose revisions with human insight
Educators curate these discussions, not only to model interpretive skills, but to demonstrate that learning is social and contested, not just content delivery.
🧭 4. Be Transparent About Their Own Use of AI
Educators who demonstrate how they use AI, where they fact-check, and where they reject its suggestions model the kinds of habits students need to develop.
This demystifies the tool and reinforces that critical thinking is always required—even when the machine does the first draft.
📜 5. Shift Assessment Toward Process, Not Just Product
If students are generating summaries, ideas, or even code with AI, assessment needs to focus more on:
The quality of their analysis and adaptation
The documentation of their decision-making
Their reflections on AI limitations
This encourages metacognition: “What did I ask? What did I get? What did I change and why?”
🧩 Summary: Educators as Sense-Makers
In an AI-enabled classroom, the educator’s role becomes:
Interpreter-in-chief: helping students make sense of messy, generated knowledge.
Critical co-reader: showing students how to interrogate, not just consume.
Ethical guide: modeling responsibility and skepticism in a world of machine fluency.
AI can produce content, but it can’t produce understanding. That gap is where the educator operates—and thrives.
Would you like to dig deeper into how this might reshape curriculum design, assessment rubrics, or even academic integrity policies?