
The Trust Question | Part 01 of 02
What follows is the first post in The Trust Question, a two-part series examining what higher education is really navigating right now. Drawing on qualitative research with educators and administrators across K-12 and higher education, this post maps how institutions are approaching AI and why those approaches often diverge within the same campus. The thread running through all of them: trust.
Research by Yolanda Wiggins, Ph.D., Associate Professor of Sociology at San José State University and 2025 American Sociological Association Public Engagement and Policy Fellow. For five years she has been on the faculty at SJSU, where her research has examined race, equity, and the social implications of AI in higher education. She brings a sociologist’s rigor and an educator’s fluency to evaluating how EdTech can genuinely serve higher education communities.
We have been inside these conversations for the better part of a year. In rooms where decisions about AI feel urgent and the people making them feel underprepared. With leaders being asked to commit to positions before the evidence is settled. With faculty who are closest to students and therefore most attuned to the actual stakes.
What brought us to this work is a genuine conviction that education is worth getting right. Higher education is navigating one of the most consequential shifts in its recent history, and most of what gets written about it oscillates between enthusiasm and alarm. Neither registers with the people actually responsible for the decisions.
So we started listening. Carefully. To provosts and deans, writing center directors, and faculty who know their students by name. To administrators drafting governance language they know will need revision before it’s even published. What we found across hundreds of qualitative interviews wasn’t a story about technology adoption or resistance. It was a story about trust: who holds it, how it forms, and what happens to the people responsible for it when the ground keeps shifting.
Every campus is a negotiation
Over the course of those interviews, four recurring orientations emerged in how higher education leaders approach AI. We’ve come to call them Innovators, Strategists, Resisters, and Pragmatists. They are windows into how people make decisions under uncertainty, and into how change management, leadership, and human behavior play out across institutions navigating something genuinely hard.
Innovators believe higher education should lead technological change. They are motivated by a conviction that responsible adoption now is better than reactive governance later. Strategists want evidence first. They move deliberately and only when outcomes make the case clearly. Resisters prioritize ethics, integrity, and institutional reputation. For them, slowing down is a form of principled leadership. Pragmatists are focused on what works: student success, equity, and implementation that doesn’t leave behind the people it’s meant to serve.
Each orientation reflects a different calculus for risk and opportunity, and each represents genuine responsibility. Most campuses are home to all four of these mindsets simultaneously.
A provost might operate as an Innovator, committed to positioning the institution as a leader in responsible AI adoption. The writing center director down the hall might be a Resister, concerned that AI tools are eroding what makes writing a genuine act of thinking. Faculty closest to students often express Pragmatist values, focused less on what AI represents philosophically and more on whether it actually helps students persist.
These perspectives coexist within a single institution, sometimes in productive tension, sometimes in direct conflict. Segmentation, in this sense, is less a sorting exercise than a map of the conversation already happening across campus. Understanding it is what separates engagement that builds alignment from engagement that stalls before it starts.
What the research keeps surfacing about institutional needs
Across all of these conversations, the ask from leaders wasn’t for more tools. It was for alignment. Tools become valuable when they reflect an institution’s actual priorities, constraints, and values. When they don’t, even well-designed features meet hesitation.
What leaders consistently described was a need for partners who understand the internal negotiations underway and can help think through the trade-offs, rather than arriving with a solution to a problem that hasn’t been correctly diagnosed. For those skeptical of AI, that means language to articulate concerns in ways that can shape decisions rather than shut them down. For those advocating for adoption, it means acknowledging that resistance is often grounded in a sense of responsibility.
Progress doesn’t come from pushing a single perspective. It comes from making the differences visible and working through them directly.
AI is also forcing something that was always deferred: explicit decisions about what institutions value. Speed or rigor. Access or control. Innovation or stability. Those tensions existed long before generative tools arrived. AI is making them harder to ignore. The four mindsets are useful precisely because they surface where alignment breaks down and why well-intentioned conversations stall.
Where the mindsets collide: the academic integrity debate
Nowhere do these dynamics surface more visibly than in how institutions approach academic integrity.
The conversation usually starts with the same question: how do we stop students from misusing AI? After a year of listening across K-12 and higher education, we consistently found that question to be the wrong starting point.
For most leaders we spoke with, academic integrity in the age of AI runs deeper than enforcement. At its core, it is a question about what the institution believes, and whether those beliefs hold up when tested publicly.
As one administrator put it: “This isn’t really about cheating. It’s about whether we trust our students, and whether they trust us back.”
That reframe has real weight. Overly restrictive governance signals distrust of students. The absence of governance signals avoidance. Leaders are navigating a narrow path: how do you set meaningful expectations without communicating bad faith?
K-12 and higher education are working through this differently, shaped by distinct accountability structures and different relationships with risk. In both contexts, the underlying challenge is the same: how do you build guidelines that reflect what you actually value?
What’s shifting in both contexts is the underlying frame. Many educators are moving their focus from detection toward judgment, from surveillance to discernment, from punishment to responsibility. As one leader told us: “We’re less interested in catching students and more interested in helping them learn how to make good choices.”
In this reading, integrity frameworks are doing more than establishing rules. They are signaling institutional values, telling students, faculty, and the public what an institution believes learning is for.
The question underneath the integrity debate
Leaders across these conversations expressed fatigue with binary narratives that frame AI as either a threat or a miracle. What they’re looking for is language: ways to engage with uncertainty that feel principled rather than reactive, and that can travel across students, faculty, families, and boards.
Every decision an institution makes about AI communicates something to the people watching. What leaders are navigating is how to make those choices in a way that builds credibility rather than erodes it.
At stake is whether institutions can maintain trust while the ground is shifting. That is a challenge that can’t be resolved with stricter rules alone. It requires thoughtful governance, shared understanding, and a willingness to engage honestly with uncertainty.
The institutions navigating this well are the ones willing to say: here is what we know, here is what we’re testing, here is what we will revisit. That posture of disciplined openness is what credibility looks like when no one has all the answers yet.
The second post in this series goes one level deeper. If the integrity debate is really about trust, what does trust actually mean to the people responsible for it? As the research shows, the answer depends entirely on who you ask.