When Psychology Beats the Algorithm
What years of studying trust computationally taught me about what networks actually measure — and what they miss
Abstract: Every recommendation engine, reputation system, and matching algorithm we build assumes the same thing: that your position in a network predicts your trustworthiness. This article presents findings from PhD research at Birkbeck, University of London, later extended in work published through ACM, that challenge this assumption empirically. Studying high-stakes online platforms — where trust isn’t a nice-to-have but the precondition for cooperation — we found that psychological models of trust, measuring what people actually believe about each other’s competence and willingness, significantly outperformed traditional network metrics. The advantage was structural, not incremental: in environments where reviews are sparse and biased toward extremes, network-based models fail on exactly the population that matters most, while belief-based models apply to the entire network. Aggregating per-capita trust scores by region and comparing them against GDP data confirmed a relationship social scientists had proposed for decades: a 0.98 correlation between computationally measured interpersonal trust and economic output. Trust is not a network property. It’s a belief structure — and the systems we build should be designed accordingly.
The assumption nobody questions
Every recommendation engine, every reputation system, every “people you may know” feature operates on the same underlying logic: your position in the network determines your value in the network.
It sounds reasonable. PageRank works this way [2]. So does academic citation analysis. So does LinkedIn’s algorithm when it decides which connection request to surface. The math is clean: centrality measures, clustering coefficients, eigenvector scores [3]. If you’re well-connected to well-connected people, you must be trustworthy. Or at least useful.
I spent years testing whether this was actually true. The answer is more interesting than “no.”
Where this started
During my PhD at Birkbeck, University of London [4], I studied a category of online interaction I defined as Online Social Networks of Needs (OSNN) — platforms where interactions start online, require significant trust, and evolve into real-world collaboration. In a later extension of the work, collaborating with researchers at the University of Palermo, we published a review paper with additional findings through ACM [1].
Not social media. Not content sharing. Platforms where trust isn’t optional — it’s the product.
Childcare was the case study — deliberately chosen as the highest-trust category in a seven-level taxonomy I developed for OSNNs. Platforms like Uber and Airbnb sit lower on the scale: interactions start online and require a physical exchange, but you’re trusting a stranger with your commute or your spare room, not the care of your children. Childcare sits at the top because the asymmetry is extreme: one party holds almost all the risk, and the potential harm is irreversible in a way that rating a driver four stars never captures.
The platform had rich data: detailed profiles, reviews, hiring outcomes, geographic distribution. We could see not just who connected with whom, but who actually matched and collaborated successfully. Ground truth — the thing most network studies don’t have.
The question was straightforward: what predicts successful collaboration better — where someone sits in the network, or what people actually believe about them?
Psychological models won. Decisively.
We tested traditional network metrics — betweenness centrality, closeness, eigenvector centrality — the standard toolkit that dominates social network research [3]. Then we tested the Castelfranchi-Falcone psychological trust model [5], which breaks trust into three components:
Opportunity — can this person actually do what’s needed, given time and location constraints?
Ability — do they have the competence to deliver?
Willingness — do they intend to follow through?
Trust in this framework is multiplicative. Zero on any dimension means zero trust, regardless of how the other dimensions score. You can be the most capable person in the network — but if people don’t believe you’ll actually show up, the math collapses.
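The multiplicative structure can be made concrete in a few lines. This is an illustrative sketch, not the thesis’s actual implementation: the class name, field names, and the example numbers are all assumptions chosen to show how a zero on any dimension collapses the overall score.

```python
from dataclasses import dataclass

@dataclass
class TrustBelief:
    """Illustrative belief structure for a Castelfranchi-Falcone-style
    assessment. Each component is a subjective probability in [0, 1]."""
    opportunity: float  # can they do it, given time and location constraints?
    ability: float      # do they have the competence to deliver?
    willingness: float  # do they intend to follow through?

    def score(self) -> float:
        # Trust is multiplicative: a zero on any dimension
        # collapses the whole assessment to zero.
        return self.opportunity * self.ability * self.willingness

# A highly capable provider nobody believes will show up:
flaky_expert = TrustBelief(opportunity=0.9, ability=0.95, willingness=0.0)
print(flaky_expert.score())  # 0.0

# A solid all-rounder scores higher despite no standout dimension:
reliable = TrustBelief(opportunity=0.8, ability=0.7, willingness=0.9)
print(round(reliable.score(), 3))  # 0.504
```

Compare this with an additive model, where a strong ability score could compensate for zero willingness — exactly the failure mode the multiplicative form is designed to rule out.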
The network metrics weren’t terrible predictors. But the psychological model — measuring what people actually believed about each other’s competence and intentions — significantly outperformed them [1].
Position in the graph tells you who knows whom. It doesn’t tell you whether anyone would stake something meaningful on that connection. And in high-trust environments, that distinction is everything.
There’s a deeper structural problem, too. The higher the trust required in the OSNN scale, the fewer the reviews — and the ones that do exist are heavily biased toward extremes. In childcare, the vast majority of parents and providers never leave a review at all. They interact, they match or they don’t, and they move on in silence. This means most existing trust models — which depend on review data to place users — fail on exactly the population that matters most: the quiet majority.
We tried factorisation machines, which are designed to handle sparse data, and they performed reasonably. But the Castelfranchi-Falcone model (CF-T) outperformed them significantly — and for a fundamental reason. CF-T doesn’t need reviews. You can use reviews as ground truth to calibrate the model, but the trust assessment itself — opportunity, ability, willingness — applies across the entire network, including users who have never written a single review. That’s the difference between a model that works on the vocal minority and one that works on everyone.
From trust scores to economic output
The trust scores were computable per user, per interaction, per region — and the research was designed from the outset to test what those regional aggregates might reveal.
The theoretical foundation is well established. Social capital — most commonly defined as the aggregate of resources linked to durable networks of mutual recognition [11] — sits at the intersection of interpersonal and social trust. Govier argued that when a society has social capital, almost everything becomes easier because people can turn to others for information and assistance [9]. Castelfranchi made the direction of causation explicit: social capital is a macro, emerging phenomenon, but it must be understood in terms of its micro-foundations — interpersonal trust [5]. And Putnam took it further, arguing that these dynamics are self-reinforcing: societies converge toward equilibria of either high cooperation, trust, and collective well-being, or the opposite — defection, distrust, and stagnation [6].
The prediction was clear: if you could measure interpersonal trust computationally at scale, you should see it track economic output. So we tested it. We aggregated per-capita trust scores by region and compared them against local GDP data.
0.98 correlation.
Regions where people demonstrated higher trust behaviours on the platform — more willingness to engage, more successful high-stakes matches, stronger competence assessments — mapped almost perfectly onto regions with higher economic output [1]. The trust scores are per capita, so this isn’t an artefact of population size or user volume. It’s a strong quantitative confirmation of a relationship that social scientists proposed decades ago and have been trying to measure with surveys and civic participation proxies ever since. Platform interaction data, it turns out, can measure it directly.
What this means for how we build systems
Most platforms treat trust as a byproduct. Use the service enough, accumulate reviews, and trust emerges from the aggregate. The architecture assumes that reputation is a sufficient proxy.
The research suggests otherwise. There’s a meaningful difference between “do people rate this person highly” and “do people believe this person can and will deliver what they promise.” The first is sentiment. The second is trust. They correlate, but they’re not the same thing — and in high-stakes contexts, the gap between them determines outcomes.
This has design implications. Instead of reducing trust to star ratings and review counts, systems could model the underlying belief structure: does this person have the practical ability, the demonstrated competence, and the perceived willingness to deliver? More complex to implement, but potentially far more predictive where it matters most.
Matching algorithms improve too. Instead of proximity and preference overlap, you can match on complementary trust profiles — pairing people whose belief structures about competence and reliability are mutually reinforcing. In the childcare context, this meant better placements. In other domains — freelancing, collaborative work, peer-to-peer services — the same logic applies.
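As a minimal sketch of what belief-based matching could look like: rank candidates by the believed opportunity × ability × willingness product instead of by proximity or preference overlap. The candidate names and belief values below are hypothetical, and the study’s actual matching procedure is not reproduced here.

```python
# Rank candidates by a believed trust score rather than proximity alone.
def cf_score(beliefs: dict) -> float:
    # Multiplicative: any dimension near zero sinks the candidate.
    return beliefs["opportunity"] * beliefs["ability"] * beliefs["willingness"]

# One party's beliefs about three candidates, perhaps inferred
# from profiles, response patterns, and interaction history.
candidates = {
    "provider_a": {"opportunity": 0.90, "ability": 0.60, "willingness": 0.90},
    "provider_b": {"opportunity": 0.50, "ability": 0.95, "willingness": 0.95},
    "provider_c": {"opportunity": 0.95, "ability": 0.90, "willingness": 0.30},
}

ranked = sorted(candidates, key=lambda c: cf_score(candidates[c]), reverse=True)
print(ranked[0])  # provider_a
```

Note how provider_c, the most capable and available candidate on paper, ranks last: low believed willingness dominates, which is precisely the behaviour star-rating averages fail to capture.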
Beyond topology
The dominance of psychological over topological predictors suggests something uncomfortable for the network science community: content might matter more than structure.
Not as a universal claim — PageRank works for web pages precisely because the content of links matters less than their pattern [2]. But for predicting human collaboration and trust? What people actually think about each other appears to be more important than how they’re arranged in a graph.
This doesn’t invalidate network science. It suggests that for the class of problems where trust prediction matters — matching, recommendation, risk assessment — we might be optimising the wrong layer. Structure is the skeleton. Beliefs are the muscle. And the muscle does the work.
The research process, briefly
This was my PhD research — the theoretical framework, the OSNN taxonomy, the data collection architecture, the trust model, and the empirical validation were all part of the doctoral work at Birkbeck, University of London [4], supervised by Alessandro Provetti. The later ACM publication [1] extended the findings in collaboration with Pasquale De Meo from the University of Palermo.
One challenge worth noting: traditional trust research uses surveys and interviews [7]. We needed to infer trust beliefs from platform interaction data — profile completeness, response patterns, and review content served as proxy measures for trust dispositions and competence beliefs. Not perfect, but effective enough to demonstrate significant predictive differences.
The work went through ACM peer review and is published in the ACM Digital Library [1]. The validation process improved the research significantly, forcing precision about claims and thoroughness in testing alternative explanations.
Looking forward
This research was completed before the current wave of AI agents entered the conversation. But the questions it raises are becoming more urgent, not less.
When one party in a trust relationship isn’t human — when an AI agent is negotiating on your behalf, managing your schedule, making purchasing decisions [8] — what happens to the belief structure that underpins trust? The Castelfranchi-Falcone framework was built around a critical assumption: that willingness implies intentionality, and intentionality implies a cognitive agent [5]. But we now have systems that exhibit something that looks a lot like intentionality. They reason, they plan, they persist toward goals, they adjust strategy. Do they satisfy the framework — or break it?
And the economic correlation raises its own questions. If interpersonal trust generates social capital [5][9], and social capital drives economic output [6], what happens to that relationship when a growing share of digital interactions involve non-human participants? Is agent-mediated trust still trust in the Castelfranchi sense? Does it still generate social capital? Or does it produce something functionally efficient but structurally hollow?
These are questions I intend to explore in a future article — extending the framework that underpinned this research into the new landscape of autonomous agents and potential artificial general intelligence. The theoretical boundaries of trust, drawn for human cognition, are about to be tested.
The foundation is here: trust is not a network property. It’s a belief structure. And the systems we build should be designed accordingly.
Ylli Prifti, Ph.D., writes about AI, cognition, and engineering culture on Weighted Thoughts.
The full research is available through the ACM Digital Library. If you’re interested in trust modelling, social capital measurement, or the implications of agentic AI for trust systems — connect on LinkedIn or reach out.
References
[1] De Meo, P., Prifti, Y., & Provetti, A. “Trust Models Go to the Web: Learning How to Trust Strangers.” ACM Transactions on the Web, Volume 19, Issue 2, Article 12, Pages 1-26, March 2025. https://doi.org/10.1145/3715882
[2] Brin, S. & Page, L. “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” Stanford University, 1998. https://research.google/pubs/the-anatomy-of-a-large-scale-hypertextual-web-search-engine/
[3] Freeman, L.C. “Centrality in Social Networks: Conceptual Clarification.” Social Networks, 1(3), 1978. https://doi.org/10.1016/0378-8733(78)90021-7
[4] Prifti, Y. “Online Social Networks of Needs.” PhD Thesis, Birkbeck, University of London, 2024. https://eprints.bbk.ac.uk/id/eprint/52517/
[5] Castelfranchi, C. & Falcone, R. “Trust Theory: A Socio-Cognitive and Computational Model.” Wiley Series in Agent Technology, 2010. https://doi.org/10.1002/9780470519851
[6] Putnam, R.D. “Making Democracy Work: Civic Traditions in Modern Italy.” Princeton University Press, 1993. Also: “Bowling Alone: The Collapse and Revival of American Community.” Simon & Schuster, 2000.
[7] Mayer, R.C., Davis, J.H., & Schoorman, F.D. “An Integrative Model of Organizational Trust.” Academy of Management Review, 20(3), 1995. https://doi.org/10.5465/amr.1995.9508080335
[8] Prifti, Y. “The New Units of Economics in Software Engineering Are Undecided.” Weighted Thoughts, February 2026. https://weightedthoughts.substack.com/
[9] Govier, T. “Social Trust and Human Communities.” McGill-Queen’s University Press, 1997.
[10] Rousseau, D.M., Sitkin, S.B., Burt, R.S., & Camerer, C. “Not So Different After All: A Cross-Discipline View of Trust.” Academy of Management Review, 23(3), 1998. https://doi.org/10.5465/amr.1998.926617
[11] Portes, A. “Social Capital: Its Origins and Applications in Modern Sociology.” Annual Review of Sociology, 24(1), 1998. https://doi.org/10.1146/annurev.soc.24.1.1


