I've been thinking some more about the issues I raised in my earlier "How to run a conspiracy" posting. Here are some ideas that build on that.
I was originally thinking of a situation where the idea of Coolness is what I'm going to call global: everyone agrees on what it means to be Cool. At most we might disagree about whether to trust that a given person is Cool, but that's only because we have incomplete information - we all agree on what it would mean for a person to be Cool if only we could be sure of the evidence. The situation I had in mind was one where everyone is cooperating in an activity that is illegal or at least socially disapproved, hence my calling it a "conspiracy." For instance, in the Mafia, everyone wants assurance that a possible member is not a police informant before trusting that that person really is a legitimate member, and everyone agrees on what a police informant is. The killer app I actually had in mind was fandom communities on Livejournal, which want to remain "open" in some sense, but which I think would benefit from not being examined too closely by the non-fandom general public.
When Coolness is global, it makes sense for trust to be transitive, because everyone has a common goal. Transitivity is a math term: a relation is transitive if, whenever it holds from A to B and from B to C, it also holds directly from A to C. For instance, equality is transitive: if A equals B and B equals C, then A must equal C. In a global-Coolness environment, if Alice thinks Bob is Cool and Bob says that Carol is Cool, then it makes some sense for Alice to trust Carol as being Cool, because she can be assured that Bob's definition of "Cool" is similar to her own. But the kind of Coolness that's most important on social networking sites isn't global, it's local. Alice trusts Bob because Bob doesn't diss Alice's teddy-bear fetish; Bob trusts Carol because Carol doesn't diss Bob's marble-top-table fetish; but Carol thinks Alice is a pervert, and Alice will be unhappy if Carol reads Alice's Very Secret Diary entries. If Coolness is local, then it's not enough for the system to classify everyone as simply Cool or Uncool; it has to classify everyone as Cool or not from the point of view of each of the other users.
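The distinction can be made concrete with a tiny sketch (my own illustration, using the names from the example above, not part of any real system): local trust is a relation between ordered pairs of users, and nothing forces it to be transitive.

```python
# Direct trust links from the example above: Alice trusts Bob,
# and Bob trusts Carol.
trusts = {("alice", "bob"), ("bob", "carol")}

# A naively transitive system would infer that Alice trusts Carol...
inferred = ("alice", "bob") in trusts and ("bob", "carol") in trusts
print(inferred)  # True

# ...but no such link actually exists, and Alice emphatically
# does not want the system to create one for her.
print(("alice", "carol") in trusts)  # False
```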
The current state of the art is to do a global Cool/Uncool determination, have some default rules, and then allow users to make direct exceptions. They usually soften the determination by having different levels of Coolness. Facebook is a good example. To see my profile, first you have to have a membership at all, which is easy to achieve, but requires that you at least provide an email address and go through a sign-up form. Then you have to be in one of my "networks," which are basically geographically based. Even then you only see a limited version of my profile, and to really get access, I have to personally authorize you (which I can do regardless of "networks"). Livejournal works similarly, though with more access given to the default no-account general public. Both systems basically sacrifice any possibility of automatic transitive trust, because it's just too dangerous in a local-Coolness situation. The friends of my friends are not necessarily my friends.
Here's something I think is key: even though local Coolness means trust isn't transitive, local Coolness is nonetheless almost transitive. If you're my friend, then I'll probably like almost all your friends. The odds are good that a randomly-selected friend of one of my friends will also be someone I like. The trouble is that the exceptions are catastrophic; there are just a few of my friends' friends whom I emphatically do NOT want to trust.
So if Coolness is almost-transitive, maybe instead of ignoring any possibility of its being transitive, we could pretend it's transitive and then deal with the exceptions. For that to work, I have to be able to create negative links in the friends graph - not just "these are the people I trust" but also "these are the people I don't trust." I'm not aware of any social networking system that really supports that. Several allow something vaguely resembling it, like Livejournal with its "banning" (which actually only applies to comment-writing and similar), but they don't implement it in a way that works the way users want. In the Livejournal example, what users desperately want is to be able to say "This journal is public except for Alice, who isn't allowed to read it." That's impossible, of course, because Alice can always log out to read public postings, but its being impossible doesn't stop users from wanting it, or even from mistakenly believing that it exists. Someone who wanted that effect would have to make their postings friends-only... but then they'd sacrifice the "I want to publish to people I don't know" goal. You can have public or private, but not "public with exceptions."
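As a sketch of why "public with exceptions" fails, here's roughly what such a visibility check would look like (my own hypothetical code, with `None` standing in for a logged-out reader):

```python
# Explicit distrust ("enemy") links, as proposed above; names illustrative.
distrusts = {"me": {"alice"}}

def may_read(owner, viewer):
    """Public-except-for-enemies visibility check."""
    if viewer in distrusts.get(owner, set()):
        return False  # the explicit exception
    return True       # otherwise the posting is public

print(may_read("me", "alice"))  # False: Alice is locked out while logged in
print(may_read("me", None))     # True: logged out, Alice has no identity
```

The last line is the whole problem: a public posting is readable by readers with no identity at all, so the exception list can only bind people who stay logged in.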
Facebook has something similar in that users can designate some "friends" as only seeing limited versions of their profile - so you can list someone as a "friend" without their really being able to read your secret stuff. Such people could be called "friends without benefits." That works better on Facebook than it would elsewhere because Facebook doesn't allow the general unlogged-in public to read the site at all, so someone can't avoid a restriction by logging out; they'd need to create a sockpuppet account, and that in turn is difficult because of the need for getting the sockpuppet into the right networks and named as a friend. But neither system really lets you designate someone as positively untrusted, what might be called an "enemy" as the opposite of a friend. I think the main reason such a designation isn't allowed is to reduce the potential for what users call Drama.
What if we accepted that some Drama is inevitable, and allowed anti-friend links in the social network? In that case, I think it might be possible to build on my ideas from the "conspiracy" article to build something that would work well for local trust. Here's an outline of my current proposal:
The global sponsorship under the "conspiracy" system would determine whether people are allowed into the system at all. My expectation is that this would be a low barrier to entry; its main function would be to reduce sock puppets and spammers. Because you must sponsor people in order to get any readers at all, I'd expect sponsorship to be easier to attain than under the "conspiracy" system; but because the no-pairs rule would be kept, there'd be a need to avoid sponsoring anyone who is already too popular. My hope is that this would encourage a moderately connected web of trust in which everyone would tend to be connected to everyone else, but not too closely, avoiding the "small number of superstars" situation seen on systems like MySpace.
This scheme allows for transitivity - if I trust people, then I'll be inclined to trust the people they trust. However, it's not pure transitivity, because there's a limit on the number of links. Publishing to my friends and friends-of-friends doesn't mean I also publish to everyone in the connected component, most of whom are random strangers. On the other hand, I get to choose how broadly I'll publish by choosing my constants for the trust determination - unlike Friendster (note: it's been years since I used Friendster and I don't know what it's like today), where the designers set an arbitrary limit of four edges with no apparent justification. It's unavoidable that there's some tradeoff, but letting the users choose it seems like a win.
The distrust links address the main problem of transitive systems: my enemies are closely connected to me, so "closely connected to me" is not good enough for the system to conclude that I will like a person. This design is meant to echo the practical observation: trust is almost transitive, but with important exceptions. I can attach distrust links to the people I really don't want reading my presence on the site. The default, within my circle of trust, remains to trust anyone closely connected to me. However, the fanout in the network means that most people in my circle will be at its edges, unable on their own to sponsor others into my circle. Thus I can have some warning of potentially unwelcome people on the horizon before they're actually allowed into my circle. It's left to me to police that, and to enforce any "You sponsored an unworthy person, so I should distrust you as well" retaliation as far as local trust goes. Key: Drama avoidance is no longer such a high priority for the system.
The exponential formula for determining trust could be called a bell and whistle, and I'd expect some kind of interface to shield users from direct exposure to it. The idea is that it gives the user access to two important behaviours. First, if I'm connected to a person by only one path, I want to be able to say how long that path can be before the person will be called "untrusted." Second, if there are multiple paths, they should be able to combine to some extent, allowing a more distant connection than I'd accept through a single path. By setting the constants appropriately I can choose behaviours anywhere between constant-radius (like Friendster) and must-have-k-endorsements (like "How to run a conspiracy").
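To make those two endpoints concrete, here's a sketch under my reading of the formula: each path of length n contributes constant-1 to the nth power, the contributions are summed, and the total is compared against constant-2 (written c1 and c2 below; the specific numbers are illustrative choices, not anything prescribed):

```python
from fractions import Fraction

# Constant-radius behaviour: set c2 = c1**r, and any single path of
# length r or less clears the threshold on its own (Friendster-style).
c1 = Fraction(1, 2)
r = 4
c2 = c1 ** r
assert c1 ** 4 >= c2 and c1 ** 5 < c2   # four links pass, five don't

# Must-have-k-endorsements behaviour: with c1 = 1, every path scores 1
# no matter how long it is, so the total is just the number of
# independent paths; c2 = k then demands k endorsements.
c1, k = Fraction(1), 2
assert 2 * c1 ** 7 >= k   # two independent paths suffice, however long
assert 1 * c1 ** 1 < k    # one path alone never does
```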
It's even possible to imagine choosing a different pair of constants for different kinds of access, even down to the individual posting level - so Very Secret Diary Entries are only visible to my direct friends or people sponsored by two of my direct friends, but general announcements are visible to anyone within five links; and in all these cases the few exceptional individuals I've designated for distrust are banned from reading at all. The sock-puppet problem with "public except for you" access is reduced by the global requirement that people log in to read at all. The global system in effect has constant-1 equal to 1, giving the pure "conspiracy" rule, in order to allow unlimited growth.
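For instance, here are constant pairs that would produce roughly those two access levels; the specific fractions are my own illustrative choices, not anything from the proposal:

```python
from fractions import Fraction

# Very Secret Diary: c1 = 1/2, threshold c2 = 2/5.
c1, c2 = Fraction(1, 2), Fraction(2, 5)
assert c1 >= c2            # a direct friend passes
assert 2 * c1 ** 2 >= c2   # so do two sponsors among my direct friends
assert c1 ** 2 < c2        # a single such sponsor does not

# General announcements: c1 = 9/10, threshold c2 = 11/20.
c1, c2 = Fraction(9, 10), Fraction(11, 20)
assert c1 ** 5 >= c2       # a single chain of five links passes
assert c1 ** 6 < c2        # six links is too far
```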
One thing I can imagine happening is paths along which not everyone is trusted. Imagine what happens if I set constant-1 to 2/3 and constant-2 to 1/2, I sponsor Alice and Bob, they sponsor Carol and Dave respectively, and Carol and Dave both sponsor Elsie. Then I trust Alice and Bob (each one link away from me; 2/3 is greater than 1/2). I don't trust Carol or Dave (each two links away from me; 4/9 is less than 1/2). But I trust Elsie (two paths to her, each of three links and worth 8/27, for a total of 16/27, which is more than 1/2). I'm not sure whether that's a good thing or not. On the one hand, it seems counter-intuitive that I'd trust Elsie when she's only connected to me through people I don't trust. On the other hand, it is the case that I have two significant independent endorsements of her. What's going on is that even though Carol and Dave don't meet the threshold, I have a significant nonzero amount of trust for each of them, and so they can combine to give me a reason to trust Elsie. This kind of situation could be eliminated with an additional rule that trust paths cannot contain untrusted people. I don't know if it's really a problem that should be eliminated, though; maybe these kinds of independent partial recommendations really should add up like that. In any case the proposed system gives the user a fair bit of control over such things if they consider them to be a problem.
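The whole example can be checked mechanically. Here's a hypothetical implementation of the scoring rule as I've described it - sum c1 to the power of the path length over all simple sponsorship paths, and trust anyone whose total exceeds c2:

```python
from fractions import Fraction

# The sponsorship graph from the example above.
sponsors = {
    "me": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["dave"],
    "carol": ["elsie"],
    "dave": ["elsie"],
    "elsie": [],
}

def trust_score(graph, source, target, c1):
    """Sum c1**len(path) over all simple paths from source to target."""
    total = Fraction(0)

    def dfs(node, length, visited):
        nonlocal total
        if node == target:
            total += c1 ** length
            return
        for nxt in graph.get(node, []):
            if nxt not in visited:
                dfs(nxt, length + 1, visited | {nxt})

    dfs(source, 0, {source})
    return total

c1, c2 = Fraction(2, 3), Fraction(1, 2)
for person in ("alice", "carol", "elsie"):
    score = trust_score(sponsors, "me", person, c1)
    print(person, score, score > c2)
# alice 2/3 True
# carol 4/9 False
# elsie 16/27 True
```

One way to implement the additional "no untrusted people on trust paths" rule would be to discard any path passing through someone whose own total score falls below c2 - though, as noted, it's not obvious that rule is actually wanted.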