Facebook sure is getting beaten up recently. There's even a crowd-funded initiative to replace it with something open, called Diaspora -- everyone on Facebook is talking about it.
Yet it wasn't even two full years ago that Facebook was the darling of the ditherati. For a while it seemed as if nearly everything Facebook did was hailed as the future of messaging, perhaps the future of the Internet -- or maybe the Internet didn't matter anymore, except for Facebook. Even obvious scams got VC funding, so long as they were on Facebook. But with just a few missteps -- which they appear to believe were nothing more than misunderstandings -- everything's changed.
The first tipping point, it seems, was Facebook founder Mark Zuckerberg's statement that he doesn't believe in privacy, with the obvious connotation that therefore he doesn't have to concern himself with it. But as researcher danah boyd responded, privacy isn't just about whether or not you share stuff -- it's whether or not you have any control over what you share, and who you've shared it with. And besides, it doesn't matter whether Zuckerberg believes in privacy. Facebook's users still do.
The problem Facebook has created for themselves is not that nobody wants to share information about themselves; it's quite obvious that there's a lot of sharing going on. It's that with each new feature, Facebook changes the social dynamic. Before, Facebook users felt and believed that they were sharing with their friends, and with particular networks they'd chosen; it was a closed environment, with borders that were clearly defined and understood. And if a few advertisers got to peek in, well, that was the price of admission. But now, after many changes, much of that same information is entirely public -- unless each user individually goes through a set of complex steps to opt out.
It's akin to sending email to a private mailing list, only to have it forwarded to a reporter and published. Sure, it's always possible that could happen, but we can either trust each other to abide by the social contract not to do such things, or we can't trust each other at all.
Facebook's staff may well have thought to themselves that if they asked users to opt in to having their data shared with a much broader audience, the users would decline -- and they were probably right. But by ignoring that insight and making it opt-out instead, they showed a severe lack of respect for their users. Without that respect, there can be no trust. Without that respect, your users will turn on you -- because they were never really "your" users to begin with.
Once trust is lost, what do you do? Can it ever be regained? Or will trustworthy behavior have to be forced upon them by regulators?
We at CAUCE have pondered this same question over the years in terms of companies who used to send spam, and have since learned not to. Some people will never forgive them, no matter what they do. Others won't see what the big deal is, because the spam never affected them personally. But most will fall somewhere in the middle, never quite trusting the company not to spam them again.
That middle area is the most Facebook can hope for at this point, and the way to gain it is to start viewing everything in terms of "what do users think is going on?" rather than "what do we want users to think is going on?" More than anything else, they have to ask themselves: "Are we being respectful towards our users? Are we allowing them the choice and control they believe they already have?"
It sounds like they're already thinking in this direction, or at least they want us to think they are -- but that doesn't mean users' perceptions will change overnight. Facebook won't be forgiven that easily, especially if their PR tactic is essentially "oh, you just didn't understand what a wonderful thing we're doing." They'll have to patiently explain their thinking in an honest way, keeping corporate doublespeak to a minimum -- and stay consistently respectful for a very long time.
Even then, success may not be measured by a decrease in angry blog postings. It also won't be measured by a decrease in the number of people deleting their accounts. Trust is much more nebulous than that. If anything, it'll be measured by whether anyone's willing to try new features when given the opportunity to opt in -- and that, too, could take a long time.
If they stick with it, and they're open and transparent about the change, then they could continue to be the largest and most successful proprietary social network that has ever existed...at least until some other tipping point occurs.
The first step (but certainly not the last) towards saving the Internet from spam, malware, and other abuse is to keep your own network clean.
A friend of CAUCE, who wishes to remain anonymous, offers these tips and resources to help you identify problem traffic emanating from your network, and clean it up. Though primarily written for ISPs, many of the items below should apply equally well to any network owner.
Point zero: problems that aren't identified don't get fixed. So...
First and foremost, proper identification of the ISP's IPs in both the RIR's whois (APNIC, in this case) and rDNS. Along with that, a working, actively processed abuse e-mail contact registered with the RIR, plus "abuse@domain" for the primary domain used in the generic rDNS. Correct domain whois goes hand-in-hand.
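For a quick spot check of that rDNS coverage, a few lines of Python will do; this is only a sketch, and the 192.0.2.0/28 block is a documentation range standing in for the ISP's real allocation:

    # Spot-check reverse DNS (PTR) coverage across an address block.
    # 192.0.2.0/28 is an RFC 5737 documentation range; substitute a real allocation.
    import ipaddress
    import socket

    NETWORK = ipaddress.ip_network("192.0.2.0/28")

    for ip in NETWORK.hosts():
        try:
            hostname, _, _ = socket.gethostbyaddr(str(ip))
            print(f"{ip}\t{hostname}")
        except socket.herror:
            print(f"{ip}\tNO PTR RECORD")

Any address that prints "NO PTR RECORD" -- or a generic name that doesn't identify the ISP -- is a candidate for cleanup.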
Complaint Feedback Loops and other abuse reporting mechanisms: Spamhaus and Word To The Wise both have links to get started on those, and ISPs serious about cleaning up should subscribe all their IP ranges to as many of those FBLs as they can handle. (Subscribing to all of them would catch the most spam, but the volume can get quite high, so they may wish to pick and choose whatever fits their needs best.)
That includes SpamCop, but it's worth its own mention: unlike most other FBLs, SpamCop reports spamvertised URLs as well as the spam source. Note that it offers both direct spam reporting and "Summary" reports, which provide IP-by-IP reporting for a subscribed range on an hourly or daily basis.
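Most FBLs deliver their reports in ARF, the Abuse Reporting Format defined in RFC 5965: a multipart message containing a small machine-readable report plus a copy of the offending message. Here's a rough sketch of pulling out the useful fields with Python's standard email library; the file name is made up and real reports vary, so treat it as a starting point rather than a finished parser:

    # Minimal ARF (RFC 5965) parser: pull out the feedback type, the reported
    # source IP, and the original message for further triage.
    import email

    with open("fbl-report.eml", "rb") as fh:   # file name is hypothetical
        report = email.message_from_binary_file(fh)

    feedback_type = source_ip = original_msg = None

    for part in report.walk():
        ctype = part.get_content_type()
        if ctype == "message/feedback-report":
            # This MIME part is a block of RFC 5965 fields such as
            # Feedback-Type and Source-IP; depending on the parser it may
            # come back as a nested message object or as plain text.
            payload = part.get_payload()
            fields = payload[0] if isinstance(payload, list) else email.message_from_string(payload)
            feedback_type = fields.get("Feedback-Type")
            source_ip = fields.get("Source-IP")
        elif ctype == "message/rfc822":
            # This MIME part is the original message (or its headers).
            original_msg = part.get_payload(0)

    print("Feedback-Type:", feedback_type)
    print("Source-IP:", source_ip)
    if original_msg is not None:
        print("Original Subject:", original_msg.get("Subject"))

From there it's a short step to feeding the source IPs into whatever ticketing or customer-notification process the ISP already has.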
www.abuse.net can help them direct spam reports to the right place; SpamCop seems to consult abuse.net's data, too.
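If memory serves, abuse.net also answers lookups over the plain whois protocol (the command-line equivalent is "whois -h whois.abuse.net example.com"). A tiny sketch of the same query from Python, with example.com standing in for whatever domain shows up in the spam:

    # Ask abuse.net's whois server where to send reports about a domain.
    # Equivalent to: whois -h whois.abuse.net example.com
    import socket

    def abuse_contacts(domain, server="whois.abuse.net"):
        with socket.create_connection((server, 43), timeout=10) as sock:
            sock.sendall(domain.encode("ascii") + b"\r\n")
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    print(abuse_contacts("example.com"))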
The CBL offers rsync access to its data, within the terms of use posted on its website. An ISP with that data can run grepcidr across its own IP ranges to identify currently active spam-bot IPs.
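For anyone without grepcidr handy, the same filtering is easy to approximate; this sketch assumes a plain list of one IP per line, and the CIDR ranges shown are documentation prefixes rather than anyone's real announcement:

    # grepcidr-style filtering: print only the listed IPs that fall within
    # our own ranges. File name and CIDR ranges are examples only.
    import ipaddress

    OUR_RANGES = [ipaddress.ip_network(cidr) for cidr in ("192.0.2.0/24", "198.51.100.0/24")]

    with open("cbl-listed-ips.txt") as listed:   # assumed format: one IP per line
        for line in listed:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            ip = ipaddress.ip_address(line.split()[0])
            if any(ip in net for net in OUR_RANGES):
                print(ip)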
Spamhaus PBL provides participating ISPs with the CBL's listings of bots within that ISP's own IP ranges, so that's another easy way for ISPs to get the same data.
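Individual addresses can also be checked against the public DNSBL zones directly: the convention is to reverse the IPv4 octets, append the list's zone, and look for an A record, where any answer means "listed." A one-off check might look like the sketch below; bulk checks should use the rsync or PBL data above and respect each list's usage policy.

    # One-off DNSBL lookup: reverse the octets, append the zone, query for A.
    import socket

    def dnsbl_listed(ip, zone="cbl.abuseat.org"):
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)   # any 127.0.0.x answer means "listed"
            return True
        except socket.gaierror:
            return False                  # NXDOMAIN: not listed

    print(dnsbl_listed("192.0.2.1"))      # documentation address; expect False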
Botnet C&C and malware-related IPs identified by the FIRE group can be looked up by ASN at http://maliciousnetworks.org/.
The Senderbase.org, Trustedsource.org, and Senderscore.org websites all offer searchable reputation information that can help an ISP corroborate the reports it receives against a wider sample of traffic -- very useful.
I'm sure there are more such resources -- I'd be interested to hear about them, and I hope others will chime in -- but for an ISP that's already overrun with spam issues, these resources should at least provide grist to start grinding away at the problems. I suspect the more difficult challenge will be getting the ISP to actually back the effort.
Any ideas? Post them in the comments, and maybe our anonymous friend will join in too.
CAUCE North America is financially supported by our organizational and individual members.