Opinion | Twitter’s Handling of ‘Fake’ Accounts and What That Implies About the Company’s Risk Management Strategy

By: Joanna Makomaski | December 14, 2018

Joanna Makomaski is a specialist in innovative enterprise risk management methods and implementation techniques. She can be reached at [email protected].

Just imagine: on an unsuspecting day, you walk into the office through the parking garage. A poster hangs on every pillar and post you pass. It contains an enlarged picture of you, your phone number and horrible untruths about you. Colleagues walk by, realize it’s you on the poster and give you looks of disdain and disapproval. HR is waiting at the door. They want to speak with you. How would you feel?

Sadly, a dear friend of mine just recently experienced something similar. Their experience inspired this column.

My friend did not see posters; they saw vile posts online. Their HR department made them aware of abhorrent and offensive tweets being posted from a Twitter account bearing my friend’s photo, name and personal information. The posts had been made for months, with the express intention of causing harm and disparaging my friend.

But there was a problem from the outset. My friend does not use Twitter. Never has. They do not even know how to use Twitter.

Someone was impersonating them and had been getting away with it for months. My friend reported the situation to Twitter, and after an investigation, the fake account was suspended. Even with that exonerating action by Twitter, my friend felt the damage was done. Their reputation was tarnished. Tiny seeds of doubt about my friend were unfairly planted.

When we set up a Twitter account, we enter into a contract with Twitter. Twitter users must follow rules: “In order to protect the experience and safety of people who use Twitter, there are some limitations on the type of content and behavior that we allow,” so says the Twitter User Agreement.

Twitter rules do not allow you to engage in hateful conduct, share people’s private information without permission, impersonate someone or create fake accounts. That is reassuring to read; nonetheless, my friend’s impersonator did all of this for months undetected.

Twitter admits they do not monitor users: “We do not actively monitor users’ content, and we do not edit or remove user content except in response to a Terms of Service violation or valid legal process.”

It appears that Twitter is not really watching. Have they abdicated their duty of oversight to others? Or are they worried that First Amendment rights would be threatened if they did act?

Twitter also tries to protect themselves by including a limitation of liability clause. Paraphrasing, it says that, come what may, Twitter will only ever be liable for damages of up to $100.

I repeat: One. Hundred. Dollars.

So, what should be the duty of care owed to users? How should non-users be protected? Should Twitter take ownership of authenticating their users? Verifying user legitimacy?

It appears they can with the Blue Verification Badges program. The badge lets users know that an account of public interest has been authenticated by Twitter.

But wait, according to a note on Twitter’s website: “Please note that our verified account program is currently on hold. We are not accepting any new requests at this time.”

What happened?

With Twitter in the spotlight daily as the host service to known-fraudulent robot users and maestros of political mayhem, I am disappointed to see this note. Twitter seems to be veering in the wrong direction: away from upgrading their oversight and risk management and straight into the crosshairs of certain liability.