Days after Robin Williams' death, his daughter received gruesome messages from anonymous users, including photoshopped pictures showing bruises around her father's neck and tweets accusing her of his suicide.

Two users, @PimpStory and @MrGoosebuster (both since suspended), tweeted manipulated images of Williams in a morgue, accompanied by messages like "look at what he ... did to himself because of you." One tweet, calling 25-year-old Zelda Williams a "heartless b****," was retweeted 25 times.

The barrage of bullying drove her offline early Wednesday. She tweeted her exit and released a statement on Instagram.

On Thursday, Twitter Vice President of Trust and Safety Del Harvey released a statement on the matter, saying the company will be updating its security policies:

We will not tolerate abuse of this nature on Twitter. We have suspended a number of accounts related to this issue for violating our rules and we are in the process of evaluating how we can further improve our policies to better handle tragic situations like this one. This includes expanding our policies regarding self-harm and private information, and improving support for family members of deceased users.

It's a step in the right direction, but Twitter's policies may not be enough to stem the flow of trolls who abuse users across the board. The abuse doesn't just take the form of inappropriate or mean comments: trolls will create account after account, forcing the victim to block them over and over. It's not a minor nuisance. It's harassment.

Twitter's current system allows users to report abuse, document harassment, and eventually get bad accounts taken down. But it's a boilerplate method, says media critic and feminist activist Soraya Chemaly, and it's unfortunate that only a high-profile incident like Williams' can change Twitter.

"While I am truly sorry for what the Williams family is experiencing during this time, I am concerned that it takes an event like this to bring heightened attention to a problem that so many face every day," she told The Washington Post.

Twitter's page for online abuse links to a form for reporting abusers and provides guidelines on what to do about the abuse.

In the case of Zelda Williams, who deleted her Twitter and Instagram profiles, her followers (a combined 280,000 across Twitter, Instagram, and Tumblr) immediately flooded her feeds with messages of support.

Others do not have such influence. As The Daily Beast's Tauriq Moosa wrote in a column chastising flimsy online abuse policies:

Zelda Williams received horrific messages and has left social media indefinitely. She’s not the first, and she won’t be the last. Women give up entirely or don’t tackle topics that mean a lot to them because they are unable to, due to the volume of angry men threatening them with violence and rape. On and on it goes. There will be another woman like Williams tomorrow, only most of us won’t know.

Online communities beyond social media have also been taking steps to curb abusive activity. In particular, gaming companies have developed strategies that encourage players to monitor one another. League of Legends, which has more than 67 million active players each month, uses a "player behavior team" of psychology, cognitive science, and neuroscience experts to study how harassment works, and has devised and implemented reforms as a result.

These reforms included turning off the chat function by default and requiring players to turn it on manually. That way, players no longer had an always-on platform for venting negativity; they had to opt in to chat first, which cut down on spur-of-the-moment negative messages and increased the share of positive ones.

In essence, it's a strategy that promotes the old adage "think before you speak," and it's one Twitter and other social media outlets could pursue further. If users had to pause before tweeting, mindless online harassment might finally begin to recede.