Turning Social Media From a Problem Into a Solution
The shooting incident that took place last week at YouTube had less to do with guns than with the failure of the police to act on information in a timely way and the inability of social media to be anything but part of the problem.
Google has been giving this issue little more than lip service, but I expect it has become motivated to do more, given that YouTube — not some distant school — was the latest target. Funny how personal risk can change perspectives.
However, in both the Parkland and YouTube events, social media — Facebook in the Parkland case, and YouTube in its own — seemed to help inflame the attackers, or at least did nothing to reduce their anger or flag the threats.
Foreign governments apparently want to interfere with elections and polarize U.S. citizens, which showcases abuse of an incredible power. Why can’t that power be used in better ways — say to keep kids and Silicon Valley employees safer?
It can, and I’ll suggest how before closing with my product of the week: a new satellite box for the TiVo service.
The tech industry has a problem, as I pointed out in a recent column and as the book Technically Wrong spells out in detail.
That problem appears to be worst with social media companies, which have exhibited nearly complete disregard for their users — who aren’t their customers — and even disregard for their home countries.
At the heart of the problem is the disconnect between those who provide revenue for these “free” firms and those who use them. Mark Zuckerberg years ago became rather famous for pointing out that those who used his service were stupid, though he used a far more interesting term at the time.
Since then he has insisted that he’s changed his mind, but I watched him in an interview on TV last week, and it seemed he still thought we were all dumb f*cks, claiming that Facebook shared only what users put on the service to share. In other words, “what’s the problem again?”
I’m starting to think the name “Zuckerberg” should be the new single-word alternative to “tone deaf.”
We know that foreign operators used social media services aggressively in an effort to influence the outcome of the last presidential election and to change public opinion on a national level. In short, social networks have been used aggressively to harm U.S. citizens.
We also have learned that in both the Parkland and the YouTube shootings, the warning signs were visible on social media long before the attacks occurred.
Whether we agreed to share information or not, we certainly did not sign up to weaken our country or to facilitate mass shootings.
Social media services have demonstrated the power to change opinions, and they certainly have the data and resources to identify threats. The thing is, they need to manage these capabilities appropriately and at scale. I interviewed a company last week that has a solution.
Darwin Ecosystem + IBM Watson
Darwin Ecosystem is one of a new class of AI-centric companies. In this case, it uses the IBM Watson platform to analyze writing to determine personality types and changes in personality.
One of the interesting things it did during the last election was to analyze the candidates. It even created a dynamic graph so you could look at each key personality trait individually.
One of the interesting findings was that, over time, the personality differences between Clinton and Trump seemed to converge, while Sanders remained largely the same. It’s arguable that this convergence hurt Clinton and helped Trump (though, given all of the social media influence, the cause and effect is questionable). Another finding was that, at least with regard to certain personality traits, Trump was surprisingly close to Obama, also suggesting a connection between personality and success.
What Darwin Ecosystem focuses on, however, is analyzing writing to determine employee problems at every level in a company. For instance, a board could use it as a tool to gauge whether a CEO was taking direction and becoming the CEO it wanted, as opposed to becoming far too enamored with abusing the privileges of the office. Management could use it to make sure an employee was well utilized and not burning out or even becoming violent.
Anecdotal evidence suggests this tool also could be used to determine if a child or adult was becoming depressed and suicidal. In theory, it could be used to determine if someone was developing homicidal tendencies.
Like all AI systems, this one uses pattern analysis and deep learning to make determinations about the authors of written works. As with most new AI systems, Darwin Ecosystem’s tools can be used at scale.
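As a very rough illustration of what pattern analysis over writing involves — the real systems use trained deep-learning models on far richer signals; the word lists, features, and drift measure below are invented stand-ins for illustration only:

```python
import re
from collections import Counter

# Invented word list: a crude stand-in for a trained sentiment/threat model.
NEGATIVE_WORDS = {"hate", "angry", "hopeless", "revenge", "destroy"}

def linguistic_features(text: str) -> dict:
    """Extract a few simple signals from a piece of writing."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(len(words), 1)
    counts = Counter(words)
    return {
        "first_person_rate": (counts["i"] + counts["me"] + counts["my"]) / total,
        "negative_rate": sum(counts[w] for w in NEGATIVE_WORDS) / total,
        "exclamation_rate": text.count("!") / max(len(text), 1),
    }

def drift(earlier: dict, later: dict) -> float:
    """Crude measure of how much an author's signals have shifted over time."""
    return sum(abs(later[k] - earlier[k]) for k in earlier)
```

Comparing features from the same author’s older and newer posts is the point: it’s the change in the profile over time, not any single post, that a system like this would surface.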
This suggests that were a solution like Darwin Ecosystem’s applied to services like Facebook and YouTube, and tied into a response system modeled after what the Russians allegedly used against the U.S., you could have a strong defense against violence.
This same model could be used to identify traitors, bullies, trolls, and a whole variety of bad actors as well, and then appropriate steps could be taken to alter their behavior. Basically, what it would amount to is human programming at scale. We’re already doing it — we just don’t apply it properly to our defense.
What I’m suggesting is a solution that starts with an AI-based threat detection system to identify those displaying violent characteristics, followed by two threat mitigation programs. One would feed posts designed to point the individual to nonviolent forms of rebellion or attack, while another would notify authorities who then could respond at the appropriate level.
The system could generate and send to law enforcement a report ranking threats and attacks by their potential, so law enforcement or social services could respond appropriately.
The mitigating posts sent to the emerging attacker would provide information on the collateral damage associated with attacks, how attackers were punished or killed, and credible posts suggesting meaningful alternatives that could result in better outcomes than violent acts.
They could include pointers to suicide prevention or other services that could deal with whatever the AI, or the flagged human moderator, might decide would be the best program to move the target off the violent path.
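The detect-rank-respond flow described above could be sketched roughly as follows. The keyword scoring is a toy stand-in for a real AI model, and the thresholds, labels, and routing rules are all invented for illustration:

```python
from dataclasses import dataclass

# Invented term weights: a stand-in for a trained threat-detection model.
THREAT_TERMS = {"shoot": 3, "kill": 3, "weapon": 2, "revenge": 2, "hate": 1}

@dataclass
class Assessment:
    user: str
    score: int
    action: str  # "none", "mitigate", or "notify_authorities"

def score_post(text: str) -> int:
    """Toy threat score: sum the weights of matched terms."""
    words = set(text.lower().split())
    return sum(weight for term, weight in THREAT_TERMS.items() if term in words)

def triage(posts: dict) -> list:
    """Score each user's writing, pick a response, and rank highest risk first."""
    results = []
    for user, text in posts.items():
        s = score_post(text)
        if s >= 5:
            action = "notify_authorities"  # feeds the law-enforcement report
        elif s >= 2:
            action = "mitigate"  # feed de-escalating posts and service pointers
        else:
            action = "none"
        results.append(Assessment(user, s, action))
    return sorted(results, key=lambda a: a.score, reverse=True)
```

The ranked output corresponds to the report for law enforcement; the “mitigate” bucket corresponds to the softer program of counter-posts and service referrals, with a human moderator deciding the specifics.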
Instead of allowing technology to be used against us, we could be using it to keep us safe.
These giant social media companies have become not only a danger to the nation, but also a danger to their own employees. Their near total disregard for their users has created an environment where they really should start thinking about building defensible fortresses rather than open campuses. If they don’t change their behavior, their consistent lack of regard for users undoubtedly will result in more violence.
However, there are AI tools that could be used to neutralize blossoming attackers and/or alert authorities about related impending harm, rather than being used to manipulate the populace in general.
These services ultimately will manipulate us in some ways. However, they could be used to help keep us safe. They’ve done little to address other mass shootings, but given that their lives are now on the line as well, maybe they will step up.
By the way, for anyone thinking of a better perimeter camera system, I also ran into Umbo Computer Vision last week. Its intelligent (neural network) camera system is capable of identifying a variety of approaching threats and eventually should be useful in identifying an approaching known attacker.