The recent hate-driven Pittsburgh Synagogue shooting is clearly a tragedy.  In spirit, we are all members of the Tree of Life Congregation.

As we read of hate crimes all too often, it is easy to get frustrated as we wonder, “What can we do?” This is exactly why I co-wrote Countering Hate with Haroon K. Ullah and have remained on the path of asking and answering this question over time.

Here is one simple and powerful preventive measure that we can enact as quickly as we like and one important debate that we still need to have as a nation.

The Bad-Actor API (application programming interface)

(first published with Dr. Victoria Romero)

In the aftermath of 9/11, investigators, federal agencies, and Congress realized that the information we would have needed to detect and thwart the plot had been available.  What was missing was a systemic ability to connect the dots, see patterns and view the big picture.

Since 9/11, the government has made great strides in sharing data among law enforcement, defense, and intelligence organizations, but meanwhile, the world has moved on. The new battlefield is cyberspace. And much of the most critical data in cyberspace is controlled not by government, but by internet and social media companies.

The efforts that such companies, particularly Facebook and Twitter, are undertaking to battle orchestrated efforts to spread disinformation, hate, and extremism are admirable, but their approaches have a fatal flaw.

The platforms are each working in isolation, seeking out bad actors based on activity on their own platform, then removing them and the content they created. It is laudable that they want to halt the spread of these actors’ messages, but their approach is leading us down the same path that resulted in 9/11.

Sophisticated bad actors’ strategies are cross-platform. You may not even be able to identify a bad actor if you are looking only at their posts on Facebook. It is not possible for any one platform to identify sophisticated adversaries by examining only data from their own platform. Critical patterns emerge only when data from a wide range of sources are combined. Limiting the search to only one (or even a few) sources is like trying to examine an elephant through a soda straw.

The 9/11 Commission Report emphasized that the critical tool to implement was a better system of information sharing. Government entities clearly heard and implemented this message. But 17 years later, we are at another inflection point of equal importance that requires partnership and cooperation between the public and private sectors.

In the recent hearings on social media in the House and the Senate, the focus was mainly on the past election and identifying fake content. What was missing was a proposal or any specific idea that could improve how we see patterns, gain insights, and protect our citizens — the kind of idea that would allow us to make the next big leap.

We have an idea that is very simple, powerful, and easy to implement. It doesn’t require social media companies to do anything extraordinary. It does require an attitude of cooperation, a willingness on all sides to tone down the rhetoric and a desire to build positive partnerships.

The idea is to ask each social media channel that attracts bad actors to build and make available to certain partners a “bad actor API,” or application programming interface. Currently, when social media providers identify a bad actor’s account, they delete it and all the data with it. This makes it impossible for others to study these accounts’ behavior and learn from it. A bad actor API would allow third parties to access extensive public data about these wrongdoers for research purposes, and ultimately prevention.

This is not a new concept: APIs are already routinely used by social media channels to share user information with third parties. They help advertisers build plans and help an array of partners understand what customers or potential customers may be doing. It’s a widely accepted way to learn together.

When we want to promote or sell something, we fully embrace the use of APIs and the data that come with them. For some reason, however, we don’t do this for bad actors. Instead, we applaud social media platforms for merely deleting accounts and information, which is then never seen again.

This information should be retained, and the companies should make the API available to third parties whose mission would be to combine these data with other data sources to identify patterns.

Data scientists would be able to spot those patterns more quickly, and the patterns should help us understand behavioral signatures, potential plans of action, and other significant information.
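To make the idea concrete, here is a minimal sketch of how a third party might combine bad-actor feeds from two platforms to surface a pattern neither platform could see alone. Everything in it is an assumption for illustration — the field names, the platform names, and the "indicator" concept (say, a hashed email or device fingerprint) are hypothetical, since no such API exists today.

```python
# Hypothetical sketch: combining "bad actor API" feeds from multiple
# platforms to find indicators shared across platforms. All record
# fields and platform names are illustrative assumptions.
from collections import defaultdict

def cross_platform_matches(feeds):
    """Group removed-account records that share an indicator
    (e.g. a hashed email or device fingerprint) across platforms."""
    by_indicator = defaultdict(list)
    for platform, records in feeds.items():
        for rec in records:
            for indicator in rec["indicators"]:
                by_indicator[indicator].append((platform, rec["account_id"]))
    # Keep only indicators seen on more than one platform -- the
    # patterns no single platform could have detected on its own.
    return {
        ind: accounts
        for ind, accounts in by_indicator.items()
        if len({platform for platform, _ in accounts}) > 1
    }

# Illustrative stand-in data for two platforms' feeds.
feeds = {
    "platform_a": [{"account_id": "a-17",
                    "indicators": ["email#9f2", "device#c41"]}],
    "platform_b": [{"account_id": "b-03",
                    "indicators": ["email#9f2"]}],
}
matches = cross_platform_matches(feeds)
print(matches)  # the shared email indicator links a-17 and b-03
```

In this toy example, the shared indicator links an account on one platform to an account on another — exactly the kind of dot-connecting the 9/11 Commission called for, applied to cyberspace.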

If the public and private sector are to accomplish this goal, both will need to place more attention on the power of doing something right together.

Deleting accounts, today’s primary tool, is not the answer. If fake content reaches us for a few days and then is stopped, does that negate its impact? The answer is no. People have already been disinformed. The damage cannot be undone.

We don’t buy more Kleenex to treat the flu. We do research and develop vaccines. Society needs to build systems that enable us to act more like an R&D team. We can make much more progress in battling hate if we work as one team.

We need one leader in government and one in the private sector to agree on this point and get us started. Who will it be?

Then, let’s pull up the 9/11 Commission Report and read the section that discusses “a different way of organizing government to unify the many participants in the counterterrorism effort and their knowledge in a network-based information sharing system that transcends traditional government boundaries.”

A Conversation We Need to Have

It is a conversation about privacy and how we identify and track those who are escalating on the continuum from bias to hate to extremism and, in some cases, violent actions.

In the U.S. today, law enforcement can track social media behavior, but it cannot act on that behavior alone unless an explicit threat is made. We have taken the view that we cannot do anything to stop someone in advance of a harmful action simply because of something they said.

The problem with this approach is that extremists do not announce their next actions in advance so that we can catch them. It has never worked that way and never will. We must be able to take some form of action when we know someone is either about to break or has crossed into the world of extremism.

In general, I understand this right to privacy. But when a person is escalating over time in their use of hate speech, and showing behavioral changes characteristic of those who may take action, could there not be a level of “alarm” at which police, mental health professionals, or others may get involved?

It is rare for an act of hate-filled extremism to occur without any prior evidence. It can happen, but it is unusual. Normally, we can see a journey, much as we map a customer journey, in which a person progresses from bias to hate to extremist views.

Since privacy is very important and is viewed as a personal right or privilege, depending on how you look at it, it deserves a larger conversation on what we are willing to accept and what we are not.

As we have these conversations, let’s not let the fake-news discussion or partisan politics divide us. The disinformation wars will continue, but we can all keep our heads on straight and debate and decide on key societal issues that can decrease hate over time.

We are experiencing a new type of risk that requires new thinking.

We owe it to ourselves, we owe it to the memories of our friends and we owe it to the Tree of Life Congregation.

You can visit us at for more information