A little over 10 years ago, we were toasting social media as the great innovation that would enable CRM to traverse the mythical last mile between vendors and customers. Every analyst had something positive to say about social and its future role in CRM.
I wrote a paper in 2004 forecasting that social networking (there weren't yet products, really) and analytics would be tightly interwoven into the fabric of the vendor-customer relationship. Paul Greenberg invented a cottage industry on social, and others followed.
A lot of things beyond social and analytics had to go right to achieve the social CRM vision. For example, there was a mobile revolution going on at the same time, and machine learning had to leave the lab and become embedded. Also, code generation was reaching new heights, and cloud computing had to become dominant.
Those things happened.
It was a perfect storm of innovation, and it delivered significant new capabilities that businesses and consumers could employ. For the first time, vendors had a reasonable shot at answering retail's oldest question: Which half of the marketing budget was being wasted?
Now, we're suddenly faced with a dilemma. Having integrated social into our CRM systems, can we still trust it? Is it a business enhancer or a hindrance? Some have abandoned it already.
The lifecycle of any disruption, including social, has its seasons. There's wild exuberance and euphoria at what the new whiz-bang thing can do, and it's always followed by disillusionment when users discover that some of the euphoria was misplaced.
For the last couple of years, we've been trying to rationalize the disillusionment stemming from the fact that social media worked exactly as it was supposed to when foreign spy services sought to upset a U.S. election. We cringed knowing that the vendors that seemingly could do no wrong secretly sold access to personal and behavioral data to bad actors.
But make no mistake, social is now too valuable to society as a business and social networking medium for it to vanish or even be diminished. What's needed now, as it is in the lifecycle of every disruption, is a way to harness it to be a continuing source of stability and good. That's not easy to do, though it has been accomplished with every major disruptive innovation that has diffused into society.
Over the last year, numerous voices, including mine, have called for some form of regulation. Regulation came to the telephone, electricity, natural gas, petroleum and other vital industries. Cable providers had a whiff of regulation when the Obama administration set forth rules for common carrier status. Those rules were rescinded by the current administration, but I don't think we've heard the last of it.
Last week, Mark Zuckerberg, founder and CEO of Facebook, threw a curve ball at the lifecycle and suggested a small amount of regulation for social media in a Washington Post op-ed. It was a curve ball because inventors and entrepreneurs never ask for government involvement, especially when a disruptive innovation is still in its exponential growth stage.
Nevertheless, "From what I've learned, I believe we need new regulation in four areas: harmful content, election integrity, privacy and data portability," Zuckerberg wrote.
To argue against any of that is to take issue with puppies, kittens and apple pie -- but that's not to say that Zuckerberg goes far enough. In fact, I'd suggest that he simply is trying to steer the discussion away from more difficult subjects -- for example, the business model.
Just Doing My Job
One of the reasons that social media performed so badly by simply doing its job over the last couple of years is that vendors believed they were simply selling advertising -- eyeballs served up to whichever client paid the going rate.
However, regulating harmful content, election integrity, privacy and data portability will get us no closer to a solution than we currently are, because each is a nebulous idea lacking definition and standards for enforcement.
I've said before that true regulation doesn't start with these or other undefined ideas but with people. We should be empowering people to police social media primarily by themselves. I have proposed a two- or three-tier user certification policy to prevent excesses, modeled on the certification and regulation society applies to plumbers, beauticians and other professionals.
As Easy as 1, 2, 3
The bottom tier would be much like what you see today. People could access social media as usual, but there might need to be an upper limit on the number of people one user could reach. For easy math, let's say that number is 1,000. Many people would object, so perhaps 2,000 would be better.
Note that the average human has the mental capacity to keep about 150 relationships active -- it's called the "Dunbar number," for Oxford anthropologist Robin Dunbar, who originated the concept. The original idea of social networking, which social media embodies, was to keep up with your Dunbar's worth of friends, so 1,000 connections is lavish overkill.
The second level of social media use would apply to professionals who use it in business and commerce for things like marketing campaigns or charities. At this level, it would become important to identify each user concretely. There could be no more "Mad Dog" monikers, for instance. Self ID and maybe a registration number would be needed to launch campaigns and other outreach efforts that would exceed the 1,000 or 2,000 threshold.
Second-tier users also would be required to demonstrate competency and an understanding of proper use. Think of it as a driver's license: passing a test that shows you know which side of the road to drive on -- nothing onerous, but enough to show one can keep the rubber side down.
The third and possibly optional level would consist of the ninjas who keep things going. They might not necessarily work for social media companies; they could be a small and independent group capable of creating standards.
Placing responsibility on the shoulders of practitioners is an old story, and it works well for plumbers and all the rest. It also prevents systemic bottlenecks when bureaucrats try to interpret nebulous things like harmful content, election integrity, privacy and data portability.
My Two Bits
Adopting a two- or three-tier self-regulation scheme with clear identification and responsibility-sharing would do more than any legalistic attempt to define things like "harmful content." So why is Zuckerberg arguing for his points? I suggest a lot of it is tied up in the business model.
You can chase harmful content, election integrity, privacy and data portability till the cows come home without harming the business model. You'd consistently come up short too. The business model is flawed because it applies approaches designed when social media was merely a service to a business that has since become a platform.
Social media platforms need to traverse the same path that fast food vendors did more than half a century ago. McDonald's, for example, went from selling burgers on the corner to becoming a franchiser, real estate company, and raw materials provider. (It still has some stores, but that's not its core business.)
Doing the same in social media would mean splitting many of the companies into platforms and apps companies. It would give more latitude to entrepreneurs -- including CRM vendors -- working on those platforms too.
This means one or two new business models would be needed, as well as very serious discussions with investors. That's a big lift, and I think it's the big reason Zuckerberg is happy to discuss harmful content, election integrity, privacy and data portability. However, as we know from history, regulation is coming -- and once the discussion is really under way, you can't predict the outcome.