In management and operations, we track, measure, and rank a number of metrics and key performance indicators. Inevitably, as people begin to understand the metrics by which they are measured, ranked, and compensated, they begin to alter their activities in an attempt to improve their standing – against the metrics, against (or in alignment with) perceptions of their performance, and more often than not against their peers. It’s a competitive world, and while performance-based compensation is nothing new, in an uncertain economic climate there is heightened scrutiny of individual and peer performance. We’re more aware of, and interested in, how we are performing, especially in relation to how our peers are performing.
As a software evangelist at a fast-growing SharePoint ISV, I’m in an interesting position in that my peers at my company are not my “competition” in any sense – they are the leadership of the company. I am not part of a team of evangelists – there’s just one person doing what I do: me. Instead, I consider my peer group to be the folks out in the community, which presents an interesting metrics dilemma: our actual job descriptions may vary widely, so the metrics by which we are measured also differ widely.
One of the fuzziest metrics by which evangelism can be measured is online influence. Traditional models push toward inbound links and lead generation. The idea is to generate content and buzz, and to participate in activities that lead to increased web activity – in short, more downloads and registrations, which equal leads for our sales teams. The problem, however, is distinguishing successful evangelism from other forms of marketing. It’s a difficult task. Leads rarely come through a single source: person sees ad, clicks on link, goes to website, takes an action like downloading a whitepaper or a product demo. More commonly, a lead can be tracked to multiple downloads, views of multiple videos, participation in a webinar, and a visit to a booth at an industry event. How much weight do you give to each marketing endeavor to accurately gauge the success of each component? Traditional models would point to the last event/activity before purchase, but does that paint the right picture of influence?
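To make the weighting question concrete, here is a minimal sketch contrasting the traditional last-touch model with a simple equal-weight (linear) alternative. The touchpoint names and journey are illustrative only, not drawn from any real analytics system:

```python
# Sketch: comparing last-touch vs. linear (equal-weight) multi-touch
# attribution for a single lead's journey. Touchpoint names are
# hypothetical examples.

def last_touch(touchpoints):
    """Give 100% of the credit to the final interaction before purchase."""
    credit = {t: 0.0 for t in touchpoints}
    credit[touchpoints[-1]] = 1.0
    return credit

def linear(touchpoints):
    """Split credit equally across every interaction in the journey."""
    share = 1.0 / len(touchpoints)
    return {t: share for t in touchpoints}

# A hypothetical lead journey like the one described above
journey = ["whitepaper", "webinar", "booth_visit", "demo_download"]

print(last_touch(journey))  # all credit goes to the demo download
print(linear(journey))      # each touchpoint gets an equal 25% share
```

Under last-touch, the earlier evangelism activities (the whitepaper, the webinar) receive no credit at all, which is exactly the distortion the paragraph above describes.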
Number of downloads, number of views, and number of inbound links are all good measures, but they don’t tell the entire story. It doesn’t help that the consumption of content across different platforms (aggregators, reprints on blogs, reads on Facebook) does not always count toward the total, making many interactions with your content untraceable. This makes capturing data difficult, and can skew these measurements toward events or content that may not be as successful as perceived.
So it was with some excitement that I investigated Klout. Not that this online reputation analysis tool is “the” solution to my data problems, but what it does provide is another data point on which I can track evangelism success. And more data is almost always a good thing (with the right filters).
In an article for the May 2012 edition of Wired entitled “Popularity Counts,” author Seth Stevenson outlines some of the obvious inequalities in online reputations: “Some contacts had hordes of Facebook friends but seemed to wield little overall influence,” he explains. “Others posted rarely, but their missives were consistently rebroadcast far and wide. [Klout founder Joe Fernandez was] building an algorithm that measured who sparked the most subsequent online actions.”
Fernandez, according to Stevenson, sees Klout as a form of empowerment for the little guy, “pinpointing society’s hidden influencers.” If part of the role of evangelism is creating content and ideas that move quickly through a network of engaged followers and thereby generate buzz, then Klout seems to be the tool for measuring that activity.
Going back to my own operations and project management experience, the problem with relying too heavily on performance metrics is that people will change their behaviors to optimize those metrics, which makes oversight and proactive management of those metrics a necessity. A great example is your average call center focusing too much on mean-time-to-resolution, or the time it takes to close a ticket. Too strong a focus on this one area can drive the wrong behaviors, such as agents closing tickets before a problem is entirely resolved (from the customer’s perspective). This one trend has been a major driver of customer dissatisfaction (DSAT) within support organizations.
As with any measurement, your Klout score is just one metric, one slice of the overall evangelism performance dashboard, if you will. Stevenson made some similar observations:
“But after a while I noticed that they seemed to be stuck in an echo chamber that was swirling with comments about the few headline topics of the social media moment… Over time, I found my eyes drifting to tweets from folks with the lowest Klout scores. They talked about things nobody else was talking about. Sitcoms in Haiti. Quirky museum exhibits. Strange movie-theater lobby cards from the 1970s. The un-Kloutiest’s thoughts, jokes, and bubbles of honest emotion felt rawer, more authentic, and blissfully oblivious to the herd. Like unloved TV shows, these people had low Nielsen ratings – no brand would ever bother to advertise on their channels. And yet, these were the people I paid the most attention to. They were unique and genuine. That may not matter to marketers, and it may not win them much Klout. But it makes them a lot more interesting.”
What will be interesting over the next 3 to 5 years is how online reputation management becomes more integrated into other platforms. Salesforce is already looking at integrating Klout scores into customer profile information, and many key brands (hotels, restaurants, merchandisers) are already providing “perks” to those who can most influence their brands. How this data will be used alongside more traditional web analytics will be an important step in truly understanding the influence of evangelism.