Wednesday, September 30. 2009
This is funny - a report comes out today noting that about two-thirds of US people don't like being tracked online for advertising purposes, as reported in the NYT (it's due for full release later):
Now, this should be no surprise, surely. As Kara Swisher puts it, this study:
"should surprise only the people behind the Beacon debacle shows that a majority of Americans of all ages don’t like being tracked online by advertisers."
So who do you think is pooh-poohing this - why, Google of course. Matt Cutts takes the lead apologist role - but plays the ad hominem card (which is always the sign of a weak case in my view):
No surprises there either - as Mandy Rice-Davies said, "well he would say that, wouldn't he".
And the beef behind his beef? None, as far as I could see - no counter evidence, no attack of the factual base. At what point does dissembling stop being good and start being evil?
But why the instant response with such low grade ammunition - well, timing, as they say, is everything - and this is becoming a hot potato:
I thus expect to see an equal but opposite conclusion report out soon, funded by groups friendly to Google et al, arguing that this is all wrong and that self regulation is the best option in this best of worlds. But, as was pointed out last year (as the Crunch hit) on the FT's Maverecon blog:
And the lessons of the last 12 months have shown how little ability the bankers possessed to self regulate.
BBC trying to get social on Facebook - Torygraph?
Speaking at the Social TV Forum 2009, Anthony Rose, the BBC’s controller, vision and online media group, confirmed the corporation was working on developing a Facebook application but “the proposition was not quite there yet”.
According to "a source close to the BBC":
Twitterers have already adapted that network for real time chat around BBC programs, more could certainly be done with a little API from their friends to increase our watching pleasure. There was a fascinating little point made about commercial opportunities too:
However, according to the source, the BBC has been hesitant about moving quickly on this development due to fear of seeming too aggressively commercial in the digital space. For, outside of the UK, a Facebook application would potentially create a valuable new revenue stream.
Planning for the day the Licence Fee goes away? Interesting to see the BBC out front here - one does wonder what Channel 4 and ITV are doing strategically in all this media maelstrom?
I was one of the panellists at Chinwag's "Search is Dead, Long Live New Search" last night, where - ostensibly - we were talking about the rise of real time search and its impact on the older incumbents, ie Google. However, given that many on the panel (self excluded) and in the audience were from the SEO (Search Engine Optimisation) industry, it turned more into a "Future of SEO" talk at times, which I found fascinating in itself.
As I was on the panel (other members of panel can be found in the Chinwag link above), I didn't have time to take copious notes on this discussion but overall the narrative gist - as best I got it - is:
- it's been great over the last few years having just one incumbent - Google - to optimise against
Not a great prognosis, all in all, it would seem. Anyway, I was there to talk about Search itself, so this was the gist of my stuff in answer to the various points raised (I know roughly what I said as I wrote it down on the proverbial back of a paper napkin):
1. Big Changes in Search in last 5 years
The 5 year look back was the arms race between Google and SEO companies as Google Gnomes battled to keep the Google algorithms one step ahead of the optimisers. Sob stories of online Mom & Pop stores kicked into penury by Google algorithm adjustments were staple fare. The 3 year horizon saw the emergence of new types of media - blogs, video etc - as valid searchable items, which blindsided old school SEO for quite a while (and also Google's algorithmists). We saw the rise of Technorati and early "social network" based search. The 1 year horizon is the rise of Real Time search, driven mainly by the opportunity provided by the first real time ecosystem, Twitter. (There were real time search engines before; it's just that they were searching specific niches or corporate data, as no real time consumer system existed apart from the blogosphere.)
2. What do we think is going on now and in the near future
Real Time search is still small, but fast growing - it punches above its weight as it is "on the zeitgeist" - you can see this in real time as Facebook, Google et al have been forced to respond with major changes in their own architectures to something that is so small. However, real time is not a replacement for more backward looking search; over time they will blend. The 1 year horizon is the continuing rise of real time and massive innovation in the space. The 3 year horizon is integration, as well as Google coming under increasing margin pressure from these new searchers and new "old style" searchers such as Bing. In 3 years time they will be losing market share and margin (but will be the heavyweight for at least 5 years, probably more like 10). In my view Google is acting increasingly like an incumbent (The Borg, but fluffier, as another panelist put it) and increasingly outsourcing (and buying) innovation, but a number of the other panellists disagreed on this.
But, the key in search is no longer "search" per se - that is to an extent commoditising. The key is filtering to get relevant results, and - in my view anyway - this is where the real battleground for search will be going forward, as relevance is what drives customer usage (which is why real time punches so far above its weight)
3. And 5 years out into the Future?
The 5 year horizon is the ability to search in the "Deep Web" (stuff not yet visible to search engines) as companies make more data accessible, and Filtering to maximise relevance and minimise cr*p.
I also noted my co-panelists responses to this one:
On Big Bandwidth, there was quite an interesting discussion on Video search; someone mentioned that YouTube is now the second biggest search engine by search volume after Google proper. We also had a useful exchange at the time (and afterwards over drinks) about how search and discovery would work for Web TV (in a world of millions of "channels", what does the "Online EPG" look like?). In addition there was an aside into the interesting trend for TV to blend with real time ecosystems, eg people commenting via Twitter on programs they were watching at the time (eg #bbcqt - BBC Question Time - see our notes at the time), and whether TV could align itself with real time systems to counter the Web TV threat.
As always, a fascinating evening, and conversations afterwards at the bar were also very good value - measuring the sociality of websites anyone? - and I look forward to seeing others' writeups which I'll link to as I see them.
If you watch Mad Men, the TV drama about a 1960's Advertising company, there is a subplot about the rise of a TV advertising group within a larger more traditional media based Ad company. The shift from Print to TV advertising in the 1960's was the last great upheaval in Adland, and most of the great Ad agencies of today were the ones who bet right then. Many great names at the time fell by the wayside. However, Adland is undergoing another upheaval now, from TV to the Web. And if one was going to call a day when the writing on the wall became clear, today may as well be it. From the FT:
The internet has overtaken television to become the UK’s largest advertising medium, according to a report by PwC for the Internet Advertising Bureau.
Although it's probable that TV advertising will rise again for a while (the 17% drop is recession based), it's clear that soon Web advertising will be the dominant form of Ad spend - which is only rational, as it's increasingly where people - especially the sort of people advertisers want to reach, i.e. those with money - are. The reason for its success is feedback loops and that most boring of things, ROI:
Online advertisers are able to discern more precisely than from other media how many people have seen a commercial and whether they have made an immediate purchase. This “accountability” had helped online increase its market share.
In other words managers are still extremely keen to see which half of their advertising budget works, and drop the excess. The lesson for this generation of Mad Men was clear some years ago, but this is pretty much the sign that the game is up and the play from now on is to make the best effort one can not to be left behind in the TV to Web transition, as many great old newsprint based agencies (who are thus no longer with us) did in the 1960's.
It's also very strong proof (if more was needed) for the TV industry that the Web will force the same structural changes on them that it has forced on newspapers and radio - there, but for the bandwidth, they go. Follow the money, as they say.
Tuesday, September 29. 2009
A few days ago we were looking at Augmented Reality (AR) and thinking about the potential of the emerging early plays, and how most of the ideas were old* (how many times has the "find a great restaurant in the neighborhood that you can visit" use case been used to flog dodgy mobile apps in the last 10 years?), but the combination of smart phone and big, cheap(er) bandwidth has now changed the dynamic. However, one curious thing struck us - there was virtually nothing in the literature about Augmented Reality for Enterprises (most results under that term in Google are Star Trek based, and the most prominent article found under "Augmented Reality for Business" was written in 2001, for example).
Anyway, what followed was a bit of mental freewheeling which I thought I'd put down here, to get others' creative thoughts flowing. So, in some semblance of an order:
Why AR Now?
Combination of the penetration of increasingly smart phones with various types of sensor (GPS, Camera, motion detection etc) plus good coverage of high bandwidth at increasingly affordable costs drives a "tipping point". Potentially.
Why AR for Enterprises?
Enterprise AR already exists in early forms - the Helpdesk person who has access to a lot of supplementary customer data is using an early form of AR. Some field service apps have online access to extra data. Also, RFID sensors and suchlike have been "augmenting" the ERP and CRM systems' data for some time.
Consumer AR seems to be purely mobile based - is this how Enterprise will go?
No, and it's probably not even how consumer service AR will go, for 2 reasons:
Where will early Enterprise AR applications be used?
Early applications of similar types of technology have in the past been deployed first among staff away from the office - salesmen, drivers/deliverymen and field service staff. Next up has typically been staff requiring data but not at a desk - shop floor, warehouse, front of house etc. One could also imagine "reverse AR" devices which are more dedicated to picking up AR data in other companies' sites when one is visiting them.
What about RFID?
RFID will be a key component of Enterprise AR, as a lot of the AR will probably be about taking and interpreting data from RFID systems (and other simpler forms like Bar Codes). This could be one of the main differences between Enterprise and Consumer AR, in that consumers by and large won't deploy RFID infrastructure (though we can imagine businesses that will, specifically to feed consumer AR data applications).
What's the Business Case?
As with all information handling tools, a combination of faster/better information has a number of knock on impacts, typically around:
Which Industries will deploy early?
Probably those with large numbers of staff outside the office, and where the responsibility / value of those people "getting things right" is high. Also, potentially very large enterprises where it is impossible to meet everybody in the normal course of events.
What sorts of systems will be deployed?
Probably a combination of systems with new and useful metadata in them, plus ones with interfaces to older company systems that extract and re-present the existing data in more usable ways. These new systems will have to be partly online and near real-time to be most useful. One of the issues will be security as multiple formats and data sources will need to be accessed (unless the whole service is a walled garden) so we would expect layered apps with various levels of access
How does this square with Enterprise 2.0?
To us, Enterprise 2.0 thinking at the moment seems to be taken up too much with the social network aspect of Web 2.0, and the other vectors have been largely ignored. This is more part of "Olde 2.0" - about getting useful data collated above a single device rapidly to the point of use. At the moment, most "Enterprise 2.0" implementation is of blogs, wikis and social media front ends - these will be datastreams into the AR, but will need judicious filtering. (In fact we predict that filtering will be a major growth technology in enterprise and consumer AR)
*Most of the AR apps emerging today existed in some form 10 years ago at MIT and other R&D labs around the world. What has changed is the application of Moore's Law over 10 years and rational data pricing behaviour by mobile companies
Monday, September 28. 2009
One of the most interesting "non market" markets in Social Media right now, and one that is a real poke in the eye for online Social Capital/Whuffie/etc fans is the Twitter Suggested User List, or SUL. Being on the list is the difference between having a few thousand followers (or maybe a few tens of thousands if you were very well known in social media circles) and having many hundreds of thousands of followers.
This is because the vast majority of new Twitterers these days are joining the service without the network of friends that earlier adopters had, and are instead being pointed to the SUL for suggested people to follow. Ergo, being on the list means you get hordes of followers.
So what drives inclusion on the list? From observation, those who make the list are fairly eclectic, and one suspects the only real common factor is that they are Friends of The Founders (or at least have some arm to twist). Robert Scoble's post here makes an interesting list of people who are not on and could be - though there are too many Friends of Robert there in my view - but his point is made that the SUL is in no way meritocratic, and I'm sure anyone could make a similar list. I recall being amazed that the Guardian's Tech blog was on the SUL, and the BBC's was not.
Jason Calacanis was not on the list last year, and his public plea to get on was to offer $250,000, as that was what he valued it at. He's still not on it, and languishes at the 75,000 follower mark, whereas say Mashable's Pete Cashmore has 1.54m. I would argue that pre-SUL they were similar sized lights in the New Media firmament - or at least certainly not at a 20:1 difference.
Now, students of the InterWebz will know this as Yet Another Power Law (YAPL), where those wot have get more and those wot haven't get shafted - it's a long tale of woe. But the Twitter SUL is also that most dangerous of antisocial tools, a non-market market. It has enormous influence in the Twittersphere (and increasingly outside it, as PR agencies etc start to look at one's number of Twitter followers as a sign of influence outside the social network) but is capriciously generated (like the favours granted by the kings of old) rather than meritocratic, and gives great power to the holders of the keys.
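For the unfamiliar, the YAPL dynamic is easy to demonstrate with a toy simulation (entirely illustrative - a simple preferential-attachment rule of my own choosing, not Twitter's actual mechanics): each newcomer follows an existing account with probability proportional to its current follower count, and a handful of accounts ends up hoovering most of the follows.

```python
import random

random.seed(42)

# Toy preferential-attachment model: each new user follows one
# existing account, chosen with probability proportional to that
# account's current follower count. Them wot have, get more.
followers = [1] * 10          # ten seed accounts, one follower each
for _ in range(100_000):      # a hundred thousand new arrivals
    pick = random.choices(range(len(followers)), weights=followers)[0]
    followers[pick] += 1

followers.sort(reverse=True)
top_share = followers[0] / sum(followers)
print(f"Biggest account holds {top_share:.0%} of all follows")
```

Run it a few times with different seeds and the winner changes, but there is always a winner - which is rather the point.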
It also completely f*cks up any attempt at getting equitable Social Capital (aka Whuffie) systems going in Social Media if one can have these Deus ex Machina ratchets going. In fact, it's even less fair than the "buy your way to success" model used by celebrities, as by and large with them you know what is going on.
It's also a strong sign that Social Networks can be organisationally more Feudal than Friendship based, more like Kingdoms than Democracies. All those things we fought for over 500 years in real life social networks are being overturned in electronic ones, and given we are holding these up as potential "Government 2.0" organisms......
OK, OK, perhaps that's getting a bit too melodramatic, but you get my drift*....and every Social Media sleazeball has worked out the power of this, especially as most of the SocMed Whuffie fluffballs haven't yet clocked the risks, so the arbitrage is huge.
So, how to solve? Scoble's view is that one should:
Get rid of the list altogether. Turn off follower counts for everyone and come up with a new “engagement score” that is more focused on how you use Twitter and how people engage with you. That’s more important anyway than how many followers you have, especially since so many followers are lurkers at best or bots and spammers at worst.
As we explained above, though, it is unlikely that the SUL owners will get rid of it - it's far too valuable a tool (and may well one day be revenue generating). Other options to neutralise it are:
- Onramp systems (eg Tweetdeck) generate their own (and they should, given the power it drives)
I like the last one most - let's call it the Twitter Liberation Front! Next up we need a Manifesto! Something like:
"We, the Tweeple, will not be guided in our followership by the narrow interest of a twadre of self appointed twapitalists, and demand that:"
Carry on in 140 character points of order......we shall call it the Communitiest Twanifesto.
Join Today, the revolution starts in your laptop! You have nothing to lose but your Whuffie!
Update - on another tack, Howard Lindzon makes the point that:
If this Twitter Suggested List is the best Twitter has to offer after 3 years, they should be ashamed. The new web is about discovery and filtering, but forcing people to do it because ‘it’s easy’ or others will do it, is a ridiculously lazy and negligent. If I need Twitter’s homepage to discover JetBlue and Dell, Tony Robbins and Ashton Kutcher…continue to count me as unloyal to the brand. If I must use Twitter’s home page to make these discoveries still in 60 days, the investors should have a clause to get their money back.
So, the Twitter Liberation Front could be doing good by doing well
(*A serious aside - the point about Digital Feudalism vs Real Life Democracy is treated in jest here, but is quite serious given the impact and reach of these systems and the uses to which they are being put)
Friday, September 25. 2009
Telecom.TV's Leila Makki and The Really Mobile Project's Vikki Chowney teamed up to do some interviews in this video shot at the TEDxTuttle event (we did quite a bit of the behind the scenes stuff).
This morning I read about Twitter taking $100m and being thus valued at $1bn - a classic case of a small investment implying a valuation that would not occur if a full buyout or IPO was made (actually, this one is not too bad compared to Facebook) - so I Twittered:
If I buy 0.000000000001% of Broadsight for 1p our valuation will be £1trillion #itssillytovaluebizonsmallstakes
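The arithmetic behind the hashtag is worth spelling out, since the same one-line sum generates every headline valuation of this kind (a sketch; the ~10% Twitter stake is my rounding of the reported $100m-for-$1bn deal):

```python
def implied_valuation(amount_paid, stake_fraction):
    """Valuation implied by paying `amount_paid` for `stake_fraction`
    (expressed as a fraction, so 10% = 0.10) of a company."""
    return amount_paid / stake_fraction

# Twitter: $100m for roughly a tenth of the company implies ~$1bn
print(implied_valuation(100e6, 0.10))

# The tweet: 1p (£0.01) for 0.000000000001% (1e-14 as a fraction)
# of Broadsight implies a £1 trillion valuation
print(implied_valuation(0.01, 1e-14))
```

The smaller the stake, the sillier the multiple - which is the whole joke.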
Turns out that Jason Fried from 37signals had much the same idea, but he followed it up with a masterful post that takes a pop at the whole "valuation of non revenue producing, give it away for free" business:
In order to increase the value of the company, 37signals has decided to stop generating revenues. “When it comes to valuation, making money is a real obstacle. Our profitability has been a real drag on our valuation,” said Mr. Fried. “Once you have profits, it’s impossible to just make stuff up. That’s why we’re switching to a ‘freeconomics’ model. We’ll give away everything for free and let the market speculate about how much money we could make if we wanted to make money. That way, the sky’s the limit!”
He notes that:
37signals is now a $100 billion dollar company, according to a group of investors who have agreed to purchase 0.000000001% of the company in exchange for $1.00
It's a brilliant satire (who said Americans don't understand Irony?*), especially as it's so true. I urge you to go and read it right now.
The naivete of the tech blogging and commentating classes was also made clear yesterday with the news of Dopplr's sale - the coverage on many of the blogs (see here) was shockingly naive (you want to cull your RSS reader inflow - use this as a way to judge), as were many of the commentators. If this is the best that Citizen Tech Journalism can do when something apart from pimping is required, it's pretty p*ss poor. Kudos to TechCrunch Europe's Mike Butcher for actually doing something more incisive.
*Re Americans not understanding Irony as a stereotype - this post on TechCrunch and many of the comments make it clear why it exists
Thursday, September 24. 2009
A very interesting talk by Clay Shirky is transcribed over at Nieman Journalism Lab; it's all about the Future of Accountability Journalism. In essence, it argues that we have had decent investigative journalism for most of the 20th century accidentally, in that advertisers paid over the top owing to limited print inventory, which drove a cash surplus, and that allowed newspapermen to do what they really wanted, which was print fascinating stories from their Man in Havana or whatever. Advertisers who demurred were left in no doubt about the paucity of other options, and this also allowed newspapers to stay free of commercial pressures, and thus they were able to bite the hands that fed them.
This, of course, is no longer the case. The Newspapers have been dis-aggregated and the stuff people will pay for has been siphoned out (classifieds, fluff content etc), leaving newspapers with the expensive stuff (investigative journalists) but no-one willing to pay for it. And as Shirky notes, despite the New Media Utopianists' plaudits, this is Not A Good Thing:
....I also want to distance myself from the utopians in my tribe, the web tribe, and even to some degree the optimists.
He cites historical precedent, ie the hundred years following the printing press as a time of confusion, and thus hypothesizes there won't be a quick solution now either:
His solution seems to be to recognise that nearly all the investigative journalism eggs are in the News Industry basket, and thats a systemic failure - eggs need to be distributed across multiple, smaller baskets in a Darwinian manner, and a thousand flowers should be left to bloom:
Earlier on he makes an interesting point, ie the Internet has driven a shift away from pure commercial models, to other types of funding:
....there’s three methods for creating public goods. You go to the market, right? Not public goods, but rather things that are accessible to the public. You can go to the market, and things in the market are created when revenues can reliably exceed expenses. And then you expect some company to set itself up and provision.
Well, I can see how all this produces much wringing of hands and furrowing of brows in the USA, but in the UK I think the public good model already exists in a usable form - it's called the BBC. Shirky feels that the Investigative Eggs should not be placed in one basket, however - but as the BBC is publicly funded and thus eminently API-able, it could drive a whole ecosystem of social production - and it need not be Freeconomics, for, as Shirky notes elsewhere, it's probably a lot cheaper to pay the social sphere than full time commercial journalists.
But, and this is an observation rather than a question - why, if the horoscope reading, classified circling, coupon clipping classes were actually just subsidising the 10% who wanted to read about weighty issues, was it not possible to dis-aggregate them in the Print world? Why not a pure classified paper, a pure crossword paper, a pure coupon paper etc? And this is where it gets interesting - they did exist, but never achieved massive penetration like newspapers. There are only so many horoscopes you can read in a day. To be sure, many low-brow papers tried to run without any expensive investigative journalism (the British red tops like the Sun for example by and large replaced Our Man in Havana with Our Girl in Havering, with her top off) but - and this is I think the critical issue - they tend not to attract the sort of people many advertisers want to reach - wealthier ones. This, in my view, is why newspaper barons over the ages - many very commercially astute - have carried on doing well by doing good.
Thus there was a healthy market for papers that served well read - and wealthier - people. To my mind, Shirky may thus have missed a trick. Those people haven't gone away - look at the FT and Economist today! What may be occurring is that the disconnect between new media and old media models has driven a value gap temporarily in advertising revenues.
Why do I say this? Take me for example - I am the same bod whether I spend an hour reading the Times online or off. It's the same hour of my attention; divide my income, past and future, by hour and you'll find it's the same share of my net present value whether I read online or off. And yet it is far cheaper to serve the same Ad to me online. Doesn't make sense. Worse than that, I don't "do" horoscopes, hardly do a crossword, can't be arsed with coupons* and never watch ads when I'm hunting a classified bargain. You wanna reach me - talk to me when I'm reading interesting, thought expanding stuff. That is why I think that, whereas Clay is right in that there is a 70 year Schumpeterian shift, a 100 year resettling of the ways etc etc, there is also a 5 year arbitrage in the valuation of my attention, and that will be closed. And in closing that, News value will re-emerge.
But just in case, let's start talking to the BBC about open news APIs................
* unless the discount is really good, like 50%
Tuesday, September 22. 2009
You suspected this all along, didn't you? You looked at those extreme power laws in social network usage (0.9% creators, 9% commentators, 90% consumers), the shenanigans at Digg and Wikipedia, and the massive disparity in Twitter followers between oiks and slebs - and the attention paid by the PR Ho's to the latter in this most non-hierarchical of all mediums - and began to suspect that not all in The Crowd were equal. And, after the Long Tail was debunked last year you suspected the Wisdom of Crowds was probably also not quite what it once was (see footnote below on what it really is). Well, now research from Carnegie Mellon proves you were right - from RWW:
Yup, turns out that 1% are running (ruining?) all the crowdsourcing schemes too:
It's not surprising then to discover that, when it comes to review sites, it's again small groups that are in control there too. Some sites, including Amazon, attempt to address this discrepancy by allowing users to vote on the helpfulness of reviews - a much easier process than having to write a review yourself. Also, local business finder and recommendations site Yelp implemented ways for business owners to respond to what they feel are inaccurate reviews by way of an owner comments feature. Unfortunately, despite these efforts, the small groups still remain in control of these so-called "popular opinion" features.
And never mind the flacking! As you can imagine, there are schemes abounding about how to solve this - reviewer activity transparency, vetting by social network, censorship of overly positive or negative views etc, but the best is just to be aware:
Perhaps it's time we give up the idea that the "wisdom of the crowds" was ever a driving force behind any socialized, user-generated anything and realize that, just like in life, there will always be active participants as well as the passive passerbys.
Another thought I have is how to "nudge" people into a more equitable form - some way of reducing your influence the more posts you make, for example. One could have a declining rate of gaining social capital based on volume over time, modified by the voted perceived value from "small cap", low volume users. The options are fascinating.
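A minimal sketch of what such a scheme might look like - the logarithmic damping, the weighting rule and all the names here are my own assumptions, purely to make the idea concrete rather than a worked-out design:

```python
import math

def capital_gain(base_value, author_post_count, voter_post_counts):
    """Social-capital gain for one post under a volume-damped scheme.

    The gain decays logarithmically as the author's posting volume
    grows, and votes from low-volume ("small cap") users count for
    more than votes from high-volume ones.
    """
    volume_damping = 1.0 / (1.0 + math.log1p(author_post_count))
    vote_weight = sum(1.0 / (1.0 + math.log1p(c)) for c in voter_post_counts)
    return base_value * volume_damping * vote_weight

# A prolific poster gains less per post than an occasional one,
# even with identical votes:
votes = [3, 10, 50]                      # the voters' own post counts
occasional = capital_gain(1.0, 5, votes)
prolific = capital_gain(1.0, 5000, votes)
assert prolific < occasional
```

Any such formula would of course itself be gamed (sockpuppet "small cap" accounts, for a start), which is rather the author's point about how fascinating the options are.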
But fundamentally, the theoretical basis on which a thousand crowdsourcing startups have been launched is flawed, and all the sticking plaster in the world won't solve that.
Update - very good reminders from Taylor Davidson and David Jennings in the comments that the original book "The Wisdom of Crowds" made it clear that some very specific conditions are necessary for it to operate effectively. These are (Wikipedia):
Diversity of opinion - Each person should have private information, even if it's just an eccentric interpretation of the known facts.
Independence - People's opinions aren't determined by the opinions of those around them.
Decentralization - People are able to specialize and draw on local knowledge.
Aggregation - Some mechanism exists for turning private judgments into a collective decision.
Most of the commercial "wisdom of crowds" applications blithely ignore these, and use unfiltered social network input, so are prone to all the power law errors, madness of crowds problems etc. I forgot to note this difference between the original theory and commercial execution in the post, so apologies. The prime point remains, however - most commercial "crowdsourced" applications are flawed.
Creative Commons Licence
Original content in this work is licensed under a Creative Commons License