Saturday 21 December 2013

What is the state of the art in Android sharing?

I meet a lot of people in my job. Consequently I get to see lots of different companies trying to create the best possible Android sharing experience. Just about all of them start off with the standard sharing system based on intents and the standard chooser dialog.

It's compatible with just about all devices but it gives users an alphabetical list of applications even though many of them may not make sense in the given context. For example the list below shows the stock Email app even though I'm a Gmail user who has never configured that app on my tablet. This dialog also has issues for users who install lots of apps, and its alphabetical ordering means we're starting to see developers gaming the system in order to be at the top of the list.
There's also the problem that the intents system doesn't let you customise what you send to each recipient app other than by passing in a set of key-value pairs, so developers (like The Guardian and Soundwave) offer an explicit Google+ share button in the action bar for users who are signed-in with Google+. This gives their users direct access to interactive posts. They also offer the standard chooser dialog as well.

The Gallery and Keep apps try to remember the last N apps the user has shared to and present them to the user by using the ShareActionProvider added in Ice Cream Sandwich. Shazam's implementation is built on the same idea but does something slightly more sophisticated with it. It shows a list of the apps I've recently shared content to but adds various social services to the top of the list. It also knows which social service I signed in with and adds it to the action bar as a separate button. The assumption is that I'm more likely to want to share newly discovered music to that social network than with the random assortment of apps on my device.

Snapette and Fancy implement simpler variants on the same idea. They hardcode a small set of social services (including Google+, Twitter and Facebook) even if they're not installed on the user's device. Clicking on those options takes users to a sign-in dialog before they can share. In Fancy's defence it does offer a 'More' button that goes to the standard chooser dialog. This offers an escape hatch for users who want to share to contexts other than social networks.

Another alternative can be seen in the Spotify app. They make their own version of the standard chooser dialog but add a Spotify button to the top. That's because the standard chooser dialog doesn't give you much control over the order or membership of the list it shows.

Unfortunately if you make your own chooser dialog you're going to have to expend a lot of effort to make it resemble the real thing. Or you can just show a simple list.

So what do I recommend? Ideally you should use the ShareActionProvider, but nowadays a lot of apps are finding that deep integration with social services drives significant traffic and engagement. In that case...

If the screen is large enough then you should have your preferred share option in the action bar next to a button that launches a custom share list showing your preferred apps, with a More button that sends the user to the standard chooser dialog. This pushes users towards the developer's preferred networks (these could be the app's own network or the services that the user has used to sign in) but still gives users a way to get to all of the apps they've installed.

On smaller screens you should have a share button that sends the user to a custom list containing the developer's preferred networks with a More button that sends users to the standard chooser dialog.

This approach balances simplicity of implementation, predictability (users shouldn't have to wonder why options are appearing and disappearing from their chooser dialog), extensibility, value to the developer and responsiveness to device size. This may seem complicated but fortunately a large amount of this can be implemented using the ShareActionProvider and its support for fine-grained tracking of history.
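The ordering logic behind such a custom share list can be sketched in a few lines. This is an illustrative sketch only (the function name, arguments and limit are all made up, and a real Android implementation would sit on top of ShareActionProvider's history tracking): pin the developer's preferred networks first, fill the remaining slots from the user's most recent share targets, and always end with a More entry that opens the standard chooser dialog.

```python
# Hypothetical sketch of the custom share list's ordering logic:
# preferred networks pinned first, then recent share targets, then 'More'.
def build_share_list(preferred, history, limit=4):
    """preferred: networks the developer wants to promote.
    history: the user's share targets, most recent first.
    Returns the entries to display, always ending with 'More'."""
    entries = list(preferred)
    for app in history:
        # Skip duplicates and stop once the list is full.
        if app not in entries and len(entries) < limit:
            entries.append(app)
    return entries + ["More"]

# Example: one pinned network plus the user's recent targets.
print(build_share_list(["Google+"], ["Gmail", "Google+", "Keep"]))
```

Keeping the list short and ending with an escape hatch preserves the predictability mentioned above: entries only move, they never silently disappear.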

This is a complex and subtle topic with many different approaches being explored by lots of very smart people. I'm not going to pretend that this blog post is the final answer. After all, there's always the option of building something completely specific to your needs.

Wednesday 11 December 2013

Migrating to Google+ Sign-in in 5 minutes

Are you looking to understand the available strategies for migrating your existing Google login solution to Google+ Sign-in? Well… you've certainly come to the right place.
Who are you? You are you.
If you're using OpenID1, OpenID2, OAuth1 or OAuth2 Login then we have a detailed migration guide.

I strongly recommend reading it, or at least skimming it, since the social login market is bigger and more complicated than it seems. The following is merely a high-level restatement of the migration guide for people who aren't really sure which of the aforementioned technologies they're using.

If your existing system captures the user's email address using a Google identity solution then you can just:
  • migrate to Google+ sign-in
  • ask for the email OAuth scope
  • fetch the user's email address using one of our recommended approaches
  • look up the user in your database by email
  • associate them with the existing record that matches that email address since Google guarantees that the email addresses are valid
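The steps above amount to a simple lookup-and-associate operation. Here's a minimal sketch with a made-up in-memory user store (the function and field names are illustrative, not from the migration guide): once Google+ Sign-in returns a verified email address, find the matching record and attach the new Google+ user ID to it.

```python
# Hypothetical in-memory user store keyed by email address.
users_by_email = {"ade@example.com": {"email": "ade@example.com", "plus_id": None}}

def link_google_plus_account(verified_email, plus_id):
    """Associate a Google+ user ID with the record matching this email.
    Safe because Google only returns verified addresses."""
    user = users_by_email.get(verified_email)
    if user is None:
        # No existing record: treat this as a fresh sign-up.
        user = {"email": verified_email, "plus_id": None}
        users_by_email[verified_email] = user
    user["plus_id"] = plus_id
    return user
```

The key point is that no user interaction is needed: because the address is verified, the match by email is enough to merge the identities.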
If your existing system doesn't capture the user's email address then life gets interesting.

If you're sure you're using OAuth1+OpenID2 then you can follow the instructions in the migration guide, which tell you how to fetch your old identifier and find out the equivalent identifier with Google+ Sign-in. Once that's done you can just associate the new identifier with the existing record and the user can sign in with Google+ Sign-in in future.

If you're using something else then you can ask the user to sign in twice: first with your existing Google identity solution, then with Google+ Sign-in. Now that you have both identities you can associate them in your database. Once a critical mass of your users have gone through this process you can stop using the legacy identity solution. If you have to use this option then I would also recommend reading Michael Mahemoff's experience report on his migration, since I got the idea from him.
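The sign-in-twice approach boils down to recording that two identifiers belong to the same account, so that either one can find it later. A hedged sketch, with made-up names and an in-memory store standing in for your database:

```python
# Illustrative sketch of the 'sign in twice' migration: both the legacy
# identifier and the new Google+ ID end up pointing at one account record.
class AccountStore:
    def __init__(self):
        # Maps any known identifier to its account record.
        self.by_identifier = {}

    def migrate(self, legacy_id, plus_id):
        """Called after the user has signed in with both identities."""
        account = self.by_identifier.setdefault(legacy_id, {"ids": {legacy_id}})
        account["ids"].add(plus_id)
        # The new identifier now resolves to the same record.
        self.by_identifier[plus_id] = account
        return account
```

Once enough accounts carry both identifiers, lookups by the legacy identifier can be retired along with the legacy identity solution.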

Wednesday 13 November 2013

SCNA 2013 in memes

What was SCNA 2013 about? Well...the following memes kept coming up.
  • 10 thousand hours
  • Dreyfus model
  • Reasoning by analogy (chess, film making, agriculture)
  • TDD
  • Clojure
  • Code katas
  • Autonomy, purpose and mastery
  • Agile
  • Craftsmanship
  • Design Patterns
  • JavaScript
  • Apple Macs
  • Apprenticeship
  • Rails
  • Professionalism
  • Quality
  • Community
  • Family
I also did a little lightning talk in which I tried to get people to change their minds about the Singleton pattern.
 Ceci n'est pas une Singleton

Monday 7 October 2013

The screens in our future

Through historical accident, television has come to be seen as the first screen with our mobile devices as the second screen. This implies a mental model that's usually driven by television people who really mean that television is the primary screen whilst all others are merely secondary screens. When these devices are viewed from the perspective of the programme makers that is correct. But things change if we look at them from the perspective of the user.

When we look at usage it becomes clear that mobile devices (currently just phones and tablets) are the primary screen whilst television and desktop computers are the secondary screens.
Mobile is my screen.
Tablet is our screen.
TV is their screen.
-- Simon Davies and Ade Oshineye
The above quote is a synthesis of points made by Simon Davies and Ade Oshineye during an Internet Week panel discussion. They reflect a way of looking at screens/devices that may have surprising amounts of explanatory power.

The mobile (phone) screen is the most personal screen. It tends to belong to one person who carries it everywhere and seldom shares it. That person will customise it with a combination of apps and bookmarks that's practically a fingerprint. 

The tablet is large enough to be viewed by multiple people and is often shared within a family. As such it accumulates layers of choices from all the people who have used it. However it still tends to feel like a device that belongs to a small set of people. Whilst both of these devices empower people by giving them control over usage and content, the television is different. 

Whilst it may belong to one or many you're never allowed to forget that the content on your television is decided by other people. They decide what you can watch and when you can watch it. As for customisation…forget it.

The final step is to realise that we shouldn't be thinking about a fixed number of screens. We're facing a multi-screen future. That means there are going to be N categories of M screens in your life. And the values of N and M will only increase over time.

This is not a problem we will solve by building responsive websites with a fixed number of breakpoints. This is not even a problem we will solve by pouring the same content into a fixed number of buckets or screen sizes.

This is a Cambrian Explosion of contexts, interactions between contexts and user journeys across contexts. Time to adapt.

Plenty of room for more screens

Monday 30 September 2013

Minimum-viable identity provider

What's required in a minimum-viable IDP (Identity Provider) in 2013?

When I talk about viability I really mean "competitiveness" and I suppose what I'm really asking is what does it take to get RPs (Relying Parties) to integrate and users to authenticate with an IDP?

The list of requirements below was first published in a presentation I gave at Over The Air 2013. It's the result of hundreds of conversations with RPs and users over the last few years.

Valuable accounts. Are the accounts attached to real people who have been SMS verified? Does the IDP fight off attempts to create fake and spammy accounts?

Security. Are the accounts stored using salting and hashing? Do users authenticate using multiple factors? Are precautions being taken to ensure that user accounts are protected?
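To make the salting-and-hashing requirement concrete, here's an illustrative sketch using a slow key-derivation function from the standard library. A production IDP would tune the iteration count to its hardware and would more likely reach for a dedicated library such as bcrypt, scrypt or argon2; the function names below are my own.

```python
import hashlib
import secrets

def hash_password(password, salt=None, iterations=200_000):
    """Return (salt, digest). A fresh random salt is generated per account
    so identical passwords never produce identical digests."""
    salt = salt or secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=200_000):
    """Recompute the digest and compare in constant time."""
    _, digest = hash_password(password, salt, iterations)
    return secrets.compare_digest(digest, expected)
```

The per-account salt defeats precomputed rainbow tables and the high iteration count makes brute-forcing a leaked database expensive, which is the minimum bar the question above is asking about.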

Rich profiles. Does the IDP offer data that you can use to personalise your service, such as profile information, photos and social/interest graphs?

Ubiquitous APIs. Does the IDP offer RESTful APIs, native SDKs, client libraries in various languages and support for RTL languages?

Escape hatches. Does the IDP lock-in RPs and/or users? Can the RP obtain the user's verified email address so that the user has the option of using a different IDP with the same RP account? Is the RP forced to build their own post-registration flow?

Business model. Does the IDP make money or otherwise benefit from providing this service? Do they have a compelling incentive to stay in this business?

The final and most controversial ingredient is scale. Most people would say that all other things being equal an IDP with more accounts is better than an IDP with fewer accounts. I'd suggest that it's better to have accounts that are appropriate to the service the RP is trying to provide. For instance for Statigram the best IDP is Instagram, for other services the best IDP is Twitter and, of course, for any service that uses Google services (like YouTube, Android, Drive, etc) the best IDP is Google+.

The items listed above are just the necessary but not sufficient features of a viable IDP. Successful IDPs will still have to identify and provide additional value in order to get widespread adoption by users and RPs.

Wednesday 25 September 2013

Beyond the NASCAR

People in the identity world worry a lot about the NASCAR problem. 

They worry that showing a large set of buttons will hurt conversion rates (because of the paradox of choice) and confuse users who don't remember which IDP (identity provider) they used on a particular site+device combination. 

Stack Overflow NASCAR

I don't think that's a big problem nowadays. 

That's because we're down to a fairly small set of viable identity providers (henceforth known as IDPs). Most of the others are either dead (MyOpenID), new (Amazon Login) or only useful in specific niches (GitHub, Instagram, Tumblr, LinkedIn etc).

If we look at Stack Overflow's data we see that 5 IDPs are used by 98.6% of visitors, but everybody has to deal with the cognitive load of choosing between 12 buttons and 1 form field. 

Reducing the set of buttons to 3 would still give users a choice whilst reducing cognitive load. By cutting down to just these 3 IDPs they'd have covered the vast majority (in Stack Overflow's case 92.02%) of potential users of their site and greatly simplified the experience. 

However, if you prioritise providing access for 100% of your users over providing the best possible experience for the majority then you have several alternative strategies available to you.
If your goal is optimising the percentage of users who sign in, and making sure those people get the best possible experience, then here's what I suggest:

  • Use Google+ and Facebook buttons (there are also going to be scenarios where Weibo/Renren/VKontakte are appropriate additions). 
  • Then use checkSessionState() and FB.getLoginStatus() to find out if the user is already signed in to Google or Facebook. The mobile SDKs have equivalent APIs.
  • Then suggest whichever account the user is already using by putting that button first and/or making it bigger. 
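The third step is just a stable reordering of the button list based on session state. A sketch of that heuristic, with the session checks stubbed out as a set (in reality the answers would come from checkSessionState(), FB.getLoginStatus() or their mobile SDK equivalents):

```python
# Illustrative sketch: float any IDP the user is already signed in to
# to the front of the button list, so it can be rendered first or larger.
def order_sign_in_buttons(idps, signed_in):
    """idps: buttons in default order; signed_in: IDPs with active sessions.
    Preserves relative order within each group (a stable partition)."""
    active = [idp for idp in idps if idp in signed_in]
    inactive = [idp for idp in idps if idp not in signed_in]
    return active + inactive
```

Because the partition is stable, users who are signed in to neither (or both) IDPs see the default order, so buttons never jump around unpredictably.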

We've even published a guide to handling the scenario where the user is signed-in to both IDPs and you can automatically bypass the sign-in screen. 

However there's still the situation where a user prefers different IDPs on different machines: for instance if they work in a company that blocks Facebook at the firewall or they prefer Google+ on Android but Facebook on iOS. For those users a naive NASCAR implementation leaves them with one account on your service per IDP. 

The easiest solution is to ask for the user's email address and use that to correlate all the accounts they use to log in to your service. That way the user never has to worry about creating duplicate accounts. Of course this does restrict you to IDPs who can offer a verified email address. 

The only IDP this excludes is Twitter. If you are using Twitter as an IDP then you'll have to capture (and verify) the user's email address in a post-registration step.

Asking for email after using Twitter for identity

Sometimes you have to do things the hard way. Usually it's because you have large numbers of accounts with unverified email addresses (for instance if you used a standard OpenID IDP or used Twitter without capturing and verifying email addresses) or you're migrating users from one IDP to another. 

In that case you have to provide a 'connect flow.' This is where the user signs-in to your service with one IDP and you ask them to 'connect' with additional IDPs. Afterwards you know that the same person owns that set of accounts even if they have different email addresses associated with them. 

Connecting accounts on Soundcloud site

The heuristics above mean that the NASCAR anti-pattern doesn't have to harm conversion rates or UX.

If you'd like to learn more about this stuff I'll be attending Over The Air 2013 where I'll be walking people through examples of these heuristics in production and talking about the multi-device multi-platform post-NASCAR future of identity. Join me. 

Wednesday 18 September 2013

Why doesn't this blog allow comments?

Should your blog have comments? That's one of the perennial questions that every blogger faces. Are comments a way to bring in vital feedback from the-people-formerly-known-as-the-audience or are they merely a mechanism for enabling strangers to spew hatred and bile on a page with your name attached?

Not now

Historically my position has been "comments are bad, run away." My reasons included:
  • I really don't want to deal with spam. The only thing worse than having spam on your blog is using moderation systems that mean I have to read every spammy comment in order for you to get a better experience. I like you but I don't like you that much.
  • Google+ is a better conversational network than my blog. Every post on my blog ends up on my blog's Google+ Page as well.
  • Google+ also has the advantage that there can be multiple conversations by completely disjoint communities about the same blog post.
  • Google+ emphasises Real Names and Serial Identity. This means I can look at people's activity stream to see what they've been posting about, commenting upon and sharing. Of course, just because you're using your real name doesn't mean that you won't say or do things that I find objectionable but which your community finds laudable.
  • I agree strongly with Derek Powazek that your right to free speech stops where my territory starts.
  • I think it's a terrible idea to put everybody who has an opinion on a topic into the same room. That invariably leads to name-calling because they have so little common ground or shared vocabulary. For every person who understands the topic and wants to discuss nuances there'll be 10 people who would like a clearer explanation of the fundamentals. 
All of the above are good and sound reasons for disabling comments. So why have I just enabled comments on this blog?

The main reason is that I have new technology and I want to see if, just this once, technology can solve a social problem. The secondary reason is that I'm interested in aggregating the conversations around my blog posts. My hope is that this aggregation will help me discover people who are saying interesting and insightful things about what I've written.

I could be wrong but I live in hope.


Wednesday 19 June 2013

In Praise Of Shadows

I bought In Praise Of Shadows by Junichiro Tanizaki in a Dutch museum. It's an admirable corrective for anyone who feels that their taste has been overwhelmed by any particular aesthetic.

In praise of shadows

"a man who has a family and lives in the city cannot turn his back on the necessities of modern life" p3

"I always think how different everything would be if we in the Orient had developed our own science" p13

"how much better our own photographic technology might have suited our complexion, our facial features, our climate, our land" p16-17

"Of course this 'sheen of antiquity' of which we hear so much is in fact the glow of grime. In both Chinese and Japanese the words denoting this glow describe a polish that comes of being touched over and over again,  a sheen produced by long years of handling--which is to say grime." p20

"elegance is frigid" p20

"Sometimes a superb piece of black lacquerware, decorated perhaps with flecks of silver and gold -- a box or a desk or a set of shelves -- will seem to me unsettlingly garish and altogether vulgar. But render pitch the void in which they stand, and light them not with the rays of the sun or electricity but rather a single lantern or candle: suddenly those garish objects turn somber, refined, dignified. Artisans of old, when they finished their works in lacquer and decorated them in sparkling patterns, must surely have had in mind dark rooms and sought to turn to good effect what feeble light there was." p23

"The quality that we call beauty, however, must always grow from the realities of life, and our ancestors, forced to live in dark rooms, presently came to discover beauty in shadows, ultimately to guide shadows towards beauty's end." p29

"For the painting here is nothing more than another delicate surface upon which the faint, frail light can play; it performs precisely the same function as the sand-textured wall." p32

"This was the genius of our ancestors, that by cutting off the light from this empty space they imparted to the world of shadows that formed there a quality of mystery and depth superior to that of any wall painting or ornament. The technique seems simple, but was by no means simply achieved." p33

"And there may be some who argue that if beauty has to hide its weak points in the dark it is not beauty at all" p46

"we find beauty not in the thing itself but in the pattern of shadows, the light and the darkness, that one thing against another creates." p46

"A phosphorescent jewel gives off its glow and color in the dark and loses its beauty in the light of day. Were it not for shadows, there would be no beauty." p46

"It struck me that old people everywhere have much the same complaints. The older we get the more we seem to think that everything was better in the past. Old people a century ago wanted to go back two centuries, and two centuries ago they wished it were three centuries earlier. Never has there been an age that people have been satisfied with." p59

"I would call back at least for literature this world of shadows we are losing. In the mansion called literature I would have the eaves deep and the walls dark, I would push back into the shadows the things that come forward too clearly, I would strip away the useless decoration. I do not ask this be done everywhere, but perhaps we may be allowed at least one mansion where we can turn off the electric lights and see what it is like without them." p64

Monday 8 April 2013

Open always wins?

"Open" is one of my tribe's worship words. It is a word that is beyond criticism, analysis or critique except from professional trolls.

So what does it mean when people say "open always wins?" It means that because TCP/IP, HTML and Apache won then open systems, open standards and open source will always win given a long enough timeline. If the open solution isn't winning yet then we just have to wait.

This may seem like a strawman but Chris Saad bluntly stated "Whether it’s a year, a decade or a century, Open. Always. Wins."

I disagree. Mere openness isn't enough. Just because your product or service is open doesn't mean it's destined to win. Plenty of open solutions have 'lost' but we tiptoe past that particular graveyard. We either pretend that we don't remember its denizens or that they're merely sleeping.

Whilst I have a religious belief in openness and standards I can see the difference between what I want to be true ("open always wins" and "next year will the year of Linux on the desktop") and what is actually true. I want open systems to win but I'm also aware that this isn't guaranteed.

In fact when open solutions win it's because they:
  • have superior User Experience
  • have superior Developer Experience
  • give each user/developer/company more value than the equivalent closed solution
  • create a larger (and thus more valuable) market/network than the equivalent closed solution
  • co-opt the existing closed solutions
  • do something that no closed solution can match
  • commodify existing closed solutions thus rendering them unprofitable

Despite this I'm always surprised by the number of people who believe that openness is a sufficient condition for success. I'd even go so far as to suggest that if the only quality a solution has is its openness then that's a good indicator it's going to fail.


Sunday 31 March 2013

Speakerconf 2013

A man, a hat, ...

What is Speakerconf? Speakerconf is a small (roughly 16 attendees) invite-only conference where everybody who attends gives a presentation about a topic that's currently on their mind.

Speakerconf 2013 was educational, fun and humbling--all at the same time. It featured a wide range of speakers talking about a wide range of topics. Everything from UX to constraint programming to microservices to model checking to tail-call optimisation in Java 8 got covered.

The breadth and sophistication of the talks means that in every session at least some of us were completely befuddled whilst others were making connections across disciplines that don't normally share the same conference let alone the same room. For every talk about computation tree logic that went completely over my head there was a moment when I got to introduce people to the ideas in composing contracts or As We May Think. Many of the other attendees told me they had a similar experience. This resulted in an environment that was unusually conducive to respectful and enlightening conversation.

If you get invited to attend a Speakerconf then I strongly recommend you accept.

Wednesday 20 March 2013

Why do we bother with APIs?

I love APIs
Sometimes people wonder why we bother building APIs since it seems they can end up being used in ways that compete with our own products.

There are idealistic reasons for building APIs, as outlined by Jonathan Rosenberg, but there are also commercial benefits even if you don't share that philosophy. The main one is that APIs reduce the friction involved in making your services more valuable. They make it easier for other people to add data to your services. 

They also attract more users to your services by effectively advertising them on other people's sites. As well as increasing your visibility APIs also ensure that users are more likely to try your services since the risk of lock-in is reduced. If you have at least a CRUD API potential users know that there will be a  mechanism for extracting their data if something better comes along or if your services change in ways they don't like.

The other benefit of APIs is that they lower the cost of experimentation and increase the set of potential experimenters. These experiments can serve your users in two ways. Firstly they can handle niche use cases without cluttering the user interface of the application. Secondly some of these niche use cases may turn out, after a period of refinement, to be useful for mainstream users or for attracting completely new sets of users.

Another thing we've learned the hard way is that if you don't give people an API, or you give them an insufficient API, they'll resort to screen-scraping and hacking in order to unlock the value in your product. This can create dependencies on things that were never meant to be stable or it can lead to the emergence of widely-used but unofficial APIs.

That behaviour can harm your product, your developers and your users. For example it can lead to a mismatch in expectations when some developers believe they're using an official API with established deprecation and change management policies. You also have to ensure that the APIs you create don't damage the product, for instance, by making it very easy to spam or game your system.

Providing an API, no matter how good, is just the start. The next challenge is to make something valuable enough that developers will use it in the absence of some extrinsic compulsion.

Firstly this involves making something that's easy to experiment with. So it should be easy to copy-paste a personalised URL into a browser and see a pretty-printed dataset.

Then you have to offer a path from there. The path starts with letting people play even before they understand your service and continues all the way to the point where they understand your abstractions and the specifications you're using.

People should be able to go from playing in the browser to playing at a terminal with curl/wget to playing with an OAuth-enabled HTTP client to playing with your specialized wrapper libraries for your API to building businesses upon your platform.

But you can't just stop there. If you want to go from merely offering an API (typically a set of CRUD operations on your product's datasets) to building a viable platform you need to solve some difficult problems:

  • how does your platform, as opposed to your product, generate revenue or value for you?
  • how does your platform generate revenue or value for those who build upon it?
  • how do you respond to and/or incorporate the innovations that will be built upon your platform?
  • how do you nudge developers into creating more value than they capture from your users and your platform?
  • what happens to this surplus value? Is it being re-invested in the platform or siphoned off?
Even if you solve all these problems you don't have any guarantees of long-term success. The transition from API to platform to ecosystem is difficult and most APIs don't make it. However APIs can still help developers create new possibilities along the way.

Tuesday 19 March 2013

What do you mean 'we'?

"The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it."

The web that Anil Dash wrote about wasn't lost. It was rejected. 

Dash himself rejects it when he uses a commenting system that only allows Facebook users to comment. Daniel Tunkelang rejected it when he abandoned his blog in favour of a network that gives him higher levels of engagement. I reject it when I use Instagram to take and share photos just because it's more convenient than the alternatives. 

My initial response to Anil Dash's The Web We Lost was a mixture of amusement at his rose-tinted nostalgia, annoyance at his revisionist history and bemusement at his usage of Facebook comments. As time has gone on I've realised that Dash is not a hypocritical finger-wagging reactionary but just another sensible person making sensible decisions about the networks that will generate the most engagement for his content. Of course these sensible decisions happen to clash with his stated beliefs.

The mainstream of humanity actively rejected the web-that-was rather than accidentally let it slip away. They rejected it for much the same reasons they rejected the prospect of running their own power generator. It turns out that using a central power grid gives you a better quality service for less effort which frees you to focus on the things you really care about. Humanity rejected a vision of the web where everybody runs their own websites because it turned out that most people don't care as much about maintaining infrastructure as the geeks who formed the majority of the web's users 10 to 20 years ago.  That's why every time I see someone, for instance Clay Shirky, who has been cheerfully running a compromised blogging engine on his own domain for years I shudder at the idea that we once thought self-hosting was going to be the norm.

Felix Salmon's article was one of the first responses that acknowledged this problem. It made me realise why Dash's article reminded me so much of the distress of the privileged. That's because the 'we' who lost something is the set of middle-aged geeks who miss the way things used to be and want to roll back time to a world where only geeks could harness the power of the web. Like scribes bemoaning the advent of universal literacy the comments section of Dash's post is full of people saying how much better things were when communication tools were difficult to use and restricted to a sophisticated elite.

This makes me sad. The dream of the early web was that by removing the Gutenbourgeois as gatekeepers we would create the possibility for new voices to be heard. Wish granted. 

Unfortunately the technocratic response to these new voices was to dismiss them as an Eternal September of clueless newbies. It's as if the web was better before all these 'other' people turned up and started making choices 'we' don't like. It's as if all those developers choosing to build upon technologies with clear value propositions (build upon this platform and you'll get users and paying customers) and good DX were wrong. It's as if the billions of non-geeks were either ignorant, misled or suffering from false consciousness when they chose closed systems with great UX.

Robin Sloan has a refreshing perspective on this issue. He writes, on Medium, that we've reached a point where our taste has outpaced our skill. Our taste means we demand that an acceptable website must have lots of qualities that are beyond the skill of the average individual. By framing the issue in terms of taste and skill he shows why the pendulum is unlikely to swing back. Running a sufficiently high quality web site, as opposed to a web presence, is so hard that the amateur web looks like a wasteland of dead blogs, unmaintained websites and broken links. Again and again and again sensible people choose better UX or a larger network over a more open, decentralised or federated service. But what if this flight to quality isn't a problem?

What if all those billions of people made intelligent decisions that made sense for them? What if the people saying that the past was better than the future are the ones who are wrong? What if we reject this mythical past in favour of a new future where we try to build new things that people use because they're better solutions not because they claim superior morality?

Appeals to a bygone era where the web was more open but less diverse aren't going to inspire the construction of a better future as history teaches us that "convenience wins, hubris loses." Instead those appeals sound like the beleaguered art critic moaning that "taking a picture feels like signing up to some mad collective self-delusion that we are all artists with an eye for beauty, when the tragicomic truth is that the sheer plenitude and repetition of modern amateur photography makes beauty glib." When Dash writes that there's "an entire generation of users who don't realize how much more innovative and meaningful their experience could be" but can't point to any examples it sounds like yet another hollow claim that things were better when we were young.

Maybe things really were better when we were young, but I've learned to distrust appeals to bygone golden ages. Instead I want to hear people talking about vibrant futures. I want to see people working on new ideas that may not work out but which open up new possibilities. I want to see new people making new things, with all the uncertainty and doubt that brings.

This is why I'm increasingly hopeful about efforts like IndieWebCamp and ParallelFlickr. These are people building things that are useful primarily for themselves and possibly for others. That's how we'll invent a new and better web.

Jaiku forever

Friday 1 March 2013

A world of social login

Who are you?

We've known for years that passwords are bad.

They're bad for users, who tend to reuse the same weak password across multiple sites and so are only as safe as the least secure site they use. They're bad for developers because the sign-up process loses a large portion of potential users. Passwords also force every developer to implement all the pieces of a world-class identity system:

  • multi-factor authentication
  • the forgot password dance
  • a salted and hashed password database
  • etc.
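Even the "salted and hashed password database" item alone is non-trivial to get right. A minimal sketch of what it involves, using Python's standard library (the iteration count is my own illustrative choice; a real deployment should use a vetted library and current cost parameters):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 100_000) -> tuple[bytes, bytes]:
    """Return (salt, derived_key) using PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)  # a fresh random salt per password
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return salt, key

def verify_password(password: str, salt: bytes, key: bytes,
                    iterations: int = 100_000) -> bool:
    """Re-derive the key from the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    return hmac.compare_digest(candidate, key)
```

Note that because each password gets its own salt, identical passwords produce different stored keys, which is exactly the property that defeats precomputed rainbow tables.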

Despite all this, passwords and the password anti-pattern are still prevalent.

Social login isn't a panacea but in the long run the only viable solution is delegating authentication to a small set of high quality identity providers. It has to be a small set to avoid the damage to conversion rates caused by the NASCAR problem. They will be high quality since the market is so competitive that low quality providers (where quality is a measure of the experience/value provided to users, developers and publishers) will find it hard to acquire and retain users. The market will be competitive simply because various entities have realised that social login is the backbone of any successful ecosystem so they're making the necessary investments.

This is sub-optimal, but the OpenID dream (where every user runs their own server and their own OpenID endpoint) ran aground on the twin rocks of user apathy and security. Even if the dream had survived those, it still had no good answer for the major publishers who wanted to know what they would get in return for the extra effort of supporting OpenID. If you think OpenID Attribute Exchange and PAPE are solutions then you may be wearing the complicator's gloves.

The only questions left are:
  • who will be these identity providers
  • what will be their business models
  • how will we assess and choose between them
  • how will we keep them honest
  • how much control do they give users
  • do they help developers build better and more valuable services as time passes
  • will they become gatekeepers that constrain future innovations

This moves us to a world where users authorise developers rather than particular apps or web sites. As a result, once you give a developer access to your information, all of their services and apps have that access. Technologies like OAuth2's bearer tokens mean developers can easily pass access to a user's information back and forth between their mobile apps and their back-end systems.
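The hand-off is trivial precisely because an OAuth2 bearer token (RFC 6750) is self-contained: presenting it in an HTTP header is all any client needs to do, so whichever tier currently holds the token can act for the user. A minimal sketch:

```python
def bearer_headers(access_token: str) -> dict:
    """Build the HTTP header that presents an OAuth2 bearer token.

    The same header works from a mobile app, a web front-end or a
    back-end batch job: whoever holds the token gets the access,
    which is why bearer tokens must be protected like passwords.
    """
    return {"Authorization": f"Bearer {access_token}"}
```

A mobile app can therefore ship its token to the developer's servers, which then call the identity provider's APIs on the user's behalf with exactly the same header.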

In this new world developers will have to deal with multiple competing identity providers, each imposing its own constraints and policies to protect its users. Developers will therefore need to think more carefully about how they propagate identity between their systems, track the provenance of user data and honour the conflicting policies imposed by multiple identity providers. They'll also need more nuanced terminology: it won't be enough to think in terms as crude as "public" versus "private". Developers will also have to be aware of the subtler distinctions between "obscure" and "secret", and between "public" and "publicised".
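One way to make those distinctions concrete is to model them as states rather than a boolean. The names and semantics below are my own illustration, not any provider's API:

```python
from enum import Enum

class Visibility(Enum):
    """Finer-grained visibility states than a public/private flag."""
    SECRET = "secret"          # access denied without explicit authorisation
    OBSCURE = "obscure"        # reachable if you know the URL, but not listed or indexed
    PUBLIC = "public"          # anyone may fetch it
    PUBLICISED = "publicised"  # public *and* deliberately pushed to an audience

def may_syndicate(visibility: Visibility) -> bool:
    """Only re-broadcast content the user chose to publicise.

    Merely public data shouldn't be amplified without consent: that's
    the difference between "public" and "publicised".
    """
    return visibility is Visibility.PUBLICISED
```

A policy layer built on states like these can then honour each identity provider's rules about what may be cached, displayed or re-shared.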

In return we get a world of social login where you bring your identity, your interests and your community to every app, service and device rather than just the ones built by identity providers with unified privacy policies.

Friday 22 February 2013

The Google+ Sharelink Endpoint: doing it right

If your site has a Google+ sharing feature that uses this URL: then you're doing it wrong. You're using unsupported and undocumented functionality. Don't.

You should be using a sharing URL that looks like this:

That's our official sharelink endpoint. It is supported, monitored and maintained. The URL you're using right now is an internal part of our +1 button's JavaScript API, so it's subject to change: we don't expect anyone else to be depending on it.

The documentation for the sharelink endpoint is here: it even offers a set of standard graphics that you should use for consistency with the rest of the web.
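Whichever endpoint the documentation gives you, the mechanics are the same: pass the page's URL as a properly encoded query parameter rather than concatenating strings by hand. A sketch, with a deliberately hypothetical placeholder for the endpoint (substitute the official sharelink URL from the documentation):

```python
from urllib.parse import urlencode

# Placeholder only -- replace with the official sharelink endpoint
# from the documentation; it is not reproduced here.
SHARELINK_ENDPOINT = "https://example.com/share"

def build_share_link(page_url: str) -> str:
    """Compose a sharelink, URL-encoding the page address safely."""
    return f"{SHARELINK_ENDPOINT}?{urlencode({'url': page_url})}"
```

Using `urlencode` rather than string concatenation ensures characters like `&`, spaces and `#` in the shared page's URL can't break the generated link.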

In short: don't be like the guy in the photo below.

Helvetica heretic