Wednesday, March 25, 2015

Inclusivity is a Bad Thing.

(This post inspired by Michelle Lyons-McFarland; check her out at among other places)

There are a lot of white dudes in IT talking about "inclusivity" in their culture and how it's important, as if "inclusivity" is an end-stage boss they can beat, or a card they can move from the "actionable" to the "done" part of their burn board. It's often used in concert with "diversity" (which is another tricky concept that I might go into later), and is touted as a good thing in and of itself.

I can claim a lot of things as a cis white guy in IT, but here's one thing that you should probably trust me on: Inclusivity Is A Bad Thing. It's bad for the individual, it's bad for communities, it's bad for teams and organizations, and it's bad for society as a whole.

It's bad because at best it means nothing, and at worst it means a deliberate and willful choice to avoid making decisions. I'm not even a big fan of the phrase "be inclusive" because it, too, is a move towards avoiding action and choice, rather than possibly making a stand and risking some sort of outcry or backlash or (in extreme cases) horrible harassment. Inclusive is the wrong word. It's the word that nerds and PR flacks use to say something without saying something.

No, if you want a stronger organization, a stronger team, a stronger community, you must include people. If you want a better range of colour and gender and backgrounds in whatever it is you're trying to build or improve, then you, both individually and collectively, must act to build or improve. And that means changing the words used. "We want to be inclusive" is a passive statement, and implies that the problem is not you or your organization, but all of those silly people who can't figure out how awesome you are. "We want to include more women and minorities" is better. "We want to include more women" is much better. "We are working to make our organization more friendly towards GLBT folk" is much, much better.

Organizations require work. Good organizations require lots of work. Some of that work is deciding who will or won't be a good fit for your organization, and determining the rules about how to admit and how to exclude individuals. Because the truth of the matter is that not all people fit in all organizations; that's the nature of both people and organizations. And for that matter, not everyone the organization thinks will fit actually will. It's possible for a team to want more women on it and still decide not to hire a particular woman. It's possible for a company to want more black programmers and still not hire every black programmer.

The set [all the people everywhere] is not a good fit for anything other than a definition of population. Necessarily, organizations will not merely want to exclude some people; they will need to. Clear and easily understood exclusion principles are hard to implement, but ultimately they will improve whatever group you're trying to create. Exclusion and exclusivity aren't a priori bad things, as long as they're clearly understood and communicated.

Thursday, March 05, 2015

The Cognitive Gap Of Why

So a non-trivial number of people whom I respect and enjoy have made the very same mistake about a set of inter-related application usage patterns, specifically around social media tools and the infrastructures behind them.

That's a complicated starting sentence, so let me give a specific example (just the latest in a long line of arguments on the same theme): the excellent CGP Grey made an argument about why Youtube can't be better at serving up videos and more like Netflix when presenting content. It's an excellent point, to be fair: Youtube is fantastically bad at serving up content that I want in the way that I want it when I'm trying to watch stuff, and I'm not even a publisher; CGP Grey's problems are at least twice as difficult as mine.

The problem is, of course, that the question reverts to a very old axiom that I've used since the first time I heard it: nearly every question that starts with "why" can be answered with "money".

Netflix and Youtube have two fundamentally different business models. For Netflix, their customers and their users are the same people: the audience for Netflix is the people that gave them money, and so they are motivated to deliver a good user experience because not doing so will cost them money. Their Ops focus is stability, reliability, deliverability, and service. Their UX focus is about getting photons into eyeballs as quickly and as efficiently as possible. Their goal as a company is to satisfy the viewer.

For Youtube, though, the users and the customers are two entirely different groups. Youtube doesn't make any money from the person who comes to look at the videos it hosts; in point of fact, viewers arguably cost Youtube money. And content-uploaders aren't the customers either, which is hilarious because Youtube wouldn't exist without the people who upload stuff. No, the customers for Youtube are the advertisers and aggregators that want the data about the users. That's what Youtube is selling, even over and above the ads on top of the content itself; they're selling data about what users are watching.

The same is true of social media sites like Twitter, Facebook, and Google Plus. The people who use those sites are not the people that the sites care about, at the end of the day. It's why Facebook won't set its algorithms to display status updates in strict chronological order. It's why Twitter is changing the methodology of the timeline. It's why Google Plus doesn't disable plus-one sharing, even though nearly everyone who uses G+ hates it. The people who use the sites are not the audience. They're not the customers. The customers are the people who pay Facebook, Twitter, and Google for data about the users.

If a service isn't charging you for using it, then you aren't the customer. You're the product.

The answer to nearly every "why" is almost always "money".