15:58, Wednesday 5 Jun., 2013

Okay I've been blocked on a bit of writing for about two weeks. And since it appears I can't think my way out of a paper bag, how about we have some random links from my open tabs.

Fingers crossed, now unblocked.


20:23, Wednesday 15 May.

I've been doing some competitive landscape analysis around connected products/Internet of Things platforms -- I'll write up my thoughts soon. During research I touched on Bluetooth 4, which seems like it could be the connective tissue of a peripheral ecosystem around smartphones just as USB was for peripherals around the PC.

And in this section, I hadn't included Apple's MFi Program in the list (MFi is hardware and certification for iPod, iPhone and iPad.) Greg asked me why. Well, I said, they don't do enough UX integration, and besides, I don't want to give them any ideas. If they did what I think they should do, they would totally own connected products.

But hell! The Big 3 are full of the smartest technologists on the planet!

It's not for lack of ideas that they aren't doing this.

So here's how Apple, or Amazon, or Google could totally become the platform for the future world of connected products, and - with a connected products platform of my own - the thought that one of them might make a move like this is what keeps me up at night.

Amazon

Starting point: With the Kindle, Amazon have an amazing chip that has global connectivity via 3G. They also have a billing model where the content provider pays for delivery (currently $0.15/MB for Amazon.com deliveries to the US, which explains why you don't get many graphics-heavy books on the Kindle). This kind of billing infrastructure is hard.
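Back-of-envelope, just to see what that fee means in practice -- the book sizes here are illustrative guesses of mine, the only real figure is the $0.15/MB:

    # What Whispernet delivery costs the content provider at Amazon's
    # $0.15/MB rate for US deliveries. Book sizes are illustrative guesses.
    DELIVERY_FEE_PER_MB = 0.15   # dollars

    def delivery_cost(size_mb):
        return size_mb * DELIVERY_FEE_PER_MB

    print(delivery_cost(0.5))    # text-only novel, ~0.5 MB  -> $0.075
    print(delivery_cost(20))     # graphics-heavy book, ~20 MB -> $3.00

A few cents per text-only book is nothing; a few dollars per image-heavy book eats the margin, which is the point.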

What happens: Amazon apply their genius for service-oriented architecture (SOA) to Kindle's Whispernet functionality, take advantage of their economies of scale, and provide wireless chips that any developer can use. Just as they SOA'd their storage requirements into S3, and their server farms into EC2 - now both services that are the tarmac of the modern web - they couple this SOA'd hardware connectivity with Amazon Web Services, and create the perfect platform for connected products. Of course Amazon also own an identity system with associated credit cards/payments platform. Plus they really get APIs.

Amazon would own connected products. You wouldn't build on anything else.

Apple

Starting point: The emerging smartphone peripheral ecosystem (appcessories and whatnot) is built around Bluetooth 4, the low power wireless standard that Apple have been including in their products since 2011.

What happens: Right now dealing with appcessories on the iPhone sucks (claiming and syncing), so Apple add some minor UX support: hardware products get a place on the homescreen in a parallel to Newsstand called Nightstand -- a virtual table for physical things. You associate each product with your Apple ID. Then, to solve the problem that connected products need to talk to the web without a smartphone present, they activate the Bluetooth 4 already present in the Apple TV (and maybe add one to the Airport Express), and make it so that any product that can connect via your smartphone can also connect via any Apple TV you've signed into with the same Apple ID. For bonus points, iCloud is used for the messaging layer, so any data sent via the Apple TV also shows up on your iPhone. Of course Apple owns an identity system with associated credit cards, fully capable of micro-payments and subscriptions.

Apple would own connected products. You wouldn't build on anything else.

Google

Starting point: Android. Motorola.

What happens: Google take cheap cellphone guts - the peace dividend of the smartphone war - and use Motorola to release a development platform that runs Android, rebooting the Android @Home program that was launched back in 2011 with smartphone-controlled lightbulbs. In this new 2013 world of Arduino and Raspberry Pi, hardware is way more accepted... but loads of people already know how to develop for Android. So developers flock to this new platform. You're not locked into Google's hardware, because Android hardware is commoditised down to the CPU, unlike similar offerings from Amazon or Apple. The UX is provided by Android apps, of course. Google Cloud Messaging is used to link the connected hardware to regular ol' websites that developers build themselves. Websites are easy, and Google trusts the web. The platform is a great combination of open and familiar. Google also owns an identity system, and a payments platform.

(A note: I don't think Google could pull off the Apple model of a peripheral ecosystem built around Bluetooth 4. Google doesn't have enough non-smartphone presence in the home, and Android fragmentation would be a major problem -- especially Samsung's ownership of the front room via the Smart TV platform, which would put the two companies at odds.)

Google would own connected products. You wouldn't build on anything else.

Who I'd back

I wouldn't back any of 'em.

It's true, if any of the Big 3 made a move like this, you'd be dumb to use anything else for your Kickstarter project or new hardware company. It would be great. So many common problems would be solved.

But I'd be sad. We'd be stuck with a platform that met our imaginations only of today. It wouldn't evolve; big companies are too slow.

We're only going to discover the weird and wonderful opportunities of connected products once we've rolled our sleeves up and got our hands dirty. How are connected products going to change our homes, our offices, our cities, our social lives? Who knows. It'll take years to find out. And at that point, maybe we can have a dominant platform. That'll be fine. Until then there's BERG Cloud and a dozen others to help figure it out. There will be more. Let a thousand flowers bloom!

14:25, Tuesday 14 May.

How To Price Your Hardware Product, Marc Barros:

The mistake most hardware startups make is they don't charge enough because they don't think of the problems they will encounter at scale. They don't calculate the real cost to deliver their product to a customer's door, they leave no margin to sell through retail down the road when opportunities arise, and they can't easily raise the price after it has been set.

He covers some good points that you need to take into account beyond your profit margin -- all of them easy to forget when you're looking at the bill of materials for whatever the core component is.

Here's one of Barros' examples using top-down pricing: $200 retail means you get $101.80 from your customer. A product cost of $58.10 means you have a margin of $43.70, or 42.9%. He recommends shooting for a margin of 50%. All reasonable and sensible, and I like his summary: don't be afraid to charge more. Long term, your loyal customers will thank you for staying in business. You're not thanking your customers in any way if your low margins mean you have to skimp on customer service, or on developing improvements to the product they've invested in.
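His numbers check out. Here's the arithmetic as a quick sketch -- the 50.9% share of the retail price is implied by his figures rather than something stated here:

    # Barros' top-down pricing example, re-run as arithmetic.
    retail_price = 200.00
    revenue      = 101.80    # what you actually receive per unit (50.9% of retail)
    product_cost = 58.10

    margin     = revenue - product_cost    # 43.70
    margin_pct = margin / revenue          # 0.429 -> 42.9%

    # His target is a 50% margin, which at this revenue caps the product cost at:
    max_cost_at_50_percent = revenue * 0.5    # 50.90
    print(round(margin, 2), round(margin_pct * 100, 1), max_cost_at_50_percent)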

To my mind, there are two disruptions that make this take on pricing difficult.

Kickstarter

I think about Kickstarter hardware projects in two categories. There are those made for love not money. (And that's cool -- hardware products, like any creative act, can be made for 1,000 true fans with the potential - but not requirement - to break through into the mainstream. I love it.) Then there are those where Kickstarter is about getting mindshare, learnings, and the infrastructure to build the products that come after this one -- there's no profit requirement. That's cool too: In an established company, products sit underwater for a long time before they break even.

These projects are low margin, funded by love and future expectations, and - because Kickstarter is also a great distribution platform - they don't need to build in retail margin. Consequently the prices are lower than equivalent non-Kickstarter projects.

Amazon

I use Amazon as a proxy for the shifting sands of new business models. The Kindle Fire is sold at cost, or below: it's all touchscreen, PCBs, and battery. Where do Amazon make their money? Well, nowhere yet... they're a notoriously low-margin, long-term-view company. But once they make $3/month in additional sales, the Kindle Fire moves into profit. But think about this... if the $159 Kindle Fire were sold with the markup suggested by Barros, we'd see an RRP of $547. Insane.

This isn't new. Cellphones have been subsidised by carriers for years, their high up-front cost offset against monthly bills. Car financing is common. DFS functions more like a credit company than a sofa store.

But it's becoming more common in the hardware world as subscription relationships become more accepted -- and more necessary. When products connect to the cloud, the cost structure changes once again. On the one hand, there are ongoing network costs which have to be paid by someone. You can do that with a cut of transactions on the platform, by absorbing the network cost upfront in the RRP, or with user-pays subscription.
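A toy comparison, every number made up, just to show how the same lifetime network cost lands differently in the two models:

    # Hypothetical: a connected product with an ongoing network cost of
    # $0.50/month over an assumed 3-year service life.
    monthly_network_cost = 0.50
    service_life_months = 36
    lifetime_cost = monthly_network_cost * service_life_months    # $18 per unit

    # Absorb it in the RRP: with retail taking roughly half the sticker price
    # (as in the Barros example above), covering $18 adds about $36 to the RRP.
    rrp_uplift = lifetime_cost / 0.5

    # Or user-pays subscription: the sticker price stays put and the customer
    # pays the $0.50+ a month directly.
    print(lifetime_cost, rrp_uplift)    # 18.0 36.0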

We're finding product categories dominated by one business model or another. It's hard to enter a subscription-dominated category with a straightforward retail model. Your product will look too expensive.

It's not as easy as it once was.

Enough product companies are operating at zero margin, or on some alternate business model, that pricing hardware is no longer as simple as making sure you have the right margin.

15:19, Thursday 9 May.

Open in my browser right now:

  1. Planar Choreographies. We're all familiar with stable orbits in a two-body system: it's how the earth goes round the sun. The earth describes a big circle, the sun a little one, and both are centred on their mutual centre of gravity. It turns out there are stable orbits for n bodies too, and they're lovely. I wonder what it would be like to live on a planet in a "seven on a butterfly" solar system.

  2. Wired interview with Bill Gates. Saved 5 million lives, and he's funny? Dammit.

  3. The pitch deck Buffer used to raise $500,000. Great pitch deck. A simple story, well told.

  4. Chris Dixon on hardware startups. A big factor in why hardware is possible now? The peace dividend of the smartphone war. (Chris Anderson.) Chris Dixon lists a few points to keep in mind: Manufacturing (no Amazon Web Services for production); defensibility (no network effects); planning (it's not agile); B2C vs B2B (attention vs margins).

    I'm gonna add four other points of differentiation from software. One is distribution, both getting attention and actually shipping things: it's hard, and incumbents win. Second is funding: margins are lower, you have working capital tied up in stock, and the pipeline is slower. Third is complexity. Connected products (and that's my concern) have mechanical parts, embedded software, connectivity/protocols, and cloud software. These need to move in sync, and it's hard to tell what takes the lead. The fourth point is business model -- the business model of products is already moving into flux. It's about to go chaotic.

    Further reading: Indiepocalypse (Andy Baio): For hundreds of years, publishers across every industry - book publishers, record labels, film studios, videogame publishers - solved problems for artists in four major ways: funding, production, marketing, and distribution. And the internet is disrupting all four of these simultaneously.

    The way I think about this is the "fat middle." In each industry - say, news - we've had the dominant head (New York Times) and long tail (round robin newsletters). In music? Dominant head of stadium tours and U2, and the long tail of bar gigs. The internet's flattened the curve, and a fat middle has arisen. In news, major blogs: Engadget, the Verge, etc. Music: see all of YouTube.

    So... a fat middle of hardware? Yup. It's happening. Cool.

Now I can close my tabs.

16:15, Saturday 4 May.

Recent press on BERG Cloud, the new Dev Kits and Little Printer:

And one slightly older piece:

09:51, Thursday 2 May.

The good thing about rolling your own blogging system is that you're in control of your data and your destiny.

The bad thing is that you have to live with all the ridiculous choices made by you-with-13-fewer-years-experience.

So every time I have anything to say, there's a several hour (day, week, month) long throat clearing process where I have to check what the syntax for the blog posts is, see whether the rendering and publishing code still works, and get the whole thing working on my laptop again.

You'd think that this barrier to entry would result in me only posting when I had something really, really worthwhile to say. Where my desire for public exposition was so strong that it would carry me through all the pain and hurdles.

No. I write posts when I'm procrastinating, or when I'm at an airport. Today I'm procrastinating.

Update: Why on earth does my blog template put a little dot after "May" in the date?? Apparently me-some-years-ago is a lazy coder who can't be bothered to truncate correctly. This is lazy: strftime("%b. %Y")
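The fix is easy enough -- a minimal sketch (Python here, though whatever language the blog engine is actually written in, the idea is the same): only append the dot when the month name really has been truncated.

    from datetime import date

    def month_label(d):
        """Abbreviate the month, adding a dot only when it's actually truncated."""
        full = d.strftime("%B")       # e.g. "May", "June", "September"
        abbr = d.strftime("%b")       # e.g. "May", "Jun", "Sep"
        return abbr + "." if abbr != full else abbr

    print(month_label(date(2013, 5, 1)))   # "May"  -- no spurious dot
    print(month_label(date(2013, 6, 1)))   # "Jun."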

14:32, Tuesday 3 Jul., 2012

Companies I would start if only I had the time, #3 in a series (previously, previously):

I would make a Facebook Camera for the explicit purpose of getting acquired by Facebook.

Facebook have announced they're going mobile first. They need to: half Facebook's traffic comes from mobile rather than PC, but mobile traffic does not currently directly generate any meaningful revenue.

There are lots of rumours about Facebook working on a phone.

They shouldn't make a phone. They should make a camera.

Features the Facebook Camera should have

The Facebook Camera should be a better pocket camera, with Facebook built in natively and re-imagined for sharing, plus core communications functionality. It should have wifi and optionally 3G.

The camera is a "second device" which lives alongside the phone and doesn't compete with it. This lets Facebook sidestep the highly competitive (and increasingly locked-in) space of iPhone and Android, and avoids the need to launch with a full app store.

Facebook are interested in camera apps (they have two: their own, and Instagram). They should make the hardware.

A better pocket camera:

The viewfinder screen should be front-facing, on the same face of the camera as the lens.

Social photos aren't like what I'll call "posterity" photos. They're not portraits and landscapes. Social photos include the photographer in the picture -- you hold the camera out, and point it back at you and your friends. Or you point it at the view behind you and include yourself in the frame, to prove you're there. A front-facing viewfinder would be perfect for this, and it would also make the physical product visually distinctive when shown in adverts and magazines ("self-evident" product design is essential for marketing).

Native sharing:

Sharing happens in real life too. One usage of digital cameras I saw - before the iPhone came along - was that a few photos would be kept, undeleted, on the memory card, usually of cats, kids and significant others. These photos are for showing off.

There should be a dedicated "photo wallet" Facebook album, and the front-facing screen should be used for a dedicated showing off function.

A "core communications" device:

Although I see the Facebook Camera as a second device, alongside the phone, this networked device should support all core communications: email, chat, maps, and Facebook itself.

Given this list, I suspect the Facebook Camera would undermine many of the reasons to carry a full-featured app phone.

Music isn't required. Wearing your headphones is anti-social when you're hanging out with your friends.

If you want the killer feature... Facebook should build on Facebook Chat to support video, and make this camera a video chat device. Hangouts (easy, social video chat) is the stand-out amazing feature in Google+, and Facebook should be looking to compete.

Product design:

The Facebook Camera should have accessible product design which is cool without being weird (the Nokia Lumia does this well), mass market without being tacky (the Kindle does this well), and distinctive without being bizarre (think of the original iPod). It's got to look like a camera crossed with an iPod Touch with your friends inside.

Reasons Facebook should make a mobile device

It makes sense to make hardware, because physical products are high engagement.

Facebook's model (as I understand it) is to record every single action every person takes, with metadata of time, place, and location in the social graph. This substrate, and the tools to manipulate it, have a good chance of being the underlying foundation of whatever it is that comes after the Web. The Web started as a document repository; it's all about nouns. Facebook has the potential to be as big, but all about verbs. The "social network" aspect of Facebook is part of its bootstrap: the way Facebook gets into the position where it's natural that all verbs run through it. The next step in the bootstrap, to move down into the foundations, is that Facebook will become a platform for other social networks. Instagram is the first major one.

Any drop in engagement in the social network (for example what happened to Digg or MySpace) risks this entire future.

As a defensive play, a mobile device is essential. Facebook's mobile usage is increasing, but they can't make any money out of ads on mobile. So they're in a desperate double bind: so long as Facebook on mobile is popular but not commercially useful, it's good for mobile operators and OS providers because it boosts service usage, but it's bad for Facebook because it cannibalises desktop usage.

But when their mobile service is popular and becomes commercially viable, the mobile operators and OS providers become conflicted gatekeepers who will either undermine the ad experience or take a piece of it for themselves, undermining Facebook as a whole. We're seeing signs of this already. Half of the mobile market is owned by Android, made by Google, who also make Google+, which means Android will threaten Facebook.

The way to escape this trap is for Facebook to make a mobile device.

Reasons Facebook shouldn't make a phone

The phone market is really, really contested, and really, really hard. Phones are the centrepiece of Apple, the most valuable and most inventive company on the planet. Phones are the focus of Google, the Web's most inventive company, and a fierce and increasingly motivated competitor. Both Apple and Google have been working hard on lock-in for one or more OS generations. Phones are one of the points of both attack and defence for the previous generation's largest technology firm, Microsoft. Phones are where one of the largest technology companies there is - Samsung - can just about keep up. Phones are the rocks on which the biggest of the big technology players have come unstuck: Nokia and RIM.

To have a phone now, you need the phone, a sufficiently incredible offer to get customers to break with phones they love (most of those people are in 18-24 month contracts), a whole app and developer ecosystem, hardware manufacturing and distribution, access to the network, access to a media content system, access to a physical media playback system, and to be buttressed by a multi-device ecosystem like tablets or music players.

Facebook could enter this market, sure, but why bet the business on winning in such a competitive space?

Here's the thing: You don't need to make a phone to make it in mobile.

Reasons Facebook should make a camera

Five years ago, the iPhone was released into a world of desktop PCs and bulky laptops. Laptops were never truly mobile devices, and the iPhone (and Android) made a lot of sense in that world, over the previous generation of smartphones from Nokia and RIM. "App phones" were more like mini computers.

The product landscape has changed. The iPad is phenomenally successful, and other tablets look pretty neat too. The trend with laptops is towards ultrabooks, where the MacBook Air is setting the pace -- the Air is almost instant on, super light, and has incredible battery life. It's way more mobile than any previous laptop. Alongside these product shifts, the cloud has emerged as the home of data. When I lost my laptop recently, configuring a new one was as simple as signing into iTunes, Dropbox, and GMail.

In this world of iPads and (hopefully) upcoming tablets, does the bells-and-whistles approach of iPhone and Android make as much sense? I don't think so. I think a new, simple category of pocket devices opens up. It's not going to be another music device, those have vacated the pocket. It might be a gaming device, but the iPhone has grabbed that niche.

But it could be a camera.

A camera that also dealt with core communications (email, chat, maps, Facebook) would meet some of the same needs as a phone without competing with phones directly.

Cameras are both highly personal and highly popular, like music players were when Apple launched the iPod. That's a good place to be. It's full of love.

And cameras fit right in with Facebook's position as the world's biggest online photo service (in 2010, Facebook had 2.5 billion photos uploaded every month), just as the iPod fit with Apple's position in the music sector with iTunes.

Last, the camera sector is ripe for re-invention and new features.

The bottom end of the market has been softened up: the iPhone has replaced the compact camera as most people's camera of choice. But it doesn't take great photos, and it's okay but not particularly good at letting you share and socialise around photos. So the iPhone has not protected its position as the compact. And although the former compact sector has been adding features like crazy - smile detection, wifi uploads - none of the device manufacturers really get software or social networks.

On the high end, the professional cameras have turned into excellent prosumer models -- which is neat, but they're definitely not social: they're portrait and landscape cameras. You can see a few manufacturers attempting to innovate: Nokia have their 41 megapixel camera, Polaroid have launched a digital camera, Sony have their compact DSCL, there's Lytro and their lightfield camera, and Samsung have actually launched cameras with front-facing screens, etc. But nothing has traction.

Evidence Facebook is working on something mobile, maybe even what I'm suggesting

Facebook is breaking up their mobile app into lots of different apps for particular functions, which is what I'd expect if they were going to launch their own device: they'd want Facebook features to be top-level features on whatever that new device was, and creating them in HTML (the language of the Web) on iPhones means they can re-use these apps still in HTML on whatever their hypothetical new device is.

They obviously care about cameras: the single app that doesn't parallel a feature on the Web is a dedicated camera app. And then there's Instagram, which is Facebook's second camera app.

How I'd make this business work

If I was Facebook, I'd be getting ready for a hypothetical future device by preparing all my functionality to make the jump. Currently Facebook are breaking up their single iPhone app into lots of little microapps. Makes sense. Then I'd talk to Sony for the manufacturing.

But I'm not Facebook. So I'd either do a start-up with a hardware accelerator (equity is exchanged for contract manufacturing), or I'd prototype and then pitch to joint venture with Facebook itself.

The thing is, the nature of products is changing. It doesn't make sense to think of cameras as straight-up-and-down products -- you have to consider what a camera is as a service, and what it is as media. That is: how does the camera meet the service offering of "taking and sharing photos" as easily and wonderfully as possible? And how does the camera let photos take their place as objects in the communication and entertainment media of social networks? The industrial design is almost secondary.

And traditional product companies - even Apple to an extent - don't think like this. Web companies do, but so far hardware has been out of their reach. Until Web companies figure out how to do hardware, there's going to be an interesting gap to fill.

18:14, Tuesday 22 May.

Companies I would start if only I had the time, #2 in a series. (Previously, FuelBand for alpha waves.)

Instagram for webpages.

Hear me out:

Instagram has proven there is a mass appetite for creativity and personal expression. Look at the popular photos on Instagram: girls, pets, and sunsets; well-shot and quirky. Facebook, by comparison, is a desert -- a gridded Excel spreadsheet of relationship changes and status updates. When at last they added the possibility of creativity - of beauty and of ugliness - in the shape of Facebook Timeline banners, people leapt at it.

(Note: I'm obsessed with Instagram. I think it's brilliant. A demonstration that people in a social group, left together and given the right tools, develop deep skills and a rich culture.)

The mass creativity is what I really miss about MySpace. Check out Ze Frank talking about MySpace in 2006: sure the ugly pages were a joke, but ugliness was also a sign of a huge amount of experimentation, of personal expression, of wit and one-upmanship, of tribes and remixing. Culture in action!

The granddaddy of mass creative expression online was GeoCities, started in 1994 and now dead but archived. A giant metropolis of people speaking in HTML - the bricks and cement of the Web - learning from one another, improving their skills to speak better -- having conversations by creating and sharing. GeoCities is where the roots of present-day maker culture lie. And it was enabled by the very thing that makers are right now injecting into the manufacturing world with open source hardware: view source. View source! See how any webpage is constructed, then copy-and-paste parts of the HTML and use it yourself! What a great way to learn.

There's no "view source" on the iPad.

That smells like a gap in the market.

Productizing "view source"

We'll start with an Instagram clone for the iPhone and iPad. Instead of photos, users would share webpages written in the app itself. There's view source, of course.

What we'll do...

Pages would be a fixed width and height, and there would be a file-size limit.
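For the sake of argument, the acceptance check might look something like this -- the 64 KB cap and the 320x480 canvas are placeholder numbers I've made up, not a spec:

    MAX_BYTES = 64 * 1024                 # hypothetical file-size limit
    PAGE_WIDTH, PAGE_HEIGHT = 320, 480    # hypothetical fixed canvas every page renders into

    def accept_page(html):
        """Reject a submitted page that busts the size limit."""
        if len(html.encode("utf-8")) > MAX_BYTES:
            raise ValueError("Page too big -- keep it under 64 KB")
        # Width and height aren't checked here: they're enforced at render time,
        # because every page is displayed in the same fixed-size box.
        return html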

We'll also have a few features to invite expression:

There's Facebook integration for sharing. The hope is that people make little webpages with poems or aphorisms in place of writing status updates, and share those each day instead.

While I was writing this, Panic launched Coda 2, their code, HTML and CSS editor. It's remarkable for its UI -- do watch the tour, and look out for the smart styling menus: it doesn't just help you type the syntax to specify a colour, it presents you with a colour picker. So yeah, we'd try and license some of the Coda technology.

Instagram for webpages

This is Instagram meets GeoCities meets Diet Coda meets Twitter art meets About.me meets Tumblr meets social scrapbooking.

We'll know we're doing it right when half of the pages are ugly.

Money: Initially we'll find revenue from brands because people follow the brands they like.

The long-term plan is that this service invents, popularises and owns a new media type, in the same way that Twitter "owns" 140 character updates and Instagram "owns" square photos. You end up with a generation of people highly literate in HTML authorship and this new media type, and they associate this literacy - this superpower - with this particular service.

A year down the road we'll add form inputs, a super simple programming language for back-end processing only (not mixed with the HTML), and a custom micropayments widget. This is possible because it's a controlled viewing/authoring system. Bingo, we have an economy. I'm sure we can think of something to do with that.

09:43

Ze Frank's 2006 defence of ugly MySpace pages as markers of mass experimentation and the democratisation of design:

For a very long time, taste and artistic training have been things that only a small number of people have been able to develop. Only a few people could afford to participate in the production of many types of media. Raw materials like pigments were expensive; same with tools like printing presses; even as late as 1963 it cost Charles Peignot over $600,000 to create and cut a single font family.

The small number of people who had access to these tools and resources created rules about what was good taste or bad taste. These designers started giving each other awards and the rules they followed became even more specific. All sorts of stuff about grids and sizes and color combinations - lots of stuff that the consumers of this media never consciously noticed. Over the last 20 years, however, the cost of tools related to the authorship of media has plummeted. For very little money, anyone can create and distribute things like newsletters, or videos, or bad-ass tunes about "ugly."

Suddenly consumers are learning the language of these authorship tools. The fact that tons of people know names of fonts like Helvetica is weird! And when people start learning something new, they perceive the world around them differently. If you start learning how to play the guitar, suddenly the guitar stands out in all the music you listen to. For example, throughout most of the history of movies, the audience didn't really understand what a craft editing was. Now, as more and more people have access to things like iMovie, they begin to understand the manipulative power of editing. Watching reality TV almost becomes like a game as you try to second-guess how the editor is trying to manipulate you.

As people start learning and experimenting with these languages of authorship, they don't necessarily follow the rules of good taste. This scares the shit out of designers.

In Myspace, millions of people have opted out of pre-made templates that "work" in exchange for ugly. Ugly when compared to pre-existing notions of taste is a bummer. But ugly as a representation of mass experimentation and learning is pretty damn cool.

Regardless of what you might think, the actions you take to make your Myspace page ugly are pretty sophisticated. Over time as consumer-created media engulfs the other kind, it's possible that completely new norms develop around the notions of talent and artistic ability.

Here's the video.

09:54, Wednesday 16 May.

I was waiting for a bus the other day, and had a pretty good time standing there, woolgathering, contemplating the world, thinking about the various things I needed to do, etc.

And on the bus after I thought: I don't give myself enough time to stop and think.

And then I thought: I don't give myself enough time to exercise either, and what I did in that case was buy a Nike+ FuelBand and monitor how many steps I take each day. (There was a surprise there: Factoring out exercise, there's a huge variation in my regular everyday activity, a four-times difference between quiet days and active days although they feel much the same.)

So I bought a MindWave from Neurosky, which is a portable electroencephalography (EEG) headset with dry sensors. That is, it measures faint electrical activity on my head to read my brainwaves, and it's "dry" so I don't need to soak the sensors in saline or anything like that.

In theory it should be able to measure when I'm concentrating, when I'm excited/agitated, and when I'm relaxed.

It comes with a dongle to plug into my Mac so I can read the data from it using the MindWave developer tools. (In retrospect I should have bought the MindWave Mobile which uses Bluetooth and can also connect to the iPhone.)
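A minimal logging sketch, assuming the ThinkGear Connector utility is running locally and streaming JSON -- the port number and field names are how I remember the ThinkGear Socket Protocol, so treat them as assumptions and check against the developer docs:

    import json, socket, time

    HOST, PORT = "127.0.0.1", 13854   # ThinkGear Connector's default local port (assumption)

    sock = socket.create_connection((HOST, PORT))
    sock.sendall(b'{"enableRawOutput": false, "format": "Json"}')   # ask for parsed JSON, not raw samples

    buf = b""
    with open("mindwave.log", "a") as log:
        while True:
            buf += sock.recv(4096)
            *messages, buf = buf.split(b"\r")   # messages are carriage-return delimited, as I recall
            for message in messages:
                try:
                    reading = json.loads(message)
                except ValueError:
                    continue
                esense = reading.get("eSense")
                if esense:   # attention/meditation values arrive roughly once a second
                    log.write("%f\t%s\t%s\n" % (time.time(),
                                                esense.get("attention"),
                                                esense.get("meditation")))
                    log.flush()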

It's a shame the MindWave doesn't store data itself -- if I want to get long-term readings then I will have to keep it paired with my Mac and store and analyse the data there.

Why? Because I'd like to wear this the whole time, and become more mindful of how much time - and for how long - I'm concentrating, reflecting, etc. And over time, being mindful of this, could I see whether I'm happier/more productive/more creative when I spend (say) regular time each day reflecting, or long periods of time on a single day concentrating, and so on.

Companies I would start if I wasn't doing this one:

The models currently in this space are exemplified by two companies, both based on Neurosky's technology:

Neurosky themselves have an app store.

But I think these companies are missing a trick. I'd like to introduce focus, good design, and vertical integration, and take lessons from successes like Nike+ and Foursquare.

I would love to take the Neurosky MindWave technology, have it store data for later syncing as a Bluetooth Smart Device, make it look great, wrap a FuelBand self-awareness and goals iPhone app around it, build in a mood tracking feature for feedback - maybe correlate it with email and calendar/todo list activity, Twitter/Facebook updates (for another mood datapoint), and Foursquare (for location) - and sell it as a headband.

You would share the time you'd spent reflecting each day on Facebook. There would be challenges, and self-awareness. I might bootstrap a distributed network of gym instructors for meditation (we'd have a marketplace for subscription yogis).

Kind of a cross between Brain Age (or Brain Training depending on your territory), FuelBand, product sales plus subscription services, quantified self, and mental well-being.

The really interesting stuff would happen when we start using machine learning across vast amounts of data from tens of thousands of individuals, all submitting brain wave and activity/mood data. We'd data-mine like crazy. What would we learn? It would be a little like 23andme, the data-mining + pathologies + gene sequencing company, and a little like Knewton with their personalised, adaptive learning. Maybe we would end up saying things like:

You know you need to be on top form in 5 days? We know from past behaviour and by looking at people like you that you need to spend 30 minutes more per day in uninterrupted quiet reflection in order to achieve this. Here's your goal. Go!
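A toy version of that mining, with made-up numbers, just to show the shape of it:

    # Does time spent in quiet reflection on one day correlate with
    # self-reported mood the next? All data here is hypothetical.
    import statistics

    reflection_minutes = [5, 30, 10, 45, 0, 25, 60]    # per day, made up
    next_day_mood      = [4, 7, 5, 8, 3, 6, 9]         # 1-10 self-report, made up

    r = statistics.correlation(reflection_minutes, next_day_mood)   # Pearson's r
    print(round(r, 2))   # at scale you'd run this across thousands of users, not one week of me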

There's not quite a business here, not at launch... but after you find out what combinations of which mental states over a day promote what kind of behaviours, and you can help people be mindful of that? There's something really big there, I'm sure.

I wish I had more hours in the day.

Right now

I am using my MindWave and playing Blink/zone to explode fireworks whenever I blink. When I don't blink they don't explode, when I do blink they do. It works surprisingly well. It's a weird experience to have something I regard as so interior picked up by a computer.
