
The BlackBerry PlayBook is now the first tablet to gain FIPS certification, which means that it meets US government standards for data security and encryption.  The PlayBook also won Best in Show and Best of FOSE in handheld devices at the federal government IT conference in Washington DC this past week.

This certification and these awards certainly reinforce RIM’s position that the PlayBook is the first “professional grade” tablet on the market, and may be a good indicator of how the market will evolve – Android and iPad devices for consumers, and the PlayBook for professionals.   Now, what will Avaya and Cisco do?  Both companies have announced business-focused tablets as well, but built on Android.


Like a lot of other folks, on Wednesday I was playing with the newly launched video chat capability on Facebook.  Done in partnership with Skype, it brings video chat to the masses via the 750 million Facebook users out there.

First I chatted with Larry Lisser in San Francisco.  Not a good experience.  Grainy, laggy video, and bad audio synch problems.  If this is what Facebook video chat is all about, I thought to myself, it’s going to be a failure.  Next I talked with Dan York and his two year old daughter Cassie.  Great experience, and entertaining as all get out due to young Cassie’s antics on the screen.  Don’t tell Mrs. Saunders, but that little flirt was blowing me kisses the whole time!  And the video was wonderful and in synch.  Clearly the quality problems with Larry were simply network related.  And then I chatted with Jim Courtney, where we quickly started digging through the nitty gritty of the user experience.

What do I love about Facebook video chat?

  1. It’s a little thing, but the window pops up on screen directly below my center-mounted web cam.  It forces me to look into the camera when I’m chatting, which means that I’m meeting the other person’s eye, rather than looking at the screen.
  2. I can leave a video-mail message if the other person isn’t available.  Why isn’t this in the standard Skype application?
  3. It’s SUPER easy to set up and use. For many people, Skype has an intimidating UI with a lot of options.  Facebook video chat is pure simplicity. I could see my wife, or my brother-in-law, both of whom have resisted Skype until now, using this.

It’s probably not going to steal away today’s Skype user.  The experience isn’t as rich, quality isn’t as high, and you have to be logged into Facebook to receive calls.  Instead, Facebook video chat is a great complement to Skype.

Bottom line: I don’t agree with Om Malik that this is a one-sided deal in Facebook’s favour.  Like Andy Abramson, I think this is a good thing for Skype and for Facebook.  Facebook gets a feature that will allow it to compete against Google+, and Skype gains an audience that they might not have otherwise had access to.  It won’t be long before there are a billion video chatters on the planet, all using Skype technology, and that’s what Skype’s management wants and needs.


According to the Globe and Mail’s Hugh Thompson, next month will mark the 10th anniversary of the Personal Video Recorder, or PVR, in Canada.  And what a boring and dull ten-year-old our PVR has become.  Almost none of the promise of the first PVRs, released in 1999, has ever been realized here.  Instead, our PVR has become little more than a glorified video cassette recorder.

Yes, the satellite and cable industry trumpets the advent of “whole home” networked PVRs.  What a yawner.  ReplayTV had this ten years ago.  In fact, the current crop of PVRs is missing a whole host of features that used to be commonplace!  How about:

  • In-video content search, pioneered by Ottawa’s own Televitesse.  By scanning the caption stream, Televitesse could find specific spoken words in a program, and jump the viewer to that scene.  It was perfect for news hounds.
  • Search and record by cast member, subject, genre, or review ratings.  All delivered by ReplayTV over a decade ago.   My favorite feature of ReplayTV?  Once you had created the search, it would simply record anything on any channel at any time that matched the search.
  • TiVo Season Pass – record an entire series, every week, even if the time or the day or the channel changes.

The tenth anniversary of the PVR in Canada is a legacy of mediocrity.  The television companies – Bell, Rogers, Shaw – apparently don’t have the imagination or the desire to improve the viewing experience for the user.

Is it any wonder that so many people are turning to the web, instead of television, for video?


eComm 2011 doesn’t disappoint

Each year around 300 people gather together for three days in San Francisco at an invitation only event to plot the future of communications. The event is Lee Dryburgh’s eComm, the Emerging Communications Conference. You can think of it as TED, for the communications industry. Topics have ranged from Voice over IP, to the Internet of Things, mobility, sensor networks, user experience design, augmented reality and social networks.


I attended this year’s eComm last week, and it didn’t disappoint.

Monday morning kicked off with a series of presentations on how today’s markets are evolving. The best of the bunch was Ovum Chief Telecoms Analyst Jan Dawson’s presentation titled Telecoms in 2020: A Vision of the Future. He made the case for the emergence of two categories of carriers: SMART players, where SMART stands for ‘Services, Management, Applications, Relationships and Technology’, and LEAN operators, where LEAN stands for ‘Low-cost Enablers of Agnostic Networks’. You can think of these as being similar to today’s retail and wholesale telecom markets. Dawson showed how carriers could build good businesses in either market, a departure from the common viewpoint that carriers must build value-added services rather than be so-called “bit pipes”.

Monday afternoon, another stand-out presenter was Raj Singh from SRI International. Singh’s research focused on enterprise mobile applications, showing convincingly that enterprise is ready to buy narrowly construed mobile applications in virtually every part of business, from HR to accounting, sales, manufacturing and more. This is a market which has been dramatically overlooked in the rush to build consumer smartphone applications, yet may hold more promise.

HP’s Dr. Peter Hartwell showed prototype sensors orders of magnitude more sensitive than the motion sensors in today’s mobile phones. Hartwell imagines a world in which highly integrated sensors, capable of detecting light, motion, sound, and location are embedded into literally everything. Using a prototype he demonstrated how a single device could be used to monitor breathing, heart rate, location, and velocity when attached to a person, or an entire building when attached to a single piece of infrastructure such as a water pipe.

Tuesday morning was dominated by presentations around Voice 3.0, the Voice Web, including a panel at the end of the morning. HarQen CEO Kelly Fitzsimmons presented a wide-ranging series of scenarios on how to extract relevant information from voice conversations, Vox.io’s Tomaz Stolfa showed his company’s web-based telephone services, and Voxeo’s Jose de Castro gave an update on the latest Web RTC / RTC Web efforts to embed voice communications directly into the web using open standards. De Castro showed how to create a telephone call from a web page using just five lines of JavaScript, and according to de Castro the next releases of the Chromium browser will support RTC Web.

Martin Geddes also demonstrated an early prototype he and Dean Elwood have been working on, which allows the creation of voice “objects”. They propose encapsulating logic within a voice stream – a voice mail message, for example, with actions associated with it, similar to an HTML email message. A restaurant might leave you a voice mail message about a reservation, asking you to press 1 to confirm, or 2 to cancel.

HarQen’s security industry heritage was on display Tuesday afternoon, as they launched their Symposia product. Symposia creates automatic synopses from web conferences by following user actions, text communications, and tagging events in order to allow meaningful search of the entire event – voice, presentation and text chat.

The rest of eComm promised as much as the first day and a half as it continued with presentations on augmented reality, open source voice, user experience and more. I was forced to leave early for family reasons, and was disappointed to miss Berkeley’s Alex Bayen, Skype’s Jonathan Christensen, 2600Hz’s Darren Schreiber, the always fascinating Dean Bubley and the closing talk by Richard Thieme.

eComm is unique in the communications industry in the extent to which it focuses on the future of communications technology. You won’t generate leads or sales from this conference, but you will walk away energized by the possibilities, and possibly with one or two great product ideas of your own.

I can hardly wait for next year’s event. In the meantime, there’s always the eComm blog, with its repository of presentations from years past.


Open Standards

I’m at eComm, the Emerging Communications Conference, for the next couple of days. Over dinner last night a heated debate erupted over open standards in telephony, the genesis of which was my Voice 3.0 piece posted on Friday. I didn’t explicitly state that open standards are important to the Voice 3.0 vision. Dan York took me to task over the same issue last Friday, and then we discussed it again on the VUC call this morning.

It wasn’t an omission on my part.

In a business that depends on network effects, as the communications business does, interoperability is critical. How we get there, whether it be through a standards body, or via a de facto standard as Skype has become, isn’t that important. What’s important is that there be sufficient openness for an ecosystem to flourish.

In platform markets those that call for Open Standards are typically the number two or three players in the market, seeking to unseat a dominant incumbent. In other words, the adoption of a standards body sanctioned standard is a competitive strategy, and not an inherent “goodness”.

The dominant player can do three things when faced with an open standards competitor: compete harder, adopt the standard, or find a standards body willing to anoint their proprietary technology as a standard. It would not surprise me in the least to see Skype, for example, choose any of these strategies in the future.

And that’s the reason I left the adoption of an open standard out of the Voice 3.0 manifesto.


In October 2005, I published The Voice 2.0 Manifesto. The Manifesto’s theme was the marriage of voice to the web, and all of the accompanying technological and business shifts that might occur as a result. Five and a half years later, some – indeed I would argue many – of the predictions made then have come true, or at least partially true. Voice minutes cost next to nothing, voice applications are creating real value beyond just carriage in some segments, and open programmable architectures for voice services have emerged.

A lot has changed in five years, however. Skype, which was an early stage start-up with no revenue model, is now the dominant supplier of international long distance calling minutes worldwide. Smartphones finally exploded onto the scene, and in the process dashed the old mobile models to pieces. And the internet has morphed into the “semantic web”, as device-to-device communication increasingly becomes as important as device-to-human.

Voice 2.0 was about how the internet intersected with the voice network. Voice 3.0 moves beyond Voice 2.0, as the voice network becomes the Voice Web – the amalgam of voice, and the internet and its intersection with real world problems to create new forms of value, useful in everyday life. By extension, Voice 3.0 is therefore also about how voice applications incorporate the best of the internet to increase the options and opportunities for developers, and ultimately create new value for users, corporations, and investors.

Like Voice 2.0, Voice 3.0 is a user and developer focused view of the world. It’s “all about me”, about how the focus is shifting from a supply driven world managed by some of the largest corporations in the world, the telephone networks, to a demand driven world managed by users of those networks. We’ve already seen dramatic evidence of this momentum in the mobile world, as customers flocked to devices supplied by Apple and Android partners, primarily because of their ability to be infinitely customized via mobile applications to the customers’ needs and desires. In the Voice 3.0 world, we see the same dynamic as the large suppliers of voice services are beginning to recognize that innovation comes from users who find and adapt tools to new uses, and from the developers who build applications to “productize” those new uses. Just as in mobile, the winners will be those who successfully unlock the demand driven model.

Defining Themes

Let’s start with the ubiquity of voice, or how voice gets embedded into the fabric of ordinary digital life. As Martin Geddes points out, we humans have really only got three modalities of communication available to us – gesture, voice and the written word. Despite the rhetoric around video, voice, which represents at least 1/3 of our communication palette, isn’t going to disappear at any time in the near future.

In fact, voice is being adapted to many new uses. For example, my own children communicate with their friends more often on the Xbox network using spoken words than on the telephone. And, in fact, when playing multi-person internet games on a PC, they will turn to Skype as a substitute for Xbox voice.

There are two billion voice “input terminals” in existence on the planet today, including telephones and computers. Many people now suggest that an explosion of voice enabled devices is imminent as all kinds of devices become voice enabled. Just as the internet has become embedded in billions of devices, the Voice Web will also be.

So what does the world look like when 10 or 15 billion devices are now voice enabled? When two or three voice enabled devices exist for every man, woman and child on the planet?

Two obvious possible outcomes are that:

  1. The personal voice terminal, or the telephone, disappears. There will be no need to hold a microphone to our faces when we can simply speak to the ether, and make communications connections.
  2. Voice control as user interface becomes much more prevalent. When you can speak to your automobile, television, computer or home appliances, then interaction models change dramatically.

Perhaps the largest promise of the ubiquity of the Voice Web is that the over 1.5B people who are functionally illiterate can suddenly get off the sidelines and begin actively participating in the emerging commerce model of the voice enabled web. Until now, much of this population has been unable to participate online. In India, for example, a great deal of commerce is conducted by illiterate women en masse via dumb phones. This type of commerce is equivalent to the dark web today — important, but off-limits. With the Voice Web, phones don’t need to be smartphones to connect people to the larger network. Global economics will change when the participation of those previously sidelined becomes active and at significant scale.

The second big theme of Voice 3.0 is the programmable Voice Web. Programmability has been a topic in voice circles for years, beginning with old-fashioned computer-aided telephony right through to today’s hot topic, Communications Enabled Business Processes.

Today’s state-of-the-art is to marry a voice channel to a processing channel to create new kinds of applications. The earliest interactive voice response systems simply allowed individuals to press keys on the telephone keypad to alter a call path – press 1 for sales, 2 for support, and so on. More advanced systems today can automate different kinds of business processes, by separating business logic from the voice channel itself. For example, using analytics tools to track incoming customer response to advertising on voice channels as Ifbyphone does, performing database lookups in response to incoming caller ID information for customer relationship management purposes, or creating outbound reminder and polling systems.
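
The press-1-for-sales pattern above boils down to business logic that is separable from the voice channel. A minimal sketch of that separation, in Python; the menu entries and function names are illustrative, not from any particular IVR product:

```python
# A minimal sketch of keypad-driven call routing, with the business
# logic (the menu table) kept apart from the voice channel itself.
MENU = {
    "1": "sales",
    "2": "support",
}

def route_call(digit: str) -> str:
    """Return the destination for a keypad press; default to an operator."""
    return MENU.get(digit, "operator")

print(route_call("1"))  # sales
print(route_call("9"))  # operator
```

Because the routing table is plain data, the same pattern extends naturally to database lookups on caller ID or outbound polling scripts, as described above.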

In all these examples, the voice channel is treated as a relatively linear audio object which is only loosely coupled to other systems. The user communicates with a standalone IVR that exists solely for the purpose of providing the application. It’s as if today’s state-of-the-art voice systems are the walled-garden AOLs and MSNs of yesteryear, and the world wide web hasn’t yet happened. The result has been successful unified communications systems which give a great experience in totality, but the learning curve is high if customers choose to switch service providers.

The internet, however, grew well beyond those old systems. It evolved from the hyper-linked world of the early web to container objects supporting media and programmable objects, with sophisticated tracking systems, scripting, offline execution, mobility and more. The web went from being a hyperlinked text library, to the largest programmable application on the planet, fuelled by open standards, lightweight communications infrastructure, standards which allowed content to be separated from logic and presentation, and an explosion of end-user devices, including today’s mobile devices.

Voice is on the cusp of the same revolution – a revolution that will be defined by letting the customer define the business logic of the application, not the service provider. Imagine a “hyper-talk” protocol, where the voice servers of today evolve to become more akin to a web server – a hyper-linking voice application with the ability to autonomously download and execute other data and voice objects. In that world, for example:

  • Voice mail wouldn’t be the linear audio message it is today. It would be more like HTML email, with live links and buttons embedded. Made a reservation at a restaurant? A day before, the restaurant’s reminder system might leave a voice object in your mail box: “Press 1 to confirm your reservation, 2 to cancel or 3 to speak with an operator”. Your voice mail box would know how to correctly execute the scripts embedded in that voice object, process the button presses, and inform the restaurant’s reservation system of your responses.
  • Caller ID spoofing would be a thing of the past. If your bank phones, an authenticated caller ID system, akin to HTTPS on the web, would give you the confidence to deal with them directly.
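
As a thought experiment, the restaurant reminder could be modeled as an audio payload carrying embedded actions. Everything in this sketch – the class, its fields, and the action labels – is hypothetical, since no such “hyper-talk” protocol exists yet:

```python
from dataclasses import dataclass, field

@dataclass
class VoiceObject:
    """A hypothetical voice object: an audio message carrying embedded
    actions, much like buttons and links in an HTML email."""
    audio_url: str
    actions: dict = field(default_factory=dict)  # keypad digit -> action name

    def execute(self, digit: str) -> str:
        # The receiving mailbox would run the embedded script and report
        # the chosen action back to the sender's reservation system.
        return self.actions.get(digit, "unrecognized")

reminder = VoiceObject(
    audio_url="https://example.com/reservation-reminder.wav",
    actions={"1": "confirm", "2": "cancel", "3": "operator"},
)
print(reminder.execute("1"))  # confirm
```

The point of the design is that the logic travels with the message: any mailbox that understands voice objects can execute it, just as any browser can render an HTML email.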

Not only is this world coming, it’s required if voice and the phone are to stay relevant as a communications medium. Phone is so far behind today that it’s losing traction for business, in favour of the impersonal web where identity and programmability already exist. And yet what could be more important in business than personal contact, and more personal than a voice conversation?

Ultimately, we’ll build systems where communications result in artefacts that can be consumed by services that have not been pre-specified. Think, for example, of the role that RSS played in the syndication of content, and imagine a similar world for voice. Tool chains will be created that will allow people to participate in building these services, and an explosion of new applications to consume these voice artefacts will be built.

And that leads directly to the third big theme of Voice 3.0, which is the arrival of the semantic voice network. In the world of the internet, massive businesses have been built around the collection and management of huge databases of content – think of businesses like Amazon and Google for example. On the Voice Web, these databases will exist, but more likely as the proprietary property of enterprise. In their quest for accountable, measurable and actionable IT assets, management will turn an eye to voice—the last opaque data object in the organization.

We’ll see the advent of:

  • Voice as an Asset — The combination of voice analytics, used to help organize and categorize voice content, with metadata surrounding the original voice object to assist with relevancy, weighting, and prioritization.  We’re already seeing this in the post-Enron era, as corporations struggle with making voice an auditable asset, and these efforts will only become more sophisticated over time.
  • Voice as Relevant — Voice content will be findable. Just as Google is a query mechanism that sources and displays relevant text (mostly) content, voice will have its search engine unlocking the value of those stored voice assets throughout enterprise.
  • Voice in Context — Voice content will live side by side with other data types within enterprise transactional systems like CRM, and ERP. Voice will be used to provide the emotional context around the other data objects to help provide richness to the inquiry. Imagine searchable sales calls in the CRM next to the account information and text logs for a customer.

As voice becomes a “big data” asset, databases of conversations will be built, cut into snippets, decoded, analyzed, and added to the enterprise knowledge base. The Voice 3.0 tool chain will provide APIs that give access to these snippets, both privately and on the web. And ultimately, the content, context, and meaning of audio conversations will become a key input into business processes.

Voice 3.0 impact

Accompanying these three themes will be a radical shift in business model – from a supply constrained model to a demand driven model. Customers will want to be able to choose a dial tone provider and applications independently. Innovation will accelerate as customers take these new services and use them in ways never anticipated.

Developer ecosystems will form a “virtuous cycle”, a self-reinforcing market that gathers momentum as more adopters flock to the platform. Platforms will try to recruit applications developers, customers will look to the ability of platforms to deliver the third party applications that they need and want as a key differentiator in the market, and as more customers adopt the platform, more developers will also, seeing opportunity.

Some applications will be pre-built, some custom built, and some modular. As a result we’ll see a thriving market for suppliers like today’s Ifbyphone, Twilio, and Voxeo. You can think of Ifbyphone as the Ikea of voice, supplying modular components, and Twilio as the Home Depot of voice, supplying building blocks and materials. Both are important in the market.  Recognizing the broad range of needs in the market, Voxeo plays both ends of the spectrum with services ranging from the self-service Tropo and Imified platforms, all the way to custom built applications with 100% uptime SLAs based on their Prophecy and Prism platforms.

The voice service provider of tomorrow will probably be much more like today’s SaaS providers – a hosted Voice as a Service business. “VaaS” will deliver a core managed and hosted voice service, decoupled from both the context of use and from the internet service provider. The service package will include not just voice, but detailed statistics, group management controls, and more. And it will bristle with APIs that will enable an ecosystem of other players to be built around it.

These in turn will unlock the value of stored voice assets, allowing the growth of voice as an organizational asset. Iotum, for example, has stored every recording made by users of its Calliflower system since it was first introduced. Newcomer HarQen takes this to the next level, focusing exclusively on what to do with voice data once a call terminates. Both companies stand in contrast to metered transport models, yet complement them, and both recognize that there is value in those voice assets beyond the termination fee charged during their creation. For example, HarQen has built a database of over 100,000 phone based job interviews that is not only valuable today, but as that database grows into the millions, may transform recruiting tomorrow. HarQen’s business model focuses on monetizing voice as a rich media asset that can be archived, which is how they’re able to turn a $.10 phone call into an $11.57 voice asset that customers are lining up to pay for.

Winners and Losers

For some time, Voice 3.0 is going to be a messy hybrid of delivery technologies. The Internet will be “reliably unreliable”. Telcos will stake out a corner. The “under the floor” players like Ericsson will take an increasingly large role in creating and managing voice-ready delivery networks. Some of the cloud communications and commerce infrastructure players like Apple, Google and Amazon may reverse themselves into owning delivery assets to just “make it work”.

Don’t be confused by the messiness.

Today’s large voice network suppliers – the companies that have always relied on constraining supply as their business model – must change or fail in the wake of companies focused on meeting demand. Ironically, the network asset that many telcos market as their chief advantage may just be their largest liability. The technical rigidity of large telco infrastructure (and sclerotic business models) may just lock the incumbents out of the game, as they struggle to keep up with the massive proliferation of 10 billion non-telco voice nodes on networks. Moreover, the temporary solution of bridging to and from the PSTN actually weakens their ability to compete by artificially enlarging the network effect of upstarts like Skype.

Network effects in the Voice 3.0 world become even more important. Will an open standard emerge? Although many die-hard networking folks would prefer that scenario, it’s hard to say. We may find ourselves in a world where a dominant proprietary player like Skype controls the platform, as a result of winning the race to build thriving developer ecosystems, and the applications that customers use and want.

And new opportunities will explode onto the scene as businesses perhaps not even imagined today will discover how to turn a $.10 phone call, into a $10 voice asset.

Acknowledgements: Many people in our industry contributed their time and ideas to this posting over the last few weeks. I’d like to publicly thank Andy Abramson, Jonathan Christensen, Evan Cooke, Dean Elwood, Kelly Fitzsimmons, Martin Geddes, Thomas Howe, Erik Lagerway, Irv Shapiro, Dan York, and most of all Lee Dryburgh, who provided the initial impetus.


People who know me know that I’m a photo geek.  I love great photos, and I like to flatter myself that occasionally I might even take one or two.

Every photographer has encountered the problem that Lytro (unveiled yesterday) solves – an incorrectly focused image.   Lytro’s innovation is to place an array of “micro-lenses” in front of the main sensor in the camera, capturing the “light field” of the image, rather than just the image itself.  Practically, that means:

  • The focal point and depth of field of the image can be changed after the picture is taken.  For photographers, it’s a mind blowing piece of technology, as it does away with the entire process of choosing aperture and focal point at the time of shooting.  Now one can simply capture the image, and choose the focal point later.   This has applications in portrait, action and macro photography.
  • Better images can be captured in low light, without resorting to the use of a flash.  This is because the micro-lens array uses wide aperture settings, reducing image noise common in low light settings.   Landscape, night time, and indoor photography can all benefit.

Check it out. In the image below, click each of the scuba tanks, and then the diver in the background to see the focal point of the photograph change.


Lytro.com / Jason Bradley

Founder Ren Ng’s PhD dissertation won the 2007 ACM Doctoral Dissertation Award, as well as Stanford University’s Arthur Samuel Award for Best PhD dissertation.  The math and the science behind this technology make for a fascinating read.

Ng’s thesis highlights the one inherent limitation in the micro-lens approach as well – micro-lens photography uses enormously more photographic sensor resolution than conventional photography in order to achieve the same image size.  In Ng’s dissertation, the prototype camera was capable of producing images  just 296 x 296 pixels in size.  He writes:

the ideal photosensor for light field photography is one with a very large number (e.g. 100 mp) of small pixels, in order to match the spatial resolution of conventional cameras while providing extra directional resolution.
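
Ng’s point can be checked with back-of-the-envelope arithmetic. Assume each microlens captures an N × N grid of directional samples, so every output pixel consumes N² sensor pixels; the 14 × 14 figure below is my assumption, roughly consistent with the prototype, not a number from the quote:

```python
# Required sensor resolution is the output resolution multiplied by
# the directional sampling budget of the microlens array.
def sensor_pixels_needed(output_w: int, output_h: int, n: int) -> int:
    return output_w * output_h * n * n

# A 296 x 296 output from ~14 x 14 directional samples per microlens
# implies a sensor of roughly 17 megapixels.
print(sensor_pixels_needed(296, 296, 14) / 1e6)    # ~17.2

# Matching even a modest 2-megapixel conventional image at the same
# directional resolution would take a sensor of hundreds of megapixels.
print(sensor_pixels_needed(1600, 1200, 14) / 1e6)  # ~376.3
```

That multiplier is why Ng calls for sensors in the 100 MP class, and why the published samples look soft.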

You can see this in the image above, and the other sample images that Lytro has published on their site.  Simply use the control on the bottom right side of the image to zoom the image to full screen, and then pick a background object as the focal point.  The images have a soft quality about them, due to the lack of pixel resolution.

Lytro is bringing this technology to market, later this year, in a consumer point and shoot format. Their concept is that ordinary people will be able to simply shoot a photo and correct it later, producing results akin to those of a high end SLR in the hands of a professional. And because most consumer photographs are shared online, the resolution limitations of the technology shouldn’t be as important.

I’m a skeptic, I’m afraid.  I think Lytro’s market choice is a pragmatic attempt to fit an early stage technology to a market. Consumers, however, mostly don’t care if their photographs aren’t perfect.  Most consumers don’t edit, color correct, balance light or contrast, etc., despite the fact that there is plenty of good and inexpensive photo editing software available.  Consumers point, shoot and upload.

I think the biggest market for Lytro’s technology will be professionals – photojournalists, commercial photographers, artists, the scientific community and others for whom the requirement to have correctly composed and focused images is of paramount importance. Professionals arrive at a shoot with an array of lenses and camera bodies in order to manage the problems that Lytro eliminates. They then shoot hundreds of photographs knowing that 90% of the photographs they take will be unusable. Lytro could potentially save these folks thousands of dollars in equipment costs and time, and allow them to take many more usable photographs in a single session.  However, until 100MP and 200MP sensors are available at affordable price points, these applications will have to wait.

If Moore’s Law is to be believed, we should only have to wait another 3 to 5 years for those sensors to become available.


Yesterday the Seesmic team blindsided RIM with news that they would no longer develop Seesmic for BlackBerry.  They were very public about it, and the only explanation offered was they would “discontinue support for Blackberry in order to focus development efforts on our most popular mobile platforms: Android, iOS, and Windows 7.”  The press seized on this statement as evidence that developers are abandoning the BlackBerry platform.

Frankly, it’s lazy reporting.  Here’s why:

  1. RIM devices ship with a Twitter client built in already.  And it’s actually a pretty good client.  Personally, I wasn’t even aware that Seesmic was available for BlackBerry, as I have never even bothered to search for another Twitter client for my Torch.
  2. On the basis of reviews written in BlackBerry App World, Seesmic is a distant third in the universe of BlackBerry Twitter clients.  RIM’s own client has over 14,000 reviews.  UberSocial, which is a feature rich location aware Twitter client, has over 4,000 reviews.  And Seesmic?  A whopping 518 reviews.

In other words, perhaps 3% of Blackberry Twitter users preferred Seesmic over other Twitter solutions for Blackberry.
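That figure is a back-of-envelope calculation from the review counts quoted above, using reviews as a rough proxy for user share:

```python
# Review counts from Blackberry App World, as quoted above.
reviews = {"RIM's client": 14_000, "UberSocial": 4_000, "Seesmic": 518}

# Seesmic's slice of all reviews across the three clients.
seesmic_share = reviews["Seesmic"] / sum(reviews.values())
print(f"Seesmic's share: {seesmic_share:.1%}")  # → Seesmic's share: 2.8%
```

Reviews are an imperfect proxy for installed base, of course, but the gap is wide enough that the conclusion holds either way.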

It’s pretty clear that Seesmic is having their ass handed to them by their competitors. As Blackberry Cool points out, there are millions of Blackberrys in use around the world.  The fact that Seesmic cannot build a business on this platform is a reflection on Seesmic’s business model, and Seesmic’s application, not the viability of Blackberry as a development platform.

Seesmic CEO Loic le Meur owes the RIM team an apology, in my opinion.  Seesmic is a failure on Blackberry, but he has chosen to let RIM take the blame.  That’s just cowardly.

And my friends in the press?  You put your own spin on Seesmic’s statements, and became a virtual lynch mob.  You were either stupid, or willing dupes – neither is pretty.  Shame on you.

{ 3 comments }

Bre.ad, the new link shortener used by Lady Gaga, is a bad idea.  If you haven’t seen it yet, the quick recap of bre.ad is as follows:

  1. Shorten links by simply navigating to http://bre.ad/{your URL}.  It’s a little simpler than visiting the bit.ly or ow.ly site.
  2. When visitors click on your bre.ad link, a “toast” created by you is shown for 5 seconds before the target page loads.  You can create an inventory of toasts that promote your own favorite sites or URLs, so each time you share a link, you advertise another site.
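Mechanically, there isn’t much to either step.  Here’s a minimal sketch of how a prefix-style shortener and a timed “toast” interstitial work – the domain is real, but the function names are mine, and this is an illustration of the pattern, not bre.ad’s actual code:

```python
def bread_link(url: str) -> str:
    """Build the 'navigate-to-shorten' form of a bre.ad link by
    prefixing the shortener's domain, as described above.  The real
    service then issues a short code of its own for sharing."""
    return "http://bre.ad/" + url

def toast_page(promo_url: str, target_url: str, delay_s: int = 5) -> str:
    """A toast is just a timed interstitial: render the promo content,
    then redirect the browser to the real target after a delay."""
    return (f'<meta http-equiv="refresh" content="{delay_s};url={target_url}">'
            f'<iframe src="{promo_url}"></iframe>')

print(bread_link("http://example.com/some/page"))
# → http://bre.ad/http://example.com/some/page
```

The interstitial-redirect pattern is the important part: whoever controls that 5-second page controls what every visitor sees first.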

Try it by clicking this bre.ad generated short link.  It will show you a toast I created from my about.me profile, and then link you back to the home page for this site: http://bre.ad/082tnk

So why don’t I like it?

Bre.ad will likely sell part of the inventory of “toasts” that they maintain to third parties. Their privacy policy explicitly says “Bread may use some of the information collected in order to customize the advertisements to your interests and preferences.” They also have a liberal cookie policy, collecting information about who clicks on each and every link (not just toasts) generated by their system.  Their terms of service require you to register for the site, and give them the demographic info they need to target you.

Bre.ad hasn’t said what they intend to do with this information.  However, the information they’re collecting, combined with their technology, enables a plague of interstitial ads to be generated during ordinary content navigation.

I’m not sure bre.ad is going to succeed in any case.  Social sharing is now the norm.  What web site doesn’t already have a “tweet this” or “share on facebook” button on each page? URL shorteners are now so ubiquitous that creating a bre.ad link is actually more work than the alternatives.  In order to succeed, bre.ad needs to become embedded in those sites, rather than focus on consumers.

Bre.ad:  a half-baked idea? burnt toast? (rim-shot, please!)

CLARIFICATION: I received a note from Alan Chan, Bre.ad founder and CEO, clarifying exactly the relationship that Lady Gaga has with Bre.ad.  Alan’s note read, in part “Both Lady Gaga and 50 Cent are early adopters who have been known to send out Bre.ad links, but neither individual is an investor, endorser, supporter or a backer of our company.”  Thanks for making that clear, Alan.

{ 1 comment }

Living with Playbook, two months later

When RIM launched the BlackBerry Playbook in mid-April, I grabbed one and started using it.  You might have noticed that I didn’t write about it at the time.  Like other writers, my initial take on the Playbook was that it had a lot of promise but wasn’t ready for prime time.  Some websites didn’t work, there weren’t many apps, and the device itself was a little buggy.  I wanted to like it, though, and set about figuring out whether I could put my iPad aside and use the Playbook instead.

Two months and three software updates later, Playbook has dramatically improved.

  1. Battery life, which was typically less than a day when Playbook first launched, is now much improved.
  2. Applications are coming at a steady pace, and several key applications that I depend on when using my iPad are now available.
  3. The biggest breakthrough was a native Dropbox client named Bluebox.   Now I can access all of my files from Playbook.
  4. All of the major newspapers I read on the iPad now have equivalent editions on Playbook.   Interestingly, they’ve all chosen to omit the social sharing buttons that are present on the iPad.  That throws a wrench into my early morning routine – reading the paper on my tablet and tweeting interesting stories.  No Playbook equivalent of the iPad’s FlipBoard exists yet, but several capable news aggregators (like News360) and RSS readers are available.
  5. One of the biggest criticisms of Playbook, when it launched, was the lack of a native email client.  If you didn’t have a Blackberry to use their Blackberry Bridge application with, then you were out of luck.  The same is still true.  However, for Blackberry users, the Bridge application provides capable access to email, contacts, calendar, and messenger.  The email and contacts experience is very similar to that on iPad, and using Blackberry Messenger on Playbook is light years better than the native experience on a Blackberry device.

The QNX operating system, which is the foundation of Playbook, will also be coming to BlackBerry handsets.   It will be a profound shift, and it can’t happen soon enough.  Playbook is already a better device than the BlackBerry that is its companion, and it’s only going to get better.

{ 1 comment }