The two big drivers of iotum’s functionality are context and relevance. Context is the set of circumstances in which an event occurs. Relevance is pertinence to the matter at hand. Today, iotum is capable of considering contextual inputs such as calendar (are you in a meeting, what time of day is it, and so on), location, and presence. It evaluates one kind of event — a telephone call — and based on context, predicts relevance.
So far, so good.
I had a very challenging meeting with a couple of the smartest people I know in the industry, yesterday. Among the many questions posed was the simple question of how we could actually know, with certainty, how relevant a particular call is to each and every individual. The short answer is that we can’t. Human beings are simply unpredictable. But what we can do is use contextual clues to provide richer and more varied sources of input to the relevance engine, allowing it to better situate individuals. And, based on the heuristics in the system, which imitate a human assistant, we can help it make better decisions based on that richer and more varied contextual information.
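To make the idea concrete, here is a minimal sketch of what a heuristic relevance engine of this kind might look like. Everything here is an assumption for illustration: the field names, the weights, and the scoring rule are invented, not iotum's actual design, which the post does not disclose.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Hypothetical contextual inputs of the kind described above."""
    in_meeting: bool       # from the calendar
    hour_of_day: int       # 0-23, local time
    caller_is_family: bool # a simple stand-in for relationship data
    shares_interest: bool  # e.g. the photography example later in this post

def relevance_score(ctx: Context) -> float:
    """Toy weighted heuristic: combine contextual clues into a 0-1 score.

    The weights are made up for illustration; a real engine would tune
    them against observed behaviour, the way a human assistant learns.
    """
    score = 0.5                       # neutral prior
    if ctx.in_meeting:
        score -= 0.3                  # calls are less welcome mid-meeting
    if ctx.hour_of_day < 8 or ctx.hour_of_day > 21:
        score -= 0.2                  # off-hours calls rank lower
    if ctx.caller_is_family:
        score += 0.3                  # close ties raise relevance
    if ctx.shares_interest:
        score += 0.1                  # shared interests nudge it up
    return max(0.0, min(1.0, score))  # clamp to [0, 1]

# A call from family while in a 2pm meeting: the two clues offset.
print(relevance_score(Context(True, 14, True, False)))
```

Even this toy version shows where the hard part lies: not the arithmetic, but deciding which inputs belong in the model at all, and how much each should count.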
The challenge is in deciding what is valuable input, and what is not. For instance, this evening I learned that long-time friend Phil Holden is also an accomplished photographer, and will shortly be mounting a show of his work in downtown Seattle. I’m a bit of an amateur photographer myself. So, how should I use this new information, and in what contexts is it meaningful? Is it meaningful to anyone else who isn’t interested in photography?
Just as the use of context is revolutionizing search, retailing, and music, by providing more relevant results, we expect to see the same impact in voice. What isn’t clear yet is how large the potential taxonomy of inputs might grow.