Towards an Interactive Encyclopedia of Physical Objects

What if you could use the camera of your smartphone or wearable to identify, and receive interactive information about, just about every item you encounter as you live your life? Aim your device at a painting and information about the artist and the motif pops up on your screen; look at a movie poster with your smart glasses and receive the trailer, along with the ability to buy tickets, right on your phone. In the not-too-distant future, this is probably a very realistic picture of your daily life.

Before we describe the idea more fully, we want to start out by mentioning that there was a time when Wikipedia was called impossible. It was criticized for being biased and sparse, and the question was frequently posed how on earth the website would ever measure up to the extensive works of expert encyclopedia contributors. Now, much of that criticism has been withdrawn, and Wikipedia, with its 5,000,000 articles, dwarfs the second largest contender, Encyclopædia Britannica Online, which provides a comparatively modest 120,000 articles.

Wikipedia pioneered the practice of harnessing the crowd in efficient ways, and the idea has sent ripples through almost every field of business during the last decade. Now, a young company called Neurence is taking the idea of Wikipedia, using it not merely as inspiration but as an exact model to copy, and aims to transfer it from the digital world to the physical one. This is not meant as criticism of the company; we would rather call the undertaking quite astounding. The mission, interestingly enough, is to create an immense and interactive encyclopedia of physical objects.

First, as an interesting note, it might be worth mentioning that one man managed to predict and describe the idea behind Neurence in a rather pinpointed manner quite a while before the company made the news (we guess that the company had not been working on the idea for very long at that point). In an article written in the fall of 2013, titled “Four Bold Predictions for the Future of Augmented Reality” and featured at Business2Community.com, the AR proponent Ambarish Mitra contributes the following point about the near future of augmented reality:

Prediction one: Creating a Wikipedia for 3D Objects

“When we need details on a product or other object, we generally type a word or phrase into Google or other search engine. In many cases, the item can be difficult to describe, resulting in the need to try a few different terms and browse through pages of results before yielding the correct find. It will not be long before AR enables the creation of a “Wikipedia for objects,” where we no longer have to play the search engine guessing game.

Instead, by using image-recognition technology on our smart phones and tablets, users will be able to scan and identify 3-D objects like plants, furniture, or even car makes and models. Soon, AR will disrupt not only how we search for information on the Internet, but how we digest it as well. In the same way that the Wikipedia database grows each day, a library built through AR will eventually contain details on any 3-D object you can set your eyes on in the physical world.”

Interestingly enough, we could probably not have encountered a better summary of what Neurence and its solution Sense (an intelligent cloud-based recognition engine) aim to provide to the world. In the words of Android Authority, Sense works as an online database composed of information contributed by its users. It is being used to build up an encyclopedia of video, image, audio and text data for use with third-party applications. It almost seems natural that Google is working with the company, but Samsung is also on board, along with four other developers.
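To make the idea of such a crowd-built database a bit more concrete, one entry might look roughly like the sketch below. The field names and structure are our own illustrative guesses rather than Neurence's actual data model.

```python
# Purely illustrative: a guess at what a single entry in a crowd-built
# encyclopedia of physical objects might contain. Not Neurence's real schema.
from dataclasses import dataclass, field

@dataclass
class ObjectEntry:
    name: str                                              # e.g. "Movie poster: Interstellar"
    description: str                                       # free-text article, Wikipedia-style
    image_refs: list[str] = field(default_factory=list)    # reference photos used for visual matching
    audio_refs: list[str] = field(default_factory=list)    # audio fingerprints, e.g. a trailer soundtrack
    video_refs: list[str] = field(default_factory=list)    # linked clips such as the trailer itself
    links: list[dict] = field(default_factory=list)        # actions: ticket shop, artist bio, ...
    contributors: list[str] = field(default_factory=list)  # users who supplied the data
```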

According to Wired, the Sense solution uses sensors on wearables such as smartwatches and glasses to ‘see’ the physical world and then uses algorithms to understand it. Audio and visual data is sent from the wearable to the cloud, where it is decoded before useful information is sent back to the device. It is worth mentioning that all the resource-heavy computing is done in the cloud, which means that Sense can run smoothly even on low-powered devices.
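To illustrate that division of labour between device and cloud, the round trip might look something like the following sketch. The endpoint, payload and response format are hypothetical stand-ins of ours; Neurence has not published its interface in this form.

```python
# Illustrative sketch only: the endpoint, payload and response fields below are
# hypothetical stand-ins, not the actual Sense API.
import base64
from typing import Optional

import requests

RECOGNITION_ENDPOINT = "https://example-recognition-cloud.invalid/recognize"  # hypothetical URL

def recognize_frame(jpeg_bytes: bytes, audio_bytes: Optional[bytes] = None) -> dict:
    """Upload one captured camera frame (and optionally a short audio clip)
    and return the structured description the cloud sends back.

    The device only captures and uploads; all heavy matching happens
    server-side, which is why even a low-powered wearable can drive this loop.
    """
    payload = {"image": base64.b64encode(jpeg_bytes).decode("ascii")}
    if audio_bytes is not None:
        payload["audio"] = base64.b64encode(audio_bytes).decode("ascii")

    response = requests.post(RECOGNITION_ENDPOINT, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()  # e.g. {"object": "movie poster", "links": [...]}
```

Keeping the client this thin is what allows the matching itself to run in the cloud while a smartwatch or pair of glasses merely captures and displays.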

As with all technological applications based on machine learning, the more information Sense is given, the more intelligent it becomes. It can already recognize millions of objects and sounds, but it is still early days for the technology. The examples Springwise provides of what the technology can do range from simply informing users about the history of a chosen object to guiding consumers to an e-commerce shop where they can purchase it. To get the picture and spur your own imagination, you could simply watch the attached video at the top of this article.

The system has been made available to users and third-party developers at no cost in the hope that others will adopt it. This means that whoever wants to can integrate the technology into their own applications for smart devices. Thus, consumers will gain the ability to interact with Sense’s crowd-sourced encyclopedia of objects in limitless ways – it is all a matter of developer creativity. Even though the application is nowhere near finished, we think that the concept is both viable and utterly intriguing. It is a natural extension of the path technology is taking as we merge the digital and physical worlds into ever more seamless interaction.
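As a flavour of what such a third-party integration could look like, here is a small hypothetical handler in the spirit of the Springwise examples above; it reuses the made-up result structure from the earlier sketches and decides whether to show background information or send the user straight to a shop.

```python
# Hypothetical example of a third-party app reacting to a recognition result.
# The result structure mirrors the earlier sketches and is not Neurence's real schema.

def handle_recognition(result: dict) -> str:
    """Turn a recognition result into a user-facing action: a purchase link,
    a background article, or a polite fallback."""
    obj = result.get("object", "unknown object")
    for link in result.get("links", []):
        if link.get("type") == "shop":
            return f"Buy this {obj} here: {link['url']}"
        if link.get("type") == "wiki":
            return f"Read more about this {obj}: {link['url']}"
    return f"Recognized a {obj}, but no further information is available yet."
```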
