...
- Increased triggering types: smell, sound, temperature, etc.
- Increased expression types: vibration, gyroscope, etc.
- Digital contact lenses have passed animal trials
- The emergence of the "Internet of Things"
- Real telepresence integrating actuators for users into the environment (moving beyond web-conferencing)
- Lower form-factor wearable displays
- Interfaces will become more customisable, and machines will adapt to the individual (as speech recognition already does).
- Nicholas Negroponte's point about being able to manage and extract value from multiple information streams
- Ray Kurzweil is trying to build a brain for Google: http://www.wired.com/business/2013/04/kurzweil-google-ai/
...
The future
- Augmentation of student faces with grades, special provisions, and medical or social information
- Car windscreen information
- Contact lenses (successful trials on animals have already been conducted)
- Proximity based shopping alerts
- Augmented police helmet displays
- Gaming using AR consoles
- Household devices displaying their operating conditions
Emerging applications
Magic Leap has received over $500 million from Google and other sources, and appears to aim at building "a Google Glasses on steroids that can seamlessly blend computer-generated graphics with the real world", according to a November 2014 analysis by Gizmodo. Gizmodo's conclusion was based on job listings, trademark registrations and patent applications from Magic Leap.
This presentation provides some background from Graeme Devine, an experienced games designer and Magic Leap employee.
The web of things
In this demonstration of standards such as WebGL, a mobile device (such as a smartphone) is paired with a website.
The device is then used to interact with objects appearing on a webpage.
A pointer can be moved by moving the phone in the air, and items can be selected by clicking. Sweeping gestures can also control objects appearing in a desktop browser.
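The pairing above can be sketched in outline: the phone streams its `DeviceOrientationEvent` readings to the page (for example over a WebSocket), and the desktop page maps the orientation angles to pointer coordinates. The function name and the particular angle-to-screen mapping here are assumptions for illustration, not details of the demo itself:

```javascript
// Hypothetical mapping from a phone's orientation angles to desktop
// pointer coordinates. On the phone, readings would be captured with
//   window.addEventListener('deviceorientation', e =>
//     ws.send(JSON.stringify({ alpha: e.alpha, beta: e.beta })));
// and the desktop page would apply a mapping like this one.

// alpha: compass yaw in degrees (0-360); beta: front-back tilt in degrees.
// Returns pointer coordinates, centred when the phone "points straight ahead"
// (yaw 0, tilt 45 degrees), clamped to the screen bounds.
function orientationToPointer(alpha, beta, width, height) {
  // Wrap alpha into -180..180 so 0 means "pointing at the screen centre".
  const yaw = ((alpha + 180) % 360) - 180;
  const x = Math.min(width, Math.max(0, width / 2 - (yaw / 90) * (width / 2)));
  const y = Math.min(height, Math.max(0, height / 2 + ((beta - 45) / 45) * (height / 2)));
  return { x, y };
}
```

Clicks and sweeping gestures would travel over the same channel as separate event messages; the mapping function is the only part that needs tuning for comfortable control.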
Imagining possible innovations
...
In a few short years, many of us may routinely wear devices that augment our reality. Google was first out of the gate with its "Glass" glasses, released in 2013. Merging physical reality with screen reality, a mini-camera on the frame conveyed what the wearer was seeing to the data cloud, which provided information about it on one of the lenses. This gave immediate visual search results.
Since then we have seen various AR/VR devices; however, none has found social acceptance for day-to-day use.
AR apps
As we look at the world through our smartphones and AR glasses, corporations will develop both specialized and general apps in free and paid modes. In free mode, the corporation will aggregate and provide information on various topics and experiences, with its logo always on display. In paid mode, a provider will charge consumers or businesses a small amount for a particular AR interface to which it adds curatorial value tied to its expertise.
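The two modes described above can be sketched as overlay-selection logic. Everything here is hypothetical (the function, field names, and layer structure are illustrative, not from any real AR SDK):

```javascript
// Hypothetical free vs. paid overlay selection for an AR app.
// scene.infoLayers is an assumed list of info overlays, each tagged with a topic.
function renderOverlay(app, scene) {
  if (app.mode === 'free') {
    // Free mode: show all aggregated info plus an always-visible sponsor logo.
    return { layers: [...scene.infoLayers, { type: 'logo', owner: app.sponsor }] };
  }
  // Paid mode: show only the curated layers the subscriber paid for.
  return { layers: scene.infoLayers.filter(l => app.curatedTopics.includes(l.topic)) };
}
```

The design choice the paragraph implies is that the free tier trades screen space (the persistent logo) for breadth, while the paid tier trades money for a narrower, expert-curated view.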
...
Second Stage AR headsets and pods
In the second stage, say in the early 2020s, the technology will have advanced so far that what we experience in full AR mode may be stored by our neurological systems so powerfully that we may wonder: was it real or AR? This takes us into deeper ethical quandaries.