A new artificial intelligence system from the Georgia Institute of Technology builds interactive stories from crowdsourced data, making the resulting fiction more robust. This video shows the AI recreating a typical first date at the movies (user choices are in red), complete with loud talkers, the arm-over-the-shoulder move, and more.
Project Soli is developing a new interaction sensor based on radar technology. The sensor tracks sub-millimeter motions with high speed and accuracy; it fits on a chip, can be produced at scale, and can be built into small devices and everyday objects.
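Radar motion sensing of this kind rests on the Doppler effect: a target moving at velocity v shifts the reflected frequency by f_d = 2v/λ. The sketch below illustrates the physics only; the 60 GHz carrier is an assumption typical of mm-wave gesture radar, not a figure stated in this piece.

```python
# Doppler shift of a radar echo: f_d = 2 * v / wavelength.
# The 60 GHz carrier is an illustrative assumption, not Soli's published spec.

C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(velocity_m_s: float, carrier_hz: float = 60e9) -> float:
    """Frequency shift of an echo from a target moving at velocity_m_s."""
    wavelength = C / carrier_hz  # 5 mm at 60 GHz
    return 2.0 * velocity_m_s / wavelength

# A fingertip moving at 5 cm/s toward the sensor:
print(doppler_shift_hz(0.05))  # -> 20.0 Hz (approximately)
```

Even slow finger motions produce measurable shifts at millimeter wavelengths, which is why such a small sensor can resolve sub-millimeter gestures.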
This is not another do-it-yourself website builder. The Grid harnesses the power of artificial intelligence to take everything you throw at it – videos, images, text, URLs, and more – and automatically shape it into a custom website unique to you. As your needs grow, it evolves with you, adapting effortlessly.
Our algorithms expertly analyze your media and apply color palettes that keep your messaging consistent and unique. The Grid also detects color contrasts, automatically adjusting typographic color to maximize legibility.
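Automatic legibility adjustment like this is typically grounded in contrast-ratio math such as the WCAG 2 formula. The sketch below illustrates the general technique – choosing black or white text for a given background – and is not The Grid's actual algorithm.

```python
# Choose black or white text for a background color using the WCAG 2
# contrast-ratio formula. Illustrative sketch, not The Grid's algorithm.

def relative_luminance(rgb):
    """WCAG 2 relative luminance for an (r, g, b) tuple with channels 0-255."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, ranging from 1:1 to 21:1."""
    hi, lo = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

def legible_text_color(bg):
    """Pick whichever of black or white contrasts more with the background."""
    black, white = (0, 0, 0), (255, 255, 255)
    return white if contrast_ratio(white, bg) > contrast_ratio(black, bg) else black

print(legible_text_color((20, 20, 60)))    # dark background -> white text
print(legible_text_color((240, 240, 200))) # light background -> black text
```

WCAG guidelines recommend a ratio of at least 4.5:1 for body text, so a real system would also warn when neither choice clears that bar against a mid-tone background.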
What’s possible when an AI does all the hard work for you? You can get things done, even on the go. Drag-n-drop builders don’t play nice with fingers on phones, but AI works perfectly, anywhere.
Never again change your content to fit your template or the latest hot mobile device. The layout changes as you add content, and adapts to look great and work flawlessly no matter where your users find you.
It’s as easy as that. Actually, it’s incredibly complicated, but The Grid figures it out so you don’t have to.
Fraunhofer IIS presents a real-time face tracker on Google Glass that can read people’s emotions. It also estimates the age and gender of the people in front of Glass’s camera. Privacy is important: everything happens on the device – no image leaves Glass. Detection is anonymous – there is no facial recognition. The app is based on SHORE, Fraunhofer’s proprietary software library for real-time face detection and analysis. Emotion analysis on wearable devices has countless applications; for example, it could be used in aids for people with autism spectrum disorder (ASD) or for the visually impaired.
The device doesn’t look like much: a caterpillar-sized assembly of metal rings and strips resembling something you might find buried in a home-workshop drawer. But the technology behind it, and the long-range possibilities it represents, are quite remarkable.
The little device is called a milli-motein — a name melding its millimeter-sized components and a motorized design inspired by proteins, which naturally fold themselves into incredibly complex shapes. This minuscule robot may be a harbinger of future devices that could fold themselves up into almost any shape imaginable.
The device was conceived by Neil Gershenfeld, head of MIT’s Center for Bits and Atoms, visiting scientist Ara Knaian and graduate student Kenneth Cheung, and is described in a paper presented recently at the 2012 Intelligent Robots and Systems conference. Its key feature, Gershenfeld says: “It’s effectively a one-dimensional robot that can be made in a continuous strip, without conventionally moving parts, and then folded into arbitrary shapes.”
Trials from a pilot study of direct brain-to-brain communication in humans, conducted by Rajesh Rao, Andrea Stocco, and colleagues at the University of Washington, Seattle.
Seung-Schik Yoo of Harvard Medical School in Boston and colleagues created a non-invasive brain-to-brain interface that allowed human participants to move a rat’s tail with their thoughts, via EEG recordings and focused-ultrasound stimulation.