24 June 2015

Ambient Light for Bright, Efficient Screens

Using a simple structure comprising a mirror and an absorbing layer to take advantage of the wave properties of light, researchers at Qualcomm MEMS Technologies, Inc., a subsidiary of Qualcomm Incorporated, have developed a display technology that harnesses natural ambient light to produce an unprecedented range of colors and a superior viewing experience. This display technology, which could greatly reduce the amount of power used in multiple consumer electronics products, is the latest version of an established commercial product known as Qualcomm® Mirasol™.


Based on a new color rendering format that the researchers call Continuous Color, the new design helps solve key problems affecting mobile displays: how to provide an always-on display function without requiring more frequent battery charging, and how to deliver a high-quality viewing experience anywhere, especially in bright outdoor environments. The design combines a mirror with a thin absorbing layer, separated by a precise and controllable gap. While the mirror by itself would simply reflect all of the incident light energy, the absorbing layer selectively filters out a narrow slice of the spectrum.
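
In rough terms, the gap determines where the absorber sits relative to the standing wave formed by incident and reflected light: the field has a node at the mirror and antinodes at odd quarter-wavelength distances, so a gap d strongly absorbs wavelengths near 4d/(2k+1). The sketch below is a minimal Python illustration of that idealized standing-wave model, not Qualcomm's actual optical stack; all parameters are illustrative.

    # Idealized standing-wave model of a mirror plus thin absorber
    # (illustrative parameters, NOT Qualcomm's actual design): for a gap d,
    # the absorber sits at a field antinode for wavelengths
    # lambda = 4 * d / (2k + 1), which are absorbed; the rest is reflected.
    def absorbed_wavelengths_nm(gap_nm, visible=(380.0, 740.0)):
        """Visible wavelengths strongly absorbed at a given mirror gap."""
        lo, hi = visible
        hits, k = [], 0
        while True:
            lam = 4.0 * gap_nm / (2 * k + 1)
            if lam < lo:          # all higher orders are shorter still
                break
            if lam <= hi:
                hits.append(round(lam, 1))
            k += 1
        return hits

    # Sweeping the gap tunes which slice of the spectrum is filtered out,
    # continuously shifting the reflected (perceived) color.
    for gap in (100, 125, 150, 175):
        print(gap, "nm gap ->", absorbed_wavelengths_nm(gap), "nm absorbed")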

More information:

22 June 2015

Brain-to-Text System Converts Speech Brainwave Patterns to Text

German and U.S. researchers have decoded continuous, naturally spoken speech from brain waves and transformed it into text, a step toward communication with computers or humans by thought alone. Their “Brain-to-Text” system recorded signals from an electrocorticographic (ECoG) electrode array placed on relevant surfaces of the frontal and temporal lobes of the cerebral cortex of seven epileptic patients, who participated voluntarily in the study during their clinical treatment.


The patients read sample text aloud during the study. Machine learning algorithms were then used to extract the most likely word sequence from the signals, and automatic speech-to-text methods created the text output. The system achieved word error rates as low as 25% and phone (instances of phonemes in utterances) error rates below 50%. The researchers suggest that the Brain-to-Text system might lead to a speech-communication method for locked-in patients in the future.
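
The summary doesn't detail the model, but the general recipe (score each time frame of neural features against per-phone models, then search for the most likely phone path) can be illustrated with a toy Viterbi decoder. Everything below is invented for illustration: random "features", Gaussian phone models, and a simple stay-or-switch transition model. It is a sketch of the general idea, not the authors' Brain-to-Text implementation.

    import numpy as np

    # Toy phone-sequence decoding from neural features (invented dimensions
    # and models; NOT the authors' actual Brain-to-Text system).
    rng = np.random.default_rng(0)
    n_phones, n_frames, n_features = 5, 40, 16

    # Per-phone Gaussian means stand in for learned ECoG feature models.
    means = rng.normal(size=(n_phones, n_features))
    feats = rng.normal(size=(n_frames, n_features))   # "recorded" features

    # Frame-wise log-likelihood of each phone (isotropic Gaussian).
    loglik = -0.5 * ((feats[:, None, :] - means[None, :, :]) ** 2).sum(-1)

    # Simple transition model: strongly favor staying in the same phone.
    logtrans = np.full((n_phones, n_phones), np.log(0.1 / (n_phones - 1)))
    np.fill_diagonal(logtrans, np.log(0.9))

    # Viterbi: best phone path through the frame-wise scores.
    delta = loglik[0].copy()
    back = np.zeros((n_frames, n_phones), dtype=int)
    for t in range(1, n_frames):
        scores = delta[:, None] + logtrans        # (prev phone, cur phone)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + loglik[t]
    path = [int(delta.argmax())]
    for t in range(n_frames - 1, 0, -1):          # backtrace
        path.append(int(back[t][path[-1]]))
    print(path[::-1])                             # decoded phone sequence

In the real system, a language model and automatic speech-to-text methods would then map such phone hypotheses to the word sequence.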

More information:

21 June 2015

Robot That Learns Like a Child

Two researchers from the Norwegian University of Science and Technology (NTNU) have made a robot that learns like a young child. At least, that's the idea. The machine, called 'self', starts with nothing and has to learn everything from scratch. It analyses sound through a system based on the human ear, and learns to recognize images using a digital model of how nerve cells in the brain handle sensory impressions. It is designed to learn entirely from sensory input, with no pre-defined knowledge database, so that its learning process resembles that of a human child in early life. In the beginning, the robot knows nothing. It 'hears' sounds from a person speaking and connects these to a simultaneous video feed of the speaker. The robot picks a sound that the person appears to be emphasizing and responds by playing other sounds that it associates with it, while projecting a neural representation of its association between the sound and pictures.
 
 
The projection doesn't show a video, but rather how the robot's 'brain' connects sounds and images. The robot gradually absorbs more and more impressions of different people. Certain people, like its guides, affect it more, because it 'sees' them often. The robot also learns to filter input: if a word is said in a certain way five times, and then in a different way once, it learns to filter away the standout and concentrate on the most common rendition, which is presumably correct. This processing happens during the robot's downtime. After a while, the robot was able to connect words and pictures in a more complex manner; you could say that it associates sounds with images and connects them by itself. The robot is constantly under development, and the result is a robot that shows how it makes associations in a very pedagogical manner. It deliberately doesn't resemble any living organism, so that observers concentrate on its learning and the process behind it.
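
The article doesn't say how this filtering is implemented; one simple way to reproduce the behavior is to treat each rendition of a word as a feature vector and keep the medoid, the sample closest to all the others, so a single standout is ignored. A hypothetical sketch:

    import numpy as np

    # Hypothetical filtering of repeated pronunciations (the article does
    # not specify the robot's actual method): treat each utterance as a
    # feature vector and keep the medoid, i.e. the rendition with the
    # smallest total distance to the others, so one outlier is ignored.
    def most_common_rendition(utterances):
        X = np.asarray(utterances, dtype=float)
        dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        return X[dists.sum(axis=1).argmin()]

    # Five similar renditions and one standout: the medoid tracks the
    # majority, mimicking "filter away the standout" during downtime.
    samples = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1.0, 1.0],
               [0.95, 1.05], [5.0, 5.0]]
    print(most_common_rendition(samples))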

More information:

20 June 2015

Using Minecraft to Unboggle the Robot Mind

Researchers from Brown University are developing a new algorithm to help robots better plan their actions in complex environments. It's designed to help robots be more useful in the real world, but it's being developed with the help of a virtual world, that of the video game Minecraft. Basic action planning, while easy for humans, is a frontier of robotics. Part of the problem is that robots don't intuitively ignore objects and actions that are irrelevant to the task at hand. For example, if someone asked you to empty the trashcan in the kitchen, you would know there's no need to turn on the oven or open the refrigerator. You'd go right to the trashcan. Robots, however, lack that intuition. Most approaches to planning consider the entire set of possible objects and actions before deciding which course to pursue. In other words, a robot might actually consider turning on the oven as part of its planning process for taking out the trash. In complex environments, this leads to what computer scientists refer to as the ‘state-space explosion’ (an array of choices so large that it boggles the robot mind). The algorithm augments standard robot planning algorithms using ‘goal-based action priors’: sets of objects and actions in a given space that are most likely to help an agent achieve a given goal. The priors for a given task can be supplied by an expert operator, but they can also be learned by the algorithm itself through trial and error.
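
As a concrete (and entirely illustrative) picture of how such priors shrink the search, the sketch below prunes the action set to those whose learned prior usefulness for the current goal clears a threshold, then runs an ordinary breadth-first search over what remains. The names and the toy domain are invented, not the Brown group's code.

    from collections import deque

    # Illustrative use of goal-based action priors (invented names, not
    # the Brown implementation): before search, drop actions whose learned
    # prior probability of being useful for this goal is below a
    # threshold, shrinking the space the planner must explore.
    def plan(start, is_goal, actions, successor, prior, threshold=0.2):
        """Breadth-first search over the pruned action set."""
        useful = [a for a in actions if prior(a) >= threshold]
        frontier = deque([(start, [])])
        seen = {start}
        while frontier:
            state, path = frontier.popleft()
            if is_goal(state):
                return path
            for a in useful:
                nxt = successor(state, a)
                if nxt is not None and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [a]))
        return None

    # Toy "take out the trash" domain: positions 0..5, trashcan at 5.
    # The prior, learned from earlier trials, rates "open_oven" useless.
    priors = {"step_right": 0.9, "open_oven": 0.05}
    path = plan(
        start=0,
        is_goal=lambda s: s == 5,
        actions=list(priors),
        successor=lambda s, a: s + 1 if a == "step_right" and s < 5 else None,
        prior=priors.get,
    )
    print(path)   # five "step_right" actions; the oven is never considered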


The game Minecraft, as it turns out, provided an ideal world to test how well the algorithm learned action priors and implemented them in the planning process. For the uninitiated, Minecraft is an open-ended game where players gather resources and build all manner of structures by destroying or stacking 3D blocks in a virtual world. With over 100 million registered users, it's among the most popular video games of all time. After the algorithm ran through a number of trials of a given task to learn the appropriate priors, the researchers moved the agent to a new domain it had never seen before to see whether it could apply what it had learned. Indeed, the researchers showed that, armed with priors, their Minecraft agents could solve problems in unfamiliar domains much faster than agents powered by standard planning algorithms. Having honed the algorithm in virtual worlds, the researchers then tried it out on a real robot, using the algorithm to have a robot help a person bake brownies. The algorithm was supplied with several action priors for the task. For example, one action prior let the robot know that eggs often need to be beaten with a whisk. So when a carton of eggs appears in the robot's workspace, it is able to anticipate the cook's need for a whisk and hand him one. The work also shows the potential of virtual spaces like Minecraft in developing solutions for real-world robots and other artificial agents.

More information:

19 June 2015

Microsoft's Holographic Minecraft Demo

Minecraft is a game we’ve seen countless times before. And yet, the holographic demo showcased by Microsoft a few days ago makes the game almost seem brand new. You can start a new world on any surface you like, or play Minecraft right on your wall and then transfer it to another surface.


You can manipulate the world using your voice and your hands, and of course you can walk around and change your viewpoint whenever you’d like. You can zoom in, out, and around using your voice, or raise the world to see things that are normally not visible.

More information:

17 June 2015

20 Billion Nanoparticles Communicate with Brain Using Electricity

Electricity is the brain's language, and now we can speak to it without wires or implants. Nanoparticles can be used to stimulate regions of the brain electrically, opening up new ways to treat brain diseases. It may even one day allow the routine exchange of data between computers and the brain. A material discovered in 2004 makes this possible. When magnetoelectric nanoparticles (MENs) are stimulated by an external magnetic field, they produce an electric field. If such nanoparticles are placed next to neurons, this electric field should allow the particles to communicate with them.
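
The coupling itself is simple to state: a magnetoelectric material develops an electric field proportional to the applied magnetic field, E = αH, where α is the magnetoelectric coefficient. A back-of-the-envelope sketch with purely illustrative numbers (not the study's measured parameters):

    # Back-of-the-envelope magnetoelectric coupling, E = alpha * H, with
    # purely illustrative values (NOT the study's measured parameters).
    alpha = 0.1      # magnetoelectric coefficient, V / (cm * Oe), assumed
    H = 500.0        # applied magnetic field strength, Oe, assumed

    E = alpha * H    # induced electric field, V/cm
    d_cm = 10e-4     # ~10 micrometers, a cell-scale distance, assumed
    print(f"Induced field: {E:.0f} V/cm")
    print(f"Voltage across {d_cm} cm: {E * d_cm * 1e3:.1f} mV")

With these assumed values the voltage across a cell-scale distance lands in the tens of millivolts, the general scale at which neurons respond, which is why an externally applied magnetic field can drive local stimulation.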


Researchers at Florida International University inserted 20 billion of these nanoparticles into the brains of mice. They then switched on a magnetic field, aiming it at the clump of nanoparticles to induce an electric field. An electroencephalogram showed that the region surrounded by the nanoparticles lit up, stimulated by the induced electric field. The researchers' goal is to build a system that can both image brain activity and precisely target medical treatments at the same time. Since the nanoparticles respond differently to different frequencies of magnetic field, they can also be tuned to release drugs.

More information: