20 January 2018

Gain Control of What Is in Our Minds

Working memory is a sort of mental sketchpad that allows you to accomplish everyday tasks. It also allows your mind to go from merely responding to your environment to consciously asserting your agenda. In a new study, researchers at MIT's Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences show that the underlying mechanism depends on different frequencies of brain rhythms synchronizing neurons in distinct layers of the prefrontal cortex (PFC), the area of the brain associated with higher cognitive function. As animals performed a variety of working memory tasks, higher-frequency gamma rhythms in superficial layers of the PFC were regulated by lower-frequency alpha/beta rhythms in deeper cortical layers. The findings suggest not only a general model of working memory, and of the volition that makes it special, but also new ways that clinicians might investigate conditions such as schizophrenia in which working memory function appears compromised.

The current study benefited from newly improved multilayer electrode brain sensors that few groups have applied in cognitive, rather than sensory, areas of the cortex. The researchers realized they could use them to determine whether deep alpha/beta and superficial gamma rhythms interact for volitional control of working memory. In the lab they made multilayer measurements in six areas of the PFC as animals performed three different working memory tasks. In one type of task, animals had to hold a picture in working memory in order to subsequently choose a picture that matched it. In another, they had to remember the screen location of a briefly flashed dot. Overall, the tasks asked the subjects to store, process, and then discard from working memory the appearance or the position of visual stimuli. With these insights, the team has since worked to test this multilayer, multi-frequency model of working memory dynamics more directly, with results in press but not yet published.
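The layered division of labour described above is a form of cross-frequency coupling: the slow, deep-layer alpha/beta rhythm gates the amplitude of the fast, superficial gamma rhythm. A minimal toy simulation of that relationship in Python (an illustration only, not the study's recorded data or analysis pipeline):

```python
import numpy as np

fs = 1000                      # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)    # two seconds of simulated signal

beta = np.sin(2 * np.pi * 20 * t)               # 20 Hz "deep-layer" rhythm
# Gamma amplitude grows when beta is in its trough (inhibition released)
gamma_amp = 0.5 * (1 - beta)
gamma = gamma_amp * np.sin(2 * np.pi * 60 * t)  # 60 Hz "superficial" rhythm

signal = beta + gamma

# Gamma power is largest where the slow rhythm is most negative
trough = beta < -0.9
peak = beta > 0.9
print(np.abs(gamma[trough]).mean() > np.abs(gamma[peak]).mean())  # True
```

In this sketch, lowering the deep-layer rhythm releases superficial gamma, mirroring the idea that the slow rhythm acts as a volitional gate on the contents of working memory.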

More information:

19 January 2018

VR Volumetric Photogrammetry

For consumers, VR generally means strapping on a head-mounted display (HMD), stepping into a new world and enjoying the experience. The enveloping nature of VR allows people to explore environments in 360 degrees, but for most, how these immersive worlds are created is a mystery. Though VR is still in its infancy, traditional methods of capturing and transforming footage have emerged. Typically, to shoot 360-degree VR content, a camera operator employs several cameras rigged in a spherical formation to capture the scene. Each camera is mounted at a specific angle so that its field of view overlaps those of the surrounding cameras. With this overlap, editors can produce seamless footage without any gaps. Alternatively, professional 360-degree cameras can be purchased, but these look and function much the same as hand-rigged apparatuses. Once filming is completed, editors stitch the footage together, creating a unified, continuous experience.

In addition to camera formation, camera placement also plays a major role in the end result of a particular piece of immersive content. Depending on what the content creator wants the consumer to experience, camera placement will vary. Though the creative direction will ultimately determine placement, it is important to note that even with several rigs placed throughout a set, this method produces a comparatively static outcome. Volumetric photogrammetry could hold the key to the future of VR. Unlike the method described above, there are no takes or shots in volumetric VR that are later edited in post-production. This allows for a much more fluid experience, as the consumer frames the scene and chooses his or her own perspective. Using the volumetric capture method, footage of a real person is recorded from various viewpoints, after which software analyzes, compresses and recreates all the viewpoints of a fully volumetric 3D human. At the heart of photogrammetry is the principle of triangulation: a feature seen from two or more known camera positions can be located in 3D by intersecting the cameras' lines of sight.
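Triangulation can be made concrete with a short sketch. Given two camera centres and the viewing directions toward the same feature, the feature's 3D position is where the two rays (nearly) intersect. The function below is a generic least-squares two-ray triangulation, not any particular photogrammetry package's API:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Locate the point closest to two camera rays (centre c, direction d).

    Solves least-squares for ray parameters s, t minimising
    ||(c1 + s*d1) - (c2 + t*d2)||, then returns the midpoint of the
    closest approach.
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    A = np.stack([d1, -d2], axis=1)                  # 3x2 system
    s, t = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    return ((c1 + s * d1) + (c2 + t * d2)) / 2

# Two cameras 2 m apart, both sighting the same point at (1, 0, 5)
p = triangulate(np.array([0.0, 0, 0]), np.array([1.0, 0, 5]),
                np.array([2.0, 0, 0]), np.array([-1.0, 0, 5]))
print(np.round(p, 6))  # [1. 0. 5.]
```

With more than two cameras, the same least-squares idea extends to all rays at once, which is how overlapping views from a spherical rig pin down every surface point of the subject.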

More information:

12 January 2018

Action Games Expand Cognitive Abilities

The human brain is malleable: it learns and adapts. Numerous research studies have focused on the impact of action video games on the brain by measuring cognitive abilities such as perception, attention and reaction time. An international team of psychologists, led by the University of Geneva (UNIGE), Switzerland, has assembled data from the last fifteen years to quantify how action video games impact cognition. The present study focuses on one specific genre, action (war or shooter) video games, long considered mind-numbing, and how they influence the cognitive skills of players. A total of 8,970 individuals between the ages of 6 and 40, including action gamers and non-gamers, took a number of psychometric tests in studies conducted by laboratories across the world with the aim of evaluating their cognitive abilities. The assessments covered spatial attention (e.g. quickly detecting a dog in a herd of animals) as well as skills at managing multiple tasks simultaneously and changing plans according to pre-determined rules. The cognition of gamers turned out to be better by one-half of a standard deviation compared to non-gamers.

The psychologists proceeded to analyze intervention studies as part of a second meta-analysis. 2,883 people (men and women) who played for a maximum of one hour a week were first tested for their cognitive abilities and then randomly divided into two groups: one played action games (war or shooter games), the other played control games (The Sims, puzzle games, Tetris). Both groups played for at least 8 hours over a week and up to 50 hours over 12 weeks. At the end of the training, participants underwent cognitive testing to measure any changes in their cognitive abilities. The results were clear: individuals playing action video games increased their cognition more than those playing control games, with the difference between the two training groups amounting to one-third of a standard deviation. Despite the good news for avid gamers, it is worth highlighting that these beneficial effects were observed in studies that asked individuals to space their game play out over weeks to months rather than to engage in a large amount of gaming in a single sitting. As in any learning activity, short bouts of repeated practice are much preferable to binging.
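The "fraction of a standard deviation" figures are standardised effect sizes, commonly reported as Cohen's d. A small sketch with hypothetical test scores (the numbers are invented purely to reproduce the half-SD effect reported for gamers versus non-gamers):

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Standardised mean difference: (mean_a - mean_b) / pooled SD."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * np.var(group_a, ddof=1) +
                  (nb - 1) * np.var(group_b, ddof=1)) / (na + nb - 2)
    return (np.mean(group_a) - np.mean(group_b)) / np.sqrt(pooled_var)

# Hypothetical cognition scores, for illustration only
gamers = [110, 100, 120]
non_gamers = [105, 95, 115]
print(cohens_d(gamers, non_gamers))  # 0.5, i.e. half a standard deviation
```

Expressing differences in standard-deviation units is what lets a meta-analysis pool results from laboratories that all used different tests and scoring scales.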

More information:

08 January 2018

Virtual Twin in Ten Minutes

Avatars are a core element of ICSpace, the virtual fitness and movement environment at Bielefeld University's Cluster of Excellence Cognitive Interaction Technology (CITEC). The system makes it possible to practise and improve motion sequences by providing individualised feedback in real time. The system is embodied by a virtual person acting as a coach. In addition, users see themselves as avatars - virtual copies of themselves in the mirror of the virtual room. The creation of such personalised avatars used to take several days, but CITEC researchers have now developed an accelerated process.

In order to create avatars for the ICSpace system, the researchers scan people. The computer scientists use a circular array of 40 DSLR cameras to photograph the respective person from all sides and use these images to compute several million three-dimensional sample points on the person's body. A generic virtual human model is fitted to this data in such a way that it corresponds to the shape and appearance of the person scanned. The resulting virtual people can be animated in detail: they can move all joints, even individual fingers, and communicate through facial expressions, speech and gestures.
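Fitting a template model to millions of scanned sample points is a large optimisation, but its core step, aligning one point set to another, can be sketched with the classic Kabsch algorithm. This is a hedged illustration that assumes known point correspondences; the CITEC pipeline additionally deforms the model's shape to match the scan:

```python
import numpy as np

def rigid_align(model, scan):
    """Best rotation R and translation t mapping model points onto scan
    points, via the Kabsch algorithm (assumes known correspondences)."""
    mc, sc = model.mean(axis=0), scan.mean(axis=0)
    H = (model - mc).T @ (scan - sc)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correction term guards against a reflection instead of a rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, sc - R @ mc

# Toy example: the "scan" is the model rotated 90 degrees about z and shifted
model = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
scan = model @ Rz.T + np.array([2.0, 3, 4])

R, t = rigid_align(model, scan)
print(np.allclose(model @ R.T + t, scan))  # True
```

Once the template is rigidly aligned like this, the remaining residuals drive the non-rigid fit that gives the avatar the scanned person's individual shape.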

More information:

07 January 2018

Tiny Robots Move & Think Like Insects

While engineers have had success building tiny, insect-like robots, programming them to behave autonomously like real insects continues to present technical challenges. A group of Cornell engineers has been experimenting with a new type of programming that mimics the way an insect's brain works, which could soon have people wondering whether that fly on the wall is actually a fly. Consider what such a robot must do to land on a swaying flower: sense a gust of wind using tiny hair-like metal probes embedded in its wings, adjust its flight accordingly, and plan its path. With conventional processors, the computation required would force it to carry a desktop-size computer on its back.

Unlike traditional chips that process combinations of 0s and 1s as binary code, neuromorphic chips process spikes of electrical current that fire in complex combinations, similar to how neurons fire inside a brain. The researchers are developing a new class of event-based sensing and control algorithms that mimic neural activity and can be implemented on neuromorphic chips. Because the chips require significantly less power than traditional processors, they allow engineers to pack more computation into the same payload. The team developed an 80-milligram flying RoboBee outfitted with a number of vision, optical-flow and motion sensors.
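The spiking behaviour these chips exploit can be illustrated with the textbook leaky integrate-and-fire (LIF) neuron: the membrane voltage leaks away over time, accumulates input current, and emits a discrete spike when it crosses a threshold. A simple software sketch (not the chip's actual circuitry or the Cornell algorithms):

```python
def lif_spikes(current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron over a list of input
    currents; returns a boolean spike train of the same length."""
    v, spikes = 0.0, []
    for i in current:
        v += dt * (-v / tau + i)   # leak toward 0 plus input integration
        if v >= v_thresh:
            spikes.append(True)
            v = v_reset            # reset the membrane after a spike
        else:
            spikes.append(False)
    return spikes

# A strong constant input drives periodic spiking; a weak one never fires,
# because its steady-state voltage (i * tau) stays below threshold
strong = lif_spikes([60.0] * 1000)
weak = lif_spikes([20.0] * 1000)
print(sum(strong), sum(weak))
```

Information here is carried by when and how often spikes occur rather than by continuously sampled values, which is one reason event-based hardware can run on far less power than a conventional processor.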

More information: