28 February 2012

Modelling the Human Brain

Officially, the Swiss Academy of Sciences meeting in Bern on 20 January was an overview of large-scale computer modelling in neuroscience. Unofficially, it was neuroscientists' first real chance to get answers about neuroscientist Henry Markram's controversial proposal for the Human Brain Project (HBP): an effort to build a supercomputer simulation that integrates everything known about the human brain, from the structures of ion channels in neural cell membranes up to the mechanisms behind conscious decision-making.


Brain researchers generate some 60,000 papers per year, most of them excellent studies, but each focused on one small corner of the field: this molecule, this brain region, this function, this map. The HBP would integrate these discoveries and create models to explore how neural circuits are organized and how they give rise to behaviour and cognition, two of the deepest mysteries in neuroscience. Ultimately, the HBP could even help researchers grapple with disorders such as Alzheimer's disease.

More information:

http://www.nature.com/news/computer-modelling-brain-in-a-box-1.10066

25 February 2012

Virtuality for Language Learning

Merging the real world with its mirror in a virtual world, so that students can be immersed in a hybrid learning environment that permits improved language teaching: that is the objective of scientists at the Universidad Carlos III de Madrid (UC3M), who have developed the first prototype demonstrating how a platform of this type works. The researchers used an open-source platform for creating distributed three-dimensional virtual worlds (OpenWonderland), together with the geolocation capabilities already built into smartphones. The aim is to take advantage of the immersive characteristics that virtual worlds offer, along with the interaction between three spaces (reality, the virtual world and the hybrid of the two), in order to create e-learning activities in the field of languages.


The researchers recently carried out a pilot test of the prototype in the center of Madrid, in which Spanish was taught to foreigners. They created a mirror of the real world, in this case the Gran Via in the capital, and superimposed digital information on the scene that users observed through a phone camera and screen. In this way they created an augmented-reality world: users walking along the street could see, on smartphones connected to the platform via the Internet, the avatars representing the other participants. With this system, students inside the virtual world can watch the movements of the avatar representing the teacher, who may actually be out in the street and who can, in turn, interact with the students by means of his or her mobile phone.
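
The geometry behind such an overlay is straightforward to sketch. The snippet below is a minimal illustration, not the UC3M platform's actual code: it computes the compass bearing and distance from the user's GPS fix to an avatar's anchor point, then the horizontal screen position at which to draw the avatar. All function names, the field of view and the screen width are assumptions.

import math

EARTH_R = 6371000.0  # mean Earth radius in metres

def bearing_and_distance(lat1, lon1, lat2, lon2):
    """Initial bearing (degrees from north) and distance (m) between two GPS fixes."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    # Equirectangular approximation is accurate enough at street scale.
    dx = dlon * math.cos((p1 + p2) / 2) * EARTH_R
    dy = (p2 - p1) * EARTH_R
    return bearing, math.hypot(dx, dy)

def screen_x(bearing, heading, fov_deg=60.0, screen_w=480):
    """Horizontal pixel at which to draw the avatar, given the phone's compass heading."""
    offset = (bearing - heading + 180.0) % 360.0 - 180.0  # wrap to -180..180
    if abs(offset) > fov_deg / 2:
        return None                                       # avatar is off-screen
    return int((offset / fov_deg + 0.5) * screen_w)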

More information:

http://www.uc3m.es/portal/page/portal/news_repository/general_news/Research%20on%20applying%20enhanced%20virtuality%20to%20language%20learnin?_template=/SHARED/pl_noticias_detalle_pub_ingles

22 February 2012

A Robot Sketches Portraits

Artists are often colorful personalities. This one, though, comes across as cool, precise and metallic, and is anything but extravagant. No wonder: it is, after all, an industrial robot, one that will convert the Fraunhofer stand at CeBIT into an art studio. Its artistic genius only emerges when someone takes a seat on the model's stool positioned in front of it: first, its camera records an image of the model; then it whips out its pencil and traces a portrait of the individual on its easel. After around ten minutes, it grabs the finished work and proudly presents it to its public. The installation was developed by artists in the robotlab group at the Center for Art and Media ZKM in Karlsruhe, Germany, some of whom are now employed at the Fraunhofer Institute for Optronics, System Technologies and Image Exploitation IOSB in Karlsruhe. But how does this technical production aid manage to provide an authentic rendering of a person's facial expressions? The researchers used an image-evaluation process that essentially equips the robot with a sense of sight. A camera mounted on the robot's arm first takes the person's picture; edge-processing software then seeks out the contrasts in the image and translates them into robot coordinates, that is, into movements of the robot's arm.
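
As a rough illustration of that pipeline (and not robotlab's actual software), the sketch below uses OpenCV's standard Canny edge detector to extract high-contrast contours from a photograph and rescales each contour's pixel coordinates into millimetre coordinates in a drawing workspace; the function name and canvas dimensions are assumptions.

import cv2
import numpy as np

def portrait_to_strokes(image_path, canvas_w=210.0, canvas_h=297.0):
    """Convert a photo into pen paths (in mm) for a drawing robot. Needs OpenCV 4."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)            # suppress sensor noise
    edges = cv2.Canny(gray, 50, 150)                    # keep high-contrast contours
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    h, w = edges.shape
    strokes = []
    for c in contours:
        if cv2.arcLength(c, closed=False) < 20:         # drop tiny specks
            continue
        # Map pixel coordinates to canvas millimetres (robot workspace).
        pts = [(x / w * canvas_w, y / h * canvas_h) for [[x, y]] in c]
        strokes.append(pts)                             # one pen-down path
    return strokes

Each returned stroke would then be handed to the robot controller as a sequence of arm movements with the pen lowered.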


For the researchers and artists, the main difficulty was adjusting the image-processing algorithm so that the sketch leaves the impression of a portrait. They attach great importance to the artistic look of the resulting drawings; on the other hand, they have also equipped the robot with an automatic system that enables it to carry out all of the steps by itself. With this installation, they have created an interface between art, science and technology. The robot's everyday routine is less artistic, however: ordinarily, researchers at IOSB use it to analyze the optical reflection properties of various materials. They shine light on an object from various directions while the robot's arm circles the material sample in a hemispheric pattern, measuring how the object reflects the light. Experts refer to this as a material's spatial reflection characteristics. Such measurements help in designing objects like reflectors so that they return light as tightly bundled as possible towards the direction it came from: a bicycle reflector caught in a car's headlights, for instance, should bounce the light straight back so that it appears as a bright flash and draws the driver's attention to the child on the bike. The objective is different for paint effects on a car's own surface: there, the aim is to display different hues to the observer depending on the direction of view.
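
A minimal sketch of such a hemispheric measurement pattern, assuming an evenly spaced grid of elevation and azimuth angles (the article does not describe IOSB's actual sampling scheme):

import math

def hemisphere_poses(n_elev=6, n_azim=12):
    """Unit viewing directions over the upper hemisphere; the robot would
    move the sensor to each one and record the reflected intensity."""
    poses = []
    for i in range(1, n_elev + 1):
        elev = math.radians(90.0 * i / n_elev)      # angle above the surface
        for j in range(n_azim):
            azim = math.radians(360.0 * j / n_azim)
            poses.append((math.cos(elev) * math.cos(azim),
                          math.cos(elev) * math.sin(azim),
                          math.sin(elev)))
    return poses

# For a good retroreflector, the measured intensity should peak where the
# viewing direction coincides with the illumination direction.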

More information:

http://www.sciencedaily.com/releases/2012/02/120217115543.htm

21 February 2012

AR Space Medical Knowhow

A new augmented reality unit developed by ESA can provide just-in-time medical expertise to astronauts. All they need to do is put on a head-mounted display for 3D guidance in diagnosing problems or even performing surgery. The Computer Assisted Medical Diagnosis and Surgery System, CAMDASS, is a wearable augmented reality prototype. Augmented reality merges actual and virtual reality by precisely combining computer-generated graphics with the wearer’s view. Ultrasound is leading the way because it is a versatile and effective medical diagnostic tool, and already available on the International Space Station. Future astronauts venturing further into space must be able to look after themselves. Depending on their distance from Earth, discussions with experts on the ground will involve many minutes of delay or even be blocked entirely.


CAMDASS uses a stereo head-mounted display and an ultrasound tool tracked via an infrared camera. The patient is tracked using markers placed at the site of interest. An ultrasound device is linked with CAMDASS and the system allows the patient’s body to be ‘registered’ to the camera and the display calibrated to each wearer’s vision. 3D augmented reality cue cards are then displayed in the headset to guide the wearer. These are provided by matching points on a ‘virtual human’ and the registered patient. Reference ultrasound images give users an indication of what they should be seeing, and speech recognition allows hands-free control. The prototype has been tested for usability at Saint-Pierre University Hospital in Brussels, Belgium, with medical and nursing students, Belgian Red Cross and paramedic staff.
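
The 'registration' step, matching marker positions seen by the camera to the same marker sites on the virtual human, is classically solved as a rigid point-set alignment. The sketch below implements the standard Kabsch/SVD solution as a generic illustration; it is not taken from the CAMDASS software.

import numpy as np

def rigid_register(model_pts, tracked_pts):
    """Return rotation R and translation t with tracked ~= R @ model + t."""
    P = np.asarray(model_pts, float)      # Nx3 marker sites on the model
    Q = np.asarray(tracked_pts, float)    # Nx3 positions seen by the camera
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)             # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

Once R and t are known, any point on the virtual human (such as the anchor of a cue card) can be transformed into camera coordinates and drawn in the headset at the correct spot on the real patient.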

More information:

http://www.esa.int/esaCP/SEMHSTSXXXG_index_0.html

13 February 2012

VS-GAMES '12 Conference

The 4th International Conference on Games and Virtual Worlds for Serious Applications (VS-GAMES’12) will be held from 29 to 31 October 2012 in Genoa, Italy. The emergence of serious or meaningful uses of games technologies and virtual world applications has been swift and dramatic in recent years. As a result, there has been little time to develop theoretical and academic rigor in the emerging field. This problem has been exacerbated by the largely technology-driven nature of the applications, which often leaves little time to consider more user-centered approaches to the design and use of serious applications of games and virtual world technologies. Another factor affecting the field has been the general convergence of new technologies, such as AI applications, augmented reality, mobile technologies, GPS, sensor technologies and social software. Together, these developments have created the potential for new technologies based on immersive and interactive interfaces, and have in parallel produced many unanswered questions about how such formats should be used. As a result, there is a clear need to consider new frameworks, theories, methods and design strategies for making serious applications of games and virtual world technologies more effective and useful as part of education, health and training.


The 4th International Conference on Games and Virtual Worlds for Serious Applications aims to meet the significant challenges facing the cross-disciplinary community working in these serious application areas by bringing that community together to share case studies of practice, to present new frameworks, methodologies and theories, and to begin the process of developing shared cross-disciplinary outputs. To achieve this aim, the conference will pioneer new methods for bringing together and supporting communities of practice in themed areas beyond the duration of the conference itself, using the event as a catalyst for the wider aspiration of forming and sustaining a community of practice around the field. The term 'Serious Games' covers a broad range of applications, from Flash-based animations to fully immersive, code-driven 3D environments in which users interact with large volumes of data through sophisticated and interactive digital interfaces. This shift towards immersive world applications being used to support education, health and training marks the beginning of new challenges that offer real scope for collaborative and multi-disciplinary research, and real opportunities for innovative development.

More information:

http://www.vs-games2012.org/

12 February 2012

Kilobots Are Leaving the Nest

The Kilobots are coming. Computer scientists and engineers at Harvard University have developed and licensed technology that will make it easy to test collective algorithms on hundreds, or even thousands, of tiny robots. Called Kilobots, the quarter-sized bug-like devices scuttle around on three toothpick-like legs, interacting and coordinating their own behavior as a team. A June 2011 Harvard Technical Report demonstrated a collective of 25 machines implementing swarming behaviors such as foraging, formation control, and synchronization. Once up and running, the machines are fully autonomous, meaning there is no need for a human to control their actions.


One key element is the development of sophisticated algorithms that can coordinate the actions of tens to thousands of robots. The name Kilobot reflects the researchers' goal of quickly and inexpensively creating a collective of a thousand bots. Such swarms are inspired by nature: they resemble social insects, such as ants and bees, that can efficiently search for and find food sources in large, complex environments, collectively transport large objects, and coordinate the building of nests and other structures. For reasons of time, cost, and simplicity, however, the algorithms being developed in research labs today are typically validated only in computer simulation, or on a few dozen robots at most.
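
To give a flavour of one behaviour mentioned above, synchronization, here is a toy firefly-style simulation in which each robot nudges its flash clock forward whenever it sees a neighbour flash. The constants and update rule are illustrative, not the Kilobot firmware.

import random

class Bot:
    """One simulated robot running a firefly-style flash clock."""
    def __init__(self, period=100):
        self.period = period
        self.phase = random.randrange(period)   # clocks start out of sync

    def tick(self):
        self.phase += 1
        if self.phase >= self.period:           # time to flash
            self.phase = 0
            return True
        return False

bots = [Bot() for _ in range(25)]               # the 25-robot collective
for step in range(20000):
    flashes = [b.tick() for b in bots]
    if any(flashes):
        # Every robot that saw a flash nudges its own clock forward,
        # which gradually pulls the whole swarm into step.
        for b, flashed in zip(bots, flashes):
            if not flashed:
                b.phase = min(b.period - 1, b.phase + 5)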

More information:

http://www.sciencedaily.com/releases/2011/11/111122112020.htm

10 February 2012

Effects of Violent Video Games

Sustained changes in regions of the brain associated with cognitive function and emotional control were found in young adult men after one week of playing violent video games, according to study results presented by Indiana University School of Medicine researchers at the annual meeting of the Radiological Society of North America. This is the first time the IU researchers, who have studied the effects of media violence for more than a decade, have conducted an experimental study showing a direct relationship between playing violent video games over an extended period and subsequent changes in brain regions associated with cognitive function and emotional control. Whether violent video games are potentially harmful to players has been debated for many years, even reaching the Supreme Court in 2010, yet there has been little scientific evidence demonstrating that the games have a prolonged negative neurological effect. For the study, 28 healthy adult males, aged 18 to 29, with low past exposure to violent video games were randomly assigned to two groups of 14.


Members of the first group were instructed to play a shooting video game for 10 hours at home over one week and to refrain from playing the following week. The second group did not play a video game at all during the two-week period. Each of the 28 men underwent functional magnetic resonance imaging (fMRI) at the beginning of the study, with follow-up exams at one and two weeks. During fMRI, the participants completed an emotional interference (emotional Stroop) task, pressing buttons according to the color of visually presented words; words indicating violent actions were interspersed among nonviolent action words. In addition, the participants completed a cognitive inhibition (counting Stroop) task. The results showed that after one week of violent game play, members of the video game group showed less activation in the left inferior frontal lobe during the emotional Stroop task and less activation in the anterior cingulate cortex during the counting Stroop task, compared with their own baseline results and with the control group's results after one week. After the video game group refrained from play for an additional week, activity in these executive regions of the brain returned closer to that of the control group.
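
For concreteness, here is a terminal-based sketch of an emotional Stroop trial loop: the participant must name the colour in which a word is printed while the word's meaning (violent versus neutral action) varies. The word lists, colours and flow are placeholders rather than the study's actual materials; the real task used button presses during fMRI.

import random
import time

COLOURS = {"red": "31", "green": "32", "blue": "34", "yellow": "33"}  # ANSI codes
VIOLENT = ["hit", "shoot", "stab"]     # hypothetical stimuli, not the study's
NEUTRAL = ["walk", "read", "sing"]

def run_block(n_trials=12):
    results = []
    for _ in range(n_trials):
        word = random.choice(VIOLENT + NEUTRAL)
        colour = random.choice(list(COLOURS))
        print(f"\033[{COLOURS[colour]}m{word}\033[0m")   # word shown in colour
        t0 = time.time()
        answer = input("colour? ").strip().lower()
        results.append({"violent": word in VIOLENT,
                        "correct": answer == colour,
                        "rt": time.time() - t0})
    return results

# Interference shows up as slower or less accurate responses when the
# word's meaning (e.g. a violent action) competes with the colour judgement.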

More information:

http://www.sciencedaily.com/releases/2011/11/111130095251.htm

07 February 2012

Games for Calculating Floods

Norwegian researchers have borrowed a page from game developers to devise simulation technology that can save lives in many parts of the world by helping to reduce the damage from catastrophic floods. Three-dimensional simulations of natural phenomena such as tsunamis and floods require tremendous computing capacity, and even then clear-cut answers are slow to emerge. But researchers at SINTEF ICT in Oslo have now developed methods that cut that waiting time substantially. The key lies in harnessing the graphics processing units (GPUs) on ordinary computer graphics cards.
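
To see why graphics cards help, consider the shallow-water equations that underlie such flood models: every grid cell is updated from its neighbours by the same arithmetic, so thousands of GPU cores can update cells simultaneously. The NumPy sketch below shows one Lax-Friedrichs time step of the one-dimensional equations as a minimal illustration; it is not SINTEF's code, and the 2-D schemes used in practice are more elaborate.

import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def step(h, hu, dx, dt):
    """Advance water depth h and momentum hu by one Lax-Friedrichs step
    on a periodic 1-D grid with spacing dx."""
    f_h = hu                                   # flux of mass
    f_hu = hu**2 / h + 0.5 * G * h**2          # flux of momentum
    def lf(u, f):                              # identical update for every cell
        return (0.5 * (np.roll(u, 1) + np.roll(u, -1))
                - dt / (2 * dx) * (np.roll(f, -1) - np.roll(f, 1)))
    return lf(h, f_h), lf(hu, f_hu)

# Stability requires the CFL condition: dt <= dx / max(|hu/h| + sqrt(G*h)).
# On a GPU, the per-cell update inside lf() is exactly the kind of
# embarrassingly parallel work that graphics hardware executes fastest.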


The simulations can also be used for flood-prone areas where other conditions such as landslides may cause flooding. All that is required is the input of a different data set. It is also conceivable that terrain data can be collected from other sources such as Google Earth. The method could also streamline the visualisation of petroleum reservoirs based on data collected from the seabed. This is another application area that requires processing of massive amounts of data.

More information:

http://www.forskningsradet.no/en/Newsarticle/Gaming_technology_for_calculating_floods/1253972272123