29 September 2007

Open Source Physics Engines

Computer games use physics engines to simulate 2D/3D environments realistically and thus immerse the player more deeply in the game world. Some of the most popular free and open-source physics engines are listed below:

AGEIA is dedicated to delivering dynamic, interactive realism to meet the ever-growing complexity of next-generation games. Its flagship solution, AGEIA PhysX, is the world's first dedicated physics engine and physics processor to bridge the gap between static virtual worlds and responsive, unscripted physical reality. AGEIA PhysX allows developers to build active, physics-based environments for a truly realistic entertainment experience.

Bullet is a 3D collision detection and rigid-body dynamics library for games and animation. It is free for commercial use (including on PlayStation 3) and is open-source, multi-platform C++ released under the zlib license. Bullet offers discrete and continuous collision detection, is integrated into Blender 3D, and supports the COLLADA 1.4 Physics tools. Its collision shapes include sphere, box, cylinder, cone, convex hull, and triangle mesh. It implements GJK convex collision detection and swept collision tests, and it also supports continuous collision detection and constraints.
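The GJK algorithm mentioned above needs only one thing from each convex shape: a "support function". A minimal sketch of the idea (our own illustrative code, not Bullet's API — for a point cloud, the support point in a direction is simply the vertex projecting farthest along it):

```python
def support(vertices, d):
    """Return the vertex of a convex point cloud farthest along direction d."""
    return max(vertices, key=lambda v: v[0]*d[0] + v[1]*d[1] + v[2]*d[2])

def minkowski_support(verts_a, verts_b, d):
    """Support point of the Minkowski difference A - B in direction d.
    GJK decides overlap of A and B by searching A - B for the origin,
    using only these support queries."""
    pa = support(verts_a, d)
    pb = support(verts_b, (-d[0], -d[1], -d[2]))
    return (pa[0] - pb[0], pa[1] - pb[1], pa[2] - pb[2])
```

Because GJK never inspects the shapes directly, any convex primitive (sphere, box, cone, hull) can participate as long as it supplies such a function.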

Chrono::Engine is a multi-body dynamics engine aimed at providing high-performance simulation features in C++ projects. It can perform dynamic, kinematic and static analyses of virtual mechanisms built from parts such as actuators, motors, constraints between parts, springs, dampers, etc. Applications can simulate a wide range of mechanisms: cars, robots, trucks, trains, car suspensions, earth-moving machines, motor scrapers, backhoe loaders, human skeletons, aerospace devices, landing gears, robotic manipulators, engines, torque converters, prosthetic devices, artificial arms, miniaturized mechanisms for tape recorders and camcorders, etc.

DynaMechs is a cross-platform, object-oriented C++ class library that supports dynamic simulation of a large class of articulated mechanisms, from simple serial chains to tree-structured mechanisms (including the simpler star topologies) to systems with closed loops. Code to compute approximate hydrodynamic forces is also available, making it possible to simulate underwater robotic systems of this class, including submarines (ROVs, AUVs, etc.) with one or more robotic manipulators. Supported joint types include the standard revolute and prismatic classes, as well as efficient implementations (using Euler angles or quaternions) of ball joints.
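Quaternion parameterisations of ball joints, as mentioned above, avoid the gimbal lock that plagues Euler angles. The core operation is rotating a vector by a unit quaternion, q v q*; a self-contained sketch (generic quaternion maths, not DynaMechs code):

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    """Rotate vector v by unit quaternion q via q * (0, v) * conj(q)."""
    qv = (0.0, v[0], v[1], v[2])
    qc = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, qv), qc)[1:]

# Example: a 90-degree rotation about the z-axis maps the x-axis to the
# y-axis (up to floating-point rounding).
q90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
```

A ball joint's state is then just one such unit quaternion, updated each step from the joint's angular velocity.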

DynaMo is a software library providing classes that take care of calculating the motions of geometries moving under the influence of forces, torques and impulses. In addition, the library can also compute forces for you through its constraint mechanism. Constraints allow you to easily connect geometries to each other in various ways: a constraint only has to be specified once, and DynaMo will continually enforce it from that moment on by applying the required reaction forces. The DynaMo library is released under the terms of the GNU Library General Public License.
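One classical way such reaction forces can be computed is via a Lagrange multiplier: differentiate the constraint twice and solve for the force that keeps its acceleration zero. A sketch of the idea for the simplest case, a particle held at fixed distance from the origin (illustrative only, not DynaMo's implementation):

```python
def constraint_force(x, v, f_ext, m):
    """Reaction force keeping a particle of mass m on the constraint
    C = x.x - L**2 = 0 (fixed distance |x| from the origin).

    Requiring d2C/dt2 = 0 with total force f_ext + lam * x gives
        lam = -(m * v.v + x.f_ext) / (x.x)
    Returns (reaction force vector, lam)."""
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    lam = -(m * dot(v, v) + dot(x, f_ext)) / dot(x, x)
    return tuple(lam * xi for xi in x), lam
```

For a particle swinging on the constraint, the radial component of the resulting total force produces exactly the centripetal acceleration -|v|**2 / L, which is what keeps it on the circle.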

The FastCar library was designed by people with great experience in multi-body dynamics. The authors previously built a fast, versatile and very elaborate multi-body dynamics package that was used for many applications, including games and vehicle simulation. However, experience showed that a complex general-purpose physics package and a versatile, efficient vehicle simulator for games are two very different things, so the decision was taken to build a separate small package for vehicle simulation with speed and simplicity in mind.

Newton is an integrated solution for real-time simulation of physics environments. The API provides scene management, collision detection and dynamic behaviour, and yet it is small, fast, stable and easy to use. Newton implements a deterministic solver that is not based on traditional LCP or iterative methods, yet possesses the stability of the former and the speed of the latter. This feature makes Newton a tool not only for games but for any real-time physics simulation.

ODE is an open source, high performance library for simulating rigid body dynamics. It is fully featured, stable, mature and platform independent with an easy to use C/C++ API. It has advanced joint types and integrated collision detection with friction. ODE is useful for simulating vehicles, objects in virtual reality environments and virtual creatures. It is currently used in many computer games, 3D authoring tools and simulation tools.
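At the heart of any rigid-body library like ODE is a fixed-timestep integration loop. The essence of one such step (linear motion only, semi-implicit Euler) can be sketched as follows — generic illustrative code, not ODE's actual C API:

```python
def step(pos, vel, force, mass, dt):
    """One semi-implicit Euler step for a point mass: update velocity
    from the net force first, then advance position with the *new*
    velocity. This ordering is what makes the scheme stable enough
    for game physics at large timesteps."""
    vel = tuple(v + (f / mass) * dt for v, f in zip(vel, force))
    pos = tuple(p + v * dt for p, v in zip(pos, vel))
    return pos, vel
```

A full engine adds orientation and angular velocity updates, then interleaves this with collision detection and constraint solving each frame.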

OpenTissue provides generic algorithms and data structures for rapid development of interactive modelling and simulation. OpenTissue works as a foundation for research and student projects in physics-based animation at the Department of Computer Science, University of Copenhagen (commonly known as DIKU). OpenTissue is free for commercial use, open source under the ZLib License.

PAL (physics abstraction layer) provides a unified interface to a number of different physics engines. This enables the use of multiple physics engines within one application. It is not just a simple physics wrapper, but provides an extensible plug-in architecture for the physics system, as well as extended functionality for common simulation components. PAL does not restrict you to one particular physics engine. Finally, PAL has an extensive set of common features such as simulating different devices or loading physics configurations from XML or COLLADA files.
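The abstraction-layer idea can be illustrated with a toy plug-in registry: application code talks to one interface while concrete engine back-ends register themselves by name. All names below are our own invention, not PAL's actual API:

```python
class PhysicsBackend:
    """Common interface the application codes against."""
    registry = {}

    def __init_subclass__(cls, name=None, **kw):
        # Each concrete back-end registers itself as a plug-in by name.
        super().__init_subclass__(**kw)
        if name:
            PhysicsBackend.registry[name] = cls

    def step(self, dt):
        raise NotImplementedError

class BulletBackend(PhysicsBackend, name="bullet"):
    def step(self, dt):
        return f"bullet stepped {dt}"

class OdeBackend(PhysicsBackend, name="ode"):
    def step(self, dt):
        return f"ode stepped {dt}"

def create_backend(name):
    """Pick an engine at runtime; calling code never changes."""
    return PhysicsBackend.registry[name]()
```

Because the simulation loop only ever sees `PhysicsBackend`, swapping ODE for Bullet (or benchmarking one against the other) becomes a one-line change.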

Physsim is a C++ rigid-body dynamics simulation library. It has been developed for two purposes: (a) to provide a stable, flexible platform for research into rigid body simulation and (b) to supply roboticists with state-of-the-art tools in robotic simulation. Rigid body simulators can be measured in three ways: speed, accuracy, and stability. Speed is important so that complex environments can be simulated in real-time. Accuracy implies that the simulator reflects the physical phenomena of the real world. Stability (or instability) is an artefact of the numerous approximations made in rigid body simulation. Rigid body simulators currently must balance these three factors.

Pulsk investigates novel methods for rigid-body simulation that integrate well with stacking situations. The work focuses on impulse-based simulation techniques with physical interactions such as collision, contact and friction in relatively complex scenes: large numbers of stacked objects, sliding objects, and highly dynamic scenes with non-convex bodies. Pulsk aims to be applicable to real-time applications such as games, so small approximations in the algorithms are allowed.
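The core of impulse-based simulation is the textbook frictionless contact impulse, which instantaneously changes velocities rather than applying forces over time. A sketch of that standard formula (our own code, not Pulsk's algorithm):

```python
def collision_impulse(v1, v2, n, m1, m2, e):
    """Impulse magnitude j for a frictionless contact between two point
    masses, with unit contact normal n pointing from body 2 to body 1
    and coefficient of restitution e:

        j = -(1 + e) * (v_rel . n) / (1/m1 + 1/m2)

    Body 1 then receives +j*n/m1 and body 2 receives -j*n/m2."""
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    v_rel = dot(tuple(a - b for a, b in zip(v1, v2)), n)
    return -(1 + e) * v_rel / (1.0 / m1 + 1.0 / m2)
```

For two equal masses colliding head-on with e = 1 this impulse exactly swaps their velocities, as expected for a perfectly elastic collision; stacked scenes repeat such impulse solves over many contacts per frame.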

SPE (Simple Physics Engine) is a lightweight but still powerful physics engine for games and virtual-reality programs. SPE includes the following features: a uniform tri-mesh collision detection algorithm, collision data analysis, a stable solver, joints, breakable rigid bodies, highly parallel computation, and an easy-to-use interface.

Tokamak Game Physics SDK is a high-performance real-time physics library designed specifically for games. It has a high-level, simple-to-use programming interface. Tokamak features a unique iterative method for solving constraints, which allows developers to trade accuracy for speed and provides more predictable processor and memory usage. Currently, Tokamak provides collision detection for primitives (box, sphere, capsule), combinations of primitives, and arbitrary static triangle meshes, as well as convex-to-convex collision detection.
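Tokamak's solver internals aren't described here, but the accuracy/speed trade-off of iterative constraint solvers in general is easy to illustrate with plain Gauss-Seidel sweeps over a linear system, where the iteration count is the knob (a generic sketch, not Tokamak's method):

```python
def gauss_seidel(A, b, iterations):
    """Solve A x = b approximately by Gauss-Seidel iteration.
    Each sweep refines the solution in place; fewer iterations means
    faster but less accurate results, more iterations the reverse -
    exactly the trade-off iterative constraint solvers expose."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x
```

A game might run only a handful of sweeps per frame, accepting slightly soft constraints in exchange for a fixed, predictable per-frame cost.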

True Axis Physics SDK is a fast and solid real-world physics simulation system designed for demanding games and virtual interactive environments. The SDK aims to avoid common issues present in most physics and collision implementations and to give developers the control they need over the way objects behave. It features swept collision detection, allowing it to handle rapidly changing environments far more effectively than non-swept physics systems. True Axis seamlessly handles collisions between many high-velocity entities, such as fast vehicles or missiles, without letting them interpenetrate.
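Swept (continuous) collision detection finds the exact time of impact within a timestep instead of testing only the endpoint positions, which is why fast objects cannot tunnel through thin geometry. The simplest case, a sphere swept against a plane, can be solved in closed form (a generic CCD sketch, not the SDK's API):

```python
def sweep_sphere_plane(c, v, r, n, d):
    """Earliest time t in [0, 1] at which a sphere of radius r, centre c,
    moving with velocity v over one timestep, touches the plane n.x = d
    (n a unit normal). Returns None if there is no hit this step."""
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    s0 = dot(n, c) - d   # signed distance of the centre at t = 0
    dv = dot(n, v)       # closing speed along the normal
    if dv >= 0.0:        # moving away from (or parallel to) the plane
        return None
    t = (r - s0) / dv    # solve s0 + t*dv = r for the touch time
    return t if 0.0 <= t <= 1.0 else None
```

A discrete test at t = 0 and t = 1 would miss a bullet that crosses the plane mid-step; the swept test reports the impact time so the solver can respond there.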

26 September 2007

Halo III

Halo III (also known as Halo 3) launched in Europe at midnight on Tuesday (25 September 2007) and it is the third chapter in the highly successful and critically acclaimed Halo franchise. Halo III is one of the most anticipated and heavily marketed titles in history. It represents the third chapter in the Halo trilogy—an international award-winning action series that grew into a global entertainment phenomenon, selling more than 14.5 million units worldwide, logging more than 650 million hours of multiplayer action on Xbox LIVE®, and spawning action figures, books, a graphic novel, apparel, an upcoming film adaptation, and more.

In the UK more than 1,000 shops opened at midnight so gamers could get their hands on the title. Many gamers started queuing outside shops in the afternoon to ensure they got hold of a copy. The Xbox 360 game is Microsoft's key weapon in the console wars with Sony and Nintendo. Microsoft hopes day one sales will top £70m ($140m), more than the opening takings of any movie in history. Microsoft needs Halo III to boost sales of the Xbox 360; despite investing billions of dollars into the Xbox project it has yet to see any meaningful profitable return. More than a million people pre-ordered the game, which is the concluding part of a science fiction trilogy that tells the story of a super soldier, called Master Chief, who is leading the fight to save humanity from an alien collective, called the Covenant. The game has become a major entertainment franchise in recent years - with spin-off games, clothing, novels and action figures all available.

More information:

http://www.xbox.com/en-US/games/h/halo3/

http://www.halo3.com/

http://news.bbc.co.uk/1/hi/technology/7010214.stm

22 September 2007

Aslib Proceedings Journal Article

Last month, Aslib Proceedings published a journal article I co-authored with a colleague at City University, titled ‘Mixed reality (MR) interfaces for mobile information systems’. The paper presents some of the results obtained from the LOCUS research project; its purpose is to explore how mixed reality interfaces can be used to present information on mobile devices. The motivation for this work is the emergence of mobile information systems in which information is disseminated to mobile individuals via handheld devices. The LOCUS project is extending the functionality of the WebPark architecture to allow the presentation of spatially referenced information via these mixed reality interfaces on mobile devices.

In particular, the LOCUS system is built on top of the WebPark mobile client-server architecture, which provides the basic functionality associated with LBS, including the retrieval of information based upon spatial and semantic criteria and the presentation of this information as a list or on a map (top images). The LOCUS system then extends this LBS interface by adding a VR interface (bottom-left image) and an AR interface (bottom-right image). We strongly believe that the most suitable interface for mobile information systems is likely to be user- and task-dependent; however, mixed reality interfaces offer promise in allowing mobile users to make associations between spatially referenced information and the physical world.

The abstract of the paper can be found online at:

http://www.emeraldinsight.com/Insight/viewContentItem.do;jsessionid=C6648C35B8ACA30C963A51EBCE6E16BE?contentType=Article&contentId=1626454

Also a draft version can be downloaded from here.

19 September 2007

DigitalGlobe Launch

The WorldView-1 satellite was launched on Tuesday, September 18, 2007, from Vandenberg Air Force Base in California. WorldView-1 is the first of two new next-generation satellites that DigitalGlobe, a leader in the global commercial Earth imagery and geospatial information market, plans to launch. Shortly after the launch, a DigitalGlobe ground station received a downlink signal confirming that the satellite had successfully separated from its launch vehicle and automatically initialized its onboard processors. WorldView-1 is currently undergoing a calibration and check-out period and will begin delivering imagery soon after. First imagery from WorldView-1 is expected to be available before October 18, the sixth anniversary of the launch of QuickBird, DigitalGlobe’s current satellite.

WorldView-1, built by Ball Aerospace and Technologies Corporation with the imaging sensor provided by ITT Corporation, is a high-capacity, panchromatic imaging system featuring half-meter resolution imagery. With an average revisit time of 1.7 days, WorldView-1 is capable of collecting up to 750,000 square kilometres (290,000 square miles) of half-meter imagery per day. Frequent revisits will increase image-collection opportunities, enhance change-detection applications, enable accurate map updates and provide more accurate data for Google Earth. The satellite is capable of collecting, storing and downlinking more frequently updated global imagery products than any other commercial imaging satellite in orbit, allowing for expedited image capture, processing and delivery to customers where speed is a driving factor. WorldView-1 is equipped with state-of-the-art geolocation accuracy and exhibits unprecedented agility, with rapid targeting and efficient in-track stereo collection.


More information:

http://media.digitalglobe.com/index.php?s=press_release_popup&ITEM=135

17 September 2007

SiSi

Technology that translates spoken or written words into British Sign Language (BSL) has been developed by researchers at IBM. In particular, a software system called SiSi (Say It Sign It) was created by a group of students in the UK. SiSi brings together a number of computer technologies: a speech recognition module converts the spoken word into text, which SiSi then interprets into gestures used to animate an avatar that signs in BSL. The main aim of SiSi is to give deaf people simultaneous sign-language interpretations of meetings and presentations, using speech recognition to drive a digital character, or avatar.
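The pipeline structure described above — speech to text, text to sign gestures, gestures to avatar animation — can be sketched as a toy translation stage. Every name and mapping below is hypothetical, purely to illustrate the shape of such a pipeline, not IBM's implementation:

```python
# Hypothetical word-to-gloss table: sign languages are usually driven by
# "glosses", identifiers for individual signs that the avatar can animate.
SIGN_GLOSSES = {"hello": "HELLO", "my": "ME", "name": "NAME"}

def text_to_glosses(text):
    """Map recognised words to sign glosses, skipping unknown words.
    In a real system this stage would handle BSL's own grammar and word
    order rather than a word-for-word lookup."""
    return [SIGN_GLOSSES[w] for w in text.lower().split() if w in SIGN_GLOSSES]
```

The gloss sequence would then be handed to the animation layer, which plays the corresponding motion clip for each sign on the avatar.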

IBM says its technology will allow for interpretation in situations where a human interpreter is not available. It could also be used to provide automatic signing for television, radio and telephone calls. It is worth mentioning that the concept has already gained the approval of the Royal National Institute for Deaf People (RNID). The students used two signing avatars developed by the University of East Anglia: one signs in BSL and the other uses Sign Supported English, a more direct translation using conventional English syntax and grammar.

13 September 2007

Serious Virtual Worlds ‘07

Today I attended the first day of Serious Virtual Worlds ‘07, the First European Conference on the Professional Applications of Virtual Worlds (13-14 September 2007), held at the Coventry TechnoCentre. The theme of this first Serious Virtual Worlds conference is ‘The Reality of the Virtual World', and it takes a close look at how virtual worlds are now being used for serious professional purposes. Many organisations are now actively researching and deploying virtual worlds, and the conference is a good introduction to their serious uses. It was driven by the extraordinary success of virtual worlds such as ‘Second Life’ (see screenshot below) as virtual social spaces for play, which leads to the question: ‘What is the potential for the serious uses of these worlds?’

The theme of the first day was ‘Introducing Virtual Worlds’: a number of presentations and conversations introducing virtual worlds and the 3D web from Cisco, Linden Labs, TruSim, Forterra, Giunti Labs, Pixel Learning, Caspian, Ambient Performance and Daden, closing with the launch of the Serious Games Institute’s ‘Second Life’ island with a cocktail reception followed by the conference dinner. The theme of the second day is ‘Serious Virtual Worlds: Action & Potential’, including live virtual-world presentations and conversations from Digital Earth, Reuters, Stamford Medical School, TruSim, PA Consulting, IBM, Forterra, NPL, Logicom, and AVM.

More information:

http://www.seriousvirtualworlds.net/

08 September 2007

Deep Exploration

Right Hemisphere's Deep Exploration Standard Edition is a software tool that allows the quick production of multimedia 3D graphics. Unlike typical multimedia 3D tools, Deep Exploration Standard Edition is a unified application that works across a number of multimedia formats and 3D graphics styles, so it is easy to take advantage of existing content and avoid big changes in file formats. The main operations that Deep Exploration performs include: (a) translating 2D and 3D graphics and multimedia files, (b) searching, viewing and marking up 3D graphics, and (c) authoring, rendering and publishing 3D images and animations.

The above screenshot shows a 3D model representing City University’s campus, which was used as part of the LOCUS project. This 3D model (or 3D map) was originally generated in 3D Studio Max and then converted into two different formats used for mobile navigation on personal digital assistants (PDAs). More specifically, the 3D map was converted to VRML for the Virtual Navigator interface and to DirectX for the MD3DM VR interface.

More information:

http://www.righthemisphere.com/products/dexp/

01 September 2007

IJAC Journal Article

Last June, the International Journal of Architectural Computing (IJAC) published a journal paper I co-authored with colleagues from the Centre for VLSI and Computer Graphics at the University of Sussex and the Ename Center, in a Special Issue on Cultural Heritage. The title of the article is ‘Multimodal Mixed Reality Interfaces for Visualizing Digital Heritage’, and its main aim is to present several different and interesting types of virtual heritage exhibition that use Web3D, virtual reality and augmented reality technologies to visualize digital heritage interactively through several different input devices. A high-level diagram illustrating the technologies employed in the multimodal mixed reality system is presented below.

The novelty of the technologies employed is that they allow users to switch between three different types of visualization environment: the web in the traditional way but including 3D, virtual reality, and augmented reality. Additionally, several different interface techniques can be employed to make exploration of the virtual museum that much more interesting. In particular, the architectural diagram illustrates several interaction modes, from the use of a SpaceMouse and gamepad through to the use of a physical replica of an artefact and simple hand manipulation of AR marker cards. Several visualization scenarios are also provided, ranging from a familiar web page with an embedded 3D object, to a virtual gallery environment, to a person using a physical replica of an artefact to control and explore the virtual artefact, to several augmented reality examples.

A draft version of the article can be downloaded from here.