Abstract: Immersive Virtual Environments typically rely on a tracking system that captures the position and orientation of the user's head and hands. Tracking devices, however, are usually quite expensive and require a lot of free space around the user, which prevents them from being used for gaming at home. In contrast with these expensive capture systems, the use of inertial sensors (accelerometers and gyroscopes) to register orientation is spreading rapidly, and they can now be found in affordable control devices such as the Nintendo Wiimote. With a controller like this, the player can aim at and shoot enemies as if holding a real weapon. However, the player cannot turn their head to observe the surrounding world, because the PC monitor or TV remains fixed in place. Head-mounted displays such as the Oculus Rift, which integrate a head-tracker, allow the player to look around the virtual world. Even if a game does not support the head-tracker natively, it can still be used when the sensor driver emulates the mouse, so that it controls the player's view. However, in first-person shooter (FPS) games the point of view is typically coupled with the weapon, and the user rapidly tires of using the neck muscles for aiming. In this paper, these two components, view and weapon, are decoupled to study the feasibility of an immersive FPS experience that avoids position data, relying solely on inertial sensors and the mapping models introduced herein. Therefore, the aim of this paper is to describe the proposed mapping models and to present the results of an experiment showing that this approach leads to similar or even better targeting accuracy while delivering an engaging experience to the gamer.