Publisher: Academy & Industry Research Collaboration Center (AIRCC)
Abstract: In this paper we propose a new and improved segmentation concept for sign language gestures. The presented algorithm extracts signs from video sequences with various non-static backgrounds. The regions to be segmented, normally the hands and head of the signing person, are obtained by minimizing a level-set energy function that fuses several image characteristics: colour, texture, boundary, and shape information. Three colour planes are extracted from the RGB frame, and one plane is selected according to the environment presented by the video background. A texture edge map provides spatial information that makes the colour features more distinctive for video segmentation. The boundary features are extracted by forming an image edge map from the existing colour and texture features. The shape of the sign is computed dynamically and adapted to each video frame so that occluded objects can be segmented. The energy minimization is performed with level sets. Experiments show that our approach provides excellent segmentation of signer videos for different signs under challenging conditions such as diverse backgrounds, varying illumination, and different signers.
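To make the pipeline concrete, the following is a minimal, hypothetical sketch of the idea described in the abstract: pick one colour plane, build a simple texture cue, fuse the two, and evolve a level-set-style active contour on the fused feature image. The function name segment_signer, the choice of the Cr plane as the colour feature, local entropy as a stand-in for the texture edge map, the equal fusion weights, and the use of the morphological Chan-Vese contour are all assumptions for illustration, not the authors' exact fused energy functional.

```python
# Hypothetical sketch: colour + texture fusion followed by a level-set-style
# (morphological Chan-Vese) segmentation of a single video frame.
import numpy as np
import cv2
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.segmentation import morphological_chan_vese


def segment_signer(frame_bgr):
    """Return a binary mask of skin-like regions (hands/head) in one frame."""
    # Colour cue: the Cr plane of YCrCb is assumed here as a skin-sensitive channel.
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[:, :, 1].astype(np.float64) / 255.0

    # Texture cue: local entropy used as a simple proxy for a texture edge map.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    tex = entropy(gray, disk(5)).astype(np.float64)
    tex /= (tex.max() + 1e-8)

    # Fuse colour and texture cues with equal (assumed) weights.
    fused = 0.5 * cr + 0.5 * tex

    # Level-set evolution minimizing a Chan-Vese-type energy on the fused image.
    mask = morphological_chan_vese(fused, 200, init_level_set='checkerboard', smoothing=2)
    return mask.astype(np.uint8)
```

In a full system this per-frame mask would additionally be constrained by the boundary edge map and the dynamically updated shape prior mentioned in the abstract, and the segmentation would be propagated across frames rather than recomputed independently.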