
Article Information

  • Title: Video support in a multimedia environment - HP's Mpower computer network communications software
  • Author: Craig S. Richard
  • Journal: Hewlett-Packard Journal
  • Print ISSN: 0018-1153
  • Year: 1994
  • Issue: April 1994
  • Publisher: Hewlett-Packard Co.

Video support in a multimedia environment - HP's Mpower computer network communications software

Craig S. Richard

Combining video with the computing power of a workstation adds an extra level of interpretation, detail, and perception to information seen and manipulated on a workstation desktop.

We have all heard the expression "a picture is worth a thousand words." Images convey meaning that is difficult to express using words alone. For example, consider the difficulty of trying to describe in words a person's looks, a shade of color, a complex object, or a CAT-scan image. You can use a lot of words to create a mental image of the object being described and hope that the words are interpreted as you intended, or you can simply show a picture. Video takes the value of images a step further by presenting 30 interrelated pictures every second.

We live in a time when video is one of our primary sources of information. We depend on video on a daily basis to provide us with the information we need or desire. The television set is the focal point of many homes. We watch television to see the latest breaking news, to see what the weather is going to be like the next day, to see places and things that we may never be able to see in person, and simply for entertainment. Video gives us the perception of "being there" as we watch it. We experience the event rather than create our own interpretation of it, and we remember the experience in more detail.

Why Video on a Workstation

Obviously, buying an expensive workstation just to watch television reruns, sports events, or any other broadcast television shows doesn't make a lot of sense. So why would it be useful to have video on a workstation? A simple answer is that video can add a new dimension to many kinds of work. For example, consider a mechanical test engineer who must validate a computer simulation, based on the design data, against the actual physical behavior of a mechanical part. Typically, this is done by first analyzing the behavior of the mechanical part on the computer using a 3D modeling package. By having the physical behavior available as video information on the workstation, the mechanical part can be analyzed simultaneously with its computer model. The accuracy of the analysis and the ease and efficiency of testing would be greatly enhanced. This is just one example of the advantage of having video functionality integrated into a workstation, and of the added value that computers can bring to video.

Using a computer is inherently an interactive experience. The user provides input through a keyboard, a mouse, or some other input device and the computer responds, takes some action, shows the results of that action, and waits for the user to provide more input. Watching television is a passive experience. The viewer typically doesn't interact with the picture on the screen.

As computer technology and television technology converge, the result is the power and impact of video with the control and interactive capability of computers. This is where the advantages of video capability on workstations become apparent. By combining text, graphics, audio, and video into an interactive presentation, the user can quickly and efficiently gather information on demand with a high rate of retention. This is invaluable for on-the-job training where an employee may need to learn (or relearn) a very specific task very quickly. As an example, take an automobile assembler who may work on engine assembly for a few months and then work on front-end assembly for a few months, and so on. This is a situation in which interactive video and workstation capabilities can be combined with a computer-based training course to provide some quick and inexpensive training as the assembler moves from one type of assembly station to another.

Challenges in Video Technology

Although video provides a vast amount of information, the delivery of video also requires a vast amount of digital data. A video image on a monitor is composed of hundreds of thousands of dots of phosphor (pixels) that are illuminated (at different intensities) by an electron gun as it scans across the monitor. The gun scans horizontally across a line of pixels, and then skips down to the next line and scans horizontally across the new line until the entire monitor is scanned. The electron gun then jumps back to the top of the display and begins scanning across the lines again. The whole process is repeated 30 to 75 times a second depending on the refresh rate of the monitor.

For television, the intensity of each pixel is determined by a voltage representing an analog signal received from a video source such as a VCR or cable TV tuner. The signal changes continuously to represent the color and intensity of each pixel.

In the computer, the intensity of each pixel is determined by a voltage representing a numerical value in a specialized computer memory called a frame buffer. In a simplified black and white model, there would be one bit of memory representing the intensity of each pixel. If the bit is on, the pixel on the monitor would be white, and if the bit is off, the pixel on the monitor would be black. More information is required to represent color images. To display 256 colors simultaneously (the most common workstation graphics capability), eight bits of information are required for each pixel. To represent true color (over 16 million colors), 24 bits of information are required for each pixel.

To change the color or intensity of a pixel, a different value must be written into the frame buffer memory location that represents that pixel. For static, computer-generated images such as text and graphics, the memory values do not need to be updated frequently. The border around the window, or the icon on the top of the screen may not change for hours. However, for animated sequences, the memory values representing the pixels need to change for each new image. For video, this means that the memory value for each of the hundreds of thousands of pixels has to be changed 30 times a second! In the United States, there is a standard video timing format, NTSC (National Television System Committee), which allows for a video image that is 640 pixels wide and 480 pixels high. If the NTSC signal were represented in true color, there would have to be three bytes of information for each pixel. This corresponds to 900K bytes of information for each image, and with images changing 30 times per second, over 26M bytes of information would need to be written every second. This is an enormous amount of information to move around every second.
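To make these numbers concrete, the following short C program (a worked example, not from the original article) reproduces the arithmetic above: the frame size at 8-bit and 24-bit depths, and the sustained data rate for 30 true-color frames per second.

    #include <stdio.h>

    int main(void)
    {
        /* NTSC-sized image as described in the text */
        const long width = 640, height = 480;
        const long pixels = width * height;           /* 307,200 pixels */
        const long frames_per_second = 30;

        long bytes_8bit  = pixels * 1;                /* 256 colors, 1 byte/pixel */
        long bytes_24bit = pixels * 3;                /* true color, 3 bytes/pixel */
        long rate = bytes_24bit * frames_per_second;  /* true-color video stream */

        printf("8-bit frame:   %ld bytes (%.0fK)\n", bytes_8bit,  bytes_8bit  / 1024.0);
        printf("24-bit frame:  %ld bytes (%.0fK)\n", bytes_24bit, bytes_24bit / 1024.0);
        /* 27,648,000 bytes/s, the article's "over 26M bytes" per second */
        printf("24-bit stream: %ld bytes/s (%.1fM)\n", rate, rate / (1024.0 * 1024.0));
        return 0;
    }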

HP VideoLive Requirements

Video technology in HP MPower has been provided by HP's VideoLive product. VideoLive was developed by RasterOps Corporation exclusively for HP. HP engineers specified the requirements for the video capability, and RasterOps designed and produced a product that met the requirements. There were four main design requirements. The first requirement was to provide full-motion (30 frames per second) video in a window on an HP 9000 Series 700 workstation monitor. The window had to be movable to any position on the display, uniformly scalable, and occludable by other windows on the display.

The second requirement was that the video product had to work with existing HP graphics subsystems. At the time, most video implementations required a frame buffer that was shared by the graphics and the video. HP produces some of the highest-performance graphics frame buffers in the world, and nobody would be willing to sacrifice high-performance graphics for video functionality.

The third requirement was that displaying video images should not adversely affect system or graphics performance. If the CPU were required to display video information, the system would need to allocate most (if not all) of its processing power to the video. In a shared frame buffer implementation, even though the CPU is not rendering the video, there is contention between the CPU and the video when the frame buffer memory is accessed, resulting in the degradation of graphics performance.

The fourth requirement was that although watching video on a workstation has definite value, the ability to capture the video information has even more value. Therefore, it had to be possible to capture individual video frames in digital form.

VideoLive Hardware

To meet the design requirements, an analog mixing scheme is used to generate video frames on the display. VideoLive is a single-slot EISA card. The card has an on-board 1024-by-512-by-24-bit video frame buffer into which an analog video signal is digitized in real time. The digitization is accomplished using a Philips SAA7191 video decoder and an SAA7192 color space converter. The contents of the frame buffer are stored in RGB format (8 bits of red, 8 bits of green, and 8 bits of blue). A Brooktree Bt463 digital-to-analog converter (RAMDAC) is used to generate the analog signal that drives the monitor. Fig. 1 shows the architecture for the VideoLive card.

The RGB output from the graphics frame buffer's RAMDAC is connected directly to the VideoLive card instead of to the monitor. Using a high-speed multiplexer, the RGB output from VideoLive's video frame buffer is mixed with the RGB output from the graphics frame buffer. A set of programmable registers is used to specify the position and size of the video window.

The need to capture digitized video frames is met by digitizing the analog video in real time into a frame buffer. When grabbing frames, the video image is momentarily frozen in the video frame buffer and then the CPU reads the contents of the frame buffer (in RGB format) over the EISA bus and into system memory.

By taking advantage of the HP Image Library capabilities, the captured frames can be saved in a TIFF file and can optionally be compressed using the JPEG compression algorithm. Once in TIFF format, the captured frames can be printed, faxed, and viewed by HP ImageView, and pulled into the HP SharedX Whiteboard application to be viewed and annotated over the network by several people. HP SharedX and Whiteboard are described in the article on page 23 and the HP Image Library is described in the article on page 37.
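The article does not show the HP Image Library calls themselves, but the flow is easy to sketch using the open-source libtiff API as a stand-in. In this hypothetical example, frame is a packed 24-bit RGB buffer of the kind read back from the video frame buffer; writing it with JPEG compression requires a libtiff built with JPEG support.

    /* Minimal sketch: save a captured RGB frame as a JPEG-compressed TIFF.
     * Uses libtiff, not the HP Image Library; `frame` is a hypothetical
     * packed 24-bit RGB buffer (width * height * 3 bytes). */
    #include <tiffio.h>

    int save_frame_tiff(const char *path, const unsigned char *frame,
                        int width, int height)
    {
        TIFF *tif = TIFFOpen(path, "w");
        if (tif == NULL)
            return -1;

        TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, width);
        TIFFSetField(tif, TIFFTAG_IMAGELENGTH, height);
        TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 3);           /* R, G, B */
        TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 8);             /* 24 bits/pixel */
        TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_RGB);
        TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
        TIFFSetField(tif, TIFFTAG_COMPRESSION, COMPRESSION_JPEG); /* optional JPEG */
        TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, 16);  /* JPEG needs multiple of 8 */

        for (int row = 0; row < height; row++) {
            if (TIFFWriteScanline(tif, (void *)(frame + (size_t)row * width * 3),
                                  row, 0) < 0) {
                TIFFClose(tif);
                return -1;
            }
        }
        TIFFClose(tif);
        return 0;
    }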

X Video Software

Live video functionality is tightly integrated into the HP VUE environment. This is accomplished using the X video extensions (Xv) to the X window server (see Fig. 2). Xv is a de facto standard for providing live video functionality for applications based on the X Window System.

Xv provides the hooks through which X window geometry events such as position changes and clip rectangles can be relayed to the video window. When an Xv window is active, all geometry events to that window are intercepted by the Xv software. The Xv software renders a black area on the screen onto which the video is overlaid. Xv also programs the registers on the video board to position, size, and clip the video.

Xv also provides a programming interface that allows an application to control the video. The interface provides basic controls for turning the video on and off, freezing and capturing video frames, selecting active video connections, and adjusting video attributes such as brightness, contrast, hue, and saturation.
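The Xv client calls below give a flavor of this interface. This is a minimal sketch of generic Xv usage, not VideoLive-specific code; the window geometry, the choice of the first adaptor, and the ten-second viewing period are assumptions for illustration.

    /* Sketch: find an Xv video port, start live video in a window,
     * adjust brightness, then stop. Generic Xv, not VideoLive-specific. */
    #include <X11/Xlib.h>
    #include <X11/extensions/Xvlib.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        unsigned int version, revision, req_base, ev_base, err_base;
        unsigned int nadaptors;
        XvAdaptorInfo *adaptors;

        if (dpy == NULL ||
            XvQueryExtension(dpy, &version, &revision,
                             &req_base, &ev_base, &err_base) != Success) {
            fprintf(stderr, "Xv extension not available\n");
            return 1;
        }

        /* Enumerate the video adaptors; a robust program would check
         * adaptors[i].type for XvInputMask | XvVideoMask first. */
        if (XvQueryAdaptors(dpy, DefaultRootWindow(dpy),
                            &nadaptors, &adaptors) != Success || nadaptors == 0) {
            fprintf(stderr, "no Xv adaptors\n");
            return 1;
        }
        XvPortID port = adaptors[0].base_id;

        /* Window to receive the video; 640x480 matches an NTSC image.
         * A real application would wait for the MapNotify event. */
        Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                         0, 0, 640, 480, 0, 0, 0);
        GC gc = XCreateGC(dpy, win, 0, NULL);
        XMapWindow(dpy, win);

        /* Turn the video on: route the full source image into the window. */
        XvGrabPort(dpy, port, CurrentTime);
        XvPutVideo(dpy, port, win, gc, 0, 0, 640, 480, 0, 0, 640, 480);

        /* Adjust a video attribute such as brightness. */
        Atom brightness = XInternAtom(dpy, "XV_BRIGHTNESS", False);
        XvSetPortAttribute(dpy, port, brightness, 100);
        XFlush(dpy);

        sleep(10);  /* watch the live video for a while */

        /* Turn the video off and release the port. */
        XvStopVideo(dpy, port, win);
        XvUngrabPort(dpy, port, CurrentTime);
        XvFreeAdaptorInfo(adaptors);
        XCloseDisplay(dpy);
        return 0;
    }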

Collaboration Using Video Images

As previously mentioned, captured frames of video can be used by any of the image-capable components of HP MPower. This functionality provides powerful collaboration capabilities. How many times have you been on the phone trying to describe a physical object to someone and wished that they could see the object? For example, consider a technician working with a prototype printed circuit board. As frequently happens during early hardware development, a few wires are put on the board to fix layout problems. Suppose the technician is having some problems with the board. The technician contacts the design engineer and describes the problem. The engineer says it sounds like a problem that was fixed in an earlier release of the board. The technician and the design engineer can try to resolve it over the telephone, and if that doesn't work, either the board or the design engineer must make a trip to fix the problem.

If both the technician and the design engineer are equipped with workstations running HP MPower and live video capability, the technician can point a camera at the defective board or a specific area of the board and capture a frame and save it to a TIFF file. The TIFF file can then be dropped into the HP SharedX Whiteboard and the engineer and technician can share the Whiteboard image. The engineer circles the areas where changes were made and shows the technician how to make the changes.

This is a specific example, but it can be extended to any situation in which someone is trying to convey information about a physical object.

Conclusion

As computer technology and computational power continue to progress, the capability of processing video information becomes more feasible and less expensive. Video boards are now available from third parties that provide similar functionality to the VideoLive board, with the additional capability of capturing the video data in real time (30 frames per second) and saving the digitized video on disk. HP has released a software digital video player that plays MPEG encoded video files from disk at up to 30 frames per second without additional hardware (see "Digital Video in HP MPower," on page 8).

