This chapter provides an overview of programming a DM3 board. The DM3 board supports the OpenML Media Library Software Development Kit (ML). This application programming interface (API) is described in the OpenML Media Library Software Development Kit Programmer's Guide.
The following topics are covered:
After installing the DMediaPro software and the DM3 board, follow these steps to build programs that run under ML:
In your source code, include the following header files:
ML/ml.h
ML/mlu.h
ML/ml_xtdigvid.h
Link with libML and libMLU (a minimal build sketch follows these steps).
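The following minimal skeleton illustrates these steps. It is a sketch only; the build line shown in the comment is a typical example and may differ on your system.

    /*
     * Minimal ML skeleton for the DM3 board.
     * A typical build line (illustrative only) might be:
     *     cc -o myapp myapp.c -lML -lMLU
     */
    #include <ML/ml.h>
    #include <ML/mlu.h>
    #include <ML/ml_xtdigvid.h>

    int main( void )
    {
        /* device discovery, path setup, and data transfers would go here;
           see the examples later in this chapter and on the DMediaPro CD */
        return 0;
    }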
You can find several useful ML programming examples on the DMediaPro CD in the /usr/share/src/ml/video/xtdigvid/examples directory. You can also find some programming examples later in this chapter.
This section covers the following topics:
The DM3 board supports two types of controls: path controls and jack controls.
With path controls, you can set up the following types of data transfer paths:
Using these path controls, you can transfer data from memory to an SD/HD video jack or transfer data from an SD/HD video jack to memory (see Figure 4-1).
With jack controls you can adjust controls on a jack without setting up a data transfer path.
Using some jack controls, you can adjust certain parameters while a data transfer is in progress. However, you can use this type of jack control only to adjust parameters that do not affect memory-to-video and video-to-memory data transfers. For example, you can adjust the EE mode (XTDIGVID_EE_MODE_INT32) during a transfer (see Figure 4-2), but you cannot use jack controls to adjust the colorspace or memory packing order parameters.
Following are the DM3 board jack controls for the HD jacks and the SD jacks.
HD Serial Digital Input Jack:

    XTDIGVID_LOOPBACK_INT32

HD Serial Digital Output Jack:

    ML_VIDEO_GENLOCK_SOURCE_TIMING_INT32
    ML_VIDEO_GENLOCK_TYPE_INT32
    XTDIGVID_EE_MODE_INT32
    ML_VIDEO_H_PHASE_INT32
    ML_VIDEO_V_PHASE_INT32

SD Serial Digital Input Jack:

    XTDIGVID_LOOPBACK_INT32

SD Serial Digital Output Jack:

    ML_VIDEO_GENLOCK_SOURCE_TIMING_INT32
    ML_VIDEO_GENLOCK_TYPE_INT32
    XTDIGVID_EE_MODE_INT32
    ML_VIDEO_H_PHASE_INT32
    ML_VIDEO_V_PHASE_INT32
Table 4-1 shows the input/output paths for each of the DM3 board controls.
Table 4-1. HD/SD Input and Output Paths
Control Parameters | Input | Input Image Memory Buffer | Output | Output Image Memory Buffer |
---|---|---|---|---|
Table 4-2 defines the value(s) and use for each of the DM3 board controls. For a detailed description of the ML_ controls, see the OpenML Media Library Software Development Kit Programmer's Guide. You can find more information on the device-specific (XTDIGVID_) controls later in this guide.
Table 4-2. DM3 Board Control Parameters, Value(s), and Use
This section provides the default values for the HD and SD input path controls and HD and SD output path controls.
The following default values are for the HD input path controls:
    ML_VIDEO_TIMING_INT32 = ML_TIMING_1125_1920x1080_5994i
    ML_VIDEO_PRECISION_INT32 = 8
    ML_VIDEO_COLORSPACE_INT32 = ML_COLORSPACE_CbYCr_709_HEAD
    ML_IMAGE_TEMPORAL_SAMPLING_INT32 = ML_TEMPORAL_SAMPLING_FIELD_BASED
    ML_VIDEO_SAMPLING_INT32 = ML_SAMPLING_422
    ML_VIDEO_START_Y_F1_INT32 = 21
    ML_VIDEO_START_Y_F2_INT32 = 584
    ML_VIDEO_HEIGHT_F1_INT32 = 540
    ML_VIDEO_HEIGHT_F2_INT32 = 540
    ML_VIDEO_WIDTH_INT32 = 1920
    ML_IMAGE_WIDTH_INT32 = 1920
    ML_IMAGE_HEIGHT_1_INT32 = 1080
    ML_IMAGE_HEIGHT_2_INT32 = 0
    ML_IMAGE_PACKING_INT32 = ML_PACKING_8
    ML_IMAGE_SAMPLING_INT32 = ML_SAMPLING_444
    ML_IMAGE_COLORSPACE_INT32 = ML_COLORSPACE_RGB_709_FULL
    ML_IMAGE_INTERLEAVE_MODE_INT32 = ML_INTERLEAVE_MODE_INTERLEAVED
    ML_IMAGE_DOMINANCE_INT32 = ML_DOMINANCE_F1
    ML_IMAGE_ORIENTATION_INT32 = ML_ORIENTATION_TOP_TO_BOTTOM
    XTDIGVID_LOOPBACK_INT32 = XTDIGVID_LOOPBACK_DISABLE
    ML_IMAGE_COMPRESSION_INT32 = ML_COMPRESSION_UNCOMPRESSED
    ML_IMAGE_SKIP_PIXELS_INT32 = 0
    ML_IMAGE_ROW_BYTES_INT32 = 0
    ML_IMAGE_SKIP_ROWS_INT32 = 0
    ML_VIDEO_START_X_INT32 = 1
The following default values are for the SD input path controls:
    ML_VIDEO_TIMING_INT32 = ML_TIMING_525
    ML_VIDEO_PRECISION_INT32 = 8
    ML_VIDEO_COLORSPACE_INT32 = ML_COLORSPACE_CbYCr_601_HEAD
    ML_IMAGE_TEMPORAL_SAMPLING_INT32 = ML_TEMPORAL_SAMPLING_FIELD_BASED
    ML_VIDEO_SAMPLING_INT32 = ML_SAMPLING_422
    ML_VIDEO_START_Y_F1_INT32 = 20
    ML_VIDEO_START_Y_F2_INT32 = 283
    ML_VIDEO_HEIGHT_F1_INT32 = 244
    ML_VIDEO_HEIGHT_F2_INT32 = 243
    ML_VIDEO_WIDTH_INT32 = 720
    ML_IMAGE_WIDTH_INT32 = 720
    ML_IMAGE_HEIGHT_1_INT32 = 487
    ML_IMAGE_HEIGHT_2_INT32 = 0
    ML_IMAGE_PACKING_INT32 = ML_PACKING_8
    ML_IMAGE_SAMPLING_INT32 = ML_SAMPLING_444
    ML_IMAGE_COLORSPACE_INT32 = ML_COLORSPACE_RGB_601_FULL
    ML_IMAGE_INTERLEAVE_MODE_INT32 = ML_INTERLEAVE_MODE_INTERLEAVED
    ML_IMAGE_DOMINANCE_INT32 = ML_DOMINANCE_F1
    ML_IMAGE_ORIENTATION_INT32 = ML_ORIENTATION_TOP_TO_BOTTOM
    XTDIGVID_LOOPBACK_INT32 = XTDIGVID_LOOPBACK_DISABLE
    ML_IMAGE_COMPRESSION_INT32 = ML_COMPRESSION_UNCOMPRESSED
    ML_IMAGE_SKIP_PIXELS_INT32 = 0
    ML_IMAGE_ROW_BYTES_INT32 = 0
    ML_IMAGE_SKIP_ROWS_INT32 = 0
    ML_VIDEO_START_X_INT32 = 1
The following default values are for the HD output path controls:
    ML_VIDEO_TIMING_INT32 = ML_TIMING_1125_1920x1080_5994i
    ML_VIDEO_PRECISION_INT32 = 8
    ML_VIDEO_COLORSPACE_INT32 = ML_COLORSPACE_CbYCr_709_HEAD
    ML_IMAGE_TEMPORAL_SAMPLING_INT32 = ML_TEMPORAL_SAMPLING_FIELD_BASED
    ML_VIDEO_SAMPLING_INT32 = ML_SAMPLING_422
    ML_VIDEO_START_Y_F1_INT32 = 21
    ML_VIDEO_START_Y_F2_INT32 = 584
    ML_VIDEO_HEIGHT_F1_INT32 = 540
    ML_VIDEO_HEIGHT_F2_INT32 = 540
    ML_VIDEO_WIDTH_INT32 = 1920
    ML_IMAGE_WIDTH_INT32 = 1920
    ML_IMAGE_HEIGHT_1_INT32 = 1080
    ML_IMAGE_HEIGHT_2_INT32 = 0
    ML_IMAGE_PACKING_INT32 = ML_PACKING_8
    ML_IMAGE_SAMPLING_INT32 = ML_SAMPLING_444
    ML_IMAGE_COLORSPACE_INT32 = ML_COLORSPACE_RGB_709_FULL
    ML_IMAGE_INTERLEAVE_MODE_INT32 = ML_INTERLEAVE_MODE_INTERLEAVED
    ML_IMAGE_DOMINANCE_INT32 = ML_DOMINANCE_F1
    ML_IMAGE_ORIENTATION_INT32 = ML_ORIENTATION_TOP_TO_BOTTOM
    ML_VIDEO_GENLOCK_TYPE_INT32 = XTDIGVID_GENLOCK_SRC_TYPE_INTERNAL
    ML_VIDEO_GENLOCK_SOURCE_TIMING_INT32 = ML_TIMING_1125_1920x1080_5994i
    ML_VIDEO_OUTPUT_REPEAT_INT32 = ML_VIDEO_REPEAT_NONE
    XTDIGVID_EE_MODE_INT32 = XTDIGVID_EE_MODE_DISABLE
    XTDIGVID_FF_MODE_INT32 = XTDIGVID_FF_MODE_DISABLE
    ML_IMAGE_COMPRESSION_INT32 = ML_COMPRESSION_UNCOMPRESSED
    ML_IMAGE_SKIP_PIXELS_INT32 = 0
    ML_IMAGE_ROW_BYTES_INT32 = 0
    ML_IMAGE_SKIP_ROWS_INT32 = 0
    ML_VIDEO_START_X_INT32 = 1
The following default values are for the SD output path controls:
    ML_VIDEO_TIMING_INT32 = ML_TIMING_525
    ML_VIDEO_PRECISION_INT32 = 8
    ML_VIDEO_COLORSPACE_INT32 = ML_COLORSPACE_CbYCr_601_HEAD
    ML_IMAGE_TEMPORAL_SAMPLING_INT32 = ML_TEMPORAL_SAMPLING_FIELD_BASED
    ML_VIDEO_SAMPLING_INT32 = ML_SAMPLING_422
    ML_VIDEO_START_Y_F1_INT32 = 20
    ML_VIDEO_START_Y_F2_INT32 = 283
    ML_VIDEO_HEIGHT_F1_INT32 = 244
    ML_VIDEO_HEIGHT_F2_INT32 = 243
    ML_VIDEO_WIDTH_INT32 = 720
    ML_IMAGE_WIDTH_INT32 = 720
    ML_IMAGE_HEIGHT_1_INT32 = 487
    ML_IMAGE_HEIGHT_2_INT32 = 0
    ML_IMAGE_PACKING_INT32 = ML_PACKING_8
    ML_IMAGE_SAMPLING_INT32 = ML_SAMPLING_444
    ML_IMAGE_COLORSPACE_INT32 = ML_COLORSPACE_RGB_601_FULL
    ML_IMAGE_INTERLEAVE_MODE_INT32 = ML_INTERLEAVE_MODE_INTERLEAVED
    ML_IMAGE_DOMINANCE_INT32 = ML_DOMINANCE_F1
    ML_IMAGE_ORIENTATION_INT32 = ML_ORIENTATION_TOP_TO_BOTTOM
    ML_VIDEO_GENLOCK_TYPE_INT32 = XTDIGVID_GENLOCK_SRC_TYPE_INTERNAL
    ML_VIDEO_GENLOCK_SOURCE_TIMING_INT32 = ML_TIMING_525
    ML_VIDEO_OUTPUT_REPEAT_INT32 = ML_VIDEO_REPEAT_NONE
    XTDIGVID_EE_MODE_INT32 = XTDIGVID_EE_MODE_DISABLE
    XTDIGVID_FF_MODE_INT32 = XTDIGVID_FF_MODE_DISABLE
    ML_IMAGE_COMPRESSION_INT32 = ML_COMPRESSION_UNCOMPRESSED
    ML_IMAGE_SKIP_PIXELS_INT32 = 0
    ML_IMAGE_ROW_BYTES_INT32 = 0
    ML_IMAGE_SKIP_ROWS_INT32 = 0
    ML_VIDEO_START_X_INT32 = 1
The ML_TIMING control sets the timing type, which expresses the timing of video presented to an input or an output.
Each value for ML_TIMING indicates the raster configuration of a particular SMPTE specification, such as SMPTE 274M-1995. The values are named according to the raster format:
The first field is the number of total lines, such as 1125, 750, 525, or 625.
The second field is the size of the active region, in pixels by lines.
The third field is the vertical refresh rate and the scanning format; the scanning format is as follows:
i: interlaced
p: progressive (noninterlaced)
PsF: progressive segmented frame
In PsF formats, the frame is transmitted as two fields that are of the same time instant; in interlaced formats, the two fields are temporally displaced.
For example, ML_TIMING_1125_1920x1080_5994i specifies 1125 total lines, an active region of 1920 pixels by 1080 lines, 59.94 fields per second, and 2:1 interlacing.
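As a sketch only, the timing value is set like any other path control. The function below assumes an output path that has already been opened with dmOpen (a DMopenid named path here), and the chapter's header files plus <stdio.h>.

    /* sketch: select 1080i 59.94 timing on an already-open output path */
    static DMstatus setTiming( DMopenid path )
    {
        DMstatus status;
        DMpv ctrls[] = { ML_VIDEO_TIMING_INT32, 0, 0, 0,
                         ML_END, 0, 0, 0 };

        ctrls[ 0 ].value.int32 = ML_TIMING_1125_1920x1080_5994i;
        if( status = dmSetControls( path, ctrls ))
            fprintf( stderr, "dmSetControls: %s\n", dmStatusName( status ));
        return status;
    }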
Note: If you change the ML_TIMING from SD to HD or HD to SD at the beginning of a data transfer, several seconds may elapse before the change takes effect.
The input timing auto detect function for the DM3 board determines whether the detected signal matches the user-requested signal. If the detected signal does not match the requested signal, there are two possible results:
If the application is registered for SYNC_LOST events on the input path, the application receives a SYNC_LOST event.
If the application is not registered for SYNC_LOST events, an ML_DEVICE error is sent to the application. If an error is detected before a transfer is started, the dmmodule does not allow the transfer to begin.
For more information on input timing-related events and device errors, see “DMediaPro Events”.
Genlock enables the SGI VBOB to receive an external sync signal, which locks the timing of the output video picture. This allows you to maintain a common timing across multiple video devices. With the DMediaPro/VBOB system, you can set up genlock as follows:
A supported genlock input timing (see Table 4-3) can be referenced by the VBOB HD video input or the HD genlock input and locked to a compatible video output timing. The only exceptions are ML_TIMING_525 and ML_TIMING_625, which can be used as a reference only on the VBOB HD genlock input, not on the VBOB HD video input.
Table 4-3 lists each of the supported genlock input timings and its compatible HD video output timings. The ML_TIMING_ prefix is omitted from each timing value to avoid redundancy.
Table 4-3. Supported HD Input/Output Timings
Genlock Input Timing | Video Output Timing |
---|---|
525 | 1125_1920x1080_5994i |
625 | 1125_1920x1080_50i |
750_1280x720_60p | 750_1280x720_60p 1125_1920x1080_24p 1125_1920x1080_24PsF 1125_1920x1080_60i |
750_1280x720_5994p | 750_1280x720_5994p 1125_1920x1035_5994i 1125_1920x1080_5994i 1125_1920x1080_2398p 1125_1920x1080_2398PsF |
1125_1920x1080_5994i | 750_1280x720_5994p 1125_1920x1035_5994i 1125_1920x1080_5994i 1125_1920x1080_2398p 1125_1920x1080_2398PsF |
1125_1920x1080_2398p | 1125_1920x1080_2398p |
1125_1920x1080_2398PsF | 1125_1920x1080_2398PsF |
1125_1920x1080_24p | 1125_1920x1080_24p |
1125_1920x1080_24PsF | 1125_1920x1080_24PsF |
1125_1920x1080_50i | 1125_1920x1080_50i 1125_1920x1080_25p 1125_1920x1080_25PsF |
1125_1920x1080_60i | 1125_1920x1080_60i 750_1280x720_60p 1125_1920x1080_24p 1125_1920x1080_24PsF |
1125_1920x1035_5994i | 1125_1920x1035_5994i 750_1280x720_5994p 1125_1920x1080_2398p 1125_1920x1080_2398PsF 1125_1920x1080_5994i |
1125_1920x1080_25p | 1125_1920x1080_25p 1125_1920x1080_50i |
1125_1920x1080_25PsF | 1125_1920x1080_25p 1125_1920x1080_50i |
The genlock auto detect function for the DM3 board automatically identifies the genlock input signal (genlock source timing) and determines whether the signal is compatible with the video output signal. If the DM3 board determines that the two signals are compatible, the board automatically locks the genlock source timing with the video output timing.
To enable this function, set the ML_VIDEO_GENLOCK_SOURCE_TIMING_INT32 control parameter to the XTDIGVID_GENLOCK_TIMING_AUTODETECT value (see Table 4-2).
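A minimal sketch of that step follows, assuming an output path already opened with dmOpen (a DMopenid named path here) and the chapter's header files plus <stdio.h>.

    /* sketch: enable genlock auto detection on an already-open output path */
    static DMstatus enableGenlockAutodetect( DMopenid path )
    {
        DMstatus status;
        DMpv ctrls[] = { ML_VIDEO_GENLOCK_SOURCE_TIMING_INT32, 0, 0, 0,
                         ML_END, 0, 0, 0 };

        ctrls[ 0 ].value.int32 = XTDIGVID_GENLOCK_TIMING_AUTODETECT;
        if( status = dmSetControls( path, ctrls ))
            fprintf( stderr, "dmSetControls: %s\n", dmStatusName( status ));
        return status;
    }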
For more information on genlock-related events and device errors, see “DMediaPro Events”.
In ML, the combination of ML_IMAGE_PACKING and ML_IMAGE_SAMPLING is equivalent to VL_PACKING in the video library (VL) environment. Table 4-4 shows how the ML combinations correspond to VL_PACKING. If you are unfamiliar with VL_PACKING, see the HD I/O Board Owner's Guide for a detailed description.
Table 4-4. VL/ML Packing Conversions
VL_PACKING | ML_IMAGE_PACKING | ML_IMAGE_SAMPLING |
---|---|---|
VL_PACKING_R242_8 | ML_PACKING_8 | ML_SAMPLING_422 |
VL_PACKING_242_8 | ML_PACKING_8_3214 | ML_SAMPLING_422 |
VL_PACKING_R242_10 | ML_PACKING_10 | ML_SAMPLING_422 |
VL_PACKING_242_10 | ML_PACKING_10_3214 | ML_SAMPLING_422 |
VL_PACKING_R242_10_in_16_L | ML_PACKING_10in16L | ML_SAMPLING_422 |
VL_PACKING_242_10_in_16_L | ML_PACKING_10in16L_3214 | ML_SAMPLING_422 |
VL_PACKING_R242_10_in_16_R | ML_PACKING_10in16R | ML_SAMPLING_422 |
VL_PACKING_242_10_in_16_R | ML_PACKING_10in16R_3214 | ML_SAMPLING_422 |
VL_PACKING_R2424_10_10_10_2Z | ML_PACKING_10_10_10_2 | ML_SAMPLING_4224 |
VL_PACKING_2424_10_10_10_2Z | ML_PACKING_10_10_10_2_3214 | ML_SAMPLING_4224 |
VL_PACKING_444_8 | ML_PACKING_8 | ML_SAMPLING_444 |
VL_PACKING_R444_8 | ML_PACKING_8_R | ML_SAMPLING_444 |
VL_PACKING_444_12_in_16L | ML_PACKING_S12in16L | ML_SAMPLING_444 |
VL_PACKING_444_12_in_16_R | ML_PACKING_S12in16R | ML_SAMPLING_444 |
VL_PACKING_4444_8 | ML_PACKING_8 | ML_SAMPLING_4444 |
VL_PACKING_R4444_8 | ML_PACKING_8_R | ML_SAMPLING_4444 |
VL_PACKING_4444_10_10_10_2 | ML_PACKING_10_10_10_2 | ML_SAMPLING_4444 |
VL_PACKING_R4444_10_10_10_2 | ML_PACKING_10_10_10_2_R | ML_SAMPLING_4444 |
Note: If you change packings at the beginning of a data transfer, several seconds may elapse before the change takes effect.
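As a sketch only, the following function requests the ML equivalent of VL_PACKING_R242_8 from Table 4-4 (ML_PACKING_8 with ML_SAMPLING_422). It assumes a path already opened with dmOpen (a DMopenid named path here) and the chapter's header files plus <stdio.h>.

    /* sketch: select 8-bit 4:2:2 packing/sampling on an already-open path */
    static DMstatus setPacking422_8( DMopenid path )
    {
        DMstatus status;
        DMpv ctrls[] = { ML_IMAGE_PACKING_INT32, 0, 0, 0,
                         ML_IMAGE_SAMPLING_INT32, 0, 0, 0,
                         ML_END, 0, 0, 0 };

        ctrls[ 0 ].value.int32 = ML_PACKING_8;
        ctrls[ 1 ].value.int32 = ML_SAMPLING_422;
        if( status = dmSetControls( path, ctrls ))
            fprintf( stderr, "dmSetControls: %s\n", dmStatusName( status ));
        return status;
    }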
The ML_COLORSPACE control specifies the color space of video data in memory or for input and output. A color space is a color component encoding format, for example, RGB and YUV. Because video equipment uses more than one color space, the DMediaPro video paths, in addition to the image memory buffers, support the ML_COLORSPACE control.
Each component of an image has a minimum value and a maximum value.
Normally, a component stays within the minimum and maximum values. For example, for a luma signal such as Y, you can think of these limits as the black level and the peak white level, respectively. For an unsigned component with n bits, the full range is as follows:

    [0, 2^n - 1]

For example, a 10-bit unsigned component has a full range of [0, 1023]. This provides the maximum resolution for each component.
Various HDTV specifications define color models differently from those defined in Recommendation 601 (ITU-R BT.601-5), which is used by most standard-definition digital video equipment. For HDTV, the DM3 board supports the following three color models:
Within each color model, four different color spaces exist:
Headroom range means that black is at, for example, code 64 rather than 0, and white is at, for example, code 940 rather than 1023. Headroom-range color spaces can accommodate overshoot (superwhite) and undershoot (superblack) colors. Full-range color spaces clamp these out-of-range colors to black and white.
RGB_F: full range
For image memory buffers, these four color spaces are defined for each of three color models, resulting in 12 color spaces. Note that all 12 are supported on image memory buffers, but only YCrCb and RGB_H color spaces are supported on video paths.
Color space conversion is performed within a color model if the color spaces are different on the image memory buffer and video paths. Conversion between the color models is not supported.
Note: If you change this control at the beginning of a data transfer, several seconds may elapse before the change takes effect.
Typically, two sets of colors are used together, RGB (RGBA) and YCrCb/YUV (VYUA). YCrCb (YUV), the most common representation of color from the video world, represents each color by a luma component called Y and two components of chroma, called Cr (or V), and Cb (or U). The luma component is loosely related to brightness or luminance, and the chroma components make up a quantity loosely related to hue. These components are defined in ITU-R BT.601-5 (also known as Rec. 601 and CCIR 601), ITU-R BT.709-2, and SMPTE 240M.
The alpha channel is not a real color. For that channel, the minimum value specifies completely transparent, and the maximum value specifies completely opaque.
For more information about color spaces, see A Technical Introduction to Digital Video, by Charles A. Poynton (New York: Wiley, 1996).
Along with image memory buffer color space, ML_COLORSPACE determines the color-conversion matrix values. In addition, this control affects the type of blanking output by the board during horizontal and vertical blanking, and during an active video timeframe when data is not being transferred. On a video output path, ML_COLORSPACE affects the type of blanking that the board outputs, in accordance with SMPTE 274M:
YCrCb: blanking is Y = 64, Cr/Cb = 512, A = 64
RGB_H: blanking is R = 64, G = 64, B = 64, A = 64
The DM3 board supports lookup tables (LUTs) on input and output for gamma correction or decorrection. If your application works with linear components, you can use the LUTs to convert between linear and nonlinear spaces.
The DM3 board hardware has three LUTs, one LUT for each RGB color component. Each LUT has 8,192 entries; each entry stores 13 bits. The application programs the entries in each table. The LUTs produce offsets, if they are required by the memory storage format.
The LUTs perform rounding as follows:
If the LUT is not explicitly programmed by the application, the output LUT is in pass-through mode, all rounding is performed in the color space converter, and the input LUT performs both rounding and offset.
If the LUT is programmed explicitly by the application, the application can control rounding as part of the lookup table function. The packer (hardware that reads the LUT and formats data for the host memory; see Figure 4-3) performs a final conversion from 13-bit LUT format to host memory format.
An application can also use the LUT to convert between video path RGB_H and image memory buffer RGB_F. Because each component is independent of the others for this conversion, a matrix multiplication is not needed (pass-through mode). The required component scaling and rounding can be placed into each LUT.
Figure 4-3 shows an example color space conversion. In the example, RGB are values in linear space and R'G'B' are values in nonlinear space after the opto-electric transfer function is applied as specified in ITU-R BT.709. You can use the LUTs to apply this function or its inverse to convert between RGB and R'G'B'.
This example also shows a typical video capture path. The input jack is YCrCb 4:2:2 and the desired result in system memory is RGB. First, an appropriate filter interpolates YCrCb 4:2:2 to YCrCb 4:4:4 to fill in the missing CrCb samples. Then a 3x3 matrix multiplier with appropriate offsets and coefficients obtains RGB values for each pixel. At this point, you can use the LUT option to convert gamma pre-corrected RGB values to linear RGB values. Finally, the packer swizzles the bits into the desired memory packing format and DMA places the result in system memory.
Field dominance identifies the frame boundaries in a field sequence; that is, it specifies which pair of fields in a field sequence constitutes a frame. You can use ML_IMAGE_DOMINANCE_INT32 to specify where an edit occurs, as follows:
ML_DOMINANCE_F1: the edit occurs on the nominal video field boundary (field 1 or F1).
ML_DOMINANCE_F2: the edit occurs on the intervening field boundary (field 2 or F2).
The F bit (bit 9 in the XYZ word of the EAV and SAV sequences) identifies whether a field is field 1 or field 2, as follows:
For field 1 (also called the odd field), the F bit is 0.
For field 2 (also called the even field), the F bit is 1.
Figure 4-4 shows fields and frames as defined for digital 1080-line formats for the DM3 board.
Editing is usually on field 1 boundaries, where field 1 is defined as the first field in the video standard's two-field output sequence. However, you may want to edit on F2 boundaries, which fall on the field between the video standard's frame boundary. To do so, use this control, then program your deck to select F2 edits.
A set of frames for output must be de-interlaced into fields differently, depending on the specified output field dominance. For SMPTE 274M, the top line is in F1, as shown in Figure 4-4. For SMPTE 240M, the top line is in F2. For example, when F1 dominance is selected, the field with the topmost line must be the first field to be transferred; when F2 dominance is selected, the field with the topmost line must be the second field to be transferred.
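As a sketch of selecting F2 dominance on a path (for example, before programming a deck for F2 edits), the function below assumes a path already opened with dmOpen (a DMopenid named path here) and the chapter's header files plus <stdio.h>.

    /* sketch: request field-2 dominance on an already-open path */
    static DMstatus setF2Dominance( DMopenid path )
    {
        DMstatus status;
        DMpv ctrls[] = { ML_IMAGE_DOMINANCE_INT32, 0, 0, 0,
                         ML_END, 0, 0, 0 };

        ctrls[ 0 ].value.int32 = ML_DOMINANCE_F2;
        if( status = dmSetControls( path, ctrls ))
            fprintf( stderr, "dmSetControls: %s\n", dmStatusName( status ));
        return status;
    }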
The DM3 board supports an EE mode (XTDIGVID_EE_MODE_INT32), in which the serial input is looped through directly to the serial output. EE mode functions correctly only if the LVDS output is genlocked to the same source as the device feeding the LVDS input. This genlock mode is commonly referred to as "reclocking," which is used in DAs, D-to-As, and data serializers. Reclocking ensures that the retransmitted signals have sufficient jitter attenuation applied to reject jitter from the digital inputs. (A jack-control sketch for enabling EE mode follows the list of issues below.)
When using EE mode, you must consider the following issues:
The dmmodule does not enforce the genlock requirement. You can enable EE mode, but the output display may be unstable.
You can enable EE mode while an output transfer is running. For example, if an SD output transfer is running and SD EE mode is enabled on the output path, EE mode “hijacks” the serial output jack.
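The following sketch enables EE mode through an output jack control, in the style of the loopback example later in this chapter. It assumes an output jack already opened with dmOpen (a DMopenid named jack here), the chapter's header files plus <stdio.h>, and an XTDIGVID_EE_MODE_ENABLE value as the counterpart of the documented XTDIGVID_EE_MODE_DISABLE default.

    /* sketch: enable EE mode on an already-open output jack
       (XTDIGVID_EE_MODE_ENABLE is assumed here) */
    static DMstatus enableEEMode( DMopenid jack )
    {
        DMstatus status;
        DMpv ctrls[] = { XTDIGVID_EE_MODE_INT32, 0, 0, 0,
                         ML_END, 0, 0, 0 };

        ctrls[ 0 ].value.int32 = XTDIGVID_EE_MODE_ENABLE;
        if( status = dmSetControls( jack, ctrls ))
            fprintf( stderr, "dmSetControls: %s\n", dmStatusName( status ));
        return status;
    }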
If the application is not sending buffers fast enough for the receiving equipment's video frame rate, you can set ML_VIDEO_OUTPUT_REPEAT_INT32 to repeat ML buffers automatically. The values for this control vary, depending on whether the transfer is progressive or interlaced. (A sketch of setting this control follows the list of values.)
ML_VIDEO_REPEAT_NONE
Repeats nothing, usually resulting in black output. This is the most useful for debugging, because underflow is then quite visible on output.
ML_VIDEO_REPEAT_FIELD
Repeats the last field (non-interleaved) or the last frame (interleaved or progressive). This setting is spatially imperfect, but does not cause flicker.
ML_VIDEO_REPEAT_FRAME (the default)
Repeats the last two fields (non-interleaved) or the last frame (interleaved or progressive). This setting is spatially better than ML_VIDEO_REPEAT_FIELD, but causes flicker.
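As a sketch only, the function below requests last-field repetition on underflow, assuming an output path already opened with dmOpen (a DMopenid named path here) and the chapter's header files plus <stdio.h>.

    /* sketch: repeat the last field on buffer underflow
       for an already-open output path */
    static DMstatus setOutputRepeatField( DMopenid path )
    {
        DMstatus status;
        DMpv ctrls[] = { ML_VIDEO_OUTPUT_REPEAT_INT32, 0, 0, 0,
                         ML_END, 0, 0, 0 };

        ctrls[ 0 ].value.int32 = ML_VIDEO_REPEAT_FIELD;
        if( status = dmSetControls( path, ctrls ))
            fprintf( stderr, "dmSetControls: %s\n", dmStatusName( status ));
        return status;
    }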
To capture graphics to video, you can use OpenGL to read pixels into memory. However, the coordinate systems differ: in OpenGL, the origin is at the lower-left corner, while in video, the origin is at the upper-left corner. To adjust for this difference, set the ML_IMAGE_ORIENTATION_INT32 parameter to ML_ORIENTATION_BOTTOM_TO_TOP. For more information, see Table 4-4 in this guide and the OpenML Media Library Software Development Kit Programmer's Guide.
In some cases, an exceptional event occurs, which requires the device to send a message back to the application. For this type of event message, the application must request the event. When the application requests an event, it must read its receive queue often enough to keep the device from running out of the message space required for the event messages; if the queue begins to fill up, the device stops enqueueing event messages for the exceptional event.
The device does not have to allocate space in the data area for reply messages. It automatically stops sending notifications of events when the receive queue begins to fill up. Space is reserved in the receive queue for a reply to every message that the application enqueues. When there is insufficient space, any attempt to send new messages fails.
The DM3 board currently supports the following ML exceptional events:
    ML_EVENT_VIDEO_SEQUENCE_LOST
    ML_EVENT_VIDEO_SYNC_LOST
    ML_EVENT_VIDEO_SYNC_GAINED
Table 4-5 summarizes these events.
Table 4-5. ML Exceptional Events
Event | Use |
---|---|
ML_EVENT_VIDEO_SEQUENCE_LOST | A field/frame was dropped. |
ML_EVENT_VIDEO_SYNC_LOST | Genlock was lost or the input signal was lost. |
ML_EVENT_VIDEO_SYNC_GAINED | Genlock sync lock occurred or a valid signal was found on the input. |
Note: Other events, for example, ML_BUFFERS_COMPLETE, are automatically sent to the application. For more information, see the OpenML Media Library Software Development Kit Programmer's Guide.
The following text lists the ML controls and event records:
    eventRecord[0].length = 1;
    eventRecord[1].param = XTDIGVID_GENLOCK_ERROR_STATUS_INT32;
    eventRecord[1].value.int32 = <syncLostReason> (see Table 4-6)
    eventRecord[0].length = 1;
    eventRecord[1].param = ML_END;
Table 4-6 describes the XTDIGVID_GENLOCK_ERROR_STATUS_INT32 values for ML_EVENT_VIDEO_SYNC_LOST on the output path. It also lists the corresponding values for ML_VIDEO_GENLOCK_SIGNAL_PRESENT_INT32.
Table 4-6. Error Status Values for ML_EVENT_VIDEO_SYNC_LOST (Output Path)
Error Status Values | ML_VIDEO_GENLOCK_SIGNAL_PRESENT_INT32 |
---|---|
XTDIGVID_GENLOCK_ERROR_STATUS_NO_SIGNAL | ML_TIMING_NONE |
XTDIGVID_GENLOCK_ERROR_STATUS_UNKNOWN_SIGNAL | ML_TIMING_UNKNOWN |
XTDIGVID_GENLOCK_ERROR_STATUS_ILLEGAL_ | |
XTDIGVID_GENLOCK_ERROR_STATUS_TIMING_ | ID of the timing detected on the genlock jack |
XTDIGVID_GENLOCK_ERROR_STATUS_NONE | ID of the timing detected on the genlock jack |
This section provides the following examples:
The following ML control settings capture 487-line 525 video:
    ML_VIDEO_TIMING_INT32 = ML_TIMING_525
    ML_VIDEO_COLORSPACE_INT32 = ML_COLORSPACE_CbYCr_601_HEAD
    ML_VIDEO_PRECISION_INT32 = 8
    ML_VIDEO_START_Y_F1_INT32 = 20
    ML_VIDEO_START_Y_F2_INT32 = 283
    ML_VIDEO_HEIGHT_F1_INT32 = 244
    ML_VIDEO_HEIGHT_F2_INT32 = 243
    ML_IMAGE_TEMPORAL_SAMPLING_INT32 = ML_TEMPORAL_SAMPLING_FIELD_BASED
    ML_VIDEO_SAMPLING_INT32 = ML_SAMPLING_422
    ML_VIDEO_WIDTH_INT32 = 720
    ML_IMAGE_WIDTH_INT32 = 720
    ML_IMAGE_HEIGHT_1_INT32 = 487
    ML_IMAGE_HEIGHT_2_INT32 = 0
    ML_IMAGE_SAMPLING_INT32 = ML_SAMPLING_444
    ML_IMAGE_COLORSPACE_INT32 = ML_COLORSPACE_RGB_601_FULL
    ML_IMAGE_PACKING_INT32 = ML_PACKING_8
    ML_IMAGE_INTERLEAVE_MODE_INT32 = ML_INTERLEAVE_MODE_INTERLEAVED
    ML_IMAGE_DOMINANCE_INT32 = ML_DOMINANCE_F1
    ML_IMAGE_ORIENTATION_INT32 = ML_ORIENTATION_TOP_TO_BOTTOM
    XTDIGVID_LOOPBACK_INT32 = XTDIGVID_LOOPBACK_DISABLE
    ML_IMAGE_COMPRESSION_INT32 = ML_COMPRESSION_UNCOMPRESSED
    ML_IMAGE_ROW_BYTES_INT32 = 0
    ML_IMAGE_SKIP_PIXELS_INT32 = 0
    ML_IMAGE_SKIP_ROWS_INT32 = 0
    ML_VIDEO_START_X_INT32 = 1
The following ML control settings perform a memory-to-video transfer in HD 720p format:
    ML_VIDEO_TIMING_INT32 = ML_TIMING_750_1280x720_5994p
    ML_VIDEO_PRECISION_INT32 = 8
    ML_VIDEO_COLORSPACE_INT32 = ML_COLORSPACE_CbYCr_709_HEAD
    ML_IMAGE_TEMPORAL_SAMPLING_INT32 = ML_TEMPORAL_SAMPLING_PROGRESSIVE
    ML_VIDEO_SAMPLING_INT32 = ML_SAMPLING_422
    ML_VIDEO_START_Y_F1_INT32 = 26
    ML_VIDEO_START_Y_F2_INT32 = 0
    ML_VIDEO_HEIGHT_F1_INT32 = 720
    ML_VIDEO_HEIGHT_F2_INT32 = 0
    ML_VIDEO_WIDTH_INT32 = 1280
    ML_IMAGE_WIDTH_INT32 = 1280
    ML_IMAGE_HEIGHT_1_INT32 = 720
    ML_IMAGE_HEIGHT_2_INT32 = 0
    ML_IMAGE_PACKING_INT32 = ML_PACKING_8
    ML_IMAGE_SAMPLING_INT32 = ML_SAMPLING_444
    ML_IMAGE_COLORSPACE_INT32 = ML_COLORSPACE_RGB_709_FULL
    ML_IMAGE_INTERLEAVE_MODE_INT32 = ML_INTERLEAVE_MODE_INTERLEAVED
    ML_IMAGE_DOMINANCE_INT32 = ML_DOMINANCE_F1
    ML_IMAGE_ORIENTATION_INT32 = ML_ORIENTATION_TOP_TO_BOTTOM
    ML_VIDEO_GENLOCK_TYPE_INT32 = XTDIGVID_GENLOCK_SRC_TYPE_INTERNAL
    ML_VIDEO_GENLOCK_SOURCE_TIMING_INT32 = ML_TIMING_525
    XTDIGVID_FF_MODE_INT32 = XTDIGVID_FF_MODE_DISABLE
    ML_VIDEO_OUTPUT_REPEAT_INT32 = ML_VIDEO_REPEAT_NONE
    ML_IMAGE_COMPRESSION_INT32 = ML_COMPRESSION_UNCOMPRESSED
    ML_IMAGE_ROW_BYTES_INT32 = 0
    ML_IMAGE_SKIP_PIXELS_INT32 = 0
    ML_IMAGE_SKIP_ROWS_INT32 = 0
    ML_VIDEO_START_X_INT32 = 1
The DM3 board uses the standard NTSC field height of 487 lines for standard-definition video. However, some SGI products (for example, DIVO and DIVO DVC), use a 486 line NTSC field height. The following example provides ML code that you can use to retrieve the default sizing parameters for any given timing. Then you can reset these parameters from 487 lines to 486 lines, which provides backward compatibility with SGI standard-definition products, such as DIVO and DIVO DVC.
Note: This example is the accepted method for retrieving these values, so you do not have to perform any calculations.
    // timing = ML_TIMING_525
    short progressive;
    DMpv videoSizeDefaults[] = {
        ML_IMAGE_TEMPORAL_SAMPLING_INT32, 0, 0, 0,
        ML_VIDEO_START_Y_F1_INT32, 0, 0, 0,
        ML_VIDEO_START_Y_F2_INT32, 0, 0, 0,
        ML_VIDEO_HEIGHT_F1_INT32, 0, 0, 0,
        ML_VIDEO_HEIGHT_F2_INT32, 0, 0, 0,
        ML_VIDEO_WIDTH_INT32, 0, 0, 0,
        ML_END, 0, 0, 0
    };

    if (starty1 != NULL) {
        *tempSampling = videoSizeDefaults[0].value.int32;
        progressive = (*tempSampling == ML_TEMPORAL_SAMPLING_PROGRESSIVE);
        *starty1 = videoSizeDefaults[1].value.int32;
        /* set starty2 to 0 if progressive, func returns -1 */
        *starty2 = (progressive ? 0 : videoSizeDefaults[2].value.int32);
        *height1 = videoSizeDefaults[3].value.int32;
        /* set F2 height to 0 if progressive, func returns -1 */
        *height2 = (progressive ? 0 : videoSizeDefaults[4].value.int32);
        *width = videoSizeDefaults[5].value.int32;

        switch (timing) {
        case ML_TIMING_525:
            switch (rasterSize) {
            case NTSC_486:
                // *starty1 = 21;
                *starty2 = 283;
                *height1 = 243;
                *height2 = 243;
                break;
            }
            break;
        }
    }
The ML control settings for the 486-line transfer are as follows:
    ML_VIDEO_TIMING_INT32 = ML_TIMING_525
    ML_VIDEO_COLORSPACE_INT32 = ML_COLORSPACE_CbYCr_601_HEAD
    ML_VIDEO_PRECISION_INT32 = 8
    ML_VIDEO_START_Y_F1_INT32 = 21
    ML_VIDEO_START_Y_F2_INT32 = 283
    ML_VIDEO_HEIGHT_F1_INT32 = 243
    ML_VIDEO_HEIGHT_F2_INT32 = 243
    ML_IMAGE_TEMPORAL_SAMPLING_INT32 = ML_TEMPORAL_SAMPLING_FIELD_BASED
    ML_VIDEO_SAMPLING_INT32 = ML_SAMPLING_422
    ML_VIDEO_WIDTH_INT32 = 720
    ML_IMAGE_WIDTH_INT32 = 720
    ML_IMAGE_HEIGHT_1_INT32 = 486
    ML_IMAGE_HEIGHT_2_INT32 = 0
    ML_IMAGE_SAMPLING_INT32 = ML_SAMPLING_444
    ML_IMAGE_COLORSPACE_INT32 = ML_COLORSPACE_RGB_601_FULL
    ML_IMAGE_PACKING_INT32 = ML_PACKING_8
    ML_IMAGE_INTERLEAVE_MODE_INT32 = ML_INTERLEAVE_MODE_INTERLEAVED
    ML_IMAGE_DOMINANCE_INT32 = ML_DOMINANCE_F1
    ML_IMAGE_ORIENTATION_INT32 = ML_ORIENTATION_TOP_TO_BOTTOM
    XTDIGVID_LOOPBACK_INT32 = XTDIGVID_LOOPBACK_DISABLE
    ML_IMAGE_COMPRESSION_INT32 = ML_COMPRESSION_UNCOMPRESSED
    ML_IMAGE_ROW_BYTES_INT32 = 0
    ML_IMAGE_SKIP_PIXELS_INT32 = 0
    ML_IMAGE_SKIP_ROWS_INT32 = 0
    ML_VIDEO_START_X_INT32 = 1
This device-specific example shows how to use the LUTs to invert video. You can perform this example with 8-bit packings only (load the LUTs with an inverse ramp). Following are the DMediaPro control settings:
    for (i = 1; i <= NUM_DEFINED_LUT_ENTRIES; i++)
        lutentries[NUM_DEFINED_LUT_ENTRIES - i] = i;

    pv->param = XTDIGVID_LUT_YG_INT32_ARRAY;
    pv->value.pInt32 = lutentries;
    pv->length = NUM_DEFINED_LUT_ENTRIES;
    pv->maxLength = sizeof(lutentries) / sizeof(DMint32);
    pv++;

    pv->param = XTDIGVID_LUT_UB_INT32_ARRAY;
    pv->value.pInt32 = lutentries;
    pv->length = NUM_DEFINED_LUT_ENTRIES;
    pv->maxLength = sizeof(lutentries) / sizeof(DMint32);
    pv++;

    pv->param = XTDIGVID_LUT_VR_INT32_ARRAY;
    pv->value.pInt32 = lutentries;
    pv->length = NUM_DEFINED_LUT_ENTRIES;
    pv->maxLength = sizeof(lutentries) / sizeof(DMint32);
    pv++;
For output transfers, you can use field/frame mode (FF mode) (XTDIGVID_FF_MODE_INT32) to assist an application in performing 3/2 pulldown. You can use this mode only when there are 1080p 23.97 frames in memory and you want to output 1080i 59.94 fields. By default, FF mode is disabled. To enable FF mode, follow these steps (a minimal control-setting sketch follows the steps):
Set XTDIGVID_FF_MODE_INT32 to the value XTDIGVID_FF_MODE_ENABLE.
Set ML_IMAGE_INTERLEAVE_MODE_INT32 to ML_INTERLEAVE_MODE_INTERLEAVED.
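A minimal sketch of these two steps, assuming an output path already opened with dmOpen (a DMopenid named path here) and the chapter's header files plus <stdio.h>:

    /* sketch: enable field/frame (FF) mode for 3/2 pulldown output
       on an already-open output path */
    static DMstatus enableFFMode( DMopenid path )
    {
        DMstatus status;
        DMpv ctrls[] = { XTDIGVID_FF_MODE_INT32, 0, 0, 0,
                         ML_IMAGE_INTERLEAVE_MODE_INT32, 0, 0, 0,
                         ML_END, 0, 0, 0 };

        ctrls[ 0 ].value.int32 = XTDIGVID_FF_MODE_ENABLE;
        ctrls[ 1 ].value.int32 = ML_INTERLEAVE_MODE_INTERLEAVED;
        if( status = dmSetControls( path, ctrls ))
            fprintf( stderr, "dmSetControls: %s\n", dmStatusName( status ));
        return status;
    }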
In field/frame mode, you can send the DM3 board an entire frame, but the board only extracts a single field. For example, if the 1080p frames in memory are labeled A,B,C..., and FF mode is enabled, you can send the board AAABBCCCDD, and it will output field 1 from A, field 2 from A, field 1 from A, field 2 from B, and field 1 from B. Each buffer you send is treated as an interleaved frame, but only a single field is extracted from it. As a result, your application does not have to manually extract fields from the frames in memory.
UST (unadjusted system time) and MSC (media stream count) function exactly as they do in the one-field-per-buffer case; MSC increases by one for each buffer. To specify whether the first field is an F1 or an F2, use the ML_IMAGE_DOMINANCE_INT32 control.
The following device-specific example shows you how to allocate buffers for a fixed set of images and how to place the images in the buffer to achieve the desired results.
    // buffer allocation
    if (ffmode)
        bufferCount = imageCount * 5 / 2;
    else
        bufferCount = (imageCount > 1 ? imageCount : maxBuffers);

    bufArray = (void *) malloc(bufferCount * sizeof(void *));
    if (bufArray == NULL) {
        fprintf(stderr, "Cannot allocate buffer array\n");
        exit(-1);
    } else {
        bzero(bufArray, (bufferCount * sizeof(void *)));
    }

    // filling the buffers in ffmode
    if (ffmode) {
        /*
         * In field/frame mode, you must send each frame 2.5 times
         * (on average), so you must duplicate entries in the buffer
         * array. Begin with buffer array entries as follows:
         *     ABCD......
         * and finish with:
         *     AAABBCCCDD
         */
        for (fnum = bufferCount - 1; fnum > 0; fnum--)
            bufArray[fnum] = bufArray[(int)(fnum * 2 / 5)];
    }
This section provides example code for setting up a jack control. The following example uses a jack control to enable a loopback on the HD input jack:
    // enable loopback on HD input jack
    #include <stdio.h>
    #include <ML/ml.h>
    #include <ML/mlu.h>
    #include <ML/ml_xtdigvid.h>

    int main( int argc, char **argv )
    {
        DMstatus status;
        DMopenid jack;

        // open the HD input jack on the xt-digvid device
        {
            DMint64 sysId = ML_SYSTEM_LOCALHOST;
            DMint64 devId;
            DMint64 jackId;
            char *jackName = "HDSerialDigitalInputJack";

            if( status = dmuFindDeviceByName( sysId, "xt-digvid", &devId )) {
                fprintf( stderr, "xt-digvid: %s\n", dmStatusName( status ));
                return( 1 );
            }
            if( status = dmuFindJackByName( devId, jackName, &jackId )) {
                fprintf( stderr, "%s: %s\n", jackName, dmStatusName( status ));
                return( 1 );
            }
            if( status = dmOpen( jackId, NULL, &jack ) ) {
                fprintf( stderr, "open %s: %s\n", jackName, dmStatusName( status ));
                return( 1 );
            }
        }

        // set the loopback control
        {
            DMpv ctrls[] = { XTDIGVID_LOOPBACK_INT32, 0, 0, 0,
                             ML_END, 0, 0, 0 };

            ctrls[ 0 ].value.int32 = XTDIGVID_LOOPBACK_ENABLE;
            if( status = dmSetControls( jack, ctrls )) {
                fprintf( stderr, "dmSetControls: %s\n", dmStatusName( status ));
                return( 1 );
            }
        }

        dmClose( jack );
        return 0;
    }
You can use UST (unadjusted system time) and MSC (media stream count) signals to synchronize data streams. These are special signals that are recognized or generated by the DM3 board. For more information, see the OpenML Media Library Software Development Kit Programmer's Guide.
Consider the following information when programming the DM3 board:
You can only open one input path and one output path at the same time.
Before you configure a path with SetControls, your image controls and your video controls must be compatible. Because the DM3 board validates the path configuration at SetControls time, set all video controls and image controls at the same time. If this is inconvenient, start from a valid configuration and change "blocks" of controls. This alternative method also results in a valid path configuration.
The image width and height must correspond to the video width and height as follows (a small helper sketch follows these cases):
Progressive formats:
ML_IMAGE_WIDTH = ML_VIDEO_WIDTH
ML_IMAGE_HEIGHT_1 = ML_VIDEO_HEIGHT_F1
ML_IMAGE_HEIGHT_2 = 0
Interlaced formats with ML_INTERLEAVE_MODE_INT32 set to ML_INTERLEAVE_MODE_INTERLEAVED:
ML_IMAGE_WIDTH = ML_VIDEO_WIDTH
ML_IMAGE_HEIGHT_1 = ML_VIDEO_HEIGHT_F1 + ML_VIDEO_HEIGHT_F2
ML_IMAGE_HEIGHT_2 = 0
Interlaced formats with ML_INTERLEAVE_MODE_INT32 set to ML_INTERLEAVE_MODE_SINGLE_FIELD:
ML_IMAGE_WIDTH = ML_VIDEO_WIDTH
ML_IMAGE_HEIGHT_1 = ML_VIDEO_HEIGHT_F1
ML_IMAGE_HEIGHT_2 = ML_VIDEO_HEIGHT_F2
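The relationships above can be captured in a small helper. The function below is an illustration only; the helper itself is hypothetical and not part of the ML API.

    /* sketch: compute ML_IMAGE_HEIGHT_1 / ML_IMAGE_HEIGHT_2 from the
       video field heights (hypothetical helper, not an ML call) */
    static void computeImageHeights( int interleaveMode, int progressive,
                                     int videoHeightF1, int videoHeightF2,
                                     int *imageHeight1, int *imageHeight2 )
    {
        if( progressive ) {
            *imageHeight1 = videoHeightF1;
            *imageHeight2 = 0;
        } else if( interleaveMode == ML_INTERLEAVE_MODE_INTERLEAVED ) {
            *imageHeight1 = videoHeightF1 + videoHeightF2;
            *imageHeight2 = 0;
        } else {  /* ML_INTERLEAVE_MODE_SINGLE_FIELD */
            *imageHeight1 = videoHeightF1;
            *imageHeight2 = videoHeightF2;
        }
    }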
The VBOB does not distinguish between 25PsF and 50i timings. There are three possible results:
If the input timing is 25PsF, the detected input signal is 50i.
If the output timing is 25PsF and the genlock source is 25PsF, the detected genlock signal is 50i.
If the output timing is 50i and the genlock source is 25PsF, the detected genlock signal is 50i.