Name

    NV_present_video

Name Strings

    GL_NV_present_video
    GLX_NV_present_video
    WGL_NV_present_video

Contributors

    James Jones
    Jeff Juliano
    Robert Morell
    Aaron Plattner
    Andy Ritger
    Thomas True
    Ian Williams

Contact

    James Jones, NVIDIA (jajones 'at' nvidia.com)

Status

    Implemented in 165.33 driver for NVIDIA SDI devices.

Version

    Last Modified Date:  July 8, 2011
    Author Revision:     8
    $Date$ $Revision$

Number

    347

Dependencies

    OpenGL 1.1 is required.

    ARB_occlusion_query is required.
    EXT_timer_query is required.
    ARB_texture_compression affects the definition of this extension.
    ARB_texture_float affects the definition of this extension.
    GLX_NV_video_out affects the definition of this extension.
    EXT_framebuffer_object affects the definition of this extension.
    WGL_ARB_extensions_string affects the definition of this extension.
    WGL_NV_video_out affects the definition of this extension.

    This extension is written against the OpenGL 2.1 Specification
    and the GLX 1.4 Specification.

Overview

    This extension provides a mechanism for displaying textures and
    renderbuffers on auxiliary video output devices.  It allows an
    application to specify separate buffers for the individual
    fields used with interlaced output.  It also provides a way
    to present frames or field pairs simultaneously in two separate
    video streams.  It also allows an application to request when images
    should be displayed, and to obtain feedback on exactly when images
    are actually first displayed.

    This specification attempts to avoid language that would tie it to
    any particular hardware or vendor.  However, it should be noted that
    it has been designed specifically for use with NVIDIA SDI products,
    and the features and limitations of the spec complement those of
    NVIDIA's line of SDI video output devices.
New Procedures and Functions

    void PresentFrameKeyedNV(uint video_slot,
                             uint64EXT minPresentTime,
                             uint beginPresentTimeId,
                             uint presentDurationId,
                             enum type,
                             enum target0, uint fill0, uint key0,
                             enum target1, uint fill1, uint key1);

    void PresentFrameDualFillNV(uint video_slot,
                                uint64EXT minPresentTime,
                                uint beginPresentTimeId,
                                uint presentDurationId,
                                enum type,
                                enum target0, uint fill0,
                                enum target1, uint fill1,
                                enum target2, uint fill2,
                                enum target3, uint fill3);

    void GetVideoivNV(uint video_slot, enum pname, int *params);
    void GetVideouivNV(uint video_slot, enum pname, uint *params);
    void GetVideoi64vNV(uint video_slot, enum pname, int64EXT *params);
    void GetVideoui64vNV(uint video_slot, enum pname,
                         uint64EXT *params);


    unsigned int *glXEnumerateVideoDevicesNV(Display *dpy, int screen,
                                             int *nelements);
    int glXBindVideoDeviceNV(Display *dpy, unsigned int video_slot,
                             unsigned int video_device,
                             const int *attrib_list);


    DECLARE_HANDLE(HVIDEOOUTPUTDEVICENV);

    int wglEnumerateVideoDevicesNV(HDC hDc,
                                   HVIDEOOUTPUTDEVICENV *phDeviceList);
    BOOL wglBindVideoDeviceNV(HDC hDc, unsigned int uVideoSlot,
                              HVIDEOOUTPUTDEVICENV hVideoDevice,
                              const int *piAttribList);
    BOOL wglQueryCurrentContextNV(int iAttribute, int *piValue);

New Tokens

    Accepted by the <type> parameter of PresentFrameKeyedNV and
    PresentFrameDualFillNV:

        FRAME_NV                               0x8E26
        FIELDS_NV                              0x8E27

    Accepted by the <pname> parameter of GetVideoivNV, GetVideouivNV,
    GetVideoi64vNV, GetVideoui64vNV:

        CURRENT_TIME_NV                        0x8E28
        NUM_FILL_STREAMS_NV                    0x8E29

    Accepted by the <target> parameter of GetQueryiv:

        PRESENT_TIME_NV                        0x8E2A
        PRESENT_DURATION_NV                    0x8E2B

    Accepted by the <attribute> parameter of glXQueryContext:

        GLX_NUM_VIDEO_SLOTS_NV                 0x20F0

    Accepted by the <iAttribute> parameter of wglQueryCurrentContextNV:

        WGL_NUM_VIDEO_SLOTS_NV                 0x20F0

Additions to Chapter 2 of the OpenGL 2.1 Specification (OpenGL Operation)

    None

Additions to Chapter 3 of the OpenGL 2.1 Specification (Rasterization)

    None

Additions to Chapter 4 of the OpenGL 2.1 Specification (Per-Fragment
Operations and the Framebuffer)

    Add a new section after Section 4.4:

    "4.5 Displaying Buffers

    "To queue the display of a set of textures or renderbuffers on one
    of the current video output devices, call one of:

        void PresentFrameKeyedNV(uint video_slot,
                                 uint64EXT minPresentTime,
                                 uint beginPresentTimeId,
                                 uint presentDurationId,
                                 enum type,
                                 enum target0, uint fill0, uint key0,
                                 enum target1, uint fill1, uint key1);

        void PresentFrameDualFillNV(uint video_slot,
                                    uint64EXT minPresentTime,
                                    uint beginPresentTimeId,
                                    uint presentDurationId,
                                    enum type,
                                    enum target0, uint fill0,
                                    enum target1, uint fill1,
                                    enum target2, uint fill2,
                                    enum target3, uint fill3);

    "PresentFrameKeyedNV can only be used when one output stream
    is being used for color data.  Key data will be presented on the
    second output stream.  PresentFrameDualFillNV can be used only when
    two output streams are being used for color data.  It will present
    separate color images on each stream simultaneously.

    "The <video_slot> parameter specifies which video output slot
    in the current context this frame should be presented on.  If no
    video output device is bound at <video_slot> at the time of the
    call, INVALID_OPERATION is generated.
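    The following non-normative C sketch illustrates the dual-fill entry
    point.  It assumes a video output device is already bound to video
    slot 1, that two color output streams are active, and that
    <streamTex0> and <streamTex1> are texture names of the required
    dimensions; the GL_-prefixed names are the usual C bindings of the
    commands and tokens above:

        /* Present one full frame on each of the two color streams at
         * the next vertical blanking period (minPresentTime of 0),
         * with no timing queries attached. */
        glPresentFrameDualFillNV(1,            /* video_slot */
                                 0,            /* minPresentTime */
                                 0, 0,         /* no query objects */
                                 GL_FRAME_NV,
                                 GL_TEXTURE_2D, streamTex0,
                                 GL_NONE, 0,   /* target1 unused */
                                 GL_TEXTURE_2D, streamTex1,
                                 GL_NONE, 0);  /* target3 unused */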
    "The value of <minPresentTime> can be set to either the earliest
    time in nanoseconds that the frame should become visible, or the
    special value 0.  Frame presentation is always queued until the
    video output's vertical blanking period.  At that time, the video
    output device will consume the frames in the queue in the order
    they were queued until it finds a frame qualified for display.  A
    frame is qualified if it meets one of the following criteria:

        1) The frame's minimum presentation time is the special value
           zero.

        2) The frame's minimum presentation time is less than or equal
           to the current time and the next queued frame, if it exists,
           has a minimum presentation time greater than the current
           time.

    Any consumed frames not displayed are discarded.  If no qualified
    frames are found, the current frame continues to display.

    "If <beginPresentTimeId> or <presentDurationId> are non-zero, they
    must name valid query objects (see section 4.1.7, Asynchronous
    Queries).  The actual time at which the video output device began
    displaying this frame will be stored in the object referred to by
    <beginPresentTimeId>.  The present frame operations will implicitly
    perform the equivalent of:

        BeginQuery(PRESENT_TIME_NV, <beginPresentTimeId>);
        BeginQuery(PRESENT_DURATION_NV, <presentDurationId>);

    when the respective query object names are valid, followed by the
    actual present operation, then an implicit EndQuery() for each
    query started.  The result can then be obtained asynchronously via
    the GetQueryObject calls with a <target> of PRESENT_TIME_NV or
    PRESENT_DURATION_NV.  The results of a query on the PRESENT_TIME_NV
    target will be the time in nanoseconds when the frame was first
    started scanning out, and will become available at that time.  The
    results of a query on the PRESENT_DURATION_NV target will be the
    number of times this frame was fully scanned out by the video output
    device and will become available when the subsequent frame begins
    scanning out.

    "If the frame was removed from the queue without being displayed,
    the present duration will be zero, and the present time will refer
    to the time in nanoseconds when the first subsequent frame that was
    not skipped began scanning out.

    "The query targets PRESENT_TIME_NV and PRESENT_DURATION_NV may not
    be explicitly used with BeginQuery or EndQuery.  Attempting to do
    so will generate INVALID_ENUM.
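    As a non-normative illustration of the timing mechanism described
    above, the following C sketch presents a single frame roughly 50 ms
    ahead of the device's current time and later reads back when, and
    for how many scan-outs, it was displayed.  It assumes the
    ARB_occlusion_query and EXT_timer_query entry points are available
    (e.g., via GL_GLEXT_PROTOTYPES or an extension loader), that a
    video output device is bound to video slot 1, and that <frameTex>
    names a texture of the required dimensions:

        GLuint      timeQuery, durationQuery;
        GLuint64EXT now, presentTime;
        GLuint      duration;

        glGenQueriesARB(1, &timeQuery);
        glGenQueriesARB(1, &durationQuery);

        /* Schedule the frame no earlier than 50 ms from the video
         * output device's current time. */
        glGetVideoui64vNV(1, GL_CURRENT_TIME_NV, &now);
        glPresentFrameKeyedNV(1,                  /* video_slot */
                              now + 50000000ull,  /* minPresentTime, ns */
                              timeQuery,          /* beginPresentTimeId */
                              durationQuery,      /* presentDurationId */
                              GL_FRAME_NV,
                              GL_TEXTURE_2D, frameTex, 0,
                              GL_NONE, 0, 0);

        /* Some time later: when the frame first started scanning out,
         * and how many times it was scanned out (0 if it was skipped). */
        glGetQueryObjectui64vEXT(timeQuery, GL_QUERY_RESULT,
                                 &presentTime);
        glGetQueryObjectuivARB(durationQuery, GL_QUERY_RESULT_ARB,
                               &duration);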
    "The parameters <type>, <target0>, <fill0>, <key0>, <target1>,
    <fill1>, and <key1> define the data to be displayed on the first
    video output stream.  Valid values for <type> are FIELDS_NV or
    FRAME_NV.  Other values will generate INVALID_ENUM.  The <target0>
    and <target1> parameters can each be one of TEXTURE_2D,
    TEXTURE_RECTANGLE, RENDERBUFFER_EXT, or NONE.  Other values will
    generate INVALID_ENUM.  The <fill0> and <fill1> parameters then name
    an object of the corresponding type from which the color data will
    be read.  Similarly, <key0> and <key1> name an object from which key
    channel data will be read.  If <type> is FIELDS_NV, <target0> and
    <target1> can not be NONE, and <fill0> and <fill1> must both name
    valid image objects or INVALID_VALUE is generated.  If <type> is
    FRAME_NV, <target0> can not be NONE and <fill0> must name a valid
    object or INVALID_VALUE is generated.  Additionally, <target1> must
    be NONE or INVALID_ENUM is generated.  The values of <fill1> and
    <key1> are ignored.

    "A texture object is considered a valid color image object only if
    it is consistent and has a supported internal format.  A
    renderbuffer object is considered a valid image object if its
    internal format has been specified as one of those supported.
    Implementations must support at least the following internal formats
    for presenting color buffers:

        RGB
        RGBA
        RGB16F_ARB
        RGBA16F_ARB
        RGB32F_ARB
        RGBA32F_ARB
        LUMINANCE
        LUMINANCE_ALPHA

    If no separate key object is specified when using a key output
    stream, the key data is taken from the alpha channel of the color
    object if it is present, or is set to 1.0 otherwise.
    Implementations must support at least the following internal formats
    when presenting key stream buffers:

        RGBA
        RGBA16F_ARB
        RGBA32F_ARB
        LUMINANCE_ALPHA
        DEPTH_COMPONENT

    "The key values are read from the alpha channel unless a depth
    format is used.  For depth formats, the key value is the depth
    value.

    "It is legal to use the same image for more than one of <fill0>,
    <fill1>, <key0>, and <key1>.

    "In the following section, which discusses image dimension
    requirements, the image objects named by <fill0> and <key0> are
    collectively referred to as 'image 0' and the image objects named by
    <fill1> and <key1> are collectively referred to as 'image 1'.  The
    dimensions of a pair of fill and key images must be equal.  If using
    PresentFrameDualFillNV, 'image 0' refers only to <fill0>, and
    'image 1' refers only to <fill1>.

    "If <type> is FRAME_NV, image 0 must have a height equal to the
    number of lines displayed per frame on the output device and a width
    equal to the number of pixels per line on the output device or
    INVALID_VALUE will be generated.  Each line in the image will
    correspond to a line displayed on the output device.

    "If <type> is FIELDS_NV, the way in which lines from the image are
    displayed depends on the image's size.  If progressive output is in
    use, image 0 and image 1 must either both have a height equal to the
    number of lines displayed per frame, or both have a height equal to
    the ceiling of half the number of lines displayed per frame.  If an
    interlaced output is in use, the images must either both have a
    height equal to the number of lines displayed per frame, or image 0
    must have a height equal to the number of lines in field one and
    image 1 must have a height equal to the number of lines in field
    two.  The images must both have a width equal to the number of
    pixels per line on the output device.  If any of these conditions
    are not met, INVALID_VALUE is generated.
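    The dimension rules above are easiest to see with a concrete,
    non-normative example.  Assuming an interlaced 1920x1080 output mode
    (540 lines in each field), a device bound to video slot 1, and
    <fieldTex0>/<fieldTex1> naming 1920x540 GL_TEXTURE_RECTANGLE_ARB
    textures holding field one and field two respectively, a field pair
    could be queued as follows:

        /* Present per-field images; no separate key objects are given,
         * so any key stream takes its data from the alpha channel (or
         * 1.0 if there is no alpha channel). */
        glPresentFrameKeyedNV(1,      /* video_slot */
                              0,      /* display at next vblank */
                              0, 0,   /* no timing queries */
                              GL_FIELDS_NV,
                              GL_TEXTURE_RECTANGLE_ARB, fieldTex0, 0,
                              GL_TEXTURE_RECTANGLE_ARB, fieldTex1, 0);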
    "If progressive output is used, the lines are displayed as follows:
    If the images are the same height as a frame, the resulting frame
    displayed is comprised of the first line of image 0, followed by
    the second line of image 1, followed by the third line of image 0,
    and so on until all the lines of a frame have been displayed.  If
    the images are half the height of the frame, the resulting frame
    displayed is comprised of the first line of image 0, followed by the
    first line of image 1, followed by the second line of image 0, and
    so on until the number of lines per frame has been displayed.

    "If interlaced output is used and the images are the same height as
    a frame, the order in which lines are chosen from the images
    depends on the video output mode in use.  If the video output mode
    specifies field 1 as containing the first line of the display, the
    first line of field 1 will come from the first line of image 0,
    followed by the third line from image 0, and so on until the entire
    first field has been displayed.  The first line of field 2 will come
    from the second line of image 1, followed by the fourth line of
    image 1, and so on until the entire second field is displayed.  If
    the mode specifies field 1 as containing the second line of the
    display, the first line of field 1 will come from the second line of
    image 0, followed by the fourth line of image 0, and so on until the
    entire first field is displayed.  The first line of field 2 will
    come from the first line of image 1, followed by the third line of
    image 1, and so on until the entire second field is displayed.

    "If interlaced output is used and the images are the same height as
    individual fields, the order of lines used does not depend on the
    mode in use.  Regardless of the mode used, the first line of the
    first field will come from the first line of image 0, followed by
    the second line of image 0, and so on until the entire first field
    has been displayed.  The first line of the second field will come
    from the first line of image 1, followed by the second line of
    image 1, and so on until the entire second field has been displayed.

    "The parameters <target2>, <fill2>, <target3>, and <fill3> are used
    identically to <target0>, <fill0>, <target1>, and <fill1>
    respectively, but they operate on the second color video output
    stream.

    "If the implementation requires a copy as part of the present frame
    operation, the copy will be transparent to the user and as such will
    bypass the fragment pipeline completely and will not alter any GL
    state."

Additions to Chapter 5 of the OpenGL 2.1 Specification (Special Functions)

    (Add to section 5.4, "Display Lists", page 244, in the list of
    commands that are not compiled into display lists)

    "Display commands: PresentFrameKeyedNV, PresentFrameDualFillNV"

Additions to Chapter 6 of the OpenGL 2.1 Specification (State and
State Requests)

    (In section 6.1.12, Asynchronous Queries, add the following after
    paragraph 6, p. 254)

    For present time queries (PRESENT_TIME_NV), if the minimum number of
    bits is non-zero, it must be at least 64.

    For present duration queries (PRESENT_DURATION_NV), if the minimum
    number of bits is non-zero, it must be at least 1.
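    A non-normative sketch of querying the number of result bits for the
    new targets, using the ARB_occlusion_query C entry points; per the
    language above, a non-zero answer is at least 64 for present time
    and at least 1 for present duration:

        GLint timeBits = 0, durationBits = 0;
        glGetQueryivARB(GL_PRESENT_TIME_NV, GL_QUERY_COUNTER_BITS_ARB,
                        &timeBits);
        glGetQueryivARB(GL_PRESENT_DURATION_NV, GL_QUERY_COUNTER_BITS_ARB,
                        &durationBits);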
    (Replace section 6.1.15, Saving and Restoring State, p. 264)

    Section 6.1.15, Video Output Queries

    Information about a video slot can be queried with the commands

        void GetVideoivNV(uint video_slot, enum pname, int *params);
        void GetVideouivNV(uint video_slot, enum pname, uint *params);
        void GetVideoi64vNV(uint video_slot, enum pname,
                            int64EXT *params);
        void GetVideoui64vNV(uint video_slot, enum pname,
                             uint64EXT *params);

    If <video_slot> is not a valid video slot in the current context or
    no video output device is currently bound at <video_slot>, an
    INVALID_OPERATION is generated.  If <pname> is CURRENT_TIME_NV, the
    current time on the video output device in nanoseconds is returned
    in <params>.  If the time value can not be expressed without using
    more bits than are available in <params>, the value is truncated.
    If <pname> is NUM_FILL_STREAMS_NV, the number of active video output
    streams is returned in <params>.

Additions to Appendix A of the OpenGL 2.1 Specification (Invariance)

    None

Additions to the WGL Specification

    Add a new section "Video Output Devices"

    "WGL video output devices can be used to display images with more
    fine-grained control over the presentation than wglSwapBuffers
    allows.  Use

        int wglEnumerateVideoDevicesNV(HDC hDc,
                                       HVIDEOOUTPUTDEVICENV *phDeviceList);

    to enumerate the available video output devices.

    "This call returns the number of video devices available on <hDC>.
    If <phDeviceList> is non-NULL, an array of valid device handles
    will be returned in it.  The function will assume <phDeviceList> is
    large enough to hold all available handles, so the application
    should take care to first query the number of devices present and
    allocate an appropriate amount of memory.

    "To bind a video output device to the current context, use

        BOOL wglBindVideoDeviceNV(HDC hDc, unsigned int uVideoSlot,
                                  HVIDEOOUTPUTDEVICENV hVideoDevice,
                                  const int *piAttribList);

    "wglBindVideoDeviceNV binds the video output device specified by
    <hVideoDevice> to one of the context's available video output slots
    specified by <uVideoSlot>.  <piAttribList> is a set of attribute
    name-value pairs that affects the bind operation.  Currently there
    are no valid attributes, so <piAttribList> must be either NULL or an
    empty list.  To release a video device without binding another
    device to the same slot, call wglBindVideoDeviceNV with
    <hVideoDevice> set to NULL.  The bound video output device will be
    enabled before wglBindVideoDeviceNV returns.  It will display black
    until the first image is presented on it.  The previously bound
    video device, if any, will also be deactivated before
    wglBindVideoDeviceNV returns.  Video slot 0 is reserved for the GL.
    If wglBindVideoDeviceNV is called with <uVideoSlot> less than 1 or
    greater than the maximum number of video slots supported by the
    current context, if <hVideoDevice> does not refer to a valid video
    output device, or if there is no current context, FALSE will be
    returned.  A return value of TRUE indicates a video device has
    successfully been bound to the video slot.
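    A non-normative C sketch of the enumerate/bind sequence described
    above, assuming <hDC> is an appropriate device context, the entry
    points have been obtained with wglGetProcAddress, and <stdlib.h> is
    available for malloc/free:

        int count = wglEnumerateVideoDevicesNV(hDC, NULL);
        if (count > 0) {
            HVIDEOOUTPUTDEVICENV *devices =
                (HVIDEOOUTPUTDEVICENV *)malloc(count * sizeof(*devices));
            wglEnumerateVideoDevicesNV(hDC, devices);

            /* Bind the first device to video slot 1; slot 0 is
             * reserved for the GL. */
            if (!wglBindVideoDeviceNV(hDC, 1, devices[0], NULL)) {
                /* Bind failed: no current context, bad slot, or bad
                 * device handle. */
            }
            free(devices);
        }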
    Add section "Querying WGL context attributes"

    To query an attribute associated with the current WGL context, use

        BOOL wglQueryCurrentContextNV(int iAttribute, int *piValue);

    wglQueryCurrentContextNV will place the value of the attribute named
    by <iAttribute> in the memory pointed to by <piValue>.  If there is
    no context current or <iAttribute> does not name a valid attribute,
    FALSE will be returned and the memory pointed to by <piValue> will
    not be changed.  Currently the only valid attribute name is
    WGL_NUM_VIDEO_SLOTS_NV.  This attribute contains the number of valid
    video output slots in the current context.


Additions to Chapter 2 of the GLX 1.4 Specification (GLX Operation)

    None

Additions to Chapter 3 of the GLX 1.4 Specification (Functions and Errors)

    Modify table 3.5:

        Attribute               Type  Description
        ----------------------  ----  -------------------------------------
        GLX_FBCONFIG_ID         XID   XID of GLXFBConfig associated with
                                      context
        GLX_RENDER_TYPE         int   type of rendering supported
        GLX_SCREEN              int   screen number
        GLX_NUM_VIDEO_SLOTS_NV  int   number of video output slots this
                                      context supports

    Add a section between Sections 3.3.10 and 3.3.11:

    3.3.10a Video Output Devices

    "GLX video output devices can be used to display images with more
    fine-grained control over the presentation than glXSwapBuffers
    allows.  Use

        unsigned int *glXEnumerateVideoDevicesNV(Display *dpy,
                                                 int screen,
                                                 int *nElements);

    to enumerate the available video output devices.

    "This call returns an array of unsigned ints.  The number of
    elements in the array is returned in <nElements>.  Each entry in the
    array names a valid video output device.  Use XFree to free the
    memory returned by glXEnumerateVideoDevicesNV.

    "To bind a video output device to the current context, use

        Bool glXBindVideoDeviceNV(Display *dpy,
                                  unsigned int video_slot,
                                  unsigned int video_device,
                                  const int *attrib_list);

    "glXBindVideoDeviceNV binds the video output device specified
    by <video_device> to one of the context's available video
    output slots specified by <video_slot>.  <attrib_list> is a
    set of attribute name-value pairs that affects the bind
    operation.  Currently there are no valid attributes, so
    <attrib_list> must be either NULL or an empty list.  To release a
    video device without binding another device to the same slot, call
    glXBindVideoDeviceNV with <video_device> set to "0".  Video slot 0
    is reserved for the GL.  The bound video output device will be
    enabled before glXBindVideoDeviceNV returns.  It will display black
    until the first image is presented on it.  The previously bound
    video device, if any, will also be deactivated before
    glXBindVideoDeviceNV returns.  If glXBindVideoDeviceNV is called
    with <video_slot> less than 1 or greater than the maximum number of
    video slots supported by the current context, BadValue is generated.
    If <video_device> does not refer to a valid video output device,
    BadValue is generated.  If <attrib_list> contains an invalid
    attribute or an invalid attribute value, BadValue is generated.  If
    glXBindVideoDeviceNV is called without a current context,
    GLXBadContext is generated.
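    A non-normative C sketch of the corresponding GLX sequence, assuming
    <dpy> is an open Display, <ctx> is the current GLXContext, and the
    devices of interest live on screen 0:

        int nDevices = 0, nSlots = 0;
        unsigned int *devices =
            glXEnumerateVideoDevicesNV(dpy, 0, &nDevices);

        /* How many video output slots does this context support? */
        glXQueryContext(dpy, ctx, GLX_NUM_VIDEO_SLOTS_NV, &nSlots);

        /* Bind the first device to slot 1 (slot 0 is reserved). */
        if (devices && nDevices > 0 && nSlots >= 1)
            glXBindVideoDeviceNV(dpy, 1, devices[0], NULL);

        if (devices)
            XFree(devices);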
Additions to Chapter 4 of the GLX 1.4 Specification (Encoding on the X
Byte Stream)

    None

Additions to Chapter 5 of the GLX 1.4 Specification (Extending OpenGL)

    None

Additions to Chapter 6 of the GLX 1.4 Specification (GLX Versions)

    None

GLX Protocol

    BindVideoDeviceNV
        1           CARD8           opcode (X assigned)
        1           17              GLX opcode (glXVendorPrivateWithReply)
        2           6+n             request length
        4           1332            vendor specific opcode
        4           CARD32          context tag
        4           CARD32          video_slot
        4           CARD32          video_device
        4           CARD32          num_attribs
        4*n         LISTofATTRIBUTE_PAIR  attribute, value pairs
      =>
        1           CARD8           reply
        1                           unused
        2           CARD16          sequence number
        4           0               reply length
        4           CARD32          status
        20                          unused

    EnumerateVideoDevicesNV
        1           CARD8           opcode (X assigned)
        1           17              GLX opcode (glXVendorPrivateWithReply)
        2           4               request length
        4           1333            vendor specific opcode
        4                           unused
        4           CARD32          screen
      =>
        1           CARD8           reply
        1                           unused
        2           CARD16          sequence number
        4           n               reply length
        4           CARD32          num_devices
        4*n         LISTofCARD32    device names

    PresentFrameKeyedNV
        1           CARD8           opcode (X assigned)
        1           16              GLX opcode (glXVendorPrivate)
        2           15              request length
        4           1334            vendor specific opcode
        4           CARD32          context tag
        8           CARD64          minPresentTime
        4           CARD32          video_slot
        4           CARD32          beginPresentTimeId
        4           CARD32          presentDurationId
        4           CARD32          type
        4           CARD32          target0
        4           CARD32          fill0
        4           CARD32          key0
        4           CARD32          target1
        4           CARD32          fill1
        4           CARD32          key1

    PresentFrameDualFillNV
        1           CARD8           opcode (X assigned)
        1           16              GLX opcode (glXVendorPrivate)
        2           17              request length
        4           1335            vendor specific opcode
        4           CARD32          context tag
        8           CARD64          minPresentTime
        4           CARD32          video_slot
        4           CARD32          beginPresentTimeId
        4           CARD32          presentDurationId
        4           CARD32          type
        4           CARD32          target0
        4           CARD32          fill0
        4           CARD32          target1
        4           CARD32          fill1
        4           CARD32          target2
        4           CARD32          fill2
        4           CARD32          target3
        4           CARD32          fill3

    GetVideoivNV
        1           CARD8           opcode (X assigned)
        1           17              GLX opcode (glXVendorPrivateWithReply)
        2           4               request length
        4           1336            vendor specific opcode
        4           CARD32          context tag
        4           CARD32          video_slot
        4           CARD32          pname
      =>
        1           CARD8           reply
        1                           unused
        2           CARD16          sequence number
        4           m               reply length, m = (n==1 ? 0 : n)
        4                           unused
        4           CARD32          n

        if (n=1) this follows:

        4           INT32           params
        12                          unused

        otherwise this follows:

        16                          unused
        n*4         LISTofINT32     params

    GetVideouivNV
        1           CARD8           opcode (X assigned)
        1           17              GLX opcode (glXVendorPrivateWithReply)
        2           4               request length
        4           1337            vendor specific opcode
        4           CARD32          context tag
        4           CARD32          video_slot
        4           CARD32          pname
      =>
        1           CARD8           reply
        1                           unused
        2           CARD16          sequence number
        4           m               reply length, m = (n==1 ? 0 : n)
        4                           unused
        4           CARD32          n

        if (n=1) this follows:

        4           CARD32          params
        12                          unused

        otherwise this follows:

        16                          unused
        n*4         LISTofCARD32    params

    GetVideoi64vNV
        1           CARD8           opcode (X assigned)
        1           17              GLX opcode (glXVendorPrivateWithReply)
        2           4               request length
        4           1338            vendor specific opcode
        4           CARD32          context tag
        4           CARD32          video_slot
        4           CARD32          pname
      =>
        1           CARD8           reply
        1                           unused
        2           CARD16          sequence number
        4           m               reply length, m = (n==1 ? 0 : n)
        4                           unused
        4           CARD32          n

        if (n=1) this follows:

        8           INT64           params
        8                           unused

        otherwise this follows:

        16                          unused
        n*8         LISTofINT64EXT  params

    GetVideoui64vNV
        1           CARD8           opcode (X assigned)
        1           17              GLX opcode (glXVendorPrivateWithReply)
        2           4               request length
        4           1339            vendor specific opcode
        4           CARD32          context tag
        4           CARD32          video_slot
        4           CARD32          pname
      =>
        1           CARD8           reply
        1                           unused
        2           CARD16          sequence number
        4           m               reply length, m = (n==1 ? 0 : n)
        4                           unused
        4           CARD32          n

        if (n=1) this follows:

        8           CARD64          params
        8                           unused

        otherwise this follows:

        16                          unused
        n*8         LISTofCARD64    params


Dependencies on ARB_occlusion_query:

    The generic query objects introduced in ARB_occlusion_query are
    used as a method to asynchronously deliver timing data to the
    application.  The language describing the BeginQueryARB and
    EndQueryARB API is also relevant, as the same operations are
    implicitly performed by PresentFrameKeyedNV and
    PresentFrameDualFillNV.

Dependencies on EXT_timer_query:

    The 64-bit types introduced in EXT_timer_query are used in this
    extension to specify time values with nanosecond accuracy.

Dependencies on ARB_texture_float:

    If ARB_texture_float is not supported, the floating point internal
    formats are removed from the list of internal formats required to be
    supported by the PresentFrame functions.

Dependencies on EXT_framebuffer_object:

    If EXT_framebuffer_object is not supported, all references to
    targets of type RENDERBUFFER_EXT should be removed from the spec
    language.

Dependencies on GLX_NV_video_out:

    Video output resources can not be used simultaneously with this
    extension and GLX_NV_video_out.  If an application on the system has
    obtained a video device handle from GLX_NV_video_out, no other
    application may bind any video out devices using this spec until all
    GLX_NV_video_out devices have been released.  Similarly, if an
    application has bound a video out device using this spec, no other
    applications on the system can obtain a GLX_NV_video_out device
    handle until all devices have been unbound.

Dependencies on WGL_ARB_extensions_string:

    Because there is no way to extend wgl, these calls are defined in
    the ICD and can be called by obtaining the address with
    wglGetProcAddress.  The WGL extension string is not included in the
    GL_EXTENSIONS string.  Its existence can be determined with the
    WGL_ARB_extensions_string extension.

Dependencies on WGL_NV_video_out:

    Video output resources can not be used simultaneously with this
    extension and WGL_NV_video_out.  If an application on the system has
    obtained a video device handle from WGL_NV_video_out, no other
    application may bind any video out devices using this spec until all
    WGL_NV_video_out devices have been released.  Similarly, if an
    application has bound a video out device using this spec, no other
    applications on the system can obtain a WGL_NV_video_out device
    handle until all devices have been unbound.


Errors


New State
    Get Value               Type  Get Command       Init. Value  Description               Sec    Attribute
    ----------------------  ----  ----------------  -----------  ------------------------  -----  ---------
    CURRENT_QUERY           4xZ+  GetQueryiv        0            Active query object name  4.1.7  -
                                                                 (occlusion, timer,
                                                                 present time, and
                                                                 present duration)
    QUERY_RESULT            4xZ+  GetQueryObjectiv  0            Query object result       4.1.7  -
                                                                 (samples passed,
                                                                 time elapsed,
                                                                 present time, or
                                                                 present duration)
    QUERY_RESULT_AVAILABLE  4xB   GetQueryObjectiv  TRUE         Query object result       4.1.7  -
                                                                 available?
    CURRENT_TIME_NV         1xZ   GetVideoui64vNV   0            Video device timer        4.4    -


New Implementation Dependent State

    Get Value            Type  Get Command    Minimum Value  Description                 Sec     Attribute
    -------------------  ----  -------------  -------------  --------------------------  ------  ---------
    NUM_FILL_STREAMS_NV  1xZ   GetVideouivNV  0              Number of video streams     4.4     -
                                                             active on a video slot
    NUM_VIDEO_SLOTS_NV   1xZ   GetIntegerv    1              Number of video slots a     4.4     -
                                                             context supports
    QUERY_COUNTER_BITS   4xZ+  GetQueryiv     see 6.1.12     Asynchronous query counter  6.1.12  -
                                                             bits (occlusion, timer,
                                                             present time, and present
                                                             duration queries)


Issues

    1) How does the user enumerate video devices?

       RESOLVED: There will be OS-specific functions that
       will enumerate OS-specific identifiers that refer to video
       devices.  On WGL, this will likely be tied to an hDC.  GPU
       affinity can then be used to enumerate SDI devices even on GPUs
       that are not used as part of the Windows desktop.  On GLX,
       SDI devices can be enumerated per X screen.

    2) How does the user specify data for the second output?

       RESOLVED: There will be a separate entry point that accepts up
       to 4 buffers total.

    3) When is SDI output actually enabled?

       RESOLVED: The BindVideoDevice functions will enable and disable
       SDI output.

    4) Should the PresentFrame functions return the frame
       count/identifier?

       RESOLVED: No.  PresentFrame will instead accept two query
       object IDs and will implicitly begin and end a query on each
       of these objects.  The first object's query target will be
       PRESENT_TIME_EXT.  Its result will be the time in nanoseconds
       when the frame was first displayed, and will become available
       when the frame begins displaying, or when a subsequent frame
       begins displaying if this frame is skipped.  The second
       object's query target will be PRESENT_LENGTH_EXT.  The result
       will be the number of full-frame vblanks that occurred while
       the frame was displayed.  This result will become available when
       the next frame begins displaying.  If the frame was skipped,
       this value will be 0 and the PRESENT_TIME_EXT result will refer
       to the time when the first subsequent frame that was not skipped
       began displaying.

    5) Should there be any other queryable video output device
       attributes?

       RESOLVED: There are none.  The glXQueryVideoDeviceNV and
       wglQueryVideoDeviceNV calls have been removed from this
       specification.  They can be added in a separate extension if
       they are ever needed.

    6) Should this spec require a timed present mechanism?

       RESOLVED: Yes, this spec will include a mechanism for presenting
       frames at a specified absolute time and a method for querying
       when frames were displayed to allow apps to adjust their
       rendering time.  Leaving this out would weaken the PresentFrame
       mechanism considerably.
    7) Should this specification allow downsampling as part of the
       present operation?

       RESOLVED: No, this functionality can retroactively be added to
       the PresentFrame functions as part of a later spec if necessary.

    8) What happens when two outputs are enabled but only one output's
       worth of buffers are specified?

       RESOLVED: This will be an invalid operation.  If two outputs are
       enabled, data must be presented on both of them for every frame.

    9) What section of the spec should the PresentFrame functions be in?

       RESOLVED: A new section has been added to Chapter 4 to describe
       functions that control the displaying of buffers.

    10) What should this extension be called?

        RESOLVED: The original name for this specification was
        NV_video_framebuffer because the motivation for creating this
        extension came from the need to expose a method for sending
        framebuffer objects to an SDI video output device.  However, it
        has grown beyond that purpose and no longer even requires
        EXT_framebuffer_object to function.  For these reasons, it has
        been renamed NV_present_video.

    11) Should a "stacked fields" mode be added to allow the application
        to specify two fields vertically concatenated in one buffer?

        RESOLVED: No.  The stacked fields in previous extensions were a
        workaround to allow the application to specify two fields at
        once with an API that only accepted one image at a time.  Since
        this extension requires all buffers that make up a frame to be
        specified simultaneously, stacked fields are not needed.

    12) Should there be a separate function for presenting output data
        for one stream?

        RESOLVED: Yes.  To clarify the different types of data needed
        for single and dual stream modes, two separate entry points are
        provided.

    13) Should we allow users to override the mode-defined mapping
        between frame-height buffer lines and field lines?

        RESOLVED: No.  Not only does this seem unnecessary, it is also
        impractical.  If a mode has an odd number of lines, the
        application would need to specify incorrectly sized buffers to
        satisfy the line choosing rules as they are specified currently.

Revision History

    Revision 8, 2011/7/8
      - Fix wglBindVideoDeviceNV specification to match implemented
        behavior.

    Revision 7, 2009/2/20
      - Remove unused VideoParameterivNV command.

    Revision 6, 2008/2/20
      - Public specification