i.MX Video Capture Driver
=========================

Introduction
------------

The Freescale i.MX5/6 contains an Image Processing Unit (IPU), which
handles the flow of image frames to and from capture devices and
display devices.

For image capture, the IPU contains the following internal subunits:

- Image DMA Controller (IDMAC)
- Camera Serial Interface (CSI)
- Image Converter (IC)
- Sensor Multi-FIFO Controller (SMFC)
- Image Rotator (IRT)
- Video De-Interlacing or Combining Block (VDIC)

The IDMAC is the DMA controller for transfer of image frames to and from
memory. Various dedicated DMA channels exist for both video capture and
display paths. During transfer, the IDMAC is also capable of vertical
image flip, 8x8 block transfer (see IRT description), pixel component
re-ordering (for example UYVY to YUYV) within the same colorspace, and
even packed <--> planar conversion. It can also perform simple
de-interlacing by interleaving even and odd lines during transfer
(without motion compensation, which requires the VDIC).

The CSI is the backend capture unit that interfaces directly with
camera sensors over Parallel, BT.656/1120, and MIPI CSI-2 busses.

The IC handles color-space conversion, resizing (downscaling and
upscaling), horizontal flip, and 90/270 degree rotation operations.

There are three independent "tasks" within the IC that can carry out
conversions concurrently: pre-process encoding, pre-process viewfinder,
and post-processing. Within each task, conversions are split into three
sections: a downsizing section, a main section (upsizing, flip,
colorspace conversion, and graphics plane combining), and a rotation
section.

The IPU time-shares the IC task operations. The time-slice granularity
is one burst of eight pixels in the downsizing section, one image line
in the main processing section, and one image frame in the rotation
section.

The SMFC is composed of four independent FIFOs that can each transfer
captured frames from sensors directly to memory concurrently, via four
IDMAC channels.

The IRT carries out 90 and 270 degree image rotation operations. The
rotation operation is carried out on 8x8 pixel blocks at a time. This
operation is supported by the IDMAC, which handles the 8x8 block
transfer along with block reordering, in coordination with vertical
flip.

The VDIC handles the conversion of interlaced video to progressive, with
support for different motion compensation modes (low, medium, and high
motion). The de-interlaced output frames from the VDIC can be sent to
the IC pre-process viewfinder task for further conversions. The VDIC
also contains a Combiner that combines two image planes, with alpha
blending and color keying.

In addition to the IPU internal subunits, there are two units outside
the IPU that are also involved in video capture on i.MX:

- MIPI CSI-2 Receiver for camera sensors with the MIPI CSI-2 bus
  interface. This is a Synopsys DesignWare core.
- Two video multiplexers for selecting among multiple sensor inputs
  to send to a CSI.

For more info, refer to the latest versions of the i.MX5/6 reference
manuals [#f1]_ and [#f2]_.


Features
--------

Some of the features of this driver include:

- Many different pipelines can be configured via the media controller
  API, corresponding to the hardware video capture pipelines supported
  by the i.MX.

- Supports parallel, BT.656, and MIPI CSI-2 interfaces.

- Concurrent independent streams, by configuring pipelines to multiple
  video capture interfaces using independent entities.

- Scaling, color-space conversion, horizontal and vertical flip, and
  image rotation via the IC task subdevs.

- Many pixel formats supported (RGB, packed and planar YUV, partial
  planar YUV).

- The VDIC subdev supports motion compensated de-interlacing, with three
  motion compensation modes: low, medium, and high motion. Pipelines are
  defined that allow sending frames to the VDIC subdev directly from the
  CSI. There is also planned future support for sending frames to the
  VDIC from memory buffers via an output/mem2mem device.

- Includes a Frame Interval Monitor (FIM) that can correct vertical sync
  problems with the ADV718x video decoders.


Entities
--------

imx6-mipi-csi2
--------------

This is the MIPI CSI-2 receiver entity. It has one sink pad to receive
the MIPI CSI-2 stream (usually from a MIPI CSI-2 camera sensor). It has
four source pads, corresponding to the four MIPI CSI-2 demuxed virtual
channel outputs. Multiple source pads can be enabled to independently
stream from multiple virtual channels.

This entity actually consists of two sub-blocks. One is the MIPI CSI-2
core, a Synopsys DesignWare MIPI CSI-2 core. The other sub-block is a
"CSI-2 to IPU gasket". The gasket acts as a demultiplexer of the four
virtual channel streams, providing four separate parallel buses
containing each virtual channel that are routed to CSIs or video
multiplexers as described below.

On i.MX6 solo/dual-lite, all four virtual channel buses are routed to
two video multiplexers. Both CSI0 and CSI1 can receive any virtual
channel, as selected by the video multiplexers.

On i.MX6 Quad, virtual channel 0 is routed to IPU1-CSI0 (after being
selected by a video mux), virtual channels 1 and 2 are hard-wired to
IPU1-CSI1 and IPU2-CSI0, respectively, and virtual channel 3 is routed
to IPU2-CSI1 (again selected by a video mux).

ipuX_csiY_mux
-------------

These are the video multiplexers. They have two or more sink pads to
select from either camera sensors with a parallel interface, or from
MIPI CSI-2 virtual channels from the imx6-mipi-csi2 entity. They have a
single source pad that routes to a CSI (ipuX_csiY entities).

On i.MX6 solo/dual-lite, there are two video mux entities. One sits
in front of IPU1-CSI0 to select between a parallel sensor and any of
the four MIPI CSI-2 virtual channels (a total of five sink pads). The
other mux sits in front of IPU1-CSI1, and again has five sink pads to
select between a parallel sensor and any of the four MIPI CSI-2 virtual
channels.

On i.MX6 Quad, there are two video mux entities. One sits in front of
IPU1-CSI0 to select between a parallel sensor and MIPI CSI-2 virtual
channel 0 (two sink pads). The other mux sits in front of IPU2-CSI1 to
select between a parallel sensor and MIPI CSI-2 virtual channel 3 (two
sink pads).

ipuX_csiY
---------

These are the CSI entities. They have a single sink pad receiving from
either a video mux or from a MIPI CSI-2 virtual channel as described
above.

This entity has two source pads. The first source pad can link directly
to the ipuX_vdic entity or the ipuX_ic_prp entity, using hardware links
that require no IDMAC memory buffer transfer.

When the direct source pad is routed to the ipuX_ic_prp entity, frames
from the CSI can be processed by one or both of the IC pre-processing
tasks.

When the direct source pad is routed to the ipuX_vdic entity, the VDIC
will carry out motion-compensated de-interlace using "high motion" mode
(see the description of the ipuX_vdic entity).
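
For example, assuming IPU1 CSI0 (entity and pad numbering here follows
the board examples later in this document), the direct source pad is
pad 1 and can be linked to either downstream entity with media-ctl:

.. code-block:: none

    # CSI0 direct pad to the IC pre-processing router (IC direct pipelines)
    media-ctl -l "'ipu1_csi0':1 -> 'ipu1_ic_prp':0[1]"
    # ... or CSI0 direct pad to the VDIC (motion compensated de-interlace)
    media-ctl -l "'ipu1_csi0':1 -> 'ipu1_vdic':0[1]"

A given pipeline enables only one of these links, depending on whether
motion compensated de-interlacing is required (see Capture Pipelines
below).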

The second source pad sends video frames directly to memory buffers
via the SMFC and an IDMAC channel, bypassing IC pre-processing. This
source pad is routed to a capture device node, with a node name of the
format "ipuX_csiY capture".

Note that since the IDMAC source pad makes use of an IDMAC channel, it
can do pixel reordering within the same colorspace. For example, the
sink pad can take UYVY2X8, but the IDMAC source pad can output YUYV2X8.
If the sink pad is receiving YUV, the output at the capture device can
also be converted to a planar YUV format such as YUV420.

It will also perform simple de-interlace without motion compensation,
which is activated if the sink pad's field type is an interlaced type
and the IDMAC source pad field type is set to none.
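
As a minimal sketch of this (assuming ipu1_csi0 receives a 720x480
interlaced source through ipu1_csi0_mux, as in the SabreAuto example
later in this document), simple de-interlace is enabled by setting an
interlaced field type on the sink side and "none" at the IDMAC source
pad:

.. code-block:: none

    # The CSI sink side receives interlaced fields from the mux
    media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x480 field:interlaced]"
    # The IDMAC source pad outputs progressive frames
    media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/720x480 field:none]"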

This subdev can generate the following event when enabling the second
IDMAC source pad:

- V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR

The user application can subscribe to this event from the ipuX_csiY
subdev node. This event is generated by the Frame Interval Monitor
(see below for more on the FIM).

Cropping in ipuX_csiY
---------------------

The CSI supports cropping the incoming raw sensor frames. This is
implemented in the ipuX_csiY entities at the sink pad, using the
crop selection subdev API.

The CSI also supports fixed divide-by-two downscaling, independently in
width and height. This is implemented in the ipuX_csiY entities at
the sink pad, using the compose selection subdev API.

The output rectangle at the ipuX_csiY source pad is the same as the
compose rectangle at the sink pad, so the source pad rectangle cannot
be negotiated; it must be set using the compose selection API at the
sink pad (if /2 downscaling is desired, otherwise the source pad
rectangle is equal to the incoming rectangle).

As an example of crop and /2 downscale, the following crops a 1280x960
input frame to 640x480, and then /2 downscales in both dimensions to
320x240 (this assumes ipu1_csi0 is linked to ipu1_csi0_mux):

.. code-block:: none

    media-ctl -V "'ipu1_csi0_mux':2[fmt:UYVY2X8/1280x960]"
    media-ctl -V "'ipu1_csi0':0[crop:(0,0)/640x480]"
    media-ctl -V "'ipu1_csi0':0[compose:(0,0)/320x240]"

Frame Skipping in ipuX_csiY
---------------------------

The CSI supports frame rate decimation via frame skipping. Frame
rate decimation is specified by setting the frame intervals at the
sink and source pads. The ipuX_csiY entity then applies the best
frame skip setting to the CSI to achieve the desired frame rate
at the source pad.

The following example reduces an assumed incoming 60 Hz frame
rate by half at the IDMAC output source pad:

.. code-block:: none

    media-ctl -V "'ipu1_csi0':0[fmt:UYVY2X8/640x480@1/60]"
    media-ctl -V "'ipu1_csi0':2[fmt:UYVY2X8/640x480@1/30]"

Frame Interval Monitor in ipuX_csiY
-----------------------------------

The ADV718x decoders can occasionally send corrupt fields during
NTSC/PAL signal re-sync (too few or too many video lines). When this
happens, the IPU triggers a mechanism to re-establish vertical sync by
adding one dummy line every frame, which causes a rolling effect from
image to image, and can last a long time before a stable image is
recovered. Sometimes the mechanism does not work at all, causing a
permanent split image (one frame contains lines from two consecutive
captured images).

From experiment it was found that during image rolling, the frame
intervals (elapsed time between two EOFs) drop below the nominal
value for the current standard, by about one frame time (60 usec),
and remain at that value until rolling stops.

While the reason for this observation isn't known (the IPU dummy
line mechanism should show an increase in the intervals by one line
time every frame, not a fixed value), we can use it to detect the
corrupt fields using a frame interval monitor. If the FIM detects a
bad frame interval, the ipuX_csiY subdev will send the
V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR event. Userland can register for
the FIM event notification on the ipuX_csiY subdev device node, and
issue a streaming restart when this event is received to correct the
rolling/split image.

The ipuX_csiY subdev includes custom controls to tweak some dials for
the FIM. If one of these controls is changed during streaming, the FIM
will be reset and will continue with the new settings.

- V4L2_CID_IMX_FIM_ENABLE

Enable/disable the FIM.

- V4L2_CID_IMX_FIM_NUM

How many frame interval measurements to average before comparing against
the nominal frame interval reported by the sensor. This can reduce noise
caused by interrupt latency.

- V4L2_CID_IMX_FIM_TOLERANCE_MIN

If the averaged intervals fall outside the nominal interval by this
amount, in microseconds, the V4L2_EVENT_IMX_FRAME_INTERVAL_ERROR event
is sent.

- V4L2_CID_IMX_FIM_TOLERANCE_MAX

If any interval is higher than this value, that sample is discarded
and does not enter into the average. This can be used to discard
really high interval errors that might be due to interrupt latency
from high system load.

- V4L2_CID_IMX_FIM_NUM_SKIP

How many frames to skip after a FIM reset or stream restart before
the FIM begins to average intervals.

- V4L2_CID_IMX_FIM_ICAP_CHANNEL
- V4L2_CID_IMX_FIM_ICAP_EDGE

These controls configure an input capture channel as the method for
measuring frame intervals. This is superior to the default method of
measuring frame intervals via the EOF interrupt, since it is not
subject to uncertainty errors introduced by interrupt latency.

Input capture requires hardware support. A VSYNC signal must be routed
to one of the i.MX6 input capture channel pads.

V4L2_CID_IMX_FIM_ICAP_CHANNEL configures which i.MX6 input capture
channel to use. This must be 0 or 1.

V4L2_CID_IMX_FIM_ICAP_EDGE configures which signal edge will trigger
input capture events. By default the input capture method is disabled
with a value of IRQ_TYPE_NONE. Set this control to IRQ_TYPE_EDGE_RISING,
IRQ_TYPE_EDGE_FALLING, or IRQ_TYPE_EDGE_BOTH to enable input capture,
triggered on the given signal edge(s).

When input capture is disabled, frame intervals will be measured via
the EOF interrupt.
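
As a sketch of how the FIM might be driven from userspace via the
inherited controls on the capture device node (see Usage Notes below;
/dev/video0 and the angle-bracketed control names are placeholders, use
the names actually reported by v4l2-ctl):

.. code-block:: none

    # List all controls, including the FIM controls, on the capture node
    v4l2-ctl -d /dev/video0 --list-ctrls
    # Enable the FIM and set a tolerance, using the control names
    # reported by the previous command
    v4l2-ctl -d /dev/video0 --set-ctrl=<fim_enable>=1,<fim_tolerance_min>=20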

ipuX_vdic
---------

The VDIC carries out motion compensated de-interlacing, with three
motion compensation modes: low, medium, and high motion. The mode is
specified with the menu control V4L2_CID_DEINTERLACING_MODE. The entity
has two sink pads and a single source pad.

The direct sink pad receives from an ipuX_csiY direct pad. With this
link the VDIC can only operate in high motion mode.

When the IDMAC sink pad is activated, it receives from an output
or mem2mem device node. With this pipeline, the VDIC can also operate
in low and medium modes, because these modes require receiving
frames from memory buffers. Note that an output or mem2mem device
is not implemented yet, so this sink pad currently has no links.

The source pad routes to the IC pre-processing entity ipuX_ic_prp.

ipuX_ic_prp
-----------

This is the IC pre-processing entity. It acts as a router, routing
data from its sink pad to one or both of its source pads.

It has a single sink pad. The sink pad can receive from the ipuX_csiY
direct pad, or from ipuX_vdic.

This entity has two source pads. One source pad routes to the
pre-process encode task entity (ipuX_ic_prpenc), the other to the
pre-process viewfinder task entity (ipuX_ic_prpvf). Both source pads
can be activated at the same time if the sink pad is receiving from
ipuX_csiY. Only the source pad to the pre-process viewfinder task
entity can be activated if the sink pad is receiving from ipuX_vdic
(frames from the VDIC can only be processed by the pre-process
viewfinder task).

ipuX_ic_prpenc
--------------

This is the IC pre-processing encode entity. It has a single sink
pad from ipuX_ic_prp, and a single source pad. The source pad is
routed to a capture device node, with a node name of the format
"ipuX_ic_prpenc capture".

This entity performs the IC pre-process encode task operations:
color-space conversion, resizing (downscaling and upscaling),
horizontal and vertical flip, and 90/270 degree rotation. Flip
and rotation are provided via standard V4L2 controls.

Like the ipuX_csiY IDMAC source, it can also perform simple de-interlace
without motion compensation, and pixel reordering.

ipuX_ic_prpvf
-------------

This is the IC pre-processing viewfinder entity. It has a single sink
pad from ipuX_ic_prp, and a single source pad. The source pad is routed
to a capture device node, with a node name of the format
"ipuX_ic_prpvf capture".

It is identical in operation to ipuX_ic_prpenc, with the same resizing
and CSC operations and flip/rotation controls. It will receive and
process de-interlaced frames from the ipuX_vdic if ipuX_ic_prp is
receiving from ipuX_vdic.

Like the ipuX_csiY IDMAC source, it can perform simple de-interlace
without motion compensation. However, note that if the ipuX_vdic is
included in the pipeline (ipuX_ic_prp is receiving from ipuX_vdic),
it is not possible to use simple de-interlace in ipuX_ic_prpvf, since
the ipuX_vdic has already carried out de-interlacing (with motion
compensation) and therefore the field type output from ipuX_ic_prp can
only be none.
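
Because flip and rotation are standard V4L2 controls, and the capture
device nodes inherit controls from the entities in the active pipeline
(see Usage Notes below), they can also be set with v4l2-ctl. A minimal
sketch, assuming the "ipuX_ic_prpenc capture" node appears as
/dev/video1 and using the control names as reported by
"v4l2-ctl --list-ctrls":

.. code-block:: none

    # Rotate 90 degrees and flip horizontally at the prpenc capture node
    v4l2-ctl -d /dev/video1 --set-ctrl=rotate=90,horizontal_flip=1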

Capture Pipelines
-----------------

The following describes the various use-cases supported by the
pipelines.

The links shown do not include the backend sensor, video mux, or
MIPI CSI-2 receiver links, since those depend on the type of sensor
interface (parallel or MIPI CSI-2). So these pipelines begin with:

sensor -> ipuX_csiY_mux -> ...

for parallel sensors, or:

sensor -> imx6-mipi-csi2 -> (ipuX_csiY_mux) -> ...

for MIPI CSI-2 sensors. The imx6-mipi-csi2 receiver may need to route
to the video mux (ipuX_csiY_mux) before sending to the CSI, depending
on the MIPI CSI-2 virtual channel, hence ipuX_csiY_mux is shown in
parentheses.

Unprocessed Video Capture:
--------------------------

Send frames directly from the sensor to the capture device interface
node, with no conversions, via the ipuX_csiY IDMAC source pad:

-> ipuX_csiY:2 -> ipuX_csiY capture

IC Direct Conversions:
----------------------

This pipeline uses the pre-process encode entity to route frames
directly from the CSI to the IC, to carry out scaling up to 1024x1024
resolution, CSC, flipping, and image rotation:

-> ipuX_csiY:1 -> 0:ipuX_ic_prp:1 -> 0:ipuX_ic_prpenc:1 ->
   ipuX_ic_prpenc capture

Motion Compensated De-interlace:
--------------------------------

This pipeline routes frames from the CSI direct pad to the VDIC entity
to support motion-compensated de-interlacing (high motion mode only),
scaling up to 1024x1024, CSC, flip, and rotation:

-> ipuX_csiY:1 -> 0:ipuX_vdic:2 -> 0:ipuX_ic_prp:2 ->
   0:ipuX_ic_prpvf:1 -> ipuX_ic_prpvf capture


Usage Notes
-----------

To aid in configuration and for backward compatibility with V4L2
applications that access controls only from video device nodes, the
capture device interfaces inherit controls from the active entities
in the current pipeline, so controls can be accessed either directly
from the subdev or from the active capture device interface. For
example, the FIM controls are available either from the ipuX_csiY
subdevs or from the active capture device.
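
When building any of these pipelines, the full topology, including the
entity, pad, and link names used in the examples below, can be
inspected with media-ctl; the media device node number may differ:

.. code-block:: none

    # Print all entities, pads, and links of the i.MX media device
    media-ctl -d /dev/media0 -p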

The following are specific usage notes for the Sabre* reference
boards:


SabreLite with OV5642 and OV5640
--------------------------------

This platform requires the OmniVision OV5642 module with a parallel
camera interface, and the OV5640 module with a MIPI CSI-2
interface. Both modules are available from Boundary Devices:

https://boundarydevices.com/product/nit6x_5mp
https://boundarydevices.com/product/nit6x_5mp_mipi

Note that if only one camera module is available, the other sensor
node can be disabled in the device tree.

The OV5642 module is connected to the parallel bus input on the i.MX
internal video mux to IPU1 CSI0. Its i2c bus connects to i2c bus 2.

The MIPI CSI-2 OV5640 module is connected to the i.MX internal MIPI
CSI-2 receiver, and the four virtual channel outputs from the receiver
are routed as follows: vc0 to the IPU1 CSI0 mux, vc1 directly to IPU1
CSI1, vc2 directly to IPU2 CSI0, and vc3 to the IPU2 CSI1 mux. The
OV5640 is also connected to i2c bus 2 on the SabreLite, therefore the
OV5642 and OV5640 must not share the same i2c slave address.

The following basic example configures unprocessed video capture
pipelines for both sensors. The OV5642 is routed to ipu1_csi0, and
the OV5640, transmitting on MIPI CSI-2 virtual channel 1 (which is
imx6-mipi-csi2 pad 2), is routed to ipu1_csi1. Both sensors are
configured to output 640x480, with the OV5642 outputting YUYV2X8 and
the OV5640 outputting UYVY2X8:

.. code-block:: none

    # Setup links for OV5642
    media-ctl -l "'ov5642 1-0042':0 -> 'ipu1_csi0_mux':1[1]"
    media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
    media-ctl -l "'ipu1_csi0':2 -> 'ipu1_csi0 capture':0[1]"
    # Setup links for OV5640
    media-ctl -l "'ov5640 1-0040':0 -> 'imx6-mipi-csi2':0[1]"
    media-ctl -l "'imx6-mipi-csi2':2 -> 'ipu1_csi1':0[1]"
    media-ctl -l "'ipu1_csi1':2 -> 'ipu1_csi1 capture':0[1]"
    # Configure pads for OV5642 pipeline
    media-ctl -V "'ov5642 1-0042':0 [fmt:YUYV2X8/640x480 field:none]"
    media-ctl -V "'ipu1_csi0_mux':2 [fmt:YUYV2X8/640x480 field:none]"
    media-ctl -V "'ipu1_csi0':2 [fmt:AYUV32/640x480 field:none]"
    # Configure pads for OV5640 pipeline
    media-ctl -V "'ov5640 1-0040':0 [fmt:UYVY2X8/640x480 field:none]"
    media-ctl -V "'imx6-mipi-csi2':2 [fmt:UYVY2X8/640x480 field:none]"
    media-ctl -V "'ipu1_csi1':2 [fmt:AYUV32/640x480 field:none]"

Streaming can then begin independently on the capture device nodes
"ipu1_csi0 capture" and "ipu1_csi1 capture". The v4l2-ctl tool can
be used to select any supported YUV pixelformat on the capture device
nodes, including planar.
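
For example, assuming the "ipu1_csi0 capture" interface appears as
/dev/video0 (node numbering varies), a planar YUV420 format can be
selected and a short capture streamed to a file:

.. code-block:: none

    # Select the planar YUV420 pixel format on the capture node
    v4l2-ctl -d /dev/video0 --set-fmt-video=width=640,height=480,pixelformat=YU12
    # Capture 100 frames to a file using memory-mapped buffers
    v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=100 --stream-to=/tmp/ov5642.yuv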

SabreAuto with ADV7180 decoder
------------------------------

On the SabreAuto, an on-board ADV7180 SD decoder is connected to the
parallel bus input on the internal video mux to IPU1 CSI0.

The following example configures a pipeline to capture from the ADV7180
video decoder, assuming NTSC 720x480 input signals, with motion
compensated de-interlacing. The pad field types assume the adv7180
outputs "interlaced". $outputfmt can be any format supported by the
ipu1_ic_prpvf entity at its output pad:

.. code-block:: none

    # Setup links
    media-ctl -l "'adv7180 3-0021':0 -> 'ipu1_csi0_mux':1[1]"
    media-ctl -l "'ipu1_csi0_mux':2 -> 'ipu1_csi0':0[1]"
    media-ctl -l "'ipu1_csi0':1 -> 'ipu1_vdic':0[1]"
    media-ctl -l "'ipu1_vdic':2 -> 'ipu1_ic_prp':0[1]"
    media-ctl -l "'ipu1_ic_prp':2 -> 'ipu1_ic_prpvf':0[1]"
    media-ctl -l "'ipu1_ic_prpvf':1 -> 'ipu1_ic_prpvf capture':0[1]"
    # Configure pads
    media-ctl -V "'adv7180 3-0021':0 [fmt:UYVY2X8/720x480]"
    media-ctl -V "'ipu1_csi0_mux':2 [fmt:UYVY2X8/720x480 field:interlaced]"
    media-ctl -V "'ipu1_csi0':1 [fmt:AYUV32/720x480 field:interlaced]"
    media-ctl -V "'ipu1_vdic':2 [fmt:AYUV32/720x480 field:none]"
    media-ctl -V "'ipu1_ic_prp':2 [fmt:AYUV32/720x480 field:none]"
    media-ctl -V "'ipu1_ic_prpvf':1 [fmt:$outputfmt field:none]"

Streaming can then begin on the capture device node at
"ipu1_ic_prpvf capture". The v4l2-ctl tool can be used to select any
supported YUV or RGB pixelformat on the capture device node.

This platform accepts Composite Video analog inputs to the ADV7180 on
Ain1 (connector J42).

SabreSD with MIPI CSI-2 OV5640
------------------------------

Similarly to the SabreLite, the SabreSD supports a parallel interface
OV5642 module on IPU1 CSI0, and a MIPI CSI-2 OV5640 module. The OV5642
connects to i2c bus 1 and the OV5640 to i2c bus 2.

The device tree for the SabreSD includes OF graphs for both the
parallel OV5642 and the MIPI CSI-2 OV5640, but as of this writing only
the MIPI CSI-2 OV5640 has been tested, so the OV5642 node is currently
disabled. The OV5640 module connects to MIPI connector J5 (the
compatible module part number and URL are unfortunately not available).

The following example configures a direct conversion pipeline to
capture from the OV5640, transmitting on MIPI CSI-2 virtual channel 1.
$sensorfmt can be any format supported by the OV5640. $sensordim is the
frame dimension part of $sensorfmt (minus the mbus pixel code).
$outputfmt can be any format supported by the ipu1_ic_prpenc entity at
its output pad:

.. code-block:: none

    # Setup links
    media-ctl -l "'ov5640 1-003c':0 -> 'imx6-mipi-csi2':0[1]"
    media-ctl -l "'imx6-mipi-csi2':2 -> 'ipu1_csi1':0[1]"
    media-ctl -l "'ipu1_csi1':1 -> 'ipu1_ic_prp':0[1]"
    media-ctl -l "'ipu1_ic_prp':1 -> 'ipu1_ic_prpenc':0[1]"
    media-ctl -l "'ipu1_ic_prpenc':1 -> 'ipu1_ic_prpenc capture':0[1]"
    # Configure pads
    media-ctl -V "'ov5640 1-003c':0 [fmt:$sensorfmt field:none]"
    media-ctl -V "'imx6-mipi-csi2':2 [fmt:$sensorfmt field:none]"
    media-ctl -V "'ipu1_csi1':1 [fmt:AYUV32/$sensordim field:none]"
    media-ctl -V "'ipu1_ic_prp':1 [fmt:AYUV32/$sensordim field:none]"
    media-ctl -V "'ipu1_ic_prpenc':1 [fmt:$outputfmt field:none]"

Streaming can then begin on the "ipu1_ic_prpenc capture" node. The
v4l2-ctl tool can be used to select any supported YUV or RGB
pixelformat on the capture device node.


Known Issues
------------

1. When using the 90 or 270 degree rotation control at capture
   resolutions near the IC resizer limit of 1024x1024, combined with
   planar pixel formats (YUV420, YUV422P), frame capture will often
   fail with no end-of-frame interrupts from the IDMAC channel. To
   work around this, use a lower resolution and/or packed formats
   (YUYV, RGB3, etc.) when 90 or 270 degree rotations are needed.


File list
---------

drivers/staging/media/imx/
include/media/imx.h
include/linux/imx-media.h

References
----------

.. [#f1] http://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6DQRM.pdf
.. [#f2] http://www.nxp.com/assets/documents/data/en/reference-manuals/IMX6SDLRM.pdf


Authors
-------

- Steve Longerbeam <steve_longerbeam@mentor.com>
- Philipp Zabel <kernel@pengutronix.de>
- Russell King <linux@armlinux.org.uk>

Copyright (C) 2012-2017 Mentor Graphics Inc.