Video canvas instead of rendering directly to a screen

kripton
Posts: 42
Joined: Tue Sep 29, 2015 7:01 pm
Real Name: Jannis

Hi :)
So this is basically an idea of mine that I'm putting up for discussion. I have not yet started any implementation or proof of concept, but I wanted to post the idea nevertheless. Everything is purely additional; no existing behaviour needs to be changed.

The current video function of QLC+ renders a video either to a window of its own (one window per video of a collection) or to one full screen; since v5, multiple videos per screen are possible. While that approach fits a very broad range of use cases, I'd like something more flexible in order to control more beamers/screens.

The main idea is not to render the videos to individual windows or full screens, but to a common video canvas. The size of that canvas can be modified by the user (just like the size of the Virtual Console); I could imagine about 16000x10000 pixels for a start. The background color should be changeable as well (black, white, ...). The video canvas would be a new "tab" besides the existing ones ("Fixtures & Functions", "Virtual Console", "Simple Desk"). It could be displayed either with scroll bars or scaled down to fit the window (again, the choice should be left to the user).
Then, a video is always rendered to the canvas (this should work just like rendering multiple videos to a full screen). Every video keeps the properties it has now (custom geometry (offset + size), rotation) plus Z-order (I don't know if that currently exists), transparency and audio volume. Hue, brightness, contrast and saturation are also possible. The additional properties could be modified with sliders (an XY-pad for the center?) and thus easily controlled via external inputs (I'm thinking of MIDI over network or OSC).
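To make that property set a bit more concrete, here is a minimal sketch of what such a per-video canvas item could carry. All names and types are my own assumptions, nothing of this exists in QLC+ yet:

```cpp
// Hypothetical per-video properties on the canvas (names are assumptions,
// not existing QLC+ API). Each field maps naturally to a slider or an
// external input channel.
#include <QRect>
#include <QtGlobal>

struct CanvasVideoItem
{
    QRect geometry;        // offset + size on the canvas, in canvas pixels
    qreal rotation = 0.0;  // degrees, like the current custom-geometry mode
    int   zOrder = 0;      // stacking order on the canvas
    qreal opacity = 1.0;   // 0.0 = fully transparent, 1.0 = opaque
    qreal volume = 1.0;    // audio volume of this video
    // Optional color controls:
    qreal hue = 0.0, brightness = 0.0, contrast = 0.0, saturation = 0.0;
};
```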

To get the video onto a screen, additional OUTPUTs can be added to the video canvas, either via the existing Input/Output Manager or via a config dialog on the video canvas. Every output has an offset, an input size (the area that is grabbed off the video canvas) and an output size (the resolution of the output's result). The outputs' properties could be fixed for the moment but could also be controlled via external inputs later.
This way, multiple outputs can show the same area of the canvas at different screen resolutions (input sizes and centers match, output sizes differ), and multiple beamers can be used to display one large video without much effort. Even output rotation (in addition to input rotation) might be possible for beamers that are not perfectly positioned.
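A rough sketch of such an output definition, under the assumption that the canvas content is available as a QPixmap (names are illustrative only):

```cpp
// Sketch of a canvas output: crop a region of the canvas and scale it to the
// resolution the screen/beamer/stream expects. Rotation handling is omitted.
#include <QPixmap>
#include <QRect>
#include <QSize>

struct CanvasOutput
{
    QRect inputRect;   // area grabbed off the video canvas
    QSize outputSize;  // resolution delivered to the screen / stream

    QPixmap render(const QPixmap &canvas) const
    {
        return canvas.copy(inputRect)
                     .scaled(outputSize, Qt::IgnoreAspectRatio,
                             Qt::SmoothTransformation);
    }
};
```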

Furthermore, the outputs don't necessarily have to go to physically attached screens. They could be made available as RTSP streams, sent to an Icecast server, saved as a local file recording, sent to Ethernet-to-HDMI adapters (see the LKV-373 HDMI extender) so that multiple beamers can be fed from one machine via a single Ethernet cable, or sent to DisplayLink USB3-to-HDMI adapters.


Technically, I've thought of using gstreamer and the glvideomixer element for this (https://lubosz.wordpress.com/2014/06/16 ... n-the-gpu/) since it saves some CPU power, and with QLC+ 5 we can assume a reasonably powerful GPU anyway. There's also some very nice code at https://mapmapteam.github.io/. However, after having seen the video capabilities of QLC+ 5 (video 3.1), I'm confident that we can do just as well with QMediaPlayer and native Qt functions.
The video canvas itself would be a (large) window that holds all the video content but is simply never show()n, or some other way of drawing a QWidget off-screen.
The individual videos are played the same way as with the current "Fullscreen + Custom geometry" solution.
The outputs use https://doc.qt.io/qt-5/qwidget.html#grab to save the canvas content to a QPixmap. This can then be drawn onto a new widget/fullscreen window for direct output, or handed to gstreamer for encoding and RTSP serving (step 2, later). The exact timing of when a new "frame" is generated might need some experimentation: either on every new frame of any non-fully-transparent input, synced to a fixed interval, or whenever gstreamer pulls a new frame in the gstreamer-based output approach.
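A minimal sketch of that grab-and-redraw loop, assuming the canvas is a plain QWidget that is kept alive but never show()n, and clocking on a fixed interval (class and member names are made up for illustration):

```cpp
// Grab a region of the off-screen canvas widget at a fixed interval and show
// it scaled on an output window. Whether a never-shown widget actually
// renders video content is one of the open questions below.
#include <QLabel>
#include <QPixmap>
#include <QRect>
#include <QTimer>
#include <QWidget>

class CanvasOutputWindow : public QLabel
{
public:
    CanvasOutputWindow(QWidget *canvas, const QRect &inputRect)
        : m_canvas(canvas), m_inputRect(inputRect)
    {
        connect(&m_timer, &QTimer::timeout, this, [this]() {
            QPixmap frame = m_canvas->grab(m_inputRect);   // QWidget::grab()
            setPixmap(frame.scaled(size(), Qt::IgnoreAspectRatio,
                                   Qt::SmoothTransformation));
        });
        m_timer.start(1000 / 30);  // ~30 fps; could also be frame-triggered
    }

private:
    QWidget *m_canvas;
    QRect    m_inputRect;
    QTimer   m_timer;
};
```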


Things to be clarified:
- How large could the video canvas reasonably be?
- Can we render a QMediaPlayer to an off-screen or not show()n QWindow?
- How do we clock/trigger the inputs?
- Can we change all parameters of a QMediaPlayer + offset + output size on-the-fly?


Do you think such an approach is feasible? Wanted inside QLC+? Too complex? Too resource-intensive?

Looking forward to your feedback!




References I've found in the forum regarding video functionality:
viewtopic.php?f=29&t=12541&p=53169
viewtopic.php?f=22&t=12483&p=52906
viewtopic.php?f=29&t=12325&p=52372
viewtopic.php?f=17&t=11866
viewtopic.php?f=29&t=12542&p=53559
viewtopic.php?f=18&t=12382&p=53866
Last edited by kripton on Fri Jul 12, 2019 12:36 pm, edited 1 time in total.
kripton
Posts: 42
Joined: Tue Sep 29, 2015 7:01 pm
Real Name: Jannis

As an additional bonus, the pixel values of the video canvas could be used to feed RGB fixtures or matrices. Single pixels, or grid-arranged areas of a common size (from which the average color is computed), could be used as input for RGB matrices. This way, one could easily "play a video" or provide an animation on a small LED wall.
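A hedged sketch of that averaging step, assuming the canvas content is available as a QImage (function name and parameters are illustrative only):

```cpp
// Average the canvas pixels in a cols x rows grid inside `area`; the result
// could feed an RGB matrix, one QColor per matrix pixel.
#include <QColor>
#include <QImage>
#include <QRect>
#include <QVector>

QVector<QColor> sampleMatrixColors(const QImage &canvas, const QRect &area,
                                   int cols, int rows)
{
    QVector<QColor> cells;
    const int cw = area.width() / cols;
    const int ch = area.height() / rows;

    for (int ry = 0; ry < rows; ry++)
        for (int rx = 0; rx < cols; rx++)
        {
            qint64 r = 0, g = 0, b = 0, count = 0;
            for (int y = 0; y < ch; y++)
                for (int x = 0; x < cw; x++)
                {
                    QRgb px = canvas.pixel(area.x() + rx * cw + x,
                                           area.y() + ry * ch + y);
                    r += qRed(px); g += qGreen(px); b += qBlue(px);
                    count++;
                }
            cells.append(QColor(int(r / count), int(g / count), int(b / count)));
        }
    return cells;
}
```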
kripton
Posts: 42
Joined: Tue Sep 29, 2015 7:01 pm
Real Name: Jannis

I've started development in this branch: https://github.com/kripton/qlcplus/tree/videoCanvas
  • DONE Add VideoCanvas context to UI
  • DONE Add new output mode to Video function's config UI
  • TODO Add Video Canvas configuration editor to UI
  • TODO Set up a new off-screen window/view to render the videos to
  • TODO Render videos configured for canvas rendering to the canvas
  • TODO Display a scaled-down or scrollable mirror of the actual canvas to the Video Canvas tab on the UI
  • TODO Later: Video outputs that read from the canvas
mcallegari
Posts: 4482
Joined: Sun Apr 12, 2015 9:09 am
Location: Italy
Real Name: Massimo Callegari
Contact:

I don't understand two things:
- what is the limitation of the current implementation
- why do we need a new context (very invasive) when most likely 95% of the users will not use it

As far as I understand, you are rewriting what I have already written, just not in fullscreen.
Can we just discuss how to improve the current solution, once we identify a real need for a "canvas"?
Please explain a real usage case that is not currently covered.
kripton
Posts: 42
Joined: Tue Sep 29, 2015 7:01 pm
Real Name: Jannis

Thanks for your feedback. Sure, I'm very happy to discuss in order to find the best possible solution. Of course we can also modify the current fullscreen solution if that fits.

Limitations I currently see:
  • It's not possible for one video to span multiple screens. Of course one can use the "Windowed" mode and set the size accordingly, but then the video gets a border added by the window manager. With the canvas, one can make the video very large and have each output render only a small part of it.
  • Output of one video function is only possible to one screen. If I want 5 beamers to show the same video at the same time, I'd need 5 video functions or I'd have to use my OS's features to clone the outputs. But if I want to show different things on the beamers later in the show, cloning is no longer possible.
  • Output is only possible to locally attached screens/beamers. Let's say the machine running QLC+ is at the FOH: I'd need to run one HDMI/VGA cable to the stage for every screen/beamer on stage. With a video canvas, every output rectangle could be "exported" via RTSP or sent to Ethernet-to-HDMI adapters. Then only one Ethernet cable needs to run to the stage, carrying all video signals (+ ArtNet ;))
  • The video canvas context can be used as a (scaled-down) full-stage preview of all running videos. I've also thought of using the mouse to interactively drag the individual videos around. I'll draw a picture of what I have in my head and attach it here.
We don't necessarily need a new context, but I thought it was the most intuitive approach. I personally use the Show Manager very seldom ;) We could also add a way to show/hide individual contexts and hide the video canvas by default.
kripton
Posts: 42
Joined: Tue Sep 29, 2015 7:01 pm
Real Name: Jannis

Okay, so here's the drawing (attached).
The first one is what the QLC+ user sees:
  • The video canvas with two running video functions rendered. It can be displayed 1:1 (original size) or scaled to fit the available space
  • An overlay could be drawn showing the video geometries (in red) and the output geometries (in green). This additional information is not part of the actual video canvas that the outputs take their image data from. The overlay could also display the names of the video functions and outputs
  • In this drawing, let's assume a 1:1 display in order to avoid calculations. In reality, the canvas could be larger to improve image quality
The other two attachments are basically what the two outputs see and what the beamers on stage will show.

Does that explain the use cases?
Attachments
out2.png
out1.png
VideoCanvas_UI.png
mcallegari
Posts: 4482
Joined: Sun Apr 12, 2015 9:09 am
Location: Italy
Real Name: Massimo Callegari
Contact:

Alright, I see your point now. Thanks for the explanation.

A few comments:
- Multiple outputs per video function can be achieved fairly easily with the current solution, but I'm not sure about the QtMultimedia part. I think I saw an example of multiple rendering in QML; I need to double-check. If that's not possible, it means decoding the video multiple times, which can have severe limitations.
- Video spanning across multiple outputs: I'm not sure this is even possible with QtMultimedia.
- Video cropping is possible and could be implemented in the video editor.
- Network streaming to other devices: transmitting uncompressed video frames over the network is an insane idea. Decoding + cropping/rescaling + encoding + network is an insane idea too. I'm not sure this feature is feasible at all, at least not without a massive effort.

In general, I think you would like QLC+ to be able to do a sort of projection mapping. I'm not sure I want that. There is plenty of specialized software to do it, and the effort to have such a feature in QLC+ is enormous; it would end up being worse than that software.
My idea has always been to have basic support for audio/video playback within QLC+ and leave advanced features to specialized software. I think what I did in QLC+ 5 is already a step forward, but I wouldn't go much beyond that.

P.S. OK, you don't use the Show Manager that much, but please check how many Show Manager posts there are on this forum compared to video playback posts :wink:
kripton
Posts: 42
Joined: Tue Sep 29, 2015 7:01 pm
Real Name: Jannis

Yes, I also think that multiple outputs would be possible with the current fullscreen rendering. If you find that example, please share :) And sure, decoding the video multiple times would be a bad solution.

Spanning multiple screens would be possible using the canvas. Grabbing the data from the QML/OpenGL context into a pixmap and then re-drawing it might be bad, but at https://doc.qt.io/qt-5/qml-qtquick-item ... age-method they provide this hint on how to copy image data inside the QML/OpenGL context: "For "live" preview, use layers or ShaderEffectSource."
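For completeness, this is roughly what the grab-based path would look like in C++ with QQuickItem::grabToImage(), which is the method that page documents; the docs note it is a costly operation, hence the hint about layers/ShaderEffectSource for live previews inside the scene. The hand-off to an output or encoder is only an assumption here:

```cpp
// Pull one frame of a QML item (e.g. the canvas) into a QImage. The grab is
// asynchronous; the shared pointer keeps the result alive until ready().
#include <QImage>
#include <QQuickItem>
#include <QQuickItemGrabResult>

void grabCanvasFrame(QQuickItem *canvasItem, const QSize &outputSize)
{
    auto result = canvasItem->grabToImage(outputSize);
    QObject::connect(result.data(), &QQuickItemGrabResult::ready, [result]() {
        QImage frame = result->image();
        // Hand `frame` to an output window or an encoder here (assumption).
        Q_UNUSED(frame);
    });
}
```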

Agreed, video cropping is possible using QtMultimedia.

Sure, uncompressed video is bad. That's why I mentioned the LKV373 HDMI "extenders": they transmit Full-HD video over Ethernet. The first version uses MJPEG, which can be encoded on the CPU pretty quickly using libjpeg-turbo; the latest version uses H.264, which can be encoded efficiently on modern CPUs. And since we are using QML, we can assume a decent GPU in most cases, so I wouldn't call the idea insane. Cropping + scaling is done by QML on the GPU, and the video encode can be done with gstreamer, which picks the most efficient method automatically.
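To make the gstreamer side a bit more concrete, here is a very rough sketch of an encoding pipeline built with gst_parse_launch(). The element choices, the appsrc hand-off and the destination address are assumptions, not a tested setup:

```cpp
// Build a pipeline that takes raw frames via appsrc, encodes them to H.264
// and sends an MPEG-TS stream over UDP. Frames grabbed from the canvas would
// be pushed into the "canvassrc" element, one pipeline per output.
#include <gst/gst.h>

GstElement *buildStreamingPipeline()
{
    gst_init(nullptr, nullptr);

    GError *error = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "appsrc name=canvassrc is-live=true format=time "
        "! videoconvert ! x264enc tune=zerolatency "
        "! mpegtsmux ! udpsink host=192.168.0.20 port=5000",  // example address
        &error);

    if (!pipeline && error)
        g_error_free(error);
    return pipeline;
}
```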

In general I agree that it's a kind of video mapping architecture. I know there is specialised software for that (for example http://www.mapmap.info/, which I mentioned before). The big advantage of having this in QLC+ is that we get automatic sync of video + lighting in one piece of software, and that the existing code already allows the video properties (geometry, rotation, transparency, ...) to be controlled via the existing input plugins (MIDI, OSC, ...). If another piece of software were used for that purpose, one would first have to interconnect QLC+ and the video mapping software via OSC or similar. In my eyes it's an easy and light addition to a software package that already has advanced video features. Starting from the video compositing and masking that is currently supported, it's not a very big step to integrate the proposed feature.

Of course, if you already know that it won't be merged anyway, I'd rather fork the existing code base, remove everything the video canvas doesn't need and call it a new, specialized piece of software for video mapping ;) That might be easier than starting from scratch or from another existing solution.