Partly OT: Video streaming from FOH to multiple targets?

This is a place for sharing with the community the results you achieved with QLC+, as a sort of use case collection.
You can share photos, videos, personal hardware/software projects, and interesting HOWTOs that might help other users achieve great results.
mitscherdinger
Posts: 104
Joined: Fri Oct 30, 2015 12:12 pm

Hi!

I have a vision 8-) :
I'm sitting at FOH, driving a theater show. I have, let's say, 3 projectors available: one behind me covering the stage from the front, and two behind the stage doing rear projection on the left and the right. Each projector has a Raspberry Pi connected via HDMI, waiting to send video to it. All the Raspberries are connected to the LAN, just like my Linux laptop, from which the show is controlled.

I know I can get something similar with QLC+: install the app on all the computers involved, set up 3 different Art-Net channels, configure one or more video functions and make each one accessible through a DMX channel. But then the videos to be presented have to be stored on the Raspberries; I would have to copy them to the devices and configure the triggers on each one before the show.

But my goals are different: keep it simple, keep it fast (in terms of latency, but also in terms of using light and fast apps, and finally in terms of not running around the venue to make last-minute configurations), and let only one machine be the one that has to be configured: the main laptop at FOH.

I'm not so far away from that; the tools and the technology seem to be there already. With ffmpeg, for example, it's possible to stream video from point to point in real time.

Code: Select all

ffmpeg -i [input-video] -f [output format, e.g. mpegts] udp://[receiver's network-address]:[port]
(There are options to speed things up and/or relieve the CPU, but take it as an easy example.) On the other side of the chain, ffplay or mpv can catch the stream and decode it almost instantly.
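To give a concrete idea of such options, a lower-latency sender might look roughly like this (just a sketch: the file name, address, port and packet size are placeholders, and the settings trade quality for speed):

Code: Select all

# feed the file at its native frame rate and minimize encoder delay
ffmpeg -re -i show_clip.mp4 -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts "udp://192.168.1.21:5000?pkt_size=1316"
MPEG-TS is handy here because a receiver can join the stream mid-way, at the next keyframe.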

Code: Select all

mpv udp://[transmitter's network-address]:[port]
(Again: optimizations left aside.)
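On the receiving side, those optimizations mostly amount to disabling buffering; something along these lines (the port is a placeholder, and --profile=low-latency needs a reasonably recent mpv):

Code: Select all

# listen on port 5000 and play fullscreen with caching and frame timing disabled
mpv --fs --no-cache --untimed --profile=low-latency udp://0.0.0.0:5000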

I tried this myself on a LAN between a Ryzen 5 2400G desktop and a 10-year-old ThinkPad and achieved latencies under 1 s, which is good enough even for professional use. Once you've found the best options for your setup, you can reuse them over and over with different video inputs and destinations. Best of all: since it's a command line, it can be integrated into QLC+ or Linux Show Player (LiSP). And with ffmpeg I can split the video off from the audio stream, if I like, and keep the audio at FOH. (Or send it back from one of the Raspberries to FOH via NetJACK or something comparable. Keeping video and audio in sync will be another challenge, I see…)
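Splitting off the video could look something like this (again only a sketch with a made-up file name and address):

Code: Select all

# -map 0:v sends only the video stream; the audio never leaves the FOH machine
ffmpeg -re -i show_clip.mp4 -map 0:v -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts "udp://192.168.1.21:5000?pkt_size=1316"
The audio could then be played locally at FOH (for example with mpv --no-video), sync questions aside.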

But there is one downside: if the receiver is already playing, there is no big latency between sender and receiver (if the options are chosen well, of course). But catching the stream in the first place can take several seconds. So what I need is a continuous stream onto which I can send my videos. OBS can do this, but it's another resource-intensive app and, as far as I know, I cannot send commands to it from QLC+ or LiSP. (I want ONE cue player for everything, you know…!) Also, I *guess* OBS can't handle more than one stream at once (sending to the different RPi receivers), but with ffmpeg commands that's easy…!
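For the multi-receiver part, I'm thinking of something like ffmpeg's tee muxer, which duplicates one encoded stream to several destinations (addresses made up):

Code: Select all

# encode once, send the same stream to two receivers
ffmpeg -re -i show_clip.mp4 -map 0:v -c:v libx264 -preset ultrafast -tune zerolatency -f tee "[f=mpegts]udp://192.168.1.21:5000|[f=mpegts]udp://192.168.1.22:5000"
For different content per projector it would simply be one ffmpeg process per destination.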

I had the idea of sending a continuous stream by screencasting a virtual desktop and configuring mpv to play it fullscreen on demand. But I guess this doesn't come in so handy with more than one beamer.
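The "on demand" part could perhaps be wired like this (only a sketch: socket path, address, user and port are made up, and socat would have to be installed on the Pis): mpv stays running on each Pi and listens on an IPC socket, and the FOH laptop tells it what to load.

Code: Select all

# on each Pi: mpv idles fullscreen and waits for commands on a local socket
mpv --idle=yes --fs --no-cache --profile=low-latency --input-ipc-server=/tmp/mpvsocket

Code: Select all

# from FOH: tell that Pi's mpv to start playing the incoming UDP stream
ssh pi@192.168.1.21 "echo 'loadfile udp://0.0.0.0:5000' | socat - /tmp/mpvsocket"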

Any ideas on how to reach my goals? (You can suggest other apps than ffmpeg or mpv, of course!)

(Disclaimer: I have also posted this to the Linux Audio Users mailing list and will try to send it to a place where ffmpeg nerds are common. I will let you know if I get good thoughts from the other sources…)
thierry
Posts: 11
Joined: Thu Sep 05, 2019 10:07 pm
Location: Namur - Belgium
Real Name: Thierry Demonty

Hello,
Did you already try the different suggestions for reducing the initial startup latency explained in this ffmpeg streaming guide?
https://trac.ffmpeg.org/wiki/StreamingGuide#Latency
<<
You may be able to decrease initial "startup" latency by specifying that I-frames come "more frequently" (or basically always, in the case of x264's zerolatency setting), though this can increase frame size and decrease quality, see here for some more background. Basically for typical x264 streams, it inserts an I-frame every 250 frames. This means that new clients that connect to the stream may have to wait up to 250 frames before they can start receiving the stream (or start with old data). So increasing I-frame frequency (makes the stream larger, but might decrease latency). For real time captures you can also decrease latency of audio in windows dshow by using the dshow audio_buffer_size setting. You can also decrease latency by tuning any broadcast server you are using to minimize latency, and finally by tuning the client that receives the stream to not "cache" any incoming data, which, if it does, increases latency.
>>
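In ffmpeg terms, forcing more frequent I-frames would look roughly like this (just a sketch with placeholder file and address; with -g 25 at 25 fps a new client should wait at most about a second for a keyframe):

Code: Select all

# -g sets the keyframe (GOP) interval in frames, -keyint_min its lower bound
ffmpeg -re -i show_clip.mp4 -c:v libx264 -preset ultrafast -tune zerolatency -g 25 -keyint_min 25 -f mpegts "udp://192.168.1.21:5000"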
mitscherdinger
Posts: 104
Joined: Fri Oct 30, 2015 12:12 pm

Thanks for your response!

If the x264 zerolatency setting with lots of I-frames is what gets activated via the "-tune zerolatency" option, then yes, I have already tried this on the server side. I didn't try avoiding the cache on the other side of the chain; I'll try that tomorrow, just to be sure. But I guess that's only a latency minimizer once the stream has already been caught. Unfortunately, the zerolatency tune didn't help that much.
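If I understand it correctly, avoiding the cache on the receiver would be something like this (just a sketch, with a made-up port; mpv's rough equivalents would be --no-cache and --untimed):

Code: Select all

# disable input buffering and probing delay on the receiving side
ffplay -fflags nobuffer -flags low_delay -probesize 32 udp://0.0.0.0:5000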
techniek
Posts: 4
Joined: Mon Oct 03, 2022 5:56 pm
Real Name: Lelieveld

Use OBS as the player and let it stream permanently.
To show a video, activate the correct scene in OBS.
You should be able to do that remotely.
mitscherdinger
Posts: 104
Joined: Fri Oct 30, 2015 12:12 pm

OBS works. The permanent stream is a good idea, but I can't send different streams to different receivers simultaneously, right?