It’s been raining all weekend, so I wanted to see if I could accompany myself playing a piece of music by recording multiple videos and slowly building the piece up.
I used my Canon S100 to record a bunch of videos and then imported them into kdenlive (the only viable multi-track video editor on the Linux desktop), which let me cut, paste and do all the editing needed to make sure each track correctly accompanied the previous ones.
Because I wanted to show all 4 video tracks at once, I figured I could use the videomixer GStreamer element to do this with a custom pipeline. So, toggling which video track was “muted”, I exported each video in a raw format at 960×540, so that a 2×2 grid of tracks fills a 1920×1080 (i.e. 1080p) frame: 2 × 960 = 1920 and 2 × 540 = 1080.
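Incidentally, if your editor can only render at full size, the scale-down to a 960×540 tile can also be done in GStreamer itself. Here’s a rough, untested sketch using the 0.10 Python bindings; the filenames are made up, and it assumes gst-ffmpeg exposes the HuffYUV encoder as ffenc_huffyuv:

#!/usr/bin/env python
# Rough sketch: downscale one full-size HuffYUV export to a 960x540 tile.
import pygst
pygst.require("0.10")
import gst

pipeline = gst.parse_launch(
    "filesrc location=piano-1080p.avi ! avidemux ! ffdec_huffyuv ! "
    "ffmpegcolorspace ! videoscale ! "
    "video/x-raw-rgb,width=960,height=540 ! "
    "ffenc_huffyuv ! avimux ! filesink location=piano.avi")

bus = pipeline.get_bus()
pipeline.set_state(gst.STATE_PLAYING)
# Block until the whole file has been rewritten (or an error occurs).
bus.timed_pop_filtered(gst.CLOCK_TIME_NONE, gst.MESSAGE_EOS | gst.MESSAGE_ERROR)
pipeline.set_state(gst.STATE_NULL)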
The next thing I wanted to do was clean up the audio, as the little microphone in the camera picked up a fair bit of hiss (not from the audience, of course); multiply that by 4 and it’s pretty bad. The UI for doing this in Audacity is a bit odd:
1) Use the selection tool to select a stretch of audio that contains only noise.
2) Go to the Effect menu and select “Noise Removal”.
3) In the dialog, click “Get Noise Profile”.
4) Deselect (so nothing is selected).
5) Go to the Effect menu, select “Noise Removal” again and click OK.
Or watch this tutorial. Either way, it works pretty damn well. Export to WAV.
So now we have 4 video files and an audio track; let’s combine them into a WebM video:
gst-launch -e \
  filesrc location=piano.avi ! avidemux ! ffdec_huffyuv ! ffmpegcolorspace ! video/x-raw-rgb ! mix. \
  filesrc location=sing.avi ! avidemux ! ffdec_huffyuv ! ffmpegcolorspace ! video/x-raw-rgb ! mix. \
  filesrc location=guitar2.avi ! avidemux ! ffdec_huffyuv ! ffmpegcolorspace ! video/x-raw-rgb ! mix. \
  filesrc location=guitar1.avi ! avidemux ! ffdec_huffyuv ! ffmpegcolorspace ! video/x-raw-rgb ! mix. \
  videotestsrc pattern=2 ! video/x-raw-rgb,width=1920,height=1080 ! \
  videomixer name=mix sink_0::xpos=0 sink_0::ypos=0 sink_0::zorder=2 \
    sink_1::xpos=960 sink_2::ypos=540 sink_3::ypos=540 sink_3::xpos=960 sink_4::zorder=0 ! \
    video/x-raw-rgb,height=1080,width=1920 ! \
  ffmpegcolorspace ! vp8enc threads=4 ! webmmux name=mux ! filesink location=full-render.webm \
  filesrc location=audiotrack.wav ! wavparse ! audioconvert ! vorbisenc ! mux.
Those who know a bit of GStreamer will spot the videotestsrc as the 5th video input. This is because I couldn’t see a way to set the output surface size of the videomixer; it seems to have been designed mostly for picture-in-picture, so the output is sized to the largest video source. Without the videotestsrc, the video would be stuck at 960×540.
I did try a videobox element, but couldn’t persuade the pipeline to preroll if the left/right/top/bottom properties were greater than 100px. I also had problems with the order of the height/width caps causing “internal data flow” errors in avidemux, and with decodebin2, which I abandoned pretty early on.
The videotestsrc introduces a second problem: it has no EOS, so it will continue streaming forever. I knew how long my video should be, so I just HUP’d the process when it got to the right point; the -e flag makes gst-launch send an EOS down the pipeline on shutdown, which closes the filesink properly. Obviously this is pretty lame, so maybe there is a better solution? You could use a Python script to listen for the EOS on one of the filesrc branches and then propagate it to the videotestsrc, along the lines of the sketch below.
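Here’s a rough, untested sketch of that idea with the GStreamer 0.10 Python bindings. To keep it short the pipeline is cut down to a single file branch plus the background source; the element names (“tap”, “bg”) are mine, nothing special:

#!/usr/bin/env python
# Rough sketch: forward the EOS from a file branch to the endless
# videotestsrc so the pipeline can finish on its own.
import gobject
import pygst
pygst.require("0.10")
import gst

gobject.threads_init()

# Cut down to one file branch for brevity; the real pipeline has four.
pipeline = gst.parse_launch(
    "filesrc location=piano.avi ! avidemux ! ffdec_huffyuv ! "
    "ffmpegcolorspace ! video/x-raw-rgb ! identity name=tap ! "
    "videomixer name=mix ! video/x-raw-rgb,height=1080,width=1920 ! "
    "ffmpegcolorspace ! vp8enc threads=4 ! webmmux ! "
    "filesink location=full-render.webm "
    "videotestsrc name=bg pattern=2 ! "
    "video/x-raw-rgb,width=1920,height=1080 ! mix.")

bg = pipeline.get_by_name("bg")

def on_event(pad, event):
    # The file branch has finished: tell the test source to stop too.
    if event.type == gst.EVENT_EOS:
        bg.send_event(gst.event_new_eos())
    return True  # let the original EOS continue downstream

pipeline.get_by_name("tap").get_pad("src").add_event_probe(on_event)

# Spin a main loop until the muxer has written everything out.
loop = gobject.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message::eos", lambda b, m: loop.quit())
bus.connect("message::error", lambda b, m: loop.quit())

pipeline.set_state(gst.STATE_PLAYING)
loop.run()
pipeline.set_state(gst.STATE_NULL)

Once every file branch and the test source have gone EOS, the videomixer forwards EOS downstream and the muxer finalises the file, so no signal juggling is needed.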
Linux is all about making it work for you.
Update: oops, forgot to get the audio level right… time to re-add the audio:
gst-launch -e \
  filesrc location='full-render.webm' ! matroskademux ! video/x-vp8 ! webmmux name=mux ! filesink location=final.webm \
  filesrc location='audiotrack.wav' ! wavparse ! audioconvert ! vorbisenc ! mux.
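If the level only needs scaling rather than a full re-mix, GStreamer’s volume element could be dropped into the audio branch of the same remux. Another rough, untested Python sketch; the 0.5 factor is made up:

#!/usr/bin/env python
# Rough sketch: the same remux, scaling the audio with the "volume"
# element instead of re-exporting the WAV.
import pygst
pygst.require("0.10")
import gst

pipeline = gst.parse_launch(
    "filesrc location=full-render.webm ! matroskademux ! video/x-vp8 ! "
    "webmmux name=mux ! filesink location=final.webm "
    "filesrc location=audiotrack.wav ! wavparse ! audioconvert ! "
    "volume volume=0.5 ! vorbisenc ! mux.")

bus = pipeline.get_bus()
pipeline.set_state(gst.STATE_PLAYING)
# Block until the remux is finished (or an error occurs).
bus.timed_pop_filtered(gst.CLOCK_TIME_NONE, gst.MESSAGE_EOS | gst.MESSAGE_ERROR)
pipeline.set_state(gst.STATE_NULL)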