NOTE: this document covers Intel’s Media Server Studio 2016. If you want to use Media Server Studio 2017 with a Skylake processor, see this article.
ffmpeg has come a long way since the pre-1.0 days. With its elaborate system of routing filter outputs, its ability to capture video from video cards, and support for GPU-based encoding, it has become quite the powerhouse in the video world.
This article outlines how we built ffmpeg to capture video from a Blackmagic Design DeckLink mini and encode it using Intel’s QuickSync technology (h264_qsv).
Install CentOS 7.1 with Media Server Studio
Intel has a bit of a branding problem with their GPU-based encoding. I have seen it referred to as “Media Server Studio”, “MSS”, “MFX”, and “QuickSync”, and I never know what to call it when I’m talking about it. I guess I’ll call it “QuickSync”.
It’s also a bit tricky to install support for QuickSync under Linux. As of this writing, you can’t use QuickSync with a Skylake processor on a Linux system. Broadwell is your best bet. Skylake support is promised for Q3 2016.
First, stand up a CentOS 7.1 system with a Broadwell processor and one or more Blackmagic Design DeckLink cards. Install Media Server Studio by following the steps in this document. For convenience, I present them here.
Install from the CentOS 7.1 DVD. Under Software Selection, select Development and Creative Workstation. It is critical that you choose this configuration, as there are steps later that depend on specific packages being installed.
Once you boot up the system, you might want to change the default runlevel so that it doesn’t use the GUI. This is not strictly necessary, but I don’t like to have a graphical UI running on my video encoders. As root:
# systemctl set-default multi-user.target
Before proceeding, double-check that the OS can see your GPU.
$ lspci -nn -s 00:02.0
You should see something like this:
00:02.0 VGA compatible controller [0300]: Intel Corporation Broadwell-U Integrated Graphics [8086:1622] (rev 0a)
If you don’t see the controller, you may need to tweak some BIOS settings to get the OS to recognize the graphics hardware.
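If you are provisioning several of these machines, you can script this check. Here is a minimal sketch (the `has_broadwell_gpu` helper is my own invention, not part of any Intel tooling) that looks for the Broadwell-U PCI ID from the lspci output above:

```shell
# Hypothetical helper: succeeds if the given `lspci -nn` output mentions the
# Broadwell-U integrated graphics device (PCI ID 8086:1622).
has_broadwell_gpu() {
    printf '%s\n' "$1" | grep -q '8086:1622'
}

# On a real system:
#   has_broadwell_gpu "$(lspci -nn)" && echo "GPU visible" || echo "check your BIOS settings"
```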
Add the user(s) who will use QuickSync to the video group. As root:
# usermod -a -G video [LOGIN]
Unpack the Media Server Studio tarball:
$ tar -xvzf MediaServerStudio*.tar.gz
$ cd MediaServerStudio*
$ tar -xvzf SDK*.tar.gz
$ cd SDK*
$ cd CentOS
$ tar -xvzf install_scripts*.tar.gz
You’ll need to modify the install_sdk_UMD_CentOS.sh script. By default, it will use yum to install kernel-devel, which will install a newer version of kernel-devel and kernel-headers. Replace this line:
yum -y -t install kernel-headers kernel-devel bc
with:
yum -y -t install bc
Download the version of kernel-devel that matches the default CentOS 7.1 kernel:
$ cd /tmp
$ curl -O http://vault.centos.org/7.1.1503/os/x86_64/Packages/kernel-devel-3.10.0-229.el7.x86_64.rpm
As root, install the RPM:
# rpm -Uvh /tmp/kernel-devel-3.10.0-229.el7.x86_64.rpm
As root, run your modified install script:
# ./install_sdk_UMD_CentOS.sh
After installing, reboot the system as instructed. When the system comes back up, run this as root:
# mkdir /MSS
# chown {regular user}:{regular group} /MSS
As a regular user, build the kernel:
$ cp build_kernel_rpm_CentOS.sh /MSS
$ cd /MSS
$ ./build_kernel_rpm*.sh
Note: once you have built the kernel RPM, you can install it on other systems without building it from source. This will save you a lot of time, and you won’t have to install all of those build tools onto the other systems.
As root, install the newly-built kernel:
# cd /MSS/rpmbuild/RPMS/x86_64
# rpm -Uvh kernel*.rpm
Note: you will probably get some warnings like:
warning: file /lib/modules/3.10.0-229.el7.x86_64/weak-updates: remove failed: No such file or directory
warning: file /lib/modules/3.10.0-229.el7.x86_64/modules.softdep: remove failed: No such file or directory
warning: file /lib/modules/3.10.0-229.el7.x86_64/modules.devname: remove failed: No such file or directory
I believe these are harmless and can be ignored.
Make sure that you don’t have nomodeset in your kernel boot parameters. This is not explained in the Intel documentation, but it is absolutely critical. I had to edit /etc/default/grub to remove nomodeset. Then I ran:
# grub2-mkconfig --output=/boot/grub2/grub.cfg
If you are using a UEFI boot, the process may be different for you.
Reboot your system, making sure that you are booting with the new MSS kernel. Once your system is back up and running, issue this command:
$ uname -r
You should see this:
3.10.0-229.1.2.47109.MSSr1.el7.centos.x86_64
Assuming you’re running the right kernel, use Intel’s System Analyzer Utility to make sure that the system is ready for QuickSync.
Install Blackmagic’s Desktop Video
Install EPEL (required to install dkms for Desktop Video, as well as yasm, imlib2, libass, libdc1394, openal-soft, schroedinger, soxr, and zvbi that we will use in our ffmpeg build). As root:
# yum install -y epel-release
Download Desktop Video (Blackmagic’s name for their DeckLink drivers) from the Support section of Blackmagic Design’s web site. The installer builds a driver custom to your specific kernel using dkms. You’ll need to make sure you have the right build tools installed. As root:
# yum install -y cpp gcc mpfr dkms libmpc
Note: if you’ve followed this guide from the beginning and built Media Server Studio kernel RPMs, you’ll already have all of these installed except dkms. But if you’re later installing prebuilt RPMs on a system without going through the entire build process on that machine, you might not have these packages.
Before you install the desktopvideo package, you need to make sure the version of the installed kernel-headers package matches exactly the version of the running kernel. If you followed the instructions closely and modified install_sdk_UMD_CentOS.sh before running it, you won’t have a problem.
But if you didn’t modify the script, you will have a mismatch, because the install_sdk_UMD_CentOS.sh script ran a yum command that pulled the latest version of the kernel-headers package from the CentOS repo. This version does not match the running kernel.
To check this, run the following command:
$ rpm -qa | grep kernel
In the list of installed packages, look at the version numbers for the kernel package and the kernel-headers package. They should match.
kernel-tools-3.10.0-229.el7.x86_64
kernel-headers-3.10.0-229.1.2.47109.MSSr1.el7.centos.x86_64
kernel-tools-libs-3.10.0-229.el7.x86_64
abrt-addon-kerneloops-2.1.11-19.el7.centos.0.3.x86_64
kernel-3.10.0-229.1.2.47109.MSSr1.el7.centos.x86_64
kernel-devel-3.10.0-229.1.2.47109.MSSr1.el7.centos.x86_64
If they don’t match, you can fix it by running this command as root:
# cd /MSS/rpmbuild/RPMS/x86_64
# rpm -Uvh --oldpackage kernel-headers-3.10.0-229.1.2.47109.MSSr1.el7.centos.x86_64.rpm
Now you are ready to install the desktopvideo package. As root:
# rpm -Uvh desktopvideo*.x86_64.rpm
Reboot your system. When it comes back up, make sure that the OS sees your DeckLink card(s):
$ grep -i blackmagic /var/log/dmesg
You should see something like:
blackmagic: module license 'Proprietary' taints kernel.
blackmagic: Loading driver (version: 10.2.1a1)
blackmagic_driver 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
blackmagic_driver 0000:01:00.0: setting latency timer to 64
blackmagic_driver 0000:01:00.0: setting latency timer to 64
blackmagic: Successfully loaded device "blackmagic!dv0" [pci@0000:01:00.0]
blackmagic_driver 0000:03:00.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
blackmagic_driver 0000:03:00.0: setting latency timer to 64
blackmagic_driver 0000:03:00.0: setting latency timer to 64
blackmagic: Successfully loaded device "blackmagic!dv1" [pci@0000:03:00.0]
Note: this system has two DeckLink Mini Recorder cards installed. Your log entries will probably be different. The key is you want to see the devices getting detected.
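If you want a quick count rather than eyeballing the log, here’s a sketch (the helper name is my own) that counts the “Successfully loaded device” lines:

```shell
# Hypothetical helper: counts the DeckLink devices the blackmagic driver
# reported loading in the given log text.
count_decklink_devices() {
    # $1 = contents of /var/log/dmesg (or `dmesg` output)
    printf '%s\n' "$1" | grep -c 'blackmagic: Successfully loaded device'
}

# On a real system:
#   count_decklink_devices "$(cat /var/log/dmesg)"
```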
Update the firmware on your card(s) using /usr/bin/BlackmagicFirmwareUpdater. As root, run:
# /usr/bin/BlackmagicFirmwareUpdater status
If the output indicates that the card needs a firmware update, run this:
# /usr/bin/BlackmagicFirmwareUpdater update CARDNUM
If you had to update the firmware on your card(s), reboot one more time.
Build ffmpeg
You will need to install the Desktop Video SDK (this is separate from the driver). Download it from the Support section of Blackmagic Design’s web site. As root, do the following:
# cd /tmp
# unzip Blackmagic_DeckLink_SDK_10.7.zip
# cp Blackmagic\ DeckLink\ SDK\ 10.7/Linux/include/* /usr/local/include
Install some more build tools. As root:
# yum install -y nasm yasm-devel gcc-c++
Prepare the MFX (Intel Media Server Studio) libraries for linking. This is documented in steps 1 and 2 in the section “Installing FFmpeg with Intel® Media Quick Sync Video (Intel® QSV) hardware acceleration support” in the document Intel QuickSync Video and FFmpeg. I have included them here for convenience.
As root:
# mkdir /opt/intel/mediasdk/include/mfx
# cp /opt/intel/mediasdk/include/*.h /opt/intel/mediasdk/include/mfx
As root, create the file /usr/lib64/pkgconfig/libmfx.pc with the following contents:
prefix=/opt/intel/mediasdk
exec_prefix=${prefix}
libdir=${prefix}/lib/lin_x64
includedir=${prefix}/include

Name: libmfx
Description: Intel Media Server Studio SDK
Version: 16.4.2
Libs: -L${libdir} -lmfx -lva -lstdc++ -ldl -lva-drm -ldrm
Cflags: -I${includedir} -I/usr/include/libdrm
As root, run ldconfig:
# ldconfig
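To sanity-check the file, here’s a sketch that stages a copy of the same libmfx.pc in a temporary directory and asks pkg-config to resolve it. This assumes pkg-config is installed; the real file still belongs in /usr/lib64/pkgconfig.

```shell
# Stage a copy of libmfx.pc in a temp dir and confirm pkg-config can parse it
# and emit the expected include paths.
tmp=$(mktemp -d)
cat > "$tmp/libmfx.pc" <<'EOF'
prefix=/opt/intel/mediasdk
exec_prefix=${prefix}
libdir=${prefix}/lib/lin_x64
includedir=${prefix}/include

Name: libmfx
Description: Intel Media Server Studio SDK
Version: 16.4.2
Libs: -L${libdir} -lmfx -lva -lstdc++ -ldl -lva-drm -ldrm
Cflags: -I${includedir} -I/usr/include/libdrm
EOF
mfx_cflags=$(PKG_CONFIG_PATH="$tmp" pkg-config --cflags libmfx)
echo "$mfx_cflags"
rm -rf "$tmp"
```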
In order to build the ffmpeg RPM, you’ll need rpmbuild. As root:
# yum install -y rpm-build
Install gnutls-devel and nettle-devel, which are required for the build. As root:
# yum install -y gnutls-devel nettle-devel
Now our build environment is nearly complete. We need to acquire some of the multimedia libraries that we want to include in our ffmpeg build. Many of them are easily obtained from CentOS itself or EPEL. Use yum to install the devel versions of these libraries (which will also pull in the base RPM of each, since the devel RPMs depend on them). As root:
# yum install -y \
    SDL-devel \
    bzip2-devel \
    freetype-devel \
    gsm-devel \
    imlib2-devel \
    libdc1394-devel \
    libraw1394-devel \
    libvorbis-devel \
    libtheora-devel \
    schroedinger-devel \
    speex-devel \
    libass-devel \
    opus-devel \
    libv4l-devel \
    openal-soft-devel \
    soxr-devel \
    libvpx-devel \
    openjpeg-devel \
    openssl-devel \
    zvbi-devel \
    libstdc++-static
There are a few RPMs that are not readily available. These you will need to build from source.
First install some build prerequisites:
# yum install -y libgcrypt-devel gtk+-devel
Download the following source RPMs and install them (version numbers as of this writing are in parentheses):
- faac (1.28-7)
- libmp4v2-compat (1.5.0.1-17)
- lame (3.99.5-2)
- rtmpdump (2.4-0)
- xvidcore (1.3.4-1)
If you’re really paranoid, you can download the source tarballs from the various developer groups and replace the tarballs that are installed by the src RPMs. I was that paranoid, but when I downloaded the source files, there was no difference.
Use rpmbuild to build the RPMs. Note that faac depends on libmp4v2-compat, so you have to build libmp4v2-compat and install it before you can build faac.
$ cd ~/rpmbuild/SPECS
$ rpmbuild -ba libmp4v2-compat.spec
$ sudo rpm -Uvh ../RPMS/x86_64/libmp4v2-compat-1*rpm ../RPMS/x86_64/libmp4v2-compat-libs*rpm ../RPMS/x86_64/libmp4v2-compat-devel*rpm
$ rpmbuild -ba faac.spec
$ rpmbuild -ba lame.spec
$ rpmbuild -ba rtmpdump.spec
$ rpmbuild -ba xvidcore.spec
Now download the x264 spec file and the x264 source. Package up the source properly and put the tarball and the spec file under ~/rpmbuild:
$ cd /tmp
$ mkdir x264_source
$ cd x264_source
$ curl -O http://www.smorgasbork.com/content/ffmpeg/x264.spec
$ curl -O ftp://ftp.videolan.org/pub/videolan/x264/snapshots/x264-snapshot-20160626-2245-stable.tar.bz2
$ tar xvjf x264-snapshot-20160626-2245-stable.tar.bz2
$ mv x264-snapshot-20160626-2245-stable x264
$ tar cvzf x264-0.148-20160626.tar.gz x264
$ mv x264.spec ~/rpmbuild/SPECS
$ mv x264-0.148-20160626.tar.gz ~/rpmbuild/SOURCES
$ cd ~/rpmbuild/SPECS
$ rpmbuild -ba x264.spec
Install the base and devel versions of the resulting RPMs, which will be in ~/rpmbuild/RPMS/x86_64. As root:
# rpm -Uvh faac-1*.rpm \
    faac-devel-*.rpm \
    lame-3*.rpm \
    lame-devel-*.rpm \
    lame-libs*.rpm \
    lame-mp3x*.rpm \
    librtmp-2*.rpm \
    librtmp-devel-*.rpm \
    rtmpdump-2*.rpm \
    x264-0.148-*.rpm \
    x264-devel-*.rpm \
    x264-libs-*.rpm \
    xvidcore-1*.rpm \
    xvidcore-devel-*.rpm
Now download the ffmpeg source RPM, install it, and build from the spec file:
$ rpm -Uvh ffmpeg-3.1.1-3.el7.centos.src.rpm
$ cd ~/rpmbuild/SPECS
$ rpmbuild -ba ffmpeg.spec
Note: this source RPM applies a patch to ffmpeg to support decoding CEA-608 captions from VANC data. It adds a couple of extra command-line options to control which VANC line to look in and where to write the captions (both real-time captions and SRT caption files). It is not beautiful code and it isn’t implemented in an ffmpeg-like way, so there’s no way the team would accept my changes. But it gets the job done for us until there is official support for such captions. If you don’t want that patch applied, remove the “Patch0:” line and the “%patch0” line from the spec file before building.
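If you’d rather script that spec edit, here’s a sketch (the `strip_caption_patch` helper is my own; it assumes the spec contains “Patch0:” and “%patch0” lines exactly as described):

```shell
# Hypothetical helper: print a copy of a spec file with the caption-patch
# lines ("Patch0:" and "%patch0") removed.
strip_caption_patch() {
    sed -e '/^Patch0:/d' -e '/^%patch0/d' "$1"
}

# Usage:
#   strip_caption_patch ~/rpmbuild/SPECS/ffmpeg.spec > /tmp/ffmpeg-nopatch.spec
```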
Finally, install ffmpeg from ~/rpmbuild/RPMS/x86_64. As root:
# rpm -Uvh ffmpeg-3*rpm ffmpeg-libs*rpm
Testing
First, make sure that you can use the h264_qsv encoder from within ffmpeg. Grab a sample video file and name it test.mp4. Then run this command:
/usr/bin/ffmpeg \
    -i test.mp4 \
    -pix_fmt yuv420p \
    -c:v h264_qsv -profile:v baseline -preset slow \
    -b:v 4000k \
    -c:a aac -ac 2 -b:a 320k -ar 44100 \
    -flags +global_header 'test-out.mp4'
Second, make sure you can capture from the DeckLink card and feed it to h264_qsv. Connect a source to your DeckLink card. Then get its name with this command:
$ ffmpeg -f decklink -list_devices 1 -i dummy
Your output will look something like this, depending on what device(s) you have installed:
[decklink @ 0x165afc0] Blackmagic DeckLink devices:
[decklink @ 0x165afc0]  'DeckLink Mini Recorder (1)'
[decklink @ 0x165afc0]  'DeckLink Mini Recorder (2)'
You will use the name of the device in your encoding commands, along with an input format specifier. To get the available format values for your card, run this command:
$ ffmpeg -f decklink -list_formats 1 -i "DeckLink Mini Recorder (1)"
You will get a list of formats like this:
[decklink @ 0xd27fc0] Supported formats for 'DeckLink Mini Recorder (1)':
[decklink @ 0xd27fc0]   1   720x486 at 30000/1001 fps (interlaced, lower field first)
[decklink @ 0xd27fc0]   2   720x576 at 25000/1000 fps (interlaced, upper field first)
[decklink @ 0xd27fc0]   3   720x486 at 60000/1001 fps
[decklink @ 0xd27fc0]   4   720x576 at 50000/1000 fps
[decklink @ 0xd27fc0]   5   1920x1080 at 24000/1001 fps
[decklink @ 0xd27fc0]   6   1920x1080 at 24000/1000 fps
[decklink @ 0xd27fc0]   7   1920x1080 at 25000/1000 fps
[decklink @ 0xd27fc0]   8   1920x1080 at 30000/1001 fps
[decklink @ 0xd27fc0]   9   1920x1080 at 30000/1000 fps
[decklink @ 0xd27fc0]   10  1920x1080 at 25000/1000 fps (interlaced, upper field first)
[decklink @ 0xd27fc0]   11  1920x1080 at 30000/1001 fps (interlaced, upper field first)
[decklink @ 0xd27fc0]   12  1920x1080 at 30000/1000 fps (interlaced, upper field first)
[decklink @ 0xd27fc0]   13  1280x720 at 50000/1000 fps
[decklink @ 0xd27fc0]   14  1280x720 at 60000/1001 fps
[decklink @ 0xd27fc0]   15  1280x720 at 60000/1000 fps
To capture on the 1st card with an input format of 11, you would specify the input as “DeckLink Mini Recorder (1)@11”.
Now capture some video and encode it.
/usr/bin/ffmpeg \
    -probesize 10000k \
    -r 30000/1001 \
    -f decklink -i 'DeckLink Mini Recorder (1)@11' -y \
    -pix_fmt yuv420p \
    -c:v h264_qsv -profile:v baseline -preset slow \
    -b:v 4000k \
    -c:a aac -ac 2 -b:a 320k -ar 44100 \
    -flags +global_header 'test-out.mp4'
If all goes well, you will be encoding your video on the GPU, allowing for massive throughput. We have been able to capture 1080i video from 3 separate cards and generate seven h264 files/streams from each input, all while only using about 35% of the CPU. Encoding performance for VOD clips is also very impressive, with 5x – 6x real-time transcoding. Through some clever use of filters, you can generate a lot of outputs from your inputs without overloading your system.
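As an illustration of that last point, here is a sketch of how -filter_complex can split one capture into two h264_qsv outputs. This is my own example with illustrative bitrates and sizes, not our exact production command:

```shell
# Build the filter graph once: deinterlace the capture, split it into two
# branches, and scale one branch down to 640x360.
filters='[0:v]yadif,split=2[full][small];[small]scale=640:360[sd]'

# On a capture host, you would then run something like:
#   /usr/bin/ffmpeg -f decklink -i 'DeckLink Mini Recorder (1)@11' -y \
#       -filter_complex "$filters" \
#       -map '[full]' -pix_fmt yuv420p -c:v h264_qsv -b:v 4000k \
#           -map 0:a -c:a aac -ac 2 -b:a 320k full.mp4 \
#       -map '[sd]' -pix_fmt yuv420p -c:v h264_qsv -b:v 800k \
#           -map 0:a -c:a aac -ac 2 -b:a 128k small.mp4
echo "$filters"
```

Each -map picks one branch of the graph for its output file, so a single decode and deinterlace feeds every encode.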