Introduction
Since 2012, I've been doing yearly video projects where my brother, my dad, and I play Half-Life. I record and upload all of the footage to YouTube. The format has been almost the same every year: 1920x1080 footage, with an intro and a proper ending that links to other videos in the series. This effective formula has held strong up until now, and I could've easily used it for the Christmas Deathmatch 2020 too. But I could do better. So let's give the procedure an update.
Video/Audio Format
Despite the game coming out in 1998, the very nature of it being a PC game means resolution is a non-issue. So, we'll go with ultrawide 2560x1080. For sound, the update that brought Half-Life to Linux back in 2013 removed EAX, leaving surround sound completely broken. Luckily, there is MetaAudio, which brings these effects back. With it, 7.1 surround sound is restored.
Anyway, here are the specifications:
- 2560x1080 non-stretched output at 60 FPS
- Lossless 7.1 Surround Sound and Stereo audio tracks (FLAC S24)
- Separated microphone audio tracks of all participants, captured losslessly (either Mono or Stereo, in FLAC). Will be stored in "the vault" for archival.
- libx265 with yuv420p10le, encoded at CRF 17 (despite the source being 8-bit, better compression is achieved when encoding it in 10-bit)
- Proper DEM file recording from all POVs. Compressed via xz -9e (see the one-liner below).
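For reference, the demo compression is a one-liner; a minimal sketch (the wildcard path is a placeholder):

# compress every demo at maximum xz compression, keeping the originals around
xz -9e --keep *.dem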
YouTube & Surround Sound
As mentioned in my Spy Hunter Video Production Procedure post, YouTube supports surround sound: it stores a proper discrete 5.1 surround sound track. If you upload a video with manual stereo and surround tracks, it'll use both. It'll also generate a 5.1 surround sound track from a 7.1 track if supplied. So we'll upload a 7.1 + Stereo MKV. YouTube stores the original video uploaded and can reprocess it when a new feature is added, so if YouTube supports 7.1 in the future, the video won't have to be reuploaded.
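To illustrate what that deliverable looks like, here's a minimal sketch of muxing both audio tracks into one upload; the input filenames are hypothetical:

# mux one video stream plus 7.1 and stereo FLAC tracks into a single MKV
ffmpeg \
    -i video_only.mkv \
    -i surround_7_1.flac \
    -i stereo.flac \
    -map 0:v -map 1:a -map 2:a \
    -c copy \
    upload.mkv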
Setup
Usually when I do projects like these, I try to figure out how to automate as much as possible. I wrote a post on the previous production procedure, where I generated clips with Adobe After Effects and had to use it for the endings instead of FFmpeg. As the years went by, I became aware of the more complex features of FFmpeg, like -filter_complex, which is the crux of this post.
Video Segmentation
The Christmas Deathmatch follows a format which demands that the recordings are split across 4 separate files, which are then concatenated at the very end. These are:
- intro.mkv - Fake map loading cinematic
- class_sel.mkv - Fake class selection cinematic
- gameplay.mkv - The actual Half-Life 1 gameplay
- ending.mkv - Video ending showing previous/next videos
Every segment except for gameplay.mkv is generated by FFmpeg.
I didn't intend to change this format because it has worked effectively every year so far. Those 4 segments are compressed using the same encoders and the same settings. They can then be concatenated (stitched together) into the final MKV deliverable that is uploaded to YouTube, as sketched below.
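Because all four segments share identical codecs and settings, the concatenation itself can be done without re-encoding. A minimal sketch using FFmpeg's concat demuxer (the list filename is a placeholder):

# segments.txt - the four segments, in playback order
file 'intro.mkv'
file 'class_sel.mkv'
file 'gameplay.mkv'
file 'ending.mkv'

# stitch them together without re-encoding
ffmpeg -f concat -i segments.txt -c copy final.mkv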
Other Assets
In addition to those MKV files, a few other files are required. These are used to generate some of the MKV files above.
- load_img.png - The map image used in intro.mkv. This can be any size larger than the video resolution, as it will be panned just like in Modern Warfare (2019). In this project, they were 3840x2160.
- load_fg.png - The foreground text and shades drawn over load_img.png when generating intro.mkv.
- intro.json - Holds information on how to pan load_img.png when generating intro.mkv (see the sketch after this list). For the first episode, Crossfire, it contains:

JSON File (intro.json)

{ "scale": 84.2, "x1": 1422, "y1": 568, "x2": 1280, "y2": 568 }

The coordinates are obtained from Adobe After Effects, as that was used to position and simulate the panning effect. It's as intuitive as it gets. The image is scaled via scale and it pans from (x1, y1) to (x2, y2).

- final_frame.png - The final frame of gameplay. This is used as a background in ending.mkv. Since the gameplay recordings are BMP files at first, obtaining this file is simple: just grab the final image in each match recording.
- ending_text.png - The text that is put on top of the background in ending.mkv.
- shader_mask_2560x1080.avi - A black-and-white video file which is used to determine where to blur the background in ending.mkv. Black pixels mean no blur, white pixels mean full blur, and grey pixels mean somewhere in between.
- ending_fog_2560x1080.avi - A black-and-white video file which is used to determine the visibility of ending_text.png in ending.mkv. Black pixels mean it's completely transparent, white pixels mean it's completely opaque, and grey pixels mean it's somewhere in between.
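For the curious, here's a minimal sketch of how a pan like the one intro.json describes can be driven in FFmpeg with a time-based crop expression. The crop offsets and the 5-second duration are hypothetical placeholders (the real After Effects coordinates would need converting from layer positions into crop offsets), and the audio track is omitted:

# pan a scaled still and overlay the foreground text (placeholder values)
ffmpeg \
    -framerate 60 -loop 1 -t 5 -i load_img.png \
    -framerate 60 -loop 1 -t 5 -i load_fg.png \
    -filter_complex "[0:v]scale=iw*0.842:-2,crop=2560:1080:x='400+(600-400)*t/5':y=300[bg];[bg][1:v]overlay[out]" \
    -map "[out]" \
    -c:v libx265 -preset medium -pix_fmt yuv420p10le -crf 17 \
    intro.mkv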
Lost? Here are visuals for the images and videos:
[Visual gallery: load_img.png | load_fg.png | final_frame.png | ending_text.png | shader_mask_2560x1080.avi | ending_fog_2560x1080.avi]
Recording
The simplest part of the project is getting gameplay. Just play the game and record it. Well, unfortunately, it isn't that simple.
Thanks to Windows 10, as well as Valve removing Direct3D from Half-Life, recording solutions such as Dxtory and Fraps no longer work, as they can't detect and hook into the drawing pipeline to capture video. OBS can work, but it had its own issues that I did not have time to fix (it falls short of the absurdly high standards I have for video quality). That kinda sucks. However, Half-Life has a secret weapon... Demo Recording.
Demo Recording
Half-Life features Demo Recording, which means it's possible to record inputs and export them at any desired resolution and frame rate, at any moment, directly from the game. The recordings are saved as .dem files, commonly 10-30 MB in size. Every year, I archive Christmas Deathmatch like this, so it's possible for me to go back to 2012 and re-export it in 2560x1080 at 60 FPS if I wanted to.
I wrote a plugin for AMX Mod X which forces all players to record the moment they join. This means I don't have to think. All matches, Christmas Deathmatch or not, are archived instantly from all POVs.
There are some differences between the DEM recording and the original run. Notably, projectiles and the laser from the RPG are smoother in the original recording; they have a flickering effect when played back via DEM. I'm sure there are commands to smooth it out (like there are in the Source Engine).
"How do I record my own demos?"
Without that plugin, you may be curious how to utilise Demo Recording in your own gameplay. To record a run, simply open up the developer's console and type:
record DEMO_NAME_HERE
This will create a DEMO_NAME_HERE.dem file in your game's mod
directory. Whenever you hit the end of your desired recording, type the
following command:
stop
Finally, your replay can then be replayed via:
playdemo DEMO_NAME_HERE
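If opening the console mid-game is inconvenient, the same commands can also be bound to keys. A minimal sketch (the key choices and demo name are arbitrary, and note that a fixed name means each new recording overwrites the last):

bind "F5" "record DEMO_NAME_HERE"
bind "F6" "stop"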
HLAE (Half-Life Advanced Effects)
I've been eyeing this for a few years. Quite simply, it's a collection of tools and commands which makes it easier to produce video from .dem files. I simply force the game to run in 2560x1080 at 60 FPS, then have HLAE export every frame along with a stereo audio track. The result is a perfect 60 FPS video without any frame drops. The only drawback is that I have to replay each match after playing it just to export the frames. It's quite time consuming, but I think it's worth it. It's also automatable with shell scripting (a small wrapper is sketched at the end of this post).
Just to throw some numbers out... A 15-minute match takes around 48 minutes to export, and around 50,000 BMP files are created per match. Each frame is around 7.81 MB, meaning each match is over 390 GB in size (50,000 × 7.81 MB ≈ 390 GB). That's hefty! For 14 matches, several terabytes of storage are needed. Luckily for me, storage is not a problem anymore. Throughout the project, a total of 1,046,815 images were generated, totaling 7.43 TB in size.
Generating Video Frames
A match has been played and a .dem file created. We can now utilise HLAE to generate the video of the match in the form of BMP files... tens of thousands of them.
To start, open HLAE (it's a separate application). I run my Half-Life servers with mods, so the appropriate parameters for AMX Mod X have to be passed in as well. Overall, my configuration looks like this:

[HLAE launcher configuration screenshot]
Upon booting up the game, the following commands need to be put into the developer's console:
mirv_movie_fps 60
mirv_movie_export_sound 1
mirv_movie_filename "PATH_TO_EXPORT_DIRECTORY"
mirv_movie_playdemostop 1
These will configure HLAE for 60 FPS and tell it to export the sound. On Windows, the PATH_TO_EXPORT_DIRECTORY should include the drive letter and use backslashes (e.g. g:\Fraps\01_crossfire was what I used for the first match).
Finally, play the demo and record with HLAE. Chain the commands together by
separating them with a ;. Otherwise, you will have to open up
the console while the demo is playing and tell it to record. That's not
fun or practical. Issue this as if it were a single command:
playdemo DEMO_NAME_HERE; mirv_recordmovie_start
When the match ends, mirv_movie_playdemostop 1 should automatically stop the recording. In my case, it didn't for some reason, so I had to stop it manually via:
mirv_recordmovie_stop
If you do not stop the recording and decide to just exit the game, your audio will be missing its header information and will need to be recreated. I would rather have HLAE fill this information in itself.
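If you do end up with a broken sound.wav, it can often be salvaged by reinterpreting the data as raw PCM and letting FFmpeg write a fresh header. A minimal sketch, assuming 16-bit stereo at 44.1 kHz (the actual sample format and rate should be verified first):

# treat the file as headerless raw PCM and write a proper WAV around it
ffmpeg -f s16le -ar 44100 -ac 2 -i sound.wav sound_fixed.wav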
With this done, we have what we need to create gameplay.mkv.
Making gameplay.mkv
gameplay.mkv is simple to generate and requires no editing at
all, since it's just the gameplay of us killing each other. So let's render
it out.
The BMP files are named 00000.bmp, 00001.bmp, and so on. An audio file, sound.wav, is also generated. Generating a video from these frames and audio is trivial with FFmpeg:
ffmpeg \
-r 60 \
-i "$1/all/%05d.bmp" \
-i "$1/sound.wav" \
-map 0:v \
-map 1:a \
-c:v libx265 \
-preset medium \
-pix_fmt yuv420p10le \
-crf 17 \
-c:a flac \
-compression_level 12 \
-shortest \
"$1/gameplay.mkv"
This is not an FFmpeg tutorial. To summarise though:
- -r 60 - Sets the input framerate to 60 FPS.
- -i "$1/all/%05d.bmp" - Treats all 5-digit BMP files in $1/all as a single input video stream. The %05d syntax should be familiar to users of printf when coding in C or C++.
- -i "$1/sound.wav" - Second input is set to the sound file exported by HLAE.
- -map 0:v - Tells the final MKV file to use the processed stream generated from the first input stream's video (the BMP files).
- -map 1:a - Tells the final MKV file to use the processed stream generated from the second input stream's audio (the WAV file).
- -c:v libx265 - Process and compress the video with x265, a video encoder for encoding video into HEVC/H.265.
- -preset medium - This determines the efficiency of the video encoding. Generally, the slower the preset, the better the compression, at the cost of the render taking much longer. medium is a sweet spot for this project.
- -pix_fmt yuv420p10le - Images are RGB. Compressed video is not (usually); it is converted to the YUV colourspace. This tells it to use yuv420p with 10-bit colour depth. As stated earlier, encoding an 8-bit source in 10-bit achieves better compression, since the extra precision reduces rounding errors inside the encoder.
- -crf 17 - Constant Rate Factor. Ranges from 0 (practically lossless) to 51 (trash); the lower the number, the better the quality. This is the better alternative to a fixed bitrate for archival. 17 is the sweet spot I use in many projects.
- -c:a flac -compression_level 12 - Process and compress the audio as FLAC with the highest possible compression setting (12).
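Since the command takes the match directory as $1, batch-encoding every match is a tiny wrapper away. A minimal sketch, assuming the command above is saved as encode_gameplay.sh and all matches live under one directory (the paths are placeholders):

#!/bin/sh
# run the encode for every match directory in sequence
for dir in /mnt/g/Fraps/*/; do
    ./encode_gameplay.sh "$dir"
done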