Christmas Deathmatch 2025 (Production Procedure)
Thursday, December 25, 2025

Introduction

Since 2012, there's been a project on the DERPG YouTube channel called "Half-Life: The Christmas Deathmatch". Up until 2020, this was a yearly event that consisted of me, my brother, and my father all playing Half-Life matches. From 2014 onwards, I introduced a brand new map made from scratch into the mix every year. This year's entry is Abstraction, a map oriented around randomness.

The format was the same every single year up until 2020, when I made a much-needed update to the production procedure in "Christmas Deathmatch 2020 (Production Procedure)". That update was my farewell to Adobe After Effects (for the most part) and a welcome introduction to FFmpeg and automation of video processing. I'll largely be using the same scripts as in 2020, but with some rewrites to suit my needs this year. Let's get into it.

Video/Audio Format

We have no issues going high resolution, despite the game being 27 years old. So we'll go with similar specifications to 2020. That is to say:

  • Resolution: 2560x1080
  • Video: VP9 (Profile 2, 10-bit), HDR10
  • Audio: 7.1.4 channel, 24-bit PCM

Setup

Video Segmentation

As always, the goal here is to automate as much as possible. The Christmas Deathmatch actually follows a very simple workflow of files needed to produce a final deliverable. The recordings are split into 4 separate files, which are concatenated at the very end. These are:

  1. intro.mkv - Fake map loading cinematic
  2. class_sel.mkv - Fake class selection cinematic
  3. gameplay.mkv - The actual Half-Life 1 gameplay
  4. ending.mkv - Video ending showing previous/next videos

Every segment is generated by FFmpeg. Those 4 segments are compressed using the same encoders and same settings. Then they can be concatenated (stitched together) into the final MKV deliverable that is then uploaded to YouTube.

Other Assets

In addition to those MKV files, a few other files are required. These are used to generate some of the MKV files up above.

Lost? Here are the images and videos in question:

load_img.avif
load_fg.png
final_frame.avif
ending_text.png
shader_mask_2560x1080.avi
ending_fog_2560x1080.avi

Specifying HDR10 Mastering Display for VP9

Since we are doing this in HDR10, we will need the mastering display information from our monitor. I go over this in "A software 'solution' to recording HDR10 via Dxtory & FFmpeg" so I won't be going through much of it here. The short of it is that when we encode a video, we will have to inject the metadata via this:

FFmpeg Command Line Arguments
-x265-params "colorprim=bt2020:colormatrix=bt2020nc:transfer=smpte2084:hdr=1:info=1:repeat-headers=1:max-cll=1499,799:master-display=G(15332,31543)B(7520,2978)R(32568,16602)WP(15674,16455)L(14990000,100)"

Recall, though, that we are using VP9 instead of H.265 for YouTube this time because, for some reason, videos encoded with libx265 result in video skipping on my YouTube accounts. When encoding with VP9, FFmpeg provides no way to specify mastering display metadata, nor MaxCLL and MaxFALL data, directly. There is no such thing as -vp9-params. However, if we encode via x265 in lossless mode, then toss that into VP9, the output retains its mastering display metadata. Because I don't want to deal with a huge intermediate file on my drive, I utilise FFmpeg piping to do the job. It looks something like this:

FFmpeg Command Line Arguments
ffmpeg \
	... YOUR_INPUT_AND_PARAMETERS
	...
	-x265-params "colorprim=bt2020:colormatrix=bt2020nc:transfer=smpte2084:hdr=1:info=1:repeat-headers=1:max-cll=1499,799:master-display=G(15332,31543)B(7520,2978)R(32568,16602)WP(15674,16455)L(14990000,100):lossless=1" \
	-f matroska - \
	| ffmpeg -i - \
		-map             0:v         \
		-c:v             libvpx-vp9  \
		-pix_fmt         yuv420p10le \
		-profile:v       2           \
		-crf             17          \
		-b:v             0           \
		-color_primaries 9           \
		-color_trc       16          \
		-colorspace      9           \
		-color_range     2           \
		"out.mkv"

It's massively complicated and a two-step process. But my hardware is powerful enough that it encodes at the same speed as a normal x265 encode on the slow preset.

Recording

The simplest part of this entire project is just playing the game. Unlike in 2020, we have better software nowadays. To capture HDR10 gameplay, we will have to use a combination of GeForce Experience and Demo Recording. In "Grind Series: Quantity without compromising Quality", I go over how recording at a high enough bitrate makes a "lossy" recording visually indistinguishable from a lossless master.

This makes the whole production procedure simpler. Though, we will have to play back the demo with MetaAudio loaded and record the audio as well to get a 7.1.4 master. Then it's as simple as remuxing the audio in with the video track. That can be done in seconds.
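
As a rough sketch, that remux boils down to a stream copy of the video plus the MetaAudio WAV. The file names here are placeholders:

Bash Command
ffmpeg -i "gameplay.mp4" -i "gameplay.wav" \
	-map 0:v -map 1:a \
	-c:v copy -c:a pcm_s24le \
	"gameplay_remuxed.mkv"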

Demo Recording

As I stated in 2020, Half-Life features Demo Recording, which makes it possible to record inputs, play them back, and export at any resolution you want, directly from the game. This means I could go back to the Christmas Deathmatch 2012 masters and export them at like... 8K resolution if I wanted to. These files are saved as .dem files.

I wrote a plugin for AMX Mod X which forces all players to record the moment they join. This means I don't have to think. All matches, Christmas Deathmatch or not, are archived instantly from all POVs.

There are some differences between the DEM playback and the original run. Notably, projectiles and the laser from the RPG are smoother in the original recording; they have a flickering effect when played back via DEM. I'm sure there are commands to smooth it out (like there are in the Source Engine).

To record a demo, open up the game with the developer console enabled. Then go into a match or something. Then type in the console:

GoldSrc Console Command
record DEMO_NAME_HERE

This will create a DEMO_NAME_HERE.dem file in your game's mod directory. Whenever you hit the end of your desired recording, type in the following command:

GoldSrc Console Command
stop

Additionally, you can just disconnect from the server/game you are in and the recording will stop automatically. Finally, your replay can be played back via:

GoldSrc Console Command
playdemo DEMO_NAME_HERE
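
For completeness, you can also kick off playback straight from the launch line, at whatever resolution you like. These are standard GoldSrc flags; the 8K resolution is just an example:

Command Line Arguments
hl.exe -console -width 7680 -height 4320 +playdemo DEMO_NAME_HERE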

Generation of Segments

Now to discuss the generation of segments. I have scripts prepared to work magic on all of the clips, images, and other assets that we have. First off, every Half-Life match has its own directory containing several subdirectories and files. Let's take Crossfire as an example, along with the raw files and script files. Here is the directory structure:

Directory Listing
master/
	a00_crossfire/
		assets/
			ending_text.png
			final_frame.avif
			intro.json
			load_fg.png
			load_img.avif
		audio/
		segments/
		video/
			gameplay.mp4
			preview_a.mp4
			preview_b.mp4
			preview_c.mp4
			class_sel.txt
	a01_.../
	a02_.../
	b00_transcend/
	b01_.../
	b02_.../
raw/
	ending_fog_2560x1080.avi
	ending_fog_2560x1080_alpha.avi
	overlay.avi
	overlay_cubix.avi
	shader_mask_2560x1080.avi
script/
	compile.sh
	gen_intro.sh
	gen_outro.sh
	gen_class_sel.sh

It looks complicated, but most of these assets are put into place prior to recording the Christmas Deathmatch. We can generate the intro and class selection before recording any gameplay. Then we can extract final_frame.avif from gameplay.mp4 and generate the final segment: ending.mkv.

Generating intro.mkv

This relies on the following files in the assets directory: load_img.avif, load_fg.png, and intro.json.

Assuming the assets are in the right place shown up above, go to the root directory and simply launch:

Bash Command
./script/gen_intro.sh master/a00_crossfire/assets

It will generate the intro in the style of Modern Warfare (2019), where the map image slides across the screen while fading in and out.
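
I won't reproduce gen_intro.sh here, but here's a minimal sketch of that slide-and-fade. The durations, frame rate, and the assumption that the loading image is wider than the 2560x1080 frame are mine for illustration, not the script's actual values:

Bash Command
# Extend the single AVIF frame into a 60 fps stream, slide a 2560x1080 window
# across it over 10 seconds, overlay the foreground PNG, then fade in and out.
# (HDR metadata flags omitted for brevity.)
ffmpeg -i "assets/load_img.avif" -i "assets/load_fg.png" \
	-filter_complex "
		[0:v]loop=loop=599:size=1,setpts=N/(60*TB)[still];
		[still]crop=2560:1080:x='(iw-2560)*t/10':y=0[bg];
		[bg][1:v]overlay=0:0,fade=t=in:st=0:d=1,fade=t=out:st=9:d=1
	" \
	-r 60 -t 10 -c:v libx265 -pix_fmt yuv420p10le "intro_preview.mkv"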

The reason we used AVIF for the loading image is that this video, like all of the other assets, needs to be HDR. So I simply used Special K, took an HDR screenshot in AVIF format, and tossed it in. FFmpeg handled the rest automatically.

Generating class_sel.mkv

Half-Life has no concept of "custom classes", unlike modern FPS games. However, I wanted to give it the appeal of having them anyway. So I designed the custom class screens in Figma and generated 2 video files out of it: overlay.avi and overlay_cubix.avi. The reason Cubix gets a separate video is that when I designed the map Cubix, I made it so that you start with the Tau Cannon instead of the Glock 17 pistol. You can see it in the intro to Cubix, where the user selects the second class instead of the first.

This is one of the only places where Adobe After Effects was still used: to generate those two AVI files. They have an alpha channel embedded, so we can overlay them (hence the name) on top of the preview videos. Speaking of which, the following files are required: overlay.avi (or overlay_cubix.avi for Cubix), preview_a.mp4, preview_b.mp4, preview_c.mp4, and class_sel.txt.

If you know FFmpeg, you know how concatenation of files works. class_sel.txt simply references the 3 preview files and will join them together into a single file:

class_sel.txt
file 'preview_a.mp4'
file 'preview_b.mp4'
file 'preview_c.mp4'

Usually this is done with ffmpeg -f concat -i class_sel.txt. But we will let our handy script handle the heavy lifting for us. Go into a video's directory (e.g. ./master/a00_crossfire) and simply run:

Bash Command
../../script/gen_class_sel.sh
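
Under the hood, a hand-rolled equivalent would concatenate the previews and composite the alpha overlay on top. The output name and encoder settings below are assumptions; the real logic lives in gen_class_sel.sh:

Bash Command
# Concatenate the three previews, then draw the alpha-embedded overlay on top.
ffmpeg -f concat -safe 0 -i "video/class_sel.txt" \
	-i "../../raw/overlay.avi" \
	-filter_complex "[0:v][1:v]overlay=0:0" \
	-c:v libx265 -pix_fmt yuv420p10le "class_sel_preview.mkv"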

Here are two videos to showcase the differences between the regular overlay and Cubix's:

Transcend
Cubix

It works. With that, we are 50% done with the production already, and we haven't even recorded any gameplay yet.

Generating gameplay.mkv

This is the simplest of all of the scripts. It simply takes gameplay.mp4 (a GeForce Experience recording) and compacts it into gameplay.mkv with the appropriate codecs. There is nothing else to explain. Record the gameplay, compress it, prepare it for concatenation.

Generating ending.mkv

This one requires gameplay to have been recorded. First, I wrote a script to extract the final frame from gameplay.mp4. This writes to assets/final_frame.avif, which serves as the background to the ending video. Second, it does some alpha magic with both shader_mask_2560x1080.avi and ending_fog_2560x1080.avi. I won't go into much detail on how it generates the result, because I'd have to explain FFmpeg's -filter_complex as well as alpha masking. But simply put, a blurred version of final_frame.avif is generated and drawn based on the pixels of the shader mask, giving it a very unique look. Then the text in ending_text.png is faded in via the ending fog's black and white pixels.
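
The final-frame extraction at least is easy to show. Something along these lines does the job (the seek offset is my illustration, not necessarily what the script uses):

Bash Command
# Seek to shortly before the end of the recording, then decode to the end.
# -update 1 keeps overwriting the same image, so the last frame written wins.
# FFmpeg infers the AVIF muxer from the file extension.
ffmpeg -sseof -0.5 -i "video/gameplay.mp4" -update 1 "assets/final_frame.avif"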

Overall, it requires the following files: final_frame.avif, ending_text.png, shader_mask_2560x1080.avi, and ending_fog_2560x1080.avi.

Confused? Yeah, it's hard to explain. But here's the look of it.

It's a throwback to when YouTube had annotations. Back then, we would make it so that you could navigate between videos in the Christmas Deathmatch at the end of each video. Playlists have since made this obsolete, but I kept it in anyway just for the old charm.

7.1.4-channel 24-bit Audio

I decided to keep audio as a separate file for most of the project for simplicity. That way, I can edit it quickly before putting it into the MKV container.

Every intro.mkv and ending.mkv has a respective WAV file that contains silent 7.1.4 audio. gameplay.wav contains the gameplay audio from gameplay.mp4, re-recorded using MetaAudio, because GeForce Experience does not support surround sound; it only records in stereo. So I recorded the 7.1.4 gameplay audio in Audacity and lined it up with the GeForce Experience recording. The same goes for class_sel.wav.
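
For the silent tracks, FFmpeg's anullsrc source can generate matching 7.1.4 24-bit WAVs directly. A sketch, with the duration as a placeholder to be matched to each segment:

Bash Command
# 10 seconds of silent 7.1.4 audio at 48 kHz, 24-bit PCM.
ffmpeg -f lavfi -i "anullsrc=channel_layout=7.1.4:sample_rate=48000" \
	-t 10 -c:a pcm_s24le "audio/intro.wav"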

All audio is stored in the map's audio directory (e.g. a00_crossfire/audio).

Getting true 7.1.4 on YouTube

This is tricky. But while I was writing this blog post, I came across a format ID on YouTube labelled 773. This is Eclipsa Audio, based on IAMF (Immersive Audio Model and Format). This format not only supports surround sound layouts like 5.1, it also supports 7.1, 5.1.4, 7.1.4, etc. It supports 9.1.6 too, but that will not work on YouTube; it failed when I uploaded a test video.

Normally, giving YouTube a surround sound track will generate a 5.1ch version. But given that Christmas Deathmatch 2025 was recorded in 7.1.4, I wanted to give this a shot and see how it fares. No point in losing those extra channels of information. In the next section, you will see how I embed IAMF information into an MP4 file to prepare it for YouTube delivery.

Finalising a deliverable (final.mp4)

With the 4 MKV files created, we have our Christmas Deathmatch footage almost ready for YouTube. There's just one thing left: combine all of the MKV files into one and upload it. To do that, we need two files. First is segments/final.txt, which always contains the following:

Text file (segments/final.txt)
file 'intro.mkv'
file 'class_sel.mkv'
file 'gameplay.mkv'
file 'ending.mkv'

Additionally, create an audio/concat.txt with the following:

Text file (audio/concat.txt)
file 'intro.wav'
file 'class_sel.wav'
file 'gameplay.wav'
file 'ending.wav'

Then launch FFmpeg and concatenate all videos losslessly together:

Bash Command
ffmpeg -f concat -i "segments/final.txt" -f concat -i "audio/concat.txt" -map 0:v -map 1:a -c:v copy -c:a pcm_s24le "final.mkv"

This generates an MKV file that can be uploaded to YouTube or archived if we wanted. But we aren't done yet. Remember what I said about IAMF up above? Let's take it a step further and utilise that.

Bash Command
ffmpeg -i "final.mkv" \
	-filter_complex "
		[0:a]channelmap=0|1:stereo[FRONT];
		[0:a]channelmap=4|5:stereo[BACK];
		[0:a]channelmap=6|7:stereo[SIDE];
		[0:a]channelmap=8|9:stereo[TOP_FRONT];
		[0:a]channelmap=10|11:stereo[TOP_BACK];
		[0:a]channelmap=2:mono[CENTER];
		[0:a]channelmap=3:mono[LFE]
	" \
	-map "[FRONT]" -map "[SIDE]" -map "[BACK]" \
	-map "[TOP_FRONT]" -map "[TOP_BACK]" \
	-map "[CENTER]" -map "[LFE]" \
	-map 0:v -c:v copy \
	-stream_group "
		type=iamf_audio_element:id=1:st=0:st=1:st=2:st=3:st=4:st=5:st=6
		:audio_element_type=channel,layer=ch_layout=7.1.4
	" \
	-stream_group "
		type=iamf_mix_presentation
		:id=3:stg=0:annotations=en-us=default_mix_presentation,submix=parameter_id=100
		:parameter_rate=48000:default_mix_gain=0.0
		|element=stg=0:headphones_rendering_mode=binaural:annotations=en-us=7.1.4
		:parameter_id=101:parameter_rate=48000:default_mix_gain=0.0
			|layout=sound_system=7.1.4:integrated_loudness=0.0:digital_peak=0.0
			|layout=sound_system=5.1(side):integrated_loudness=0.0:digital_peak=0.0
			|layout=sound_system=stereo:integrated_loudness=0.0:digital_peak=0.0
	" \
	-streamid 0:0 -streamid 1:1 -streamid 2:2 -streamid 3:3 -streamid 4:4 -streamid 5:5 -streamid 6:6 -streamid 7:7 \
	-shortest \
	-c:a flac -compression_level 12 "final.mp4"

I am not going to go in-depth about this command. I hardly understand it myself, other than that it works. I based it off of some examples for 7.1.4 audio found here. All you need to know is that it takes the MKV file and gives us an MP4 file with FLAC-compressed audio.

Okay, I'll bite a little. I experimented with a few IAMF configurations, such as making several Mix Presentations versus one Mix Presentation with several layouts. YouTube did not accept the former, but it accepted the latter. I had to do this because the example commands given in the link above only render audio out to stereo. So when YouTube generated a 5.1ch version of my videos, it only had the FL and FR channels populated; everything else was silent. That being said, when I uploaded a 7.1.4 version of my video to YouTube, it didn't even generate ac-3/ec-3 5.1ch versions of my audio, despite me explicitly stating it in the layouts. Maybe I am doing things wrong; feel free to correct me if so.

Sure, I could have done this in one step. But I wanted final.mkv so I could actually listen to the file prior to uploading, to make sure the concatenated audio and video synced up. You can't play the IAMF file in VLC 3; it doesn't know how to decode it. And I am a simple lady, so I went with what I knew before converting it to IAMF.

Anyway, we got our final deliverable file. Let's upload it to YouTube.

Struggles

This project had its unique set of challenges.

Struggle with YouTube's HDR support

Most of the project was going quite well so far. We got the final delivery files, and they check all of the boxes in the specifications that I wrote up above: HDR10, 7.1.4 (IAMF), 2560x1080. YouTube can handle all of this. But recently, YouTube has been hiccuping on HDR processing. It seems kind of random whether a video processes in HDR. Do note, I have over 1,658 videos on YouTube in HDR10 due to my MWIII playlist... among other games I play. I frequently upload HDR10 and surround sound.

Back when I uploaded those videos, YouTube rarely struggled with HDR processing. Most of the time, it would finish within hours of uploading. Nowadays, it is much more inconsistent. Numerous other people have noticed the same thing, and they offered a few solutions:

  1. Contact Support (I am a YouTube Partner, so I get the support chat) and have them reprocess the video from their end.
  2. Reupload the video until HDR processes.
  3. Open the video editor in YouTube Studio and trim a fraction of the last section in the video off, to force YouTube to reprocess the video.
    • You used to be able to just "Revert to Original", but YouTube removed that option sometime earlier this year.

I've tried all three of these solutions. The third one is actually the worst option for me, because when you throw a video into YouTube's editor, it only saves a stereo audio stream, and my project is 7.1.4. When I contacted support, I got 2 videos reprocessed; one of them got HDR, the other didn't. The final solution, which is exhausting, is to simply reupload the video until it processes in HDR. This is the only way I tried that retains all of the features of the video I uploaded to YouTube.

I don't know what is going on over at YouTube, but you've got a real problem with HDR processing, and I suggest you fix it. Please.

Struggle with HLDS (Half-Life Dedicated Server)

The first 9 episodes were recorded using HLDS (Half-Life Dedicated Server). This was so Ardorous (Doom Kitty) could connect remotely; you can tell by looking at my ping and seeing that it was not 0. Anyway, when we got to the bonus maps, they pushed HLDS to its limits. Maps like ZONE and Cubix straight up crashed it, and it seems to ignore launch flags like -num_edicts 4096. However, when we played the Christmas Deathmatch in previous years, we had no issues with these maps. The reason is that hosting a server locally through the Half-Life 1 client directly seems to hold up better than the dedicated server.
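
For reference, this is roughly the kind of launch line involved, assuming the Linux hlds_run wrapper; the map and player count here are illustrative:

Bash Command
# -num_edicts should raise the entity limit; HLDS appeared to ignore it for us.
./hlds_run -game valve -num_edicts 4096 +maxplayers 6 +map crossfire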

I know. It sounds like bullshit. But we got Ardorous over to our place, I hosted a LAN party bypassing HLDS altogether, and the maps ran perfectly. HLDS isn't even able to run Abstraction; the map straight up crashes the server upon loading. But it runs through Half-Life 1 directly. I thought that was very interesting. Speaking of Abstraction though...

Struggle with Abstraction

I wanted everyone to enjoy this new map. My goal was to play it with 6 people and have us all have a blast (literally). But unfortunately, the game crashed several times while we played it. We tried 4 times, and the demo files recorded during those attempts also got corrupted, with one exception. So we had to scale back the number of players. For the finale of Christmas Deathmatch 2025, we had the usual 3 suspects: iDestyKK (me), Dr. DOOM, and KINGPIN.

It's safe to say the map still needs some optimisation, and I did get lazy with the explosions. When a series of explosions goes off, it goes off in every zone at once, which probably overwhelms the engine. When you boot the map without -num_edicts 4096, the game refuses to load it entirely. This is probably why HLDS straight up crashes.

The goal, accomplished

With that, we have successfully set up a method to generate video files and upload them to YouTube. This future-proofs me as well, since I can reuse the same files again and again in future years. I might make slight tweaks to them in following years to add some variety. We'll see.

How to watch Christmas Deathmatch 2025

You can watch the final encoded videos on YouTube here:

At the time of writing, the series is not public yet; YouTube is still trying to process HDR on the videos. I will add videos to the playlist as new HDR versions are processed. And when all 15 videos have processed in HDR, I will make all of the videos and the playlist public.

Source Code

To get the Christmas Deathmatch scripts, simply check out my hl_cdm repo's dev/cdm2025 branch on GitHub. It has my scripts for this project, as well as assets. I plan to archive all Christmas Deathmatch stuff there from here on out.
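
If you want to grab it from the command line, it goes something like this (the clone URL is shown for illustration; the branch name is the real one):

Bash Command
git clone https://github.com/iDestyKK/hl_cdm.git
git -C hl_cdm checkout dev/cdm2025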

So, what's next?

This project is not done yet. I have the POVs from everyone and plan to make a multi-POV mash-up of all of the perspectives throughout Christmas Deathmatch 2025. That way, you can see the chaos from everyone involved. Obviously, this will take much more time to produce. I plan on releasing a blog post soon about the history of DERPG collaborations and multi-perspective recordings, which will also detail my work on this very project for Christmas Deathmatch 2025.

Until then, you can see Christmas Deathmatch 2025 through my perspective. That's how it's been since 2012. Given I won most of the matches, it shouldn't be a problem, right? Hah. Just kidding. Anyway, happy holidays to everyone. I hope you all enjoyed it.

Special Thanks

Lastly, I would like to thank Dr. DOOM (JMS), KINGPIN, Ardorous (Doom Kitty), IntrepidWarlord, and grace for their contributions to this project. Without them, this project would not have been possible. It was a blast playing Half-Life with 3-6 people this year. We should do that again sometime. It's a tradition, of course.



