Showing posts with label Standards. Show all posts

Sunday, January 24, 2016

Loudness

So a thing which concerns me is that I think in the future we're going to start seeing demands for ATSC A/85 loudness standards in deliverables.
But we do care. We care so much.

Now for those of us in the indy world, there are frequently certain standards from the big network TV world that we get some allowance to slide by on. But I don't think the international loudness standards are going to be one of those things.
So I want to be ahead of the... 8-ball? Curve? Whatever it is one needs to be ahead of to make sure we're delivering masters which are compliant with the CALM Act and that sort of thing.
The high-end version of the audio mixing/editing software I use (Samplitude) is called Sequoia. It is comically expensive (almost $3000). But it has built-in tools for decent loudness metering.
There are cheaper LUFS meters, but not necessarily what one is looking for in the way of broadcast audio.
In any case, measuring A/85 or any of the other loudness standards is... weird. You're measuring an average of an average of a level, but only through a frequency-weighting filter (the "K-weighting" from ITU-R BS.1770) and only when the signal is above a certain gating threshold. Right? Because loudness is a subjective thing and making a meter to measure it is a pain in the tuchus.
Dig this (from the above TC Electronic link):
Target levels are specified in various broadcast standards, but only vary slightly. For instance, the ATSC A/85 standard recommends a target of -24 and uses the LKFS term, whereas the EBU R128 standard sets the target level at -23 and uses the LUFS term. One of the reasons for this difference is that the R128 standard employs the above-mentioned gate, which in effect makes most measurements equivalent to -24 LKFS/LUFS without the gate - yet more useful for aligning loudness across genres.
Yup. The broadcast standards are sort of difficult to get one's head around. But it's doable. And if we can guarantee the deliverables (which I imagine might be important for VOD) it will help.
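For what it's worth, ffmpeg will apparently do a quick-and-dirty R128/BS.1770 measurement for free. Something like this (the ebur128 filter; "input_file" stands in for your mixed master, and the "-f null -" at the end just throws the decoded output away) should print an integrated loudness figure you can sanity-check against the -24/-23 targets:

ffmpeg -nostats -i input_file -filter_complex ebur128 -f null -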

Monday, November 30, 2015

Roboticide

I've been killing a lot of androids lately. I know that's going to come back to me.



Yes, Mother, you can export ProRes out of AfterEffects in Windows. It requires this free plugin from the company DuBon. And you can only export files, not compositions, so you have to pre-render first. But it can be done. It can. Be done. H/T Ian Hubert.
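And if all you need is to get an already-rendered file into ProRes without touching AfterEffects at all, I gather ffmpeg will also encode it these days (prores_ks is one of its ProRes encoders; the file names here are just placeholders):

ffmpeg -i input_file.avi -c:v prores_ks -profile:v 3 -pix_fmt yuv422p10le output_file.mov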

Tuesday, November 24, 2015

What's for me

Of things that do not exist yet, the Blackmagic Micro Cinema camera is the thing that speaks most to me.

First of all, it's going to be cheap. Like a thousand dollars cheap.
Secondly, Chance Shirley has convinced me that 16mm is a better sized format for shooting because it's somewhat easier to focus than 35mm. Indeed, 35mm is a pain in the tuchus to focus.
Thirdwise, it's micro-four-thirds. I have a micro four thirds lens that's pretty fast. I'm totally down with that.
Quadranaically, there's HDMI video out. Oh man, the SDI on other Blackmagic cameras irked me. HDMI is so much easier.
On the Five Spot, it records to SD cards, not to weird stuff.
Sixly, it's got a global shutter and rolling shutter irritates me half to death.

The problems with it? Well for one it doesn't exist yet. Also, it's not 4K. Blackmagic is indeed coming out with a 4K Micro but it has no onboard recording. So as long as buyers don't care about "Ultra HD" we're good. The problem with 4K is that nobody can actually see it unless they sit with their face right up in the screen just like their moms told them not to do.

I feel like we just got computers to be decent at rendering high-def and now we have to do 4K. Sigh. I feel HD really is the top resolution. Nobody really sees anything higher. I mean, the boys down at THX say that film prints have an effective resolution of about 700 lines. So why all this resolution stuff? Ugh. Now I am complaining.

I think the Micro Cinema camera seems cool.

Monday, April 22, 2013

CALM Down

You are utterly fascinated by relative broadcast levels of dialog, aren't you? Yes. Yes you are. 
The Angry Sound Professional explains the CALM Act and what it really means. Here's part 1.
We used to have one of these jackets. It was stolen out of my car. 
In part 2 we go deeper into the meaning of dialnorm as Vince Tennant explains how the Act works. I suspect that part 3 will show how the act doesn't work but we're still waiting.

Wednesday, November 23, 2011

Output and Intermediary Formats

Did you know that AfterEffects will cycle mask colors? Why isn't this the default? Sheesh.

More conversation between me and Nathan Vegdahl regarding delivery formats.

Me: I worry about legacy issues. In 10 or 15 years, will a movie I make today still be in a readable format? I dunno!

Nathan: The more open the standard, the more likely it is to be readable in the future. h.264 and AAC in an mp4 container is good, because it has open-source implementations. Worst-case scenario: someone has to pull up old source code and use it to convert your movie to a new format.
OpenEXR is also open source, so same deal.
The formats you have to worry about are proprietary formats, because if the companies behind them stop supporting them, you're out of luck.

Me: The main trick for our lab is: will they be able to take the file we give them and make a DigiBeta tape out of it? Because we still need to deliver DigiBetas.

Nathan: [editor's note: TL;DR: use Handbrake and/or skip to*] Hmm... I don't know if ffmpeg can encode for DigiBeta. I'll look into it. I currently use ffmpeg 0.8.5 on Linux for all of my final encoding. I do everything from the command-line, because it gives me more precise control over the encoding process, and I get to play with all kinds of weird settings. But for the most part I keep it simple.
If I want to encode to a nearly (but not quite) lossless H.264 file that is widely compatible with other software, I use this command-line:
ffmpeg -i input_file -vcodec libx264 -vprofile baseline -crf 1 output_file.mp4
Where "input_file" is the name of your input video file, and "output_file.mp4" is the name of the file you want to create. Ffmpeg auto-detects what container format you want from the output file extension. In this case, it knows that .mp4 means the mpeg4 container.

The "-vcodec libx264" tells it to encode the video as H.264. The "-vprofile baseline" tells it to only use the most widely supported H.264 features, for maximum compatibility. The "-crf 1" tells it to encode nearly lossless (0 would be lossless, but isn't supported in the baseline profile; higher numbers are more lossy).

Sometimes you also need to tell it what pixel format to use, especially if you're encoding from an image sequence (I'll get to image sequences in a moment). You can do that with the pix_fmt option:
ffmpeg -i input_file -vcodec libx264 -vprofile baseline -crf 1 -pix_fmt yuv420p output_file.mp4

For fully lossless encoding, we drop the profile specification and use crf 0 and 444 chroma:

ffmpeg -i input_file -vcodec libx264 -crf 0 -pix_fmt yuv444p output_file.mp4

The resulting file, however, will not be widely supported; Apple's h.264 decoder, for example, won't read it.

You will also probably want to use AAC for your audio. Ffmpeg uses aac by default with the mp4 container, but we can specify it manually to be certain:

ffmpeg -i input_file -vcodec libx264 -vprofile baseline -crf 1 -acodec libfaac output_file.mp4

And if we want to specify the bitrate of the audio (for example, 320kb/s):

ffmpeg -i input_file -vcodec libx264 -vprofile baseline -crf 1 -acodec libfaac -b:a 320k output_file.mp4

If you have separate video and audio source files, you can specify them both, and ffmpeg will merge them:

ffmpeg -i input_video_file -i input_audio_file -vcodec libx264 -vprofile baseline -crf 1 output_file.mp4

When I render anything from a 3d application (for example, Blender) I always render to an image sequence. When I'm ready for final encoding of the animation, I render to png's, which are lossless, and then use ffmpeg afterwards to manually encode them into a video file. To do this you need to tell ffmpeg where in the file names the frame number is. You do this with "%d" and some variants thereof.

If your files are named like this:
image_1.png
image_2.png
image_3.png
...
image_125.png

Then you specify the image sequence as "image_%d.png". The "%d" goes wherever the frame number is. Ffmpeg will then find all the files matching that pattern.

If your files are named like this:
image_0001.png
image_0002.png
image_0003.png
...
image_0125.png

Then you specify the image sequence as "image_%04d.png". The "04" (that's zero four) between the % and the d tells ffmpeg how many digits long the number is.

So, using this in an actual ffmpeg command-line:

ffmpeg -i image_%04d.png -vcodec libx264 -vprofile baseline -crf 1 output_file.mp4

The problem with image sequences, though, is that they contain no information about frame-rate. So we need to tell ffmpeg what frame-rate they are supposed to be in. You must specify this _before_ the image sequence on the command-line. This, for example, would give the image sequence a frame-rate of 24fps:

ffmpeg -r 24 -i image_%04d.png -vcodec libx264 -vprofile baseline -crf 1 output_file.mp4

You can also specify the frame rate with decimals and fractions:

ffmpeg -r 29.97 ...
ffmpeg -r 30/1.001

If you plan to use the file for video editing, make sure to set the GoP to 1, which means that every frame will be encoded on its own without reference to other frames in the video (such frames are called "intra frames" or "I-frames"). This makes the file size much larger, but it means that a video editor can pull frames out at random very easily, which is good for scrubbing etc. You do this by adding "-g 1" to the command-line:

ffmpeg -i input_file -vcodec libx264 -vprofile baseline -crf 1 -g 1 output_file.mp4

So there you go, that's a quick-and-dirty tutorial on how I do my video encoding. Although, internally I usually use Matroska (http://matroska.org) as my container format instead of mp4. But... then again, I use an open-source pipeline, where Matroska is well supported. I always use mp4 containers when sending material to other people.
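[editor's note: as I understand it, the Matroska version is just a matter of giving the output a .mkv extension, since ffmpeg picks the container from the file name, e.g.:]

ffmpeg -i input_file -vcodec libx264 -crf 1 output_file.mkv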

If you don't want to use command-line ffmpeg, you can use Handbrake (http://handbrake.fr), which is a cross-platform GUI-ified version of ffmpeg. It exposes most of these options, though sometimes they can be hard to find.

*Come to think of it, ffmpeg has an open-source ProRes decoder as well. So, for example, you could use ffmpeg to convert from ProRes to h.264 if you wanted to. I don't recall if it supports ProRes 422 yet, though. But presumably it will in the future if it doesn't already.
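[editor's note: assuming your ffmpeg build does include the ProRes decoder, that conversion would presumably look like the earlier examples, just with a ProRes .mov as the input, e.g.:]

ffmpeg -i input_file.mov -vcodec libx264 -vprofile baseline -crf 1 output_file.mp4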

Sunday, January 21, 2007

Project Standards

Every idea at the "let's write something down" stage, whether it is a treatment or a script or even just a story, will have a job number assigned to it.
Since, in order to prevent duplicates, only one person can be doling out job numbers, that person will be me.
A job number will have four digits. "0701" is the first job of 2007. We can only have 100 script/treatment/stories per year, and we'll be out of business by the year 2100.

Here's an example of the file name of a script in Final Draft format:

0701 v1.27 Angry Planet.fdr

Notice that the name of the movie ("Angry Planet") can be changed without affecting the alphabetical order of the project. Also note that every time a script is "handed off" to another writer, or any time anyone makes a change to a script, the "v" number is incremented. Here's an example of the above project when someone has made a change and changed the name:

0701 v1.28 Happy Planet.fdr

The job number (0701) will follow this movie until the very end of its life. "0701" will end up on slates and on contracts in order to identify the movie.
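One handy side effect: because every file starts with the job number, you can round up every draft of a project no matter how many times the title has changed. In a Unix-ish shell (a made-up example, using the files above), something like this lists them all:

ls "0701 v"*.fdr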