
Video Production and Broadcast Standards

Technical Articles | By: indie

It’s important to know the inner workings of the various standards for video resolution and frame rate in a little more detail than my video glossary goes into, so here is an explanation of the most common types you may come in contact with.

In the United States, the NTSC standard has been used for video broadcasts since the 1950s. NTSC is also used in certain parts of Asia, while PAL and SECAM are standards used in Europe. Each standard carries its own specifications – and in fact there are several variations of each standard – but the video standard you use will be determined mainly by the country in which you live.

Frame Rate

Frame rate describes the number of frames, or images, that are displayed per second of video. The human eye can detect jumps in anything below about 15 frames per second, but every video standard is well above this number. The NTSC standard frame rate, for instance, shows 29.97 frames every second.

It might seem odd that NTSC uses a fractional frame rate. The original television broadcasts of the 1950s ran at an even 30 frames per second in black and white. When color was added, engineers found that the easiest way to fit the extra information into the signal was to slow it down very slightly, to 29.97. PAL and SECAM both run at 25 frames per second, while film is shot at 24 fps.
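The slowdown was by a factor of exactly 1000/1001, which is where the familiar 29.97 figure comes from; a quick sketch in Python confirms the numbers:

```python
from fractions import Fraction

# NTSC color slowed the original 30 fps signal by a factor of 1000/1001
ntsc = Fraction(30) * Fraction(1000, 1001)   # exactly 30000/1001

print(float(ntsc))            # 29.97002997...
print(round(float(ntsc), 2))  # 29.97, the commonly quoted rate
```
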


Timecode

Timecode labels video frames in real time as Hours, Minutes, Seconds, and Frames. Since NTSC runs at a fractional frame rate, it uses a type of timecode called drop-frame, which adjusts itself every minute so that those 29.97 frames per second match exactly the actual amount of time that has passed. By contrast, non-drop-frame timecode makes no adjustment, so as time goes by the timecode drifts further and further from the actual amount of time that has passed.
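The adjustment works by skipping two frame *numbers* at the start of every minute, except every tenth minute – no actual frames of video are discarded, only labels. A minimal sketch of that counting rule, assuming a 29.97 fps NTSC signal:

```python
def to_drop_frame(frame_number):
    """Convert a raw frame count to drop-frame timecode (HH:MM:SS;FF).

    Two frame numbers are skipped each minute, except minutes 0, 10, 20...,
    keeping the timecode in step with real elapsed time at 29.97 fps.
    """
    drop = 2
    frames_per_10min = 30 * 60 * 10 - drop * 9   # 17982 real frames per 10 minutes
    frames_per_min = 30 * 60 - drop              # 1798 real frames per "drop" minute

    tens, rem = divmod(frame_number, frames_per_10min)
    adjusted = frame_number + drop * 9 * tens    # 9 dropped minutes per 10
    if rem > drop:
        adjusted += drop * ((rem - drop) // frames_per_min)

    hh, rest = divmod(adjusted, 30 * 3600)
    mm, rest = divmod(rest, 30 * 60)
    ss, ff = divmod(rest, 30)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"
```

For example, the 1800th frame is labeled `00:01:00;02` rather than `00:01:00;00`, because the labels `;00` and `;01` of minute one are skipped.
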


Resolution

Great strides have been made in the realm of resolution, and broadcast standards in the US have shifted to HD – high-definition video. When you pop in a standard-definition DVD and sit down to watch, you're looking at video that is 720×480 – 720 pixels wide by 480 pixels high – for a total of 345,600 pixels. High-definition video starts at the 720p standard and goes up from there. 720p is 1280×720, or 921,600 pixels of resolution. There are also 1080p and 1080i, which are each 1920×1080, or 2,073,600 pixels. So you can see that high-definition broadcast standards have significantly increased the level of detail we experience in video.
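The pixel totals above are just width times height; a quick comparison shows how big the jump from SD to HD really is:

```python
formats = {
    "SD (DVD)": (720, 480),
    "720p": (1280, 720),
    "1080p/1080i": (1920, 1080),
}

for name, (w, h) in formats.items():
    print(f"{name}: {w}x{h} = {w * h:,} pixels")

# A 1080-line HD frame carries exactly six times the pixels of an SD DVD frame
print((1920 * 1080) / (720 * 480))  # 6.0
```
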

The i and the p in these formats stand for interlaced and progressive. In interlaced video, each frame actually contains two fields – the alternating scan lines of two images captured a split second apart. On playback, the fields are displayed in alternation, or woven back together by a deinterlacer, to reconstruct full frames. This method is used to save bandwidth, while progressive scan transmits every line of every frame.
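If you treat a frame as a list of scan lines, the split is easy to picture: the even lines form one field and the odd lines the other. A toy sketch:

```python
def split_fields(frame_lines):
    """Split an interlaced frame into its two fields (even and odd scan lines)."""
    even_field = frame_lines[0::2]   # lines 0, 2, 4, ...
    odd_field = frame_lines[1::2]    # lines 1, 3, 5, ...
    return even_field, odd_field

def weave_fields(even_field, odd_field):
    """Re-interleave two fields into a full frame (the simplest deinterlace)."""
    frame = []
    for a, b in zip(even_field, odd_field):
        frame += [a, b]
    return frame
```

With a four-line "frame" `["L0", "L1", "L2", "L3"]`, `split_fields` yields `["L0", "L2"]` and `["L1", "L3"]`, and `weave_fields` restores the original order.
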

Aspect Ratio

Aspect ratio is the measure of width in relation to height. A video's resolution is measured in pixels, and its aspect ratio tells you the relationship between its horizontal and vertical dimensions. The standard television aspect ratio is 4:3, meaning that for every four horizontal pixels, there are three vertical ones. Widescreen video used in widescreen televisions is 16:9, or sixteen horizontal pixels for every nine vertical ones.

There are a few other commonly used aspect ratios, such as the even wider theatrical formats 1.85:1 and 2.39:1. The term anamorphic refers to using a special lens that squeezes a wide image horizontally so that it fills a standard 35mm film frame, which is traditionally closer to a 4:3 shape; the projector's lens then unsqueezes it back to full width. When you watch an unmodified widescreen film on a regular 4:3 television, you see black bars above and below the picture because the wide image is letterboxed: it is shrunk until its full width fits the screen, and the leftover areas above and below it are left black.

Resolution vs. Aspect Ratio

It’s important to understand the distinction between resolution and aspect ratio; resolution measures the total size of a video’s frame in pixels, whereas aspect ratio is a measurement of the relationship between its vertical and horizontal dimensions. Take the 4:3 aspect ratio, for example. You could have one video at 640×480 resolution and another at only 320×240. While the smaller 320×240 video has only 1/4 as many pixels of resolution as the larger one, they are nonetheless both still in the 4:3 aspect ratio.
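Reducing width and height by their greatest common divisor recovers the aspect ratio, which is how both resolutions above come out to 4:3 despite their different pixel counts:

```python
from math import gcd

def aspect_ratio(width, height):
    """Reduce a pixel resolution to its simplest width:height ratio."""
    d = gcd(width, height)
    return width // d, height // d

print(aspect_ratio(640, 480))    # (4, 3)
print(aspect_ratio(320, 240))    # (4, 3) -- a quarter of the pixels, same ratio
print(aspect_ratio(1920, 1080))  # (16, 9)
```
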

Video Standards in Indie Filmmaking

Which of the various standards outlined above you use will be determined by where you live and what equipment you are using. Since most independent projects are done using video rather than film, you may have seen low budget movies that used a number of varying standards without even realizing it.

You can make a good film regardless of the video standard you are using, but knowing what you're dealing with will help you troubleshoot any problems that arise along the way. For any additional questions or concerns, feel free to contact me. I can't always answer right away, but I will always try to respond.

Glossary of Video Terms

Technical Articles | By: indie

Arc – camera rotation around an object that maintains the same distance from it for the duration of the movement.

CCD – charge-coupled device. The light-sensing chip(s) inside a digital video camera.

Close-up – a tight shot of a person’s bust, from the top of their head to their neck, shoulders, or upper chest. Also used to show an object, so that it fills the frame in its entirety.

DAW – digital audio workstation.

Depth of Field – the portion of a shot that is in focus.

DEW – digital editing workstation.

Dolly – physical movement of the camera toward or away from the subject, usually on a wheeled platform; dollying in moves the camera closer, dollying out moves it away.

Extreme Close-up – a very tight framing method that shows only a tiny part of the subject in great detail. On a person, usually this is the face or the eyes, and on an object this tends to be a small portion showing an element or piece of it.

Establishing Shot – a wide shot that depicts the environment in which the scene takes place. Normally used as the opening shot in a scene, it establishes the greater area around the action happening in that scene.

Flash Pan – a very quick pan, where the camera moves so quickly the area between the starting and ending points of the pan are blurred by the motion.

Focal Length – the distance, usually measured in millimeters, between a lens’s optical center and the image sensor, which determines the lens’s field of view. A lens set to a wide angle has a short focal length and a deep depth of field, meaning that objects both closer to and farther from the lens stay in focus. As the lens zooms in and the telephoto value increases, the focal length grows and the depth of field narrows, causing an increasingly thin band of the scene to stay in focus while the rest begins to blur.

Focus – a measure of image quality wherein an on-camera object is clearly depicted without blurring.

Indie Film – technically, a film project on a budget of less than $4 million.

Medium Shot – framing a person from just above the top of the head to around their navel or midsection.

NLE – Non-linear editing or Non-linear editor.

Pan – Keeping the camera in place while turning from side to side along the horizontal axis of the frame.

Pedestal – lifting or lowering the camera’s height while keeping its viewable area level.

Rack Focus – a camera technique used with a narrow depth of field; the camera changes focus from a near object to a far one, or vice versa.

Telephoto – a long focal-length lens setting, reached by zooming in, that magnifies distant subjects and narrows the field of view.

Three Point Lighting – the standard portrait and single-subject lighting technique used to most effectively illuminate an onscreen object or person.

Tilt – the camera pivots vertically, along an imaginary line on its x axis, to “face” upwards or downwards.

Truck – physical movement of the camera sideways, past or alongside the subject, while the camera continues to face it.

Wide Angle – the shortest focal-length, most zoomed-out setting of a camera’s lens, which captures the widest possible view.

Wide Shot – framing of a shot from a distance so that a larger amount of the action taking place can be seen in the frame.

Zoom – A camera function where the camera stays in place while the lens moves in or out. This changes the viewable area and alters the perceived distance of the subject. This also has the effect of changing the focal length of the lens.

Film Composition

Technical Articles | By: indie

For those of you who have photography experience, this term may already be familiar to you. Composition is the term used to describe the way an image is set up; film composition refers specifically to its use in filmmaking.

The rectangle that encloses every frame of your film becomes the window through which the viewer looks into the world you have created. For the duration of the film, you want this to be their reality.

However, this “window” is unlike reality in that said viewer does not have the ability to look around and take in more information than you give them; they are limited to only the knowledge you provide them through visual and aural means.

Think of the way you set up each shot as a tool that you can use to shape your viewers’ perception of every moment. If you’re going to provide them with this information, you want it to appear in a way that draws the eye to its most important parts so that these parts are quickly recognizable.

The Z-Axis

Although a film is technically a two-dimensional (2D) image on a screen, part of an audience’s suspension of disbelief relies on your ability to appeal to their subconscious and make them think they’re looking at a 3D reality.

What I mean by that is, if they’re looking at an image that resembles something they might see in real life, they’ll be more likely to submit themselves mentally to what you’re putting in front of them.

When we speak about the layers of film composition available in an image, we’re talking about the appearance of depth – how far away things appear to be, or what we call the Z-axis. Video shot on a digital camcorder with a fixed lens tends to be really flat; it doesn’t have a lot of Z-axis depth, because of the way video cameras work.

To separate our general layers along the Z-axis, we use the terms foreground, middleground, and background. Lighting, depth of field (which is explained in this tutorial), elevation, and perspective can be used to give depth to each of these layers of distance.

Light and shadow, for example, play an important role in an audience’s perception of the Z-axis. When a person stands in the doorway of a dark room with light coming in from the other side, a silhouette is created. When a three-point lighting setup is used to illuminate half of a person’s face slightly brighter than the other half, it gives the person three-dimensional qualities.

The X and Y Axes

The other part of film composition involves the spatial arrangement of focal points in the frame from left to right and from top to bottom. Applicable to both photography and cinematography, the rule of thirds specifies the points in an image where the center of interest should be.

Imagine a picture – any picture – or use an actual image you’ve got lying around. Start at the top left corner, and then go a third of the way across the top. Draw an imaginary line on the image from top to bottom. Go another third of the way across and draw another imaginary line. Now your image is divided into thirds vertically. Do the same thing horizontally, and you’d have a total of nine “boxes” divided by these imaginary lines.

There are also four points at which the lines intersect. These points, according to the rule of thirds, are the best places around which to orient the focal points of an image.
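For a given frame size, those four intersection points fall at one third and two thirds of the way across each axis. A small helper, assuming pixel coordinates with the origin at the top left:

```python
def thirds_points(width, height):
    """Return the four rule-of-thirds intersection points for a frame."""
    xs = (width // 3, 2 * width // 3)
    ys = (height // 3, 2 * height // 3)
    return [(x, y) for x in xs for y in ys]

print(thirds_points(1920, 1080))
# [(640, 360), (640, 720), (1280, 360), (1280, 720)]
```

Placing a subject’s face or an object of interest near one of these points usually reads as more natural than dead center.
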

Audience Perception

Using a combination of factors on the Z, X, and Y axes is key if you want to take your imagery to a new height. When you consider film composition, remember that what’s outside the frame is often just as important as what’s shown within it.

Video Vs. Film – The Differences

Technical Articles | By: indie

It’s become an epic debate among filmmakers as to whether one medium is really superior to the other. But while there are several fundamental differences between video and film, which I’ll explain here, many times the deciding factor between who uses video and who uses film is cost.

Film is Expensive – Video is Not

Whereas film must be chemically developed and have light shone through it in order to be projected, video is captured electronically – on magnetic tape or other media – and scanned back over a playhead. Whether analog or digital, video is recorded at a fixed resolution, so it can only be enlarged so far before its quality visibly degrades. Film, on the other hand, is an analog medium: its image can be projected about as large as the projector’s strength and the distance from projector to screen allow.

The average indie filmmaker doesn’t use film because, well, it costs a lot of money. If you’re old enough you may remember the days before digital cameras became commonplace and you used to have to load rolls of 35mm film into your camera to take pictures. When the roll was done, you’d have to wind it back into its casing, take it out and get it developed.

Nowadays it seems like a foreign concept to have to wait to look at your pictures, doesn’t it? A roll of camera film containing 24 or 36 exposures used to cost around $3-5 to buy and another $3-5 to develop.

Now stop. Think about that for a second; think about a roll of 24 pictures of film costing even $2.

Using a film camera, 24 frames of film is one second of screen time. One. Second. Multiply $2 by 60, and then by 90. That’s to say, if you roll camera and cut camera at the exact instant you start and end your scene, do only one take of each shot, and film a full-length 90-minute movie, that film alone at $2 a second costs you $10,800. So in a monetary sense, the difference between film and video is huge.
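The arithmetic above is worth writing out. Even at a hypothetical $2 per second of shot-and-developed film, a cuts-only 90-minute feature runs into five figures:

```python
cost_per_second = 2    # hypothetical dollars per second of shot-and-developed film
seconds = 90 * 60      # a 90-minute feature: one take per shot, zero waste

total = cost_per_second * seconds
print(f"${total:,}")   # $10,800 before a single retake or foot of B-roll
```
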

You probably don’t have that much money to spend on even 90 minutes of film, let alone the amount of film it would actually take after you cut the outtakes, pre-roll, post-roll, and any deleted scenes or B-roll footage. If you do have that kind of cash, you’re either incredibly rich, crazy, or you have investors who believe very strongly in your directing skills. So let’s go with you using video instead of film.

Image Quality

While cost plays a major role in the use of video vs. film, the biggest bone of contention comes from the way each medium captures and displays imagery. Because film captures light waves directly, in continuous gradations of tone and color, it looks smooth and soft when projected, even at large sizes.

Digital video has a native resolution and is made up of pixels, so it’s sharper than film and it has more of a rigid appearance. When you increase or decrease the resolution of any digital file you start to see interpolation, which is when the computer mathematically re-interprets the pixels in an image and either adds new ones to make up for a larger size, or takes them away when the resolution becomes smaller.

Since a pixel (which, by the way, is short for ‘picture element’) is essentially a tiny square containing a single color, increasing an image’s output size without actually changing the number of pixels it contains will result in pixelation – your eye will more easily recognize the presence of pixels in the image.
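Nearest-neighbor interpolation, the crudest upscaling method, makes the effect obvious: each pixel simply becomes a block of identical copies, which is exactly the blockiness your eye picks up as pixelation. A toy sketch on a tiny 2×2 “image”:

```python
def upscale_nearest(img, factor):
    """Enlarge an image (a list of pixel rows) by duplicating each pixel into a block."""
    return [
        [px for px in row for _ in range(factor)]  # repeat each pixel across the row
        for row in img
        for _ in range(factor)                     # repeat each row downward
    ]

tiny = [[1, 2],
        [3, 4]]
for row in upscale_nearest(tiny, 2):
    print(row)
# [1, 1, 2, 2]
# [1, 1, 2, 2]
# [3, 3, 4, 4]
# [3, 3, 4, 4]
```

Real scalers use smarter interpolation (bilinear, bicubic) that blends neighboring pixels instead of copying them, but no method can invent detail that was never recorded.
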

Lessening the Differences Between Film and Video

So while digital imagery is cheaper and easier to produce, manipulate, and control, it also has certain constraints. That’s why High-Definition video is such a huge advancement; HD video contains an insanely large number of pixels, meaning a higher resolution that can be displayed at larger sizes. Therefore, you can go bigger before you start to see the pixelation normally associated with inflating video.

By moving to an HD standard and creating cameras that continually increase the quality of digital image reproduction, we are essentially lessening the differences between film and video. The closer we get to being able to replicate the human eye with video cameras, the better imagery we can create.

One day we’ll have digital camcorders with all the visual advantages of film and none of its disadvantages, like the dust, scratches, and graininess that sometimes plague film productions – disadvantages you already avoid by shooting on video.

Speaking of the way film looks, there are several specific technical differences between video and film, so if you want to find out more about those you can try your hand at making your video look like film.

Depth of Field

Technical Articles | By: indie

The depth of field in an image is the portion of it that appears to be sharp and crisply in focus. In this tutorial I’ll explain depth of field and demonstrate how you can use varying degrees of it to add interest to your shots.

Later, I’ll show you some side-by-side comparisons of wide and narrow depth of field.

This technique works by giving visual priority to the subject you want your audience to focus on. If you want them to see everything your image has to offer, with each object given equal importance, the whole image should be in focus, so you should use a wide depth of field. If a single object, person, or group takes precedence over all else in the image, a narrower depth of field will help draw the viewer’s eye to it.

Wide Depth of Field

In order to shoot with a wide depth of field, your camera’s lens needs to be set at a wide angle, and you can achieve this by simply zooming all the way out.

Shots where you have multiple objects whose distances from the camera vary greatly, such as landscapes and horizons, are best captured with a wide depth of field. A wide-angle lens configuration can extend your depth of field literally to infinity; for single subjects, though, a narrower field will lend a degree of polished professionalism to your work.

Narrow Depth of Field

If you want to focus in on a particular subject, your image will look more natural if you use a narrow field depth. To narrow your depth of field, dolly back, away from your subject, and then zoom in.

A camera lens that is zoomed in is said to be in telephoto mode as opposed to wide angle mode. While zoomed in, every movement of the camera affects the frame to a larger degree, so it becomes especially important to maintain a steady shot when in telephoto.

After you have set up a shot with a narrow depth of field, it’s important to make sure your subject stays in focus. Movement toward or away from the camera could put the subject out of focus quickly, depending on just how narrow your field depth is.

Audio and Video Editing Software Reviews

Technical Articles | By: indie

Video Editing Software

If you’re just getting started in the realm of digital multimedia editing for video or audio, it’s helpful to know what software is out there and available for you to use. Following are my brief reviews of just about every digital video editing program I’ve ever used.

Windows Movie Maker [6/10]

As basic as basic can be. The program has become more robust in its latest versions, but it’s still just a bare bones system for the most part. Gives you ease of use in exchange for control and functionality.

Adobe Premiere [8/10]

Adobe has made great strides in improving the stability of their workhorse as time has gone on, but I still find the program to be only relatively reliable at best. The layout has changed and now uses tabs you can pick up and move around to suit your tastes, and its support for import and export of multiple video formats is fantastic. It even has a direct-to-FLV export feature for Flash video, which comes in handy for me quite often.

Sony Vegas [9/10]

I’ll tell you straight away, this is my video editor of choice. It comes from an older program called ScreenBlast Movie Studio, which Sony bought and fleshed out into a full-fledged professional-grade editor. Compared to some other video programs it doesn’t cost much, and there are a few versions at different prices that offer some purchasing flexibility (while the cheaper ones have obviously limited features). It syncs up perfectly with SoundForge, which I use for audio editing. The toolkit is great, with tons of included effects and transitions, and the learning curve is relatively low because the interface is intuitive. Press ‘S’ to slice a video in two at your current scrubber location. Grab the top corner of a video and pull it inwards to create a smooth fade up from black. Very smooth, especially for those novice users trying to do so for the first time. For a pittance, you can pick up a copy that’s only a version or two back from the most current release.

Pinnacle Studio [7/10]

It has a sweet capturing setup, but that’s about all I ever used it for (in the days before drag-and-drop file transfer, when we had to capture all our video real-time). To be honest I’ve never gotten much into editing with it. It has both storyboard and timeline editing capabilities and Pinnacle includes it with most of their sound cards and breakout boxes, which is nice.

Apple Final Cut Pro [9.5/10]

Entire films have been edited and post-produced using FCP. Probably the best video editing suite I’ve ever used, but the price tag is indicative of the quality of this product. It’s a shame Apple is such a snobby company and they don’t offer a PC version. You can tell I’m not an Apple fan in general, although I do love my iPad. Again, the limiting factor of Final Cut Pro is its outrageous price.

Audio Editing Software

These programs don’t all have the ability to record multiple simultaneous tracks, but they can all be used to record and tweak audio for films and videos.

Audacity [7.5/10]

One element that makes this a great audio program is that it’s free: you can download it from SourceForge. A great piece of software for the price!

Cool Edit Pro [8/10]

An old favorite of mine; it’s no longer manufactured, and in fact Adobe bought the rights to it several years ago and turned it into Adobe Audition. Cool Edit has limited multitrack recording capabilities but it’s great for finalizing audio, mixing down, tweaking and signal processing.

Cakewalk [8/10]

This is a dual-mode program that allows you either to program digital music using MIDI – note and instrument data that the computer plays back through synthesized voices – or to record live audio yourself. There are hundreds of MIDI voices available that mimic every instrument imaginable. You can program in each note and beat and integrate it with actual recorded sounds, which is a nice feature if you don’t have the actual instrumentation for every sound you want. Its basic audio manipulation functionality is very limited, however, and when I’m editing with it I usually find myself having to bring pieces into other programs to work on them.

Steinberg Cubase [5/10]

Even with lots of experience using audio editors, I find it difficult to navigate this program. We use it on our mobile recording station at work and every time I open it up I find myself having to go back to the help section to re-learn things; in other words, the interface is not very intuitive. Can’t say I like it all that much.

Adobe Audition [7/10]

A polished version of its predecessor, Cool Edit Pro. It has some minor stability issues, but overall it’s a decent program, other than the fact that it costs a lot unless you buy it as part of a suite or package from Adobe.

Sony SoundForge [8.5/10]

Sony bought this from its creator, Sonic Foundry, and added it to its suite of products. Great support for plug-ins and a host of them are already included in the main package. You can right-click a segment of audio in Vegas and open it in SoundForge for editing, which for me is one of the best features it has. Older versions of this software can be had very reasonably.

Sony Acid Music Studio [8/10]

This is more of a music composition software than an editing software, but it’s one of the leading packages out there if you’re interested in creating your own synthesized loops. It comes in several different varieties, from the free Xpress version, to the mid-level Music Studio and up to Acid Pro at the top. Also has recording capabilities for adding your own sounds.