Using Filters & Functions

Functions are the instructions a user gives to VapourSynth, allowing video and audio data to be imported and manipulated. The most common way to manipulate footage in VapourSynth is with filters that alter the video information. A filter may consist of a single function or a group of functions that affect the output of the video. It is important to read the documentation for each filter: many filters do not support every color space, so you may need to convert to a specific color space before using a particular function.

Most filters require only one video input, and each filter's functions expose the parameters a user can adjust. Though VapourSynth has many built-in filters, many users download additional external filters. This guide covers some of the most commonly used filters, but there are countless others you can experiment with.
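As a hypothetical sketch of such a conversion, the built-in resize functions can change a clip's format before it is passed to a filter that only accepts, say, 8-bit YUV 4:2:0 (the source file name and target format here are assumptions, not requirements of any particular filter):

```python
import vapoursynth as vs
from vapoursynth import core

# Hypothetical source file; substitute your own footage.
video = core.lsmas.LWLibavSource(r'video.mkv')
# Convert to 8-bit YUV 4:2:0 before using a filter that only accepts that format.
video = core.resize.Bicubic(video, format=vs.YUV420P8)
video.set_output()
```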

A. Inverse Telecine & VIVTC

VIVTC is a plugin that contains several filters for handling telecined (interlaced) footage. The VFM filter restores progressive frames from the telecined source, and VDecimate then deletes duplicate frames from the VFM result. The filter order is important; running VDecimate before VFM would discard information VFM needs to properly restore the progressive frames. Below is an image displaying an interlaced sequence and the restored original progressive sequence.

Interlaced Frame

Interlaced Sequence (A, B, B/C, C/D, D)

Progressive Sequence (A, B, C, D)

Here is an example of using these filters to remove the interlacing from an MPEG-2 source:

Example (A-1):

from vapoursynth import core
video = core.d2v.Source(r'video.d2v')
video = core.vivtc.VFM(video, order=1, cthresh=10)
video = core.vivtc.VDecimate(video)
video.set_output()

NOTE: If your video still has leftover interlacing after running VFM, you can lower the cthresh setting for stronger combing detection. Lowering cthresh too much can cause frames to be falsely flagged as combed, so be careful when changing the default value!

VFM and VDecimate are very versatile and have a variety of settings; it is best to refer to the user guide for more detailed instructions on handling different types of interlaced sources.

B. Combing Artifacts

When interlaced signals are improperly handled, combing artifacts will often occur. These artifacts make the edges of objects appear as jagged horizontal lines. It is important to adjust the IVTC filter settings to ensure that the current set of filters is not causing the problem. If adjusting the filter settings does not fix the issue, the artifacts may have been inherited from previous transfers of the material that were not properly handled. While it is not ideal to have to fix this type of issue, the best method is the havsfunc function daa, which blends and then re-sharpens the image to eliminate the jagged edges from the video. Below is an example of this visual flaw and an example script showing how daa might be used:

Combing Artifacts Example

Example (B-1):

from vapoursynth import core
import havsfunc as haf
video = core.d2v.Source(r'video.d2v')
video = core.vivtc.VFM(video, order=1, cthresh=10)
video = core.vivtc.VDecimate(video)
video = haf.daa(video)
video.set_output()

C. Dot-Crawl & Rainbow Removal

Dot-Crawl and Rainbowing are visual flaws in video that are often seen together and result from how the footage was handled in an analog format before being captured digitally. Both can typically be seen along the edges of objects in the video, and dot crawl can additionally appear along the edge of the picture. Below is an image containing these visual flaws.

Dot-Crawl & Rainbowing Example

There is no way to completely eliminate these flaws, but filters can be used to reduce their appearance. One must be careful when applying such filters, as stronger settings increasingly degrade the picture quality. It is best to handle these flaws before removing the interlacing, and filters like TComb & Bifrost are great solutions. Here is an example of how to use TComb & Bifrost:

Example (C-1):

from vapoursynth import core
video = core.d2v.Source(r'video.d2v')
video = core.tcomb.TComb(video, mode=2)
video = core.bifrost.Bifrost(video, interlaced=True)
video = core.vivtc.VFM(video, order=1, cthresh=10)
video = core.vivtc.VDecimate(video)
video.set_output()

NOTE: Bifrost works best if you are able to determine whether the rainbowing was added before or after the video was telecined! Please refer to the documentation for details on how to determine which it is.

D. Temporal Noise Reduction

Temporal filters reduce grain, mosquito noise, and minor blocking by examining the surrounding frames. There are plenty of different filters to try, but my recommendation would be the MVTools2 function Degrain2. This function requires you to first use the Super and Analyse functions to prepare the source and read information from the surrounding frames: Super prepares an alternate clip for use with Analyse and Degrain2, and Analyse reads frames and returns motion information for Degrain2 to use. Analyse's isb parameter stands for "is backwards" and is used to read previous frames, while the delta setting tells the function which frame before or after the current one to examine. Below is an image containing these visual flaws and an example of how to use Degrain2 in a script:

Temporal Denoiser Example

Example (D-1):

from vapoursynth import core
video = core.lsmas.LWLibavSource(r'video.mkv')
sup = core.mv.Super(video, pel=2, sharp=1)
backward_vec2 = core.mv.Analyse(sup, isb=True, delta=2, overlap=4)
backward_vec1 = core.mv.Analyse(sup, isb=True, delta=1, overlap=4)
forward_vec1 = core.mv.Analyse(sup, isb=False, delta=1, overlap=4)
forward_vec2 = core.mv.Analyse(sup, isb=False, delta=2, overlap=4)
video = core.mv.Degrain2(video, sup, backward_vec1, forward_vec1, backward_vec2, forward_vec2, thsad=400)
video.set_output()

While Degrain2 is very accurate, it is also extremely slow. If speed is an issue when filtering video sources, it is recommended that you look into other filters. At times it may be more beneficial to use a faster, less accurate filter if the subjective quality of the result is similar.
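For instance, MVTools2 also provides Degrain1, which uses only one motion vector clip in each direction and is therefore faster than Degrain2 at some cost in accuracy (the source file name and parameter values here are assumptions carried over from the example above):

```python
from vapoursynth import core

# Hypothetical source file; substitute your own footage.
video = core.lsmas.LWLibavSource(r'video.mkv')
sup = core.mv.Super(video, pel=2, sharp=1)
# Degrain1 needs only one backward and one forward vector clip.
backward_vec1 = core.mv.Analyse(sup, isb=True, delta=1, overlap=4)
forward_vec1 = core.mv.Analyse(sup, isb=False, delta=1, overlap=4)
video = core.mv.Degrain1(video, sup, backward_vec1, forward_vec1, thsad=400)
video.set_output()
```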

E. Spatio-Temporal Denoisers

These types of filters are really the best all-around approach to cleaning up animated video footage that needs both spatial smoothing and temporal noise reduction. While there are many to choose from, I will narrow my recommendation down to FFT3DFilter. One advantage of this filter is its built-in ability to sharpen, which may eliminate the need for a separate sharpening filter. Here are some examples of how to use FFT3DFilter:

Spatio-Temporal Denoiser Example

Example (E-1):

Basic Filtering

from vapoursynth import core
video = core.lsmas.LWLibavSource(r'video.mkv')
video = core.fft3dfilter.FFT3DFilter(video, sigma=1.5)
video.set_output()

Example (E-2):

Adjusting Block Parameters and Sharpening

from vapoursynth import core
video = core.lsmas.LWLibavSource(r'video.mkv')
video = core.fft3dfilter.FFT3DFilter(video, sigma=1.5, bt=5, bw=32, bh=32, ow=16, oh=16, sharpen=0.4)
video.set_output()

Example (E-3):

Two Pass Filtering

from vapoursynth import core
video = core.lsmas.LWLibavSource(r'video.mkv')
strength = 2
video = core.fft3dfilter.FFT3DFilter(video, bw=6, bh=6, ow=3, oh=3, bt=1, sigma=strength)
video = core.fft3dfilter.FFT3DFilter(video, bw=216, bh=216, ow=108, oh=108, bt=1, sigma=strength/8, sigma2=strength/4, sigma3=strength/2, sigma4=strength)
video.set_output()

F. Banding Reduction

One of the limitations of the current standard for color depth is an issue called “banding” in flat gradients. Banding appears as a series of distinct color steps across what should be a smooth gradient. Since it is caused by the limited bit depth, the best way to reduce it is a filter that dithers, creating a random pixel pattern around the edges of the color steps. This tricks the human eye into seeing intermediate colors that the current bit depth cannot actually represent. Below is a before-and-after example of banding reduction.

Banding Example
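One place this dithering matters is when reducing bit depth: VapourSynth's built-in resizers can apply error-diffusion dithering during the format conversion, which helps hide the color steps (the source file name and the 8-bit target are assumptions in this sketch):

```python
import vapoursynth as vs
from vapoursynth import core

# Hypothetical source file; substitute your own footage.
video = core.lsmas.LWLibavSource(r'video.mkv')
# Dither while converting down to 8 bits so gradients stay smooth.
video = core.resize.Point(video, format=vs.YUV420P8, dither_type='error_diffusion')
video.set_output()
```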

One of the most common filters for banding reduction is Flash3kyuu Deband, and the default values are sufficient for most situations. Advanced users should refer to the documentation for other settings and tweaks to improve the overall quality. Here is an example of how to use Flash3kyuu Deband:

Example (F-1):

from vapoursynth import core
video = core.lsmas.LWLibavSource(r'video.mkv')
video = core.f3kdb.Deband(video)
video.set_output()

NOTE: For best results use Flash3kyuu Deband after any smoothing filters but before using any sharpening filters.
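Putting that ordering together, a hypothetical chain might smooth first, deband second, and sharpen last (the source file name, the FFT3DFilter strength, and the use of the Hysteria sharpener covered in the Sharpening section below are all assumptions):

```python
from vapoursynth import core
import hysteria as hys

# Hypothetical source file; substitute your own footage.
video = core.lsmas.LWLibavSource(r'video.mkv')
video = core.fft3dfilter.FFT3DFilter(video, sigma=1.5)  # smoothing first
video = core.f3kdb.Deband(video)                        # debanding second
video = hys.Hysteria(video)                             # sharpening last
video.set_output()
```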

G. Sharpening

Since many filters cause some degree of blurring, it is important to consider sharpening the video. However, regular sharpening filters can introduce artifacts. Filters like Hysteria allow you to sharpen the edges and outlines without affecting the rest of the image. The default values are good as is, and only in rare cases would they need to be changed. Here is an example of how Hysteria is used:

Example (G-1):

Basic Sharpening

from vapoursynth import core
import hysteria as hys
video = core.lsmas.LWLibavSource(r'video.mkv')
video = hys.Hysteria(video)
video.set_output()

If you need to apply other filters to a video, it is always best to apply sharpening last; otherwise, details that the other filters are meant to remove may be sharpened and survive the filter chain. Applying it last also helps ensure the final output is sharp rather than blurred by the other filters. Here is an example of sharpening after other filters:

Example (G-2):

Sharpening after Filter Chain

from vapoursynth import core
import hysteria as hys
video = core.lsmas.LWLibavSource(r'video.mkv')
strength = 2
video = core.fft3dfilter.FFT3DFilter(video, bw=6, bh=6, ow=3, oh=3, plane=0, bt=1, sigma=strength)
video = core.fft3dfilter.FFT3DFilter(video, bw=216, bh=216, ow=108, oh=108, plane=0, bt=1, sigma=strength/8, sigma2=strength/4, sigma3=strength/2, sigma4=strength)
video = hys.Hysteria(video)
video.set_output()

H. Crop & Resize

Sometimes footage will have excess black space that needs to be cropped off. When doing this, it is important to maintain the aspect ratio by also cropping proportionally from the perpendicular sides so that the image will not be distorted if resized to the original resolution. Use an AR calculator to ensure the aspect ratio remains the same after cropping. The values are passed to CropRel() in the following order: left, right, top, bottom. Here is an example of cropping and resizing a 1920x1080 video that needs 4 pixels cropped from the left and right (and therefore 2 from the top and bottom to preserve the aspect ratio):

Example (H-1):

from vapoursynth import core
video = core.lsmas.LWLibavSource(r'video.mkv')
video = core.std.CropRel(video, 4, 4, 2, 2)
video = video.resize.Spline36(1920, 1080)
video.set_output()

NOTE: When cropping footage in a subsampled YUV color space, you will need to crop in multiples of 2.
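The proportional crop that the AR calculator works out can also be sketched by hand: the matching vertical crop is the total horizontal crop scaled by the frame's height-to-width ratio, rounded down to an even number of pixels for YUV sources (the helper function name here is my own):

```python
def matching_vertical_crop(width, height, crop_left, crop_right):
    """Return the total top+bottom crop that preserves the aspect ratio,
    rounded down to an even number of pixels for YUV sources."""
    horizontal = crop_left + crop_right
    vertical = horizontal * height / width  # proportional vertical crop
    return int(vertical) // 2 * 2           # round down to an even number

# 1920x1080 with 4 px cropped from each side: 8 * 1080/1920 = 4.5 -> 4 total,
# i.e. 2 from the top and 2 from the bottom, matching the example above.
print(matching_vertical_crop(1920, 1080, 4, 4))  # 4
```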

Conclusion

While these filters do not handle all the issues you may encounter, they do cover the problems that you will come across most often. Make sure to read the documentation for each filter and learn how it works in order to optimize it to the individual needs of your video source. An extensive list of filters can be found on the VapourSynth website or the Doom9 Forums.