Using Filters & Functions
Functions are the instructions a user gives to AviSynth, allowing video and audio data to be imported and manipulated. The most common way to manipulate footage in AviSynth is with filters that alter the video information. A filter may consist of one function or a group of functions that affect the output of the video. It is important to read the documentation for each filter: many filters do not support every color space, so you may need to convert to a specific color space before using a particular function.
Most filters require only one video input and, if no clip variable is specified, will operate on the implicit last video source by default. Each filter's documentation lists the parameters a user can adjust. Though AviSynth has many built-in filters, many users augment these by downloading external filters and adding the DLL or AVSI files to the plugins directory. This guide covers some of the most commonly used filters, but there are countless others you can experiment with.
A. Inverse Telecine & TIVTC
TIVTC is a plugin that contains several filters that are used for handling telecined (interlaced) footage. The TFM filter is used to restore progressive frames from the telecined source, and then TDecimate is used to delete duplicate frames from the TFM result. The filter order is important; running TDecimate before TFM will eliminate information TFM needs to properly restore the progressive frames. Below is an image displaying an interlaced sequence and the restored original progressive sequence.
Interlaced Sequence (A, B, B/C, C/D, D)
Progressive Sequence (A, B, C, D)
Here is an example of using these filters to remove the interlacing from a D2VSource:
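A minimal sketch of this chain is shown below; the .d2v filename is a placeholder, and it assumes the d2vsource and TIVTC plugins are loaded:

```avisynth
D2VSource("movie.d2v")  # placeholder filename; indexed MPEG-2 source
TFM()                   # field matching: rebuild progressive frames
TDecimate()             # then drop the duplicate frames TFM leaves behind
```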
NOTE: If your video still has leftover interlacing after running TFM, you can lower the cthresh setting for stronger combing detection. Lowering cthresh too much can cause artifacts of its own, so be careful when changing it from the default value!
TFM and TDecimate are very versatile and have a variety of settings; it is best to refer to the user guide for more detailed instructions on handling different types of interlaced sources.
B. Combing Artifacts
When interlaced signals are improperly handled, combing artifacts will often occur. These artifacts appear as jagged horizontal lines along the edges of objects. It is important to adjust the IVTC filter settings first to ensure that the current filter chain is not causing the problem. If adjusting the settings does not fix the issue, the artifacts may have been inherited from earlier transfers of the material that were not properly handled. While it is not ideal to have to fix this type of issue, the best method is the function DAA, which blends and then re-sharpens the image to eliminate the jagged edges from the video. Below is an example of this visual flaw and an example script showing how DAA might be used:
Combing Artifacts Example
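A minimal sketch of its use, assuming the daa() script function and its dependencies are installed (the input filename is a placeholder):

```avisynth
AviSource("combed.avi")  # placeholder source with residual combing
daa()                    # blend, then re-sharpen to smooth the jagged edges
```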
C. Dot-Crawl & Rainbow Removal
Dot-crawl and rainbowing are visual flaws in video that are often seen together and are a result of how the footage was handled in an analog format before being captured digitally. Both can typically be seen along the edges of objects in the video, and dot crawl can additionally appear along the edge of the picture. Below is an image containing these visual flaws.
Dot-Crawl & Rainbowing Example
There is no way to completely eliminate these flaws, but filters can be used to reduce their appearance. One must be careful when applying such filters, as stronger settings increasingly degrade the picture quality. It is best to handle these flaws before removing the interlacing, and filters like TComb & Bifrost are great solutions. Here is an example of how to use TComb & Bifrost:
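A sketch of one possible ordering, with a placeholder filename; mode=2 asks TComb to treat both flaws at once, and the Bifrost parameter shown is an assumption based on its documentation:

```avisynth
D2VSource("capture.d2v")  # placeholder interlaced source
TComb(mode=2)             # mode=2: filter both dot crawl and rainbows
Bifrost(interlaced=true)  # extra rainbow removal on the still-telecined clip
TFM()                     # remove the interlacing afterwards
TDecimate()
```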
NOTE: Bifrost works best if you are able to determine whether the rainbowing was added before or after the video was telecined! Please refer to the documentation for details on how to determine which it is.
D. Temporal Noise Reduction
Temporal filters reduce grain, mosquito noise, and minor blocking by examining the surrounding frames. There are plenty of different filters to try, but my recommendation is the MVTools2 function MDegrain2. This function requires the MSuper and MAnalyse functions to prepare the source and read information from the surrounding frames: MSuper prepares an alternate clip for use with MAnalyse and MDegrain2, and MAnalyse reads frames and returns motion information for MDegrain2 to use. MAnalyse's isb parameter stands for "is backwards" and is used to read previous frames, while the delta setting tells the function which frame before or after the current one to examine. Below is an image containing these visual flaws and an example of how to use MDegrain2 in a script:
Temporal Denoiser Example
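A typical MDegrain2 chain looks like the sketch below (the filename is a placeholder); each MAnalyse call produces one motion-vector clip, two backward and two forward:

```avisynth
AviSource("noisy.avi")                      # placeholder source
super = MSuper()                            # prepared clip for analysis
bv1 = MAnalyse(super, isb=true,  delta=1)   # vectors from 1 frame back
fv1 = MAnalyse(super, isb=false, delta=1)   # vectors from 1 frame ahead
bv2 = MAnalyse(super, isb=true,  delta=2)   # 2 frames back
fv2 = MAnalyse(super, isb=false, delta=2)   # 2 frames ahead
MDegrain2(super, bv1, fv1, bv2, fv2)        # degrain using all four
```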
While MDegrain2 is very accurate, it is also extremely slow. If speed is an issue when filtering video sources, it is recommended that you look into other filters. At times it may be more beneficial to use a faster, less accurate filter if the subjective quality of the result is similar.
E. Spatial Smoothing
Spatial smoothers are filters designed to smooth out areas of the picture and are useful for filtering out noise in anime or cartoons; however, using them on live action or CG sources can make the picture look "flat" and unrealistic. The development of purely spatial smoothers has declined in favor of hybrid spatio-temporal denoisers. I recommend using spatio-temporal filters instead of separate temporal and spatial smoothers, as this reduces processing time and the likelihood of memory errors. If you do need separate spatial smoothing in conjunction with a filter like MDegrain2, I recommend the filter Deathray; it has given me the best results when smoothing anime and cartoons. Here is an example of how to use Deathray in a script:
IMPORTANT: Deathray uses GPU acceleration and is only compatible with ATI and some Nvidia graphics cards. It is recommended that you use spatio-temporal filters instead if Deathray is not compatible with your hardware setup.
Spatial Smoothing Example
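A minimal sketch (placeholder filename; the hY/hUV strength values are illustrative assumptions, not recommendations):

```avisynth
AviSource("anime.avi")     # placeholder source
Deathray(hY=2.0, hUV=2.0)  # spatial smoothing; hY/hUV set luma/chroma strength
```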
Temporal Denoiser with Spatial Smoothing Example
Example with MDegrain2
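Combining the two might look like the following sketch, with the temporal pass first and a lighter spatial pass after (filenames and strength values are placeholders):

```avisynth
AviSource("anime.avi")                # placeholder source
super = MSuper()
bv1 = MAnalyse(super, isb=true,  delta=1)
fv1 = MAnalyse(super, isb=false, delta=1)
bv2 = MAnalyse(super, isb=true,  delta=2)
fv2 = MAnalyse(super, isb=false, delta=2)
MDegrain2(super, bv1, fv1, bv2, fv2)  # temporal noise reduction first
Deathray(hY=1.5, hUV=1.5)             # then a light spatial smoothing pass
```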
NOTE: Deathray can also work as a temporal filter by adjusting the tY and tUV settings; refer to the documentation for more details.
F. Spatio-Temporal Denoisers
These types of filters are the best all-around approach to cleaning up animated video footage that needs both spatial smoothing and temporal noise reduction. While there are many to choose from, I will narrow my recommendation down to FFT3DFilter. FFT3DFilter is simple to use, and FFT3DGPU, the hardware-accelerated version, runs much faster. One advantage of this filter is its built-in ability to sharpen, so an external sharpening filter may not be necessary. Here are some examples of how to use FFT3DFilter:
Spatio-Temporal Denoiser Example
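In its simplest form the filter needs only a strength setting (placeholder filename; the sigma value is illustrative):

```avisynth
AviSource("source.avi")          # placeholder source
FFT3DFilter(sigma=2.0, plane=4)  # sigma = strength; plane=4 = luma and chroma
```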
Adjusting Block Parameters and Sharpening
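A sketch with larger overlapped blocks and mild built-in sharpening (all values are illustrative assumptions):

```avisynth
AviSource("source.avi")  # placeholder source
# Bigger blocks with 50% overlap reduce blocking artifacts at some speed cost;
# sharpen restores a little of the detail lost to the denoising
FFT3DFilter(sigma=2.0, bw=32, bh=32, ow=16, oh=16, sharpen=0.4, plane=4)
```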
Two Pass Filtering
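Two-pass filtering splits denoising and sharpening into separate calls; per the FFT3DFilter documentation, bt=-1 performs sharpening only (values are illustrative):

```avisynth
AviSource("source.avi")                   # placeholder source
FFT3DFilter(sigma=2.0, bt=3, plane=4)     # pass 1: denoise only
FFT3DFilter(bt=-1, sharpen=0.4, plane=4)  # pass 2: sharpen without denoising
```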
G. Banding Reduction
One of the limitations of the current standard for color depth is an issue called "banding" in flat gradients. Banding appears as a series of distinct color steps across what should be a smooth gradient. Since it is caused by the limited bit depth, the best way to reduce it is a filter that dithers, creating a random pixel pattern around the edges of the color steps. This tricks the human eye into seeing intermediate colors that the current bit depth cannot actually represent. Below is a before and after example of banding reduction.
One of the most common filters to use for banding reduction is Flash3kyuu Deband and using the default values is sufficient for most situations. Advanced users should refer to the documentation for other settings and tweaks to improve the overall quality. Here is an example of how to use Flash3kyuu Deband:
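A minimal sketch using the defaults (placeholder filename; the function name in the Flash3kyuu Deband plugin is f3kdb()):

```avisynth
AviSource("banded.avi")  # placeholder source with visible banding
f3kdb()                  # default settings are sufficient for most sources
```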
NOTE: For best results use Flash3kyuu Deband after any smoothing filters but before using any sharpening filters.
H. Sharpening
It is important to consider sharpening the video, since many filters cause some degree of blurring. However, regular sharpening filters can introduce artifacts. Filters like Hysteria allow you to sharpen the edges and outlines without affecting the rest of the image. The default values are good as is, and only in rare cases will they need to be changed. Here is an example of how Hysteria is used:
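A minimal sketch, assuming the Hysteria .avsi and its dependencies are installed (placeholder filename):

```avisynth
AviSource("filtered.avi")  # placeholder source
Hysteria()                 # sharpen lines and outlines only; defaults usually suffice
```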
If you need to apply other filters to a video, it is always best to apply sharpening last; otherwise, details that should be filtered out may be sharpened first and survive the filter chain. Sharpening last also helps ensure the final output is sharp and not blurred by the other filters applied to the source. Here is an example of sharpening after other filters:
Sharpening after Filter Chain
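One possible chain, sketched with placeholder values, combining the filters from the earlier sections with sharpening applied last:

```avisynth
D2VSource("show.d2v")            # placeholder source
TFM()                            # inverse telecine
TDecimate()
FFT3DFilter(sigma=2.0, plane=4)  # spatio-temporal denoising
f3kdb()                          # debanding after smoothing
Hysteria()                       # sharpening goes last
```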
I. Crop & Resize
Sometimes footage will have excess black space that needs to be cropped off. When doing this, it is important to maintain the aspect ratio by also cropping from the perpendicular sides so that the image will not be distorted if resized back to the original resolution. Use an AR calculator to ensure the aspect ratio remains the same after cropping. When using the Crop() function in AviSynth, the bottom and right sides take negative values, and the values are entered into Crop() in the following order: left, top, right, bottom. Here is an example of cropping and resizing a 1920x1080 video that needs 4 pixels cropped from each side:
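A sketch of this case (placeholder filename; Spline36Resize is one reasonable resizer choice):

```avisynth
AviSource("hd.avi")         # placeholder 1920x1080 source
Crop(4, 4, -4, -4)          # left, top, right, bottom; right/bottom are negative
Spline36Resize(1920, 1080)  # resize the 1912x1072 result back to the full frame
```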
NOTE: When cropping footage that has YUV color space you will need to crop in multiples of 2.
While these filters do not handle every issue you may encounter, they do cover the problems you will come across most often. Make sure to read the documentation for each filter and learn how it works in order to tailor it to the individual needs of your video source. An extensive list of filters can be found on the AviSynth website or the Doom9 Forums.