GrIP: Graph-Based Image Processing

Abstract
GrIP is a graph-based post-processing framework aimed at flexibly improving visual quality in virtual environments while keeping the performance impact low.

Introduction
Post-processing of image data is one way to improve image quality. Available methods range from filters that change brightness and contrast to scaling algorithms based on different concepts. Adding data such as depth or normal values to the original image information opens up a large number of new possibilities for post-processing. Based on a combination of this information it is, for example, possible to approximate illumination effects in screen space that are usually calculated with physically based methods such as path tracing. Depth information can also be used to simulate depth-of-field or fog effects. Many other effects, artistic as well as realistic, are available, supporting the observer's perception or enhancing the visual attractiveness of images.
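
To make the combination of color and depth data more concrete, the following is a minimal sketch of how such a screen-space effect could look as a CUDA kernel. It is a hypothetical example of an exponential fog filter, not code from GrIP; all names (fogFilter, density, fogColor) are illustrative assumptions.

    // Hypothetical sketch (not GrIP's actual filter code): blend each pixel
    // toward a fog color based on its linear depth value.
    __global__ void fogFilter(const float4* color, const float* depth,
                              float4* out, int width, int height,
                              float density, float4 fogColor)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        int idx = y * width + x;
        float d = depth[idx];                 // linear view-space depth
        float f = expf(-density * d);         // exponential fog falloff
        float4 c = color[idx];
        out[idx] = make_float4(f * c.x + (1.0f - f) * fogColor.x,
                               f * c.y + (1.0f - f) * fogColor.y,
                               f * c.z + (1.0f - f) * fogColor.z,
                               c.w);
    }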

Implementation
The project's goal was the implementation of a graph-based framework for post-processing filters, now called GrIP (Graph-based Image Processing). Compatible filters can be arranged and connected in a directed acyclic graph. Whole filter graphs can thus be constructed through an external interface, avoiding a recompilation cycle after changes to the post-processing pipeline. Filter graphs are implemented as XML files containing a collection of filter nodes with their parameters as well as linkage/dependency information.
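
The actual XML schema is shown in fig. 3 and is not reproduced here; the snippet below is only a hypothetical illustration of how such a graph description might look. All element names, attribute names, and parameter values are invented for this sketch.

    <!-- Hypothetical filter graph description; element and attribute names
         are illustrative and do not reproduce GrIP's actual XML schema. -->
    <filtergraph>
      <node id="0" type="DepthDarkening">
        <param name="sigma" value="4.0"/>
        <param name="strength" value="0.6"/>
      </node>
      <node id="1" type="DepthOfField">
        <param name="focalDepth" value="0.35"/>
        <param name="blurRadius" value="8"/>
      </node>
      <!-- Linkage: output of node 0 feeds the first input of node 1 -->
      <edge from="0" to="1" input="0"/>
    </filtergraph>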

Another goal was the system's extensibility, so that new filter nodes can be developed and used in GrIP easily. This is achieved by a plugin system that loads filters dynamically at runtime, while each filter is responsible for obtaining its own parameterization from the framework. The latter is done by passing an instance of a wrapper class to each called node, which wraps the concrete graph information in a uniform interface; representations other than XML can therefore be supported by simply programming against the wrapper's interface. All nodes were developed to be applicable in interactive applications and therefore executable at real-time frame rates. For this reason, NVIDIA CUDA was used for the proof-of-concept implementation of the filter nodes.
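
As a rough illustration of the plugin and wrapper idea, the following C++ sketch shows how a dynamically loaded filter node might read its parameters through a uniform wrapper interface. The class and method names (ParameterWrapper, FilterNode, createFilterNode) are assumptions for this sketch and do not reproduce GrIP's actual API.

    // Hypothetical plugin interface sketch; names are illustrative only.
    #include <string>

    // Uniform wrapper around the concrete graph representation (e.g. an XML
    // node), from which a filter reads its own parameters.
    class ParameterWrapper {
    public:
        virtual ~ParameterWrapper() = default;
        virtual float getFloat(const std::string& name, float fallback) const = 0;
        virtual int   getInt(const std::string& name, int fallback) const = 0;
    };

    // Base class every dynamically loaded filter node implements.
    class FilterNode {
    public:
        virtual ~FilterNode() = default;
        // Called per frame; the node reads its parameters from 'params'
        // and processes the image buffers it is connected to.
        virtual void execute(const ParameterWrapper& params) = 0;
    };

    // Example filter: reads its parameters and would launch a CUDA kernel here.
    class FogFilter : public FilterNode {
    public:
        void execute(const ParameterWrapper& params) override {
            float density = params.getFloat("density", 0.05f);
            // ... launch the fog kernel with 'density' ...
            (void)density;
        }
    };

    // Entry point resolved by the plugin loader at runtime (e.g. via dlopen).
    extern "C" FilterNode* createFilterNode() { return new FogFilter(); }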

A visualization component for filter graphs was implemented, as well as a second GUI component consisting of automatically generated sliders for parameter manipulation. This is useful for testing newly implemented filters as well as for finding good parameter combinations when a specific output is desired.

Results
Results are shown in figures 1 to 5. Fig. 1 shows a combination of depth darkening, depth of field and a slight fog effect applied to a hardware-rendered image. Fig. 2 shows the corresponding parameterization of the filter nodes. Fig. 3 shows a snippet of the XML-based graph representation, while fig. 4 shows the corresponding visualization of the nodes, which also incorporates a loop.

Figure 5 shows a path-traced image of the same scene as in figure 1. Tone mapping, gamma correction and a slight bloom effect are all applied with GrIP-based filters here, which makes them freely parameterizable by the user. Note that GrIP is completely independent of the application it is used with: the application in figure 1 is an OpenSceneGraph-based model viewer, while the one in figure 5 is an NVIDIA OptiX-based path tracer.