RealityServer 5.1 introduced a new way to generate images of configurations of your scenes without the need to re-render them from scratch. We call this Compositing, even though it's actually very different to traditional compositing techniques. In this article we will dive into the detail of how to use the new system to render without rendering and speed up your configurator.

RealityServer is a rendering web service, right? So why do we want to avoid rendering? In short, scalability. For example, when you are building a sizeable configurator application and expect a large number of users, using server-side rendering basically means you are buying GPU hardware for all of the visitors. This might work well in some use cases (for example B2B), however for large-scale consumer configurators devoting the full resources of a GPU server to a single user is often not practical.

Using compositing we can render once, store a lot of extra data, then use that data to reconstruct new images that would normally require re-rendering, for example changing the colour of objects in the scene. This can be done much faster than rendering a high quality image and with fewer resources. It can even be run on CPU-based resources if needed (although it will be accelerated by GPU hardware if available).

Demo

Also notice that even in the out-of-focus region (caused by the depth of field we have enabled) at the rear of the second shoe, all of the compositing is still working perfectly. This would be impossible to achieve with a mask or alpha based approach. There are also more subtle effects caused by the indirect light being tinted by our chosen colours, something that is likewise impossible with traditional compositing. Finally, we are taking advantage of the ability to output UV information to remap our tint with a texture that correctly conforms to the shape of the object.

Now that you've seen an example of what it can do, let's get into the detail of how it works, first with a little background and then by describing how to actually use the compositing system.

Background

Before looking at how it's used, it is worthwhile to cover a little background on the technology behind the compositing system. You can skip this section if you're eager to get started with using the system, however even though the system hides the complexity it's good to know it's there if you need to access it.

The compositing feature leverages functionality that has existed in RealityServer for some time, namely Light Path Expressions, or LPEs for short. An LPE allows the rendering engine to separate the individual contributions of specific objects and light transport interactions into their own images. To use them you specify expressions like this. What does this give you? It tells the renderer to give us all of the indirect lighting contributed by objects in the scene with the handle String attribute set to 'threads'. You can learn more about the LPE syntax here, however the goal of the compositing system is that you don't have to understand this in order to use it.

This is a real LPE from a footwear configurator and represents what we call the residual, all of the static parts that don't need changing. Imagine having to construct this LPE by hand.
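To give a feel for the notation, here is a rough, hypothetical sketch only; it is not one of the expressions from the article, and the symbols and handle notation shown are assumptions about the general LPE style, so the exact grammar accepted by RealityServer should be checked against the LPE documentation. LPEs are regular-expression-like strings built from symbols for the light and eye/camera endpoints, scattering events such as diffuse, glossy and specular, a wildcard for any event, repetition operators, and quoted handles that restrict an event to objects carrying that handle attribute. An expression matching paths that pass through an object tagged 'threads' after at least one earlier bounce might look roughly like this:

    L.+<.'threads'>.*E

The compositing system builds expressions of this kind (including the much longer residual expression) for you, which is exactly why you normally never need to write one by hand.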