Pipeline thoughts

A recent thread on 3D-Pro made me want to put down my personal views on the “mighty pipeline”, a term that seems to me poorly defined; probably many of us understand very different things when we talk about it.


First of all, we should define the term “pipeline”. If we take the Wikipedia definition of a software pipeline, we can reinterpret it for the computer graphics industry easily enough.

We agree then that a computer graphics pipeline is just a series of transformations applied to certain material in a single direction. That seems reasonable, but we can spot two clear issues:

  • Only one direction, which means data starts at A and finishes at Z flowing one way, so there is no real opportunity for any kind of validation.
  • These transformations happen, in many cases, inside a black box; for the sake of understanding we should probably describe this as a tunnel rather than a pipeline, because we basically don’t know what is going on.

These two issues are clear weak spots: first, the artists involved perform like robots, learning a series of steps they don’t understand; second, there is no validation in the process to flag a problem, so at the end of the pipeline we get surprises like a render that now crashes for “no reason”.

In practical terms
Well, there are a number of reasons for having a methodology: coordinating people, standardising processes, quality control, planning. It is not strictly vital, but the bigger the company and/or the project, the more necessary it becomes.

This is clearly very similar to what goes on in software design: you set up a library of objects or functions that all other modules use, so if you need to change the maths behind a particular manipulation of numbers, the change is automatically picked up by every module using that function, for free and without breaking everything. Or at least that is the theory.
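A tiny sketch of that idea in Python (the module name studio_math is hypothetical): every tool imports the same helper, so changing the maths in one place updates them all.

    # studio_math.py -- the one shared home of this conversion (hypothetical module)
    def linear_to_srgb(value):
        """Agreed-upon linear-to-sRGB transfer; change the maths here and every tool follows."""
        if value <= 0.0031308:
            return 12.92 * value
        return 1.055 * value ** (1 / 2.4) - 0.055

    # every other module simply does:
    # from studio_math import linear_to_srgb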

Seems like a dream come true, doesn’t it? Well, it is not, as every module imposes its own set of constraints in one way or another, which can turn the whole idea of a pipeline into your worst enemy:

  • Software languages
  • Syntax changes in one module
  • Particular operating system requirements
  • Particular software requirements
  • Network capabilities and especially latency
  • File format incompatibilities
  • Others…

So potentially we are setting up a processing chain where many transformations take place hardwired to proprietary technologies (FBX, for example) mixed with open ones (like image file formats). This of course opens the door to catastrophes, and wise film companies, once a particular show is running, won’t move away from that chain of processes and methods (the pipeline) easily, or at all, unless a major reason appears down the line, like a vital piece of technology or a major bug.

Is there any good, then, in having a pipeline? Well, my view is that having a pipeline is not good at all unless you wrap every single process under a layer of abstraction, so you get basic reporting and common interfaces between tasks, and you put the pipeline under version control so it can evolve over long projects.

What do I mean by common interfaces? Well, instead of calling a particular tool directly to do, for example, an image file conversion, you wrap it in a script that makes the call for you and does all the basic checking for disk space, errors, etc. If you later need to change the internals of that conversion, the interface seen by the rest of the tools stays the same, so you are protecting yourself.
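To make that concrete, here is a minimal sketch of such a wrapper in Python; the command name (imgconvert), the paths and the disk-space threshold are all hypothetical, the point is simply that the rest of the pipeline only ever calls convert_image().

    import os
    import shutil
    import subprocess

    def convert_image(src, dst, min_free_bytes=1 * 1024 ** 3):
        """Wrap an external image converter behind one stable interface."""
        if not os.path.isfile(src):
            raise IOError(f"source image does not exist: {src}")
        # Basic disk-space check before we even start.
        if shutil.disk_usage(os.path.dirname(dst) or ".").free < min_free_bytes:
            raise IOError(f"not enough free disk space to write {dst}")
        # 'imgconvert' is a stand-in for whatever tool does the real work;
        # swapping it for something else later does not change this signature.
        if subprocess.call(["imgconvert", src, dst]) != 0 or not os.path.isfile(dst):
            raise RuntimeError(f"conversion failed for {src}")
        return dst

If tomorrow the conversion moves to a different tool, only the inside of convert_image() changes; nothing downstream notices.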

What do I mean by using version control? Well, all tools could live under a version control system, so you can keep multiple variations of a pipeline and therefore evolve during a project.
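As a sketch of one way to do that (the paths and the environment variable name are hypothetical): each show points at a pinned pipeline release, so a long-running show keeps the tool set it started with while a new show picks up the latest.

    import os
    import sys

    PIPELINE_ROOT = "/studio/pipeline"                        # hypothetical location of released pipelines
    version = os.environ.get("SHOW_PIPELINE_VERSION", "2.1")  # set per show at launch time
    sys.path.insert(0, os.path.join(PIPELINE_ROOT, version, "python"))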

The cathedral vs. the bazaar
What about vertical development? Sure, you want to download all those nice tools floating around the internet; sure, those MEL scripts would be great to have. Yet you have to be tough on this one, for a few reasons:

  • That software has not been built with the end-user experience in mind
  • Inconsistencies and outdated API usage will play against you
  • No central library of functions is used, so changes will be very hard
  • Only partial quality testing on any of them
  • Hardwired to a particular operating system or to constants

Therefore my suggestion is to convert the vital ones by hand to your carefully planned, cathedral-like approach, and to keep the non-vital scripts that are only used from time to time under separate control so they never become part of the pipeline.

Everything that goes into the pipeline should be designed to run through the common library of tools and should work as if the user in front of it has no documentation; error catching, modal modes, etc. should be thought through very carefully too.
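As a sketch of what “assume the user has no documentation” means in practice (the tool and its arguments are hypothetical): validate everything up front and fail with messages the artist can act on.

    def publish_cache(asset, frame_range=(1, 100)):
        """Validate inputs early and fail loudly instead of half-working."""
        start, end = frame_range
        if not asset:
            raise ValueError("publish_cache: 'asset' is empty; pass the name shown in the asset browser")
        if end < start:
            raise ValueError(f"publish_cache: frame range {frame_range} is reversed, expected (start, end)")
        # The real publish would go through the common tool library here;
        # in this sketch we only report what would happen.
        print(f"publishing {asset} for frames {start}-{end}")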

Conclusions
What is the safest and easiest approach? My take is that the key decision, which pieces of technology end up in your hands, is yours, and instead of thinking only about feature sets you now also have to think in terms of interfacing with the outside world, version-to-version coherence (for example, changing the API of Maya mid-project is not very wise), common scripting language support (here Python has become the clear winner as your backbone library), common image file formats and manipulation tools, a common colour management toolset, etc.

By choosing well you will minimise the amount of glue and transformations, and therefore your pipeline will be minimal to the point you want it to be, rather than merely the point you need.

Solutions
In real life you will want to pick and choose the best tool for each job; for example, you may want the animation from XSI, the fluids from Maya, the particles from XSI and Maya, and perhaps you decide to choose the renderer depending on the shot.

One approach, if I were able to start from scratch, would probably be to use Houdini as the central backbone where all 3D is assembled using our own proprietary format, and then channel all the effects, lighting and rendering inside Houdini.

Now, why not render directly from XSI, for example? Well, why not? This is the interesting part of not having a rigid pipeline: fast adoption of new techniques becomes possible, and since, as shown in this case, the only bridge of data (the glue) is our proprietary format, we have all the control we need.

Maya, XSI and Houdini all run Python, so the whole toolset and its GUIs can be handled centrally, plus you can write extra tools that use another package’s resources, like for example Houdini’s excellent mplay.
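A minimal sketch of what that central handling can look like: one shared helper that detects the host application so every tool can branch on it. The modules checked here are Maya’s and Houdini’s real Python modules; XSI detection is left out of this sketch.

    def current_host():
        """Return which application we are running inside, so shared tools can branch."""
        try:
            import hou          # Houdini's Python module
            return "houdini"
        except ImportError:
            pass
        try:
            import maya.cmds    # Maya's Python module
            return "maya"
        except ImportError:
            pass
        return "standalone"     # command line, render farm, etc.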

Real conclusion
In reality, your pipeline is your rendering engine, which means you should not aim to build anything other than what feeds the render engine from different places, using a common library of functions and objects. At the same time, all the connections and “tools” should probably aim to connect directly to the render engine; hence the important point that your render engine should have an API so you can parse the scene and modify things at any point.
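As a hedged sketch of the kind of thing such access enables, assuming a plain-text scene description and a simple “name value” parameter syntax (both only illustrative, not any particular renderer’s format): rewrite one parameter just before the scene is handed to the renderer.

    import re

    def override_parameter(scene_path, out_path, name, value):
        """Rewrite one parameter in a text scene file just before rendering.
        Purely illustrative; a real renderer API would expose this properly."""
        pattern = re.compile(rf"^(\s*{re.escape(name)}\s+).*$", re.MULTILINE)
        with open(scene_path) as f:
            text = f.read()
        with open(out_path, "w") as f:
            f.write(pattern.sub(rf"\g<1>{value}", text))

    # Hypothetical usage: raise sampling quality on a scene exported from any package.
    # override_parameter("shot010.mi", "shot010_hi.mi", "samples", "2 4")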

This is handled in a particularly modern way in Mental Ray, and therefore I am inclined to say it is the most advanced renderer out there; but this is just my opinion, and at the end of the day what really counts are the pixels you produce with whatever you use.



2 Responses to “Pipeline thoughts”
  1. sam cuttriss
    12.13.2008

    hey jordi, loving the blog.
    you should talk to andy buecker about this whole arena, hes been doing some lively stuff that may be of interest.

    _sam

  2. 12.15.2008

    Nice blog, I will keep checking it.

