Metropolis Light Transport



For our CS 224 final project, Taylor and I implemented a substantial chunk of Metropolis Light Transport, as described by Eric Veach in his thesis and the subsequent SIGGRAPH '97 paper.

As of the due date, we had implemented the basis of the sampling framework, including several of the refinements mentioned in the paper. Not completed were several of the more exotic mutation strategies, including caustic/lens perturbations and multi-chain mutations. Also implemented (but not yet integrated with the MLT renderer) was a Monte Carlo direct lighting stage (as described in "Monte Carlo Techniques for Direct Lighting Calculations" and Wang's thesis).
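At the heart of the sampling framework is a Metropolis-style accept/reject loop over light paths. The following toy sketch (hypothetical names, a 1D state instead of a real path, not our actual code) shows the shape of that loop: mutate the current state, then accept the mutation with probability proportional to the ratio of image contributions.

```python
import random

def metropolis_sample(f, mutate, x0, n_samples):
    """Toy Metropolis sampler: draws samples distributed proportionally
    to the (unnormalized) function f, using a symmetric mutation
    strategy `mutate`.  In MLT the state x is a light path and f is
    the path's contribution to the image."""
    x = x0
    fx = f(x)
    samples = []
    for _ in range(n_samples):
        y = mutate(x)            # propose a mutated state
        fy = f(y)
        # acceptance probability a = min(1, f(y)/f(x))
        a = 1.0 if fx == 0 else min(1.0, fy / fx)
        if random.random() < a:  # accept the mutation...
            x, fx = y, fy
        samples.append(x)        # ...but record a sample either way
    return samples

# Example: sample from a triangle-shaped density peaked at 0.5
random.seed(1)
f = lambda x: max(0.0, 1.0 - abs(x - 0.5)) if 0.0 <= x <= 1.0 else 0.0
mutate = lambda x: x + random.uniform(-0.1, 0.1)
xs = metropolis_sample(f, mutate, 0.5, 20000)
print(sum(xs) / len(xs))  # mean should land near 0.5
```

The real algorithm replaces `mutate` with Veach's path mutation strategies (bidirectional mutations, perturbations, and so on), but the accept/reject structure is the same.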

The project also required the implementation of a number of other things, including a generic raycasting framework with support for a whole slew of implicit objects and (rather slow) mesh intersection. You'll notice that many of the scenes have very few objects, or objects with rather coarse meshes; we did not have enough time to implement octrees or some other form of spatial subdivision, so in order to keep rendering times down we had to use smaller numbers of objects and coarse meshes. A generic BSDF framework was also completed, which enables any physically correct BSDF to be sampled, along with a set of utilities for spectral sampling/resampling and color format conversion, to handle color correctly. We didn't implement any of the modifications that are necessary to render physically incorrect BSDFs, so you won't see any shading normals, bump mapping, or the like.
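To give a feel for what the BSDF framework has to provide, here is a minimal sketch of sampling one physically correct BSDF: a Lambertian surface, cosine-weighted, in a local frame where the surface normal points along +z. The function name and return shape are illustrative, not our actual API.

```python
import math, random

def sample_lambertian(albedo):
    """Cosine-weighted hemisphere sampling for a Lambertian BRDF
    (f = albedo/pi).  Returns (direction, pdf, brdf_value) in a
    local frame with the surface normal along +z."""
    u1, u2 = random.random(), random.random()
    r = math.sqrt(u1)                   # disk radius
    phi = 2.0 * math.pi * u2            # azimuth
    x = r * math.cos(phi)
    y = r * math.sin(phi)
    z = math.sqrt(max(0.0, 1.0 - u1))   # cos(theta)
    pdf = z / math.pi                   # cos(theta)/pi for this strategy
    brdf = albedo / math.pi             # constant Lambertian reflectance
    return (x, y, z), pdf, brdf

random.seed(0)
d, pdf, brdf = sample_lambertian(0.8)
print(d, pdf, brdf)  # d is unit length, on the upper hemisphere
```

The nice property of this choice of pdf is that the Monte Carlo weight `brdf * cos(theta) / pdf` collapses to just `albedo`, which keeps variance low.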

We're planning (after a little rest...) to try to complete the remainder of the thesis, including the rest of the mutation strategies and refinements, properly handled refraction, shading normals, and more neat stuff. We'll also address optimizations to the raycasting framework, including spatial subdivision for both meshes and the scene itself, which should allow much more interesting scenes and models.

Something that should be noted is that no postprocessing is performed on our images. We simply scale the values generated by the algorithm to lie between 0 and 1, and thus we get images with terribly bright spots and very dark backgrounds. We have sometimes tried to improve this by adjusting the gamma/curves of the image, but this is also unsatisfactory, since so much resolution has already been lost in the conversion to 8-bit RGB. To ease these problems, and to try to make the images appear as they would to the human eye, we are currently taking a stab at implementing "A Visibility Matching Tone Reproduction Operator for High Dynamic Range Scenes". We should also put the images on this site up as PNG or something similar, since JPEG really murders them sometimes.
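The naive scaling we're complaining about looks roughly like this (a sketch, not our renderer code): divide everything by the brightest value, gamma-encode, and quantize to 8 bits. A single very bright spot forces the rest of the image toward black, which is exactly the failure mode visible in our renders.

```python
def tonemap_naive(values, gamma=2.2):
    """Scale raw radiance values to 0..1 by the maximum, apply a
    gamma curve, and quantize to 8-bit.  One outlier-bright pixel
    crushes everything else toward black."""
    peak = max(values)
    if peak == 0:
        return [0] * len(values)
    out = []
    for v in values:
        scaled = v / peak                  # linear, 0..1
        encoded = scaled ** (1.0 / gamma)  # gamma encoding
        out.append(round(encoded * 255))   # quantize to 8 bits
    return out

# A single bright spot (1000) leaves the rest nearly black:
print(tonemap_naive([0.5, 1.0, 2.0, 1000.0]))
```

A perceptual operator like the visibility-matching one replaces the single global divide with a curve derived from a model of human brightness adaptation, so dim regions stay legible next to bright ones.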


(first specular success)


(holy cow)


... and the pictures. NOTE: most have been scaled to 200x200 to save space; if you want a larger one, just mail us. We also plan on putting up some pictures that show the great robustness of MLT once the high dynamic range filters are done (currently those images are too difficult to see, as they are mostly dark with a few bright spots).