Textured graphics can be captured in a flash

  • 12:30 27 August 2008
  • NewScientist.com news service
  • Colin Barras
Comparing a flashlit photo with one taken in ambient light can reveal the texture of a surface, and could help develop more realistic computer games
 

The virtual worlds in computer games provide a realistic backdrop to the action. But step too close and the effect is lost – you'll see that textures and patterns are usually displayed on flat surfaces that look dull and artificial.

A simpler way to add depth to textured surfaces could change that.

The new technique can reconstruct the depth of a surface simply by taking two photos of it – one with a flash and one without. Analysing the resulting shading patterns is enough to capture the surface's 3D texture.

Until now, making realistic textures has required bulky and expensive laser scanners, says Mashhuda Glencross at the University of Manchester, UK. And the process is time-consuming, she adds.

3D in a flash

Glencross and the Manchester team worked with Gregory Ward at Dolby Canada in Vancouver to develop their quick and cheap alternative.

At the heart of the technique is the assumption that the brightness of a pixel in the image is related to its depth in the real scene. Parts of the surface deep in a crack or pit receive light from a restricted area of the sky, and appear relatively dark.

By contrast, protruding parts of the surface receive more light and appear brighter in a photo.
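
A rough worked example, assuming brightness simply scales with the amount of visible sky: a point at the bottom of a pit that can see only 20 per cent of the sky hemisphere would photograph about five times darker than an identically coloured point on the open surface.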

Correct colours

But the colour of the surface also affects its brightness in a photo. With the same illumination, light-coloured spots appear brighter than dark ones.

Taking a photo using the flash removes that effect. The surface is flooded with light and the camera can record the true colour of every part it can see, even those in cracks and pits.

The flashlit image is paired with a photo taken without extra lighting. Software then compares the brightness of every matching pair of pixels in the two images and calculates how much of each pixel's brightness is down to its position, and how much is due to its colour.
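
In code, that separation can be sketched in a few lines. The snippet below is a minimal illustration of the idea, not the team's published algorithm: it assumes the flash photo records each point's true colour while the ambient photo records that colour multiplied by shading, so dividing one image by the other isolates the shading. The file names are placeholders.

    import numpy as np
    from PIL import Image

    # Load the flash / no-flash pair as floating-point greyscale images.
    # (File names are placeholders; any aligned pair will do.)
    flash = np.asarray(Image.open("flash.png").convert("L"), dtype=np.float64)
    ambient = np.asarray(Image.open("ambient.png").convert("L"), dtype=np.float64)

    eps = 1e-6  # guard against division by zero in black pixels

    # Flash image ~ colour (albedo); ambient image ~ colour x shading.
    # Dividing cancels the colour term, leaving the per-pixel shading.
    shading = ambient / (flash + eps)
    shading /= shading.max() + eps  # normalise into the range [0, 1]

    # Dark-means-deep assumption: points that see less of the sky are both
    # darker and deeper, so treat the shading itself as a height proxy.
    height = shading  # 0 ~ bottom of a crack, 1 ~ fully exposed surface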

That information is used to produce a convincing rendering of the surface's texture. By altering the direction of illumination on the virtual surface, the system can generate realistic shadow effects.
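
One common way to achieve such relighting from a recovered height field – again a sketch under simple assumptions, not the renderer the researchers describe – is to estimate surface normals from the height field's gradients and apply Lambertian shading for any chosen light direction:

    import numpy as np

    def relight(height, light_dir, strength=1.0):
        # Estimate surface normals from the height-field gradients.
        dy, dx = np.gradient(height * strength)
        normals = np.dstack([-dx, -dy, np.ones_like(height)])
        normals /= np.linalg.norm(normals, axis=2, keepdims=True)

        # Lambert's law: brightness = max(0, normal . light_direction).
        light = np.array(light_dir, dtype=np.float64)
        light /= np.linalg.norm(light)
        return np.clip(normals @ light, 0.0, None)

    # Moving the light from overhead to a grazing angle deepens the shadows:
    # overhead = relight(height, (0.0, 0.0, 1.0))
    # grazing  = relight(height, (1.0, 0.0, 0.2))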

Spot the difference

To test the realism of the results, the researchers asked 20 volunteers to compare renderings of surfaces reconstructed from photo pairs with versions of the same surfaces rendered from laser-scan data. The volunteers couldn't tell the difference.

The new technique is already being used to add depth and realism to the ancient carvings that will appear in Maya Skies – a full-dome digital projection for planetariums that tells the story of the Mayan people. Maya Skies will be released in 2009.

Glencross and Ward presented their results at the SIGGRAPH conference in Los Angeles last week.


Welcome To 3d College Kids!

By Face

Wed Aug 27 16:23:49 BST 2008

Class number 4. Bump maps. Nothing exciting. The statement about having to manually create depth in video game models... Is she saying that all depth on a character in modern games like Gears of War and MGS4 is created manually? So all the wrinkles in their faces are full polygon models? With meshes that are hundreds of thousands of points in a character's face alone? So my $400 gaming system has to process all those points on the fly, making it more powerful than a $5000 machine with an equally expensive 3D modelling program where all those points would have to be rendered frame by frame? This method has been in use by 3D artists for ages, by the way. I learned it from a co-worker in my "basic training" of Blender. Or maybe I don't really get it, it is a truly "new and exciting" method, and these college kids are geniuses who are going to put idiots like me and the gods at ILM out of their jobs.


Welcome To 3d College Kids!

By Jerkface

Wed Aug 27 17:39:48 BST 2008

Did you intend to come across as a smug shithead in your comment?


Welcome To 3d College Kids!

By Face

Wed Aug 27 18:23:46 BST 2008

No, just someone who relies on actual points to make an argument rather than resorting to six-year-old name-calling antics.


Welcome To 3d College Kids!

By Frau Blucher

Wed Aug 27 20:35:19 BST 2008

In that endeavour, you failed. ...



Welcome To 3d College Kids!

By Konrad

Wed Aug 27 17:49:20 BST 2008

Yep, you really don't :-) It's not about the 3D CGI, it's about spatial imaging, i.e. getting the bump/depth map out of a PHOTOGRAPH of a REAL object – and the excitement is that instead of getting a $5k+ laser scanner, an academic project (or a photo/HDR/3D enthusiast like yourself) can make do with a $500 DSLR.

PS: and yes, you CAN render your model a bit better by just applying the very same texture as a bump map, but that's for the 3D hobbyists, not the archaeologists :-)


Welcome To 3d College Kids!

By Soylent

Wed Aug 27 21:07:13 BST 2008

"Is she saying that all depth on a character in modern games like gears of war and MGS4 are all created manually?"

Yes.

"So all the wrinkles in their faces are all full polygon models? With meshes that are hundreds of thousands of points in a characters face alone?"

Yes. Many millions of polygons usually.

"So, my 400$ gaming system has to process all those points on the fly making it more powerful than a program on a 5000$ with an equally expensive 3d modelling program where all those points would have to be rendered frame by frame?"

No. The high-res mesh containing millions of polygons is used to create normal or parallax maps and ambient occlusion maps for the in-game assets, which use a few thousand polygons.


Welcome To 3d College Kids!

By Cut Off Your Nose

Thu Aug 28 02:07:28 BST 2008

What the hell are you talking about? Yes, the bump maps have to be created manually on models that run to millions of polygons, normally using either Mudbox or ZBrush, including the wrinkles on their faces. Then that mesh is projected down to a lower-res, game-ready mesh and the normal maps are created.

The other method is to paint heightmaps and then convert them into normal maps.

But the point is, they DO have to be created manually, and it's a very painstaking process, so what the hell is your problem?


Someone Is Wasting Their Time. . .

By Adam

Wed Aug 27 17:35:53 BST 2008

I've been in the games business for over 20 years, and in the last eight years we've been perfecting methods of mapping texture depth (previously greyscale bump mapping, now normal mapping and parallax mapping) using the same (and better) techniques than those in this article. I don't understand why this article was written without research into the subject, and presented as if it is "cutting edge".

As is, camera flash isn't appropriate for most surfaces that reflect light – and there are a lot of light-reflecting surfaces in the real world. Most cases require a setup which creates an approximation of diffuse lighting, but you can't easily apply an ideal studio setup in situ when wandering around looking for interesting surfaces.

To the poster of "Welcome to 3d College Kids!", bump detail in your games (i.e. Gears of War) is rendered with normal maps, which are generated by projecting detail from high resolution models onto a low resolution mesh. So yes, the depth is manually created initially.

A normal map is an image texture, typically sampled by hardware shaders, in which each pixel is referred to as a texel, and the R, G, B values of each texel encode the X, Y and Z components of the vector used for reflecting light, thus allowing a (perceived) level of polygon detail equivalent to the square of the texture map's resolution.
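
That decoding is short enough to show in a few lines of Python – a rough sketch assuming the usual 8-bit tangent-space convention, not any particular engine's code:

    import numpy as np

    def decode_normal(texel_rgb):
        # Remap each 8-bit channel from [0, 255] to [-1, 1], then renormalise.
        n = np.array(texel_rgb, dtype=np.float64) / 255.0 * 2.0 - 1.0
        return n / np.linalg.norm(n)

    # The characteristic "flat blue" texel decodes to a normal pointing
    # straight out of the surface:
    print(decode_normal((128, 128, 255)))  # -> roughly [0, 0, 1]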


Someone Is Wasting Their Time. . .

By Konrad

Wed Aug 27 17:57:28 BST 2008

Once again, it's not about perfecting the 3D, it's about measuring and preserving the bevel levels on the textures – watch the clip – did they have a single frame with a "now happy" 3D artist? What they had was the Mayan frescoes :-) Besides, I doubt all academic research programmes can afford a 3D texturing consultant :-)


Someone Is Wasting Their Time. . .

By Adam

Wed Aug 27 18:40:14 BST 2008

Normal mapping, by proxy, defines bevel edges rather than actual depth (a bevel being a collection of tangents – normal mapping can also be described as surface-tangent mapping).

Another aspect of detail mapping that a camera can't achieve is non-parallel cavity mapping - that being details that aren't flush with the surface vector, but that's a good thing I think - got to keep the texture artists in paying jobs after all ;)


Someone Is Wasting Their Time. . .

By Another Adam

Wed Aug 27 19:52:51 BST 2008

I don't think they're trying to replace normal-mapping, parallax maps, self-shadowing bumpmaps or anything like that - they've simply created a quick and easy way of getting real-world input data for those algorithms.

From a texture artist's point of view, instead of photographing a surface and laboriously creating a heightmap in Photoshop, or modelling a 3D version and generating a heightmap from that, you just take another photo of the same surface with the flash on and run both images through some magical plugin.

The resulting heightfield can be further edited, cleaned up and mangled along with the albedo texture - and processed to create a normal map or whatever weird lookup texture the algorithm du jour requires.

I'd love to get my hands on this tech. And may use it as an excuse to get a decent flash and tripod for use on my game-texture-hunting trips...
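
For the curious, that last processing step – turning the heightfield into a normal map – looks roughly like this (assuming the usual tangent-space encoding; this is a sketch, since the plugin itself isn't public):

    import numpy as np
    from PIL import Image

    def height_to_normal_map(height, strength=1.0):
        # Surface normals from the heightfield gradients
        # (height is a 2-D float array, e.g. from the photo-pair trick).
        dy, dx = np.gradient(height * strength)
        n = np.dstack([-dx, -dy, np.ones_like(height)])
        n /= np.linalg.norm(n, axis=2, keepdims=True)
        # Remap each component from [-1, 1] into the 8-bit [0, 255] range.
        rgb = ((n + 1.0) * 0.5 * 255.0).astype(np.uint8)
        return Image.fromarray(rgb, mode="RGB")

    # Example: height_to_normal_map(height_array).save("wall_normal.png")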


Someone Is Wasting Their Time. . .

By Adam

Thu Aug 28 11:15:44 BST 2008

Indeed, you're quite right. I appear to have developed "factory worker" reflexes to anything that might put me out of a job; allow me a taste of that humble pie...

Having followed the paper trail on this, it appears the tech (called SurfaceImager) has been doing the circuits for some time (at least since 2005), and is owned by Eykonya Technologies. Sadly there's nothing on their website alluding to SurfaceImager specifically, but rather some medical scanning stuff, which may or may not be SurfaceImager (there's a rather poor foot scan on there). I'm not sure who you'd even email to find out more...but I have a feeling it's not going to appear anytime soon, and if they're trying to sell it to the medical industry, they're probably thinking of money...

On a side note, their whitepaper describes the process of getting from in situ photography to normal maps, and looks a lot more involved than just wandering around taking snapshots of anything that may take your fancy - but that depends on how much junk you want to carry around with you :)


Someone Is Wasting Their Time. . .

By Steve

Thu Aug 28 13:40:32 BST 2008

So the comment by the author at the start of the article about getting up close to a textured surface in a game and finding it 'flat and dull looking' is completely irrelevant to this tech.

This is just a quick way of generating your initial map which you'll use to create textures, nothing to do with how well it scales in your game engine.


Someone Is Wasting Their Time. . .

By Adam

Thu Aug 28 19:13:20 BST 2008

Yes; this is what set me off on an irate stabbing match. When I saw the RSS intro to the article I thought someone had invented something truly game-changing.

Looks like a press release rather than an article.


This Is Not New

By Freetoboat

Thu Aug 28 22:18:00 BST 2008

http://zarria.net/nrmphoto/nrmphoto.html


This Is Not New

By Freetoboat

Thu Aug 28 22:20:51 BST 2008

Weird link detection on these comments

maybe this will work:

zarria.net/nrmphoto/nrmphoto.html


