Adam Frisby

Archive for the ‘wpf’ tag

3D in the Browser

with 7 comments

There's been a rush of discussion recently about embedding virtual world technologies in the browser. I have seen two Java3D-based implementations, which are very clever and deserve a closer look, especially if either is Open Source.

Tish has written up an interview with Avi, whom I mentioned earlier, on 3D in the Browser, the technologies involved, and where things should be going. He is very eloquent and I agree with pretty much everything he has to say. Avi added a companion piece with a table of definitions and a small commentary on the article, which leads me to add my own comments.

First, the best browser-based approach is one that doesn't require installing much onto the client's system, because the hurdle of making a plugin popular is a big one. XBAP is great for this since you don't need to install it to run it; it runs in a sandbox on its own just fine.

The downside to this is that we lose things like local caches, which dramatically reduce load times when frequenting common areas. It would be nice to optionally "preview, then install", something we might be able to do via XBAPs (one mode restricted to the sandbox, the other presuming it is installed). Some experimentation in installing easily is going to be needed.

I've been doing some further experiments with XBAP (and yes, I'll be posting the latest Xenki code shortly) and discovered some interesting things. First, 3D performance is no worse than a standalone WPF application. Second, it runs on Windows 2000/XP and up; I previously assumed it was Vista-only and was pleasantly surprised to find out this is not the case. Third, getting the security certificates needed to automatically launch directly without an install actually won't be too painful after all (as long as libopenmv can get signed, we're good).

Cross Platform, Java3D?

The cross-platform issue remains elusive, which is why Java3D has caught my eye recently. Given the similarities between C# and Java, it strikes me as potentially useful to be able to take large chunks of Xenki, run it through a special compiler, and produce something that will run on Linux and Mac too.

The question becomes: why not use Java directly and skip the C#/WPF bit? The answer here is somewhat personal. First, WPF is a lot easier to work with. This is an unfortunate fact of life, but doing things like UI on top of 3D frames, cleanly and efficiently, is something WPF does so much better that there is almost no competition between the two.

I personally have a preference for doing things as quickly and cleanly as possible, worrying about making it functional rather than redoing structural work that has already been done before. However, if someone has already produced a good embeddable Java 3D engine for web pages, I'd be very tempted to at least give it a shot.

Java3D also has the downside that it still sits in a sandbox, and the sandbox is mandatory; to the best of my knowledge there is no way we can implement a local cache with it. I personally feel that the installable option is a very good way to cross the barrier between 'hosted' and 'installed' once the user is familiar.

Updated Xenki Sources

As stated above, I'm going to push the new Xenki sources out in the next 24-48 hours. Watch this space, as I will post the URLs and also the SVN address where I will be keeping changes.

Written by Adam Frisby

August 9th, 2008 at 6:17 pm

Posted in Xenki


Thanks: Zain, David and Gerhard at Microsoft

with one comment

This is just a very quick shout-out to the guys over at Microsoft, in particular Zain Naboulsi, their Virtual Worlds evangelist, for introducing me to David and Gerhard, the Project Manager and Lead Programmer respectively on the WPF Graphics Team, who have graciously offered me their time to help answer some performance-related questions about the Xenki viewer I have been developing.

My absolute thanks. It's great to have such esteemed assistance on hand, and I will be sure to contact you guys with all sorts of hairy questions about things I shouldn't be doing but am. *grin*

Written by Adam Frisby

August 7th, 2008 at 1:02 pm

Posted in Kudos


Ideas for Scene Graph Optimisation

with 2 comments

Author's note: I use terms such as 'disadvantage' when referring to Second Life's building tools in comparison to professional tools used by professional artists; naturally, user-generated content tends to lean towards less efficient building techniques. This is not a slight on the content creators themselves, just an observation that the tools create a lot more work for people writing renderers and dealing with efficiency.

Second note: like my previous post, a large deal of this is speculation. I plan on confirming or denying a large number of my suspicions with the Xenki viewer's design, but at this point it should be read as ramblings on the author's blog rather than any authoritative statement.

As a side note to my previous post, I have some more ideas I'd like to try to put into practice directly, aimed at rendering Second Life(tm)-style scenes faster in Xenki. The mainline SL client achieves this, as far as I can tell, through a combination of utter brute force (equivalent to sending an entire dam through a garden hose every minute; it's pretty impressive) and lots and lots and lots of caching.

This is not going to play well with WPF at all (I can see that much already): first, we don't have access to low-level hardware, and second, I don't want to debug a thousand graphics glitches against every nuanced bit of hardware. Thanks, but no thanks; I'd rather let MS worry about that part.

So, if brute force is out of the question, what options exist for making things render faster?

First is the obvious one – let’s cache better.

One of the things that has been lamented previously is that Second Life has dynamic content, ergo we cannot cache the scene. I suspect this isn't the whole story: while it is true that every object in the scene can potentially be moved (by scripts or avatar building) at any moment, we can evaluate a lot of them on probabilities and write off whole swathes as unlikely to move.

Objects

Objects can be pretty easily split between "likely to move" and "unlikely to move". Likely-to-move objects are those that were recently created, are marked temporary or physical, or contain scripts. While it is true the others could still move, the probability is significantly lower, and therefore we can more readily cache them. If one does get moved, we'll need to rebuild that cache (without the object that moved), but for now that is acceptable.
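To make the split concrete, here is a rough sketch of the kind of classification I have in mind; the Prim class and its flags are made-up stand-ins rather than actual libsl or Xenki types:

```csharp
using System;

// Hypothetical prim description; the field names are stand-ins, not libsl/Xenki types.
public class Prim
{
    public DateTime CreatedAt;
    public bool IsTemporary;
    public bool IsPhysical;
    public bool HasScripts;
}

public static class SceneCachePolicy
{
    // Treat anything created in the last few minutes as "likely to move".
    static readonly TimeSpan RecentWindow = TimeSpan.FromMinutes(5);

    public static bool IsLikelyToMove(Prim prim)
    {
        bool recentlyCreated = (DateTime.UtcNow - prim.CreatedAt) < RecentWindow;
        return recentlyCreated || prim.IsTemporary || prim.IsPhysical || prim.HasScripts;
    }
}
```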

This cache could take the form of rendering the entire 'static' portion of the scene to a single massive vertex buffer, and then rendering the dynamic elements individually (or in smaller caches). This is very similar to how modern games work; however, in that case you have the advantage of being able to build a BSP tree in the editor. I am uncertain whether we are capable of doing BSP generation fast enough to make this dynamic cache feasible, but it is an interesting idea nonetheless (insert additional concerns about wide open spaces and BSP trees here).

A potential downside here is that we'll need to change how LOD works for this to be effective. Rather than having LOD calculated on the fly as the camera navigates, we will need to fix it when the cache is built and then only update LOD periodically as the cache refreshes. In this case, LOD may become a function of the size of the object in absolute terms rather than relative to screen space.
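As a sketch of what an absolute-size LOD function might look like (the thresholds here are invented purely for illustration):

```csharp
public static class LodPolicy
{
    // Map an object's bounding radius (in metres) to a level of detail,
    // independent of camera distance. Thresholds are illustrative only.
    public static int LevelForRadius(double boundingRadius)
    {
        if (boundingRadius > 8.0) return 3;  // highest detail
        if (boundingRadius > 2.0) return 2;
        if (boundingRadius > 0.5) return 1;
        return 0;                            // lowest detail
    }
}
```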

Maintaining this cache on an idle processor

One of the great things about processors lately has been the abundance of cores being added; chances are there is a piece of hardware sitting in this machine without much to do. We can leverage this by doing the cache building and maintenance on a separate thread which runs on another core. Because the cache is not a prerequisite to rendering, we can optimise the cache in the background, then use it once it is available.
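A minimal sketch of that arrangement, reusing the hypothetical Prim type from above; the dirty queue and the rebuild call are placeholders for whatever the real cache ends up doing:

```csharp
using System.Collections.Generic;
using System.Threading;

public class StaticSceneCache
{
    private readonly Queue<Prim> _dirty = new Queue<Prim>();
    private readonly object _lock = new object();

    public void Start()
    {
        // Runs on its own background thread so cache rebuilds never block the render loop.
        var worker = new Thread(WorkLoop) { IsBackground = true };
        worker.Start();
    }

    public void MarkDirty(Prim prim)
    {
        lock (_lock) { _dirty.Enqueue(prim); }
    }

    private void WorkLoop()
    {
        while (true)
        {
            Prim next = null;
            lock (_lock)
            {
                if (_dirty.Count > 0) next = _dirty.Dequeue();
            }

            if (next != null)
                RebuildCacheWithout(next);   // placeholder for the real rebuild
            else
                Thread.Sleep(50);            // idle until more work arrives
        }
    }

    private void RebuildCacheWithout(Prim moved)
    {
        // Rebuild the static geometry here, leaving out the object that moved.
    }
}
```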

Handling Textures Better

Second Life has the disadvantage of not using professionally created textures on every surface. This means it is possible for a microscopic object that you cannot really see to have a massive 1024×1024 texture attached to it, increasing both bandwidth usage and the amount of texture memory consumed in displaying your scene.

An idea for fixing this problem could be to measure the surface area each texture is applied to, then use that surface area to approximate the resolution at which we should render each texture (converting that 1024×1024 texture down to a 32×32 texture if it is only used once, on that object).
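Something like the following heuristic is what I have in mind; the texels-per-square-metre constant is an invented knob, not a tuned value:

```csharp
using System;

public static class TextureBudget
{
    // Rough target: this many texels per square metre of surface the texture covers.
    // The constant is illustrative, not tuned.
    const double TexelsPerSquareMetre = 64 * 64;

    // Given the total surface area (m^2) a texture is applied to, pick a
    // power-of-two resolution between 32 and 1024.
    public static int TargetResolution(double totalSurfaceArea)
    {
        double texels = totalSurfaceArea * TexelsPerSquareMetre;
        double side = Math.Sqrt(texels);

        int resolution = 32;
        while (resolution < 1024 && resolution < side)
            resolution *= 2;

        return resolution;
    }
}
```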

Doing this, in combination with careful management of the amount of texture memory available (downsampling to fit memory and applicability together), may get around at least part of the "huge texture memory consumption" problem.

Written by Adam Frisby

August 6th, 2008 at 4:43 pm

Posted in Xenki


Procedural Generation of Prims considered harmful?

with one comment

Yep.

I said it. One of the things that's been touted as so fantastic about SL's rendering performance is the speed at which you can push prims to the graphics card, the amount of caching in vertex buffers that can be done, and so on.

I'm about to say that it actually doesn't seem to matter that much. Prims lose out in a lot of cases for some very interesting but difficult-to-fix reasons, and doing performance workarounds for this is going to be complex, irritating, and make me wish I was dealing with my precious meshes.

I should note here that the performance of the XBAP application on my crummy laptop graphics card is still relatively solid, and I'm brute-forcing nearly every operation at this point.

Reason Number Uno: Fill rate, “invisible” triangles.

Prims waste a lot of triangles in areas we cannot see. Occlusion culling of whole objects works well here, but it doesn't work when we're dealing with potentially a few thousand triangles that are part of an object but inseparable from it. This is due more to construction techniques than to anything we can fix at the renderer level, but nonetheless it has a major impact on performance.

Possible Solutions

I'm experimenting with using CSG (Constructive Solid Geometry, i.e. boolean operations) at the moment as a method of reducing the number of hidden triangles pushed to the screen. This will have some complexity when transparent surfaces are involved, but if we exclude transparent primitives from the algorithm we may get a reasonable reduction in the number of triangles pushed to the screen, at the expense of increasing the number of vertex buffers used (prims do have vertex caching on their side).

This is something I plan to experiment with, and I am looking at ways to do CSG in C# without having to dig out research papers.
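Whatever CSG approach I end up with, the filtering step can at least be sketched now: only bother with pairs of opaque prims whose bounds actually overlap. The RenderablePrim fields below are assumptions, not real Xenki types:

```csharp
using System.Collections.Generic;
using System.Windows.Media.Media3D;

public class RenderablePrim
{
    public Rect3D Bounds;       // axis-aligned bounds in region space (assumed field)
    public bool IsTransparent;  // any face with alpha (assumed field)
}

public static class CsgCandidates
{
    // Pick pairs of opaque prims whose bounds overlap; only these are worth
    // running boolean subtraction over to strip hidden triangles.
    public static IEnumerable<KeyValuePair<RenderablePrim, RenderablePrim>> FindPairs(
        IList<RenderablePrim> prims)
    {
        for (int i = 0; i < prims.Count; i++)
        {
            if (prims[i].IsTransparent) continue;
            for (int j = i + 1; j < prims.Count; j++)
            {
                if (prims[j].IsTransparent) continue;
                if (prims[i].Bounds.IntersectsWith(prims[j].Bounds))
                    yield return new KeyValuePair<RenderablePrim, RenderablePrim>(prims[i], prims[j]);
            }
        }
    }
}
```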

Reason Number Duo: Really Inefficient Texturing

This is a more annoying issue: as we start drawing triangles for the procedural surface, we have to switch textures multiple times to render the primitive (assuming it isn't the same texture on all sides). On a spherical or curved surface this isn't so much of a problem; we push a few thousand triangles, flip, push a few thousand more. Fine.

On boxes: push 2 triangles. Flip. Push 2 triangles. Flip. Now, of course, it's better not to flip at all, and as some people will point out, pushing 2 triangles is still cheaper than pushing a few thousand. The problem here is how primitives differ from mesh-based models.

Traditionally in mesh-based modelling, you generate a single texture with a UV map for the entire object. By wrapping and contorting it, you can render the entire object in one pass, which means we don't need to pause, bind a new texture, and repeat nearly as many times. It still happens occasionally, but the count is much, much lower.

If your scene (such as in a modern game) only has 50 uniquely textured objects visible at once (look closely and you will find it's probably not much higher than this number), this is fine. It works well; if we appropriately stage our render pipeline, we might even be able to group these into a single pass each.

SL? You're lucky if your scene has fewer than 100 textures visible. I've seen regions where this number is many times more, potentially in the thousands, and as I pointed out earlier, we're flipping textures midway through rendering single object collections, which is possibly undoing the performance gains we made by being able to cache those collections in the first place.

Yeuck.

Some possible solutions here

There are a couple of potential solutions to this, but I think the easiest one is to leave it to ATi/NVidia/Intel; pipelining similar textures is something I expect their drivers to do. If this does become a problem, I have some ideas in place for grouping similarly textured faces from different primitive groups into single vertex collections.
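The grouping idea itself is simple enough to sketch: bucket faces by their texture ID so each texture is bound once and its faces are pushed together. The Face type below is a placeholder, not an actual Xenki structure:

```csharp
using System;
using System.Collections.Generic;

// Placeholder for a textured face pulled out of a prim's generated mesh.
public class Face
{
    public Guid TextureId;
    public int[] Indices;
}

public static class TextureBatcher
{
    // Group faces from many prims by texture so each texture is bound once
    // and its faces are pushed as a single collection.
    public static Dictionary<Guid, List<Face>> GroupByTexture(IEnumerable<Face> faces)
    {
        var groups = new Dictionary<Guid, List<Face>>();
        foreach (var face in faces)
        {
            List<Face> bucket;
            if (!groups.TryGetValue(face.TextureId, out bucket))
            {
                bucket = new List<Face>();
                groups[face.TextureId] = bucket;
            }
            bucket.Add(face);
        }
        return groups;
    }
}
```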

Written by Adam Frisby

August 6th, 2008 at 3:30 pm

You beauty.

with one comment

A quick update: Well, it looks like I’ve managed to almost solve the issue which I listed previously.

Here are some screenshots of things sort of almost rendering correctly.

And one more showing the detail of one of Cubey's planes (in this case his Ornithopter), and it renders correctly!

Will post more screenshots later once I have completely fixed that issue.

Written by Adam Frisby

August 6th, 2008 at 2:03 pm

Posted in Xenki


Xenki Renderer: Now less broken(tm).

with one comment

So, I'm sitting here banging my head this morning over two issues: one, why the heck is the terrain never deviating from zero height, and two, why weren't things looking quite right? As you could see in the previous posts, it was clearly rendering prims, but things were 'missing' or not quite right. It turns out there was an answer to both.

First, Terrain.

Yesterday I got the heightfield behaving properly, but couldn't understand why it was never being set when fed the live data coming from the network stack. The answer turned out to be that I was doing something stupid and had typo'd a variable name in my indexer. Once that was fixed, we were rendering scenes like the one below.

It's beginning to look a lot prettier, but as you may have noticed, the prims don't appear to line up in any recognisable pattern. While there is definitely a pattern there, something is very off.

Second, Objects – Part A.

Showing this one to Easy [Babcock] in the office, we quickly worked out that in fact objects were never being rotated; every object had exactly the same zero rotation. After a few minutes of debugging, this turned out to be related to the conversion from a quaternion to a Euler rotation for WPF. Switching from Vector3 to Media.Quaternion internally solved the problem nicely.
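For reference, applying the transform with a quaternion in WPF ends up looking roughly like this (a minimal sketch; converting the wire-format rotation into a Media3D Quaternion is assumed to happen elsewhere):

```csharp
using System.Windows.Media.Media3D;

public static class PrimTransforms
{
    // Apply a prim's rotation directly as a quaternion instead of converting
    // to Euler angles, which is where the earlier bug crept in.
    public static void ApplyTransform(Model3DGroup primModel,
                                      Quaternion rotation,
                                      Vector3D position)
    {
        var transform = new Transform3DGroup();
        transform.Children.Add(new RotateTransform3D(new QuaternionRotation3D(rotation)));
        transform.Children.Add(new TranslateTransform3D(position));
        primModel.Transform = transform;
    }
}
```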

Here's a view of Abbotts Aerodrome with the fix in place.

Much better, although there are still some clumps missing.

Second, Objects – Part B.

So, it's looking like we've almost got this rendering correctly. At least object shape and rotation are being displayed correctly, although there's still something lacking. It turns out that most of the missing bits corresponded nicely with the camera view, so I've fixed this by telling libsl to 'rove' the camera position around the sim to download the entire thing. The above screenshot had this fix in place.

This solved terrain loading completely, and most objects.

Loading it up on OSGrid in Wright Plaza, everything seems to actually render properly now, at least as far as primitives go. We're missing sculpties right now, which form a large component of Wright Plaza's design.

Second, Objects – Part C!? Huh?

Panning our camera around a little, we notice something a little bit … odd. Namely, that around 0,0,0 there's a large congregation of primitives. I had noticed this already and dismissed it as possibly being neighbouring sims, in which case I'd just knock them out later.

But it turns out there are valuable prims being thrown there. It looks to me like the child primitives in link sets have their "Position" relative to the parent primitive rather than to the sim (in OpenSim we have both .Position and .AbsolutePosition to separate these).
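If that guess is right, the fix is to compose the child's offset with the parent's transform before rendering; roughly like this sketch using WPF's Media3D types (the parent and child values passed in are whatever the network layer hands us):

```csharp
using System.Windows.Media.Media3D;

public static class LinkSetMath
{
    // Convert a child prim's parent-relative position into region (sim) space:
    // rotate the offset by the parent's rotation, then add the parent's position.
    public static Point3D AbsolutePosition(Point3D parentPosition,
                                           Quaternion parentRotation,
                                           Vector3D childOffset)
    {
        var rotate = Matrix3D.Identity;
        rotate.Rotate(parentRotation);
        Vector3D worldOffset = rotate.Transform(childOffset);
        return parentPosition + worldOffset;
    }
}
```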

So, I'm going to be working on that for the rest of this afternoon, after which I'm going to play with either texture rendering or getting Meshmeriser to work so we can discard the dependency on the black-box GPL'd rendering library.

Written by Adam Frisby

August 6th, 2008 at 1:26 pm

Posted in Xenki


 
