There is a plethora of WPF charting solutions out there, both commercial and open source. I have tried quite a few but have yet to find one that can handle highly dynamic data whilst also providing useful customisations such as centred axes or polar diagrams. I do most of my dynamic charts in OpenGL, but labelling axes and data points using textured fonts lacks some of the polish I have seen in many WPF-based solutions. Thus I thought that, for my Windows apps, I would try porting my OpenGL chart library to WPF.
My wife and I had a bouncing baby boy late last year and man, did he arrive like a shrieking banshee from hell. That was four months I hope never to experience again and, thanks to a serious and prolonged lack of REM sleep, I hope soon to forget. Like all new parents, we just dug in and prayed for things to slowly get back to normal. However, with the passing of time comes the dawning realisation that this is the new normal. Even though I really liked my previous life, this post isn't a rant about that. It's about some of the things I've discovered about my new one.
As I have mentioned in a couple of other posts, I have been working on the API for a suite of building performance analysis classes, with a particular focus on dynamic visualisation and interactive manipulation. Part of this involves implementing an event notification system so that changes can propagate through even the most complex object hierarchies without the user having to worry about the internal plumbing. Unfortunately the standard approaches that I have found aren't particularly suitable here, so I have had to hack out something of my own.
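The basic idea can be sketched as a minimal, hypothetical example (the class and method names here are illustrative, not the library's actual API): each node bubbles change notifications up to its parent, so a listener attached near the root sees changes from anywhere in the hierarchy without any per-object wiring.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of hierarchical change notification: a change on any
// node is reported to its own listeners and then bubbled up to the parent,
// so one listener on the root covers the whole object tree.
class ModelNode {
    interface ChangeListener { void changed(ModelNode source, String property); }

    private ModelNode parent;
    private final List<ChangeListener> listeners = new ArrayList<>();

    ModelNode addChild(ModelNode child) { child.parent = this; return child; }
    void addListener(ChangeListener l) { listeners.add(l); }

    // Called by setters: notify local listeners, then bubble to the parent.
    protected void notifyChanged(ModelNode source, String property) {
        for (ChangeListener l : listeners) l.changed(source, property);
        if (parent != null) parent.notifyChanged(source, property);
    }

    // A stand-in for a real property setter.
    void setProperty(String name) { notifyChanged(this, name); }
}
```

The trade-off with this kind of scheme is that every setter must remember to call notifyChanged(), but in return the user never has to subscribe to individual objects deep in the hierarchy.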
Whilst working on an analysis library recently, I have been trying to reconcile how best to define positional and directional information. Most 3D frameworks and APIs use a single Vector class (or similar) to define a generic object with X, Y and Z properties, which is then used interchangeably to describe both points in space and direction vectors. The closer I look, however, the more I think that it's important to clearly distinguish between the two and treat them as separate classes in an API in order to avoid inadvertent user mistakes. Unfortunately there is no single knock-out justification for either approach, so the following are some simple contemplations on the subject as I try to work out which way to go.
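To make the distinction concrete, here is a minimal sketch of the separate-classes approach (the names Point3 and Vector3 are purely illustrative): the type system then allows a point to be offset by a vector, and two points to be subtracted to give a vector, but simply refuses to compile an attempt to add two points together.

```java
// A direction/displacement: closed under addition and scaling.
class Vector3 {
    final double x, y, z;
    Vector3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    Vector3 add(Vector3 v) { return new Vector3(x + v.x, y + v.y, z + v.z); }
    double length() { return Math.sqrt(x * x + y * y + z * z); }
}

// A position in space: can be offset by a vector, or differenced with
// another point, but deliberately has no add(Point3) method.
class Point3 {
    final double x, y, z;
    Point3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
    Point3 add(Vector3 v) { return new Point3(x + v.x, y + v.y, z + v.z); }   // point + vector = point
    Vector3 sub(Point3 p) { return new Vector3(x - p.x, y - p.y, z - p.z); }  // point - point = vector
}
```

The cost, of course, is some duplication between the two classes and the occasional explicit conversion where a single generic class would have just worked.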
One of the projects I have been working on over the last few months is a graphical user-interface (GUI) library for Processing. The Java API to the library makes extensive use of sub-classing, and the intention is that it be easily sub-classed by the end user to allow for deep customisation of its functionality. Deep customisation means allowing any property in any of its classes to be easily overridden. This sounds trivial, as it is what object-oriented programming and Java are all about, but it actually leads to an interesting code-design dilemma.
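One common way through the dilemma, sketched here with a purely hypothetical Button class (not the library's actual API), is to route every property through a protected method rather than a plain field, so the drawing code always sees the subclass's value for any property it chooses to override.

```java
// Every customisable property is an overridable protected method, and the
// base class only ever reads properties through those methods.
class Button {
    protected int cornerRadius() { return 4; }   // overridable property
    protected String label()     { return "OK"; }

    // Stand-in for drawing code: reads properties only via the getters.
    String describe() {
        return label() + " (r=" + cornerRadius() + ")";
    }
}

// Deep customisation: override just the one property you care about.
class RoundButton extends Button {
    @Override protected int cornerRadius() { return 12; }
}
```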
Technical illustrations and 3D graphs tend to need lots of different kinds of text - some bold/italic, some large/small, and often in a range of different colours or shades of grey. Having struggled with image-based font glyphs in 3D for ages, I've kinda had enough - so I set out to develop my own parametric vector text in Processing/OpenGL. This applet is a demo of some of the types of 3D text object I've been playing with, as well as some experiments with annotation arrows.
I recently upgraded to Reason 6 and had a bit of time over Christmas for some remixing and further experiments. Okay, I know what you are thinking, but I do have history here as I used to have a great MIDI setup in the late 80s and early 90s when I was a student. I've had Reason 4 for a while, but never the time to properly finish anything before. They're not everyone's cup of tea, but if you are interested in a little electronic music...
When interactively manipulating objects in 3D, having clear dimensions that update dynamically while dragging allows for much greater confidence and accuracy in the process. This is especially true on a tablet, where accurate alignment is well-nigh impossible as your finger (and sometimes hand) effectively obscures the drag point. However, what started out as a quick experiment ended up sucking me into a six-day vortex, swirling around with quaternion maths and 3D text manipulation in an attempt to keep radial dimensions visible when viewed from any direction. This tiny applet is the result of those six days.
I have been struggling for a while in Processing with the reverse projection of 2D screen coordinates back into 3D world coordinates. It's relatively straightforward in OpenGL sketches using the JOGL view matrices and the GLU.gluUnProject() function. However, I could never get anything solid when using the P3D renderer or the PGraphics3D matrices. For some reason there was always a slight scaling issue that varied with different views. Previously I've used fudge factors to get it pretty close but, as I still prefer P3D for browser-embedded work, I finally spent some time on it and made the breakthrough I needed. This is a brief description of what I learnt along the way, as well as a demo applet and a custom gluUnProject() function.
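For reference, the window-to-world mapping that gluUnProject() performs is renderer-independent and fits in a few lines. This is a minimal sketch, assuming the caller has already built the inverse of the combined modelview-projection matrix (in Processing you could, for example, multiply the PGraphics3D matrices into a PMatrix3D and call invert() on it).

```java
class UnProject {
    // invMVP: 4x4 row-major inverse of (projection * modelview).
    // viewport: {x, y, width, height}. winY is assumed already flipped to
    // OpenGL's bottom-left origin; winZ is the depth value in [0, 1].
    // Returns {wx, wy, wz}, or null if the matrix is degenerate.
    static double[] unProject(double winX, double winY, double winZ,
                              double[] invMVP, int[] viewport) {
        // Window coordinates -> normalized device coordinates in [-1, 1].
        double nx = 2.0 * (winX - viewport[0]) / viewport[2] - 1.0;
        double ny = 2.0 * (winY - viewport[1]) / viewport[3] - 1.0;
        double nz = 2.0 * winZ - 1.0;
        double[] in = {nx, ny, nz, 1.0};
        // Multiply by the inverse matrix, then divide through by w.
        double[] out = new double[4];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                out[r] += invMVP[r * 4 + c] * in[c];
        if (out[3] == 0.0) return null;
        return new double[]{out[0] / out[3], out[1] / out[3], out[2] / out[3]};
    }
}
```

Any renderer-specific scaling issue then comes down to what you feed in as invMVP and the viewport, not the mapping itself.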
I have recently been building a small multi-node cluster of Mac Mini Servers as a development tool to explore some cloud services and parallel processing techniques. For this purpose, Mac Minis are great as they are dead quiet, use very little power, boot happily without a keyboard, mouse or monitor attached, and are easily set up to allow full remote management, configuration and screen sharing. For me the Server version is best, as only it comes with an Intel quad-core i7 CPU, giving effectively 8 processing nodes per machine. The CPU speed is a bit slower than the best non-server version (2.0GHz vs 2.7GHz), but the non-server version is only dual-core.
When dealing with complex analytical models, visually checking that all the spaces within a building have been generated correctly is never easy. This is because there are usually so many of them, and their bounding surfaces are invariably adjacent to those of other spaces. You often wish there was a way to 'explode' the building apart so you could see all of its constituent parts. Well, this is the first of my experiments to do exactly that.
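The core transform behind this kind of exploded view is simple, sketched here with illustrative names (not the actual implementation): move each space along the vector from the building's overall centroid to that space's centroid, scaled by an adjustable explosion factor.

```java
class Explode {
    // centroids[i] = {x, y, z} centroid of space i.
    // Returns the translation to apply to space i for a given factor
    // (0 = fully assembled, 1 = each space moved by its full offset
    // from the building centroid, and so on).
    static double[] offsetFor(double[][] centroids, int i, double factor) {
        // Building centroid as the mean of the space centroids.
        double cx = 0, cy = 0, cz = 0;
        for (double[] c : centroids) { cx += c[0]; cy += c[1]; cz += c[2]; }
        int n = centroids.length;
        cx /= n; cy /= n; cz /= n;
        // Push the space directly away from (or back towards) that centre.
        return new double[]{
            (centroids[i][0] - cx) * factor,
            (centroids[i][1] - cy) * factor,
            (centroids[i][2] - cz) * factor
        };
    }
}
```

Animating the factor from 0 upwards is what gives the pulled-apart effect while keeping every space's relative position recognisable.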
Having just done the Spherical Harmonics demo, and with most of the infrastructure already in place, it would have been remiss not to do a Super Shapes demo. Super Shapes are 3D forms generated using Johan Gielis' generalisation of the superellipse formula, often termed the superformula. This was proposed in 2003 as a framework for simulating natural forms and is basically an equation with four input parameters that generates a range of natural polygons.
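For reference, the superformula itself is compact. The sketch below uses the common simplification of setting its a and b scaling terms to 1, leaving the four shape parameters m, n1, n2 and n3; sweeping phi from 0 to two pi traces the 2D super shape, and the 3D forms combine two such curves in spherical coordinates.

```java
class SuperShape {
    // Radius of the super shape at angle phi (radians).
    // m controls rotational symmetry; n1, n2, n3 control the curvature.
    static double superformula(double phi, double m,
                               double n1, double n2, double n3) {
        double t1 = Math.pow(Math.abs(Math.cos(m * phi / 4.0)), n2);
        double t2 = Math.pow(Math.abs(Math.sin(m * phi / 4.0)), n3);
        return Math.pow(t1 + t2, -1.0 / n1);
    }
}
```

A useful sanity check is that m = 4 with n1 = n2 = n3 = 2 collapses to cos² + sin² = 1, so the radius is constant and the shape is a unit circle.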
I have recently been looking at the use of spherical harmonics as a way of doing real-time diffuse lighting and shadowing effects in OpenGL. As I usually only really understand stuff when I can see it, I did a quick viewer in Processing to help make sure I was getting all the algorithms correct. Some of the visualisations and shapes started to look pretty good, so I figured I’d polish it a bit and put it up on my site.
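As a taste of the underlying maths, here is a minimal sketch (illustrative code, not the viewer itself) of the first two bands of the real spherical-harmonic basis used in SH diffuse lighting: band 0 is a constant, and band 1 is simply linear in the unit direction vector, which is what makes the technique so cheap to evaluate per vertex.

```java
class SphericalHarmonics {
    static final double C0 = 0.282095; // Y00 coefficient: 0.5 * sqrt(1 / PI)
    static final double C1 = 0.488603; // band-1 coefficient: sqrt(3 / (4 * PI))

    // Returns {Y00, Y1-1, Y10, Y11} for a unit direction (x, y, z).
    static double[] evalBands01(double x, double y, double z) {
        return new double[]{C0, C1 * y, C1 * z, C1 * x};
    }

    // Reconstruct a spherical function from its 4 SH coefficients by
    // summing coefficient * basis over the bands.
    static double reconstruct(double[] coeffs, double x, double y, double z) {
        double[] basis = evalBands01(x, y, z);
        double sum = 0;
        for (int i = 0; i < 4; i++) sum += coeffs[i] * basis[i];
        return sum;
    }
}
```

Higher bands add angular detail in exactly the same way, which is why plotting the basis functions as radial surfaces makes such good-looking shapes in a viewer.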
The intention of this next iteration on the theme of overshadowing was to look at surface shading on a glazing panel. Unfortunately I got a bit bogged down trying to work out interactive design rules for louvres and brise soleil, so didn't get as far as I'd have liked over the long weekend. However, I did finish a basic horizontal/vertical shade example and thought I'd put it up as, even though it doesn't yet show the shading effect on the glazing surface, simply being able to drag a shading mask around seems to give some useful insight into the solar aperture and obstruction effects.
The latest version of Processing makes exporting sketches directly to Android relatively easy. However, as everyone on the Processing for Android wiki keeps saying, interacting with a mobile app is very different from using a standard mouse and keyboard. This update is the first of my attempts at Android development with Processing.