Andrea Giammarchi has created PyramiDOM, a “Spectrum DOM Analyzer”. When I first saw it and read “Spectrum” I thought I was looking at an old 48k video game, but in fact it shows you info on the DOM:
The generated spectrum will contain every single nodeType 1 node present in the document and will show, via tooltip, info about that node (nodeName, id if present, className if present). Moreover, it highlights that node by temporarily changing its background to yellow. The most wicked effect was on the jQuery website itself, since it is dark, and since it is linear enough (you scroll and the spectrum is almost there where you scroll).
On the other hand, the most interesting spectrum was Gmail's, where you can spot a proper pyramid of nested divs. Each nodeName will have the same color, but for practical reasons this color will be different (random) each time.
Steve Souders has a bunch of best practices, and it seems that there is definitely nuance that makes the advice very much tailored to your circumstances.
Nicholas, though, has an opinion:
I’ve come to the conclusion that there’s just one best practice for loading JavaScript without blocking:
Create two JavaScript files. The first contains just the code necessary to load JavaScript dynamically, the second contains everything else that’s necessary for the initial level of interactivity on the page.
Include the first JavaScript file with a <script> tag at the bottom of the page, just inside the closing </body> tag.
Create a second <script> tag that calls the function to load the second JavaScript file and contains any additional initialization code.
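The pattern Nicholas describes can be sketched like this; the file names and the loader function are illustrative, not taken from his post:

```javascript
// --- loader.js: the only code included with a blocking <script> tag ---
function loadScript(url, callback) {
  var done = false;
  var script = document.createElement('script');
  script.type = 'text/javascript';
  script.src = url;
  function finish() {
    if (done) return; // guard against double-firing in browsers with both events
    done = true;
    callback();
  }
  // Standards-based browsers fire load; older IE fires readystatechange.
  script.onload = finish;
  script.onreadystatechange = function () {
    if (script.readyState === 'loaded' || script.readyState === 'complete') {
      script.onreadystatechange = null;
      finish();
    }
  };
  document.getElementsByTagName('head')[0].appendChild(script);
}

// --- the second <script> tag then kicks off the non-blocking load ---
// loadScript('the-rest.js', function () {
//   /* initialization code that depends on the-rest.js goes here */
// });
```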
The doctor is in, and this time the specialist is Mark Boas, who walks us through HTML5 Audio in various browsers and how to get audio working on the various implementations in the wild today.
This early in the game especially, not all implementations are equal. For one thing, there is codec support:
But there are also the various levels of API support (and even the fact that Opera currently supports Audio() but not the audio tag, for example).
Mark worked on the jPlayer jQuery plugin, which led him down this path, and in conclusion it comes down to:
Check for HTML 5 audio support and if not present fall back on Flash.
Check the level of HTML 5 audio support and adapt your code accordingly for each browser.
Check what file types are supported and link to appropriate formats of the files.
You can go to an audio checker in various browsers to see them poked and prodded.
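The check-and-fall-back flow above can be sketched as follows. The helper only assumes an object with a canPlayType() method (a real audio element in the browser), so the MIME types and the Flash fallback hook are our own illustrative choices:

```javascript
// Walk a list of candidate formats and return the first one the
// browser thinks it can play; null means "fall back on Flash".
function pickSupportedFormat(audio, candidates) {
  if (!audio || typeof audio.canPlayType !== 'function') {
    return null; // no HTML5 audio support at all
  }
  for (var i = 0; i < candidates.length; i++) {
    // canPlayType returns "", "maybe" or "probably"; anything
    // non-empty means the MIME type stands a chance.
    if (audio.canPlayType(candidates[i].mime) !== '') {
      return candidates[i];
    }
  }
  return null;
}

// In the browser you would feed it a real element:
// var pick = pickSupportedFormat(document.createElement('audio'), [
//   { mime: 'audio/ogg; codecs="vorbis"', url: 'song.ogg' },
//   { mime: 'audio/mpeg',                 url: 'song.mp3' }
// ]);
// if (!pick) { /* load the Flash player instead */ }
```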
Matt Thompson has created a fun little jQuery plugin called Chroma-Hash that "dynamically visualizes secure text-field values using ambient color bars":
Password entry can be frustrating, especially with long or difficult passwords. On a webpage, secure fields obscure your input with •'s, so others can't read it. Unfortunately, neither can you—you can't tell if you got your password right until you click "Log In".
Chroma-Hash displays a series of colored bars at the end of field inputs so you can instantly see if your password is right. Chroma-Hash takes an MD5 hash of your input and uses that to compute the colors in the visualization. The MD5 hash is non-reversible, so no one could know what your password is just from the colors. Your password will display the same sequence each time, so you can learn to expect "blue, red, pink", for instance; if you instead see "green, purple, yellow", you'll know you typed it wrong.
A quick reminder that $300 off The Ajax Experience conference expires this Friday, July 31. The Ajax Experience is September 14 - 16 in Boston, MA. www.AjaxExperience.com
As developers, it can be hard to get your voice heard in a company. Whilst our products depend on developers building them the right way, other people seem to call the shots about where they are going.
This becomes disastrous when a company tries to reach developers with their product. Conventional marketing and PR stunts normally fail to get us excited. To work around this issue, clever companies allow developers to move into the role of developer evangelist.
A developer evangelist is a spokesperson, mediator and translator between a company and both its technical staff and outside developers.
This is my job at the moment, and I was asked by people I trained if there is a handbook about the skills needed to do this job, so I wrote one.
An interesting piece by Neil Fraser shows that using JSON-P with generated script nodes can be quite a memory leak. Normally you'd add information returned from an API in JSON-P with a generated script node:
// Create a script node pointing at the JSON-P endpoint (url is the
// JSON-P URL, with its callback parameter), then add it to the head,
// upon which it will load and execute.
var script = document.createElement('script');
script.src = url;
var head = document.getElementsByTagName('head')[0];
head.appendChild(script);
The issue there is that you clog up the head of the document with lots of script nodes, which is why most libraries (like the YUI get() utility) will add an ID to the script element and remove the node after successfully retrieving the JSON data:
The issue with this is that browsers do remove the node but fail to do a clean garbage collection of the JavaScript at the same time. This means that to cleanly remove the script and its content, you also need to delete the properties of the script by hand:
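A hedged sketch of that manual cleanup; the exact set of properties worth clearing depends on your JSON-P setup, and this only shows the shape of the fix:

```javascript
// Remove a finished JSON-P script node AND clear its properties by
// hand, so the browser can actually garbage-collect the loaded code.
function purgeScript(script) {
  if (script.parentNode) {
    script.parentNode.removeChild(script);
  }
  for (var prop in script) {
    if (script.hasOwnProperty(prop)) {
      // some host objects refuse delete; nulling is the fallback
      try { delete script[prop]; } catch (e) { script[prop] = null; }
    }
  }
}
```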
Weston Ruter has created a very cool library that enables CSS gradients on non-WebKit browsers (at least, a subset). Incredibly cool:
CSS Gradients via Canvas provides a subset of WebKit's CSS Gradients proposal for browsers that implement the HTML5 canvas element.
To use, just include css-gradients-via-canvas.js (12KB) anywhere on the page (see examples below). Unlike WebKit, this implementation does not currently allow gradients to be used for border images, list bullets, or generated content. The script employs document.querySelectorAll()—it has no external dependencies if this function is implemented; otherwise, it looks for the presence of jQuery, Prototype, or Sizzle to provide selector-querying functionality.
The implementation works in Firefox 2/3+ and Opera 9.64 (at least). Safari and Chrome have native support for CSS Gradients since they use WebKit, as already mentioned.
This implementation does not work in Internet Explorer since IE does not support Canvas, although IE8 does support the data: URI scheme, which is a prerequisite (see support detection method). When/if Gears's Canvas API fully implements the HTML5 canvas specification, this implementation should be tweakable to work in IE8. In the meantime, rudimentary gradients may be achieved in IE by means of its non-standard gradient filter.
CSS Gradients via Canvas works by parsing all stylesheets upon page load (DOMContentLoaded), searching for all instances of CSS gradients being used as background images. The source code for the external stylesheets is loaded via XMLHttpRequest—ensure that they are cached by serving them with a far-future Expires header to avoid extra HTTP traffic.
The CSS selector associated with the gradient background image property is used to query all elements on the page; for each of the selected elements, a canvas is created of the same size as the element's dimensions, and the specified gradients are drawn onto that canvas. Thereafter, the gradient image is retrieved via canvas.toDataURL() and this data is supplied as the background-image for the element.
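The stylesheet-parsing step can be sketched with a deliberately simplified parser that only recognizes the from()/to() shorthand of the WebKit gradient syntax; the real library covers far more of the proposal:

```javascript
// Pull the endpoints and color stops out of a WebKit-style linear
// gradient declaration found in a stylesheet. Returns null if the
// declaration is not in the simple from()/to() form.
function parseLinearGradient(decl) {
  var m = decl.match(
    /-webkit-gradient\(\s*linear\s*,([^,]+),([^,]+),\s*from\(([^)]+)\)\s*,\s*to\(([^)]+)\)\s*\)/
  );
  if (!m) return null;
  return {
    start: m[1].trim(),
    end: m[2].trim(),
    fromColor: m[3].trim(),
    toColor: m[4].trim()
  };
}

// In the browser, the parsed stops are then drawn onto a canvas sized
// to the element and exported as a data: URI, roughly:
//   var g = ctx.createLinearGradient(0, 0, 0, canvas.height);
//   g.addColorStop(0, parsed.fromColor);
//   g.addColorStop(1, parsed.toColor);
//   ctx.fillStyle = g; ctx.fillRect(0, 0, canvas.width, canvas.height);
//   el.style.backgroundImage = 'url(' + canvas.toDataURL() + ')';
```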
An aside: I only just noticed the Gears Canvas API. It doesn't quite do what you think... I always wanted to implement Canvas in Gears. It is also strange that Gears is so under the radar at Google these days. One blog post per year? I guess all of the work is going into Chrome / WebKit itself.
Over the past few weeks we've finalized over 40 key sessions across 7 tracks for The Ajax Experience conference, including Frameworks, User Experience, Standards and Cross-Browser Issues, High Performance and Scalability, Security, Architecture, JavaScript, and Cutting-Edge Ajax. The agenda-at-a-glance is ready for your review now. There's something for everyone! Check it out.
The Ajax Experience is September 14-16 in Boston. $300 early bird savings are good through this Friday, July 31.
They build on the great intro from John and tweak it to use simple PHP on the backend to do things such as making sure that your favourite library has been loaded into each Worker, so you can use it in your script.
// Receive data FROM the client with postMessage()
onmessage = function (event) {
  if (event.data === 'load') {
    postMessage('-----BEGIN TRANSMISSION-----');
    postMessage({'server_objects': this});
    $.ajax.get({
      url: 'xhr_test_content.txt',
      callback: function (response) {
        var text = response.text;
        postMessage('AJAX GET RETURNED: ' + text);
      }
    });
    postMessage('-----END TRANSMISSION-------');
  }
};
The Worker implementations are doing well these days. For a while Safari didn't have importScripts, but that changed before Safari 4. Chrome had some weird side effects recently too, and Malte Ubl posted on some strangeness that we ran into with our Bespin Worker Facade.
For some reason the statement above is actually true in Chrome 2, even though, as stated above, support for the Worker API has not been implemented.
I then tried to instantiate the Worker object. All this does is throw an exception with the message "Worker is not enabled". This looks like an unfinished implementation that was only partially removed, or something in that direction.
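Given that a typeof check can lie, as in this Chrome 2 case, a more defensive detection also has to survive the constructor throwing. A sketch (not from Malte's post):

```javascript
// Returns true only if Worker both exists AND can actually be
// instantiated; Chrome 2 passed the first test but threw on the second.
function workersUsable(scriptUrl) {
  if (typeof Worker === 'undefined') return false;
  try {
    var w = new Worker(scriptUrl); // may throw "Worker is not enabled"
    if (w.terminate) w.terminate(); // don't leave a stray worker running
    return true;
  } catch (e) {
    return false;
  }
}
```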
The way @font-face works is that whatever font attributes you specify for a @font-face rule, they don't determine how the font looks but rather when it's going to get used. For example, if you have the following two rules:
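The two rules themselves aren't reproduced here; judging from the description that follows, they would look something like this (the local font sources are our assumption):

```css
@font-face {
  font-family: 'newfont';
  src: local('Arial');
  font-weight: 200;
}
@font-face {
  font-family: 'newfont';
  src: local('Calibri');
  font-weight: 300;
}
```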
Then if you use the “newfont” font-family with weight 200, it's going to use Arial, but if you use it with weight 300, it's going to use Calibri. So we can take advantage of that, and since it uses @font-face we don't even have to worry about whether the user's computer has the fonts or not.
We posted on TypeKit recently, and we have another playa Kernest in the "fix friggin type on the Web" game.
And for a final little nugget of font goodness, from @schill:
Typekit looks to include jQuery, loads CSS with base64-encoded data:font/otf URLs for @font-face. "Safer" than a plain open .TTF, I suppose.
The Jetpack project is still a young 'un from Mozilla Labs (disclaimer: I work for Labs!) but they are moving swiftly indeed, and each new release has a wicked cool new API that lets you do something you couldn't easily do before.
Play the local file back with jetpack.audio.playFile() or upload it and just use the audio tag itself.
A great showcase of this is the voice memo jetpack "which lets you annotate any webpage you are looking at with your voice." Live streaming is even coming soon. Here comes video conferencing the Open Web way?
Jetpack is a great way to do Greasemonkey-like work. To make it even easier, you need a way to define when the jetpack kicks in, etc., and this is exactly what the Page Mods API gives you.
Christophe Eblé has kindly written a guest post on Swell JS and his drag and drop manager that works with your desktop. Here he tells us more:
At Swell we were about to create a Drag & Drop Manager just like those in other JavaScript libraries such as jQuery, YUI, MooTools, and Scriptaculous, but we were not really satisfied with this decision.
What motivated us to adopt another strategy is that we didn't want to provide yet another simulated solution but instead a drag & drop library that would use the real power of the web browser.
We've faced a lot of problems and we are still struggling with API differences. Here are the pros and cons of using system drag & drop over simulated solutions:
Pros
Accuracy and performance: mouse-move tricks to position an element under the pointer and detect target collision are things of the past; system DD is wicked fast!
System DD can drag anything you like (simple images or complex DOM nodes) and can interact with your browser window, the chrome (search field, address bar...), tabs (if the browser allows it; FF 3.5 does it right), and even your OS.
System DD, through the dataTransfer object, can carry powerful meta information that is not necessarily the dragged object itself; this can be arbitrary text (JSON data, for example), URLs, and, in the latest browsers, complex data types (see this).
System DD has true DOM events.
Cons
Browser differences in this area are a real pain; I couldn't list all the oddities here :)
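The dataTransfer point from the pros list can be sketched like so; the pack/unpack helpers are ours, not part of Swell, and just show metadata riding along with a native drag instead of the dragged node itself:

```javascript
// Serialize arbitrary metadata for the drag, and recover it on drop.
function packDragData(meta) {
  return JSON.stringify(meta);
}
function unpackDragData(text) {
  try { return JSON.parse(text); } catch (e) { return null; } // tolerate junk drops
}

// In the browser, wired to the native DD events:
// source.addEventListener('dragstart', function (e) {
//   e.dataTransfer.setData('text/plain', packDragData({ id: 42, type: 'feed' }));
// }, false);
// target.addEventListener('drop', function (e) {
//   var meta = unpackDragData(e.dataTransfer.getData('text/plain'));
// }, false);
```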
The drag & drop implementation in Swell is still at an early stage, and not officially released but here's some details on what we're working on.
Provide a single way to create native drag & drop objects in every possible browser
Provide setDragImage method on browsers that don't support it yet
Provide a DD Manager that acts as a container for all drag and drop events on which you can place your handlers, cancel events, or quickly batch all the DD objects on the page. (useful when you deal with dozens of DD Objects)
Provide a way to associate complex metadata with each DD object
Provide a way to easily create visual feedback for your DD Objects
There's a Trac available for the project with a roadmap and release dates, a blog, and even an SVN repo that you can check out. Be careful: as I said above, the library is still in heavy development and is missing docs! We are looking for any kind of help and just hope to receive massive feedback ;).
See some examples in action:
In the video, we are providing a simple yet powerful application to import an RSS feed into your webpage. The classical way is to type in the feed URL and then be redirected to it, which commonly takes 3 to 5 steps. With this implementation you just have to drop the RSS icon onto an appropriate target, and that's all, folks!
We use YQL and JSONP to transform RSS into JSON and of course a Swell Element to dynamically attach the behavior to the webpage:
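A sketch of the YQL-plus-JSONP step; the query shape and callback name are illustrative, and the actual wiring ships with the Swell examples:

```javascript
// Build a YQL URL that converts an RSS feed to JSON and wraps the
// response in a JSONP callback.
function yqlJsonpUrl(feedUrl, callbackName) {
  var query = 'select * from rss where url="' + feedUrl + '"';
  return 'http://query.yahooapis.com/v1/public/yql' +
         '?q=' + encodeURIComponent(query) +
         '&format=json' +
         '&callback=' + encodeURIComponent(callbackName);
}

// A generated script node then performs the cross-domain fetch:
// var script = document.createElement('script');
// script.src = yqlJsonpUrl('http://example.com/feed.rss', 'handleFeed');
// document.getElementsByTagName('head')[0].appendChild(script);
```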
He starts out by explaining the problem at hand: making a dynamic language such as JavaScript fast is hard.
How do you get type info in dynamic type land?
Our goal in TraceMonkey is to compile type-specialized code. To do that, TraceMonkey needs to know the types of variables. But JavaScript doesn’t have type declarations, and we also said that it’s practically impossible for a JS engine to figure out the types ahead of time. So if we want to just compile everything ahead of time, we’re stuck.
So let’s turn the problem around. If we let the program run for a bit in an interpreter, the engine can directly observe the types of values. Then, the engine can use those types to compile fast type-specialized code. Finally, the engine can start running the type-specialized code, and it will run much faster.
There are a few key details about this idea. First, when the program runs, even if there are many if statements and other branches, the program always goes only one way. So the engine doesn’t get to observe types for a whole method — the engine observes types through the paths, which we call traces, that the program actually takes. Thus, while standard compilers compile methods, TraceMonkey compiles traces. One side benefit of trace-at-a-time compilation is that function calls that happen on a trace are inlined, making traced function calls very fast.
Second, compiling type-specialized code takes time. If a piece of code is going to run only one or a few times, which is common with web code, it can easily take more time to compile and run the code than it would take to simply run the code in an interpreter. So it only pays to compile hot code (code that is executed many times). In TraceMonkey, we arrange this by tracing only loops. TraceMonkey initially runs everything in the interpreter, and starts recording traces through a loop once it gets hot (runs more than a few times).
Tracing only hot loops has an important consequence: code that runs only a few times won’t speed up in TraceMonkey. Note that this usually doesn’t matter in practice, because code that runs only a few times usually runs too fast to be noticeable. Another consequence is that paths through a loop that are not taken at all never need to be compiled, saving compile time.
Finally, above we said that TraceMonkey figures out the types of values by observing execution, but as we all know, past performance does not guarantee future results: the types might be different the next time the code is run, or the 500th next time. And if we try to run code that was compiled for numbers when the values are actually strings, very bad things will happen. So TraceMonkey must insert type checks into the compiled code. If a check doesn’t pass, TraceMonkey must leave the current trace and compile a new trace for the new types. This means that code with many branches or type changes tends to run a little slower in TraceMonkey, because it takes time to compile the extra traces and jump from one to another.
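The effect of those type checks can be felt from ordinary script code: a loop whose values keep one type stays on a single compiled trace, while mixed types force trace exits and recompilation. The performance notes in the comments are TraceMonkey-specific; the code itself runs anywhere:

```javascript
// Sum the elements of an array with a plain hot loop.
function sumNumbers(arr) {
  var total = 0;
  for (var i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
}

var stable = [1, 2, 3, 4];      // all numbers: one type-specialized trace
var mixed  = [1, '2', 3, '4'];  // number/string flips: extra traces, slower

sumNumbers(stable); // numeric addition throughout
sumNumbers(mixed);  // + silently switches to string concatenation
```

Note the mixed case doesn't just run slower; JavaScript's + operator changes meaning, so the result is a concatenated string rather than a sum.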
Then it gets practical, with examples of tracing in action, and even discussing what items are not traced yet:
Recursion. TraceMonkey doesn’t see repetition that occurs through recursion as a loop, so it doesn’t try to trace it. Refactoring to use explicit for or while loops will generally give better performance.
Getting or setting a DOM property. (DOM method calls are fine.) Avoiding these constructs is generally impossible, but refactoring the code to move DOM property access out of hot loops and performance-critical segments should help.
Before you optimize for that... there is already work under way to trace here too. Finally, we end up with a comparison of various JIT approaches, for example how Nitro and V8 do their thang.
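The recursion point above (refactor to explicit loops) lends itself to a quick illustration; the function names are ours:

```javascript
// Recursive version: TraceMonkey (at the time) does not recognize this
// repetition as a loop, so it stays in the interpreter.
function sumToRecursive(n) {
  if (n === 0) return 0;
  return n + sumToRecursive(n - 1);
}

// Loop version: a hot for-loop is exactly what the tracer looks for,
// so this form can be compiled to type-specialized code.
function sumToLoop(n) {
  var total = 0;
  for (var i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}
```

Both compute the same result; only the second shape is eligible for tracing.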