Well, we certainly live in interesting times, at least as far as JavaScript runtimes go...
Just recently, WebKit got SquirrelFish. I was admittedly surprised that its main innovation seems to be switching from an AST interpreter to a bytecode model (Rhino has used bytecode from the very start: not only Java bytecode in its compiled mode, but also an internal bytecode format in its interpreted mode).
Then, Mozilla brought out TraceMonkey, which adds execution path tracing, inlining, and type specialization.
Finally, yesterday Google unveiled V8, which, while it had to recreate some amenities we already enjoy on the JVM (like precise GC), also brings some intriguing new dynamic optimizations, like retroactively creating classes for sufficiently similar objects and then optimizing for those classes, which is a nice thing to do considering that the source language (JS) is classless.
Whew.
So, I'm thinking here about which of these techniques are adequate for JVM-based language runtimes.
TraceMonkey's type specialization seems like something that'd make quite a lot of sense. It trades memory (multiple versions of code) for speed. Basically, if you had a simplistic
function add(x,y) { return x + y; }
that's invoked as add(1, 2), then as add(1.1, 3.14), then as add("foo", "bar"), you'd end up with three methods on the Java level: roughly add(int,int), add(double,double), and add(String,String).
(Of course, for this to work efficiently, you'd also need a bunch of type inference, i.e. knowing that you can narrow a numeric value to an integer etc.) Combined with HotSpot's ability to inline through invokedynamic, we could probably get the same optimal type narrowed, inlined code that TraceMonkey can.
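As a concrete sketch of what those three Java-level specializations might look like, here's a hand-written approximation (the class and method names are mine; a real runtime would generate these from the observed call sites):

```java
// Hypothetical specializations a runtime might emit for the JS add()
// above, one per observed argument-type combination.
public class AddSpecializations {
    static int add(int x, int y) { return x + y; }            // from add(1, 2)
    static double add(double x, double y) { return x + y; }   // from add(1.1, 3.14)
    static String add(String x, String y) { return x + y; }   // from add("foo", "bar")

    public static void main(String[] args) {
        System.out.println(add(1, 2));          // 3
        System.out.println(add(1.1, 3.14));
        System.out.println(add("foo", "bar"));  // foobar
    }
}
```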
V8's class retrofitting is also quite interesting (making similar objects instances of a class that's constructed from their shared attributes, sometime after the objects were created as generic ones), especially tied into the above type specialization. On the other hand, I wonder whether type specialization + invokedynamic wouldn't actually give the same benefits that such class retrofitting would. It seems to me that type specialization is a more broadly applicable, more generic, and thus more powerful concept, allowing finer-grained (method-level) specializations/optimizations than doing it at the level of whole classes.
So, at first sight, it appears to me that TraceMonkey's type specialization is the one feature from these three new JS engines that would make sense in JVM dynamic language runtimes.
On Wednesday 03 September 2008 14:23, Attila Szegedi wrote:
> Well, we certainly live in interesting times, at least as far as JavaScript runtimes go...
> Just recently, WebKit got SquirrelFish. ...
> Then, Mozilla brings out TraceMonkey, ...
> Finally, yesterday Google unveils V8, which, ..., also brings some intriguing new dynamic optimizations, like retroactively creating classes for sufficiently similar objects and then optimizing for these classes, which is a nice thing to do considering the source language (JS) is classless.
I read the "press release" yesterday (probably the most informative comic I've ever seen, not that I'm a big comic book... I mean graphic novel reader), but I can't see right off how inferring a class structure from a lot of instances with similar attribute structure helps a VM optimize execution.
Has this issue been treated before? If so, could anyone supply a few references? It seems intriguing, but to me so far, only vacuously so.
> Well, we certainly live in interesting times, at least as far as JavaScript runtimes go...
To paraphrase a loan commercial: "When VMs compete, language implementors win."
> TraceMonkey's type specialization seems like something that'd make quite a lot of sense. Well, it's trading off memory (multiple versions of code), for speed. Basically, if you'd have a simplistic
> function add(x,y) { return x + y; }
> that's invoked as add(1, 2) then as add(1.1, 3.14), then as add("foo", "bar"), you'd end up with three methods on the Java level:
That's something you can build in JVM bytecodes using invokedynamic. The call site should use not a generic signature but a signature which reflects exactly the types known statically to the caller, at the time it was byte-compiled.
So "x+1" would issue a call to add(Object,int), but "x+y" might be the generic add(Object,Object).
When the call site is linked (in the invokedynamic "bootstrap method"), a customized method can be found or created, perhaps by adapting a more general method.
The language runtime can also delay customization, choosing to collect a runtime type profile, and then later relink the call site after a warmup period, to a method (or decision tree of methods) which reflects the actual profile.
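A minimal sketch of that linking strategy using the java.lang.invoke API (all class and method names here are mine; a real runtime would emit actual invokedynamic instructions, whereas this sketch drives the call site through its dynamicInvoker for illustration):

```java
import java.lang.invoke.*;

public class SpecializingSite {
    static final MethodHandles.Lookup LOOKUP = MethodHandles.lookup();
    static final MethodType GENERIC =
        MethodType.methodType(Object.class, Object.class, Object.class);

    // Generic fallback: dispatches on the boxed runtime types.
    static Object addGeneric(Object x, Object y) {
        if (x instanceof Integer && y instanceof Integer)
            return (Integer) x + (Integer) y;
        return String.valueOf(x) + y;
    }

    // Specialized variant the runtime links in once ints dominate the profile.
    static Object addInts(Integer x, Integer y) { return x + y; }

    static boolean bothInts(Object x, Object y) {
        return x instanceof Integer && y instanceof Integer;
    }

    public static void main(String[] args) throws Throwable {
        // The call site starts out linked to the generic method...
        MutableCallSite site = new MutableCallSite(
            LOOKUP.findStatic(SpecializingSite.class, "addGeneric", GENERIC));
        MethodHandle invoker = site.dynamicInvoker();
        System.out.println(invoker.invoke((Object) 1, (Object) 2));   // generic path

        // ...and after a (simulated) warmup period, the runtime relinks it
        // to a guarded specialization that falls back to the generic method.
        MethodHandle specialized = MethodHandles.guardWithTest(
            LOOKUP.findStatic(SpecializingSite.class, "bothInts",
                MethodType.methodType(boolean.class, Object.class, Object.class)),
            LOOKUP.findStatic(SpecializingSite.class, "addInts",
                MethodType.methodType(Object.class, Integer.class, Integer.class))
                .asType(GENERIC),
            LOOKUP.findStatic(SpecializingSite.class, "addGeneric", GENERIC));
        site.setTarget(specialized);

        System.out.println(invoker.invoke((Object) 3, (Object) 4));   // specialized path
        System.out.println(invoker.invoke("foo", "bar"));             // fallback path
    }
}
```

The guardWithTest handle is effectively a one-entry decision tree; a runtime collecting a richer type profile could chain several guards before the generic fallback.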
> Combined with HotSpot's ability to inline through invokedynamic, we could probably get the same optimal type narrowed, inlined code that TraceMonkey can.
Yes, that's the easier way to get customization, via inlining. We probably need an @Inline annotation (use this Power only for Good).
> It seems to me that type specialization is a more broadly applicable, more generic, and thus more powerful concept that allows for finer-grained (method level) specializations/optimizations than doing it on a level of whole classes.
The V8 technique sounds like a successor to Self's internal classing mechanism; it sounds more retroactive. A key advantage of such things is the removal of indirections and search. If you want the "foo" slot of an object in a prototype-based language, it's better if the actual data structures have fewer degrees of freedom and fewer indirections; ideally you use some sort of method caching to link quickly to a "foo" method which performs a single indirection to a fixed offset. If the data structure has many degrees of freedom (because there is no normalization of representations), then you have to treat the object as a dictionary and search for the foo more often. You might be able to look up and cache a getter method for obj.foo, but it would be even better to have a fixed class for obj, which you test once, and use optimized getters and setters (of one or two instructions) for all known slots in the fixed class.
Randall R Schulz wrote:
> I read the "press release" yesterday (probably the most informative comic I've ever seen, not that I'm a big comic book... I mean graphic novel reader) but I don't think I can see right off how inferring a class structure from a lot of instances with similar attribute structure helps a VM optimize execution.
> Has this issue been treated before? If so, could anyone supply a few references? It seems intriguing, but to me so far, only vacuously so.
My understanding of this optimization in V8 is that it's laser-targeted at probably the biggest bottleneck of JavaScript: property lookup. Every JS impl seems to have its own way of tackling the problem, but the bottom line is that if you want a decent JS impl, you're going to need a way to optimize name-based property lookup beyond the dumb hash every impl uses on day one.
This probably won't apply to statically typed JVM languages, at least ones that have static sets of fields at compile time, but for languages like Ruby and Python it would be an enormously useful technique. In Ruby's case, it would only help instance variable/class variable/constant access, which even JRuby implements as a mostly dumb (but fast) hash. In Python's case, the situation is similar to JavaScript, where everything is in a slot. So I'd expect Python to get a bigger boost than Ruby (we've never seen ivar/cvar/const access in Ruby be a major bottleneck, and local variables are statically determined at parse time... eval being a notable and not terribly common exception).
Attila Szegedi wrote:
> So, at first sight, it appears to me that TraceMonkey's type specialization is the one feature from these three new JS engines that would make sense in JVM dynamic language runtimes.
I've toyed with these techniques in JRuby before and always came back to the same point: yes, they could make things faster by varying degrees, but in no case was it worth the pain of managing all that transient code. That is, until anonymous classloading came along.
IMHO the biggest things we need to make easier on the JVM:

- make it easier to generate bytecode... ASM and friends mostly solve this, but I think we need some frameworks, DSLs or somesuch to aid it a bit more
- make it absolutely trivial to load new bytecode into the system in such a way that it can enlist in optimizations
- make it TOTALLY TRANSPARENT that something like PermGen even exists. PermGen as a separate memory space is an abomination and needs to be eliminated.
The best future for new languages on JVM will come from full freedom to generate bytecode on a whim and throw it away just as quickly.
John Rose wrote:
> The V8 technique sounds like a successor to Self's internal classing mechanism; it sounds more retroactive. A key advantage of such things is removal of indirections and search. If you want the "foo" slot of an object in a prototype based language, it's better if the actual data structures have fewer degrees of freedom and less indirections; ideally you use some sort of method caching to link quickly to a "foo" method which performs a single indirection to a fixed offset. If the data structure has many degrees of freedom (because there is no normalization of reps.) then you have to treat the object as a dictionary and search for the foo more often. You might be able to lookup and cache a getter method for obj.foo, but it would be even better to have a fixed class for obj, which you test once, and use optimized getters and setters (of one or two instructions) for all known slots in the fixed class.
I'm fresh from reading the Rhino source last night. Rhino currently does probably "level zero" optimization for slot access, by caching the most recently accessed hash bucket. As you'd expect, it's only a minimal gain, and only really useful if you're repeatedly hitting a single bucket in sequence. In practice I'd be surprised if this is very common. I tried upping the code to have three MRU slots and got only a modest increase on nsieve. Turning it off resulted in only a minor decrease. So obviously there's more to be done there.
I'm excited about the new VMs for the same reason you are... because I'm learning from the way they've solved dynlang problems and I'm trying to see how to apply these techniques on the JVM. What saddens me a bit, however, is that pre-Java 7 JVMs suffer from a bytecode "totalitarian regime" whereby it's hard to get bytecode into the system and harder (or impossible) to get it out when you're done with it. I've got a bushel of optimizations I'd love to do in JRuby but simply can't because of the cost of all that bytecode churn.
So to happy myself up, I'm working on JRuby + invokedynamic today. This stuff *has* to get into a shipping JVM.
> My understanding of this optimization in V8 is that it's laser-targeted at probably the biggest bottleneck of Javascript: property lookup.

Excuse my curiosity, but my basic understanding of this optimization is like this: in JS, you don't have classes, so every object is potentially completely different from all the others. Therefore, you'd always need to do the hash lookup when something says "foo.bar", as anything foo might be is always different. Creating these "hidden classes" thus allows you to classify sets of similar "foo" things, so that you can cache the meaning of "bar" (i.e., its offset from the object pointer) for these classes. I think this is called polymorphic inline caching?

> but for languages like Ruby and Python, it would be an enormously useful technique.

Now, when you have a class-based language like Python or Ruby, you already have these sets of similar things where the same property names resolve to the same locations (the classes). I.e., if your "foo.bar" statement gets hit with foo being a specific instance, you can check whether your cache contains the resolved location of bar for its class. Ruby has the "eigenclass" exception, but that's probably a rare case.
So, I wonder how this optimization will help Ruby/Python/any class based, dynamic language? Am I missing something?
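To make that point concrete, here's a toy inline cache for a class-based language, keyed on the receiver's class, which plays the role V8's hidden class plays in JS (all names are hypothetical; reflection stands in for the resolved slot location a real runtime would cache):

```java
import java.lang.reflect.Field;

// Toy monomorphic inline cache for a "foo.bar" read site: remembers the
// last receiver class seen and the resolved location of "bar" for it.
final class PropertyReadSite {
    private Class<?> cachedClass;
    private Field cachedField;
    private final String name;

    PropertyReadSite(String name) { this.name = name; }

    Object read(Object receiver) throws Exception {
        Class<?> k = receiver.getClass();
        if (k != cachedClass) {              // cache miss: slow lookup, then memoize
            cachedField = k.getField(name);
            cachedClass = k;
        }
        return cachedField.get(receiver);    // cache hit: direct read, no search
    }
}

public class InlineCacheDemo {
    public static class Point { public int bar; Point(int b) { bar = b; } }

    public static void main(String[] args) throws Exception {
        PropertyReadSite site = new PropertyReadSite("bar");
        System.out.println(site.read(new Point(42)));  // miss, then resolve
        System.out.println(site.read(new Point(7)));   // hit: same class, cached field
    }
}
```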
> Attila Szegedi wrote:
>> So, at first sight, it appears to me that TraceMonkey's type specialization is the one feature from these three new JS engines that would make sense in JVM dynamic language runtimes.

> I've toyed with these techniques in JRuby before and always came back to the same point: yes, they could make things faster by varying degrees, but in no case was it worth the pain of managing all that transient code. That is, until anonymous classloading came along.

> IMHO the biggest things we need to make easier on JVM:

> - make it easier to generate bytecode... ASM and friends mostly solve this, but I think we need some frameworks, DSLs or somesuch to aid it a bit more
> - make it absolutely trivial to load new bytecode into the system in such a way that it can enlist in optimizations
> - make it TOTALLY TRANSPARENT that something like PermGen even exists. PermGen as a separate memory space is an abomination and needs to be eliminated.

Spot on. Code is data. If it were easy to load new snippets of code into the JVM and have them GCed when no longer used, I'd be much less uneasy about creating a bunch of type-specialized methods from the same source code; heck, I'd probably even have them soft-referenced if possible (as they're just an optimized representation of something else and can be regenerated if needed) so that they can also be reclaimed for memory.
>> My understanding of this optimization in V8 is that it's laser-targeted at probably the biggest bottleneck of Javascript: property lookup.

> Excuse my curiosity, but my basic understanding of this optimization is like this: in JS, you don't have classes, so every object is potentially completely different from all others. Therefore, you'd always need to make the hash lookup when something says "foo.bar", as everything foo might be is always different. Thus creating these "hidden classes" allows you to classify sets of similar "foo" things, so that you can cache the meaning of "bar" (i.e. offset from object pointer) for these classes, I think this is called polymorphic inline caching?
As I understand it, each object is an instance of a "hidden class" and contains a reference to that class. If an object's structure is mutated by adding a property, then a new class, which is a subclass of the original class, is created and replaces the "hidden class" reference in the object. The effect of this is that in a normal program you will have very many objects which share the same class.

The code generated for property access first tests whether the hidden class is the class it expects. If it is, the code falls through to instructions which implement the property access directly. If it is not, a call is made to the runtime system, which rewrites the generated code to use the new hidden class; this new code is then executed.
This seems fast and simple. It would not be hard to write code which continuously forced code rewriting but I think it's highly unlikely you would commonly encounter such code in real world systems.
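A toy model of those transitions (a deliberately simplified sketch in which all names are mine; the code-patching part described above is omitted, and a plain map lookup stands in for the patched fast path):

```java
import java.util.*;

// A toy model of V8-style "hidden classes" (a.k.a. shapes).
final class Shape {
    static final Shape EMPTY = new Shape(new LinkedHashMap<>());
    final Map<String, Integer> offsets;                      // property name -> slot index
    final Map<String, Shape> transitions = new HashMap<>();  // shared transition cache

    Shape(Map<String, Integer> offsets) { this.offsets = offsets; }

    // Adding a property moves an object to a child shape; objects built the
    // same way (same properties, same order) end up sharing one shape.
    Shape addProperty(String name) {
        return transitions.computeIfAbsent(name, n -> {
            Map<String, Integer> m = new LinkedHashMap<>(offsets);
            m.put(n, m.size());
            return new Shape(m);
        });
    }
}

final class JsObject {
    Shape shape = Shape.EMPTY;
    Object[] slots = new Object[4];

    void put(String name, Object value) {
        Integer off = shape.offsets.get(name);
        if (off == null) {                       // new property: shape transition
            shape = shape.addProperty(name);
            off = shape.offsets.get(name);
            if (off >= slots.length) slots = Arrays.copyOf(slots, slots.length * 2);
        }
        slots[off] = value;
    }

    // A real VM would cache the (shape, offset) pair at each access site and
    // patch the generated code; here we just do the dictionary lookup.
    Object get(String name) {
        Integer off = shape.offsets.get(name);
        return off == null ? null : slots[off];
    }
}

public class HiddenClassDemo {
    public static void main(String[] args) {
        JsObject a = new JsObject(); a.put("re", 5); a.put("im", 6);
        JsObject b = new JsObject(); b.put("re", 7); b.put("im", 9);
        System.out.println(a.shape == b.shape);  // same construction order -> shared hidden class
        System.out.println(a.get("im"));
    }
}
```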
The fact that Javascript does not support threads considerably simplifies the situation.
> The fact that Javascript does not support threads considerably simplifies the situation.
Well, JS as such has no threading or other concurrency primitives, this much is true. But you can have a JS environment where a program accesses objects shared by multiple threads, e.g. with Rhino on the JVM... You're right that a JS runtime in a browser would most likely be single-threaded (or at least have a thread per document, where it still wouldn't share any state with another script instance operating in a different document).
> On Sep 3, 2008, at 2:23 PM, Attila Szegedi wrote:
>> Well, we certainly live in interesting times, at least as far as JavaScript runtimes go...

> To paraphrase a loan commercial: "When VMs compete, language implementors win."

>> TraceMonkey's type specialization seems like something that'd make quite a lot of sense. Well, it's trading off memory (multiple versions of code), for speed. Basically, if you'd have a simplistic

>> function add(x,y) { return x + y; }

>> that's invoked as add(1, 2) then as add(1.1, 3.14), then as add("foo", "bar"), you'd end up with three methods on the Java level:

> That's something you can build in JVM bytecodes using invokedynamic. The call site should use not a generic signature but a signature which reflects exactly the types known statically to the caller, at the time it was byte-compiled.

> So "x+1" would issue a call to add(Object,int), but "x+y" might be the generic add(Object,Object).

> When the call site is linked (in the invokedynamic "bootstrap method"), a customized method can be found or created, perhaps by adapting a more general method.

> The language runtime can also delay customization, choosing to collect a runtime type profile, and then later relink the call site after a warmup period, to a method (or decision tree of methods) which reflects the actual profile.
Indeed, that's exactly what I had in mind.
>> Combined with HotSpot's ability to inline through invokedynamic, we could probably get the same optimal type narrowed, inlined code that TraceMonkey can.
> Yes, that's the easier way to get customization, via inlining. We probably need an @Inline annotation (use this Power only for Good).
Why would we need an explicit inline annotation? I was under the impression that the use of invokedynamic would open the opportunity for HotSpot to inline the code through certain types of MethodHandles.
>> It seems to me that type specialization is a more broadly applicable, more generic, and thus more powerful concept that allows for finer-grained (method level) specializations/optimizations than doing it on a level of whole classes.
> The V8 technique sounds like a successor to Self's internal classing mechanism; it sounds more retroactive. A key advantage of such things is removal of indirections and search. If you want the "foo" slot of an object in a prototype based language, it's better if the actual data structures have fewer degrees of freedom and less indirections; ideally you use some sort of method caching to link quickly to a "foo" method which performs a single indirection to a fixed offset. If the data structure has many degrees of freedom (because there is no normalization of reps.) then you have to treat the object as a dictionary and search for the foo more often. You might be able to lookup and cache a getter method for obj.foo, but it would be even better to have a fixed class for obj, which you test once, and use optimized getters and setters (of one or two instructions) for all known slots in the fixed class.
Yeah, my wet dream of combining this with the type specialization of methods above is that you could have this JS code:
function abs(c) { return Math.sqrt(c.re*c.re + c.im*c.im); }
And it would end up being specialized as this (in Java):
class retrofitting:

public class Complex { int re, im; } // int, because all observed arguments so far were ints

then type specialization:

public double abs(Complex c) { // returns double b/c of Math.sqrt
    ... uses GETFIELD on c for re and im ...
}
and then HotSpot further JITs it to efficient native code.
The only difference is that the class name as generated would probably be $$autogenerated$$0fc45e9a or something of similar beauty, and not "Complex" :-)
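Spelled out as compilable Java, the end result of both steps might look like this (a hand-written approximation; as noted above, a real runtime would generate a mangled class name, and the constructor is added here just to make the sketch runnable):

```java
// Hypothetical output of class retrofitting + type specialization for
// the JS abs() above: fields narrowed to int from the observed values,
// return type widened to double because of Math.sqrt.
public class Complex {
    int re, im;

    Complex(int re, int im) { this.re = re; this.im = im; }

    static double abs(Complex c) {
        // direct field access (GETFIELD) instead of hashed property lookup
        return Math.sqrt((double) c.re * c.re + (double) c.im * c.im);
    }

    public static void main(String[] args) {
        System.out.println(abs(new Complex(3, 4)));  // 5.0
    }
}
```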
>> My understanding of this optimization in V8 is that it's laser-targeted at probably the biggest bottleneck of Javascript: property lookup.

> Excuse my curiosity, but my basic understanding of this optimization is like this: in JS, you don't have classes, so every object is potentially completely different from all others. Therefore, you'd always need to make the hash lookup when something says "foo.bar", as everything foo might be is always different. Thus creating these "hidden classes" allows you to classify sets of similar "foo" things, so that you can cache the meaning of "bar" (i.e. offset from object pointer) for these classes, I think this is called polymorphic inline caching?

>> but for languages like Ruby and Python, it would be an enormously useful technique.

> Now when you have a class based language like Python or Ruby, you already have these sets of similar things where the same property names resolve to the same things (the classes). I.e. if your "foo.bar" statements gets hit with foo being a specific instance, you can see if you cache contains the resolved location of bar for this class. Ruby has this "eigenclass" exception, but that's probably a rare case.

> So, I wonder how this optimization will help Ruby/Python/any class based, dynamic language? Am I missing something?
Ruby is class-based, but not only. You can extend a single live object at runtime, which turns that object into a singleton, in effect creating a new hidden subclass. So if you have some kind of framework dynamically creating such singletons at runtime, it could create a lot of hidden subclasses. IMHO. I'm not an MRI or JRuby surgeon; this is only from an application developer's point of view. :)
Martin Probst wrote:
> Now when you have a class based language like Python or Ruby, you already have these sets of similar things where the same property names resolve to the same things (the classes). I.e. if your "foo.bar" statements gets hit with foo being a specific instance, you can see if you cache contains the resolved location of bar for this class. Ruby has this "eigenclass" exception, but that's probably a rare case.

> So, I wonder how this optimization will help Ruby/Python/any class based, dynamic language? Am I missing something?
Python uses "slots" heavily, and many optimizations in Python impls seek to optimize access to those slots. Method invocation is not just "dispatch to this method"; it's "look up whatever's in this slot and invoke it like a method". I believe local variables are also technically slot-based, which makes Python much more similar to JS in that regard. So the same optimizations done for JS in V8 could directly apply to Python.
In Ruby's case, most really critical things are not slots. Methods can only be defined as methods, and they have their own lookup/dispatch process which can be optimized as a result. Local variables are determined at parse time, and only evals -- which in Ruby 1.8 share a lexically-scoped "binding scope" -- can actually cause new variables to come into existence (and then only in the binding scope, so only other evals can see them).

Where we have more dynamic behavior is in constant and instance variable lookup. Constants are both lexically and hierarchically scoped to the containing class object, which means that thousands of constants can be visible to a given piece of code, since finding them requires walking both lexical containers and superclasses. An optimization to speed up lookup of those constants, avoiding O(n) searching, could be adapted from the V8 techniques. Instance variables, on the other hand, are an open hash on each object instance, and new variables can be added at any time by any class in the object's class hierarchy, or via a few methods accessible from outside the object. So programs that make heavy use of instance variables end up spending a lot of time doing hash lookups, and a V8-style optimization to specialize ivar tables as they stabilize could improve this code as well.
Attila Szegedi wrote:
> On Sep 3, 2008, at 11:56 PM, John Rose wrote:
>> Yes, that's the easier way to get customization, via inlining. We probably need an @Inline annotation (use this Power only for Good).

> Why would we need an explicit inline annotation? I was under impression that use of invokedynamic would open the opportunity for HotSpot to inline the code with certain types of MethodHandles.
The @Inline annotation would be a hint to HotSpot that a given method should at all costs be inlined into its callers (if I remember discussions with John correctly). Potentially this could also be an explicit way to break the bytecode-size limit for inlined methods. Basically, I see it as a way to hint to HotSpot that it shouldn't use a given method as an inlining root, and should always try to force it into its callers. I have many such cases in JRuby where I'd love to be able to poke HotSpot a bit.
>>> It seems to me that type specialization is a more broadly applicable, more generic, and thus more powerful concept that allows for finer-grained (method level) specializations/optimizations than doing it on a level of whole classes.

>> The V8 technique sounds like a successor to Self's internal classing mechanism; it sounds more retroactive. A key advantage of such things is removal of indirections and search. If you want the "foo" slot of an object in a prototype based language, it's better if the actual data structures have fewer degrees of freedom and less indirections; ideally you use some sort of method caching to link quickly to a "foo" method which performs a single indirection to a fixed offset. If the data structure has many degrees of freedom (because there is no normalization of reps.) then you have to treat the object as a dictionary and search for the foo more often. You might be able to lookup and cache a getter method for obj.foo, but it would be even better to have a fixed class for obj, which you test once, and use optimized getters and setters (of one or two instructions) for all known slots in the fixed class.

> Yeah, my wet dream of combining this with the type specialization of methods above, is that you could have this JS code:
> ...
> and then HotSpot further JITs it to efficient native code.
Yeah, me too. And the primary thing that has kept me from trying to implement this in JRuby is the risk of running on a JVM version that holds onto loaded bytecode with a kung-fu death grip, eventually blowing PermGen. Bytecode Freedom! Bytecode Freedom!
> The only difference is that the class name as generated would probably be $$autogenerated$$0fc45e9a or something of similar beauty, and not "Complex" :-)
Perhaps it's time for an analog to JSR-42 that provides a mapping from mangled class and method names to actual names. I've got a "hybrid" stack trace generator in JRuby right now that mines StackTraceElement[] for known interpreter calls and replaces them with information from interpreter frames:
~/NetBeansProjects/jruby ➔ jruby -rjava -J-Djruby.backtrace.style=RUBY_HYBRID -d -e "def foo; raise; end; foo"
java.lang.Thread:1426:in `getStackTrace': unhandled exception
 from org.jruby.RubyException:141:in `setBacktraceFrames'
 from org.jruby.exceptions.RaiseException:146:in `setException'
 from org.jruby.exceptions.RaiseException:69:in `<init>'
 from org.jruby.RubyKernel:756:in `raise'
 from org.jruby.java.addons.KernelJavaAddons:26:in `rbRaise'
 from :1:in `foo'
 from org.jruby.internal.runtime.methods.DynamicMethod:225:in `call'
 from org.jruby.internal.runtime.methods.DynamicMethod:202:in `call'
 from org.jruby.runtime.CallSite$InlineCachingCallSite:592:in `cacheAndCall'
 from org.jruby.runtime.CallSite$InlineCachingCallSite:169:in `call'
 from -e:1:in `foo'
 from ruby.__dash_e__Invokermethod__0$RUBY$fooFixed0:-1:in `call'
 from org.jruby.internal.runtime.methods.CompiledMethod:216:in `call'
 from org.jruby.runtime.CallSite$InlineCachingCallSite:592:in `cacheAndCall'
 from org.jruby.runtime.CallSite$InlineCachingCallSite:169:in `call'
 from -e:1: `<toplevel>'
 from -e:-1: `<toplevel>'
 from ruby.__dash_e__:-1:in `load'
 from org.jruby.Ruby:547:in `runScript'
 from org.jruby.Ruby:460:in `runNormally'
 from org.jruby.Ruby:333:in `runFromMain'
 from org.jruby.Main:214:in `run'
 from org.jruby.Main:100:in `run'
 from org.jruby.Main:84:in `main'
Note the `foo' and `<toplevel>' lines here interspersed with the normal Java lines.
On Sep 4, 2008, at 7:02 PM, Charles Oliver Nutter wrote:
>> The only difference is that the class name as generated would probably be $$autogenerated$$0fc45e9a or something of similar beauty, and not "Complex" :-)

> Perhaps it's time for an analog to JSR-42 that provides a mapping from mangled class and method names to actual names.
Well, in my original example, I would've expected the runtime to create a class out of two object literals, {re:5,im:6} and {re:7,im:9}. I would be highly surprised if you could have a compiler deduce a non-mangled name out of these :-) (the best it could come up with would be "ReIm", IMHO; or "re_int$im_int").
> I've got a "hybrid" stack trace generator in JRuby right now that mines StackTraceElement[] for known interpreter calls and replaces them with information from interpreter frames:
Yeah, we have something similar in Rhino too, except we use it for its stackless interpreted mode, where it uses a linked-list internal stack within a single interpreter invocation -- we replace Interpreter.interpret() stack trace elements with the JS stack produced by their invocation...
Attila Szegedi wrote:
> Well, in my original example, I would've expected the runtime to create a class out of two object literals, {re:5,im:6} and {re:7,im:9}. I would be highly surprised if you could have a compiler deduce a non-mangled name out of these :-) (best it could come up with would be "ReIm", IMHO; or "re_int$im_int").
I was thinking more about providing a mapping file in a given class that says "this method here actually means foo, not foo$BLAH/__wahoooyippee". Not necessarily a standard mangling convention, but a way to specify how you've mangled.
On Wednesday 03 September 2008 15:26, Charles Oliver Nutter wrote:
> Randall R Schulz wrote:
>> ... I don't think I can see right off how inferring a class structure from a lot of instances with similar attribute structure helps a VM optimize execution.
>> ...

> My understanding of this optimization in V8 is that it's laser-targeted at probably the biggest bottleneck of Javascript: property lookup. Every JS impl seems to have their own way of tackling the problem, but the bottom line is that if you want a decent JS impl you're going to need a way to optimize name-based property lookup beyond the dumb hash every impl uses on day 1.
Thanks for the information. And to John, as well, for the link:
On Thursday 04 September 2008 04:05, John Wilson wrote:
> ...
> As well as the comic there's this description
> http://code.google.com/apis/v8/design.html

> This probably won't apply to static-typed JVM languages, at least ones that have static sets of fields at compile time, but for languages like Ruby and Python, it would be an enormously useful technique. ...
Given the heavy use of Maps in Groovy and Grails, this sort of technique would seem to benefit them, too, if it or something like it is applicable or can be adapted.
Randall R Schulz wrote:
> Given the heavy use of Maps in Groovy and Grails, this sort of technique would seem to benefit them, too, if it or something like it is applicable or can be adapted.
It definitely could, but only in places where those maps aren't being directly exposed as maps. The minute a map-like structure escapes a predetermined scope, you lose the ability to track it well.
On Sep 5, 2008, at 1:37 PM, Charles Oliver Nutter wrote:
> It definitely could, but only in places where those maps aren't being directly exposed as maps. The minute a map-like structure escapes a predetermined scope, you lose the ability to track it well.
Yes, but here's a hack awaiting an inspired hacker: Josh designed a structure-modified count into the JDK collections, for fail-fast iterator behavior. Maybe it could take on a new purpose: enabling structure-aware access caching?