Abstract Final - Rendezvous with Keyur Shah


20050930 Friday September 30, 2005

CONned by Windows (Category: General)

Link: Naming a file

Do not use the following reserved device names for the name of a file: CON, PRN, AUX, NUL, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, and LPT9. Also avoid these names followed by an extension, for example, NUL.tx7.

I was writing an auto code generator which produced many Java files, one of which was Con.java. My generator generated every other file, but when it came to Con.java it threw a FileNotFoundException. This was very weird, because I was trying to create a new file, so not finding the file should have been a good thing! To confirm it wasn't just a figment of my imagination, I checked that there was no such file. I rechecked, and still there was no such file. I cursed and ran the code generator again; this, my never-failing trump card, betrayed me as well.

Of course, these days when everything else fails, Google doesn't. And sure enough it brought me to this page, which made it clear that CON is one of many reserved words, so I could not create a file named so.

While I am OK with Windows not allowing me to create a file whose name is a reserved word, not allowing files with a reserved word followed by an extension is all too limiting. Worse still, the error messages when I try to create a file named con.whatever vary from "access denied" to "file already exists" to "are you kidding me?" (no, not the last one, but it came close).
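Since the reserved names are blocked regardless of extension, a code generator can guard against them before attempting the write. Below is a minimal sketch of such a guard; the class and method names are my own invention, not from any library:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

/** Checks whether a proposed file name collides with a Windows reserved device name. */
public class ReservedNames {
    private static final Set<String> RESERVED = new HashSet<String>(Arrays.asList(
        "CON", "PRN", "AUX", "NUL",
        "COM1","COM2","COM3","COM4","COM5","COM6","COM7","COM8","COM9",
        "LPT1","LPT2","LPT3","LPT4","LPT5","LPT6","LPT7","LPT8","LPT9"));

    /** True if the name (with or without an extension) is reserved on Windows. */
    public static boolean isReserved(String fileName) {
        int dot = fileName.indexOf('.');
        String base = (dot == -1) ? fileName : fileName.substring(0, dot);
        return RESERVED.contains(base.toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(isReserved("Con.java"));     // true
        System.out.println(isReserved("Console.java")); // false
    }
}
```

A generator could then rename the offending file (say, to Con_.java) before creating it, instead of discovering the problem via a misleading FileNotFoundException.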

( Sep 30 2005, 07:41:28 PM PDT ) Permalink Comments [0]

20050926 Monday September 26, 2005

Extend the ArcGIS ADF with POJOs - Part III (Category: ESRI)

The first cardinal rule when working with pooled ArcGIS Server objects in a webapp is that you must release the server context after every web server request. This is because you are sharing the server object with many other users and you need to put it back into the pool so that other users can access it. The second cardinal rule is that you cannot reference any server object after you have released the server context. If you do, it would be like executing a JDBC Statement after closing the Connection. You always need a live connection or a context in a client-server environment to work with objects that are remotely hosted.

So the question then is: how do I preserve the current state of a server object after releasing the context? The answer lies in the saveObject() and loadObject() methods of IServerContext. You can serialize objects to their string representations with saveObject() and deserialize the strings back into live objects with loadObject(). So calling saveObject() before releasing the context and loadObject() after reconnecting sets you up well to preserve the current client state while working with pooled objects.

As always, the ADF makes such chores easy for you. Rather than you having to scratch your head about when to call the loads and saves, where to call them, and how to keep track of them, the ADF provides a simple interface, WebLifecycle, wherein you can do all of these tasks, and only for those objects needed for the particular task. The ADF calls the relevant methods of the WebLifecycle just prior to releasing the context as well as immediately after a reconnect.

In the CountFeatures class that we have been working on through Parts I and II, we may want to preserve the SpatialFilter object. Generally you persist only those objects which carry enough client state to justify the saves and loads. Obviously the SpatialFilter doesn't have much state in it, but the idea here is to showcase how easily it's done so that you can apply the same technique to more pertinent objects such as graphic elements and symbols.

The WebLifecycle defines 3 methods - activate(), passivate() and destroy() - which are called by the ADF at different stages of the request / session lifecycle. The passivate() method is called after a request has been serviced, giving you the opportunity to serialize server objects to strings. The activate() method is called before a request is serviced, where you can deserialize the strings so that they are available as live objects when you perform the business tasks. Finally, destroy() is called when the session is being terminated, so that you can perform the necessary cleanup.

Below is the code which extends the CountFeatures class to participate in this lifecycle:

public class CountFeatures implements WebContextInitialize, WebContextObserver, WebLifecycle {
    ...
    //serialized SpatialFilter - valid after passivate() and before activate()
    String serializedSpatialFilter;
    //the spatial filter object - only valid between activate() and passivate()
    SpatialFilter filter;

    public void init(WebContext webContext) {
        ...
        //create SpatialFilter and set spatial relationship
        filter = new SpatialFilter(agsctx.createServerObject(SpatialFilter.getClsid()));
        filter.setSpatialRel(esriSpatialRelEnum.esriSpatialRelContains);
    }

    public void passivate() { //serialize to strings
        serializedSpatialFilter = agsctx.saveObject(filter);
    }

    public void activate() { //deserialize to objects
        filter = new SpatialFilter(agsctx.loadObject(serializedSpatialFilter));
    }

    public void destroy() { //cleanup
        filter = null;
        serializedSpatialFilter = null;
    }

    public String doCount() throws Exception {
        ...
        //spatial filter is already created - only need to set geometry to the current extent
        filter.setGeometryByRef(agsmap.getFocusMapExtent());
        ...
        return null;
    }
}

The complete source code can be downloaded from here.

It's important to note that the loads still create new instances of the objects on the server. So there is no performance benefit to saves and loads versus creating new instances. The benefit is that you don't have to track the state of the object yourself; the server does that work for you.
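The saveObject() / loadObject() calls are specific to the ADF, but the underlying pattern - flatten an object to a string before giving up a shared resource, rebuild it on the next request - can be sketched with nothing but standard Java serialization. The class and method names below are hypothetical stand-ins, not the ESRI API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

/** Illustrates the save-to-string / load-from-string round trip; standard Java
    serialization stands in for IServerContext.saveObject() / loadObject(). */
public class StateSnapshot {

    /** Serialize any Serializable object to a printable (hex-encoded) string. */
    public static String save(Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(obj);
        out.flush();
        StringBuilder hex = new StringBuilder();
        for (byte b : bytes.toByteArray()) {
            hex.append(String.format("%02x", b & 0xff));
        }
        return hex.toString();
    }

    /** Rebuild the object from its string form; note this is always a NEW instance. */
    public static Object load(String s) throws IOException, ClassNotFoundException {
        byte[] bytes = new byte[s.length() / 2];
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
        }
        return new ObjectInputStream(new ByteArrayInputStream(bytes)).readObject();
    }

    public static void main(String[] args) throws Exception {
        String saved = save("spatial-filter-state"); // what passivate() would do
        Object restored = load(saved);               // what activate() would do
        System.out.println(restored);                // prints spatial-filter-state
    }
}
```

Here passivate() would call save() and activate() would call load(); just as with the ADF, load() always yields a fresh instance rather than resurrecting the original object.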

That does it for this trilogy (ok so that was blatant abuse of that word - but hey, who's to say... It's my world around here :)

( Sep 26 2005, 12:11:39 AM PDT ) Permalink Comments [2]

20050910 Saturday September 10, 2005

JDK 5 concurrency API: group / batch thread pool (Category: Java)

First up, let me say that the new concurrency API in JDK 5 is indeed a boon for the Java community, especially for developers (including yours truly) who before this indulged in threads and concurrent programming only sparingly - not because we didn't know how to do it, but because getting it right was quite an ordeal. The concurrency API should surely make concurrent programming, so far the bastion of a select few, more "mainstream".

So here's my scenario: I need a group / batch of tasks to execute concurrently and, additionally, I need to wait until all of them have finished executing before moving forward.

Leveraging the new concurrency API, I can use the Executors factory to create a new thread pool (a thread pool being an instance of ExecutorService). To this pool I can submit the tasks of my batch, which will be executed according to the policies of the pool. Then, if I want to wait on all of them to complete, I need to call shutdown() followed by awaitTermination(). With this my code will indeed block until all tasks have been executed, but the problem is that the thread pool no longer accepts any new tasks. So for my next batch of tasks I need to create a new thread pool all over again - which is obviously unnecessary and expensive.

All said and done, I need an awaitExecution() method which, like awaitTermination(), blocks until all tasks have completed but, unlike the shutdown() + awaitTermination() combo, does not reject new tasks.

Below is a simple wrapper with the awaitExecution() method included. You can of course use any of the extension patterns - decorator, adapter, etc. - for a more refined solution.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class GroupThreadPool {
  protected ExecutorService pool;
  protected List<Future<?>> futures = new ArrayList<Future<?>>();

  public GroupThreadPool(int poolSize) {
    pool = Executors.newFixedThreadPool(poolSize);
  }

  public void submit(Runnable command) {
    futures.add(pool.submit(command));
  }

  public void awaitExecution() {
    try {
      for (Future<?> future : futures) {
        future.get(); //blocks until this task completes
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); //preserve the interrupt status
    } catch (ExecutionException e) {
      throw new RuntimeException(e.getCause()); //surface task failures instead of swallowing them
    } finally {
      futures.clear();
    }
  }
}

The user creates this GroupThreadPool just once, calls submit() to submit the various tasks of a batch, and then calls awaitExecution() to block until all tasks have executed. The same GroupThreadPool object can then be reused for subsequent batches.

The implementation adds the submitted tasks to a list of Futures. To block until all tasks have completed, it calls get() on each Future, which is itself a blocking operation. So awaitExecution() returns only after all tasks have been executed, and before returning it clears the list of Futures to accept the next batch of tasks.
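For comparison, JDK 5's ExecutorService already ships with related batch semantics: invokeAll() blocks until every task in the supplied collection completes, without shutting the pool down - though it takes Callables rather than Runnables. A minimal sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class InvokeAllDemo {

    /** Runs `batches` batches of `tasksPerBatch` trivial tasks on ONE shared pool,
        blocking after each batch; returns the total number of tasks executed. */
    public static int runBatches(int tasksPerBatch, int batches) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        final AtomicInteger done = new AtomicInteger();
        try {
            for (int b = 0; b < batches; b++) {
                List<Callable<Void>> batch = new ArrayList<Callable<Void>>();
                for (int i = 0; i < tasksPerBatch; i++) {
                    batch.add(new Callable<Void>() {
                        public Void call() {
                            done.incrementAndGet();
                            return null;
                        }
                    });
                }
                pool.invokeAll(batch); //blocks until the whole batch completes
            }
        } finally {
            pool.shutdown();
        }
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runBatches(10, 2)); // prints 20
    }
}
```

Whether this fits depends on whether your tasks are naturally Callables; the GroupThreadPool wrapper keeps the Runnable-based submit-then-wait style.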

I would love suggestions / feedback on this implementation. Is there a better approach? Also, is this a common use case which merits inclusion of awaitExecution() in ExecutorService itself?

( Sep 10 2005, 07:03:17 AM PDT ) Permalink

20050822 Monday August 22, 2005

Extend the ArcGIS ADF with POJOs - Part II (Category: ESRI)

In Part I we discussed how you could implement GIS functionalities in POJOs and plug them into the ADF. In this part we'll extend the POJO a little further.

In Part I, the CountFeatures object calculated and updated the feature count on a client interaction such as a button click. Suppose that we now need for this object to recalculate the count automatically whenever the current extent of the map changes or the map refreshes due to some other action.

The ADF provides a very simple way to do this. Objects can register themselves as observers of the WebContext, and whenever the context is refreshed (by virtue of the user calling the refresh() method on the context), all observers are notified of the event and each can act on it individually. This way we have loosely coupled objects reacting together to the context refresh.

With this background let's now extend our CountFeatures class to implement this behavior.

public class CountFeatures implements WebContextInitialize, WebContextObserver {
  
  public void init(WebContext context) {
    ...
    context.addObserver(this);
  }

  public void update(WebContext context, Object arg) {
    doCount(); //perform the business action on update
  }
  ...
}

First up, all observers of the WebContext need to implement the WebContextObserver interface. Next, they register themselves as observers of the context by calling the addObserver() method on the context. Finally, on every context refresh, the ADF calls the update() method of the WebContextObserver interface and the object reacts by performing its business action. In this case we simply call the doCount() method, which recalculates the feature count of the updated map. This ensures that whenever the context refreshes (for example when the user zooms or pans), this object recalculates and displays the new count to the user.

As simple as that. Apart from a few modifications to the Java code, nothing else needs to change from the Part I source code. The JSP as well as the configuration files remain unchanged. You can download all the source code (including the unchanged JSP) for this part from here.
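This wiring is the classic GoF observer pattern with the WebContext as the observable. Stripped of the ESRI classes, the mechanism can be sketched in plain Java; MiniContext and ContextObserver below are hypothetical stand-ins for WebContext and WebContextObserver:

```java
import java.util.ArrayList;
import java.util.List;

/** Stand-in for the ADF's WebContextObserver contract. */
interface ContextObserver {
    void update(MiniContext context, Object arg);
}

/** Stand-in for the WebContext: holds observers and notifies them on refresh(). */
class MiniContext {
    private final List<ContextObserver> observers = new ArrayList<ContextObserver>();

    public void addObserver(ContextObserver o) { observers.add(o); }

    /** Analogous to WebContext.refresh(): notify every registered observer. */
    public void refresh(Object arg) {
        for (ContextObserver o : observers) {
            o.update(this, arg);
        }
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        MiniContext context = new MiniContext();
        final int[] recounts = {0};
        context.addObserver(new ContextObserver() {
            public void update(MiniContext ctx, Object arg) {
                recounts[0]++; //stand-in for doCount()
            }
        });
        context.refresh(null); //e.g. the user zoomed
        context.refresh(null); //e.g. the user panned
        System.out.println(recounts[0]); // prints 2
    }
}
```

The point of the pattern is the loose coupling: MiniContext knows nothing about feature counting, yet the counter stays in sync with every refresh.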

In Part III we'll extend CountFeatures further by implementing the WebLifecycle interface.

( Aug 22 2005, 12:27:51 PM PDT ) Permalink Comments [0]

20050809 Tuesday August 09, 2005

Extend the ArcGIS ADF with POJOs - Part I (Category: ESRI)

This is the first of a 3-part series where we'll discuss how to add custom GIS functionality to the ADF as POJOs (Plain Old Java Objects). To accomplish this we'll be leveraging the IOC inherent in the ADF discussed earlier. We talked about 3 very important interfaces in the IOC discussion - WebContextInitialize, WebContextObserver and WebLifecycle. Putting these 3 interfaces into practice will be central to the 3 parts respectively. In this part we'll make use of the WebContextInitialize interface.

We'll keep the functionality to be implemented quite simple: Count the number of features of a given layer in the map's current extent.

To implement this scenario our POJO will need a few basic properties and methods - a read-only count property, a read/write layerId property representing the layer whose features are to be counted, and a business method doCount() which implements the business task at hand. With that said, the skeleton of the class (we'll call it CountFeatures) looks like this:

public class CountFeatures {

  //properties
  int count;
  int layerId;
  public int getCount() { return count; }
  public int getLayerId() { return layerId; }
  public void setLayerId(int layerId) { this.layerId = layerId; }

  //business method
  public String doCount() {
    ...
    ...
    count = ...;
    return null;
  }
}

You might have noticed that the doCount() method returns a String. This is because the ADF is JSF based and when the user clicks on say a command button on a web page, it results in a call to doCount(). Based on the return value of this method the JSF framework decides which page to navigate to. Returning a null ensures that the webapp stays on the same page.

OK, so now we have the skeleton in place, but we also need access to the ArcGIS Server and the underlying ArcObjects to perform the GIS task at hand. This is where WebContextInitialize comes into the picture:

public class CountFeatures implements WebContextInitialize {
  
  //the context associated with this object
  AGSWebContext agsctx;
  public void init(WebContext context) {
    agsctx = (AGSWebContext)context;
  }
  ...
}

The ADF will call the init(WebContext) method of objects implementing WebContextInitialize immediately after the object is instantiated. This gives the object access to the WebContext. The AGSWebContext (which is the actual implementation of the WebContext that we work with) maintains references to the ArcGIS Server objects and ArcObjects (such as IMapServer, IMapDescription, etc.) as well as to other ADF objects (such as AGSWebMap). This implies that by virtue of gaining access to the AGSWebContext our custom object now has a hook into the whole of ArcObjects as well as the ADF - basically everything that you need to accomplish your GIS task at hand.

With access to everything that our class needs, the business logic can now be implemented in the doCount() method to perform the count operation and set the result to the count variable.

That's it - our Java code ends here. All that is left to do now is to register this object as a managed attribute of the WebContext so that the ADF can automatically instantiate the object on demand as well as call the init(WebContext) method immediately after instantiation. This is accomplished by adding the following lines of XML to managed_context_attributes.xml which you can find in the /WEB-INF/classes folder of your ADF webapp:

<managed-context-attribute>
  <name>countFeatures</name>
  <attribute-class>custom.CountFeatures</attribute-class>
  <description>counts features of a given layer in the current extent...</description>
</managed-context-attribute>

With this done, you can now access our custom object by name (countFeatures in this case).
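Conceptually, what the ADF does with that XML entry is reflective instantiation plus the init callback. Below is a hypothetical sketch of such a registry - this is not the actual ADF source, just an illustration of the mechanism:

```java
/** Stand-in for WebContextInitialize: the opt-in initialization callback. */
interface Initialize {
    void init(Object context);
}

/** Hypothetical sketch of what a managed-attribute registry does with the
    <attribute-class> entry from managed_context_attributes.xml. */
public class ManagedAttributes {

    /** Instantiate the configured class by name; if it opts into the callback
        interface, hand it the context immediately after construction. */
    public static Object create(String className, Object context) throws Exception {
        Object attribute = Class.forName(className).newInstance();
        if (attribute instanceof Initialize) {
            ((Initialize) attribute).init(context); //the IOC hook
        }
        return attribute;
    }

    /** Example managed attribute, analogous to custom.CountFeatures. */
    public static class Counter implements Initialize {
        Object context;
        public void init(Object context) { this.context = context; }
    }

    public static void main(String[] args) throws Exception {
        Object ctx = new Object();
        Counter c = (Counter) create(ManagedAttributes.class.getName() + "$Counter", ctx);
        System.out.println(c.context == ctx); // prints true
    }
}
```

The POJO itself stays free of any construction or lookup code; the registry decides when it is instantiated and guarantees init() runs first.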

You can download the full source here. In addition to the Java code, the ZIP file also contains a sample JSP. The JSP has a command button to trigger the business method, a dropdown to choose the layer and a text out to display the count.

In conclusion I'd like to mention that while admittedly the functionality we have implemented here is trivial, you can essentially follow the same programming model to implement your own functionality as well: POJOs which implement WebContextInitialize.

In Part II we'll extend this same object to be an observer of the context and in Part III we'll make this object participate in the ADF lifecycle.

( Aug 09 2005, 01:08:51 AM PDT ) Permalink Comments [2]

20050807 Sunday August 07, 2005

Inversion of Control in the ArcGIS Java ADF (Category: Java)

Martin Fowler in a recent blog gave a good short explanation of the inversion of control pattern... In ESRI's ArcGIS Java ADF we employ this approach in a few places.

  1. The WebContextInitialize interface declares an init(WebContext) method. The ADF calls this method, immediately after registration, on objects which implement this interface and register themselves as attributes of the WebContext. Users interested in getting access to the associated WebContext object, or in performing initialization tasks, should implement this interface.

  2. The WebLifecycle interface declares methods which will be called by the ADF at various phases of the webapp's lifecycle. Users can implement activation, passivation and destroy logic in these methods. This interface is most relevant when using pooled objects since users may want to rehydrate and dehydrate the states of the server objects when the ADF reconnects and releases its connection to the ArcGIS server on every request.

  3. The WebContextObserver interface declares an update(WebContext webContext, Object args) method. Objects implementing this interface can register themselves as observers of the WebContext by calling the addObserver(WebContextObserver) method. After users perform operations which change the state of the objects that they work with (for example zoom to a different extent, add a graphic element, etc.), they call the refresh() method on the WebContext. When this happens, the ADF iterates through all the registered observers of the context and calls their update() methods. This ensures loose coupling among the various objects, but at the same time gives these loosely coupled objects an opportunity to stay in sync with the changed state of the app. This is a classic implementation of the observer pattern with the WebContext acting as the Observable object.

With the advent of JDK 5 annotations, it might be convenient for the users if #1 could be achieved by simply annotating a field or a setter method with an @Resource like annotation. The ADF on encountering this annotation on a WebContext field or setter method could inject the same into the interested object. Further, users can do the initialization tasks in any arbitrary method annotated with the @InjectionComplete annotation and the ADF will call this method immediately after injecting the WebContext. (Both these annotations are proposed by JSR 250 - Common Annotations).

#2 could also be achieved through annotations. Much like how EJB3 proposes @PostActivate, @PrePassivate, et al, users could annotate the lifecycle methods with annotations such as @OnActivate, @OnPassivate and @OnDestroy. This would relieve them of having to implement interfaces for lifecycle callbacks, and further, they could choose to participate in only those phases of the ADF lifecycle which make business sense for their objects.
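The injection scheme sketched above for #1 can be prototyped with plain JDK 5 reflection. The annotation names below are hypothetical, modeled on the JSR 250 proposals:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.lang.reflect.Method;

/** Hypothetical annotation marking a field to receive the context (cf. @Resource). */
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
@interface InjectContext {}

/** Hypothetical annotation marking a method to run after injection (cf. JSR 250). */
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface InjectionComplete {}

public class AnnotationInjector {

    /** Inject `context` into every @InjectContext field, then invoke any
        @InjectionComplete method - the scheme sketched in the post. */
    public static void wire(Object target, Object context) throws Exception {
        for (Field f : target.getClass().getDeclaredFields()) {
            if (f.isAnnotationPresent(InjectContext.class)) {
                f.setAccessible(true);
                f.set(target, context);
            }
        }
        for (Method m : target.getClass().getDeclaredMethods()) {
            if (m.isAnnotationPresent(InjectionComplete.class)) {
                m.setAccessible(true);
                m.invoke(target);
            }
        }
    }

    /** Example consumer: no interface to implement, just annotations. */
    public static class CountFeatures {
        @InjectContext Object context;
        boolean initialized;
        @InjectionComplete void ready() { initialized = true; }
    }

    public static void main(String[] args) throws Exception {
        CountFeatures cf = new CountFeatures();
        wire(cf, new Object());
        System.out.println(cf.initialized); // prints true
    }
}
```

Note how the consumer class no longer implements any framework interface - the framework discovers its needs from the metadata, which is exactly the appeal (and, per the post's caution, the opacity) of annotation-driven IOC.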

Comments / feedback welcome.

( Aug 07 2005, 10:24:35 PM PDT ) Permalink Comments [1]

20050802 Tuesday August 02, 2005

Adding layers dynamically in the ArcGIS Java ADF (Category: ESRI)

There have been many questions about adding layers dynamically in the ADF... And of course the requirement is that the added layer will reflect not only on the map but also on the TOC, layer drop downs, etc...

When working with non-pooled objects, there's a straightforward way of doing this. Look at the source code below:

AGSWebContext agsCtx = ...; //get hold of the AGSWebContext
AGSWebMap agsMap = (AGSWebMap)agsCtx.getWebMap();

//Step 1
agsCtx.applyDescriptions();

//Step 2
MapServer mapso = new MapServer(agsCtx.getServer());
IMap map = mapso.getMap(agsMap.getFocusMapName());
ILayer layer = ...; //create the layer
map.addLayer(layer);

//Step 3
agsCtx.reloadDescriptions();

Let's discuss the 3 steps now:

  1. applyDescriptions() applies the current state of the web tier (the MapDescriptions) to the server object, so that the object graph you are about to change reflects what the user currently sees.

  2. With the server object in sync, you make the stateful change directly on the object graph - here, creating a new layer and adding it to the map.

  3. reloadDescriptions() reloads the descriptions from the changed server object, so that the map, the TOC, layer drop downs, etc. all pick up the newly added layer.

And that's about it! This sequence of steps holds true for any stateful changes that you want to make to non-pooled objects. The 3-word mantra is APPLY-CHANGE-RELOAD.

If you want to work with dynamic layers (or make any stateful changes) in the pooled context, there's indeed more work to do, because you are now sharing the server object with others and you want to return the object to the pool in the same state in which you received it. You need to get access to the server object, apply the current MapDescriptions to the object graph, make the changes to the object graph, reflect the changes in the MapDescriptions and the web controls, undo the changes to the graph, and then return the object back to the pool. You can check out the dynamic layers sample on EDN to see this use case in action.

( Aug 02 2005, 08:52:53 PM PDT ) Permalink Comments [3]

20050722 Friday July 22, 2005

ArcGIS Server 9.2 (Java): Coming soon to a GIS near you (Category: ESRI)

The ESRI User Conference is around the corner and it's time to talk about what the future holds for our users. Before I delve into the details let me say this: ArcGIS Java users, we have heard you! Here's what the future holds for you:

All this and more will be discussed at the UC in a couple of sessions:

I'm sure you are rushing to add them to your agenda! Also, here's the UC Q and A for more info...

See ya!

( Jul 22 2005, 10:57:21 AM PDT ) Permalink Comments [2]

20050703 Sunday July 03, 2005

The JSF evolution (Category: Java)

It's been more than 2 years since I started looking into JSF. JSF was in its early access avatar then, and the JSF community felt like a startup trying to find its way into the unknown. Now, 2 years on, it's very heartening to see such widespread industry support for JSF, with any number of IDE vendors supporting it, every other technology on the block showcasing how it too can be integrated with JSF, JSF being the prime topic at many a J1 discussion, et al...

We at ESRI have been showcasing our GIS components for the past 2 JavaOnes. Doreen presented them in 2004 and Steve did it this year. They were well received on both occasions. But what has been even more interesting to me is the changing developer perspectives between J1 2004 and 2005 with regard to JSF. Last year they were just intrigued by this new technology and wanted to see what it was all about. This time around folks had a lot better understanding of it (they had either read extensively about it or actually used it themselves) and they wanted to see how they could actually use it in their organizations.

True, JSF has its faults, but IMO industry-wide support should definitely tip the scales in its favor when developers choose a framework for their new projects. But even without industry support I firmly believe that JSF has more merits than chinks. It does have a steep learning curve, but once you are over it, the effort is worth its weight in gold - it makes you think "components" and not request parameters; you write business logic in POJOs without worrying about how the controller will call into them; you perform actions in simple methods on backing beans and not in an obscure action form or servlet - the list just goes on...

It's the same learning curve one had to wade through in going from procedural programming to OOP, or from just getting the job done to perusing the GoF design patterns and employing them in one's projects... It takes time, but the end result is that you are a better programmer because of it.

( Jul 03 2005, 01:19:45 PM PDT ) Permalink Comments [0]

20050702 Saturday July 02, 2005

Annotation use cases (Category: Java)

My first blog entry here, but I'll cut straight to the point.

The most talked-about feature at J1 this year was annotations. It was as if every new API / framework had to have support for annotations, or at least have something pertaining to them on its radar, to gain acceptance or even be considered a contender.

I have not yet been able to make up my mind whether this proliferation of @YeahIHaveAnAnnotationToo is a good thing or not. For the time being I am trying to come up with use cases where annotations make sense. Here are some that I have assimilated from various blogs, J1 sessions and my own brain dumps:

1. Dependency injection.

2. Aspects (boilerplate stuff handled by the container / framework) and interceptors

3. Callback methods

4. Container contracts

5. Programmatic access to metadata for classes, fields and methods.

(I'll try to add more as I get my head around it further - or if you have any more points for me.)

While this push toward annotated POJOs for everything might be a good thing, what I fear is that Java classes of the future will have less Java code and more annotations. It may become increasingly difficult for the user to deterministically gauge what behavior any given method will exhibit, because what the method does could change dramatically depending on which annotations apply when the method is actually called.

Not trying to attach a pessimistic annotation to annotations (what can I say, I love meta-anything!), but just waving a flag of caution before we dive head first into the annotated unknown... ( Jul 02 2005, 07:23:55 PM PDT ) Permalink Comments [2]