
Re: [udig-devel] Pattern for updating features based on attribute change


See below..

On Aug 13, 2010, at 11:58 PM, Jody Garnett wrote:

See my prior post, but I saw the notes on the memory data store and was only using it for mock-up purposes. I'm not sure where these tutorials are.. somewhere in: http://udig.refractions.net/confluence/display/DEV/1+Getting+Started. But in any case, what you're describing below sounds exactly like my basic use case, but I'd like to go a lot further than that.

The source code for the training materials is available in svn, and if you are determined you can go through it with a debugger and see what is happening. They are not the best example code, as they are often used to communicate a concept rather than to be efficient; indeed, making them efficient is the subject of some of the exercises.

I'm doing this all as part of my open source project, so I won't be going to any training courses soon. But as an open source developer I also appreciate the need to find a way to pay for all of this! Actually, all I need is some ideas for the basic approach, and I am more than happy to dig into the source for that.. in fact I'd rather look at source any day than talk about it. :)


For agent-based modeling support we do need to provide the ability to move items around on the landscape. But I'd also like to be able to change any kind of map object in real time. For example, you might want to model the spread of an epidemic from region to region, in which case you'd want to treat shapefile polygons representing those regions as agents and update them dynamically. So really, I can't see any reason, beyond performance perhaps, to treat the modeled map components any differently from any other components. That would be a much more satisfying approach from the Model point of view, and of course you'd also then have the ability for users to treat the View just as they would any other geo feature. While it would also be a real advantage -- and inherent in a decent integration -- for users to be able to treat the model objects as regular editable objects, that isn't a key need. And in fact, the overhead associated with doing all of the edits through a framework (command stack, etc.) is likely to be too high.
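The region-as-agent idea above can be sketched in a few lines. This is a toy illustration in Python (all names are made up for the example; the real implementation would be Java against the uDig/GeoTools data model): each polygon's attributes update every tick, while its geometry never changes during the run.

```python
# Toy sketch: shapefile polygons treated as agents in an epidemic model.
# Geometry is assumed fixed; only the "infected" attribute changes per tick.

class Region:
    def __init__(self, rid, neighbors, infected=0):
        self.rid = rid
        self.neighbors = neighbors   # ids of adjacent polygons
        self.infected = infected

def step(regions):
    # Infection spreads one hop per tick from any infected region.
    # Compute the update from a snapshot, then apply it.
    newly = {r.rid for r in regions.values()
             if any(regions[n].infected for n in r.neighbors)}
    for rid in newly:
        regions[rid].infected = regions[rid].infected or 1

regions = {
    "A": Region("A", ["B"], infected=1),
    "B": Region("B", ["A", "C"]),
    "C": Region("C", ["B"]),
}
step(regions)   # B catches it from A
step(regions)   # C catches it from B
```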

One thing to do is to allow your data to adapt to a FeatureSource as needed (allowing for integration with the GIS tools), but still keep your core object model with tools and fast renderers as needed.
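Very roughly, the adapt-to-a-FeatureSource idea looks like this. A Python sketch for brevity (the real thing would be Java implementing the GeoTools FeatureSource API; all class and field names here are invented for illustration):

```python
# Sketch: the live agent model stays the single source of truth, and a
# thin read-only adapter derives feature-like views on demand, so there
# is no second in-memory copy to keep in sync.

class Agent:
    """Core ABM object: fast, mutable, no GIS baggage."""
    def __init__(self, agent_id, x, y, state):
        self.agent_id = agent_id
        self.x = x
        self.y = y
        self.state = state

class AgentFeatureAdapter:
    """Read-only 'feature source' view over the live agent model."""
    def __init__(self, agents):
        self._agents = agents  # shared reference, not a copy

    def get_features(self):
        # Each feature is built on demand from the live agent, so the
        # GIS side always sees current state.
        for a in self._agents:
            yield {"id": a.agent_id,
                   "geometry": ("POINT", a.x, a.y),
                   "state": a.state}

agents = [Agent(1, 0.0, 0.0, "S"), Agent(2, 1.0, 2.0, "I")]
view = AgentFeatureAdapter(agents)
agents[0].state = "I"            # model ticks; adapter needs no update
features = list(view.get_features())
```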

Yeah, that's a compelling approach. It doesn't seem to make sense to maintain the data in memory in two different places, so I've been giving this some serious thought. The basic usage would be a scenario in which data and geometries are read-only from vector (shapefile) files, raster files, etc., and these have one-to-one relationships to collections of agents that are being executed and managed by the ABM engine. So I could..

1. Write all of my agent information to the existing shapefile FeatureSource. That's what I'm experimenting with now, as it seemed to be low-hanging fruit. Of course, I would have to create new feature attributes for agent attributes that do not exist in the shapefiles. One advantage is that I could persist back into the shapefiles, but that's not consistent with any of my other persistence mechanisms, so it's not really a big win.
2. Bring all of the shapefile data into my agents and provide a feature source for that. This is what I think you might be suggesting. The advantage here is a very consistent Model design and presumably a major performance win. Also, I think the memory model could be much simpler and more efficient, because again I don't have to worry about transaction issues. All of my models need to be in memory all of the time, even at 10^6+ agents. One disadvantage is that there is no reason at all to keep all of the polygons, etc. in my in-memory model, and I certainly don't want to persist them in my persistence mechanisms (for example, saving the model at the nth iteration). I don't need to, because (generally speaking!) the borders between regions don't change during the course of a model run.
3. Take a hybrid approach in which the geo-sourced data is kept in the original resource and treated as read-only, the ABM-sourced data is kept in the agent model, and I have some kind of FeatureSource adapter that picks between the two.

I'm liking 3, and it might be close to what you're suggesting. Does it make any sense to try for that route? It's similar to a glass-pane approach, but pushing everything to the model.
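Option 3 could be sketched like this, again in Python for brevity with invented names (the real adapter would implement the GeoTools FeatureSource contract in Java): geometry stays in the read-only geo resource, dynamic attributes live in the agent model, and a thin adapter merges the two per feature id.

```python
# Sketch of the hybrid approach: read-only geometry from the geo resource,
# live attributes from the ABM engine, merged per feature id.

readonly_geoms = {            # stand-in for the shapefile-backed source
    "region-1": "POLYGON-1",
    "region-2": "POLYGON-2",
}

class RegionAgent:
    def __init__(self, region_id, infected=0):
        self.region_id = region_id
        self.infected = infected

class HybridFeatureSource:
    def __init__(self, geoms, agents_by_id):
        self._geoms = geoms        # never copied into the agent model
        self._agents = agents_by_id

    def get_feature(self, fid):
        agent = self._agents[fid]
        return {"id": fid,
                "geometry": self._geoms[fid],   # static, from the resource
                "infected": agent.infected}     # always the live ABM value

agents = {rid: RegionAgent(rid) for rid in readonly_geoms}
source = HybridFeatureSource(readonly_geoms, agents)
agents["region-2"].infected = 42                # the model runs a tick
f = source.get_feature("region-2")
```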


An example of this from the uDig code base is a WMSLayer; if it can tell that the layer comes from GeoServer, it will allow the layer to adapt to a WFS FeatureSource - allowing you to use the feature selection tool to select content from a WMS layer.

Yes, that seems like the inverse problem but one in which the same techniques could be used, right? Here you have a layer in which you're concerned about latency, and you don't want that latency to interfere with the rest of the rendering process?

You ask about 3D; the viewport model is separate from the renderer; and 3D can be supported as a rendering engine without disrupting the rest of the application. So far a couple of teams have done it; the last one got into some license trouble and had to withdraw their work.

The GEF3D team uses LWJGL, which they've successfully employed by simply including a reference to the LWJGL update site so that the user (transparently, for the most part) is actually obtaining the software directly. 


I would of course be happy to do the work, but:
- I would need a customer and contract, as it is more effort than I intend to put in as a volunteer
- we would need to preserve the independence of the application from the rendering system (as some installations rely on Citrix or other screen-remoting technology that does not play well with 3D).

Right, I was really just curious if anyone had played with it.. Again, I'm in the same boat of trying to find consulting gigs to accomplish the stuff that I feel needs to be accomplished. As those are found, they will definitely be directed toward the people who can make cool things happen. :) But yes, of course it would need to be modular.

Real 3D is the next great step, of course. I was at a talk recently where someone pointed out that so many of the ESRI and other demos were actually nothing more than eye candy (I'm guilty of that as well, btw, but hey, it works!), but that (except, I guess, for geomodeling) all of the backing models are really more or less 2D models underneath.



Anyway, the bottom line is that it's always possible that for some things a superimposed-images / glass-pane approach is needed, but I'd like to try for a real integration first. And since I'll be dealing with the model data internally, I don't care as much as most people would about coherency and consistency in the geo model itself.. I'll simply need to make sure that any write-back persistence to the geo stores (shapefiles, etc.) is managed well.
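That write-back moment could be as simple as a locked snapshot: the model runs without coordination, but persistence only ever sees a consistent copy taken at a quiescent point. A minimal sketch in Python (names invented; the real store would be a shapefile-backed DataStore on the GeoTools side):

```python
# Sketch: free-running model, but write-back persistence always works
# from a consistent snapshot rather than live, mutating state.
import copy
import threading

class Model:
    def __init__(self):
        self._lock = threading.Lock()
        self.agents = {"a1": {"state": "S"}}

    def set_state(self, aid, state):
        with self._lock:
            self.agents[aid]["state"] = state

    def snapshot(self):
        # The only moment coherency matters: take a deep copy under the
        # lock, then the writer can persist it without blocking the model.
        with self._lock:
            return copy.deepcopy(self.agents)

model = Model()
model.set_state("a1", "I")
store = {}                        # stand-in for the geo store
store.update(model.snapshot())    # the write-back moment
model.set_state("a1", "R")        # later ticks don't touch the snapshot
```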

The uDig system is, however, *great* for integrating your own renderer onto your own data model. Indeed, if I were doing it, that would be my choice for showing your agents at work. The combination of Spring/Hibernate remoting and a custom renderer overlaying "real objects" on top of a normal GIS map (features and rasters) is a common approach used by several teams around the world.

My thinking is that I definitely don't want to create my own geo renderer -- that's actually a great deal of the value that uDig / GeoTools are providing to this effort. It's more that I want to find the best way of integrating as a feature provider into the existing tool sets. At the same time, I'm working on lightweight UI tools that wrap the renderers and meld in nicely with the other ABM tools from a User Experience point of view; for that see e.g. http://eclipse.org/amp/images/AMPNewScreenshot.png


So just because your data is not "spatial", do not feel that it requires "real integration" - the ability to mix it up is why uDig is a great integration platform.

probably too much information..

No worries; better to have some background so I am not giving you poor advice.

Well, thanks for your time, I do really appreciate it.

-Miles

