Caching Scenarios [message #43755 is a reply to message #43642] |
Wed, 09 July 2008 17:28 |
Eclipse User |
Originally posted by: ymesika.gmail.com
Running a US scenario takes some time due to the disk access needed to read
the data. It would be great if the data were cached so that running the
scenario again wouldn't require re-reading it from the file system.
Such a feature should be toggled from the preferences so that those who
don't wish to use it can simply turn caching off.
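A minimal sketch in Java of what such a preference-toggled cache could look like. All class and method names here are invented for illustration and are not STEM's actual API; the loader stands in for whatever code currently reads scenario data from disk:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

/** Hypothetical sketch: cache scenario data after the first read so that
 *  re-running a scenario skips the file system. Not STEM's real API. */
class ScenarioCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> loader; // reads the data from disk
    private boolean enabled = true;                // the preference toggle

    ScenarioCache(Function<String, String> loader) {
        this.loader = loader;
    }

    /** Mirrors the proposed preference; disabling also drops cached data. */
    void setEnabled(boolean enabled) {
        this.enabled = enabled;
        if (!enabled) {
            cache.clear();
        }
    }

    String load(String path) {
        if (!enabled) {
            return loader.apply(path);             // always hit the disk
        }
        return cache.computeIfAbsent(path, loader); // read once, reuse afterwards
    }
}
```

With caching on, the second run of the same scenario never touches the loader; with caching off, every run re-reads from disk, matching the requested opt-out behaviour.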
Re: STEM Should also Run as a web application [message #43869 is a reply to message #43810] |
Fri, 11 July 2008 13:33 |
Daniel Ford Messages: 148 Registered: July 2009 Location: New York |
Matt,
For the kinds of simulations we've been working on, mostly disease
spread in sub-global regions (e.g., North America, Asia, etc.) modelled at
administration level 2, we haven't really run out of horsepower quite yet, so
going to a parallel computing model might be a bit premature. These types
of models create graphs with on the order of 10,000 nodes and a similar
number of labels. Such models tend to run well on a single laptop. However,
for much bigger models representing finer detail (millions of nodes in the
representational graph) it might be the only way to make them work. I
expect disease models wouldn't really need quite that level of detail, but
if you were doing some kind of situational awareness application coupled
with integrated simulations for decision support then we'd need the power.
Something to think about or investigate is how we could leverage the fact
that all of STEM's modeling code is generated by EMF. One can provide their
own JET templates to the EMF code generator, so could we, for instance,
fiddle with the JET templates so that they generate code that would work
directly with something like Hadoop?
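As a toy illustration of that map/reduce framing, here is a sketch using plain Java streams standing in for Hadoop; none of this is STEM's, EMF's, or Hadoop's actual API, and the edge/node names are invented. Each edge independently emits a contribution to its target node (the "map" step), and contributions to the same node are grouped and summed (the "reduce" step):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Toy sketch of a map/reduce-style label update over a simulation graph.
 *  Plain Java streams are used in place of a real Hadoop job. */
class MapReduceSketch {
    /** A directed edge carrying some quantity (e.g. infectious contacts). */
    record Edge(String from, String to, double weight) {}

    /** "Map": each edge emits (target node, contribution) independently,
     *  so the work is embarrassingly parallel per edge.
     *  "Reduce": contributions to the same target node are summed. */
    static Map<String, Double> aggregate(List<Edge> edges) {
        return edges.stream()                           // map phase: one record per edge
            .collect(Collectors.groupingBy(             // shuffle: group by target node
                Edge::to,
                Collectors.summingDouble(Edge::weight))); // reduce: sum per node
    }
}
```

The appeal of the JET-template idea is that since EMF generates the label-update code anyway, templates could in principle emit this kind of mapper/reducer pair instead of the current in-process update loop, without hand-porting each model.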
--
Daniel Ford
IBM Almaden Research Center
San Jose, CA
Re: STEM Should also Run as a web application [message #585868 is a reply to message #43679] |
Wed, 09 July 2008 17:49 |
Matthew Davis Messages: 269 Registered: July 2009 |
Hi Jamie, what kinds of parallelism factors are in STEM? Is it a
candidate for Map/Reduce? It would be really neat to see if there is a use
case for "cloud computing" in its computations. And if you wanted to
make it a "software as a service" application for mass consumption,
perhaps even beyond epidemiology, "the cloud" would be a great way
to host the computationally intensive service.
As for how to do it as a web app, a smartly done Google Maps interface
would really enhance the usability, in my opinion. I'm not sure you can
duplicate the polygon renderings of the application's main visualizer in
Google Maps due to the browser's resource usage, but you could still do
some really interesting views merging the BIRT reports with municipality
markers in Google Maps. There are a ton of "mashup" uses, IMO.
-Matt
James Kaufman wrote:
> STEM should also run as a web application. Please put your ideas on the
> best way to do this here under this item.
>