[ptp-dev] Remote Enabling GEM Plugin

Hi all,

We're looking at remote-enabling the GEM plugin and wanted to ask a few questions and get some input before we dig in; any input would be gratefully accepted. After looking at the PTP wiki, it appears the way PLDT is doing things is closer to the approach we need. We're not sure yet how much work remote-enabling GEM will take, but we'd like to get started.

Currently, the general chain of events GEM goes through is outlined in the steps below:

Notes:

* Anything we "run" is via a java.lang.Process object and Runtime.exec() to create a native process (a stripped-down sketch of this follows these notes).
* We've been mostly using java.io.File objects and the relevant API calls.
* [Recap] ISP (In-situ Partial Order) is the underlying verification tool for GEM. It requires no code instrumentation.
* We've created remote projects and tried running things, knowing it wouldn't work, just to see what would actually happen. GEM appears to fail silently with remote projects: no NPEs, nothing.
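
Here's the sketch referred to above. The script names, flags, and paths are made up, but the Runtime.exec()/java.io.File usage is the part that goes nowhere once the project is remote:

import java.io.File;
import java.io.IOException;

public class LocalIspLaunchSketch {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical local project location; with a remote project there is
        // no local path like this, which is why everything below falls over.
        File workDir = new File("/path/to/local/project");

        // Step 1: compile and link with the mpicc wrapper script
        // (tool names and flags here are placeholders).
        Process compile = Runtime.getRuntime().exec(
                new String[] { "ispcc", "-o", "foo.isp", "foo.c" }, null, workDir);
        compile.waitFor();

        // Step 2: run ISP on the profiled executable to produce the log file.
        Process verify = Runtime.getRuntime().exec(
                new String[] { "isp", "foo.isp" }, null, workDir);
        verify.waitFor();

        // Step 3: the parser then expects the log as a plain java.io.File.
        File log = new File(workDir, "foo.isp.log");
        System.out.println("log exists locally? " + log.exists());
    }
}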


Basic GEM chain of events
---------------------------------

1.) We run a script (a wrapper for mpicc) that compiles the project, linking in the MPI libs (e.g. libmpich) and the interposition layer library for ISP (the profiler, libispprof). This interposition layer is needed to intercept MPI calls via the PMPI mechanism and subsequently generate and force the relevant schedules.

2.) Next we run ISP itself on the profiled executable. This does all of the dynamic verification and generates a log file (in our own format, currently).

3.) The log file generated by ISP is then parsed by GEM, which lets GEM display post-verification results and do source-code stepping. GEM relies on knowing the path to all the relevant source files for the source-code stepping we do in the Analyzer View (a rough sketch of one idea for the log-file side follows these steps).
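
The idea we've been kicking around for step 3 (untested, just a sketch of what we imagine) is to have the parser read the log through EFS instead of a java.io.File path, so the same code works whether the IFile is backed by a local or a remote file system:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

import org.eclipse.core.filesystem.EFS;
import org.eclipse.core.filesystem.IFileStore;
import org.eclipse.core.resources.IFile;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.NullProgressMonitor;

public class IspLogReaderSketch {

    // Parse the ISP log via an EFS stream so no local path is assumed.
    public static void parse(IFile logFile) throws CoreException, IOException {
        IFileStore store = EFS.getStore(logFile.getLocationURI());
        BufferedReader reader = new BufferedReader(new InputStreamReader(
                store.openInputStream(EFS.NONE, new NullProgressMonitor())));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                // hand each record to GEM's existing log parser here
            }
        } finally {
            reader.close();
        }
    }
}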

A lot is happening under the hood with regard to organization, data structures, etc., but in the context of remote-enabling GEM, the relevant information seems to be in the steps above. Once the log file is generated, GEM simply needs access to it, as well as to the source files.

Should we consider the lighter-weight approach taken by PLDT (using IResource and getResourceURI() and pulling remote files back to the local machine; we're not relying on any headers, etc. at this point), or is another approach to our problem more obvious to anyone more adept at RDT and its inner workings than we are? I think we simply don't understand the entire model very well at this point.
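
If the PLDT-style "copy it back" route makes sense, what we have in mind is roughly the following (again just a sketch; getLocationURI() is the platform call we know of, and may or may not be what PLDT's getResourceURI() wraps):

import java.io.File;

import org.eclipse.core.filesystem.EFS;
import org.eclipse.core.filesystem.IFileStore;
import org.eclipse.core.resources.IResource;
import org.eclipse.core.runtime.CoreException;
import org.eclipse.core.runtime.NullProgressMonitor;

public class RemoteFileFetchSketch {

    // Resolve a (possibly remote) resource to an EFS store and ask for a
    // cached local copy, so GEM can keep handing java.io.File objects to
    // its existing log parser and source-stepping code.
    public static File fetchLocalCopy(IResource resource) throws CoreException {
        IFileStore store = EFS.getStore(resource.getLocationURI());
        // EFS.CACHE copies the contents into the local cache when the store
        // is not already on the local file system.
        return store.toLocalFile(EFS.CACHE, new NullProgressMonitor());
    }
}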

Thanks in advance for your time. I'll be joining the next PTP conference call.

-Alan





