[platform-releng-dev] Scenarios


Here are some user scenarios that capture various situations we encounter.  Note that "user" here means anyone who wants to run tests, from someone clicking around in the JUnit UI in their dev environment to the very controlled releng situation.

A) Basic:
1) user chooses a machine on which to run tests
2) user chooses an Eclipse code base to test
3) user chooses some number of variations to use when running the Eclipse in which the test cases will actually execute. Variations include things such as VM, VM args, and potentially data set specifications.
4) user chooses some set of tests (perhaps suites) to run
5) a new VM is launched to run the tests.  This VM is run according to the variation information and code base specified
6) in the running test instance, the tests are run and assertions are done.  Test pass/fail results are recorded in XML files on disk (releng case) or communicated to the origin via sockets (JUnit/PDE launcher case).  (A plain test of this kind is sketched after this list.)
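
To make step 6 concrete: the test itself is just an ordinary JUnit test; the class and assertion below are made up purely for illustration.

    import junit.framework.TestCase;

    // Illustrative only: a plain functional test as run in step 6.  The launcher
    // (releng harness or JUnit/PDE launcher) is responsible for collecting the
    // pass/fail outcome of the assertions.
    public class BasicScenarioTest extends TestCase {
        public void testSomething() {
            String result = "foo" + "bar";
            assertEquals("foobar", result);   // pass/fail is what gets recorded
        }
    }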

A.P) Basic with performance measurements
1-5) as in Scenario A
6) as in Scenario A#6 but each test, as required, collects and commits data to a database or local file.  The data is tagged as having come from a particular run of the scenario as a whole.  Assertions are then possible either on the absolute values in the collected data or relative to a reference point (another run of the scenario, including variations as needed) specified when running the test (e.g., some step 4.5).  A sketch of such a test follows this list.
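
A minimal sketch of A.P, assuming a PerformanceTestCase-style helper along the lines of org.eclipse.test.performance (the exact class and method names may well differ in whatever we end up with):

    import org.eclipse.test.performance.PerformanceTestCase;

    // Sketch only: assumes a PerformanceTestCase-style helper.  startMeasuring/
    // stopMeasuring collect the data, commitMeasurements tags and stores it for
    // this run of the scenario, and assertPerformance checks absolute limits or
    // a reference point supplied when the test was started (the "step 4.5" input).
    public class OpenEditorPerformanceTest extends PerformanceTestCase {
        public void testOpenEditor() {
            for (int i = 0; i < 10; i++) {
                startMeasuring();
                // ... perform the operation being measured ...
                stopMeasuring();
            }
            commitMeasurements();
            assertPerformance();
        }
    }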


B) Session tests:
1-4) as in Scenario A
5) a new VM is launched to run the tests.  This instance may run JUnit or Eclipse depending on the nature of the tests.  If it is running Eclipse, it is run according to the variation information and code base specified
6) in the running test instance, some number of tests are run.  An individual test may be a session test.  In this case, a SessionTestRunner (or some suitable helper) is created which launches a new VM running Eclipse according to the specified variations and using the JUnit/PDE launcher technology.  (A sketch follows this list.)
7) each launched session test does its work and its assertions.  Test pass/fail results are communicated to the origin via sockets (JUnit/PDE launcher case)
8) Test pass/fail results are recorded in XML files on disk (releng case) or communicated to the origin via sockets (JUnit/PDE launcher case)
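
A rough sketch of steps 6 and 7.  The SessionTestRunner API is not pinned down anywhere yet, so SessionTestRunner, Result and run(...) below are placeholders, not an existing API:

    import junit.framework.TestCase;

    // Hypothetical sketch of a session test (steps 6 and 7); all helper names
    // are placeholders.
    public class WorkspaceRestartTest extends TestCase {
        public void testStateSurvivesRestart() {
            // the helper launches a new Eclipse VM according to the variations,
            // using the JUnit/PDE launcher technology
            SessionTestRunner runner = new SessionTestRunner();
            // the named test runs (and asserts) in the launched instance; its
            // pass/fail result comes back over the socket connection
            Result result = runner.run("org.example.tests.SaveStateTest");
            assertTrue("session test failed", result.isOK());
        }
    }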

B.P) Session tests with performance measurements
1-6) as in Scenario B
7) as in Scenario B#7 but each test, as required, collects and commits data to a database or local file.  The data is tagged as having come from a particular run of the scenario as a whole.  Assertions are then possible either on the absolute values in the collected data or relative to a reference point (another run of the scenario, including variations as needed) specified when running the test (e.g., some step 4.5)
8) Test pass/fail results are recorded in XML files on disk (releng case) or communicated to the origin via sockets (JUnit/PDE launcher case)


C) Multiple run session tests:
1-5) as in Scenario B
6) in the running test instance, some number of tests are run.  An individual test may be a multirun session test.  In this case, the test writer writes a loop which creates a SessionTestRunner (or some suitable helper) and launches a new VM running Eclipse according to the specified variations and using the JUnit/PDE launcher technology.  (A sketch follows this list.)
7) each launched session test does its work and its assertions.  Test pass/fail results are communicated to the origin via sockets (JUnit/PDE launcher case)
8) Test pass/fail results are recorded in XML files on disk (releng case) or communicated to the origin via sockets (JUnit/PDE launcher case)
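
A sketch of step 6 for the multirun case; again SessionTestRunner and Result are placeholders.  The point is simply that the loop lives in the test writer's code:

    import junit.framework.TestCase;

    // Hypothetical sketch: each iteration launches a fresh Eclipse instance
    // (step 7), which does its own assertions and reports pass/fail back.
    public class StartupMultiRunTest extends TestCase {
        public void testStartupRepeatedly() {
            for (int i = 0; i < 5; i++) {
                SessionTestRunner runner = new SessionTestRunner();
                Result result = runner.run("org.example.tests.StartupTest");
                assertTrue("iteration " + i + " failed", result.isOK());
            }
        }
    }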

C.P) Multiple run session tests with performance measurements
1-6) as in Scenario C
7) as in Scenario C#7 but each test, as required, collects and commits data to a database or local file.  The data is tagged as having come from a particular run of the scenario as a whole.  That is, if 5 iterations were done and committed, those 5 would be identified as coming from the same run of the scenario.  Assertions are (typically) NOT done in the launched target.
8) After completing the loop running test iterations, assertions are done either on the absolute values in the collected (tagged) data or by comparing the tagged data to a reference point (another run of the scenario, including variations as needed) specified when running the test (e.g., some step 4.5).  A sketch of this pattern follows this list.
9) Test pass/fail results are recorded in XML files on disk (releng case) or communicated to the origin via sockets (JUnit/PDE launcher case)
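
For C.P the shape changes slightly: the launched iterations only collect and commit tagged data, and the assertion happens once, after the loop.  A hypothetical sketch, with every helper name a placeholder:

    import junit.framework.TestCase;

    // Hypothetical sketch of steps 6-8.  The launched sessions commit measurements
    // tagged with this run of the scenario but do NOT assert; the single assertion
    // afterwards compares the tagged data to absolute limits or a reference run.
    public class StartupPerformanceMultiRunTest extends TestCase {
        public void testStartupPerformance() {
            String runTag = "scenario-run-" + System.currentTimeMillis();  // tag for this run
            for (int i = 0; i < 5; i++) {
                SessionTestRunner runner = new SessionTestRunner();        // placeholder helper
                runner.setRunTag(runTag);                                   // placeholder
                runner.run("org.example.tests.StartupMeasurement");         // step 7: measure + commit only
            }
            // step 8: one assertion over the 5 tagged iterations
            PerformanceAsserts.assertAgainstReference(runTag);              // placeholder
        }
    }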



Other points
- This structure separates the concern of writing a particular test from the questions of whether it is a session test and whether it should be run multiple times.

- The virtual machine used to run the tests is defined by the "user".  In the releng case it may happen that all tests are run using the same VM variation.

- The variations are specified by the user and used to control the launching of the target Eclipse.  Mismatches between what the user specified and what actually runs (e.g., discovered by introspecting the running VM) are effectively configuration errors that render the test invalid; the test should abort without saving any data.  A possible guard is sketched below.
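
One cheap way to catch such a mismatch is to compare what the user asked for with the standard system properties of the VM we actually ended up in, before anything is committed.  The "test.variation.vm" property name below is made up for illustration; java.vm.version is a standard property:

    // Sketch of a configuration-mismatch guard, run before any data is saved.
    String expected = System.getProperty("test.variation.vm");   // what the user specified (made-up name)
    String actual = System.getProperty("java.vm.version");       // what is actually running
    if (expected != null && !actual.startsWith(expected)) {
        // configuration mismatch: abort the test without committing anything
        throw new IllegalStateException(
            "Variation mismatch: expected VM " + expected + " but running on " + actual);
    }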

- Using one property (perf_ctrl) with lots of values makes it hard for people to manage/change the values.  Would it be possible to use individual system properties?  These could then be controlled in a number of ways, including the config.ini file etc.  (See the sketch below.)
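
For example (property names invented purely to illustrate the idea), each value could be its own property, set in config.ini or as a -D VM argument, and read independently:

    // Illustration only: these property names are made up, not a proposed API.
    // Each could appear as a line in config.ini (eclipse.perf.dbHost=perfhost)
    // or be passed as -Declipse.perf.dbHost=perfhost, instead of packing
    // everything into a single perf_ctrl value.
    String dbHost = System.getProperty("eclipse.perf.dbHost");
    String dbName = System.getProperty("eclipse.perf.dbName");
    String assertAgainst = System.getProperty("eclipse.perf.assertAgainst");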

- The meaning of the "build" value is unclear.  Does it represent the particular Eclipse build (i.e., code base) on which I am running the tests?  What happens when I run the same scenario several times on the same Eclipse build?  Each run of a whole scenario needs to be delineated in the output data so comparisons can be done.


Summary:
The infrastructure we are building has general appeal; the releng use case of performance testing in a very closed/controlled environment is a special case of it.  The releng use case was an interesting starting point and provided sufficient constraints to enable progress, but it feels premature to consider APIs and database schemas finalized because we have satisfied those requirements.  In particular it is not clear that the scenarios above are handled in a natural way by the current support.  It might be interesting to augment the howto doc with descriptions of how each of these scenarios (*.P) should be implemented.

Jeff

