WTP automated testing
Last Updated: Nov. 29, 2004

This document outlines the principles of WTP automated testing. It also provides some guidelines for creating and running test cases. If anyone has suggestions for this document, please post a message to the wtp-dev mailing list.

Testing

The goals of unit testing are:

  1. Continuous integration. Unit tests are run as part of the WTP builds and give an early indication of what is failing. Unit test failures should be fixed as soon as possible (before the next integration build).
  2. API compatibility. Component teams should provide unit tests for their public APIs to ensure that developers do not break existing clients (see the sketch below).
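
As a rough illustration, a compatibility test can be as small as the following sketch. FooFactory and createFoo() are hypothetical placeholders, defined inline only to keep the sketch self-contained; a real test would exercise the component's actual public API.

    import junit.framework.TestCase;

    // Hypothetical public API class, included only so the sketch compiles on its own.
    class FooFactory {
        public Object createFoo() {
            return new Object();
        }
    }

    public class FooFactoryAPITest extends TestCase {
        // Verifies the documented contract of the public factory method.
        public void testCreateFooReturnsNonNull() {
            assertNotNull("createFoo() must not return null", new FooFactory().createFoo());
        }
    }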

The goal of performance testing is to maintain equal or better performance as WTP moves forward. As a developer, your dedication to WTP performance is strongly desired. To ensure that the performance of WTP does not regress over time, developers should provide performance test cases alongside their features. Developers are also expected to verify their bug fixes and feature contributions against the existing performance test cases. If something is not performing well, open a bug and use "performance" as the keyword. Click here to see a list of all the currently open performance bugs in WTP.

Eclipse has a performance infrastructure in place for measuring and tracking performance. The performance processes described in this document are modeled on that same infrastructure. To create and run tests under this infrastructure, please refer to the Eclipse Tests How-To document; a sketch of such a test case follows the list below. Eclipse also has tips and tools to help developers debug and track down performance problems. They are listed here:

  1. Performance bloopers
  2. Core tools
  3. SWT tools
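
The following sketch shows the general shape of a performance test built on this infrastructure. The measuring calls come from org.eclipse.test.performance.PerformanceTestCase; the operation being timed, createModel(), is a hypothetical stand-in for a real component operation.

    import org.eclipse.test.performance.PerformanceTestCase;

    public class ModelCreationPerformanceTest extends PerformanceTestCase {

        // Repeats the measurement several times so the infrastructure can average out noise.
        public void testCreateModel() {
            for (int i = 0; i < 10; i++) {
                startMeasuring();
                createModel();
                stopMeasuring();
            }
            commitMeasurements();
            assertPerformance();
        }

        // Hypothetical work load standing in for the component operation under test.
        private void createModel() {
            StringBuffer buffer = new StringBuffer();
            for (int i = 0; i < 100000; i++) {
                buffer.append(i);
            }
        }
    }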

Components may have their own testing requirements. For example, the server tools framework often calls APIs on server extensions to retrieve data about the extension. Some of these APIs must be short-running because they are called from the UI. The server tools framework provides abstract performance test cases that extensions should extend to verify that code contributed by the extension does not regress performance in the base framework. Performance requirements from component teams are listed in a document located in that component's "development" directory in CVS. Please refer to the WTP Development Practices document regarding any development-related issues.

Creating JUnit test cases

Here's a checklist for integrating test plugins into the WTP build:

  1. Commit the plugin to CVS under the component's "performance-tests" folder. For example, the org.eclipse.wst.wsdl.tests.performance plugin should be placed into /home/webtools/wst/components/wsdl/performance-tests.
  2. Add the plugin to the component's tests map file, which can be found inside the /home/webtools/org.eclipse.wtp.releng/maps directory (a sample entry is sketched after this list).
  3. Add the plugin to the feature.xml file, which can be found inside the /home/webtools/<sub-project>/assembly/features/<performance feature> directory (also sketched below).
  4. Update test.properties and test.xml inside /home/webtools/org.eclipse.wtp.releng/testScripts to include the new performance plugins.
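
For steps 2 and 3, the entries generally look something like the lines below. The exact map file syntax (CVS tag, repository path) and the feature.xml attribute values vary by component and build, so treat these as illustrative placeholders rather than copy-and-paste content.

    plugin@org.eclipse.wst.wsdl.tests.performance=HEAD,:pserver:anonymous@dev.eclipse.org:/home/webtools,,wst/components/wsdl/performance-tests/org.eclipse.wst.wsdl.tests.performance

    <plugin
          id="org.eclipse.wst.wsdl.tests.performance"
          download-size="0"
          install-size="0"
          version="0.0.0"/>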

The Eclipse Tests How-To document provides a thorough explanation of how to create and run performance test cases using the Eclipse performance infrastructure.

WTP performance process

This section describes the process for tracking performance in WTP. It is based on the process used by the Eclipse Platform Project. All performance tests must be automated. Performance tests are run every week using Thursday's integration builds. Performance tests should:

  1. never have compile errors
  2. always run to completion

If either condition fails, the failures should be handled immediately according to the WTP Development Practices document. Performance results are stored in a Cloudscape database and are compared against results from the previous release. In case of a regression, a note will be posted to the mailing list indicating the problem. From there:

  1. The developer who introduced the regression should fix the performance problem.
  2. If the regression can be justified by a new feature, the PMC must get involved and decide how important that feature is (e.g. competitive considerations). Solutions may include, but are not limited to, making the feature optional (e.g. creating a preference that is off by default), so that only users who wish to use the feature pay for it.

Performance results from the weekly integration build are rendered into a graph, which is linked from the build page. This graph provides a simple comparison between the integration build and the reference build.

To run performance tests for a build that's available from the Eclipse download Web site:

  1. Check out /home/webtools/org.eclipse.wtp.releng
  2. Change the properties files to fit your system (buildAll.properties, tests.properties, build.cfg, etc.)
  3. Open a command prompt and navigate to the org.eclipse.wtp.releng directory
  4. Run the following command:

    ant -f cruise.xml -DbuildType=<buildType> -DbuildId=<buildId> -Dtimestamp=<timestamp> performance

    For example:

    ant -f cruise.xml -DbuildType=N -DbuildId=N20041127 -Dtimestamp=200411271458 performance

Running performance tests for a local build is similar. Go to your ${buildDirectory} directory and check the buildType, buildId and timestamp for your local build, then go through the same steps as above. If you have a Cloudscape database set up (refer to the Eclipse Tests How-To document), the performance results will be written to the ${testDir}/results directory; otherwise they will be displayed in the console.
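
For example, if your local build directory shows buildType I, buildId I20041129 and timestamp 200411291200 (hypothetical values used only for illustration), the invocation would be:

    ant -f cruise.xml -DbuildType=I -DbuildId=I20041129 -Dtimestamp=200411291200 performance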