RE: [wtp-releng] Performance Tests
Hi David,
As far as I understand, the perf test results for each 2.0.2 I-build and each 3.0 I-build and M-build should be compared to the R2.0.1 perf test results. Is this correct?
About the 12-hour runtime... I dug into the logs. The pure perf test execution time is actually around 131 minutes - a little more than 2 hours. But the console log reports: "Total time: 600 minutes 16 seconds". If I add the time for downloading and extracting the binaries and for generating the graphs, it gets close to 12 hours. The difference between the 131 minutes and the 600 minutes 16 seconds must have been spent starting and stopping the Eclipse workbench, which happens for every single perf test.
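In other words, roughly 600 - 131 = 469 minutes, close to 8 hours, would have gone into workbench restarts alone.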
The above is consistent with the logs of the R2.0 perf results that were run on the old perf test system. There I see: "Total time: 777 minutes 11 seconds". The total time is higher because John's old perf system was slower. So even on John's old machine, the whole perf test procedure must have taken more than 15 hours for a single run - 777 minutes is already about 13 hours before downloads, extraction, and graph generation. However, the pure time for the perf JUnit tests should have been around 4 hours, as you say.
So, these are my next steps. I will run the R2.0.1 perf test again to establish the "perf result base". Then I will run the perf tests for all I-builds and M-builds declared on the download page and compare the results to those of R2.0.1. I will run one perf execution per day (actually per night, Bulgarian time). It will take around 2 weeks to cover all builds. While the system is not used for perf tests (during the day in Bulgaria), I will execute the scanning tests, which I hope will take much less time.
Greetings,
Kaloyan
> - 2.0.2 I-builds - Do I refer the results of the 2.0.2 I-build to the 2.0.1 build?
> Or, do I refer the results of the 2.0.2 I-build to the results of the previous 2.0.2 I-build?
> - 3.0 M-builds - I will refer 3.0 M1 to 2.0.1. But how about 3.0 M2? Referred to 3.0 M1, or again to 2.0.1?

2.0.1. We want a common, steady base measurement to compare against. It's no big deal if there's a little week-to-week variation, but the goal is to improve on (or be no worse than) the previous release.

But ... I also think the base measurements should be re-run each time the regular measurements are taken, just to make sure the machine is in a steady state. We could fix that archive issue if anyone thought it worth using 2.0 as the base. That'd be my preference, but it's no big deal if it's too much work.
If it's just a matter of FTP'ing the zip files, you can use the "mirror form" of the URL to get the file; if the file is not found on 'downloads', it will automatically look in the archives. But I'm not sure if that's the issue you are having or not.
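For illustration, assuming the "mirror form" refers to the eclipse.org download.php redirector (the zip file name below is an illustrative guess; only the drop directory comes from this thread):

    http://www.eclipse.org/downloads/download.php?file=/webtools/downloads/drops/R2.0/R-2.0.1-20070926042742/wtp-sdk-R-2.0.1-20070926042742.zip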
I also think it'll be easy to fix the 12-hour run time ... just take out any suites that are too long, open a bug on the subproject that owns them, and let them create a more reasonable test. The total time should be no more than 4 hours, IMHO, and ideally just 1 or 2. The regular JUnit tests take about 30 minutes.
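As a starting point for spotting the long suites, here is a minimal sketch that scans console logs for the "Total time: N minutes M seconds" lines quoted above; the log directory layout, file glob, and 30-minute cutoff are all assumptions:

    import java.io.IOException;
    import java.nio.file.*;
    import java.util.regex.*;

    public class LongSuiteScan {
        // Matches lines like "Total time: 600 minutes 16 seconds".
        private static final Pattern TOTAL_TIME =
                Pattern.compile("Total time: (\\d+) minutes? \\d+ seconds?");

        public static void main(String[] args) throws IOException {
            Path logDir = Paths.get(args.length > 0 ? args[0] : "consolelogs"); // assumed layout
            long cutoffMinutes = 30; // arbitrary threshold for "too long"
            try (DirectoryStream<Path> logs = Files.newDirectoryStream(logDir, "*.txt")) {
                for (Path log : logs) {
                    for (String line : Files.readAllLines(log)) {
                        Matcher m = TOTAL_TIME.matcher(line);
                        if (m.find() && Long.parseLong(m.group(1)) >= cutoffMinutes) {
                            System.out.println(log.getFileName() + ": " + m.group(1) + " minutes");
                        }
                    }
                }
            }
        }
    }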
Thanks!
From: "Raev, Kaloyan" <kaloyan.raev@xxxxxxx>
To: <wtp-releng@xxxxxxxxxxx>
Date: 12/03/2007 10:02 AM
Subject: [wtp-releng] Performance Tests
Hello,
I am happy to announce that I have the performance tests set up and running successfully.
I have executed the perf test on two
builds already:
http://download.eclipse.org/webtools/downloads/drops/R2.0/R-2.0.1-20070926042742/
http://download.eclipse.org/webtools/downloads/drops/R2.0/M-2.0.2-20071004053715/
Since all builds older than 2.0.1 have already been moved to the archive site, I cannot use the PerfBuilder tool to execute the perf tests for them. This is why I use the official 2.0.1 release as a "reference base". The perf test results of the first 2.0.2 I-build (M-2.0.2-20071004053715) are referred to the results of that 2.0.1 release.
I am going to build perf test results for all builds declared on the WTP download site in the next days. One execution lasts around 12 hours. But before this, I want to ask what the policy is for referring perf results:
- 2.0.2 I-builds - Do I refer the results of the 2.0.2 I-build to the 2.0.1 build? Or, do I refer the results of the 2.0.2 I-build to the results of the previous 2.0.2 I-build?
- 3.0 M-builds - I will refer 3.0 M1 to 2.0.1. But how about 3.0 M2? Referred to 3.0 M1, or again to 2.0.1?
And one more thing. There are still 5 failures in the
j2ee core perf test:
http://download.eclipse.org/webtools/downloads/drops/R2.0/M-2.0.2-20071004053715/perfresults/html/org.eclipse.jst.j2ee.core.tests.performance_.html
The reason is the same in all cases:

    illegal data set: contains neither AVERAGE nor AFTER values.

Is this a bug? Or is there still something wrong in my setup?
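For context, that message suggests that no measurement snapshot was ever stored for the failing scenarios. A minimal sketch of the meter protocol in the Eclipse performance test framework (org.eclipse.test.performance); the scenario ID and measured operation are hypothetical placeholders, not taken from the failing tests:

    import org.eclipse.test.performance.Performance;
    import org.eclipse.test.performance.PerformanceMeter;

    public class ExampleScenario {
        public static void main(String[] args) {
            Performance perf = Performance.getDefault();
            // The scenario ID is a hypothetical placeholder.
            PerformanceMeter meter = perf.createPerformanceMeter("org.example.perf.scenario");
            try {
                for (int i = 0; i < 10; i++) {
                    meter.start();
                    doWork();    // the operation under measurement (placeholder)
                    meter.stop();
                }
                // commit() is what stores the measured values; a scenario that
                // never reaches it leaves an empty data set behind.
                meter.commit();
                perf.assertPerformance(meter);
            } finally {
                meter.dispose();
            }
        }

        private static void doWork() { /* ... */ }
    }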
Greetings,
Kaloyan Raev
Eclipse WTP Committer
Senior Developer
NW C JS TOOLS JEE (BG)
SAP Labs Bulgaria
T +359/2/9157-416
mailto:kaloyan.raev@xxxxxxx
www.sap.com