Hi,
Hiho,
- Validation Scenarios or Verification Procedures plans: yes,
that's a good idea, but it will be hard to implement. Imagine a
project which has 20 very poorly written validation scenarios
that all pass: will it get a better rating than a project with
only 3 very good, extensive scenarios, one of which fails?
If we just focus on whether such a plan exists at all, I don't
know where to find this information in the current EF and
PolarSys infrastructure. Perhaps it could be added in the PMI
for each release...
I do agree with the concerns. Including it as a mandatory field in
the PMI would also be great, because it would help projects set up
good practices.
This would hardly be applicable to all Eclipse projects in the wild,
however (the PMI is Eclipse-wide, isn't it?). Maybe enforcing it for
PolarSys only would do the trick.
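To make the counting concern above concrete, here is a toy sketch:
a naive pass-rate metric ranks the 20 poor-but-passing scenarios
above the 3 good ones, while any quality weighting reverses the
ranking. The 0.2/0.9 quality factors are invented for the example.

# Toy illustration: a naive pass-rate rewards 20 poor scenarios
# that all pass over 3 good ones with a single failure.
def naive_score(passed, total):
    return passed / total

def weighted_score(passed, total, scenario_quality):
    # scenario_quality in [0,1], invented for the example
    return (passed / total) * scenario_quality

print(naive_score(20, 20), naive_score(2, 3))
# 1.0 vs 0.67: the poor project wins
print(weighted_score(20, 20, 0.2), weighted_score(2, 3, 0.9))
# 0.2 vs 0.6: the good project wins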
- installation metrics: as Charles said, this metric is useful
at the macro level. Of course, lots of enterprises use internal
p2 repositories, and many download the same plug-in several
times. But if you compare all projects at large, we will find a
trend, and the comparison between projects will rely on the same
criteria.
OK. By the way, once we have set up a 'measurement concept', we can
still update or adjust the metrics it includes.
In other words: we have the 'downloads' concept, which intends to
measure how much the product is downloaded. From there, we can have
different means to measure this concept, and the metrics can be
fine-tuned easily: repo downloads, update sites, surveys, etc. The
same holds for 'installed base'.
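To illustrate, here is a minimal sketch of what such a swappable
concept could look like. The schema (field names, weights) is
hypothetical, not the actual PolarSys definition-file format.

# Sketch of a 'downloads' measurement concept whose metrics can
# be swapped or re-weighted later without redesigning the model.
import json

downloads_concept = {
    "mnemo": "DOWNLOADS",
    "name": "Downloads",
    "description": "How much the product is downloaded.",
    "metrics": [
        {"mnemo": "REPO_DL",    "source": "download stats", "weight": 2},
        {"mnemo": "UPDSITE_DL", "source": "p2 update site",  "weight": 2},
        {"mnemo": "SURVEY_DL",  "source": "user survey",     "weight": 1},
    ],
}

# Adjusting the concept later is a one-line change:
downloads_concept["metrics"][2]["weight"] = 2
print(json.dumps(downloads_concept, indent=2))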
- usage metrics: there was a data collector plug-in bundled in
previous release trains: https://www.eclipse.org/org/usagedata/.
It was deactivated because nobody used the collected data.
If we think it is really useful, perhaps it can be reactivated.
+1
- kind of users: Boris, I think that in the apache logs you can
find, with a reverse query on the IP, the DNS name of each user
who downloads a project. So this can become a metric of
diversity (yes, it doesn't reveal the nature of the company
behind the DNS name, except if you filter out .com and .org
domains).
Yep, true.
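For what it's worth, here is a rough sketch of how such a
diversity metric could be computed, assuming the apache combined
log format; the log path and the domain heuristic are
illustrative only.

# Sketch: estimate user diversity from apache logs via reverse DNS.
import re
import socket
from collections import Counter

LOG_LINE = re.compile(r'^(\S+) ')  # client IP is the first field

def resolve_domain(ip):
    """Reverse-resolve an IP and keep only the registrable suffix."""
    try:
        host = socket.gethostbyaddr(ip)[0]
    except OSError:
        return None
    return '.'.join(host.split('.')[-2:])  # e.g. 'airbus.com'

domains = Counter()
with open('access.log') as log:  # hypothetical log file
    for line in log:
        m = LOG_LINE.match(line)
        if m:
            d = resolve_domain(m.group(1))
            if d:
                domains[d] += 1

# Diversity: number of distinct domains seen downloading
print(len(domains), domains.most_common(10))

Counting distinct second-level domains is a crude proxy for
organisational diversity, but it is cheap to compute from data
the EF already has.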
- voices of users or "quality"-representative users: as Boris
said, we could have both. Afterwards, we can play with the
measure weights to increase or decrease the importance of each
of them, depending on PolarSys members' opinions. But in my
opinion, it's very important to have the feedback of end users,
because they are the day-to-day users.
"Delay as much as possible unneeded decisions."
+1 to playing with weights afterwards, depending on PolarSys
members' opinions.
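A toy sketch of what 'playing with weights' could mean in
practice, assuming both feedback channels are normalised to
[0,1]; the weight values are placeholders for whatever PolarSys
members decide.

# Mix end-user and 'quality'-representative feedback with
# tunable weights.
def mixed_feedback(end_users, representatives, w_end=0.5, w_rep=0.5):
    """Weighted average of the two feedback channels."""
    return (w_end * end_users + w_rep * representatives) / (w_end + w_rep)

print(mixed_feedback(0.8, 0.6))                         # balanced: 0.7
print(mixed_feedback(0.8, 0.6, w_end=0.2, w_rep=0.8))   # industry-leaning: 0.64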
- tracking talks about the project in conferences: interesting.
And it should be possible to automatically analyse the Drupal
database of each EclipseCon to find the name of the project in
the description and keywords of each talk.
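Something along these lines could work, assuming a typical
Drupal schema; the table and column names ('node', 'title',
'body') are a guess, the real EclipseCon database may differ.

# Sketch: find EclipseCon talk records mentioning a project name.
import sqlite3  # stand-in; the real database would be MySQL

def talks_mentioning(conn, project):
    cur = conn.execute(
        "SELECT title FROM node "
        "WHERE type = 'session' AND (title LIKE ? OR body LIKE ?)",
        ('%' + project + '%', '%' + project + '%'))
    return [row[0] for row in cur.fetchall()]

# e.g. talks_mentioning(sqlite3.connect('eclipsecon.db'), 'Topcased')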
- vote on marketplace: you should discuss with the EF whether you
need this new feature.
If we decide to include it in the measures, I will (discuss with
the EF).
And here is another new idea: one criterion to state whether a
project is mature or not is the availability of professional
support, professional services providers, and training. And we
can automatically find this information, as each provider has
its own entry in the Eclipse Marketplace and should list in its
description the projects it works on:
http://marketplace.eclipse.org/category/markets/training-consulting
+1 -- can't wait to add it to the model. What do other members think
about it?
--
boris
Etienne JULIOT
Vice President, Obeo
On 29/08/2014 10:53, Boris Baldassari wrote:
Hiho,
Some more comments inline. Just my opinion, though.
On 28/08/2014 14:32, FAUDOU raphael wrote:
Usage and downloads are different things, and the correlation is
hard to establish. For instance, you may have one organization
that downloads each new release in order to test new features
(including beta releases) but only for evaluation purposes,
while other organizations download one release a year and
distribute it internally through an enterprise update site. In
the end, the first organization might have downloaded a
component 20 times a year while the second will have downloaded
it once a year. From the Topcased experience, I observed that
kind of situation many times with industrial companies.
You should be able to calculate this metric from the statistics
of the EF infrastructure (apache logs on the update site,
marketplace metrics, downloads of bundles when the project is
already bundled, etc.). Be careful to remove from the log stats
any Hudson/Jenkins instances which consume the update site:
they don't reflect a real number of users.
Yes, that is a first point to take care of: the real number of
users. Because of internal update sites, I do not see how we
could get public figures about the number of users… except by
asking companies explicitly.
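For the Hudson/Jenkins point above, a filtering sketch could look
like this; the user-agent substrings are an assumption, real CI
traffic may need different signatures.

# Sketch: count distinct download clients from an apache
# combined log, excluding CI robots.
import re

COMBINED = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)"')
CI_AGENTS = ('hudson', 'jenkins', 'apache-httpclient')  # illustrative

ips = set()
with open('access.log') as log:  # hypothetical path
    for line in log:
        m = COMBINED.match(line)
        if not m:
            continue
        ip, agent = m.group(1), m.group(2).lower()
        if any(bot in agent for bot in CI_AGENTS):
            continue
        ips.add(ip)

print('distinct non-CI clients:', len(ips))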
Something interesting would also be to measure the diversity of
the users. For example, if a project is used only by academics
or by only one large company, can we say it is mature?
Woh… that is a BIG question. Here again the Topcased experience
showed that company size is not a representative factor of
maturity.
What would be useful to know is whether a component/tool is used
in "operations" or only in the "research" industrial phase. For
instance, at Airbus, Topcased is used both in operations (A380,
A400M and A350 programs) and in Research and Technology (next
programs), while at CNES Topcased was only used in research
programs (from what I know).
The second really important point to measure is the number of
concurrent users of a given component/tool. If there is only one
user on a given project, he/she can probably adapt to the tool
and find workarounds for major bugs, and in the end the usage is
considered OK. When there are several concurrent end users, you
see bugs and issues far quicker and complaints occur sooner. So
a tool used successfully by 5 concurrent users on a project has
greater value (in my opinion) than 5 usages by one person.
That's really hard to measure.
I'm quite concerned by metrics which may not be easily or
consistently retrieved. These pose a lot of threats to
measurement theory [1] and to the consistency of the quality
model, and I would prefer not to measure something at all rather
than measure it wrongly or unreliably. That is not to say that
the above-mentioned metrics should not be used, but we have to
make sure that they indeed measure what we intend to measure,
and improve their reliability.
[1] https://polarsys.org/wiki/EclipseQualityModel#Requirements_for_building_a_quality_model
My idea is to reuse the existing Eclipse Marketplace to host
this feedback. We can reuse ideas from other marketplaces like
the Play Store or the App Store, where users can write a review
and also give a rating.
Well, here, I'm not sure I agree. What you suggest is OK for
Eclipse technologies, but I see PolarSys as driven by industry
and not by end users. For PolarSys I would expect industrial
companies to give their feedback, but not all end users, as that
could lead to a jungle.
On an industrial project, there are quality rules for
specification, design, coding, testing… and the whole team must
comply with those rules. If we interviewed each member of the
team, we might get very different feedback and suggestions about
processes and practices. That is why it is important to gather a
limited number of voices and, if possible, "quality"-representative
industrial voices.
I don't think industry should be the only voice we listen to;
this would be a clear violation of the Eclipse quality
requirements regarding the three communities.
Listening to a larger audience IMHO really leads to better
feedback and advice. I would tend to mix both.
But perhaps I'm wrong in my vision of what PolarSys should be.
Just let me know…
Well, a great point of our architecture is that we can easily
customise the JSON files describing the model, concepts and
metrics. So it is really simple to have a PolarSys quality model
and a more generic Eclipse quality model, with different quality
requirements and measures.
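As a sketch, deriving a PolarSys variant could be as simple as
the following, assuming a hypothetical file name and schema (the
'children', 'mnemo' and 'weight' fields are invented for the
example, not the actual definition-file format).

# Sketch: derive a PolarSys quality model from a generic one by
# editing the JSON definition file.
import json

with open('quality_model.json') as f:  # hypothetical file name
    model = json.load(f)

# e.g. raise the weight of a PolarSys-critical attribute
for attr in model.get('children', []):
    if attr.get('mnemo') == 'QM_RELIABILITY':
        attr['weight'] = 3

with open('quality_model_polarsys.json', 'w') as f:
    json.dump(model, f, indent=2)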
I still believe that the PolarSys QM should also rely on the
other communities (especially regarding feedback) and should not
be too isolated, although some specific customisations are
definitely needed on quality attributes, concepts and metrics.
I'll let some time pass and then update the quality model with
the propositions that gathered positive feedback. I will also
try to summarise the metrics needed for the computations and
check their availability, to help discuss this specific subject.
Have a wonderful day,
--
boris
Best
raphaël
Etienne JULIOT
Vice President, Obeo
On 22/08/2014 16:57, Boris Baldassari wrote:
Hiho dear colleagues,
A lot of work has been done recently around the maturity
assessment initiative, and we thought it would be good to let
you know about it, to get some great feedback.
* The PolarSys quality model has been improved and formalised.
It is thoroughly presented in the PolarSys wiki [1a], with the
metrics [1b] and measurement concepts [1c] used. The
architecture of the prototype [1d] has also been updated,
following discussions with Gaël Blondelle and Jesus
Gonzalez-Barahona from Bitergia.
* A nice visualisation of the quality model has been developed
[2] using d3js, which summarises the most important ideas and
concepts. The description of metrics and measurement concepts
still has to be enhanced, but the quality model itself is almost
complete. Please feel free to comment and contribute.
* A GitHub repo has been created [3], holding all definition
files for the quality model itself, the metrics and the
measurement concepts. It also includes a set of scripts used to
check and manipulate the definition files, and to visualise some
specific parts of the system.
* We are setting up the necessary information and framework for
the rule-checking tools: PMD and FindBugs for now, others may
follow. Rules are classified according to the quality attributes
they impact, which is of great importance to provide sound
advice regarding the good and bad practices observed in the
project; a possible shape for such a classification is sketched
below.
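A possible shape for one classified rule, with invented
mnemonics and fields (not the actual definition-file schema):

# Sketch: classify a PMD rule against the quality attributes it
# impacts, so advice can be tied back to the model.
rule = {
    "mnemo": "PMD_EmptyCatchBlock",
    "tool": "PMD",
    "impacts": ["QM_RELIABILITY", "QM_ANALYSABILITY"],
    "advice": "Empty catch blocks hide failures; log or rethrow instead.",
}
print(rule["impacts"])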
Help us help you! If you would like to participate and see what
this ongoing work can bring to your project, please feel free to
contact me. This is also an opportunity to better understand how
projects work and how we can do better together, realistically.
Sincerely yours,
--
Boris
[1a] https://polarsys.org/wiki/EclipseQualityModel
[1b] https://polarsys.org/wiki/EclipseMetrics
[1c] https://polarsys.org/wiki/EclipseMeasurementConcepts
[1d] https://polarsys.org/wiki/MaturityAssessmentToolsArchitecture
[2] http://borisbaldassari.github.io/PolarsysMaturity/qm/polarsys_qm_full.html
[3] https://github.com/borisbaldassari/PolarsysMaturity
_______________________________________________
polarsys-iwg mailing list
polarsys-iwg@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/polarsys-iwg