Hiho dear colleagues,
Thank you all once again for your time and interest. I updated the
quality model to reflect the feedback gathered during this round:
http://borisbaldassari.github.io/PolarsysMaturity/qm/polarsys_qm_full.html
* The quality model structure now has a new main attribute, "Usage",
which sits next to Community, Process and Product. It is composed
of:
- Usage -- quality attribute
  - Installed base -- quality attribute
    - Downloads -- measurement concept
      - Repo downloads (i.e. from the main official Eclipse
        download system) -- metric
      - Update site downloads -- metric
    - Installed base -- measurement concept
      - Installation surveys -- metric
  - Users feedback -- quality attribute
    - Marketplace feedback -- measurement concept
      - Votes on Marketplace -- metric
    - Eclipse IDE feedback -- measurement concept
      - Data collector feedback -- metric
I would, however, like to add a personal remark: it seems to me
that usage should go under Community. In our context, the
community is mainly made up of industrial partners, but it is
still a community, and the usage they (we) make of the project
should go under this category. Just my opinion, though.
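For illustration, here is a minimal sketch of how this new branch
could be encoded in the JSON definition files we keep in the GitHub
repo. The field names ("name", "type", "children") are my own guess
for this example only and may not match the actual schema used by the
prototype; it is written as a small Python script so the structure
can be checked and dumped easily.

  import json

  # Hypothetical encoding of the new "Usage" branch described above.
  # The field names are guesses; the real schema lives in the
  # definition files of the GitHub repo.
  usage = {
      "name": "Usage",
      "type": "attribute",
      "children": [
          {"name": "Installed base", "type": "attribute", "children": [
              {"name": "Downloads", "type": "concept", "children": [
                  {"name": "Repo downloads", "type": "metric"},
                  {"name": "Update site downloads", "type": "metric"},
              ]},
              {"name": "Installed base", "type": "concept", "children": [
                  {"name": "Installation surveys", "type": "metric"},
              ]},
          ]},
          {"name": "Users feedback", "type": "attribute", "children": [
              {"name": "Marketplace feedback", "type": "concept", "children": [
                  {"name": "Votes on Marketplace", "type": "metric"},
              ]},
              {"name": "Eclipse IDE feedback", "type": "concept", "children": [
                  {"name": "Data collector feedback", "type": "metric"},
              ]},
          ]},
      ],
  }

  print(json.dumps(usage, indent=2))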
* I don't know where to introduce the "validation documents"
part. It certainly impacts or describes some aspect of maturity,
and it would be great to take it into account. I first wanted to
add it under the Process >> Test Management section, but
maybe it should stand on its own (under Process).
=> Raphaël, where do you think this section should appear?
* Regarding usage and downloads (Raphaël's mail):
Agreed that the correlation is hard to establish. From what I
understand, you would include downloads, but not necessarily in
the 'Usage' section.
=> Where would you like to see it? Under Community, or as another
concept under Usage? I put it under 'Installed base', but it is
definitely not written in stone, so please just tell me.
* Regarding visibility in conferences (Charles' mail):
Hard to measure. It could be a query on Google Scholar, or a
manually filled survey. I added it under Community.
=> Did you mean for it to go under another attribute?
As an overall conclusion, I would argue for first considering the
quality attributes [1], then the concepts used to assess them [2],
and finally the metrics used to measure the concepts [3]. In that
order, or we will soon have inconsistencies in our model.
[1] simple quality model (quality attributes only):
http://borisbaldassari.github.io/PolarsysMaturity/qm/polarsys_qm_simple.html
[2] medium quality model (quality attributes & concepts):
http://borisbaldassari.github.io/PolarsysMaturity/qm/polarsys_qm_medium.html
[3] full quality model (quality attributes, concepts &
metrics):
http://borisbaldassari.github.io/PolarsysMaturity/qm/polarsys_qm_full.html
Any comment is more than welcome.
Cheers,
--
boris
On 29/08/2014 14:53, etienne.juliot@xxxxxxx wrote:
Hi,
Here is a summary of my opinions on the previous comments:
- Validation Scenarios or Verification Procedure plans: yes,
that's a good idea, but it will be hard to implement. Imagine a
project which has 20 very poor and badly written validation
scenarios that all pass: will this project get a better score
than a project with only 3 very good and extensive scenarios,
one of which fails? If we just focus on whether or not such a
plan is available, I don't know where to find this information
in the current EF and PolarSys infrastructure. Perhaps it could
be added to the PMI for each release...
I do agree with the concerns. Including it as a mandatory field in
the PMI would also be great, because it would help projects set up
good practices.
This would hardly be applicable to
- installation metrics: as Charles
said, this metric is useful at the macro level. Of course, lots
of enterprises use internal p2 repositories, and many download
the same plug-in several times. But if you compare every project
at large, we will find a trend, and the comparison between
projects will be made with the same criteria.
- usage metrics: there was a data collector plug-in bundled in
previous release trains: https://www.eclipse.org/org/usagedata/.
It was deactivated as the data collected wasn't used by anybody.
If we think it is really useful, perhaps it can be reactivated.
- kind of users: Boris, I think that in the Apache logs you can
find, with a reverse DNS query on the IP, the domain of each user
who downloads a project. So this could become a metric of
diversity (granted, it doesn't tell you the nature of the company
behind the DNS, unless you filter out .com and .org domains).
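As a rough illustration of this reverse-DNS idea (not something that
exists in the current prototype), the following Python sketch parses
an Apache access log, resolves the client IPs to host names and
counts the distinct second-level domains as a crude diversity
indicator. The log file path and the assumption that each line starts
with the client IP (combined log format) are hypothetical.

  import re
  import socket
  from collections import Counter

  LOG_FILE = "access.log"  # hypothetical path to an Apache access log
  IP_RE = re.compile(r"^(\d+\.\d+\.\d+\.\d+)")  # combined log format assumed

  domains = Counter()
  with open(LOG_FILE) as log:
      for line in log:
          match = IP_RE.match(line)
          if not match:
              continue
          try:
              host, _, _ = socket.gethostbyaddr(match.group(1))  # reverse DNS
          except OSError:
              continue  # unresolvable IPs tell us nothing here
          # keep only the registrable part, e.g. "example.com"
          domains[".".join(host.split(".")[-2:])] += 1

  print(len(domains), "distinct domains; top 10:", domains.most_common(10))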
- voices of users or « quality » representative users: as Boris
said, we could have both. Then we can play with the measure
weights to increase the importance of each of them, depending on
PolarSys members' opinions. But in my opinion, it is very
important to have the feedback of end users, because they are
the day-to-day users. They care about stability, and sometimes
about cool features. That is not because they aren't
professional: perhaps these cool features are useful for their
productivity. (Compare with the film market: who is right to do
the review, the professional reviewers in newspapers or the
large public that creates the box office? I think the answer is
quite different between 2000 and 2014.)
- tracking talks about the project at conferences: interesting.
It should be possible to automatically analyse the Drupal
database of each EclipseCon to find the name of the project in
the description and keywords of each talk.
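Without knowing the actual Drupal schema, here is a hedged sketch of
what such an analysis could look like once the talk titles,
descriptions and keywords have been exported to a CSV file; the file
name, the column names and the project list are all hypothetical.

  import csv

  PROJECTS = ["Topcased", "Papyrus", "Sirius"]  # example project names
  TALKS_CSV = "eclipsecon_talks.csv"            # hypothetical export of the talks

  mentions = {name: 0 for name in PROJECTS}
  with open(TALKS_CSV, newline="") as f:
      for talk in csv.DictReader(f):
          # assumed columns: "title", "description", "keywords"
          text = " ".join((talk.get(c) or "")
                          for c in ("title", "description", "keywords")).lower()
          for name in PROJECTS:
              if name.lower() in text:
                  mentions[name] += 1

  print(mentions)  # number of talks mentioning each project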
- votes on the Marketplace: you should discuss with the EF whether
you need this new feature.
And here is another new idea: one criterion to state whether a
project is mature or not is the availability of professional
support, professional service providers, and training. We can
automatically find this information, as each provider has its own
entry in the Eclipse Marketplace and should list in its
description the projects it is working on: http://marketplace.eclipse.org/category/markets/training-consulting
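As a very rough sketch of how this could be automated (scraping the
listing page is only a stopgap, pagination and per-provider pages are
ignored, and the project name used here is just an example), one
could start from something like:

  from urllib.request import urlopen

  CATEGORY_URL = "http://marketplace.eclipse.org/category/markets/training-consulting"
  PROJECT = "Topcased"  # example project name

  # Fetch only the first listing page; a real implementation would walk
  # the per-provider pages and handle pagination.
  with urlopen(CATEGORY_URL) as response:
      page = response.read().decode("utf-8", errors="replace")

  print(PROJECT, "mentioned on the listing page:", PROJECT.lower() in page.lower())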
Etienne JULIOT
Vice President, Obeo
On 29/08/2014 10:53, Boris Baldassari wrote:
Hiho,
Some more comments inline. Just my opinion, though.
On 28/08/2014 14:32, FAUDOU raphael wrote:
Usage and downloads are different things, and the correlation is
hard to establish. For instance, you may have one organization
that downloads each new release in order to test new features
(including beta releases), but only for evaluation purposes,
while other organizations download one release a year and
distribute it internally through an enterprise update site. In
the end, the first organization might have downloaded a component
20 times a year, while the second will have downloaded it once a
year. From the Topcased experience, I have observed that kind of
situation many times with industrial companies.
You should be able to
calculate this metric from the statistics of the EF
infrastructure (Apache logs on the update site,
marketplace metrics, downloads of bundles when the
project is already bundled, etc.). Be careful to
remove from the log stats any Hudson/Jenkins
instances which consume the update site: they don't
reflect a real number of users.
Yes, that is a first point to take care of: the real number of
users. Because of internal update sites, I do not see how we
could get public figures about the number of users… except by
asking companies explicitly.
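To make the point above about Hudson/Jenkins traffic concrete, here
is a minimal sketch of counting update-site downloads while
discarding requests from known CI hosts. The log path, the log format
and the CI markers are all assumptions; in practice the build
servers' addresses would have to be confirmed with the webmasters.

  import re

  LOG_FILE = "updatesite_access.log"  # hypothetical Apache log of the update site
  CI_MARKERS = ("jenkins", "hudson", "ci.eclipse.org")  # assumed to flag CI traffic
  ARTIFACT_RE = re.compile(r'"GET [^"]*(?:plugins|features)/[^" ]+\.jar')

  downloads = 0
  with open(LOG_FILE) as log:
      for line in log:
          if any(marker in line.lower() for marker in CI_MARKERS):
              continue  # discard build-server traffic
          if ARTIFACT_RE.search(line):
              downloads += 1  # count plug-in/feature jar requests

  print("Update-site downloads excluding CI traffic:", downloads)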
Something interesting would also be to measure the
diversity of the users. For example, if a project is
used only by academics or by only one large company,
can we say it is mature?
Woh… that is a BIG question. Here again, the Topcased
experience showed that company size is not a
representative factor of maturity.
What would be useful to know is whether a component/tool
is used in « operations » or only in the « research »
industrial phase. For instance, at Airbus, Topcased is
used both in operations (A380, A400M and A350 programs)
and in Research and Technology (next programs), while at
CNES Topcased was only used in research programs (as far
as I know).
A second really important point to measure is the number
of concurrent users of a given component/tool. If there is
only one user on a given project, he or she can probably
adapt to the tool and find workarounds for major bugs, so
in the end the usage is considered OK. When there are
several concurrent end users, you see bugs and issues far
quicker, and complaints occur quicker. So if a project with
5 concurrent users uses a tool successfully, it has greater
value (in my opinion) than 5 usages by one person.
That's really hard to measure.
I'm quite concerned by metrics which may not be easily or
consistently retrieved. These pose serious threats to the
measurement theory [1] and to the consistency of the quality model,
and I would prefer not to measure something rather than measure it
wrongly or unreliably. This is not to say that the above-mentioned
metrics should not be used, but we have to make sure that they
indeed measure what we intend to measure, and improve their
reliability.
[1] https://polarsys.org/wiki/EclipseQualityModel#Requirements_for_building_a_quality_model
My idea is to reuse the
existing Eclipse Marketplace to host this feedback.
We can reuse ideas from other marketplaces, like the
Play Store or the App Store, where users can write a
review but also give a rating.
Well, here I'm not sure I agree. What you suggest is OK
for Eclipse technologies, but I see PolarSys as driven by
industry and not by end users. For PolarSys I would expect
industrial companies to give their feedback, but not all
end users, as that can turn into a jungle.
On an industrial project, there are quality rules for
specification, design, coding, testing… and the whole
team must comply with those rules. If we interviewed each
member of the team, we might get very different feedback
and suggestions about process and practices. That is why
it is important to get a limited number of voices, and if
possible « quality » representative industrial voices.
I don't think industry should be the only voice we listen to; this
would be a clear violation of the Eclipse quality requirements
regarding the three communities.
Listening to a larger audience IMHO really leads to better
feedback and advice. I would tend to mix both.
But perhaps I'm wrong in my vision of what PolarSys
should be. Just let me know…
Well, a great point of our architecture is that we can easily
customise the JSON files describing the model, concepts and
metrics. So it is really simple to have a PolarSys quality
model and a more generic Eclipse quality model, with
different quality requirements and measures.
I still believe that the PolarSys QM should also rely on the
other communities (especially regarding feedback) and should not
be too isolated, although some specific customisations are
definitely needed on quality attributes, concepts and metrics.
I'll let some time pass and then update the quality model with
the propositions that gathered positive feedback. I will also
try to summarise the metrics needed for the computations and
check their availability, to help discuss this specific subject.
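As a first step towards that summary, here is a minimal sketch
(assuming the nested "children" structure used in the sketch earlier
in this thread, which may not match the real definition files) that
walks a model tree, lists its leaf metrics and flags which ones we
can already compute; the availability set is purely illustrative.

  AVAILABLE_METRICS = {"Repo downloads", "Update site downloads"}  # illustrative

  def leaf_metrics(node):
      """Yield the names of all metric leaves below the given node."""
      if node.get("type") == "metric":
          yield node["name"]
      for child in node.get("children", []):
          yield from leaf_metrics(child)

  # Tiny example model with the same hypothetical shape as the earlier sketch.
  model = {"name": "Usage", "type": "attribute", "children": [
      {"name": "Downloads", "type": "concept", "children": [
          {"name": "Repo downloads", "type": "metric"},
          {"name": "Update site downloads", "type": "metric"}]},
      {"name": "Installed base", "type": "concept", "children": [
          {"name": "Installation surveys", "type": "metric"}]}]}

  for metric in sorted(set(leaf_metrics(model))):
      print(metric, "-", "available" if metric in AVAILABLE_METRICS else "missing")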
Have a wonderful day,
--
boris
Best
raphaël
Etienne JULIOT
Vice President, Obeo
On 22/08/2014 16:57, Boris Baldassari wrote:
Hiho dear colleagues,
A lot of work has been done recently around the
maturity assessment initiative, and we thought it
would be good to let you know about it and gather
some feedback.
* The PolarSys quality model has been improved and
formalised. It is thoroughly presented in the
PolarSys wiki [1a], along with the metrics [1b] and
measurement concepts [1c] used. The architecture of
the prototype [1d] has also been updated, following
discussions with Gaël Blondelle and Jesus
Gonzalez-Barahona from Bitergia.
* A nice visualisation of the quality model has
been developed [2] using d3js, which summarises
the most important ideas and concepts. The
description of metrics and measurement concepts
still has to be enhanced, but the quality model
itself is almost complete. Please feel free to
comment and contribute.
* A GitHub repository has been created
[3], holding all definition files for the quality
model itself, its metrics and measurement concepts.
It also includes a set of scripts used
to check and manipulate the definition files, and
to visualise some specific parts of the system.
* We are setting up the necessary information and
framework for the rule-checking tools: PMD and
FindBugs for now; others may follow. Rules are
classified according to the quality attributes
they impact, which is of great importance to
provide sound advice regarding the good and bad
practices observed in a project.
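For illustration only (the actual mapping files may look quite
different), such a classification could be as simple as associating
each rule with the attributes it impacts. The rule names below are
existing PMD and FindBugs rules, but the mapping format and attribute
names are examples, not the real files.

  import json

  # Illustrative mapping of rule-checker rules to quality attributes.
  # The format and the attribute names are guesses.
  rules = [
      {"tool": "PMD", "rule": "EmptyCatchBlock",
       "impacts": ["Reliability", "Analysability"]},
      {"tool": "FindBugs", "rule": "DM_GC",
       "impacts": ["Performance"]},
  ]
  print(json.dumps(rules, indent=2))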
Help us help you! If you would like to participate
and see what this ongoing work can bring to your
project, please feel free to contact me. This is
also an opportunity to better understand how
projects work and how we can do better together,
realistically.
Sincerely yours,
--
Boris
[1a] https://polarsys.org/wiki/EclipseQualityModel
[1b] https://polarsys.org/wiki/EclipseMetrics
[1c] https://polarsys.org/wiki/EclipseMeasurementConcepts
[1d] https://polarsys.org/wiki/MaturityAssessmentToolsArchitecture
[2] http://borisbaldassari.github.io/PolarsysMaturity/qm/polarsys_qm_full.html
[3] https://github.com/borisbaldassari/PolarsysMaturity
_______________________________________________
polarsys-iwg mailing list
polarsys-iwg@xxxxxxxxxxx
To change your delivery options, retrieve your password, or
unsubscribe from this list, visit https://dev.eclipse.org/mailman/listinfo/polarsys-iwg