Hi Etienne, hi all,
Sure, maturity assessment has improved a lot: congratulations to all contributors.
I mostly agree with you that we should now add a « usage » part. However, I have a few comments. Please see below.
Hi,
Here is my review of the Maturity Assessment.
I think it is a great step forward, and a great job has already been
done.
I would like to propose an improvement:
I think the "user feedback and usage" is a major key point of the
maturity evaluation of a project. You can have a project with a
very nice code, a good responsiveness of its support, a good
process for its developers, but a poor maturity from the point of
view of the end users but features are useless, ergonomic is bad,
etc.
First, I propose to add a 4th category in the quality model [2]:
the "community" category is mostly focused on the creators of the
technology and those who help the project to be adopted, so, in my
opinion, usage should not be consolidated in the same category.
Let's call it "Usage" for the rest of my email.
Into this Usage category, let's move "installed base", for which
you propose an installation survey.
You should also add a metric about the number of downloads. For me,
a project which is not used cannot be promoted as Mature.
Usage and downloads are different things, and a correlation between them is hard to establish. For instance, one organization may download each new release (including beta releases) just to test the new features, for evaluation purposes only, while another organization downloads one release a year and distributes it internally through an enterprise update site. In the end, the first organization might have downloaded a component 20 times a year while the second will have downloaded it once a year. From the Topcased experience, I have seen that kind of situation many times with industrial companies.
You should be able to calculate this metric from statistics of the EF
infrastructure (Apache logs on the update site, marketplace metrics,
downloads of bundles when the project is already bundled, etc.). Be
careful to remove from the log stats any Hudson/Jenkins instances which
consume the update site: they don't reflect the real number of users.
Yes, that is a first point to take care of: the real number of users. Because of internal update sites, I do not see how we could get public figures about the number of users… except by explicitly asking the companies.
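To make the download metric concrete, here is a minimal sketch of how the update-site logs could be filtered and counted. The log file name, the ".jar" filter and the user-agent blocklist are assumptions for illustration, not existing Eclipse Foundation tooling, and as noted above it would still only count hosts, not the real users behind internal update sites.

```python
# Minimal sketch, assuming an Apache combined-format access log for the
# update site. The file name, the ".jar" filter and the user-agent
# blocklist are illustrative assumptions, not existing EF tooling.
import re
from collections import Counter

# combined log format: host ident user [time] "request" status size "referer" "agent"
LOG_RE = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)[^"]*" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

# build servers consuming the update site do not represent real users
CI_AGENTS = ("jenkins", "hudson")

def count_downloads(log_path):
    """Count successful plug-in downloads per client host, ignoring CI traffic."""
    per_host = Counter()
    with open(log_path) as log:
        for line in log:
            match = LOG_RE.match(line)
            if not match:
                continue
            if any(ci in match.group("agent").lower() for ci in CI_AGENTS):
                continue  # skip Hudson/Jenkins instances
            if match.group("status") == "200" and match.group("path").endswith(".jar"):
                per_host[match.group("host")] += 1
    return per_host

downloads = count_downloads("updatesite_access.log")  # hypothetical log file
print("distinct downloading hosts:", len(downloads))
print("total artifact downloads:", sum(downloads.values()))
```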
Something interesting would also be to measure the diversity of
the users. For example, if a project is used only by academia or
by only one large company, can we say it is mature?
Woh… that is a BIG question. Here again, the Topcased experience showed that company size is not a representative factor of maturity. What would be useful to know is whether a component/tool is used in the « operations » phase or only in the « research » industrial phase. For instance, at Airbus, Topcased is used both in operations (A380, A400M and A350 programs) and in Research and Technology (future programs), while at CNES Topcased was only used in research programs (from what I know).
A second really important point to measure is the number of concurrent users of a given component/tool. If there is only one user on a given project, he or she can probably adapt to the tool and find workarounds for major bugs, so usage ends up being considered OK. When there are several concurrent end users, you see bugs and issues far quicker and complaints come quicker. So a tool used with success by a project with 5 concurrent users has greater value (in my opinion) than 5 usages by one person.
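As a rough illustration of how user diversity and concurrency could be consolidated, here is a minimal sketch that assumes a hypothetical installation-survey export; the file name, column names and categories are illustrative assumptions only, not an agreed survey format.

```python
# Minimal sketch, assuming a hypothetical installation-survey export
# "usage_survey.csv" with columns organization, sector, phase and
# concurrent_users; column names and categories are assumptions.
import csv
from collections import Counter

def usage_diversity(survey_path):
    """Summarise who uses the component and in which industrial phase."""
    organizations, sectors, phases = set(), Counter(), Counter()
    operational_users = 0
    with open(survey_path, newline="") as f:
        for row in csv.DictReader(f):
            organizations.add(row["organization"])
            sectors[row["sector"]] += 1    # e.g. academic / industrial
            phases[row["phase"]] += 1      # e.g. operations / research
            if row["phase"] == "operations":
                operational_users += int(row["concurrent_users"])
    return {
        "distinct_organizations": len(organizations),
        "sector_breakdown": dict(sectors),
        "phase_breakdown": dict(phases),
        "concurrent_users_in_operations": operational_users,
    }

print(usage_diversity("usage_survey.csv"))
```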
My next major proposition is to allow users to be precise about
their feedback on the project, with a view to using it
automatically in the maturity assessment.
Agree.
My idea is to reuse the existing Eclipse Marketplace to host this
feedback. We can reuse ideas from other marketplaces, like the Play
Store or the App Store, where users can write a review but also add
a rating.
Well, here, I'm not sure I agree. What you suggest is OK for Eclipse technologies, but I see PolarSys as driven by industry and not by end users. For PolarSys I would expect industrial companies to give their feedback, but not all end users, as that can lead to a jungle. On an industrial project, there are quality rules for specification, design, coding, testing… and the whole team must comply with those rules. If we interviewed each member of the team, we might get very different feedback and suggestions about the process and practices. That is why it is important to get a limited number of voices and, if possible, « quality » representative industrial voices.
So, you will have this kind of result:
<jebidihi.png>
To make the vote more precise, we can use two alternatives:
propose a fixed list of criteria, or allow a free list of pros/cons.
The first option is easier to tool up, but the second one is more
useful for the project committers (mostly to know what users like,
which is information that is very hard to find today).
For this second option, I saw a website that uses it (I can't
remember which one) and it was like this:
you have two lists to fill in, where users can freely add new lines.
Let's use this example (for only one feedback):
- What do you like?
  performance
  nice icons
- What do you dislike?
  no Linux compatibility
Then, you just have to create a simple analyzer that consolidates
the top 10 of the positive and negative feedback:
+
performance (80)
nice icons (15)
lazy loading (8)
-
everything is perfect (200)
project logo (20)
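A minimal sketch of such an analyzer might look like the following, assuming feedback is collected as plain like/dislike text lines and that consolidation is a simple exact-match count (no synonym handling); the entry list below just reuses the single-feedback example above.

```python
# Minimal sketch, assuming feedback arrives as ("like"/"dislike", text)
# pairs and that normalisation is only lowercasing and trimming
# (no real text matching or deduplication of synonyms).
from collections import Counter

def consolidate(feedback_entries, top=10):
    """Return the top N free-text likes and dislikes."""
    likes, dislikes = Counter(), Counter()
    for kind, text in feedback_entries:
        line = text.strip().lower()
        if not line:
            continue
        (likes if kind == "like" else dislikes)[line] += 1
    return likes.most_common(top), dislikes.most_common(top)

# usage with the single feedback from the example above
entries = [
    ("like", "performance"),
    ("like", "nice icons"),
    ("dislike", "no linux compatibility"),
]
top_likes, top_dislikes = consolidate(entries)
print("+", top_likes)
print("-", top_dislikes)
```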
As with mobile apps, the Eclipse Marketplace Client could provide an
API allowing Eclipse plug-ins to offer voting directly from the IDE
itself. This way, we could receive feedback from real end users, and
not only from visitors of the Eclipse websites.
Same remark: it is very likely to lead to a very long list of « cool » features, while industry first requires « stability ».
But perhaps I'm wrong in my vision of what PolarSys should be. Just let me know…
Best,
Raphaël
Etienne JULIOT
Vice President, Obeo
On 22/08/2014 16:57, Boris Baldassari wrote:
Hiho dear colleagues,
A lot of work has been done recently around the maturity
assessment initiative, and we thought it would be good to let
you know about it and get some great feedback.
* The PolarSys quality model has been improved and formalised.
It is thoroughly presented in the PolarSys wiki [1a], along with the
metrics [1b] and measurement concepts [1c] used. The architecture of
the prototype [1d] has also been updated, following discussions with
Gaël Blondelle and Jesus Gonzalez-Barahona from Bitergia.
* A nice visualisation of the quality model has been developed
[2] using d3js, which summarises the most important ideas and
concepts. The description of the metrics and measurement concepts
still has to be enhanced, but the quality model itself is almost
complete. Please feel free to comment and contribute.
* A GitHub repo has been created [3], holding all the
definition files for the quality model itself, the metrics and the
measurement concepts. It also includes a
set of scripts used to check and manipulate the definition
files, and to visualise some specific parts of the system.
* We are setting up the necessary information and framework for
the rule-checking tools: PMD and FindBugs for now; others may
follow. Rules are classified according to the quality attributes
they impact, which is of great importance for providing sound
advice regarding the good and bad practices observed in a
project.
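For illustration, a minimal sketch of how such a rule-to-attribute classification could be sanity-checked is shown below; the JSON file names and schema are assumptions and may differ from the actual definition files in the repository.

```python
# Minimal sketch, assuming hypothetical JSON layouts "rules_pmd.json"
# ({"rules": [{"name": ..., "quality_attributes": [...]}]}) and
# "quality_attributes.json" ({"attributes": [{"mnemo": ...}]}); the
# actual definition files in the repository may use a different schema.
import json

def check_rule_mapping(rules_file, attributes_file):
    """Report rules that reference unknown quality attributes or none at all."""
    with open(attributes_file) as f:
        known = {attr["mnemo"] for attr in json.load(f)["attributes"]}
    with open(rules_file) as f:
        rules = json.load(f)["rules"]
    for rule in rules:
        attrs = rule.get("quality_attributes", [])
        if not attrs:
            print(f"{rule['name']}: no quality attribute assigned")
        for attr in attrs:
            if attr not in known:
                print(f"{rule['name']}: unknown quality attribute '{attr}'")

check_rule_mapping("rules_pmd.json", "quality_attributes.json")
```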
Help us help you! If you would like to participate and see what
this ongoing work can bring to your project, please feel free
to contact me. This is also an opportunity to better understand
how projects work and how we can do better together,
realistically.
Sincerely yours,
--
Boris
[1a] https://polarsys.org/wiki/EclipseQualityModel
[1b] https://polarsys.org/wiki/EclipseMetrics
[1c] https://polarsys.org/wiki/EclipseMeasurementConcepts
[1d] https://polarsys.org/wiki/MaturityAssessmentToolsArchitecture
[2] http://borisbaldassari.github.io/PolarsysMaturity/qm/polarsys_qm_full.html
[3] https://github.com/borisbaldassari/PolarsysMaturity
_______________________________________________
polarsys-iwg mailing list
polarsys-iwg@xxxxxxxxxxx
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/polarsys-iwg