Thanks, Job. We'll give it an add.
The primary distinction between Che & Codenvy is intended to be the system's integrated workflows: connecting the workspace to Jira, Jenkins, and source repositories as part of a continuous development flow, where automation is used to create, update, and destroy workspaces continuously for developers, so that product managers, engineers, and QA always have a ready-made workspace at the click of a URL.
The choice to include elasticity as a Codenvy component has less to do with driving revenue for Codenvy and more to do with the nature of the architecture. To get true elasticity, each of the API services needs to be independently scalable onto different clusters of nodes: builders, runners, API services, and so forth. This deployment architecture quickly became complex, so Codenvy implemented a puppet backbone that doubles as an installer & updater for a multi-node system where these services are distributed. This sort of complexity is necessary if you want to run a developer workspace cloud for millions of concurrent developers, and it is really only suited to a commercial platform product like Codenvy.
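To make "independently scalable" concrete, here is a minimal sketch of the idea, with entirely hypothetical node names and cluster shapes (not Codenvy's actual topology): each service type lives on its own cluster, and you can grow one cluster without touching the others.

```python
# Hypothetical sketch of per-service clusters. The node names and counts
# are made up for illustration; the point is that each service type
# scales on its own cluster, which is what the puppet backbone has to
# install, wire together, and update across nodes.
clusters = {
    "builders": ["builder-01", "builder-02", "builder-03"],
    "runners":  ["runner-01", "runner-02"],
    "api":      ["api-01", "api-02"],
}

def scale(service, count):
    """Add `count` nodes to one cluster, leaving the others untouched."""
    existing = clusters[service]
    start = len(existing) + 1
    base = service[:-1] if service.endswith("s") else service
    existing.extend(f"{base}-{i:02d}" for i in range(start, start + count))
    return existing

scale("runners", 2)  # runners grows to 4 nodes; builders and api are unchanged
```

The operational complexity comes from the cross-product: every one of these clusters needs installation, inter-service wiring, and coordinated updates, which is why a single-node deployment is so much simpler.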
We have a vision of making Che into a multi-user product, scalable to the resources available on a single node. Instead of distributing all of the API services, they would operate within a single tomcat (or jetty), but that node would be multi-user with an embedded user database & LDAP. You would then have elasticity up to the limit of the resources available on a single node. You could deploy that Che server onto a 400GB RAM node if you want and probably get concurrent support for 100s or maybe even 1000s of developers.
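As a back-of-envelope check on that claim: the per-developer workspace footprint below is an assumed figure for illustration only, not a measured number.

```python
# Rough capacity estimate for a single-node, multi-user Che server.
# 512 MB per developer is an assumption, not a measured workspace footprint.
node_ram_gb = 400        # the 400GB node from the example above
ram_per_dev_mb = 512     # assumed average RAM per concurrent developer

concurrent_devs = (node_ram_gb * 1024) // ram_per_dev_mb
print(concurrent_devs)   # 800
```

At 512 MB per developer that lands in the hundreds; a lighter footprint (or more RAM headroom for the server itself) shifts the estimate toward or away from the thousands.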