Tracking document for gathering notes on security concerns for the Rich
Client Platform theme of Eclipse 3.0.
Change history:
2003/05/28 - MEM - started document
2003/05/29 - MEM - added development ramifications
2003/06/04 - MEM - added MEZ' concerns about priority of plugin
signatures
2003/06/11 - MEM - converted to HTML, did various edits and committed
to Equinox site
2003/06/12 - MEM - broke out signatures requirement to distinguish
signing/verifying a plugin's contents versus code signatures used for
runtime checks.
2003/06/20 - MEM - added note about plugin fragments, modifications
from feedback from Keith & Chuck
Motives
The Rich Client Platform theme of the Eclipse 3.0 plan is motivated by
the desire to develop general purpose applications (i.e. non-IDE
applications) using the Eclipse platform. A key motivation is to
leverage the Eclipse plugin architecture to create modular,
pluggable applications that are easily extensible. With this
extensibility, however, come risks that must be mitigated before the
Eclipse platform can be used for the development and deployment of
mission-critical applications. Even as an IDE, Eclipse exposes developers
of mission-critical code should they make use of rogue plugins.
Where we are now:
The Eclipse 2.1 platform does not provide any security features at this
point. There is no concept of a user (thus no user authentication
or access control), and all plugins execute as local code with no
security manager in effect. When talking to a feature server to
retrieve new plugins, no authentication takes place (is this true? Need
to verify). Jars can be examined for signatures, but there is no
enforcement or sandboxing. The documentation on the Update Manager
does recommend that users not download plugins except from a trusted
source. :-)
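For illustration only (this is standard Java, not anything the platform does
today): installing a security manager is what would turn the missing checks
on. With one in place, even a simple file read by untrusted code fails unless
the policy grants it.

    // Illustration (assumed setup, not current Eclipse behavior): install the
    // default security manager; with no extra policy grants, a file read from
    // application code fails with a SecurityException.
    import java.io.FileReader;

    public class SandboxDemo {
        public static void main(String[] args) throws Exception {
            System.setSecurityManager(new SecurityManager());
            try {
                new FileReader("/etc/passwd").close();   // triggers a FilePermission check
            } catch (SecurityException e) {
                System.out.println("denied: " + e);
            }
        }
    }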
Vulnerabilities:
Multiple users of the same installation running in the same machine
account (i.e. kiosk applications) have free access to all of each
other's data.
A rogue user co-opting a machine account can use the platform to connect
to servers as configured for that user's workspace (for example, CVS
repository connections are configured and persisted in the workspace).
Plugins run with full access to the machine. Introduction of
a rogue plugin thus enables the following types of attacks:
- plugin reads critical data on the machine
- plugin erases or modifies critical data on the machine
- plugin opens connections to other servers, either to communicate
stolen data or to download malicious code or data
- plugin modifies other plugins to spread a virus (simple virus
example: first examine other plugins to see which ones do not have
a Plugin subclass, then simply insert a new Plugin subclass into the
target plugin's directory)
Where we want to be:
User-based security:
- Need the optional concept of a 'user' within the RCP/Eclipse
world to support the secure shared use of a single Eclipse install from
within a single machine user account, i.e. the user would 'log in'
to the RCP application. Users would have independent
workspaces. This will likely require slight modification to
org.eclipse.resources in order to assert user authentication prior to
workspace association and to make use of (optional) encryption of local
user data. It should of course also support a single sign-on mode where
Eclipse user == machine user.
- Need to optionally use JCE to encrypt data in workspaces (see the
sketch following this list).
- Need to authenticate users that access feature servers for
updates/new features. (is this already there?)
- Need to authenticate feature servers that users connect
with. (is this already there?)
- Need a secure connection to the feature server. (is this
already there?)
- Provisioning - remote - need to be able to have an
administrator do a 'server push' of plugin code out to a client in a
secure manner.
- Provisioning - need the ability to establish a policy that determines
what features/plugins a user has installed.
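As a point of reference for the JCE item above, the following is a minimal
sketch of password-based encryption with the standard javax.crypto APIs. The
algorithm name, salt, and the idea of keying off the user's login password are
illustrative assumptions, not design decisions.

    // Sketch (assumptions noted above): encrypt a block of workspace data with
    // a key derived from the user's password via JCE password-based encryption.
    import javax.crypto.Cipher;
    import javax.crypto.SecretKey;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;
    import javax.crypto.spec.PBEParameterSpec;

    public class WorkspaceCrypto {
        private static final byte[] SALT = { 0x7d, 0x60, 0x43, 0x5f, 0x02, 0x19, 0x10, 0x2e };
        private static final int ITERATIONS = 1000;

        static byte[] encrypt(char[] password, byte[] clearData) throws Exception {
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBEWithMD5AndDES");
            SecretKey key = factory.generateSecret(new PBEKeySpec(password));
            Cipher cipher = Cipher.getInstance("PBEWithMD5AndDES");
            cipher.init(Cipher.ENCRYPT_MODE, key, new PBEParameterSpec(SALT, ITERATIONS));
            return cipher.doFinal(clearData);
        }
    }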
Code-based security:
- Need to be able to verify signatures of plugins at load time (not just
download time) in order to authenticate the source of the plugin and the
integrity of the plugin's files. This will likely require manifest
changes. (A sketch of load-time signature checking follows this list.)
- Need to be able to verify signatures and sources of runtime code
contained by plugins. In other words, a jar file of code provided
by a plugin may have a different signature than the plugin as a whole.
- Need to be able to protect plugins and local data from other,
malicious or otherwise destructive plugins. This will likely
require sandboxing.
- Want to be able to grant different levels of access to code from
different sources. How fine-grained? Trade-off: precision versus
ease-of-use.
- Declarative model (plugin manifest requests permissions - ask the
user at install time) versus implicit (only ask the user if a security
exception is tripped). We have to catch the latter anyway, but it
might make for a nicer UI to get as much of it out of the way early.
- Fragments - fragments currently run effectively 'merged' with the
plugin they are augmenting, but a fragment potentially comes from
a different source than the plugin. We need to make sure we run
code from fragments with the appropriate permissions.
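To make the load-time signature requirement concrete, here is a sketch using
only the standard java.util.jar verification support; how this would hook into
the plugin loader is an open question, and the surrounding class is assumed.

    // Sketch: verify that every file in a plugin jar carries a signature.
    // Fully reading each entry forces verification; unsigned or tampered
    // entries yield no certificates or throw a SecurityException.
    import java.io.InputStream;
    import java.security.cert.Certificate;
    import java.util.Enumeration;
    import java.util.jar.JarEntry;
    import java.util.jar.JarFile;

    public class PluginVerifier {
        static boolean allEntriesSigned(String jarPath) throws Exception {
            JarFile jar = new JarFile(jarPath, true);   // true = verify signatures
            byte[] buffer = new byte[8192];
            for (Enumeration e = jar.entries(); e.hasMoreElements();) {
                JarEntry entry = (JarEntry) e.nextElement();
                if (entry.isDirectory() || entry.getName().startsWith("META-INF/"))
                    continue;
                InputStream in = jar.getInputStream(entry);
                while (in.read(buffer) != -1) { /* read fully so signers are recorded */ }
                in.close();
                Certificate[] signers = entry.getCertificates();
                if (signers == null || signers.length == 0)
                    return false;   // found an unsigned entry
            }
            return true;
        }
    }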
Scenario: Installing a plugin
Initial conditions:
User has Eclipse platform installed.
User runs the platform with a security manager enabled.
Current policy includes grants for known sources (for example,
'org.eclipse' would be a trusted signature as shipped).
Steps:
User downloads a plugin to install into
the platform (Example: stock watcher)
Upon 'installing', the plugin is inspected for a signature and
declarative permission requests
if not signed
    plugin is run in tight sandbox
else if signed by known source
    is plugin asking for more permissions than code from that source is
    already granted?
    if not
        run in sandbox already defined for that source
    else
        prompt user: "this plugin (signed by known source) wants
        additional permissions X - what do you want to do?"
        case
            grant this plugin - add permissions X for this plugin from
            this source
            grant all from this source - add permissions X for all code
            signed by this source
            deny - do not add the permissions and do not install the plugin
        endcase
    endif
else if signed by unknown source
    is plugin asking for more permissions than those provided by the
    default (tight) sandbox?
    if not
        run in default sandbox
    else
        prompt user: "this plugin (signed by an unknown source) wants
        additional permissions X - what do you want to do?"
        case
            grant this plugin - add permissions X for this plugin from
            this source
            grant all from this source - add permissions X for all code
            signed by this source
            deny - do not add the permissions and do not install the plugin
        endcase
    endif
endif
Results:
The plugin runs with an appropriate set of permissions, or is not
installed if denied by the user.
In some cases, new permission information may have been persisted.
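One way to picture the "asking for more permissions than already granted" test
above, using only standard java.security classes; the declarative permission
requests themselves are assumed to have been parsed elsewhere, since no such
manifest format exists yet.

    // Sketch: does this plugin's (hypothetical) declared permission request
    // exceed what the current policy already grants to code from its source?
    import java.security.CodeSource;
    import java.security.Permission;
    import java.security.PermissionCollection;
    import java.security.Policy;

    public class GrantCheck {
        static boolean needsUserPrompt(CodeSource source, Permission[] requested) {
            PermissionCollection granted = Policy.getPolicy().getPermissions(source);
            for (int i = 0; i < requested.length; i++) {
                if (!granted.implies(requested[i]))
                    return true;    // at least one request is not covered yet
            }
            return false;           // everything requested is already granted
        }
    }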
Scenario: Running a plugin that tries
an unpermitted action
Initial conditions:
plugin has been installed and is
running with a specific set of permissions
Steps:
plugin invokes (directly or indirectly)
an unpermitted action
AccessController's checkPermission() method is invoked and fails
User is prompted "Code from the plugin "<plugin name>" signed by
"<plugin source>" is attempting to do <action> on
<target>. Do you want to allow this? Just this
once? Always? [Other possible options]"
Results:
plugin is allowed/denied the action
based on the user's choice.
Questions
if denied, do we uninstall the plugin?
if allowed, depending on user options, a new permission may be added to
the policy.
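A rough sketch of how the platform might surface the failed check as a prompt
rather than an unexplained failure; promptUser() and grantToPlugin() are
hypothetical helpers, and the retry assumes the policy is refreshed after a
grant.

    // Sketch: wrap a plugin callback so that AccessControlException becomes a
    // user prompt. The helper methods below are placeholders, not existing API.
    import java.security.AccessControlException;
    import java.security.Permission;

    public class GuardedInvoker {
        void runPluginAction(Runnable pluginAction, String pluginName, String pluginSource) {
            try {
                pluginAction.run();
            } catch (AccessControlException e) {
                Permission denied = e.getPermission();
                if (promptUser(pluginName, pluginSource, denied)) {
                    grantToPlugin(pluginName, denied);   // persist the new grant
                    pluginAction.run();                  // retry after the policy is refreshed
                }
            }
        }

        boolean promptUser(String name, String source, Permission p) {
            // UI dialog: "Code from the plugin <name> signed by <source> is
            // attempting <p>. Do you want to allow this? Just this once? Always?"
            return false;
        }

        void grantToPlugin(String name, Permission p) {
            // Would record the grant and call Policy.getPolicy().refresh().
        }
    }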
Development ramifications of adding
security to the platform
Performance ramifications of adding
security to the platform:
- It is expected that running the platform with a security manager
enabled will cause a performance hit at runtime. This impact
may be ameliorated for code that does not directly or indirectly invoke
a privileged action (and thus does not trigger a security stack walk);
however, that does not seem likely for the bulk of really useful code.
- The platform could run without a security manager enabled, and
most of this hit should go away. However, 'doPrivileged()'
sections of code may still have an impact, depending on how well
designed they are.
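For reference, this is the doPrivileged() idiom being referred to; the
configuration-file example is illustrative. The block stops the permission
check at the trusted frame, which is also where the per-call cost comes from.

    // Sketch: a trusted platform class reads its own configuration on behalf
    // of less-trusted callers. doPrivileged() stops the stack walk at this
    // frame, so callers further up the stack need not hold the FilePermission.
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.security.AccessController;
    import java.security.PrivilegedActionException;
    import java.security.PrivilegedExceptionAction;

    public class ConfigReader {
        FileInputStream openConfig(final String path) throws IOException {
            try {
                return (FileInputStream) AccessController.doPrivileged(
                    new PrivilegedExceptionAction() {
                        public Object run() throws IOException {
                            return new FileInputStream(path);   // FilePermission checked here
                        }
                    });
            } catch (PrivilegedActionException e) {
                throw (IOException) e.getException();
            }
        }
    }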