Deadlocks with TransactionalEditingDomain
Re: Deadlocks with TransactionalEditingDomain [message #86876 is a reply to message #86848]
Wed, 20 June 2007 13:29
Eclipse User
Originally posted by: cdamus.ca.ibm.com
Hi, Jan,
Please ask questions about the EMF Transaction component on the EMF newsgroup
(to which I have directed this reply).
Yes, currently the only way to avoid this is basically the only reliable way
ever to avoid deadlocks: ensure that whenever multiple locks are acquired,
they are always acquired in the same order.
Of course, in an extensible system like Eclipse Platform, this is very
difficult to do.
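[Editor's note: a minimal sketch of that ordering discipline, using illustrative names rather than anything from the post. If every path that needs both the workspace rule and the TED lock takes the scheduling rule first, the cycle in Jan's scenario below cannot form.]

import org.eclipse.core.resources.IWorkspaceRoot;
import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.jobs.Job;
import org.eclipse.emf.transaction.TransactionalEditingDomain;

public class OrderedSave {
    // Global order: scheduling rule first, TED lock second, on every path.
    void saveInOrder(TransactionalEditingDomain domain, IProgressMonitor monitor)
            throws InterruptedException {
        IWorkspaceRoot root = ResourcesPlugin.getWorkspace().getRoot();
        Job.getJobManager().beginRule(root, monitor); // 1st: workspace rule
        try {
            domain.runExclusive(new Runnable() {      // 2nd: TED lock
                public void run() {
                    // save the resource while holding both, in the agreed order
                }
            });
        } finally {
            Job.getJobManager().endRule(root);
        }
    }
}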
The Display-thread solution ("privileged runnables") works by having a
thread that owns a transaction hand it over to another thread (usually the
display thread, though similar mechanisms could be employed by any thread)
that is known by the transaction owner. The problem with Jobs is that they
don't normally know about one another, so they aren't really able to do
this kind of cooperative synchronization. However, if your Jobs do know
about one another, then it certainly is possible to implement a
"please-run-this-runnable-on-your-thread" utility by which means one thread
can run a PrivilegedRunanble on another's behalf.
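[Editor's note: for reference, the existing Display-thread mechanism is used roughly like this; a sketch assuming EMF Transaction 1.1's createPrivilegedRunnable and an active transaction on the calling thread.]

import org.eclipse.emf.transaction.RunnableWithResult;
import org.eclipse.emf.transaction.TransactionalEditingDomain;
import org.eclipse.swt.widgets.Display;

public class PrivilegedHandOff {
    // Called on the thread that owns the active transaction.
    void updateUI(TransactionalEditingDomain domain) {
        Runnable work = new Runnable() {
            public void run() {
                // read or modify the model under the borrowed transaction
            }
        };
        // Wrap the work so the display thread borrows, rather than
        // contends for, the caller's transaction.
        RunnableWithResult<?> privileged = domain.createPrivilegedRunnable(work);
        Display.getDefault().syncExec(privileged);
    }
}

A job-to-job utility would perform the same hand-off, only delivering the
privileged runnable over whatever channel the cooperating Jobs share instead
of Display.syncExec().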
The lock implementation used by the TED supports time-outs; this facility
just isn't used except internally by the lock itself. I can imagine that
there would be considerable value in having this time-out exposed as a new
transaction option. Would you mind raising an enhancement request for
that? It would be nice to see this in the 1.2 release.
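[Editor's note: what such an option might look like, purely hypothetical; the option key below is invented for illustration and does not exist in any release.]

import java.util.HashMap;
import java.util.Map;
import org.eclipse.emf.common.command.Command;
import org.eclipse.emf.transaction.RollbackException;
import org.eclipse.emf.transaction.TransactionalCommandStack;
import org.eclipse.emf.transaction.TransactionalEditingDomain;

public class TimeoutOption {
    void executeWithTimeout(TransactionalEditingDomain domain, Command command)
            throws InterruptedException, RollbackException {
        Map<Object, Object> options = new HashMap<Object, Object>();
        // HYPOTHETICAL option key, invented for illustration only; the
        // enhancement request would define the real key and semantics.
        options.put("lock_timeout", Long.valueOf(5000)); // milliseconds
        TransactionalCommandStack stack =
                (TransactionalCommandStack) domain.getCommandStack();
        // With such an option, this would fail after 5s instead of
        // blocking forever on the transaction lock.
        stack.execute(command, options);
    }
}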
Another possibility is to allow a client to configure its TED to use the
Platform's ILock implementation instead of its own lock, to take advantage
of the Platform's deadlock detection capability. I have reservations about
this, though, which are part of the reason why it wasn't done this way in
the first place. Foremost is that the Platform resolves deadlocks by
arbitrarily giving the lock away to a thread that shouldn't have it, taking
it away from its legitimate owner, which will result in data corruption.
However, this behaviour may evolve into something more practical and
clients may be happy with it, so this would be another enhancement to
consider.
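[Editor's note: for comparison, a sketch of the Platform lock in question. ILocks created by the job manager participate in its deadlock detection.]

import org.eclipse.core.runtime.jobs.ILock;
import org.eclipse.core.runtime.jobs.Job;

public class PlatformLockSketch {
    private final ILock lock = Job.getJobManager().newLock();

    void criticalSection() {
        // Locks from IJobManager.newLock() are tracked; on a detected
        // deadlock the Platform may transfer one to break the cycle,
        // which is exactly the data-integrity risk described above.
        lock.acquire();
        try {
            // guarded work
        } finally {
            lock.release();
        }
    }
}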
Cheers,
Christian
Jan Köhnlein wrote:
> Hi,
>
> we're heavily using GMF and thereby EMFT and often run into deadlocks
> involving the TransactionalEditingDomain (TED).
>
> A typical situation is that a worker job (e.g. a batch model validator)
> with the workspace root as its rule tries to execute some runnable calling
> TED.runExclusive(), while the Display thread holds the TED lock and
> tries to save a resource, attempting to acquire the workspace-root rule.
>
> Is there a way to avoid this? What we really would like to have is
> something like a tryLock or tryRunExclusive on the TED.
>
> AFAIR, a similar problem with the TED-lock and the Display-thread access
> has already been solved. Would that solution be transferable to the
> locks in job rules?
>
> Best regards
> Jan