[jakartabatch-dev] Discussion Needed: closing test coverage gaps in TCK

In addition to the still-ongoing (and interesting) TCK refactoring discussion, I wanted to kick off another important thread as we move forward: "closing TCK coverage gaps".
(This is one of the two I mentioned here: https://github.com/eclipse-ee4j/batch-api/wiki/Project-Directions#discussions-needed )

This too isn't as fun or interesting as adding new function, but it's important for the specification effort.

Unfortunately, the original JSR 352 1.0 TCK (back in 2013) left a number of gaps in coverage of the 1.0-level function. My IBM team developed the original TCK, so I take the bulk of the responsibility here.

Though we did spend some effort over the years closing a few of the gaps, we never released or shipped this work, because we never developed a process to deal with some of the tricky questions inherent in it.

How would we deal with a newly-added test of an already-existing (but under-tested) specification statement that broke an already-compliant (passing the previous TCK version) implementation? (In spite of getting some guidance from Bill Shannon, I never fully grasped the JCP rules here, and we never put together a plan.)

A key difficulty is that an existing implementation doesn't only have to do the work of fixing itself to become compliant; it potentially has to worry about backwards compatibility for its users, since conforming to the new test (if it doesn't already) will introduce a change in externally visible behavior, a potentially "breaking" change.
----

So now we are governed by the Jakarta process: https://jakarta.ee/committees/specification/tckprocess/
In my reading, the main rule it prescribes is that new tests have to be added in a minor (x.y) release, not just a service (x.y.z) release. But I think it's really up to us how much of this new testing of old assertions we're willing to take on.

I do think we have a number of gaps (again, accepting responsibility here) that we should move to close, as time permits, mixed in with the goal of adding new function, and I think we should try to move forward with some combination of the following:

* Try to head off issues with upfront community discussion before merging new tests, rather than challenging them after the fact. The TCK process would allow a compatible implementation to challenge a new test that broke it, but it'd be easier to tackle that before getting that far.

* Discuss backwards-compatibility issues as a community, to see if we can identify common patterns for solving implementation-specific issues (e.g., maybe the /job/@version attribute could be used to select "old" vs. "new" behavior). In the best case, a non-conforming implementation will say, "we were wrong, this is a bug; all users will want the new behavior." If not, I'd say it's not entirely the implementation's fault when the spec/TCK hasn't already pinned down a single, clear behavior.

* Active, compatible implementations should help the community document how to run our standalone TCK against each of them, so it's reasonable to ask someone adding a new test to run it against each of these implementations before merging.

* For certain issues, the best way to better align spec and TCK could even be to remove assertions from the spec.
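To make the /job/@version idea from the second bullet concrete, here is a minimal sketch of what opting into behavior levels via the job XML might look like. This is purely illustrative: the version attribute does exist in JSL today (fixed at "1.0" in JSR 352), but using it as a behavior switch, and the "2.0" value shown, are assumptions for discussion, not anything the spec currently defines.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical: a job that declares version="2.0" would get the
     newly-clarified ("new") behavior tested by the added TCK tests,
     while existing jobs declaring version="1.0" would keep the
     implementation's historical ("old") behavior. -->
<job id="payrollJob" version="2.0"
     xmlns="https://jakarta.ee/xml/ns/jakartaee">
    <step id="process">
        <chunk item-count="10">
            <reader ref="payrollReader"/>
            <writer ref="payrollWriter"/>
        </chunk>
    </step>
</job>
```

The appeal of this pattern is that the compatibility choice lives in the application artifact itself, so an implementation could pass the new TCK tests without silently changing behavior for existing deployed jobs.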

If this approach seems obvious, well, I'm glad you agree.

What I'm really arguing "against" is the alternate view that the spec is defined only by the lowest common denominator of what each JSR 352 1.0-compatible implementation supports today.
I'm afraid that view would strike significant, fundamental pieces out of the common, spec-guaranteed function defined for app developers.

Appreciate your thoughts,
------------------------------------------------------
Scott Kurz
WebSphere Batch and Developer Experience
skurz@xxxxxxxxxx
--------------------------------------------------------

