Re: [jakartaee-tck-dev] Input on user requirements for specifying Specification/Javadoc assertions...

Interesting subject.

IMHO, answering from the broader TCK perspective (including standalone TCKs, e.g. the Batch TCK), I think it's good to continue to remain flexible and let each spec decide.

I say "continue" since we've already got a good legacy of tests out there and we haven't gone through the trouble of mapping out assertions, and I don't see that there'd be much value in doing so now.

Every new test should at least have a comment/Javadoc that explains at a high level:
- what we are trying to test
- something about "how" (if not super obvious) - e.g. for batch a key detail is usually "do the validation in the app logic vs. do it in the test logic"
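To make the suggestion concrete, here is a minimal sketch of what such a comment might look like on a new test. This is purely illustrative: the class, method, and metric names are hypothetical, not real Batch TCK APIs, and the "job run" is simulated so the snippet stands alone.

```java
/**
 * What we are testing: that the reported read count for a step equals
 * the number of items the reader actually produced.
 *
 * How: validation happens in the *test* logic -- the test runs the job
 * (simulated below) and then asserts on the reported metric, rather
 * than asserting inside the application artifact itself.
 *
 * NOTE: illustrative sketch only; names here are hypothetical, not
 * taken from a real TCK.
 */
public class ReadCountSketch {

    // Simulated job run: pretend the reader read 10 items and
    // return the resulting "read count" metric.
    static int runJobAndGetReadCount() {
        int itemsRead = 0;
        for (int i = 0; i < 10; i++) {
            itemsRead++; // reader "reads" one item
        }
        return itemsRead;
    }

    public static void main(String[] args) {
        int readCount = runJobAndGetReadCount();
        // Validation in the test logic, not the app logic:
        if (readCount != 10) {
            throw new AssertionError("Expected read count 10, got " + readCount);
        }
        System.out.println("read count = " + readCount);
    }
}
```

The point is just the Javadoc shape: one sentence on "what", one on "how", including which side (app logic vs. test logic) carries the validation.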

But how far we'd need to go depends on some details about the test suite as a whole. Since these runtime tests are all "integration tests" (not mocked), I might need to test 10 other things along with testing one new API method.
After doing this for a few specific API methods, I might have good coverage of those 10 other things... except maybe there are a few key variations in the way they interact that I need tests for. So I'm not sure how helpful the assertion mapping would be.

To top it off (and my memory might be a bit fuzzy), I recall a few Batch TCK challenges where the details were not especially related to the original purpose of the test but rather to a side issue that had been taken for granted by one or more people.
So in terms of someone failing a given test against a given implementation, the assertion ID wouldn't have mattered anyway. (Granted, the better the test suite, the less this observation would apply, but still.)

So it's probably good for the test contributor to say in their own terms why they are adding the test, and for reviewers to check that as PRs are merged.

I think the weak link in my argument is proving we have sufficient coverage. But again, we have what we have (our legacy), and now that we are adding smaller pieces we can at least commit to adding tests for every new one.

------------------------------------------------------
Scott Kurz
WebSphere / Open Liberty Batch and Developer Experience
skurz@xxxxxxxxxx
--------------------------------------------------------



From: "Scott Marlow" <smarlow@xxxxxxxxxx>
To: "jakartaee-tck developer discussions" <jakartaee-tck-dev@xxxxxxxxxxx>
Date: 02/03/2022 03:31 PM
Subject: [EXTERNAL] [jakartaee-tck-dev] Input on user requirements for specifying Specification/Javadoc assertions...
Sent by: "jakartaee-tck-dev" <jakartaee-tck-dev-bounces@xxxxxxxxxxx>





From yesterday's TCK call, we learned that the TCK test AssertionIDs add traceability to the TCK tests. In the past, the TCK test development process started with determining a list of the most important SPEC/API requirements to test (e.g. based on what is most difficult to implement or most likely to experience problems). Picking the most important tests to add to the TCK would precede development of new TCK tests for an EE release.

Some notes are jotted down in the Platform TCK wiki page that will eventually include the requirements for adding TCK test assertions.

As a TCK user, I am interested in reading the identified Specification/Javadoc text that TCK tests are written to validate. I'm not especially interested in how the Jakarta EE development process deals with the specifics of mapping assertion IDs to the Specification/Javadoc sections; I would rather that each (newly written) TCK test include everything that I need to read in the test Javadoc comments. Another alternative could be for every TCK test failure to include context as to what the test is validating, or something equivalent.

As someone who works on TCK testing, I do like the idea of picking the most important things to test from each Specification + API document, but how should we handle doing that for new TCK tests? Is that something that each SPEC API lead has in mind as they create new standalone TCK tests? I don't expect that they would add many tests that verify unimportant things.

Thoughts?

Scott

_______________________________________________
jakartaee-tck-dev mailing list
jakartaee-tck-dev@xxxxxxxxxxx
To unsubscribe from this list, visit
https://www.eclipse.org/mailman/listinfo/jakartaee-tck-dev
