I'm sorry I've had a hard time responding. For one, the question of whether we want to switch to a different "test harness" hasn't been pinned down. On top of that, I'm not too familiar with the details of either JTHarness or Arquillian.
Let me make an attempt, though, starting from what I do know about Batch and both the standalone and Platform TCKs, and running through the issues your proposal doc raises.
Platform TCK benefits
First, there's the set of features that the Standalone Batch TCK completely lacks. Without solving these issues in detail, I'm not sure what we've really accomplished with this proposal.
1. Packaging & Deployment of Tests to Web/EJB Container
The platform TCK wraps the tests in WAR/EJB "vehicles" (that's the term, IIRC). Though the tests don't particularly exercise any subtleties or finer scoping of EE packaging (they can do their validation jobs just as well in one single WAR as in a more finely-grained packaging), we do have compatible impls, e.g. Open Liberty, that you can't just embed into the test JVM (i.e. you can't just add them to the classpath). Packaging and deploying the tests in WAR/EJB format is an important part of being able to run the tests at all.
2. Single setup with other TCKs
This isn't really an issue from the Batch POV, but rather from the Jakarta Platform implementer and certifier roles. Such a person doesn't want to go through 20 different custom setup steps and provide the same type of config values in 20 different formats. This is not something I've ever personally done, and I'm sure you are way more familiar with it than me, but my impression is that this single setup is a key part of the value the aggregate Platform TCK brings to the table.
Platform TCK negatives
There is a set of contributors who could potentially grasp the project at the standalone Specification level and make a meaningful contribution, but who could be discouraged by having to understand the complications introduced by the full Platform TCK (and we want to make it easier for them to contribute).
So these are some negatives from the batch POV.
1. Custom technology for: a) authoring b) execution (not just `mvn verify`) creates a learning curve for new devs
2. Duplication of source - Because we wish to provide the "SE Profile" and the more common `mvn verify` experience, we maintain the standalone TCK in addition to forking the artifacts into the Platform TCK. (We run a 'sed' type of find/replace, which creates its own code maintenance / debugging challenges.)
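To give a concrete sense of the kind of transform involved, here is a minimal sketch of a sed-based package-root rename from a standalone source tree into a platform one. The directory and package names here are invented for illustration; the real transform is more involved, which is exactly where the maintenance/debugging pain comes from.

```shell
# Hypothetical sketch of a 'sed' find/replace fork of TCK sources.
# Package names are made up for illustration.
mkdir -p standalone-tck-src platform-tck-src
printf 'package com.example.batch.tck.tests;\n' > standalone-tck-src/SomeTest.java

for f in standalone-tck-src/*.java; do
  # Rewrite the standalone package root to the platform one.
  sed 's/com\.example\.batch\.tck/com.example.platform.tck.batch/g' \
      "$f" > "platform-tck-src/$(basename "$f")"
done

cat platform-tck-src/SomeTest.java
# -> package com.example.platform.tck.batch.tests;
```

Any fix made on one side of such a fork has to survive (or be re-applied after) the transform, so the two trees can silently drift apart.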
Other / Misc.
Then there's a handful of other items the proposal touches on. Not that these are all trivial, but I don't see any of them as fundamentally gating the discussion. Here's a quick take on where we stand on some of them:
1. "Profile" (subsets of tests)
The Standalone TCK is essentially a subset of the Platform TCK, with the divergence being the tests that run in EE only, e.g. the tests that use global transactions. It is easy to use TestNG to do this. I'd imagine any common way of defining profiles would be fine for Batch. I think the key question is whether it is appropriate for there to be a platform vs. non-platform subset/profile of tests maintained by the spec project?
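For a concrete sense of what the TestNG-based split could look like, here is a minimal suite-file sketch. The group and package names are hypothetical, not the actual ones the batch TCK uses:

```xml
<!-- Hypothetical testng.xml for a standalone/SE profile: run everything
     except tests tagged with a platform-only group (e.g. the tests that
     need global transactions). Names invented for illustration. -->
<suite name="batch-tck-standalone">
  <test name="se-profile">
    <groups>
      <run>
        <exclude name="platform-only"/>
      </run>
    </groups>
    <packages>
      <package name="com.example.batch.tck.tests.*"/>
    </packages>
  </test>
</suite>
```

The EE-only tests would simply carry `@Test(groups = "platform-only")`, and a platform profile would drop the exclude.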
2. Reporting
Again, I'm not one who has ever run the full platform TCK, but my impression is we should allow some flexibility for the individual spec components of the aggregate. This could mean that someone running the tests has to know their way around the logs / results structures of individual specs, and that some custom logic is needed to aggregate it all together. On the other hand, if getting to the point of a single, common deployment/setup means we are close enough to define a single reporting mechanism as well, that might make sense and I'd be OK with that too.
3. properties files?
While we have some batch TCK-specific properties, they are tightly coupled to the fine details of the batch tests (e.g. they allow you to tweak wait times to trade off test execution time against spurious failures from not waiting long enough).
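As a rough illustration of the kind of knobs involved (the property names here are invented, not the actual batch TCK keys):

```properties
# Hypothetical batch-TCK-style tuning properties. Longer waits mean
# slower runs but fewer spurious failures on slow machines.
job.execution.wait.timeout.ms=30000
job.execution.poll.interval.ms=100
```

Because these only make sense against the batch tests' own polling logic, I don't see much to gain from forcing them into a common cross-TCK config format.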
4. sigtest
We haven't talked much about signature tests. I'll note we are currently duplicating these too. I'm sure these would be easier to collapse than runtime tests.
5. test assertionids/Javadoc
Our effort here has been of mixed quality: in some areas we are OK, in others somewhat weak. As long as we can grandfather in our starting point, it would be easy enough to switch to any particular format (through whatever annotation, etc. mechanism).
6. maven artifacts
We already release our Standalone Batch TCK tests as Maven artifacts, but I don't think this helps anyone much without solving some of the common setup/harness problems.
Conclusion
So, Scott M., I hope that moves the discussion along, though it might not have been exactly the answer you were looking for. I also realize I might not have kept up with other ML conversations. If there's another forum where you'd hope I'd bring these points up, feel free to let me know.