
Re: [jakartabatch-dev] Kick off conversation on next Jakarta Batch release - Jakarta EE 10 ? Batch + CDI integration?



On Wed., Mar. 24, 2021 at 14:56, Scott Kurz <skurz@xxxxxxxxxx> wrote:

Thanks all for the quick and interesting replies.


Romain, I'll probably have to spend some time getting a handle on your ideas, but will try to take a look. It sounds like your idea might fit better with a new project, e.g. "Jakarta Reactive Batch", than with continuing this one, but this ML is a fair place to discuss it for now.


Don't hesitate to ping me offline if easier.
Also agreed it could be a new project, but it also sounds like a match for a 3.x, since 1.0 hasn't seen much industry adoption yet (due to a lot of small details making it not very Java-friendly, IMHO). I think it is saner to realign the spec on current development habits than to fix those small things one by one while blocking the move to a more modern dev API, a bit like the EJB 2 -> EJB 3 move, which changed everything and made the spec adoptable precisely because of that change.
 




>On Mar 23, 2021, at 11:27 AM, Michael Minella <mminella@xxxxxxxxxx>
>wrote:
>
>
> Thanks Scott for kicking this conversation off. As you noted in your
>comment, the decision for not requiring CDI was made at the time due
>to the interest in having Spring Batch implement the JSR without
>rewriting a large portion of Spring Batch. To be clear, I still see
>that this is the right approach. I'm not clear as to how requiring
>CDI at the spec level improves the experience for the user of this
>specification. We allow an implementation to require a specific DI
>implementation and that seems to work fine, giving the end users the
>type of choice we would expect. I personally would like to see a
>list of functionality we want to enable via CDI that is not available
>otherwise. Again, what does requiring CDI support do for the end
>user? Keep in mind, the actual configuration of jobs/etc is
>completely independent of the DI infrastructure to the user.


Michael, one case we've gotten feedback on is that it's cumbersome to have only a single user data object to pass via Job/StepContext; users would like to use CDI beans with a selection of existing scopes plus new job/step batch scopes.

CDI also would allow sharing data between the application hosting the JobOperator client that submits the job and the job's batch artifacts, without having to parameterize everything via JSL properties (this overlaps somewhat with the use of job builder APIs for the same purpose, but it isn't identical).
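To make the first point concrete, here is a minimal container-free sketch contrasting the two styles. StepContextStub mimics only the transient user-data slot of jakarta.batch.runtime.context.StepContext; StepMetrics and the @StepScoped comment are hypothetical illustrations, not spec API.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the user-data slot on jakarta.batch.runtime.context.StepContext:
class StepContextStub {
    private Object transientUserData;
    public void setTransientUserData(Object data) { this.transientUserData = data; }
    public Object getTransientUserData() { return transientUserData; }
}

// Today: all step-local state gets squeezed into one Object and cast back out.
class TodayStyle {
    static int readCount(StepContextStub ctx) {
        Map<?, ?> data = (Map<?, ?>) ctx.getTransientUserData();
        return (Integer) data.get("readCount"); // unchecked and error-prone
    }
}

// With a hypothetical @StepScoped bean, the same state is typed and injectable:
// @StepScoped  (illustrative annotation, not in the spec today)
class StepMetrics {
    private int readCount;
    void increment() { readCount++; }
    int readCount() { return readCount; }
}
```

The scoped-bean version also lets several batch artifacts in the same step share the state by injection instead of threading a Map through the context.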


I don't see how to provide portability here without picking a specific DI implementation, with the scopes and mechanisms (like qualifiers) it provides, and the natural choice for Jakarta Batch is to take advantage of the rest of the Jakarta platform, which leads to CDI.

CDI support also opens the door to other ideas, such as integrating with CDI events (I'm not seeing that on the issues list right now, but I'm pretty sure the idea has been raised).


To illustrate that point: a common ask is to have @JobScoped/@StepScoped beans (reusing the BatchEE semantics, though I guess the naming is fairly obvious anyway). In Spring land these would map to the existing @StepScope and @JobScope, so no, it is not portable today; but if both "scoped" annotations are added to the spec, Spring can wire them trivially with a stereotype or a post-processor, so I don't see that as a big blocker even for the Spring platform.
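As a sketch of what such a spec-level scope annotation could look like (hypothetical name following the BatchEE precedent mentioned above; a real CDI scope would also be meta-annotated with jakarta.enterprise.context.NormalScope, omitted here to keep the sketch container-free):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical spec-level scope annotation; in a real CDI integration it
// would additionally carry @jakarta.enterprise.context.NormalScope.
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD})
@Retention(RetentionPolicy.RUNTIME)
@interface StepScoped {}

// A bean whose lifecycle would span a single step execution:
@StepScoped
class CheckpointTracker {
    private long lastCheckpointId;
    void record(long id) { lastCheckpointId = id; }
    long last() { return lastCheckpointId; }
}
```

On the Spring side, such an annotation could be mapped onto the existing @StepScope with a stereotype or bean post-processor, which is the wiring described above.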

In terms of eventing, it is probably a bit more impactful, since @Inject Event<BatchEvent> event; or void onEvent(@Observes BatchEvent e); would have to be rewired onto the Spring event bus, which has a completely different API on both sides.
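A container-free stand-in for that fire/observe pattern (BatchEvent is a hypothetical event type; the real CDI API would be jakarta.enterprise.event.Event<T> plus @Observes methods, and Spring's counterpart is ApplicationEventPublisher with @EventListener):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical event type a batch runtime might fire around step transitions:
class BatchEvent {
    final String stepName;
    BatchEvent(String stepName) { this.stepName = stepName; }
}

// Plays the role of an injected Event<BatchEvent> plus the observer registry:
class BatchEventBus {
    private final List<Consumer<BatchEvent>> observers = new ArrayList<>();
    void observe(Consumer<BatchEvent> observer) { observers.add(observer); } // ~ @Observes method
    void fire(BatchEvent event) { observers.forEach(o -> o.accept(event)); } // ~ event.fire(...)
}
```

The two halves (firing and observing) are exactly the two sides that would need rewiring onto Spring's event bus.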

That said, I don't see why it would be a blocker for Spring: Spring already partially integrates most of the Java EE/Jakarta EE specs (Servlet or JAX-RS, for example) into the Spring context, so from the Spring point of view this would just be another Jakarta EE-specific API, like any other IoC integration across the Jakarta EE specs, no?

AFAIK the biggest user concerns with JBatch are the XML job definition and the untyped components. The latter can be fixed trivially, but the former requires binding the definition to Java beans, and Spring has its own Java config with different rules than CDI's. So I assume the JBatch DSL itself could be shared (a JobBuilder or similar), but the way to register a job will differ (likely a CDI event vs. a @Bean in Spring land; or, if we care about aligning them, both could use bean scanning in their *own* context, which by construction does not solve much of the problem).
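A purely hypothetical sketch of such a typed DSL (none of these names exist in the spec today; this only illustrates how a JobBuilder could replace the XML JSL while leaving registration container-specific, as discussed above):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical typed job definition that would replace the XML JSL document:
class JobDefinition {
    final String name;
    final List<String> stepNames;
    JobDefinition(String name, List<String> stepNames) {
        this.name = name;
        this.stepNames = Collections.unmodifiableList(stepNames);
    }
}

// Hypothetical fluent builder; how the built definition gets registered
// (a CDI event vs. a Spring @Bean) would stay container-specific.
class JobBuilder {
    private final String name;
    private final List<String> stepNames = new ArrayList<>();
    JobBuilder(String name) { this.name = name; }
    JobBuilder step(String stepName) { stepNames.add(stepName); return this; }
    JobDefinition build() { return new JobDefinition(name, new ArrayList<>(stepNames)); }
}
```

Usage would look like: new JobBuilder("nightly-load").step("read").step("write").build().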

So overall, JBatch, like any spec, needs to integrate with the rest of the platform to be more fluent and consistent with it, i.e. with CDI, as has been the case since the Java EE days.
This does not mean Spring has to follow these parts of the spec, as has always been the case; and if that is the concern, we can make it explicit by using the common "EE case" section to define these rules, so Spring can skip that whole part, and the related group in the TCK, I guess.
 





>
>You pose three options. The first one is to require CDI, the second
>(CDI is optional) is really what we currently have (no required
>changes), and the third then introduces this odd situation of
>requiring CDI only in specific situations that really doesn't help
>those who choose not to use CDI in the first place. For me, I only
>see choice one and two as the real options. I don't see any
>implementation that does not use CDI duplicating the effort both

>across their other DI of choice and CDI.


The key difference from the status quo I'm envisioning is to somehow define the Batch integrated CDI behaviors, and write TCK tests that enforce them.


Today each implementation is free to integrate with CDI however it wants. There are maybe some de facto rules or constraints already implied by CDI at the platform level but this is the grey area I'm hoping to pin down and clarify.

E.g. should a CDI `RequestScoped` context be propagated from JobOperator "client" to executing job? Should any of the other scopes? This needs to be clarified to achieve portability within the platform when using CDI.


Side note: the request-scoped issue is not really an issue, since request scope is bound to a thread (even in the Servlet spec it does not follow the async context lifecycle) and JobOperator changes threads by spec, so it can't be propagated; but this is one of the scopes we could introduce (@ExtendedJobScoped). For the other *built-in* scopes it is much the same: session scope is bound to the session, so decoupled from the job lifecycle; application scope is global, so shared; etc. This behavior all comes from CDI and should be inherited as-is in JBatch, otherwise it will conflict with other specs usable with JBatch (starting with the transaction spec, which has a transaction scope that is close to request scope in spec terms but aligned on the transaction, so on the thread as of today, and not inheritable).
 



In my mind, all three of these options enforce Batch + CDI behavior, and the question would be what, if anything, would be said about impls that don't support it.


> To be 100% transparent, as
>Spring Batch evaluates the roadmap for Spring Batch 5 (built on top
>of Spring Framework 6, etc), we are actively considering deprecating
>our implementation of JSR-352 and removing it in Spring Batch 6. The
>number of users consuming Spring Batch via the spec is too small to
>warrant the ongoing maintenance burden.


Appreciate you being transparent and pointing that out.


> Add to that, we were never
>able to fully validate our implementation due to the transaction
>pieces of the TCK being dependent upon the entire application server
>certification process (unable to run in an SE environment).


This sounds like something that could be addressed along with my other 'batch-tck' idea there. Originally we were constrained by Oracle's construction of the EE TCK (CTS) framework, and we never made our own standalone SE TCK 100% equivalent.
(And, as mentioned, we're in the unhappy state of needing to commit the tests twice now.)


+1. Bean Validation and CDI already have TCK groups (you can be Bean Validation "SE" compliant but not EE compliant, where EE = SE + more) just by configuring the test suite and showing publicly that you pass the related group of tests.
AFAIK it is just a matter of making it explicit, textually: i.e. defining that com/ibm/jbatch/tck/tests/jslxml is the SE TCK, and com/ibm/jbatch/tck/tests/jslxml + com/ibm/jbatch/tck/tests/ee the full (EE) TCK suite.

My 2cts: Spring can pass the EE TCK, since it is only about JTA, which Spring supports well; so, as of today, it is just a matter of doing the *standalone* configuration of the TCK launcher (which is done, AFAIK; no need to run the CTS setup with EJB to pass these tests). And if you target full EE compatibility, you run it in an EE container (like WildFly/TomEE) and disable its JBatch impl to run yours instead.
Not sure any Jakarta spec can do better than that: enable standalone compatibility and prove EE integration. Sounds like the minimum delivery, no?
 







>From: Reza Rahman

>For me, option three is the way to go. It gives CDI users guarantees
>of what to expect in environments where CDI is available whilst not
>impacting non-CDI users and implementations. The CDI related items on
>the current issue list is a good representation of gaps seen in Batch
>when used in Jakarta EE runtimes.
>
>I believe this is the approach used by Faces fairly successfully in
>supporting use in both Jakarta EE and non-Jakarta EE environments.
>
>Reza Rahman
>Jakarta EE Ambassador, Author, Blogger, Speaker


Reza, option three does seem like one way to compromise.


The thing I dislike about this is if you distill it to the bottom-line takeaway: "if you want Jakarta Batch to be 100% portable across all impls then avoid CDI". This seems to me in some ways exactly contrary to what we should aspire for the Jakarta platform.

True, once you achieve a certain level of expertise, you can better understand what would tie you to Spring, CDI, etc., and weigh that against your desire for portability. But I think people sometimes evaluate implementations against these portability statements early in the cycle, and a simple "it's all portable" is what a spec should aim for.

I do wonder if another way to compromise, from the perspective of getting CDI to work with Spring, would be to package a CDI impl like Weld right into your app, along with the Spring Batch and other impls. Could this be feasible? I've never tried it or read about it.

If so we could view the unit of portability as the Java classes & XML artifacts defining the job rather than the WAR/EAR package. That seems like it could potentially be useful to me, if the interest is there.

------------------------------------------------------
Scott Kurz
WebSphere / Open Liberty Batch and Developer Experience

skurz@xxxxxxxxxx
--------------------------------------------------------

_______________________________________________
jakartabatch-dev mailing list
jakartabatch-dev@xxxxxxxxxxx
To unsubscribe from this list, visit https://www.eclipse.org/mailman/listinfo/jakartabatch-dev
