AFAIK JBeret ends up needing to call start(Job, ...) on the operator, which has the drawback of losing the job registry feature; that feature is key as soon as you have a UI/API on top of the job operator, so registration at startup is important for me - otherwise it kind of defeats the management API JBatch provides with JobOperator and its discovery.
Regarding CDI producer vs. startup event registration (likely with a fluent API, as with CDI configurators, in both cases), the main difference is the startup and runtime impact each implies, I think.
At startup the event is clearly faster in general (1 bean vs. N), and at runtime it is a bit faster too, since it does not affect other lookups and resolutions in the application for no reason (the beans are actually needed only once); lookups stay faster and memory usage lower. It also avoids unexpected ambiguous-resolution errors in some cases when adding new jobs or composing libraries of jobs.
The last point is coding style, but the two options are pretty equivalent there and both can rely on injection to ease writing the repetitive parts, so I guess coding style is not the deciding factor. The event makes the callback explicit, whereas the producer approach requires the AfterDeploymentValidation event to initialize all beans implicitly - which can break surprisingly if a producer relies on a bean not available at that time.
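[Editor's note: to make the registry point above concrete, here is a minimal plain-Java sketch. All names are invented for illustration; in the real spec the discovery and start calls would be JobOperator.getJobNames() and JobOperator.start(). The point is that jobs registered once at startup stay discoverable by name, which is what a UI/API layered on the operator needs.]

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical mini job registry, illustrating why up-front
// registration preserves the discovery a management UI relies on.
public class JobRegistrySketch {
    private final Map<String, Runnable> jobs = new LinkedHashMap<>();

    // Called once at startup, e.g. from a CDI startup event observer.
    public void register(String name, Runnable body) {
        jobs.put(name, body);
    }

    // Discovery, analogous to JobOperator.getJobNames().
    public Set<String> jobNames() {
        return jobs.keySet();
    }

    // Start by name, analogous to JobOperator.start(name, props).
    public void start(String name) {
        Runnable job = jobs.get(name);
        if (job == null) {
            throw new IllegalArgumentException("unknown job: " + name);
        }
        job.run();
    }

    public static void main(String[] args) {
        JobRegistrySketch registry = new JobRegistrySketch();
        registry.register("payrollJob", () -> System.out.println("running payrollJob"));
        System.out.println(registry.jobNames()); // [payrollJob]
        registry.start("payrollJob");
    }
}
```

If jobs exist only inside producer beans that are resolved on demand, a UI has nothing to list until every producer has been forced to run; registering at startup keeps the name-based lookup and listing intact.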
It looks like what JBeret does is automatically register a job
defined programmatically:
https://jberet.gitbooks.io/jberet-user-guide/content/programmatic_job_definition_with_java/.
That seems a fine approach. Is there a way to get those folks to
chime in? Maybe they would say that a CDI producer based approach
is actually better and more type-safe? What does Spring Batch do?
Reza Rahman
Jakarta EE Ambassador, Author, Blogger, Speaker
Please note views expressed here are my own as an individual
community member and do not reflect the views of my employer.
On 4/6/2021 8:56 AM, Scott Kurz wrote:
I'm not trying to take us too far into
orchestration.
But we need to decide if we're going to define
"registration" of the Java-defined job such that some other
Jakarta component (a servlet, etc.) can start the job by name in some context
(a WAR, some other scope, etc.), like the XML definition in WEB-INF/classes/META-INF/batch-jobs provides for us.
So while one might argue this shouldn't be a spec
concern, it seems Romain (who sketched out a piece of this)
and Reza think this is worth discussing more.
I don't see JBeret doing this but I'm not that
familiar. Reza suggested maybe this can be done with CDI
producers. If the "standard" way were tied to CDI only, I
think that would be OK.
------------------------------------------------------
Scott Kurz
WebSphere / Open Liberty Batch and Developer Experience skurz@xxxxxxxxxx
--------------------------------------------------------
Michael Minella wrote on 04/05/2021 12:22:32 PM:
With regards to this issue, if I could
provide some visibility to how Spring Batch views this. The
launching of a job and the mechanisms involved in it are really
an orchestration concern. Spring Batch, from the beginning, has
taken no position on how jobs are orchestrated. We provide a
single class that could help as a utility, but most of the time
it wasn't used. Spring Boot provides the ability to launch batch
jobs. Spring Cloud Data Flow can launch batch jobs. Cron,
Control-M, etc. can all launch batch jobs. But Spring Batch
itself stays out of orchestration concerns. This is by design, to
allow Spring Batch to integrate with whatever orchestration tool
an enterprise chooses to use. If you are familiar with batch
systems in large enterprises, orchestration tools can be a bit
of a religion and not a place that is easily changed. It's best
to allow integration with whatever the enterprise has chosen
rather than to be prescriptive about how to orchestrate batch jobs.
Just my two cents.
Thanks, Michael Minella (He/Him)
Sr. Manager - Spring Engineering mminella@xxxxxxxxxx
3401 Hillview Avenue, Palo Alto, CA 94304
Just to try to highlight one point of Reza's answer: it is exactly
the goal of a Java DSL: not having any "references" which are
actually indirections, making things harder for end users without any
real gain. With the Java DSL you get injection as in any CDI
app (or Spring) in the job-defining bean, and you reference the
JBatch bean/backing bean with a ... reference in a lambda, not
an alias.
The Spring XML -> Java movement was exactly this - even if
technically a bit different - and today almost nobody writes
descriptors anymore (from Jakarta EE, which is slowly dropping
them, to Spring).
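[Editor's note: a tiny self-contained sketch of that contrast. The DSL names here are invented, not a proposed spec API: the XML descriptor refers to the batchlet by a string alias, while a Java DSL passes a typed reference the compiler can check.]

```java
import java.util.function.Supplier;

// Invented mini-DSL for illustration only.
public class JavaDslSketch {

    record Step(String name, Supplier<String> batchlet) {}

    static Step step(String name, Supplier<String> batchlet) {
        return new Step(name, batchlet);
    }

    // The "backing bean" logic the step delegates to.
    static String processPayroll() {
        return "payroll processed";
    }

    public static void main(String[] args) {
        // XML style would be: <batchlet ref="payrollBatchlet"/> -- a string
        // alias resolved at runtime. Java DSL style: a direct method
        // reference, checked at compile time and navigable in an IDE.
        Step step = step("step1", JavaDslSketch::processPayroll);
        System.out.println(step.batchlet().get()); // payroll processed
    }
}
```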
If you look at the JBeret implementation, that looks about right
to me. In general, job configuration would be rather static
except for the parameters perhaps. I would expect to define it
in a factory or CDI producer and reference it later, ideally in
a type-safe way or at least by name. There is a case for
dynamically building jobs, but it is fairly rare.
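[Editor's note: the factory/producer point above can be sketched in a few lines of self-contained Java. The names are invented; in a real application the factory method would be a CDI producer and Job would be the spec's job model.]

```java
// Hypothetical sketch of defining a job once in a factory and
// referencing it type-safely rather than through a string name.
public class JobFactorySketch {

    record Job(String name) {}

    // Plays the role a CDI producer method would play: the job is
    // defined in one place and referenced through this method.
    static Job payrollJob() {
        return new Job("payrollJob");
    }

    public static void main(String[] args) {
        // Type-safe reference: renaming payrollJob() is a compile-time
        // refactor, whereas the string "payrollJob" is only checked at runtime.
        Job job = payrollJob();
        System.out.println(job.name()); // payrollJob
    }
}
```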
Hopefully the JBeret and Spring Batch folks can chime in? Both
should have some implementation experience on this already?
Reza Rahman
Jakarta EE Ambassador, Author, Blogger, Speaker
Please note views expressed here are my own as an individual
community member and do not reflect the views of my employer.
On Apr 1, 2021, at 8:49 AM, Scott Kurz <skurz@xxxxxxxxxx>
wrote:
Are we just looking to provide an API to dynamically
construct a job and then go and immediately execute it?
I'm sure that'd be useful. I mean to say, are we ALSO
looking to provide an API to "register" the job by name so
that it could be easily invoked later, by name and job
params, for example?
Using the existing interface for starting an XML-defined
job:
start(String jobXMLName, Properties jobParameters)
it's certainly easy to expose that remotely.
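[Editor's note: a sketch of the "easy to expose remotely" point. JobOperatorLike is a stand-in for jakarta.batch.operations.JobOperator, invented here so the example is self-contained; the handler name and shape are hypothetical.]

```java
import java.util.Map;
import java.util.Properties;

// A remote endpoint turns request parameters into Properties and
// delegates to the operator's start(jobXMLName, jobParameters).
public class RemoteStartSketch {

    interface JobOperatorLike {
        long start(String jobXMLName, Properties jobParameters);
    }

    static long handleStartRequest(JobOperatorLike operator,
                                   String jobName,
                                   Map<String, String> queryParams) {
        Properties params = new Properties();
        queryParams.forEach(params::setProperty);
        return operator.start(jobName, params);
    }

    public static void main(String[] args) {
        // Fake operator that just echoes an execution id.
        JobOperatorLike op = (name, props) -> {
            System.out.println("starting " + name + " with " + props);
            return 1L;
        };
        long executionId = handleStartRequest(op, "payrollJob",
                Map.of("asOfDate", "2021-04-01"));
        System.out.println("executionId=" + executionId);
    }
}
```

Since the job is addressed purely by name plus a flat Properties bag, any transport (servlet, REST, scheduler) can front it; the open question in the thread is how a Java-defined job gets that name in the first place.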
I haven't looked at the existing implementations and their
approaches yet.
But I'm wondering what people were thinking this would look
like?