What I’m trying to achieve is a sequence of jobs run such that job 2 does not start until job 1 has finished, and I don’t think that’s actually what I’m producing here.
If you schedule jobs, you cannot control when a job will eventually be evaluated: the start time is influenced, for example, by other jobs that run in parallel, and by the databases that are accessed by your query. An example:
prof:dump(jobs:eval("delete nodes db:open('DB')/*")),
insert node <a/> into db:open('DB')/*
This query will update DB, and the scheduled query can only be started after the current query has finished. In addition, there are options such as FAIRLOCK or PARALLEL that further complicate the manual orchestration of job executions.
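If you want to observe this for yourself, a minimal sketch (assuming the jobs:list-details function of the Jobs Module) lists each registered job along with its current state:

(: list every registered job and whether it is scheduled, queued, or running :)
for $job in jobs:list-details()
return $job/@id || ': ' || $job/@state

While a job is still waiting for its locks, its state will show up as scheduled or queued rather than running.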
Is it simply a matter of doing a jobs:wait(jobs:eval()) on each job, or is there something better?
The wait function will return immediately if the job id is unknown; but it could also be that the job has not even been started yet. The following code would probably achieve what you’re trying to do:
let $query := 'prof:sleep(1000)'
let $job-id := jobs:eval($query, (), map { 'start': 'PT0.1S' })
return hof:until(
  function($_) { not(jobs:list() = $job-id) },
  function($_) { prof:sleep(100) },
  ()
),
'finished'
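As a variant, and assuming your BaseX version provides jobs:finished from the Jobs Module, the polling predicate can be written more directly:

(: poll until the job id no longer denotes a known, unfinished job :)
let $job-id := jobs:eval('prof:sleep(1000)', (), map { 'start': 'PT0.1S' })
return hof:until(
  function($_) { jobs:finished($job-id) },
  function($_) { prof:sleep(100) },
  ()
),
'finished'

Note that jobs:finished also returns true for unknown job ids, so it shares the caveat described above.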
I wouldn’t recommend that, because of the transactional semantics, and because of side effects that can occur in a production environment and that might block your orchestration code forever. A cleaner solution is to schedule the next job from inside your query:
(: query1.xq :)
db:create('db'),
jobs:eval(xs:anyURI('query2.xq'))
(: query2.xq :)
...,
jobs:eval(xs:anyURI('query3.xq'))
(: query3.xq :)
...
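To make the pattern concrete, here is a hypothetical body for query2.xq; the db:add call and the file name are illustrative, not part of the original chain:

(: query2.xq — hypothetical middle step: perform this job's update,
   then schedule the next job in the chain :)
db:add('db', <data/>, 'data.xml'),
jobs:eval(xs:anyURI('query3.xq'))

Because the current job holds its database locks until its transaction completes, query3.xq cannot start before query2.xq has finished.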
And the cleanest solution is to define as many updates as possible in one single query. This way, you benefit most from the numerous update optimizations performed by BaseX (e.g., 100 single insert queries can be much slower than one large insert query).
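For illustration, a minimal sketch of such a consolidated update; the database name 'DB' and the element names are assumptions:

(: one query, one transaction: all 100 nodes are inserted at once,
   so BaseX can collect and optimize the pending updates :)
insert nodes (
  for $i in 1 to 100
  return element item { $i }
) into db:open('DB')/root

This performs the same work as scheduling 100 separate single-insert jobs, but within one transaction and one pending update list.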
Best,
Christian