for $msg in //MSG
let $evi := $msg//TMCE/EVI[1]
let $updclass := $evi/@updateclass
let $tmcloc := $msg//MLOC/TMCL[1]
let $dir := $tmcloc/@direction
let $loccode := $tmcloc/@primarycode
let $alertc-id := concat($loccode, $dir, $updclass)
return
insert node (attribute alertc-id {$alertc-id}) into $msg
If I simply return a similar result instead, using the following query:
for $msg in //MSG
let $evi := $msg//TMCE/EVI[1]
let $updclass := $evi/@updateclass
let $tmcloc := $msg//MLOC/TMCL[1]
let $dir := $tmcloc/@direction
let $loccode := $tmcloc/@primarycode
let $alertc-id := concat($loccode, $dir, $updclass)
return element enhancedmsg {attribute alertc-id {$alertc-id}, $msg}
it takes just 27 s.
With insert node, however, it takes forever; I had to cancel the query after 90 minutes.
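For reference, a non-updating variant of the same operation using a transform expression (copy/modify/return) would look roughly like this (just a sketch, performance untested):

for $msg in //MSG
let $evi := $msg//TMCE/EVI[1]
let $tmcloc := $msg//MLOC/TMCL[1]
let $alertc-id := concat($tmcloc/@primarycode, $tmcloc/@direction, $evi/@updateclass)
return
  copy $c := $msg
  modify insert node attribute alertc-id {$alertc-id} into $c
  return $c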
I wonder if there is any relation between this long execution time and a possible inefficiency in automatic index updating (but I am not aware of the current status of this feature).
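If it is related to index maintenance, I suppose the index structures could be rebuilt once after the bulk update instead. Assuming BaseX, that could look like the following ('trafficdb' is just a placeholder for my database name):

(: rebuild all index structures once, after the bulk update; 'trafficdb' is a placeholder :)
db:optimize('trafficdb', true())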
Currently, it seems it would be much more efficient to output the results to files and then import them back, as importing the data I am working with took just 2 minutes.
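A sketch of that export variant, assuming the EXPath File Module (file:write) is available; the output directory 'export/' is just a placeholder:

(: write each enhanced message to its own file for later re-import :)
for $msg at $pos in //MSG
let $evi := $msg//TMCE/EVI[1]
let $tmcloc := $msg//MLOC/TMCL[1]
let $alertc-id := concat($tmcloc/@primarycode, $tmcloc/@direction, $evi/@updateclass)
return file:write(
  'export/msg-' || $pos || '.xml',
  element enhancedmsg {attribute alertc-id {$alertc-id}, $msg}
)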
Attached are a few sample XML files in the format I use in my database. The real database consists of 129097 MSG elements, and the number of DOC elements and documents is exactly the same.
With best regards
Jan