Hi Christian,
I have another issue with the following code snippet. It works fine with small JDBC result sets returned from the database.
But if the SQL query returns, say, 40 million rows, you easily get an out-of-memory exception. I guess BaseX first reads all 40 million rows into memory, and only then lets the user work with the SQL data.
Is there a way for BaseX to read the SQL rows block by block (say, 1000 at a time) and write them to a file?
I appreciate your help.
declare variable $sqlfile as xs:string external;
declare variable $outputfile as xs:string external;

let $lines := file:read-text-lines($sqlfile)
let $lines := for $line in $lines return fn:normalize-space($line)
let $sqlStmt := fn:string-join($lines, " ")
let $addBegin := file:append-text($outputfile, "<RECORDS>", "UTF-8")
let $url := "jdbc:oracle:thin:"
let $conn := sql:connect($url)
let $res := sql:execute($conn, $sqlStmt)
let $list :=
  for $rec in $res
  let $is :=
    for $line in $rec/*
    return element { fn:data($line/@name) } { $line/text() }
  return file:append($outputfile, <record>{$is}</record>)
return ($list, file:append-text($outputfile, "</RECORDS>", "UTF-8"))
On Tue, Jan 6, 2015 at 4:22 AM, Christian Grün <christian.gruen@gmail.com> wrote:
Hi Erol,
I have several XML document indexes keeping product features, and one master index file of products. Currently, we have about 25 million products, with an XQuery index size of about 25 GB.
One solution is to generate a BaseX command script, which contains multiple subsequent update operations. It can e.g. be created as follows:
let $records := count(db:open('db')//product-node)
let $commands :=
  for tumbling window $w in (1 to $records)
    start at $s when true()
    only end at $e when $e - $s = 999999
  return
    'for $product-node in //product-node' ||
    '[position() = ' || $s || ' to ' || $e || '] ' ||
    'return insert node <a/> into $product-node'
return file:write-text-lines('script.bxs', $commands)
...and run it via "basex script.bxs".
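For illustration, the first command written to script.bxs by the snippet above should look roughly like this (window size 1,000,000, so the first window covers positions 1 to 1000000):

for $product-node in //product-node[position() = 1 to 1000000] return insert node <a/> into $product-node

Each subsequent line covers the next million product nodes, the idea being that every update is executed separately, so memory is only needed for one window's worth of pending inserts at a time rather than for all of them.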
Hope this helps, Christian
My XQuery application reads feature data from an auxiliary XML document and enhances the master document.
To do this, I insert data into the master document with a generic insert command:
insert node <FEATURES_NODE> into <PRODUCT_NODE>
But XQuery Update only executes the update process after all insert requests have been collected. With the huge size of my index, the memory of the machine (35 GB) is exhausted easily and the process can NOT finish.
Is there a way to execute the individual XQuery Update requests one by one so that the process can finish?
Erol Akarsu
Hi Erol,
I have rewritten sql:execute such that the results are now requested and returned in an iterative manner. As iterative retrieval of external resources always comes with risks (the requested data source may be updated at the same time; statements may not be properly closed if a client does not retrieve all results; etc.), I ask you to do some testing with the latest snapshot [1] and give us feedback about your experiences.
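A minimal usage sketch under these assumptions (the connection URL, SQL query, and output file are placeholders, and the sql:close call is an addition to make sure the statement and connection are released once all rows have been consumed):

let $conn := sql:connect("jdbc:oracle:thin:@//host:1521/service")
let $write :=
  for $row in sql:execute($conn, "SELECT ID, NAME FROM PRODUCTS")
  return file:append("rows.xml",
    <record>{
      (: rebuild each column as an element named after its @name attribute :)
      for $col in $row/* return element { data($col/@name) } { $col/text() }
    }</record>)
return ($write, sql:close($conn))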
Thanks in advance, Christian
[1] http://files.basex.org/releases/latest
Christian,
Super.
I am glad to test it today and give feedback.
Erol Akarsu
Christian,
I have tested BaseX's latest snapshot against Oracle 11g.
Unfortunately, I am still getting an out-of-memory exception. I am using it like this; is this correct use? I also saw there is a streamable data structure in BaseX. Will I have to use it together with this latest change?
declare variable $sqlfile as xs:string external;
declare variable $outputfile as xs:string external;

let $lines := file:read-text-lines($sqlfile)
let $lines := for $line in $lines return fn:normalize-space($line)
let $sqlStmt := fn:string-join($lines, " ")
let $addBegin := file:append-text($outputfile, "<RECORDS>", "UTF-8")
let $url := "jdbc:oracle:thin:"
let $conn := sql:connect($url)
let $res := sql:execute($conn, $sqlStmt)
let $list :=
  for $rec in $res
  let $is :=
    for $line in $rec/*
    return element { fn:data($line/@name) } { $line/text() }
  return file:append($outputfile, <record>{$is}</record>)
return ($list, file:append-text($outputfile, "</RECORDS>", "UTF-8"))
Christian,
I made a little change in the code.
It now works without any exception.
This is a super change. Thanks for implementing it so quickly for me.
let $list :=
  for $rec in sql:execute($conn, $sqlStmt)
  let $is :=
    for $line in $rec/*
    return element { fn:data($line/@name) } { $line/text() }
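For completeness, a sketch of the full query with this change applied, based on the earlier snippet: sql:execute is inlined in the for clause instead of being bound to $res with let, which presumably lets the results be consumed row by row instead of being materialized as a whole:

declare variable $sqlfile as xs:string external;
declare variable $outputfile as xs:string external;

let $lines := for $line in file:read-text-lines($sqlfile) return fn:normalize-space($line)
let $sqlStmt := fn:string-join($lines, " ")
let $addBegin := file:append-text($outputfile, "<RECORDS>", "UTF-8")
let $conn := sql:connect("jdbc:oracle:thin:")
let $list :=
  (: iterate directly over the result sequence; no intermediate $res binding :)
  for $rec in sql:execute($conn, $sqlStmt)
  let $is :=
    for $line in $rec/*
    return element { fn:data($line/@name) } { $line/text() }
  return file:append($outputfile, <record>{$is}</record>)
return ($list, file:append-text($outputfile, "</RECORDS>", "UTF-8"))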
Christian,
I am using BaseX 7.9 and my script runs well there. I cannot use 8.0, because when I tested the script with it, it did not finish. How difficult would it be to back-port the SQL changes to the BaseX 7.9 branch?
Erol Akarsu
Hi Erol,
I am using BaseX 7.9 and my script runs well there. I cannot use 8.0, because when I tested the script with it, it did not finish.
Do you mean that the script does not work with the 8.0 snapshot? Do you have more information?
How difficult would it be to back-port the SQL changes to the BaseX 7.9 branch?
It's not difficult per se, but it mainly takes time (we haven't added any features to 7.9 so far).
Best, Christian
Christian,
This function never finishes with BaseX 8.0. The "CRM" database has records sorted by PRODUCT_ID. In the tumbling window, I collect all records with the same PRODUCT_ID and process them:
declare %updating function local:addCRMDataIntoProducts() {
  let $allProds := fn:doc("AccumulatedProducts")
  let $segments := ("A","B","C","D","E","F","G")
  let $productsMap := map:new(
    for $rec in $allProds//doc
    let $prodIDField := $rec/field[@name eq "ProductID"]
    let $pid := $prodIDField/text()
    return map:entry($pid, $prodIDField))

  for tumbling window $prodGroup in fn:doc("CRM")//record
    start $first next $second when $first/PRODUCT_ID eq $second/PRODUCT_ID
    end $last next $beyond when $last/PRODUCT_ID ne $beyond/PRODUCT_ID

  let $pid := $prodGroup[1]/PRODUCT_ID
  return
    let $prodEntry := map:get($productsMap, $pid)/..
    return
      if (fn:empty($prodEntry)) then ()
      else
        let $totalAmount := sum($prodGroup//AMOUNT/text())
        let $totalOrderCount := sum($prodGroup//ORDER_COUNT)
        let $customers :=
          for $cid in $prodGroup/CUSTOMER_ID/text()
          return <field name="CustomersPurchased">{$cid}</field>
        let $sumSolrFields := (
          <field name="Amount">{$totalAmount}</field>,
          <field name="OrderCount">{$totalOrderCount}</field>,
          $customers)
        let $segData :=
          for $segGroup in $prodGroup
          let $sid := $segGroup/SON_SEGMENT/text()
          let $sid := fn:concat("10", fn:index-of($segments, $sid))
          group by $sid
          return
            let $segAmount := sum($segGroup//AMOUNT/text())
            let $segOrderCount := sum($segGroup//ORDER_COUNT)
            let $segFields := (
              <field name="SegAmount_{$sid}">{$segAmount}</field>,
              <field name="SegOrderCount_{$sid}">{$segOrderCount}</field>)
            return $segFields
        return insert nodes ($segData, $sumSolrFields) as last into $prodEntry
};
Hi Erol,
Is there any chance you can build an example that we can run out of the box, simplify the query a bit, or run it on a smaller database and see if it finishes? If yes, could you send me the query plans for both 7.9 and 8.0?
Best, Christian
PS: I wonder if you really tried the exact query above with the current snapshot, as map:new does not exist anymore and has been replaced with map:merge.
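For reference, a sketch of that replacement applied to the query above; the entry-building code stays the same, only the merging function changes (note that the two functions may differ in how duplicate keys are handled):

let $productsMap := map:merge(
  for $rec in $allProds//doc
  let $prodIDField := $rec/field[@name eq "ProductID"]
  let $pid := $prodIDField/text()
  return map:entry($pid, $prodIDField))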