Leo,

Thanks for your answer and the release. I'll check it, but even 13 s to produce a sequence of 100,000 integers is a show-stopper for my purposes.

However, I think there are two quick workarounds while we wait for Clojure-style sequences.

- The first one is on my side, but I don't like it: I must restrict my code to handling sequences of functors, and can't work with functors of functors. This might be a serious limitation for what I wanted to do with XQuery.

- Another suggestion, which might be quick and efficient, would be to add direct, optimized access to the functor

    declare function local:id($el as item()*) as function() as item()* {
      function() { $el }
    };

in a BaseX module. For instance, it could fit as hof:id-functor, since you already introduced hof:id. I think there are two points of view here:

1) From the user's point of view, this would greatly widen the range of algorithmic possibilities. Moreover, it is already possible to produce functors of functors of ..., so we remain coherent with XQuery.

2) From the BaseX point of view, I don't know... Such a functor would become a de facto supertype of the XQuery type hierarchy, since every XQuery type could be cast into it (it would sit above item(), because it also covers empty sequences).
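For illustration, here is a sketch of how such an identity functor would compose; the local:id name stands in for the proposed built-in:

    let $f := local:id(42)    (: function() as item()* :)
    let $g := local:id($f)    (: a functor of a functor :)
    return $g()()             (: evaluates back to 42 :)

Each application of local:id wraps its argument in a zero-arity closure, so unwrapping takes one extra call per level of nesting.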


Cheers

Jean-Marc

2013/11/30 Leo Wörteler <lw@basex.org>
Dear Jean-Marc,

On 30.11.2013 15:29, jean-marc Mercier wrote:

I am encountering a performance issue with the BaseX interpreter, illustrated by the code snippet above. This code first creates a sequence of 100,000 integers, taking more than a minute in my environment. Then it creates a BaseX map of 100,000 integers, taking 0.1 sec. This issue seems to be due to poor performance of the operator () (see function local:id below).

One part of the problem is indeed sequence concatenation, whose cost is currently linear in the length of the result whenever the sequence really has to be materialized in memory, e.g. as a function result. We are thinking about switching to another representation of sequences (e.g. finger trees or chunked sequences as used in Clojure [1]), but don't hold your breath, as that will probably take some time.
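As a sketch of why that linear cost hurts, consider building a sequence by repeated concatenation (hypothetical example): every intermediate result that gets materialized as a function result is copied again, so the overall build becomes quadratic.

    declare function local:build($n as xs:integer) as xs:integer* {
      if ($n eq 0) then ()
      else (local:build($n - 1), $n)
      (: each materialized intermediate sequence is copied in full :)
    };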

The other (and bigger) cause of the slowdown was that there were type checks for `item()*` introduced and not eliminated by the optimizer. As everything in XQuery is a sequence of items, these are no-ops by definition, but traversing and checking 100000 items takes time.
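To make that concrete, here is a sketch of where such a redundant check arises: any function annotated with `item()*` asks the engine to verify a property that holds trivially for every XQuery value.

    declare function local:pass($s as item()*) as item()* {
      (: every value is already an item()* — checking it item by item is wasted work :)
      $s
    };
    local:pass(1 to 100000)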

I fixed the latter problem and Christian already uploaded a new snapshot [2]. With that, the execution time for your query drops from 57.9 to 12.4 seconds on my notebook.

Hope that helps,
  Leo

[1] http://code.google.com/p/clojure/source/browse/trunk/src/jvm/clojure/lang/PersistentVector.java
[2] http://files.basex.org/releases/latest/