Christian,
Hello.
I was trying to profile code through a client/server setup as you advised: I installed the module into the BaseX repository and launched it through a client.
However, I can't find the output left by fn:trace or prof:time. Do you know where these traces appear in this configuration?
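For context, the calls in question are ordinary one-liners along these lines (a minimal sketch; the wrapped expressions are placeholders, not the actual module code):

  (: prof:time prints the evaluation time of the wrapped expression :)
  prof:time(count(1 to 1000000)),
  (: fn:trace emits its first argument together with the given label :)
  trace(sum(1 to 100), 'sum: ')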
Thx
Jean-Marc
2013/12/5 jean-marc Mercier jeanmarc.mercier@gmail.com
Hi Christian,
Thx for your answer.
it is advisable to remove all trace() calls from the code before doing performance comparisons.
You're right. On the other hand, the trace functions are called only a few times (68 calls) compared to the total execution time (about two minutes for the entire test), so they should have little impact on the results.
One remark about this test: I noticed that it runs slower when I disable inlining and tail calls via INLINELIMIT = 0 and TAILCALL = -1.
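For reference, a sketch of how such options can be set per query, assuming BaseX's declare-option prolog syntax and that both are local (query-scope) options; the mail above says TAILCALL, while the actual option name may be spelled TAILCALLS:

  (: disable function inlining and tail-call optimization for this query :)
  declare option db:inlinelimit "0";
  declare option db:tailcalls "-1";
  sum(1 to 1000000)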
2013/12/5 Christian Grün christian.gruen@gmail.com
Hi Jean-Marc,
another note (I just spotted your mail on talk@xquery.com; thanks for making your code public!): it is advisable to remove all trace() calls from the code before doing performance comparisons.
Looking forward to your experiences, Christian
On Thu, Dec 5, 2013 at 11:39 AM, Christian Grün christian.gruen@gmail.com wrote:
Hi Jean-Marc,
Currently no. You can use the prof:current-ns() function.
I thought about this one, but isn't this method less accurate than prof:time?
If your measurements don’t fall below milliseconds, it should make no difference.
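For instance, a manual measurement with prof:current-ns() might look like this (a minimal sketch; the summed expression is just a placeholder, and the optimizer may reorder or pre-evaluate clauses, so treat the numbers as indicative):

  let $start  := prof:current-ns()
  let $result := sum(1 to 1000000)  (: expression to measure :)
  let $stop   := prof:current-ns()
  return (
    'elapsed: ' || ($stop - $start) div 1000000 || ' ms',
    $result
  )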
Benchmarking is a complex field in its own right, however:
• In BaseX, it’s advisable to do all measurements with a running server instance. Otherwise, your measurement will also include the startup time of the JVM and the cost of just-in-time compilation.
• If you use the GUI, those effects will also be amortized after a while, but the visual processing of the query results may take some additional time.
• You can average measurements over multiple runs via the RUNS option [1]. The following command-line call runs the query "1" 10,000 times:
basex -o result.txt -r 10000 -V 1
...
Parsing: 0.06 ms (avg)
Compiling: 0.0 ms (avg)
Evaluating: 0.0 ms (avg)
Printing: 0.03 ms (avg)
Total Time: 0.1 ms (avg)
...
Query executed in 0.1 ms (avg).
Please note that only the results of the first run will be output, so this option should only be used when the output is small. As you have probably seen, the output can also be “swallowed” via prof:dump(). A few small experiments will quickly give you a feeling for what’s comparable and what’s not.
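For illustration, such “swallowing” could look like this (a sketch; prof:dump() writes its argument to standard error and returns an empty sequence, so a large result does not end up in the serialized output):

  (: the result is dumped to stderr instead of being serialized :)
  prof:dump(
    for $i in 1 to 10000
    return $i * $i
  )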
If you want to compare timings with other XQuery implementations, it gets even more complicated, because each engine has its own timing output, its own optimizations to speed up repeated runs, and so on. In that case, a simple command-line call gives you the results that are probably most comparable.
Hope this helps, Christian