Dave,
thanks for the hint. Yes, that absolutely made sense; I guess the following change (which will be uploaded with the next commit) should suffice to fix the problem:
Data, line 275: for(int p = 0, ps = Math.min(meta.size, id); p < ps; ++p) if(id == id(p)) return p;
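For anyone following along, the effect of bounding the scan can be sketched in isolation. The class, field, and method names below are hypothetical stand-ins for the Data internals (an int array plays the role of the table, with the array index acting as the pre value); the +1 in the bound covers the untouched pre == id case, which the quoted one-liner presumably leaves to a separate forward scan in the real method:

```java
// Simplified model of the id -> pre lookup, NOT the actual BaseX sources.
public class PreLookup {
  final int[] ids;   // ids[p] == id of the node at pre value p
  final int size;    // number of live nodes (plays the role of meta.size)

  PreLookup(final int[] ids) {
    this.ids = ids;
    this.size = ids.length;
  }

  /** Returns the pre value for the given id, or -1 if the id is gone. */
  int pre(final int id) {
    // Ids are handed out in document order, so a node's pre value can
    // never exceed its id; bounding the scan by both the live table size
    // and the id itself avoids probing stale slots past the table end.
    for(int p = 0, ps = Math.min(size, id + 1); p < ps; ++p)
      if(ids[p] == id) return p;
    return -1;
  }

  public static void main(final String[] args) {
    // a database shrunk to three survivors with ids 0, 500 and 999
    final PreLookup d = new PreLookup(new int[] { 0, 500, 999 });
    if(d.pre(500) != 1) throw new AssertionError();
    if(d.pre(259) != -1) throw new AssertionError(); // deleted id: -1, no crash
    if(d.pre(999) != 2) throw new AssertionError();
  }
}
```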
Currently, due to its inefficiency, the method in question isn't used anywhere in the code. We're working on an ID-Pre mapping, which will allow us to keep using existing index structures after update operations. This is work in progress, though; just stay tuned!
Christian
On Mon, Aug 16, 2010 at 7:06 PM, Dave Glick dglick@dracorp.com wrote:
Hello,
I think there might be a bug in Data.pre()...
When you use Data.id() to get the id for a given pre, it's easy enough to check Data.meta.size to make sure you're not asking for the id of a pre value beyond the end of the table, since pre values are contiguous and are maintained when the database is updated.
However, I've noticed some crashes when asking for the pre value of a given id using Data.pre(). Specifically, if I create a large database and then remove all but a few nodes, I end up with a very large Data.meta.lastid value but a very small Data.meta.size. If I then ask Data.pre() for the pre value of a very large id that may or may not still be in the database, I often get an exception such as "Invalid Data Access [pre:259, indexSize:3, access:3 > 2]". The expected result would simply be -1. I think the problem is that even though the id I'm asking for is smaller than Data.meta.lastid, it's large enough that, after Data.table has been resized by the removal of most of the nodes, the call to Data.id() within Data.pre() probes the table beyond its boundaries as it scans.
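The failure mode described here can be reproduced in miniature. The snippet below is a hypothetical model rather than the actual Data sources: it bounds the scan by the requested id alone, so once the table has shrunk, the probe runs past the live slots, mirroring the "Invalid Data Access" report:

```java
// Hypothetical reproduction of the out-of-bounds scan, NOT BaseX code.
public class UnboundedScan {
  /** Returns true if the buggy scan probes past the end of the table. */
  static boolean probesPastTable(final int[] ids, final int id) {
    try {
      // buggy lookup: bounded by the id, not by the live table size
      for(int p = 0; p < id; ++p)
        if(ids[p] == id) return false;
      return false;
    } catch(final ArrayIndexOutOfBoundsException ex) {
      return true; // read beyond the resized table, as in the crash report
    }
  }

  public static void main(final String[] args) {
    // three survivors after a mass deletion; id 259 was removed,
    // but it is still well below the old lastid of 999
    final int[] ids = { 0, 500, 999 };
    if(!probesPastTable(ids, 259)) throw new AssertionError();
    // bounding by the table size as well would avoid the crash:
    if(probesPastTable(ids, 2)) throw new AssertionError();
  }
}
```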
Did that make sense?
Dave