Hello List
I am experimenting with statistical data ( http://www.semantechs.co.uk/ ) and have found that organising 2.5 GB of XML data into 12 unevenly sized collections, ranging from 40 MB to 400 MB, performs much more slowly than 36 collections of approximately 75 MB each.
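For reference, the evenly sized layout was produced along the lines of the sketch below: a greedy split of the source XML files into a fixed number of collections of roughly equal total size. This is only an illustrative sketch, not the exact script I used; the directory path, the collection count of 36, and the use of Python are placeholders.

```python
# Hypothetical sketch: greedily bin XML files into a fixed number of
# collections so each holds a roughly equal share of the total data
# (similar in spirit to the 36 x ~75 MB layout described above).
from pathlib import Path
import heapq

def partition_evenly(xml_dir: str, num_collections: int = 36) -> list[list[Path]]:
    # Largest files first gives the greedy heuristic a better balance.
    files = sorted(Path(xml_dir).glob("*.xml"),
                   key=lambda p: p.stat().st_size, reverse=True)
    # Min-heap of (current collection size, collection index); each file
    # goes into whichever collection is currently smallest.
    heap = [(0, i) for i in range(num_collections)]
    heapq.heapify(heap)
    collections: list[list[Path]] = [[] for _ in range(num_collections)]
    for f in files:
        size, idx = heapq.heappop(heap)
        collections[idx].append(f)
        heapq.heappush(heap, (size + f.stat().st_size, idx))
    return collections

if __name__ == "__main__":
    # "data/xml" is a placeholder path for the 2.5 GB of source documents.
    for i, coll in enumerate(partition_evenly("data/xml"), start=1):
        total_mb = sum(p.stat().st_size for p in coll) / 1_000_000
        print(f"collection-{i:02d}: {len(coll)} files, {total_mb:.1f} MB")
```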
What rules of thumb are there to guide me in designing the most performant database?
Many thanks
Peter