

⦁ Reworked how move/rename detection works. The most efficient way to do this is to cross-reference disappeared and new files _by their file system IDs_. It is now faster and lighter, has a way simpler symlink and junction filtering logic, supports filtering by reparse point tags and comes with a few other improvements.

⦁ Completely revamped backup index caching, a.k.a. "destination snapshot" handling. This fronts a LOT of new and wonderful code, but the gist of it is this. The program now includes a very efficient generic "blob" storage facility. It can be used both as a temporary swap space for large data sets and as a permanent storage for things that need to be saved between the runs. Previously, the temp data and the perm data were handled by separate pieces of code. So the scanner, for example, would first build its file system tree using a temporary storage and then it would save it in a permanent file. Hence the "Loading destination snapshot" and "Saving destination snapshot" parts of a backup run. For larger backups these could take several seconds or even minutes. The scanning is now done directly into permanent storage. You should see scanning times reduced by another 15-20% compared to previous releases, both the preview and production ones. There's also a setting that controls how aggressively things are cached. If you've got tons of RAM, you can increase the cache and speed things up a bit more; if you are short on RAM, tweak it the other way and the program will abide.
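To illustrate the idea of a single storage facility serving both roles, here is a minimal hypothetical Python sketch (the class name and API are invented for illustration and are not the program's actual code): the same store either discards its backing file on close (temporary swap) or writes directly into a permanent file, so there is no separate "save to permanent file" step.

```python
import os
import tempfile


class BlobStore:
    """Sketch of a dual-purpose blob store: the same object serves as
    a temporary swap file or as a permanent on-disk file, so data never
    has to be copied from a temp area into a final one."""

    def __init__(self, path=None):
        # No path: back the store with an anonymous temp file that is
        # discarded on close. With a path: write straight into the
        # permanent file.
        self.permanent = path is not None
        if self.permanent:
            self.file = open(path, "w+b")
        else:
            self.file = tempfile.TemporaryFile()
        self.index = {}  # blob id -> (offset, length)

    def put(self, blob_id, data: bytes):
        # Append the blob and remember where it landed.
        self.file.seek(0, os.SEEK_END)
        offset = self.file.tell()
        self.file.write(data)
        self.index[blob_id] = (offset, len(data))

    def get(self, blob_id) -> bytes:
        offset, length = self.index[blob_id]
        self.file.seek(offset)
        return self.file.read(length)

    def close(self):
        # Temp-backed data simply vanishes here; a permanent store's
        # file stays on disk.
        self.file.close()
```

Under this scheme, a scanner that writes its tree through such an interface straight into the permanent file has nothing left to "save" at the end of the run.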

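Cross-referencing disappeared and new files by their file system IDs, as described above, can be sketched like so (a hypothetical Python fragment; the real program works with native file system IDs such as NTFS file IDs, not these dicts):

```python
def detect_moves(old_snapshot, new_snapshot):
    """Cross-reference disappeared and new files by file system ID.

    Snapshots map path -> file system ID. A file whose path vanished
    but whose ID reappears under a new path was moved or renamed,
    not deleted and re-created.
    """
    # Invert the old snapshot: file system ID -> old path.
    old_ids = {fid: path for path, fid in old_snapshot.items()}
    moves = {}
    for path, fid in new_snapshot.items():
        if path not in old_snapshot and fid in old_ids:
            old_path = old_ids[fid]
            if old_path not in new_snapshot:  # the old path really disappeared
                moves[old_path] = path
    return moves
```

For example, if `b.txt` disappears between runs and `c.txt` appears with the same file system ID, the function reports `b.txt` as moved to `c.txt` instead of treating it as a delete plus a fresh copy.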