This PR sets the default AESCrypt stream format to v2.
This is done so that the ability to read the v3 format is rolled out before v3 becomes the default stream format. With this delayed activation, users can more easily roll back to a previous version if needed.
This PR adds a try/catch around initializing the default secret provider, to avoid crashes on startup when the environment is not set up correctly for the provider.
If an explicit provider is configured, startup will still fail, since the user explicitly asked for a provider that is not working.
With this change, the implicit (default) secret provider no longer causes startup crashes.
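A minimal sketch of the guard pattern, using illustrative names (`SecretProviderLoader`, `SecretProviderIsExplicit`) rather than the actual Duplicati types:

```csharp
// Hypothetical sketch of the startup guard; type and option names are illustrative.
static ISecretProvider? TryCreateDefaultSecretProvider(Options options)
{
    try
    {
        return SecretProviderLoader.CreateProvider(options.SecretProviderUrl);
    }
    catch (Exception ex) when (!options.SecretProviderIsExplicit)
    {
        // The provider was chosen implicitly, so a broken setup should not
        // prevent startup; log the problem and continue without a provider.
        Console.Error.WriteLine($"Failed to initialize the default secret provider: {ex.Message}");
        return null;
    }
    // If the user explicitly requested a provider, the exception filter does not
    // match, the exception propagates, and startup fails as before.
}
```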
This adds support for the `DO_NOT_TRACK` environment variable, which is a proposal to make all tools agree on a single opt-out variable, similar to the browser header.
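A minimal sketch of how the opt-out could be checked; tools interpret the variable slightly differently, so this is only illustrative:

```csharp
// Treat any non-empty value other than "0" as a request not to be tracked,
// following a common interpretation of the DO_NOT_TRACK proposal.
static bool DoNotTrackRequested()
{
    var value = Environment.GetEnvironmentVariable("DO_NOT_TRACK");
    return !string.IsNullOrEmpty(value) && value != "0";
}
```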
This PR adds support for using the `quota-disable` option to disable the quota check performed before executing the remote sync operation.
This is needed if the quota information returned is unreliable.
This PR extends the search API so that versions can be specified, allowing a search to be limited to specific versions.
Before this PR, only a time filter was supported, and that also matches versions older than the given timestamp rather than a single version.
This PR adds a small clause that prevents the delete query from failing when there are database inconsistencies, where deleted blocks can exist and a volume has no blocks.
I was not able to reproduce the issue without manipulating the database, but it was reported happening on the forum.
Co-authored-by: Copilot <copilot@github.com>
This PR adds logging for 500 errors to the live logs.
As a precaution, errors of an unknown nature are not reported in detail to the client, as they could expose sensitive information to an unauthenticated caller.
With this PR, the errors are logged to the live log, where they are visible to authenticated users.
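A rough sketch of the pattern, assuming an ASP.NET Core middleware; the logging call and tag names are illustrative, not the exact Duplicati API:

```csharp
// Log full details server-side, but return only a generic message to the caller.
app.Use(async (context, next) =>
{
    try
    {
        await next(context);
    }
    catch (Exception ex)
    {
        // The full exception goes to the live log, visible to authenticated users.
        Log.WriteErrorMessage("WebServer", "UnhandledRequestError", ex,
            "Request to {0} failed", context.Request.Path);

        // The (possibly unauthenticated) caller only sees a generic 500.
        context.Response.StatusCode = 500;
        await context.Response.WriteAsync("An internal error occurred");
    }
});
```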
This PR adds a new `--run-script-post-backup` option that allows running a script after the main backup operation has completed, but before locking, compacting and verification is done.
This is intended for cases where you want to stop or pause some service, run a backup, and then resume the service. Prior to this PR you would need to wait for the entire backup operation to complete, but with the new option the service can be resumed as soon as the source data is no longer needed.
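A sketch of where the new hook sits relative to the existing script hooks; the method names are illustrative, not the actual Duplicati internals:

```csharp
// Illustrative ordering of a backup run with the new hook.
RunScript(options.RunScriptBefore);       // existing --run-script-before
RunMainBackup();                          // source data is read here
RunScript(options.RunScriptPostBackup);   // new hook: the paused service can resume now
RunLockingCompactingAndVerification();    // remaining remote-side work continues
RunScript(options.RunScriptAfter);        // existing --run-script-after
```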
This PR adds a probing mechanism that checks if the backup is using remote file locking and automatically sets the `--repair-refresh-lock-info` setting based on that finding.
If the backup is using `--remote-file-lock-duration`, the operation will refresh locks by default; otherwise it will not refresh locks.
If the backup explicitly sets `repair-refresh-lock-info`, that value is used.
Priority is (most important first):
- input value (from FE)
- backup `repair-refresh-lock-info` setting
- backup `remote-file-lock-duration`
- global settings `repair-refresh-lock-info`
- global settings `remote-file-lock-duration`
If the backup is using remote file locking, then most likely the user will want the repaired database to contain lock information. If remote locking is not used, the repair will now skip fetching lock information.
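A minimal sketch of resolving the effective value in the priority order above (type and property names are illustrative):

```csharp
// Returns whether the repair should refresh lock information.
static bool ResolveRefreshLockInfo(bool? inputValue, BackupSettings backup, GlobalSettings global)
{
    if (inputValue.HasValue)                               // input value (from FE)
        return inputValue.Value;
    if (backup.RepairRefreshLockInfo.HasValue)             // backup repair-refresh-lock-info
        return backup.RepairRefreshLockInfo.Value;
    if (backup.RemoteFileLockDuration > TimeSpan.Zero)     // backup remote-file-lock-duration
        return true;
    if (global.RepairRefreshLockInfo.HasValue)             // global repair-refresh-lock-info
        return global.RepairRefreshLockInfo.Value;
    return global.RemoteFileLockDuration > TimeSpan.Zero;  // global remote-file-lock-duration
}
```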
This PR partially re-introduces the logic that attempts to determine the free space for a specific path.
Before this change, the code on non-Windows systems would incorrectly always return the free space of `/` instead of the free space of the mount point containing the path.
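A minimal sketch of the idea, assuming `DriveInfo` can enumerate the mount points; this is illustrative rather than the exact fix:

```csharp
using System;
using System.IO;
using System.Linq;

// Pick the longest mount point that is a prefix of the path, instead of always "/".
static long? GetFreeSpaceForPath(string path)
{
    var full = Path.GetFullPath(path);
    var mount = DriveInfo.GetDrives()
        .Where(d => d.IsReady && full.StartsWith(d.RootDirectory.FullName, StringComparison.Ordinal))
        .OrderByDescending(d => d.RootDirectory.FullName.Length)
        .FirstOrDefault();
    return mount?.AvailableFreeSpace;
}
```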
Rewrote the certificate selector to use `TlsHandshakeCallbackOptions`, as `HttpsConnectionAdapterOptions` does not support returning the full certificate chain, only the leaf certificate.
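A rough sketch of supplying the full chain through Kestrel's `TlsHandshakeCallbackOptions`; the certificate variables stand in for whatever the server has loaded:

```csharp
listenOptions.UseHttps(new TlsHandshakeCallbackOptions
{
    OnConnection = ctx => ValueTask.FromResult(new SslServerAuthenticationOptions
    {
        // SslStreamCertificateContext carries the leaf plus intermediates,
        // which the HttpsConnectionAdapterOptions callback could not express.
        ServerCertificateContext = SslStreamCertificateContext.Create(
            leafCertificate, intermediateCertificates)
    })
});
```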
This fixes #6807
This PR adds experimental support for using either ZStandard or GZip compression. Unlike the Zip module, which also supports these methods, the new module performs full-volume compression, which is both much faster and compresses better with modern compression algorithms.
Because the compressor can see the full volume (and not just the blocks) it can compress data across blocks.
The downside is that it is not possible to take a single entry out of the compressed stream; instead, Duplicati needs to decompress the whole stream before it can access the contents.
To make the implementation compatible with standard tools, the inner format is Tar (using ustar format). To allow faster reading, a small end-of-file header is added to the Tar file, emulating the concept from Zip files.
This small addition allows Duplicati to have random access to files in a Tar volume without needing to scan the whole thing.
To make sure data is always recoverable, the format is 100% compatible with regular tools, so `tar -xf` will work on the created volumes.
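A rough sketch of the write path under these assumptions, using `System.Formats.Tar` and `ZstdSharp.Port`; the small end-of-file index is omitted, and this is not the module's actual implementation:

```csharp
using System.IO;
using System.Formats.Tar;
using ZstdSharp;

byte[] blockData = new byte[128 * 1024];                    // stand-in for one block of backup data

using var output = File.Create("volume.tar.zst");
using var zstd = new CompressionStream(output, 9);          // compresses across all entries
using var tar = new TarWriter(zstd, TarEntryFormat.Ustar);  // inner format stays plain ustar

tar.WriteEntry(new UstarTarEntry(TarEntryType.RegularFile, "block-0001")
{
    DataStream = new MemoryStream(blockData)
});
```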
The downside is an extra decompression pass when reading the volumes. When writing, each entry (most commonly a block of data) is written to a temporary file before being added to the output stream. This can be toggled to use a memory buffer instead for an additional speedup.
The ZStandard compression comes from `ZstdSharp.Port`, which has excellent performance.
When we move to .NET 11 this can easily be changed to the new built-in module.
Since this is the first attempt to add this, it will log a warning on each use, explaining that the feature is currently just for testing.
Co-authored-by: Copilot <copilot@github.com>
This PR simply adds an alias for `--dblock-size` so it can also be supplied as `--remote-volume-size`, as the latter is more intuitive for people not familiar with Duplicati's file naming conventions.
This PR updates the code that handles remote connections to detect server-emitted error messages and log them.
Prior to this PR, an expired link would make the UI appear partially connected, with no indication of why it would not fully connect.