
recover: in #2957, recover was changed to monitor arc file downloads to predict how much time remained for downloads to finish. When an arc file finished and was renamed to remove the .tmp suffix, a race condition could cause a traceback. After the traceback, the arc file would be present in the backup directory. Thanks David!


eliminate subparts on the same destination with the dest noparts command. This will download the files that are in parts (if not already local), assemble them, remove the remote parts, and upload as single files. Before starting, remove the maxsize keyword from dest.conf. Then use this command:

recover: when a backup is recovered, a new file testonly.conf is created in the backup directory. While this file is present, downloads work but destinations cannot be modified (no upload or remove). Some customers do test recovers to verify their backup, which is a great idea. But test recovers created a situation where two different backup directories, the real one and the test, match the unique ID HashBackup uses to prevent accidental overwrites of remote data. An operation in the test directory that modifies the backup could unintentionally modify the remote data, potentially causing problems for the production backup. The testonly.conf file allows tests to be run without allowing remote data to be changed. For real recovers of the local backup directory, delete the testonly.conf file.

dest verify: the verify command quickly checks destinations to make sure that the files HB thinks are stored are actually stored, without downloading any files. Any files that are missing are flagged so they can be uploaded again or copied from other destinations. Previously, after dest verify, a sync operation occurred to do uploads or transfers. But if a new destination is being set up, this sync may run for days, and there is no maxtime or maxwait limit like with backup. So the sync after verify has been removed. If you still want to do a sync, run hb dest sync following hb dest verify; otherwise the next backup will do a sync.

selftest: previously selftest uploaded hb.db changes only if --fix was used. With -v4, selftest downloads and checks arc files, and while they are downloaded, it also checks whether they need packing, to save a later download. During packing selftest corrects blocks, even without --fix. hb.db is also changed if --inc is used, to record progress. So now, hb.db is uploaded by selftest whenever it changes, even without --fix.

ssh: the ssh and sftp destination types are similar, but ssh supports selective download (reading parts of remote files) via the dd command. Previously, if the dd command was not available, the ssh destination failed. Now ssh destinations test whether dd is available and, if not, disable selective download. To avoid this error, change the destination type from ssh to sftp.

backup: previously, if a new destination was added, backup tried to get it in sync at the start of the backup. This worked okay if cache-size-limit was -1, but with a limited cache, arc files have to be downloaded from an existing destination then sent to the new destination. This did not respect --maxtime or --maxwait, and it held up the backup until the sync was finished. For a large backup this is impractical.

selftest: when cache-size-limit is -1, all arc files should be kept locally. If cache-size-limit was previously >= 0 (arcs not all local) but is now -1 (arcs should be local), some arc files may not have local copies. Since selftest -v4 already downloads remote arc files to verify them, it will now save a copy if the local copy was missing. This allows sites to migrate arc files back locally with -v4 and optional --inc incremental testing.

S3: a new keyword "partsize" can be used to specify a fixed part size for multipart S3 uploads and downloads. The default is 0, meaning that HB chooses a reasonable part size from 5MB (the smallest allowed) to 5GB (the largest allowed), based on the file size. When the new partsize keyword is used, HB uses this part size to determine the number of parts needed, then "levels" the part size across all parts. For example, when uploading a 990MB file with a partsize of 100M, HB will use 10 parts of 99M each. This option was added for the Storj network because it prefers part sizes that are a multiple of 64M (the Storj default segment size). The size can be specified as an integer number of bytes or with a suffix, like 100M or 100MB, but is always interpreted as MiB, i.e., 100 * 1024 * 1024.
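The leveling described above can be sketched as a small calculation (an illustrative sketch, not HashBackup code; `level_parts` is a hypothetical helper):

```python
import math

def level_parts(file_size, partsize):
    """Level part sizes: partsize only determines the part count;
    file_size is then spread as evenly as possible across the parts."""
    nparts = math.ceil(file_size / partsize)
    base = file_size // nparts
    # hand out the remainder one byte at a time so sizes differ by at most 1
    rem = file_size - base * nparts
    return [base + 1] * rem + [base] * (nparts - rem)

MiB = 1024 * 1024
# 990MB file with a partsize of 100M -> 10 parts of 99M each
parts = level_parts(990 * MiB, 100 * MiB)
print(len(parts), parts[0] // MiB)  # 10 99
```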

dest: a new "test" subcommand tests a single destination or all currently configured destinations. It performs 3 rounds of upload, download, and delete tests for many file sizes, displaying the performance of each and an average performance for each file size.

S3: multipart get was added to S3 destinations. This scales very well with the workers and partsize keywords in dest.conf. In tests with the Storj S3 Gateway, 1GB download performance increased from 20 MB/s to over 200 MB/s using multipart gets, and Amazon S3 scaled up to over 300 MB/s with 16 threads. Multipart uploads and downloads are enabled by default unless multipart false is used in dest.conf. Thanks Dominick!
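A multipart get is essentially several ranged reads done in parallel and joined in order. A minimal sketch with a thread pool, where ranged reads of a local file stand in for S3 ranged GETs (helper names are hypothetical):

```python
import concurrent.futures
import os
import tempfile

def ranged_get(path, offset, length):
    """Stand-in for an S3 GET with a Range header."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

def multipart_get(path, partsize, workers=4):
    """Fetch parts concurrently, then join them in order."""
    size = os.path.getsize(path)
    offsets = range(0, size, partsize)
    with concurrent.futures.ThreadPoolExecutor(workers) as pool:
        parts = pool.map(lambda off: ranged_get(path, off, partsize), offsets)
        return b"".join(parts)

# round-trip check against a throwaway file
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(100_000))
    name = f.name
assert multipart_get(name, partsize=8192) == open(name, "rb").read()
os.unlink(name)
```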

S3: if a file is copied to the MinIO object store with filesystem commands, everything works fine except that MinIO serves the file with an etag of 00000000000000000000000000000000-1. Instead of complaining and aborting the download, HB now ignores these etags. If there was a download error, it will be caught later during the restore with a block hash mismatch. Thanks Ian!
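The check might look like this (a sketch; the all-zero etag pattern is taken from the note above, and `etag_usable` is a hypothetical helper):

```python
import re

# MinIO serves filesystem-copied objects with an all-zero etag such as
# 00000000000000000000000000000000-1; treat those as "no etag to verify"
ZERO_ETAG = re.compile(r"^0{32}(-\d+)?$")

def etag_usable(etag):
    """Return False for the all-zero etags described above."""
    return not ZERO_ETAG.match(etag.strip('"'))

assert not etag_usable("00000000000000000000000000000000-1")
assert etag_usable("9e107d9d372bb6826bd81d3542a419d6")
```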

backup: in #1946, a SIGTERM handler was added to the backup command so that if the backup program was terminated, it finished the current file and then stopped cleanly. However, this does not play well with the new S3 multipart download feature, so the SIGTERM handler has been removed.

selftest: if a -v4 selftest used --inc, had a download limit specified with ",xxxMB", and would have exceeded the limit, selftest would say "Checking 0% of backup" instead of the actual percentage. Also, if only one version was being incrementally checked (-r used with --inc), the percentage checked is not of the whole backup but just of the version requested, so the message was changed to "Checking xx% of version r".

ssh: the ssh destination uses remote "dd" commands to download pieces of arc files (selective download). This is faster than downloading whole arc files, especially when restoring small files. Some 3rd-party providers of ssh services use jails or chroots to restrict the commands that can be used, and dd is sometimes not available. Now HB gives advice to either use sftp instead of ssh or enable dd.
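A dd-based selective download is a byte-range read; run locally, it can be sketched like this (illustrative only, not HB's exact dd invocation: bs=1 is slow but portable, and GNU dd offers skip_bytes/count_bytes flags instead):

```python
import os
import subprocess
import tempfile

def dd_read(path, offset, length):
    """Read length bytes at offset using dd, as a selective download
    would over ssh. dd writes the data to stdout, status to stderr."""
    out = subprocess.run(
        ["dd", f"if={path}", "bs=1", f"skip={offset}", f"count={length}"],
        capture_output=True, check=True)
    return out.stdout

# read a slice out of the middle of a throwaway file
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"0123456789abcdef")
    p = f.name
assert dd_read(p, 4, 6) == b"456789"
os.unlink(p)
```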

dest.conf: the maxsize keyword is the maximum size of files uploaded to a destination. Any files over maxsize are split into parts before uploading, and reconstructed from parts when downloading. maxsize is used for small backups to email or WebDAV where there are often low limits on file sizes, like 25MB.
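Splitting into parts no larger than maxsize, and reassembling, amounts to (hypothetical helpers, not HB's part-naming scheme):

```python
def split_parts(data, maxsize):
    """Split data into chunks of at most maxsize bytes."""
    return [data[i:i + maxsize] for i in range(0, len(data), maxsize)]

def join_parts(parts):
    """Reconstruct the original file from its parts, in order."""
    return b"".join(parts)

blob = bytes(range(256)) * 400          # ~100KB of sample data
parts = split_parts(blob, 25 * 1024)    # e.g. a 25KB per-file limit
assert all(len(p) <= 25 * 1024 for p in parts)
assert join_parts(parts) == blob
```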

recover: if --check was used and there were 2 destinations in dest.conf, hb.db.N files were downloaded and applied twice by mistake. This could also cause download problems if workers was > 1 in dest.conf, because the same hb.db.N file could be downloaded concurrently by multiple workers.

upgrade: the RSA key used to verify new versions of HashBackup has been upgraded to RSA 4096. For the rest of 2019, #2295 and below can still use the upgrade command and will use the old RSA key to verify the download, while #2298 and up will use the new RSA key.

previously, new releases of HB were posted to the hashbackup.com Download page and also to the upgrade server. To make automated deployments easier, an installer or "boot binary" that does not change from release to release is now posted on the Download page. The boot binary is run just like the regular HB command, but the initial run will do an hb upgrade, replacing the boot binary, then execute the original command. The boot binary has the public key built in and does RSA 4096 verification of the latest version downloaded from the upgrade server, just like a regular hb upgrade.

the "dir" destination type did not handle missing arc files well when selective download was used. It went through the retry cycle then stopped, which caused selftest to think that all subsequent arc files in the backup were bad, which caused it to delete them all if --fix was used. Now when an arc file is missing on a dir destination, --fix only deletes the blocks in that arc file. Thanks Israel!

With the new --splice option, get can combine data from parts of local files with remote backup data to restore files, sometimes called incremental restore. This can be done even if the local files are changing, for example, a running VM image. For very large restores, splicing can use temp space in the backup directory equal to the size of the restore, and it requires reading the local files. Splicing can reduce the amount of data downloaded significantly and is often faster than other non-spliced restores. Thanks Jacob and Ben!

In this initial release, local data is matched for entire files. This may be improved in future releases, for example, to allow restoring a large VM image to an earlier state using both local files and downloaded data. To compare restore plans without doing a restore, use get --plan with and without the --no-local option. Thanks Ben!

get: if a restore is restarted, get will restart much faster and does not download data it has already restored if files are restored to the same location. This also uses the mtime + size check for identical files. The --no-mtime option will verify the file hash if there is a possibility mtime has been altered. This is unlikely, but may be necessary for extremely high security restores.

get: when cache-size-limit is set, some archive files must be downloaded for a restore. Creating a plan for the optimum download of remote data is rather complex and can be on the slow side, especially for a very large restore with a lot of small blocks, for example, a large VM image saved with 4K blocks. In this release, HB saves restore plans so that if there is a problem in the restore and it has to be tried again, the restore plan can be read in a few seconds rather than computed from scratch. Keep in mind that a restore plan is dependent on the files being restored, so a plan can only be reused if the exact same files are being restored.

