I have been using a Synology NAS for backup for quite some time.
Due to changes in our structure I will be migrating to a new system and need to restore all files from Glacier.
I tried using CloudBerry Backup (desktop free), however after synchronizing there were no files listed for restoration.
I also tried using Fast Glacier, however this resulted in errors that prevented the backup from functioning.
Any and all suggestions more than welcome.
The easiest approach would be to copy the files in question from your Synology NAS to the new system, whether it is a new Synology NAS or something else. It is the fastest and most cost-efficient way.
If you are migrating to a new Synology, you should just copy the files from the old one. If that is somehow not possible, use the "Retrieve task" button on the "Restore" tab.
Beware, it will take a long time just to retrieve the task. When retrieving the task, the Synology NAS downloads the files related to the task. These files contain all the information about your backup task: which files are backed up, their versions, file locations, sizes and so on. Once the task is retrieved, you can begin to recover the files. Use this calculator to work out the restore price before you begin - it is far from cheap.
Also, hope that the restore succeeds (sometimes backup integrity fails, which makes a restore impossible).
If you are restoring to a system other than a Synology, you should definitely copy the files from your old Synology. If you no longer have access to the old Synology NAS, buy a new Synology (or maybe use XPenology), restore to it and then copy the files to the new system. Amazon Glacier backup on a Synology NAS uses a proprietary format which, to my knowledge, is only readable by other Synology devices.
Good luck with the migration.
I would like to run an aws s3 sync command daily to update my hard drive backup on S3. Most of the time there will be no changes. The problem is that the s3 sync command takes days to check for changes (for a 4 TB HDD). What is the quickest way to update a hard drive backup on S3?
If you want to back up your own computer to Amazon S3, I would recommend using a backup utility that knows how to use S3. These utilities can do smart things like compress data, track files that have changed and set an appropriate storage class.
For example, I use Cloudberry Backup on a Windows computer. It regularly checks for new/changed files and uploads them to S3. If I delete a file locally, it waits 90 days before deleting it from S3. It can also handle multiple versions of files, rather than always overwriting them.
I would recommend only backing up data folders (e.g. My Documents). There is no benefit to backing up your operating system or temporary files, because you would not restore the OS from a remote backup.
While some backup utilities can compress files individually or in groups, experience has taught me never to do so, since it can make restoration difficult if you do not have the original backup software (and remember -- backups last years!). The great thing about S3 is that it is easy to access from many devices -- I have often grabbed documents from my S3 backup via my phone when I'm away from home.
Bottom line: Use a backup utility that knows how to do backups well. Make sure it knows how to use S3.
I would recommend using a backup tool that can synchronize with Amazon S3. For example, for Windows you can use Uranium Backup. It syncs with several clouds, including Amazon S3.
It can be scheduled to perform daily backups and also incremental backups (in case there are changes).
I think this is the best way, considering the tediousness of daily manual syncing. Plus, it runs in the background and notifies you of any error or success logs.
This is the solution I use; I hope it helps you.
I have a job that clones a repo and then uses s3 sync to copy changed files over to an S3 bucket. I'd like to sync only changed files. Since the repo is cloned first, the files always have a new timestamp, so s3 sync will always upload them. I thought about using "--size-only", but my understanding is that this can potentially miss files that have legitimately changed. What's the best way to go about this?
There is no out-of-the-box answer that will sync only changed files if the mtime cannot be counted on. As you point out, if a file does not change in size, the "--size-only" flag will cause aws s3 sync to skip it even when its contents have changed. To my mind there are two basic paths; the solution you use will depend on your exact needs.
Take advantage of Git
First off, you could use the fact that the files are stored in git to help update the modified time. Git itself will not store this metadata; the maintainers' philosophy is that doing so is a bad idea. I won't argue for or against this, but there are two basic ways around it:
You could store the metadata in git. There are multiple approaches to doing this; one such tool is metastore, which is installed alongside git and can store the metadata and apply it later. This does require adding a tool for all users of your git repo, which may or may not be acceptable.
Another option is to recreate the mtime from metadata that is already in git. For instance, git-restore-mtime does this by using the timestamp of the most recent commit that modified each file. This requires running an external tool before the sync command, but it shouldn't require any other workflow changes.
Using either of these options would allow a basic aws sync command to work, since the timestamps would be consistent from one run to another.
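For illustration, here is a rough Python sketch of that second approach (deriving each file's mtime from the last commit that touched it) without installing an extra tool. The repository path is a made-up assumption, and note that git-restore-mtime does the same job far more efficiently in a single pass over the history:

import os
import subprocess

REPO = "repo"  # hypothetical path to the freshly cloned repository

# Every file tracked by git, NUL-separated to survive unusual filenames.
tracked = subprocess.check_output(
    ["git", "-C", REPO, "ls-files", "-z"]).decode().split("\0")

for rel_path in filter(None, tracked):
    # Epoch timestamp of the most recent commit that modified this file.
    out = subprocess.check_output(
        ["git", "-C", REPO, "log", "-1", "--format=%ct", "--", rel_path])
    commit_time = int(out.decode().strip())
    # Set atime/mtime so the timestamps are stable between clones,
    # letting aws s3 sync's default size+mtime comparison work again.
    os.utime(os.path.join(REPO, rel_path), (commit_time, commit_time))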
Do your own thing
Fundamentally, you want to upload files that have changed. aws sync attempts to use file size and modification timestamps to detect changes, but if you wanted to, you could write a script or program to enumerate all files you want to upload, and upload them along with a small bit of extra metadata including something like a sha256 hash. Then on future runs, you can enumerate the files in S3 using list-objects and use head-object on each object in turn to get the metadata to see if the hash has changed.
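As a rough sketch of that idea (the bucket name, key layout and the "sha256" metadata field are assumptions for illustration, not an established convention):

import hashlib
import boto3
from botocore.exceptions import ClientError

BUCKET = "my-backup-bucket"  # hypothetical bucket name
s3 = boto3.client("s3")

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def upload_if_changed(local_path, key):
    digest = sha256_of(local_path)
    try:
        # S3 returns user metadata keys lower-cased.
        remote = s3.head_object(Bucket=BUCKET, Key=key)["Metadata"].get("sha256")
    except ClientError:
        remote = None  # object does not exist yet
    if remote == digest:
        return  # content unchanged, nothing to upload
    s3.upload_file(local_path, BUCKET, key,
                   ExtraArgs={"Metadata": {"sha256": digest}})

On each run you would walk the local tree and call upload_if_changed for every file; the head-object call per object is the cost you pay for not trusting timestamps.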
Alternatively, you could use the "ETag" of each object in S3, as that is returned in the list-objects call. As I understand it, the ETag formula isn't documented and is subject to change. That said, it is well known; you can find implementations of it here on Stack Overflow and elsewhere. You could calculate the ETag for your local files, then see if the remote files differ and need to be updated. That would save you having to do the head-object call on each object as you check for changes.
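If you go the ETag route, the commonly used (but unofficial) reconstruction looks roughly like the sketch below. The 8 MB part size is an assumption that has to match whatever part size the uploader actually used (it is the AWS CLI default), and the formula does not hold for SSE-KMS encrypted objects, so treat any mismatch as "changed" rather than as an error:

import hashlib

def local_etag(path, part_size=8 * 1024 * 1024):
    # MD5 of each part-sized chunk of the file.
    part_md5s = []
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(part_size), b""):
            part_md5s.append(hashlib.md5(chunk).digest())
    if not part_md5s:
        return hashlib.md5(b"").hexdigest()  # empty file
    if len(part_md5s) == 1:
        return part_md5s[0].hex()  # single PUT: plain MD5 of the content
    # Multipart upload: MD5 of the concatenated part digests, plus the part count.
    combined = hashlib.md5(b"".join(part_md5s)).hexdigest()
    return "{}-{}".format(combined, len(part_md5s))

Remember that the ETags returned by list-objects are wrapped in double quotes, so strip those before comparing, e.g. obj["ETag"].strip('"') != local_etag(path) means the file needs re-uploading.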
I started to use AWS S3 to give my users a fast way to download the installation files for my Win32 apps. Each install file is about 60 MB and downloads are working very fast.
However, when I upload a new version of the app, S3 keeps serving the old file instead! I just rename the old file and upload the new version with the same name as the old one. After I upload, when I try to download, the old version is downloaded instead.
I searched for some solutions and here is what I tried:
Edited all TTL values on CloudFront to 0
Edited the 'Cache-Control' metadata with the value 'max-age=0' for each file in the bucket
Neither of these fixed the issue; AWS keeps serving the old file instead of the new one!
I will often upload new versions, so I need S3 to never serve a cached copy when users try to download.
Please help.
I think this behavior might be because S3 uses an eventually consistent model, meaning that updates and deletes will propagate eventually, but it is not guaranteed that this happens immediately, or even within a specific amount of time (see here for the specifics of their consistency approach). Specifically, they say "Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all Regions", and I think the case you're describing would be an overwrite PUT. There appears to be a good answer on a similar issue here: How long does it take for AWS S3 to save and load an item? It touches on the consistency issue and how to get around it; hopefully that's helpful.
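If it is the overwrite-PUT delay, one way to at least detect when the new object has become visible is to upload it with a single PUT and poll until the ETag matches the MD5 of what you just uploaded. This is only a sketch with made-up names, and it assumes no SSE-KMS encryption (otherwise the ETag is not the body's MD5); also remember that CloudFront keeps its own cache in front of S3 on top of this:

import hashlib
import time
import boto3

BUCKET = "my-app-downloads"         # hypothetical bucket name
KEY = "MyAppSetup.exe"              # hypothetical object key
LOCAL = r"C:\build\MyAppSetup.exe"  # hypothetical local build output

s3 = boto3.client("s3")

with open(LOCAL, "rb") as f:
    body = f.read()
expected_etag = hashlib.md5(body).hexdigest()

# A single put_object (no multipart upload) makes the ETag equal the body's MD5.
s3.put_object(Bucket=BUCKET, Key=KEY, Body=body)

# Poll until reads return the freshly uploaded version.
while s3.head_object(Bucket=BUCKET, Key=KEY)["ETag"].strip('"') != expected_etag:
    time.sleep(5)
print("S3 is now serving the new version")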
I am new to Software Configuration Management systems, but am now interested in using Fossil. I have been reviewing the documentation on-and-off for a few days, and have played with the program a little, but I am still unsure how to most appropriately use it to meet my needs, so I would appreciate any advice anyone would like to offer on the following use scenario.
I am working exclusively in Windows environments. I am a sole developer, often working on a number of relatively small projects at a time. For the time being at least, I do not expect to make much use of forking and branching capabilities – I like to think my code development generally progresses fairly linearly. But I regularly need to access and update my code at a number of usually standalone PCs - that is, they are never networked to each other and often do not even have internet access.
I am hoping that Fossil will assist me in two ways, keeping track of milestones in my codebases including providing the ability to easily restore a previous version for testing purposes, and also making it as simple as possible for me to ensure I always have all versions of the code for every project accessible to me when I sit down to work at any particular PC.
To achieve the second objective, I expect to make a point of always carrying a USB Flash Drive with me as I move from PC to PC. I expect this Flash Drive should contain a number of repository files, one for each project I am concerned with. When I sit down at any particular PC I should be able to extract from this Flash Drive whichever version of whichever project I need to access. Similarly, when I "finish" working at this PC, if I wish to retain any changes I have made, I expect I should "commit" these changes back to the relevant repository on the Flash Drive in some way. But the most appropriate way to do all this is unclear to me.
I understand Fossil is generally intended to work with a local copy of a project's repository on each machine's local hard disk, and with a master repository accessed remotely when required via a network or internet connection. In my case, it seems the master repository would be the relevant repository file on my Flash Drive, but when my Flash Drive is plugged into the machine I am working on, the files on it are effectively local, not remote. So, when I sit down to work at a PC, should I copy the repository file for the project onto the PC's local hard drive and then open the version of the code I need from this copy, or should I just open the project repository directly from my Flash Drive? Additionally, if I should copy the repository onto the local hard disk, should I simply copy the repository file using the operating system, or should I use Fossil to clone it to the local hard disk (I do not really understand the difference here)? Then, when I finish working at the PC, if I wish to incorporate any changes back into the repository on my Flash Drive, should I commit them directly into the repository on the Flash Drive, or into a copy of the repository on the PC's local hard disk? If the latter, should I then simply copy the updated repository file onto my Flash Drive (overwriting the previous one), or should I "pull" or "push" the changes into the repository file on the Flash Drive - and can I even do this when the hard-disk-based repository and the Flash-Drive-based repository are effectively both local files on the same PC? I guess I'm getting a bit confused here...
A possible additional complicating factor in the “right” way to do all this is that typically, when I finish working at a PC I will not want to leave a copy of the source code or the repository on the PC (i.e., the customer’s hardware). I understand deleting the local copies of the repositories undermines the redundancy and backup benefits of using a Distributed SCM system, but I guess I will address this by keeping copies of the repositories on my own PCs and ensuring I backup the repository files on the Flash Drive itself reliably.
So any thoughts, experience or advice on the most appropriate way to use Fossil in the above scenario would be most welcome, thank you.
Hope this is still relevant :)
I would suggest the following process:
On your USB drive, do:
mkdir fossil - to keep your fossil repo files
mkdir src - to keep your project files.
Go to the fossil folder and create repos for your projects A and B
cd fossil
fossil init a.fossil
fossil init b.fossil
Use the .fossil extension, as this will simplify working with the repos later.
Create a fossil_server.cmd batch file to start Fossil as a server:
SET REPO_PATH=X:\fossil
SET FOSSIL_CMD=Path_to_fossil_exe/fossil.exe
start %FOSSIL_CMD% server %REPO_PATH% --repolist --localhost --port 8089
Start fossil_server.cmd, open a browser and go to localhost:8089
You will see a page with your repos, so you can configure them, write wiki pages/tickets and so on.
Go to the src folder
mkdir a
mkdir b
cd a
fossil open ../../fossil/a.fossil
cd ../b
fossil open ../../fossil/b.fossil
So you now have working checkouts of your repositories in src/a and src/b.
Add new files to projects A and B, then do:
cd src/a
fossil addremove
REM to add new files to the repository
fossil commit
REM to commit changes.
Now you can add/modify files in your projects, commit them, and roll back when needed.
just use:
fossil commit --tag new_tag
to add an easy-to-understand tag to your commit.
More on https://fossil-scm.org/home/doc/trunk/www/quickstart.wiki
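As a small illustration of the "rolling back" part, using the tag from the example above: go into a checkout, update to the tagged version for testing, then come back to the latest code on the default branch.
cd src/a
fossil update new_tag
REM ... test the old version ...
fossil update trunk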
Can anyone clarify an issue? I'm using the VSS API (C++ using VSS2008 and the latest SDK running on XP SP3) in a home-brew backup utility*.
The VSS snapshot operations work fine for folders that have no subfolders - i.e. my email and SQL Server volumes. However, when I take a snapshot of a folder that does contain subfolders, the nested structure is 'flattened' in the snapshot - all subdirectories cease to exist.
So here is the question: I am aware that support for VSS on XP is a bit limited but is there a way to specify a snapshot be non-recursive? The docs are not very helpful ...
* I got really tired of buggy rubbish that costs boatloads and fails every few days, so I thought I'd roll my own. It'll get onto CodeProject at some point. If anyone is interested let me know and you can have a (source) copy when it's ready ...
Thx++
Jerry
Your question is confusing...
VSS does not work at a "folder" level. It works at a "volume" level.
You "snap" a volume and you get a device path, which you can "open" using the filesystem API (which will automatically mount the snapshot volume with a filesystem) and access file by file, or you can access the device directly (sector by sector).
It should be easy to back up all files on the snapped volume (don't forget all of the file streams and ACLs for NTFS files); your problem will be restoring them... VSS will not help you on the restore. The main problem will be restoring a system volume, where you will need another OS to boot into, like WinPE or DOS or something else. If you're not worried about system volumes, then restore can be easy.
If you back up the data in terms of sectors, then you get the added benefit that if you write a volume device driver for it (to make it look like a volume or HD), Windows will auto-load a filesystem driver for it. This gives you a free explorer application; this is what most sector-based backup applications do. It also gives them VM possibilities.
Even if you are doing simple file backups, it helps to understand filesystems (NTFS, FAT, etc.) so that you know what you can/should back up and restore. Do you know what an NTFS reparse point is? How are you going to deal with one if you hit it during your backup? Do you know how Windows actually boots, and which files you need to back up, restore and "patch" to have a chance at booting? On a restore, how do you best lay out the NTFS volume so as not to affect NTFS performance on the restored volume? Are you going to support restoring system volumes to new hardware, and what does that require you to do just to have a chance of it working? The questions are endless.
System backup/restore is not easy, there are lots of edge cases (see some of the questions above) that you don't know about until you hit them.
Good luck on your project. I hope I haven't put you off too much; I'm just saying there is a lot of work involved in delivering a backup application, which most people have no idea about.
Comment on the above - if a 'writer' is playing the VSS game, then it will ensure that the file system is in a happy state as part of the VSS setup.
In the case of MS SQL Server - check that it is a VSS writer. If it is, then your snapshot of the DB files should be OK. If not, then they are in what is called a 'crash state'. So for example if you are using MySQL or some other non-MS, non-VSS-aware SQL database, your backup may or may not be coherent ('a good one'). In that case it may be better than nothing, but it may also still be useless. Using VSS MAY result in better integrity from which to make your backup, but if the files are open, they are open, and if the app does not play in the VSS pig-pen then you may or may not be hosed.