I have installed VisualSVN Server, and I want to back it up periodically. I would rather not use the dump facility (svnadmin dump); I want to back up the directory directly, but I don't know which directory I have to back up. The directory names are, for example:
real data
visualsvnsvr
The reason I don't know is the difference in directory sizes: the real data directory is 8 GB, while the visualsvnsvr directory is 1.74 GB.
Must I back up both of them, or just visualsvnsvr? If so, why are the sizes different?
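(Aside: if the repository directory is copied while the server is running, a consistent copy is usually taken with svnadmin hotcopy rather than a plain file copy. A minimal sketch, assuming the VisualSVN Server default repository root C:\Repositories and an illustrative repository name MyRepo:
# Hypothetical sketch: take a consistent copy of one live repository.
# Paths and the repository name are assumptions, not from the question.
svnadmin hotcopy C:\Repositories\MyRepo D:\Backups\MyRepo
)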
I'm trying to restore data in EFS from recovery points managed by AWS Backup. It seems AWS Backup does not support destructive restores and will always restore to a directory in the target EFS file system, even when creating a new one.
I would like to sync the data extracted from such a recovery point to another volume, but right now I can only do this manually, as I need to look up the directory name that is used by the start-restore-job operation (e.g. aws-backup-restore_2022-05-16T11-01-17-599Z), as stated in the docs:
You can restore those items to either a new or existing file system. Either way, AWS Backup creates a new Amazon EFS directory (aws-backup-restore_datetime) off of the root directory to contain the items.
Looking further through the documentation, I can't find either of:
an option to set the name of the directory used
the value of the directory name returned by any call (either start-restore-job or describe-restore-job)
I have also checked how the datetime portion of the directory name maps to the creationDate and completionDate of the restore job, but it seems neither matches (completionDate is very close, but it's not the exact same timestamp).
Is there any way for me to do one of these two things? With both missing, restoring a file system from a recovery point in an automated fashion is very hard.
Is there any way for me to do one of these two things?
As it stands, no.
However, since we know that the directory will always be in the root, running find . -maxdepth 1 -type d -name "aws-backup-restore_*" in the mount point should return the directory name to you. You could further filter this down based on the year, month, day, hour & minute.
You could have something polling the job status on the machine that has the EFS file system mounted, finding the correct directory and then pushing that to AWS Systems Manager Parameter Store for later retrieval (see the sketch below). If restoring to a new file system, this of course becomes more difficult, but it is still doable in an automated fashion.
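Something along these lines (a rough sketch; the mount point /mnt/efs, the job ID and the parameter name are illustrative assumptions):
# Hypothetical sketch: wait for the restore job to finish, locate the restore
# directory at the root of the mounted file system, then publish its path to
# Parameter Store for later retrieval.
JOB_ID="your-restore-job-id"
until [ "$(aws backup describe-restore-job --restore-job-id "$JOB_ID" --query Status --output text)" = "COMPLETED" ]; do
  sleep 30
done
RESTORE_DIR=$(find /mnt/efs -maxdepth 1 -type d -name "aws-backup-restore_*" | sort | tail -n 1)
aws ssm put-parameter --name "/efs/restore-dir" --value "$RESTORE_DIR" --type String --overwrite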
If you're not mounting this on an EC2 instance, then running a Lambda function with the EFS file system mounted, for example, will let you obtain the directory and then push it to Parameter Store for retrieval elsewhere. The Lambda service mounts EFS file systems while the execution environment is being prepared, in other words during the 'cold start', so there are no extra costs for extra invocation time, and as such this would be the cheapest option.
However, there's no built-in way via the APIs to obtain or configure the directory name, so you're stuck there.
It's a failure on AWS's part that they neither return the directory name they use in any way, nor does any of the returned metadata (creationDate/completionDate) exactly match the timestamp used to name the directory.
If you're an enterprise customer, suggest this as a missing feature to your TAM (Technical Account Manager) or SA (Solutions Architect).
I understand that an HDFS snapshot keeps track of files added to or deleted from a directory. What is the behaviour when I have files (Parquet) that are appended to continuously?
When you create a snapshot of a directory, it is recorded under that directory's .snapshot subdirectory, so snapshots are ordered by date ascending, whatever the file format is. HDFS allows up to 65,536 snapshots per snapshottable directory.
an HDFS snapshot keeps track of files added to or deleted from a directory
Correct me if I'm wrong, but a snapshot keeps track of every single change (even within a file), not just files added to or deleted from a directory.
I hope this helps you to understand their behaviour!
HDFS snapshots documentation
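For example, to see the behaviour yourself (the directory path and snapshot name below are illustrative assumptions):
# Hypothetical sketch: make a directory snapshottable, take a snapshot, then
# inspect it under the read-only .snapshot subdirectory. Paths are assumptions.
hdfs dfsadmin -allowSnapshot /data/parquet
hdfs dfs -createSnapshot /data/parquet snap-001
# Files under /data/parquet can keep being appended to; the snapshot still
# serves the state (including file lengths) as of its creation time:
hdfs dfs -ls /data/parquet/.snapshot/snap-001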
I have two separate servers: a CD (Content Delivery) server and a CM (Content Management) server. I upload images on the CM server and publish them. Although I can see the images under the Media Library item in the web database, they aren't displayed on the CD server (e.g. on the website); it indicates that the images were not found. Please help me understand how I can solve this problem, or whether I need some configuration for that.
Many thanks.
Sitecore media items can carry the actual media file either as:
a blob in the database - everything works automatically out of the box
a file on the file system - one needs to configure either WebDeploy or DFS
Database resources are costly, so you might not want to spend them on something that can be achieved with free tools.
Since WebDeploy by default locates modified files by comparing file hashes between source and target, it will become slower as the number of files grows.
You might have uploaded the image to the media library as a file, in which case the image is stored as a file on the file system. To verify this, check whether the image item in the media library has a value set in its 'File Path' field. Such files have to be moved to the file system of the CD server as well (see the sketch below).
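For example, with WebDeploy (a rough sketch; the webroot paths and the CD server name are assumptions, and /App_Data/MediaFiles is the default Sitecore media folder):
rem Hypothetical sketch: push file-based media from the CM webroot to the CD
rem server with WebDeploy. Paths and the computer name are assumptions.
msdeploy -verb:sync ^
  -source:dirPath="C:\inetpub\wwwroot\MySite\App_Data\MediaFiles" ^
  -dest:dirPath="C:\inetpub\wwwroot\MySite\App_Data\MediaFiles",computerName=CD-SERVER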
If you upload your images in bulk, you can store them as blobs in the database by default, rather than as files on the file system, using the following setting:
<setting name="Media.UploadAsFiles" value="false"/>
I have to replicate my local folder structure in an S3 bucket. I am able to do so, but it's not creating the folders which are empty. My local folder structure is as follows, and the command used is:
aws-exec s3 sync ./inbound s3://msit.xxwmm.supplychain.relex.eeeeeeeeee/
It only creates inbound/procurement/pending/test.txt; masterdata and transaction are not created, but if I put some file in each directory, it will be created.
As answered by @SabeenMalik in this StackOverflow thread:
S3 doesn't have the concept of directories; the whole folder/file.jpg is the file name. If, using a GUI tool or something, you delete the file.jpg from inside the folder, you will most probably see that the folder is gone too. The visual representation in terms of directories is for user convenience.
You do not need to pre-create the directory structure. Just pretend that the structure is there and everything will be okay.
Amazon S3 will automatically create the structure as objects are written to paths. For example, creating an object called s3://bucketname/inbound/procurement/foo will automatically create the directories.
(This isn't strictly true because Amazon S3 doesn't use directories, but it will appear that the directories are there.)
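If you want the empty folders (masterdata, transaction) to appear anyway, a common workaround is to create a zero-byte object whose key ends in a slash; the console then displays it as a folder. A minimal sketch, using the bucket from the question:
# Hypothetical sketch: create zero-byte placeholder objects so empty
# "directories" show up in the console even before any file is synced.
aws s3api put-object --bucket msit.xxwmm.supplychain.relex.eeeeeeeeee --key inbound/masterdata/
aws s3api put-object --bucket msit.xxwmm.supplychain.relex.eeeeeeeeee --key inbound/transaction/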
Why does AWS S3 use objects and not files & directories? Is there any specific reason not to have directories/folders in S3?
You are welcome to use directories/folders in Amazon S3. However, please realise that they do not actually exist.
Amazon S3 is not a filesystem. It is an object storage service that is highly scalable, stores trillions of objects and serves millions of objects per second. To meet the demands of such scale, it has been designed as a Key-Value store: the name of the file is the Key, and the contents of the file are the Object.
When a file is uploaded to a directory (eg cat.jpg is stored in the images directory), it is actually stored with a filename of images/cat.jpg. This makes it appear to be in the images directory, but the reality is that the directory does not exist -- rather, the name of the object includes the full path.
This will not impact your normal usage of Amazon S3. However, it is not possible to rename a directory, because the directory does not exist. Instead, rename the files to effectively rename the directory. For example:
aws s3 mv s3://my-bucket/images/cat.jpg s3://my-bucket/pictures/cat.jpg
This will cause the pictures directory to magically appear, with cat.jpg inside it. There is no need to create the directory first, because it doesn't actually exist; the user interface simply makes it appear as though there are directories.
Bottom line: Feel free to use directories, but be aware that they do not actually exist and can't be renamed.
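To rename a whole directory's worth of objects at once, move everything under the prefix recursively, e.g.:
# Moves every object under images/ to pictures/, which looks like renaming
# the directory even though no directory object is involved.
aws s3 mv s3://my-bucket/images/ s3://my-bucket/pictures/ --recursive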