https://docs.aws.amazon.com/quickstart/latest/rd-gateway/step2.html#existing-standalone
https://s3.amazonaws.com/quickstart-reference/microsoft/rdgateway/latest/templates/rdgw-standalone.template
I'm referencing the template above to create my Remote Desktop Gateway (RDGW) in an existing VPC. It has QSS3BucketName and QSS3KeyPrefix in the Parameters section. In the Resources section, RDGWLaunchConfiguration references the QSS3BucketName bucket again; for the setup files, it calls the following path.
https://${QSS3BucketName}.${QSS3Region}.amazonaws.com/${QSS3KeyPrefix}submodules/quickstart-microsoft-utilities/scripts/Unzip-Archive.ps1
For some reason, after the PT30M timeout (30 minutes) it says it didn't get the required signal and rolls back. My question to the community: do I need to store these files in the S3 bucket myself, or does the template dump them into S3 while it's creating the stack?
I also created a bucket in S3 and copied these scripts from GitHub into the bucket under the same paths, but it still does not work. Kind of frustrating.
The Quick Start template references nested templates stored in the aws-quickstart S3 bucket. If we substitute the default values into the URL above, the exact URL we get is:
https://aws-quickstart.s3.amazonaws.com/quickstart-microsoft-rdgateway/submodules/quickstart-microsoft-utilities/scripts/Unzip-Archive.ps1
You can either create the stack with the default values, without changing the AWS Quick Start configuration, or download all the referenced templates and scripts, modify them as per your requirements, and place them in your own bucket. Once that is done, replace the URL values in the main template with your bucket's URL.
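If you go the second route, the key layout in your own bucket has to match what the template builds from QSS3BucketName and QSS3KeyPrefix. A minimal sketch of mirroring a local clone of the Quick Start (with submodules checked out) into your own bucket with boto3 — the bucket name, key prefix, and local path below are placeholders, not values from the Quick Start:

```python
# Sketch: upload a local clone of the Quick Start repo into your own bucket so the
# keys match what the template builds from QSS3BucketName/QSS3KeyPrefix.
import os
import boto3

s3 = boto3.client("s3")

BUCKET = "my-quickstart-bucket"                  # placeholder: your QSS3BucketName
KEY_PREFIX = "quickstart-microsoft-rdgateway/"   # placeholder: your QSS3KeyPrefix (must end with /)

def upload_tree(local_dir: str, key_prefix: str) -> None:
    """Upload every file under local_dir to BUCKET, preserving relative paths as keys."""
    for root, _dirs, files in os.walk(local_dir):
        for name in files:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, local_dir).replace(os.sep, "/")
            key = f"{key_prefix}{rel}"
            s3.upload_file(path, BUCKET, key)
            print(f"uploaded s3://{BUCKET}/{key}")

# e.g. a local clone of quickstart-microsoft-rdgateway with its submodules checked out
upload_tree("./quickstart-microsoft-rdgateway", KEY_PREFIX)
```

With that layout, Unzip-Archive.ps1 ends up under ${QSS3KeyPrefix}submodules/quickstart-microsoft-utilities/scripts/, which is the key the launch configuration expects.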
Let's say I put a lifecycle rule on the prefix "logs/", which doesn't have any contents inside, and my expiration is set to 1 day.
After 1 day, does the logs folder get deleted? Or does the rule only apply to objects created inside that prefix, with each object deleted 1 day after its own creation?
First of all, in S3 there are no folders. S3 is essentially key-value storage, where the keys are strings identifying the objects and the values are the objects themselves.
Your keys can mimic a folder structure, and the AWS console will display them as if you had folders, for organizational purposes, but under the hood there are no folders. Since there are no folders, when you delete the last object with a given folder-like prefix, the "folder" will also disappear. So, if everything under logs/ is deleted, the logs "folder" will also be deleted.
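A quick way to see this with boto3 — the bucket name is a placeholder, and this assumes no 0-byte logs/ placeholder object exists (more on that below):

```python
# Tiny illustration: a "folder" in S3 is just a shared key prefix.
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder

s3.put_object(Bucket=bucket, Key="logs/app.log", Body=b"hello")

# The console would now show a logs/ "folder": listing with a delimiter
# surfaces it as a common prefix.
resp = s3.list_objects_v2(Bucket=bucket, Delimiter="/")
print(resp.get("CommonPrefixes"))   # [{'Prefix': 'logs/'}]

# Delete the only object with that prefix and the "folder" is gone too.
s3.delete_object(Bucket=bucket, Key="logs/app.log")
resp = s3.list_objects_v2(Bucket=bucket, Delimiter="/")
print(resp.get("CommonPrefixes"))   # None
```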
You may be able to have "empty folders". From the AWS docs:
When you use the Amazon S3 console to create a folder, Amazon S3 creates a 0-byte object with a key that's set to the folder name that you provided. For example, if you create a folder named photos in your bucket, the Amazon S3 console creates a 0-byte object with the key photos/. The console creates this object to support the idea of folders.
If you want to keep your logs/ prefix, you may want to write the lifecycle rule in a way that excludes this 0-byte object.
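A hedged sketch of such a rule with boto3 — the bucket name is a placeholder, and the ObjectSizeGreaterThan filter is what skips the 0-byte placeholder object:

```python
# Sketch: expire objects under logs/ after 1 day, but skip 0-byte keys such as the
# console's "logs/" placeholder object.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-logs-after-1-day",
                "Status": "Enabled",
                "Filter": {
                    "And": {
                        "Prefix": "logs/",
                        "ObjectSizeGreaterThan": 0,  # excludes the 0-byte "folder" object
                    }
                },
                "Expiration": {"Days": 1},
            }
        ]
    },
)
```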
I'm fairly new to CloudFormation templating, but all I am looking to do is create a template that creates an S3 bucket and imports contents into that bucket from another S3 bucket in a different account (which is also mine). I realize CloudFormation does not natively support importing contents into an S3 bucket and that I have to use a custom resource. I could not find any references/resources that do such a task. I'm hoping someone could point out some examples, or maybe even some guidance on how to tackle this.
Thank you very much!
I can't provide full code, but I can provide some guidance. There are a few ways of doing this, but I will list one:
Create a bucket policy for the bucket in the second account. The policy should allow the first account (the one with CFN) to read it. There are many resources on doing this. One from AWS is here.
Create a standalone Lambda function in the first account with an execution role that allows it to read the bucket in the second account. This is not a custom resource yet. The purpose of this Lambda function is to test the cross-account permissions and the code that reads objects from the bucket. It is a test function for sorting out all the permissions and polishing the code that copies objects from one bucket to the other.
Once your Lambda function works as intended, modify it (or create a new one) to act as a custom resource in CFN. As a custom resource, it will need to take the newly created bucket in CFN as one of its arguments. For easier creation of custom resources, this AWS helper can be used. A rough sketch of the result is shown below.
Note that the maximum Lambda execution timeout is 15 minutes. Depending on how many objects you have, it may not be enough.
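A minimal sketch of what the second and third steps could end up as, assuming crhelper for the custom-resource plumbing; the property names and bucket handling are assumptions, not tested code:

```python
# Sketch of a CFN custom resource that copies objects from a bucket in the second
# account into the bucket created by the stack. Names are placeholders; crhelper
# handles signalling success/failure back to CloudFormation.
import boto3
from crhelper import CfnResource

helper = CfnResource()
s3 = boto3.client("s3")

@helper.create
@helper.update
def copy_objects(event, _context):
    source_bucket = event["ResourceProperties"]["SourceBucket"]  # bucket in account 2
    dest_bucket = event["ResourceProperties"]["DestBucket"]      # bucket created by CFN

    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=source_bucket):
        for obj in page.get("Contents", []):
            s3.copy_object(
                Bucket=dest_bucket,
                Key=obj["Key"],
                CopySource={"Bucket": source_bucket, "Key": obj["Key"]},
            )

@helper.delete
def no_op(_event, _context):
    # Leave the copied objects in place on stack deletion.
    pass

def handler(event, context):
    helper(event, context)
```

The bucket policy from the first step still has to grant this function's role s3:ListBucket and s3:GetObject on the source bucket.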
Hope this helps.
If Custom Resources scare you, then a simpler way is to launch an Amazon EC2 instance with a startup script specified via User Data.
The CloudFormation template can 'insert' the name of the new bucket into the script by referencing the bucket resource that was created. The script could then run an AWS CLI command to copy the files across.
Plus, it's not expensive. A t3.micro instance is about 1c/hour and it is charged per second, so it's pretty darn close to free.
I need to configure a setup for an already existing Lambda function that is used with S3 in/out buckets, kind of like the sample the AWS docs provide here:
http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
The Lambda function that I'm working with does the following:
- notices new image in S3 location mybucket-test/images/in
- creates a .json file at the same S3 location - mybucket-test/images/in
- creates variations of the image at S3 location mybucket-test/images/out
My task is to create a second set of S3 in/out locations to work with the same Lambda function. The AWS tutorial at the link above doesn't mention how to set up the "resize" bucket.
After following the tutorials I ended up with this situation for the Lambda function:
- lambda notices new image in S3 location wire-qa/images/in
- creates .json file at the expected location wire-qa/images/in
- creates variations of the image in S3 location mybucket-test/images/out INSTEAD OF placing them into wire-qa/images/out
It seems to me there is something simple that I'm missing, but figuring out what it is on my own is already taking a lot of time.
I looked in all sorts of places for a way to configure the Lambda function to use my bucket wire-qa/images/out as well as mybucket-test/images/out, but so far I have not found anything relevant.
I would very much appreciate some help here. At the very least, I would like to understand where in the AWS example the S3 "resize" bucket is set up to work with Lambda.
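For context, in the AWS sample this setup is based on, the destination is not configured in the S3 or Lambda console at all: the handler reads the source bucket/key from the S3 event and then derives or hardcodes the output location in code. A rough Python sketch of that pattern — the names and the choice of destination are placeholders, not the actual sample code:

```python
# Sketch of where the in/out locations are decided in this kind of function.
# The source comes from the S3 event; the destination is chosen in code.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    record = event["Records"][0]["s3"]
    src_bucket = record["bucket"]["name"]   # e.g. wire-qa
    src_key = record["object"]["key"]       # e.g. images/in/photo.jpg
                                            # (real code would unquote_plus the key)

    # If the output bucket/prefix is hardcoded here, every input bucket will write
    # to the same place (e.g. mybucket-test/images/out). Deriving it from
    # src_bucket instead keeps each in/out pair together.
    dst_bucket = src_bucket                 # placeholder choice
    dst_key = src_key.replace("images/in/", "images/out/")

    body = s3.get_object(Bucket=src_bucket, Key=src_key)["Body"].read()
    resized = body  # the real function would resize the image here
    s3.put_object(Bucket=dst_bucket, Key=dst_key, Body=resized)
```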
I'd like to write a Lambda function that is triggered when files are added or modified in an s3 bucket and processes them and moves them elsewhere, clobbering older versions of the files.
I'm wondering if AWS Lambda can be configured to trigger when files are updated?
After reviewing the Boto3 documentation for S3, it looks like the only things that can happen in an S3 bucket are creations and deletions.
Additionally, the AWS documentation seems to indicate there is no way to trigger things on 'updates' to S3.
Am I correct in thinking there is no real concept of an 'update' to a file in S3 and that an update would actually be when something was destroyed and recreated? If I'm mistaken, how can I trigger a Lambda function when an S3 file is changed in a bucket?
No, there is no concept of updating a file on S3. A file on S3 is updated the same way it is uploaded in the first place - through a PUT object request. (Relevant answer here.) An S3 bucket notification configured to trigger on a PUT object request can execute a Lambda function.
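As a rough sketch, wiring that up outside the console could look like this with boto3 — the bucket name and Lambda ARN are placeholders, and it assumes the function already has a resource-based policy allowing S3 to invoke it:

```python
# Sketch: trigger a Lambda function on object creation, which includes overwrites,
# since "updating" an object is just another PUT.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="my-example-bucket",  # placeholder
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "Id": "run-on-put",
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-file",
                "Events": [
                    "s3:ObjectCreated:Put",
                    "s3:ObjectCreated:CompleteMultipartUpload",
                ],
            }
        ]
    },
)
```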
There is also newer functionality for S3 buckets: under Properties you can enable versioning for the bucket. If you set an object-created trigger on S3 assigned to your Lambda function, it will execute every time you 'update' the same file, since each update is a new version.
I am setting up an Amazon S3 output on BitMovin and it is telling me my values are incorrect. I don't know which ones, because they were all copied and pasted in. It may be another issue with my bucket.
I have set up a bucket in Oregon (us-west-2) and copied and pasted in the name, access key, and access secret. My policies match what they have in this document too:
Tutorial: Policies for BitMovin
Your copy & paste went wrong, but just a bit :)
In your second statement, you have to remove the "/*" part from the string "arn:aws:s3:::test-bitmovin/*" within the "Resource" array.
The allowed actions of the second statement apply to the bucket itself, not to the objects within it, so the stated resource should refer to the bucket.
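For illustration, the shape of the corrected statements expressed as a Python dict — the action lists here are examples, not the exact ones from the BitMovin tutorial:

```python
# Illustration of the fix: object-level actions keep the /* resource, bucket-level
# actions point at the bucket ARN without /*.
import json

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],           # object-level actions
            "Resource": "arn:aws:s3:::test-bitmovin/*",           # objects: keep /*
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],  # bucket-level actions
            "Resource": "arn:aws:s3:::test-bitmovin",             # bucket: no /*
        },
    ],
}

print(json.dumps(policy, indent=2))
```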
Then it should work as expected!