I want to add a lifecycle policy to my existing S3 bucket (using Serverless) which deletes all the folders inside the bucket. I have written the code in my serverless.yml. When I try to deploy, I get:
Additional stack resources updated failed (UPDATE_ROLLBACK_COMPLETE).
so i checked into cloudformation stacks , i am getting message that my bucket already exists -
my_bucket_name already exists
Resource update cancelled
The following resource(s) failed to create: [my_bucket_name]
I am not sure why I am getting this. My S3 bucket code looks like this:
custom:
  additionalStacks:
    ressources:
      Resources:
        MyS3TBucket:
          Type: AWS::S3::Bucket
          Properties:
            BucketName: my_bucket
            LifecycleConfiguration:
              Rules:
                - Status: Enabled
                  ExpirationInDays: 30
This is not my entire S3 config, just the part relevant to this post. Before adding the lifecycle configuration everything was working fine. Any help would be appreciated, thank you.
As the error suggests:
my_bucket_name already exists
The bucket that you want to create already exists. If it's yours, you have to delete it before you can re-create it. If not, remember that bucket names must be globally unique, which means some other AWS user may already have created a bucket with the same name as yours. In that case you must ensure that your bucket name is absolutely unique, which is often done by adding a random suffix, e.g.:
MyS3TBucket:
  Type: AWS::S3::Bucket
  Properties:
    # note: bucket names may not contain underscores, so use hyphens
    BucketName: my-bucket-489d939239dd3
    LifecycleConfiguration:
      Rules:
        - Status: Enabled
          ExpirationInDays: 30
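Alternatively, if you don't strictly need a fixed name, a simple option is to omit BucketName altogether; CloudFormation then generates a globally unique physical name from the stack name and logical ID. A minimal sketch of the same resource:

MyS3TBucket:
  Type: AWS::S3::Bucket
  Properties:
    # No BucketName given: CloudFormation auto-generates a unique one,
    # so "already exists" collisions cannot happen.
    LifecycleConfiguration:
      Rules:
        - Status: Enabled
          ExpirationInDays: 30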
I am totally new to serverless and AWS. I am trying to create an S3 bucket using Serverless.
My serverless.yml file looks like this:
service: s3-file-uploader
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-2
custom:
  fileUploadBucketName: ${self:service}-bucket-${self:provider.stage}
resources:
  Resources:
    FileBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.fileUploadBucketName}
        AccessControl: PublicRead
The first time I ran serverless deploy it worked, but afterwards I deleted the bucket manually from AWS and tried to redeploy. It shows me this error:
Serverless Error ----------------------------------------
An error occurred: FileBucket - s3-file-uploader-bucket-dev already exists.
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: win32
Node Version: 14.18.0
Framework Version: 2.72.1
Plugin Version: 5.5.4
SDK Version: 4.3.0
Components Version: 3.18.2
It says a bucket with the name s3-file-uploader-bucket-dev already exists, but there is no bucket with this name inside AWS S3.
Even though it gives this error, it also creates a bucket named s3-file-uploader-dev-serverlessdeploymentbucket-1aucnojnjl618, but this is not the name I gave in the serverless.yml file; it should be s3-file-uploader-bucket-dev. In CloudFormation there is a stack created with the name s3-file-uploader-dev and its status is UPDATE_ROLLBACK_COMPLETE.
Why is it showing the above-mentioned error and at the same time creating a bucket with a different name? It's confusing that it gives an error yet creates a bucket.
It can take several hours for a bucket name to become available again.
So, either choose a different bucket name or wait a little longer until it becomes available again.
Bucket names are globally unique. You can read about that here and about deleting a bucket here. Even if the name has not been taken by someone else, after deleting a bucket you need to wait a while before you can reuse the same name; exactly how long is not documented (AFAIK).
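If you want to check whether a name is currently taken at all, one quick check from the CLI (using the bucket name from the question) is s3api head-bucket:

# 404 Not Found -> no bucket with this name exists anywhere
# 403 Forbidden -> the name exists but belongs to another account
# exit code 0   -> the bucket exists and you can access it
aws s3api head-bucket --bucket s3-file-uploader-bucket-dev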
I am trying to set up the Inventory configuration for an S3 bucket with CloudFormation. I want to get daily inventories of data in one subfolder, and have the inventories written to a different subfolder in the same bucket. I have defined the bucket as follows:
S3Bucket:
  Type: AWS::S3::Bucket
  Properties:
    # ...other properties...
    InventoryConfigurations:
      - Id: runs
        Enabled: true
        Destination:
          BucketAccountId: !Ref AWS::AccountId
          BucketArn: !GetAtt S3Bucket.Arn
          Format: CSV
          Prefix: inventory/runs/
        IncludedObjectVersions: Current
        OptionalFields: [ETag, Size, BucketKeyStatus]
        Prefix: runs/
        ScheduleFrequency: Daily
Unfortunately, the !GetAtt S3Bucket.Arn line seems to be failing, causing an error message like "Error: Failed to create changeset for the stack: , ex: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state: For expression "Status" we matched expected path: "FAILED" Status: FAILED. Reason: Circular dependency between resource". If I use the actual ARN of the bucket in place of !GetAtt S3Bucket.Arn (it already exists from a previous version of the stack), then the deploy succeeds, so I know buckets can write Inventories to themselves.
So I guess my question is, is there a way to let Cfn resources call !GetAtt on themselves, so I don't have to hard-code the bucket ARN in InventoryConfigurations? Thanks in advance!
Can AWS CloudFormation resources call !GetAtt on themselves?
Unfortunately no: !GetAtt can only reference other resources in the stack, as you've experienced (other as in concrete resources that have already been created).
However, in your case, considering you know the bucket name, you could just construct the bucket ARN yourself directly.
The ARN format is:

arn:aws:s3:::bucket_name

e.g. if the name is test, you can use arn:aws:s3:::test:

Destination:
  BucketAccountId: !Ref AWS::AccountId
  BucketArn: 'arn:aws:s3:::test'
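If the bucket name is already available as a template parameter, you can also build the ARN with !Sub instead of pasting the literal string. A minimal sketch, where BucketNameParam is a hypothetical parameter rather than something from the original template:

Destination:
  BucketAccountId: !Ref AWS::AccountId
  # BucketNameParam is an assumed template parameter holding the bucket name;
  # !Sub builds the ARN without a self-referencing !GetAtt.
  BucketArn: !Sub 'arn:aws:s3:::${BucketNameParam}'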
I'm trying to deploy my Serverless project for several environments: develop, staging, and production. To make this work I'm using serverless-dotenv-plugin with NODE_ENV=development or NODE_ENV=acceptation (in this case). Everything related to the plugin seems to work: when I deploy for development or acceptance it loads the correct .env file, and it tries to create the related S3 buckets.
As you can see in the attached image, there are two buckets for each environment, which I want to link to a Route53 domain. The initial deployment created the correct buckets. When I now deploy again, development gives no issue, but when I deploy for acceptance I get the error An error occurred: BucketGatsbySite - project-bucket-acc-www-gatsby already exists., and the build breaks.
Of course the bucket already exists, but since it has already been created it shouldn't be re-created. This works for development but not for acceptance, and I have no clue why. I can't find anything related to this in this AWS documentation, although as you can see below I do have DeletionPolicy: Retain, which I think should mean no new bucket is created and the old one is retained.
So to summarise: I want to create each bucket once, retain it, and never try to create it again on later deploys.
My config is as follows:
service: project
package:
  individually: true
provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
  stage: ${env:STAGE}
  region: ${env:REGION}
  environment:
    REGION: ${env:REGION}
    STAGE: ${env:STAGE}
    NODE_ENV: ${env:NODE_ENV}
    CLIENT_ID: ${env:AWS_CLIENT_ID}
    TABLE: "project-db-${env:STAGE}"
    BUCKET: "project-bucket-${env:STAGE}"
    POOL: "project-userpool-${env:STAGE}"
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:*
          Resource:
            - !GetAtt projectTable.Arn
BucketReactApp:
  Type: AWS::S3::Bucket
  DeletionPolicy: Retain
  Properties:
    AccessControl: PublicRead
    BucketName: "${self:provider.environment.BUCKET}-www-react"
BucketGatsbySite:
  Type: AWS::S3::Bucket
  DeletionPolicy: Retain
  Properties:
    AccessControl: PublicRead
    BucketName: "${self:provider.environment.BUCKET}-www-gatsby"
Every suggestion would be really appreciated, since I'm kinda stuck on this.
Some changes in CloudFormation (CFN) require replacement of the resource. This is noted on the AWS::S3::Bucket documentation page as the Update requires behavior listed for each property.
And here is the full list of "Update behaviors of stack resources"; Replacement means that the bucket will be recreated.
But it's still strange, because the only two properties that require replacement on update are:
BucketName
ObjectLockEnabled
So maybe some intermediate operation on the CFN stack requires recreation of the S3 bucket.
Maybe you should be looking at the UpdateReplacePolicy attribute:

BucketGatsbySite:
  Type: AWS::S3::Bucket
  UpdateReplacePolicy: Retain
  ...
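Note the difference between the two attributes: DeletionPolicy: Retain only keeps the bucket when the resource is deleted (e.g. on stack deletion), while UpdateReplacePolicy: Retain keeps the old bucket when an update forces CloudFormation to replace the resource.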
I've created a bucket without a DeletionPolicy and want to add it. I updated our configuration (in our serverless.yml), and we now see DeletionPolicy: Retain in the CloudFormation template.
However, based on this blurb:
One quirk in the update template workflow is that DeletionPolicy
cannot be updated by itself but must accompany some other change that
"add, modify or delete properties" of an existing resource. A fun fact
about the AWS::S3::Bucket resource: it cannot be updated after
creation. Good news: this does not apply to the DeletionPolicy
attribute. Bad news: CFN won't pick up the changes unless another
property of the "immutable" S3 bucket is updated.
I've searched for quite a while, and I cannot determine how to query the S3 bucket to find out whether the DeletionPolicy is actually set. I don't see where this is exposed in the AWS console, nor do I see how it can be queried with the AWS CLI. It doesn't appear to be in the bucket policy as far as I can tell.
How do I validate the DeletionPolicy is actually set?
I think the linked article (from 1/12/2014) is outdated. I just did the following:
Deployed a stack containing

MyBucket:
  Type: AWS::S3::Bucket
  ...
Verified it deployed successfully, then deployed

MyBucket:
  Type: AWS::S3::Bucket
  DeletionPolicy: Retain
  ...
Verified it deployed successfully, then deleted my stack, and verified MyBucket is still present in S3.
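If you'd rather inspect than test destructively: the DeletionPolicy lives in the CloudFormation template, not on the bucket, which is why the S3 console and s3api show nothing. A rough check, assuming your stack is named my-service-dev (substitute your own stack name):

# Dump the deployed template and search for the attribute;
# grep is crude but works whether the template is JSON or YAML.
aws cloudformation get-template --stack-name my-service-dev | grep -i DeletionPolicy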
I am trying to deploy a Serverless project which includes S3 bucket creation CloudFormation in the serverless.yml file, but when I try to deploy, it says the S3 bucket already exists and the deployment fails.
I know an S3 bucket name should be globally unique, and I am damn sure the name I am using is unique; even after changing it to something else, it still says the same.
The CloudFormation stack that says the S3 bucket exists is actually the newly created stack. I'm not sure how to fix this; can anyone help me out and tell me both the fix and the cause of the issue? :)
Thanks in advance.
The issue I had was that one of my lambdas used the above-mentioned bucket as an event source. When a bucket is added as an event source, Serverless actually creates that bucket as well, so when the explicit bucket-creation CloudFormation runs it says the bucket already exists.
I fixed it by keeping only the event source and removing the explicit declaration of that bucket.
If you add existing: true to the S3 event config in your serverless.yml file, it won't try to create the S3 bucket, like below:
funcName:
  handler: handler
  events:
    - s3:
        bucket: 'my-bucket-name'
        event: s3:ObjectCreated:*
        existing: true
        rules:
          - suffix: .pdf
          - prefix: documents
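As far as I know, existing: true requires Serverless Framework v1.47.0 or later; under the hood it attaches the notification to the already-existing bucket via a custom resource instead of declaring the bucket in the stack.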
Anything involving CloudFormation (or any other infrastructure-as-code) is fussy, and the error messages can mislead, meaning there are a ton of things that can cause this problem (see issues on GitHub like this one).
But in my experience, the most common causes of these kinds of problems are not a pre-existing bucket, but problems with AWS credentials, permissions, or region that produce misleading error messages. To fix these, or at least rule them out:
Make sure your serverless.yml is set to the region you already deployed the stack in, i.e. under provider. Example:

provider:
  name: aws
  stage: dev
  region: us-east-2
Override any latent credentials from, for example, ~/.aws/credentials, by explicitly setting your credentials in the shell you'll use to deploy. Example from the Serverless docs:
export AWS_ACCESS_KEY_ID=<your access key here>
export AWS_SECRET_ACCESS_KEY=<your access secret here>
Make sure those AWS credentials have the roles and permissions they need.
But, as I mentioned, CloudFormation is fussy. There may be other problems to solve, but try these first. You may try them and still be beating your head against the wall, but it'll more likely be the right wall. Hope this helps.
Try using a Condition, driven by a Parameter, to decide whether or not to create the resource:
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  EnvType:
    Description: Environment type.
    Default: test
    Type: String
    AllowedValues:
      - prod
      - test
    ConstraintDescription: must specify prod or test.
Conditions:
  CreateProdResources: !Equals
    - !Ref EnvType
    - prod
Resources:
  EC2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: ami-0ff8a91507f77f867
  MountPoint:
    Type: 'AWS::EC2::VolumeAttachment'
    Condition: CreateProdResources
    Properties:
      InstanceId: !Ref EC2Instance
      VolumeId: !Ref NewVolume
      Device: /dev/sdh
  NewVolume:
    Type: 'AWS::EC2::Volume'
    Condition: CreateProdResources
    Properties:
      Size: 100
      AvailabilityZone: !GetAtt
        - EC2Instance
        - AvailabilityZone
Follow the sample condition flow to decide whether to create a resource or not.
See this for more details
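Adapted to an S3 bucket, a sketch of the same pattern might look like this (the parameter and names are illustrative, not taken from the original post):

Parameters:
  CreateBucket:
    Type: String
    Default: 'true'
    AllowedValues: ['true', 'false']
Conditions:
  ShouldCreateBucket: !Equals [!Ref CreateBucket, 'true']
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    # Created only when CreateBucket is 'true'; otherwise CloudFormation
    # skips the resource instead of failing on a name clash.
    Condition: ShouldCreateBucket
    Properties:
      BucketName: my-globally-unique-bucket-name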
When deploying, the BucketName must be globally unique across all AWS accounts and regions. So if anyone has already created a bucket named "local-bucket-dev," it will throw
An error occurred: AttachmentsBucket - local-bucket-dev already
exists.
Try changing the BucketName to something unique.
I hope that helps.