Serverless: Deployment error "S3 Bucket already exists in stack" - amazon-web-services

I am trying to deploy a Serverless project that creates an S3 bucket via CloudFormation in the serverless.yml file, but when I try to deploy, it says the S3 bucket already exists and the deployment fails.
I know an S3 bucket name should be globally unique, and I am damn sure the name I am using is unique; even if I change it to something else, it still says the same thing.
The CloudFormation stack that says the S3 bucket exists is actually the newly created stack, so I'm not sure how to fix this issue. Can anyone help me out, tell me how to fix the deployment, and explain the cause of the issue? :)
Thanks in advance.

The issue I had was that one of my Lambdas used the above-mentioned bucket as an event source. When a bucket is added as an event source, Serverless creates that bucket as well, so when the stack then runs the explicit bucket-creation CloudFormation it reports that the bucket already exists.
So I fixed it by keeping only the event source and removing the explicit declaration of that bucket.
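As a rough sketch of that fix (the function and bucket names below are made up for illustration), the bucket appears only under the function's events and is not declared again under resources:
functions:
  uploadHandler:                      # hypothetical function name
    handler: handler.upload
    events:
      - s3:
          bucket: my-unique-bucket    # Serverless creates this bucket for the event source
          event: s3:ObjectCreated:*
# Note: no matching AWS::S3::Bucket resource under resources/Resources,
# otherwise CloudFormation tries to create the bucket a second time.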

If you add existing: true to the S3 config in your serverless.yml file it won't try to create the S3 bucket, like the below:
funcName:
  handler: handler
  events:
    - s3:
        bucket: 'my-bucket-name'
        event: s3:ObjectCreated:*
        existing: true
        rules:
          - suffix: .pdf
          - prefix: documents

Anything involving CloudFormation (or any other infrastructure-as-code) is fussy, and the error messages can mislead, meaning there are a ton of things that can cause this problem (see issues on GitHub like this one).
But in my experience, the most common causes of these kinds of problems are not a pre-existing bucket, but problems with AWS credentials, permissions, or region that produce misleading error messages. To fix these, or at least rule them out:
Make sure your serverless.yml is set to the region you already deployed the stack in. Example:
custom:
  stage: dev
  region: us-east-2
Override any latent credentials from, for example, ~/.aws/credentials, by explicitly setting your credentials in the shell you'll use to deploy. Example from the Serverless docs:
export AWS_ACCESS_KEY_ID=<your access key here>
export AWS_SECRET_ACCESS_KEY=<your access secret here>
Make sure those AWS credentials have the roles and permissions they need.
But, as I mentioned, CloudFormation is fussy. There may be other problems to solve, but try these first. You may try them and still be beating your head against the wall, but it'll more likely be the right wall. Hope this helps.

Try using a Condition and passing a Parameter to decide whether to create the bucket or not:
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  EnvType:
    Description: Environment type.
    Default: test
    Type: String
    AllowedValues:
      - prod
      - test
    ConstraintDescription: must specify prod or test.
Conditions:
  CreateProdResources: !Equals
    - !Ref EnvType
    - prod
Resources:
  EC2Instance:
    Type: 'AWS::EC2::Instance'
    Properties:
      ImageId: ami-0ff8a91507f77f867
  MountPoint:
    Type: 'AWS::EC2::VolumeAttachment'
    Condition: CreateProdResources
    Properties:
      InstanceId: !Ref EC2Instance
      VolumeId: !Ref NewVolume
      Device: /dev/sdh
  NewVolume:
    Type: 'AWS::EC2::Volume'
    Condition: CreateProdResources
    Properties:
      Size: 100
      AvailabilityZone: !GetAtt
        - EC2Instance
        - AvailabilityZone
Follow the sample condition flow above to decide whether to create a resource or not.
See this for more details
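Applied to the bucket case from the original question, a minimal hedged sketch of the same pattern might look like the following (the parameter, condition, and bucket names are illustrative, not taken from the question):
Parameters:
  CreateBucketParam:              # hypothetical parameter: 'true' to create the bucket
    Type: String
    Default: 'true'
    AllowedValues:
      - 'true'
      - 'false'
Conditions:
  ShouldCreateBucket: !Equals
    - !Ref CreateBucketParam
    - 'true'
Resources:
  MyBucket:
    Type: 'AWS::S3::Bucket'
    Condition: ShouldCreateBucket   # bucket is only created when the condition is true
    Properties:
      BucketName: my-globally-unique-bucket-name   # placeholder name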

When deploying, the BucketName must be globally unique across all AWS accounts and regions. So if anyone has already created a bucket named "local-bucket-dev", it will throw
An error occurred: AttachmentsBucket - local-bucket-dev already
exists.
Try adjusting the BucketName so that it is unique.
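One hedged way to make collisions less likely in serverless.yml is to build the name from variables that differ per deployment, for example the stage and region (the key name below is illustrative):
custom:
  # Hypothetical: fold stage and region into the bucket name so each
  # deployment environment gets its own, more distinctive name.
  attachmentsBucketName: local-bucket-${self:provider.stage}-${self:provider.region}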
I hope that helps.

Related

S3 bucket already exists error - even after manually deleting it from AWS

I am totally new to Serverless and AWS. I am trying to create an S3 bucket using Serverless.
My serverless.yml file looks like this:
service: s3-file-uploader
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-2
custom:
  fileUploadBucketName: ${self:service}-bucket-${self:provider.stage}
resources:
  Resources:
    FileBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.fileUploadBucketName}
        AccessControl: PublicRead
The first time I ran serverless deploy it worked, but afterwards I deleted the bucket manually from AWS and tried to redeploy it. It shows me this error:
Serverless Error ----------------------------------------
An error occurred: FileBucket - s3-file-uploader-bucket-dev already exists.
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: win32
Node Version: 14.18.0
Framework Version: 2.72.1
Plugin Version: 5.5.4
SDK Version: 4.3.0
Components Version: 3.18.2
It says a bucket with the name s3-file-uploader-bucket-dev already exists, but there is no bucket with this name inside AWS S3.
Even though it gives this error, it also creates a bucket with the name s3-file-uploader-dev-serverlessdeploymentbucket-1aucnojnjl618, but this is not the name I gave in the serverless.yml file; it should be s3-file-uploader-bucket-dev. In CloudFormation there is a stack created with the name s3-file-uploader-dev and its status is UPDATE_ROLLBACK_COMPLETE.
Why is it showing the above-mentioned error and at the same time creating a bucket with a different name? It's confusing that it gives an error and creates a bucket.
It can take several hours for a bucket name to become available again.
So, either choose a different bucket name or wait a little longer until it becomes available again.
Bucket names are globally unique. You can read about it here and about deleting a bucket here. If the name has not been taken already by someone else, you need to wait for a "while" to reuse the same name. The time taken is unknown (afaik).
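If waiting is not an option, one hedged alternative (a sketch, and only suitable if the bucket name does not have to be predictable) is to omit BucketName entirely so CloudFormation generates a unique name from the stack name, logical ID, and a random suffix:
resources:
  Resources:
    FileBucket:
      Type: AWS::S3::Bucket
      Properties:
        # BucketName omitted: CloudFormation generates a unique name,
        # so redeployments never collide with a recently deleted bucket.
        AccessControl: PublicRead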

Can AWS CloudFormation resources call !GetAtt on themselves?

I am trying to set up the Inventory configuration for an S3 bucket with CloudFormation. I want to get daily inventories of data in one subfolder, and have the inventories written to a different subfolder in the same bucket. I have defined the bucket as follows:
S3Bucket:
  Type: AWS::S3::Bucket
  Properties:
    # ...other properties...
    InventoryConfigurations:
      - Id: runs
        Enabled: true
        Destination:
          BucketAccountId: !Ref AWS::AccountId
          BucketArn: !GetAtt S3Bucket.Arn
          Format: CSV
          Prefix: inventory/runs/
        IncludedObjectVersions: Current
        OptionalFields: [ETag, Size, BucketKeyStatus]
        Prefix: runs/
        ScheduleFrequency: Daily
Unfortunately, the !GetAtt S3Bucket.Arn line seems to be failing, causing an error message like "Error: Failed to create changeset for the stack: , ex: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state: For expression "Status" we matched expected path: "FAILED" Status: FAILED. Reason: Circular dependency between resource". If I use the actual ARN of the bucket in place of !GetAtt S3Bucket.Arn (it already exists from a previous version of the stack), then the deploy succeeds, so I know buckets can write Inventories to themselves.
So I guess my question is, is there a way to let Cfn resources call !GetAtt on themselves, so I don't have to hard-code the bucket ARN in InventoryConfigurations? Thanks in advance!
Can AWS CloudFormation resources call !GetAtt on themselves?
Unfortunately no, as the !GetAtt is used to reference other resources in the stack as you've experienced (other as in concrete resources that have already been created).
However, in your case, considering you know the bucket name, you could just construct the bucket ARN yourself directly.
Format:
arn:aws:s3:::bucket_name
e.g. if the name is test, you can use arn:aws:s3:::test
Destination:
  BucketAccountId: !Ref AWS::AccountId
  BucketArn: 'arn:aws:s3:::test'
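If hard-coding the full string feels brittle, a hedged variation (assuming the bucket name is available as a template parameter; BucketNameParam is a hypothetical name, not from the question) is to assemble the ARN with !Sub:
Destination:
  BucketAccountId: !Ref AWS::AccountId
  # BucketNameParam is a hypothetical Parameters entry holding the bucket name
  BucketArn: !Sub 'arn:aws:s3:::${BucketNameParam}'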

Unable to add lifecycle policy to s3 bucket using serverless

I want to add a lifecycle policy to my existing S3 bucket (using Serverless) which deletes all the folders inside my S3 bucket. I have written the code in the serverless.yml. When I try to deploy my code I am getting:
Additional stack resources updated failed (UPDATE_ROLLBACK_COMPLETE).
So I checked the CloudFormation stacks, and I am getting a message that my bucket already exists:
my_bucket_name already exists
Resource update cancelled
The following resource(s) failed to create: [my_bucket_name]
I am not sure why I am getting this; my S3 bucket code looks like this:
custom:
  additionalStacks:
    ressources:
      Resources:
        MyS3TBucket:
          Type: AWS::S3::Bucket
          Properties:
            BucketName: my_bucket
            LifecycleConfiguration:
              Rules:
                - Status: Enabled
                  ExpirationInDays: 30
This is not my entire S3 code but the small part of it that is relevant to this post. Before adding the lifecycle configuration everything was working fine. Any help would be appreciated, thank you.
As the error suggests:
my_bucket_name already exists
The bucket that you want to create already exists. If it's yours, you have to delete it before you can re-create it. If not, remember that bucket names must be globally unique. This means that maybe some other AWS user has already created a bucket with the same name as yours. In this case you must ensure that the bucket name is absolutely unique, which is often done by adding some random postfix, e.g.:
MyS3TBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: my_bucket-489d939239dd3
    LifecycleConfiguration:
      Rules:
        - Status: Enabled
          ExpirationInDays: 30

Serverless AWS redeployment error: S3 bucket already exists error, only for certain environment?

I'm trying to deploy my Serverless project for several environments. I would like to run development, staging, and production environments. To make this work I'm using serverless-dotenv-plugin with NODE_ENV=development or NODE_ENV=acceptation (in this case).
Everything related to the plugin seems to work: when I deploy for development or acceptance it loads the correct .env file, and it also tries to create the related S3 buckets.
As you can see in the attached image there are two buckets per environment, which I want to link to a Route53 domain. The initial deployment created the correct buckets. When I deploy again there is no issue for development, but when I deploy for acceptance I get the error An error occurred: BucketGatsbySite - project-bucket-acc-www-gatsby already exists., so the build breaks.
Of course the bucket already exists, but because it was already created it shouldn't be re-created. This works for development but not for acceptance, and I have no clue why. In this AWS documentation I can't find anything related to this, although as you can see below I do have DeletionPolicy: Retain, which I think should mean a new bucket isn't created and the old one is retained.
So to summarise, I want to create each bucket only once, and after that retain the existing ones and not try to create new ones.
My config is as follows:
service: project
package:
  individually: true
provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
  stage: ${env:STAGE}
  region: ${env:REGION}
  environment:
    REGION: ${env:REGION}
    STAGE: ${env:STAGE}
    NODE_ENV: ${env:NODE_ENV}
    CLIENT_ID: ${env:AWS_CLIENT_ID}
    TABLE: "project-db-${env:STAGE}"
    BUCKET: "project-bucket-${env:STAGE}"
    POOL: "project-userpool-${env:STAGE}"
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - dynamodb:*
          Resource:
            - !GetAtt projectTable.Arn
resources:
  Resources:
    BucketReactApp:
      Type: AWS::S3::Bucket
      DeletionPolicy: Retain
      Properties:
        AccessControl: PublicRead
        BucketName: "${self:provider.environment.BUCKET}-www-react"
    BucketGatsbySite:
      Type: AWS::S3::Bucket
      DeletionPolicy: Retain
      Properties:
        AccessControl: PublicRead
        BucketName: "${self:provider.environment.BUCKET}-www-gatsby"
Every suggestion would be really appreciated, since I'm kinda stuck on this.
Some changes in CloudFormation (CFN) require replacement of the resource. This is documented on the "AWS::S3::Bucket" page as the Update requires property of each statement.
And here is the list of all "Update behaviors of stack resources"; Replacement means that the bucket will be recreated.
But it's still strange, because the only two properties that require replacement on update are:
BucketName
ObjectLockEnabled
So maybe some intermediate operation on the CFN stack requires recreation of the S3 bucket.
Maybe you should be looking at the UpdateReplacePolicy attribute:
BucketGatsbySite:
  Type: AWS::S3::Bucket
  ...
  UpdateReplacePolicy: Retain   # valid values: Delete, Retain, Snapshot
See the UpdateReplacePolicy documentation for more details.

Update existing Log Group using CloudFormation

I have a lambda which has a log group, say LG-1, for which retention is set to Never Expire (the default). I need to change this Never Expire to 1 month. I am doing this using CloudFormation. As the log group already exists, when I try to deploy my lambda again with the changes in the template as:
LambdaFunctionLogGroup:
  Type: 'AWS::Logs::LogGroup'
  DependsOn: MyLambda
  Properties:
    RetentionInDays: 30
    LogGroupName: !Join
      - ''
      - - /aws/lambda/
        - !Ref MyLambda
the update is failing with the error:
[LogGroup Name] already exists.
One possible solution is to delete the log group and then create it again with the new changes as shown above, which works perfectly well.
But I need to do it without deleting the log group as it will result in the deletion of all the previous logs that I have.
Is there any workaround which is possible ?
#ttulka answered:
".. it is impossible to manipulate resources from CF which already exist out of the stack."
But actually the problem is more general than that and applies to resources created inside the stack. It has to do with the AWS CloudFormation resource "Replacement policy". For some resources, the way CloudFormation "updates" the resource is to create a new resource, then delete the old resource (this is called the "Replacement" update policy). This means there is a period of time where you've got two resources of the same type with many of the same properties existing at the same time. But if a certain resource property has to be unique, the two resources can't exist at the same time with the same value for this property, so ... CloudFormation blows up.
AWS::Logs::LogGroup.LogGroupName property is one such property. AWS::CloudWatch::Alarm.AlarmName is another example.
A workaround is to unset the name so that a random name is used, perform an update, then set the name back to its predictable fixed value and update again.
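A hedged sketch of that two-step workaround, using the log group from the question (the second step is shown in comments since both states cannot appear in one template at once):
# Deploy 1: leave the name out so CloudFormation generates one and the
# replacement does not collide with the existing fixed-name log group.
LambdaFunctionLogGroup:
  Type: 'AWS::Logs::LogGroup'
  DependsOn: MyLambda
  Properties:
    RetentionInDays: 30
    # LogGroupName intentionally omitted for this update

# Deploy 2: once the old group has been replaced, restore the fixed name:
#   LogGroupName: !Join ['', [/aws/lambda/, !Ref MyLambda]]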
Rant: It's an annoying problem that really shouldn't exist. I.e. AWS CF should be smart enough to not have to use this weird clunky resource replacement implementation. But ... that's AWS CF for you ...
I think it is impossible to manipulate resources from CF which already exist out of the stack.
One workaround would be to change the name of the Lambda to something like my-lambda-v2 to keep the old log group together with the new one.
After one month you can delete the old one.
Use a custom-resource-backed Lambda within your CloudFormation template. The custom resource would be triggered automatically the first time and update the retention policy of the existing log group. If you need your custom resource Lambda to be triggered every time, then use a templating engine like Jinja2.
import boto3

# Inside the custom resource handler: update the retention of the existing log group
client = boto3.client('logs')
response = client.put_retention_policy(
    logGroupName='string',   # name of the existing log group
    retentionInDays=123      # desired retention, e.g. 30 for one month
)
You can basically make your CF template do (almost) anything you want using Custom Resource
More information (Boto3, you can find corresponding SDK for the language you use) - https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/logs.html#CloudWatchLogs.Client.put_retention_policy
EDIT: Within the CloudFormation Template, it would look something like the following:
LogRetentionSetFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: src
    Handler: set_retention_period.handler
    Role: !GetAtt LambdaRole.Arn
    DeploymentPreference:
      Type: AllAtOnce
PermissionForLogRetentionSetup:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:invokeFunction
    FunctionName:
      Fn::GetAtt: [ LogRetentionSetFunction, Arn ]
    Principal: lambda.amazonaws.com
InvokeLambdaFunctionToSetLogRetention:
  DependsOn: [PermissionForLogRetentionSetup]
  Type: Custom::SetLogRetention
  Properties:
    ServiceToken: !GetAtt LogRetentionSetFunction.Arn
    StackName: !Ref AWS::StackName
    AnyVariable: "Choose whatever you want to send"
    Tags:
      'owner': !Ref owner
      'task': !Ref task
The Lambda function would contain the code that sets the log retention, as per the code I already specified above.
For more information, please google "custom resource backed lambda". Also, to give you a head start I have added the link below:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html