AWS Serverless Application Publish via Visual Studio

Using .NET Core, Visual Studio 2017, and the AWS Toolkit for Visual Studio 2017, I created a basic Web API; the API works as designed.
However, when it comes to publishing/deploying it, the first deployment works perfectly when the stack doesn't exist and creates everything it's supposed to. When I make a change and need to re-deploy/publish, it comes back with the following error.
Error creating CloudFormation change set: Stack [TestStack] already exists and cannot be created again with the changeSet [Lambda-Tools-636366731897711782].
Just above the error message is this
Found existing stack: False
I'm wondering if there is something not quite right with how it detects whether the stack exists.
I'm just wondering if I'm missing something, or if this is actually by design, as to republish I have to log into the AWS Console, go into the CloudFormation section, and delete the existing stack.
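For what it's worth, that cleanup doesn't have to happen in the console; a minimal AWS CLI sketch, assuming the stack name from the error above and a configured CLI profile:
# Delete the stuck stack, then wait for the deletion to finish.
aws cloudformation delete-stack --stack-name TestStack
aws cloudformation wait stack-delete-complete --stack-name TestStack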
Publish Dialog
Project Structure

After a bit of digging and general trial and error, I believe this actually has to do with the permissions of the user performing the publish (the user in AWS).
I changed an inline policy to:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Action": [
                "cloudformation:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
Where cloudformation:* used to be several lines for individual permissions.
This now successfully publishes over an existing stack; however, Visual Studio doesn't like it and crashes (although the update does go through to AWS).

AWS's Serverless Application Model is … still very new, and in the absence of any documentation about which IAM permissions one needs to deploy an app with their CLI, I've worked out this policy, which seems to work and grants only the least permissions needed for the task.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "lambda:UpdateFunctionCode",
                "s3:PutObject",
                "cloudformation:DescribeStackEvents",
                "cloudformation:UpdateStack",
                "cloudformation:DescribeStackResource",
                "cloudformation:CreateChangeSet",
                "cloudformation:DescribeChangeSet",
                "cloudformation:ExecuteChangeSet",
                "cloudformation:GetTemplateSummary",
                "cloudformation:ListChangeSets",
                "cloudformation:DescribeStacks"
            ],
            "Resource": [
                "arn:aws:lambda:*:123456789012:function:*-SAM-*",
                "arn:aws:cloudformation:*:123456789012:stack/<STACK NAME OR GLOB>/*",
                "arn:aws:cloudformation:<AWS_REGION>:aws:transform/Serverless-2016-10-31",
                "arn:aws:s3:::aws-sam-cli-managed-default-samclisourcebucket-*/*"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": "arn:aws:s3:::aws-sam-cli-managed-default-samclisourcebucket-*"
        },
        {
            "Sid": "VisualEditor2",
            "Effect": "Allow",
            "Action": "cloudformation:ValidateTemplate",
            "Resource": "*"
        }
    ]
}
Note:
Replace <STACK NAME OR GLOB> with something that best suits your needs, like:
*  (if you don't care which CloudFormation Stack this grants access to)
*-SAM-*  (if you name your SAM CloudFormation apps with some consistency)
Replace <AWS_REGION> with the region you're operating in.
The arn:aws:s3:::aws-sam-cli-managed-default-samclisourcebucket-* resource matches the standard bucket naming the SAM CLI uses when it creates a bucket for deploying CloudFormation templates or change sets. You could alter this to be, explicitly, the name of the bucket SAM created for you.
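For context, a typical SAM CLI deployment that this policy is meant to cover looks roughly like the sketch below; the stack name is only an illustration that happens to match the *-SAM-* glob above:
# Build, then deploy while letting SAM resolve/create its managed source bucket.
sam build
sam deploy --stack-name my-app-SAM-dev --resolve-s3 --capabilities CAPABILITY_IAM --region us-east-1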

Related

AWS IAM - How to disable users from making changes via the console, but allow API changes via CLI

For Amazon Web Services IAM, is there a way I can create a role with policies that only allow read in the console, yet allow read/write using the API/CLI/Terraform?
The purpose is to force usage of infrastructure-as-code to avoid configuration drift.
Any insights or references to best practices are very welcome.
It's important to be clear that there is no fool-proof way to do this. No system can ever be sure how a request was made on the client side.
That being said, there should be a way to achieve what you are looking for. You will want to use the IAM condition aws:UserAgent (docs here) to prevent users from using the browser. Here is an example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        },
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringLike": {
                    "aws:UserAgent": "console.amazonaws.com"
                }
            }
        }
    ]
}
CloudTrail logs the UserAgents for requests, so you could use that to figure out which UserAgents to block. (docs here)
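As a rough sketch of how you could pull those user agents out of CloudTrail with the CLI (assuming jq is installed; this is just one way to slice it):
# List recent events and count the distinct user agents seen.
aws cloudtrail lookup-events --max-items 50 \
  --query 'Events[].CloudTrailEvent' --output json \
  | jq -r '.[] | fromjson | .userAgent' | sort | uniq -c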

S3 Access column shows "Error" for all buckets

In my console, all buckets show "Error" in the Access column. Every operation results in an error, be it uploading, downloading, deleting, or modifying files. The only thing I can do is create a bucket. Afterwards, however, I can't do anything with it.
I always had access rights and was previously working with my current account. I even tried it with the root account, without any success. This seems to have happened miraculously overnight, as I wasn't working with S3 much during the past few days.
N.B. I don't use any other APIs besides the console.
In your IAM policy, you have to add the following permissions for the S3 console to list all your buckets properly (without error).
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Console_List_S3_Buckets",
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListAllMyBuckets",
                "s3:GetAccountPublicAccessBlock",
                "s3:GetBucketAcl",
                "s3:GetBucketPolicyStatus",
                "s3:GetBucketPublicAccessBlock"
            ],
            "Resource": "*"
        }
    ]
}
From the AWS documentation:
https://aws.amazon.com/premiumsupport/knowledge-center/s3-console-error-access-field/
s3:GetAccountPublicAccessBlock,
s3:GetBucketPublicAccessBlock,
s3:GetBucketPolicyStatus,
s3:GetBucketAcl,
s3:ListAccessPoints
...
And I added the following:
...
"s3:ListAllMyBuckets",
"s3:GetBucketLocation"
Now all of my buckets list with full "Access" information, including the Permissions -> Permissions overview (inside the bucket).
{
    "Sid": "AllowS3ListAllBuckets",
    "Effect": "Allow",
    "Action": [
        "s3:GetAccountPublicAccessBlock",
        "s3:GetBucketPublicAccessBlock",
        "s3:GetBucketPolicyStatus",
        "s3:GetBucketAcl",
        "s3:ListAccessPoints",
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation"
    ],
    "Resource": "*"
}
After hours of googling and trying things out, I finally found out what the origin of the problem was: The browser.
If you encounter this problem, try logging in via incognito mode. If the issue is now magically solved, then chances are some plugin caused the problem, mainly ad blockers. In my case it was "Avira Browserschutz", but I read that "uBlock Origin" may cause the same issue.
Were you editing your bucket/iam policy? If so it might be because of this issue: https://aws.amazon.com/premiumsupport/knowledge-center/s3-accidentally-denied-access/

AWS CodePipeline - Insufficient permissions Unable to access the artifact error

I'm trying to create and run an AWS CodePipeline that pulls from GitHub, builds, and deploys to an EC2 instance. The pipeline is as follows:
Source (Github) -> Build (AWS CodeBuild) -> Deploy (AWS CodeDeploy)
The source and build steps both succeed. However, deploy fails consistently giving the following error:
Insufficient permissions
Unable to access the artifact with Amazon S3 object key '[redacted]-2nd-test-pip/BuildArtif/IbiHzen' located in the Amazon S3 artifact bucket 'codepipeline-us-east-1-[redacted]'. The provided role does not have sufficient permissions.
Below is the IAM policy for the CodeBuild service role policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:logs:us-east-1:362490217134:log-group:/aws/codebuild/[Redacted]-Build-Project",
                "arn:aws:logs:us-east-1:362490217134:log-group:/aws/codebuild/[Redacted]-Build-Project:*"
            ],
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ]
        },
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::codepipeline-us-east-1-*"
            ],
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetObjectVersion"
            ]
        },
        {
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::codepipeline-us-east-1-[Redacted]/*"
            ],
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:GetObjectVersion"
            ]
        }
    ]
}
The CodePipeline service role created by the pipeline wizard has full S3 access assigned:
"Action": [
"elasticbeanstalk:*",
"ec2:*",
"elasticloadbalancing:*",
"autoscaling:*",
"cloudwatch:*",
"s3:*",
"sns:*",
"cloudformation:*",
"rds:*",
"sqs:*",
"ecs:*"
],
"Resource": "*",
"Effect": "Allow"
},
I have confirmed numerous times that the artifact referenced in the pipeline deploy step matches the artifact created by the build step.
If I go and look at the path referenced, there is no directory or zip file with that name (not sure which SHOULD be there, but neither is). Additionally, a zip file is generated during the build, but it is never named what the deploy step expects.
I've also gone into the build project and attempted builds using other artifact configurations, but they seem to be ignored when running the build through CodePipeline.
Disclaimer: I've seen similar questions posted here and elsewhere on the interwebs, but each of them deals with ECS or another situation that differs from mine. Thank you for your help.
The issue was unrelated to roles/policies. As mentioned, the expected zip file did not exist in the S3 bucket. This was due to an invalid artifact files path specified in the buildspec. Once corrected, the zip file is created and the deploy no longer fails with this error. It seems odd to me that CodePipeline would let the build report as completed successfully without validating that the files declared as the artifact and passed to the deploy step were, in fact, created.
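For reference, the relevant part of the buildspec is the artifacts section. The sketch below is purely illustrative (the build command and file names are made up); the point is that the paths under files must match what the build actually produces:
version: 0.2
phases:
  build:
    commands:
      - ./build.sh   # placeholder for the real build command
artifacts:
  files:
    - appspec.yml
    - 'scripts/**/*'
    - 'build_output/**/*'   # must point at files that exist after the build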

Correct Privileges in Amazon S3 Bucket Policy for AWS PHP SDK use

I have several S3 buckets belonging to different clients. I am using the AWS SDK for PHP in my application to upload photos to the S3 bucket. I am using the AWS SDK for Laravel 4, to be exact, but I don't think the issue is with this specific implementation.
The problem is that unless I give the AWS user my server is using full S3 access (FullS3Access), it will not upload photos to the bucket; it will say Access Denied! I first tried giving full access only to the bucket in question, then I realized I should add the ability to list all buckets, because that is probably what the SDK tries to do to confirm the credentials, but still no luck.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListAllMyBuckets"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::clientbucket"
            ]
        }
    ]
}
It is a big security concern for me that this application has access to all S3 buckets to work.
Jeremy is right, it's permissions-related and not specific to the SDK, so far as I can see here. You should certainly be able to scope your IAM policy down to just what you need here -- we limit access to buckets by varying degrees often, and it's just an issue of getting the policy right.
You may want to try using the AWS Policy Simulator from within your account. (That link will take you to an overview, the simulator itself is here.) The policy generator is also helpful a lot of the time.
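If you prefer the command line, the simulator is also exposed through the AWS CLI; a minimal sketch, where the user ARN is a placeholder and the bucket name is taken from the policy in the question:
# Ask IAM whether this user would be allowed to put an object into the bucket.
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::111122223333:user/app-uploader \
  --action-names s3:PutObject \
  --resource-arns arn:aws:s3:::clientbucket/photos/example.jpg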
As for the specific policy above, I think you can drop the second statement, and the last one (the one that is scoped to your specific bucket) may benefit from some wildcard actions, since that may be what's causing the issue:
"Action": [
"s3:Delete*",
"s3:Get*",
"s3:List*",
"s3:Put*"
]
That basically gives super powers to this account, but only for the one bucket.
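Put together, the merged statement might look roughly like the following; note that object-level actions (Get/Put/Delete on objects) only match the bucket/* resource form, which is a common cause of Access Denied when only the bare bucket ARN is listed:
{
    "Effect": "Allow",
    "Action": [
        "s3:Delete*",
        "s3:Get*",
        "s3:List*",
        "s3:Put*"
    ],
    "Resource": [
        "arn:aws:s3:::clientbucket",
        "arn:aws:s3:::clientbucket/*"
    ]
}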
I would also recommend creating an IAM server role if you're using a dedicated instance for this application/client. That will make things even easier in the future.

AWS: Cannot delete folders with current IAM policy

I am using the policy pasted below. This policy does almost everything I intend it to do. The user can get to the specified folder (/full/test/next/) within the specified bucket (BUCKETNAME). They can upload files, delete files, create new folders...etc.
However, they cannot delete folders created within this directory (i.e. cannot delete /full/test/next/examplefolder). I've been searching around and doing some modification but I have not found any answers. Any help would be much appreciated.
I apologize for any lack of clarity or incorrect terminology. I am new to AWS.
Two additional notes:
1. I can delete these folders from the main administrative account.
2. As the user, I do NOT have any rights within these folders (even if the user created the folders).
Pasted Code -
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUserToSeeBucketListInTheConsole",
            "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::*"]
        },
        {
            "Sid": "AllowRootAndHomeListingOfProperFolder",
            "Action": ["s3:ListBucket"],
            "Effect": "Allow",
            "Resource": ["arn:aws:s3:::BUCKETNAME"],
            "Condition": {
                "StringEquals": {
                    "s3:prefix": ["", "full/", "full/test/", "full/test/next/", "full/test/next/*"],
                    "s3:delimiter": ["/"]
                }
            }
        },
        {
            "Sid": "AllowAllS3ActionsInUserFolder",
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": ["arn:aws:s3:::BUCKETNAME/full/test/next/*"]
        }
    ]
}
OK, I can confirm that this is an issue with the browser. I had the exact same problem, and after a lot of head banging I figured out that it was a trivial issue: I changed my browser and it worked. Also, I was able to delete the folder using the AWS CLI as well as the AWS Ruby SDK.
So, there is nothing wrong in your policy.
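For what it's worth, deleting the "folder" (really just a key prefix) from the command line looks like the sketch below, using the paths from the question:
# Remove every object under the prefix; that is what "deleting the folder" means in S3.
aws s3 rm s3://BUCKETNAME/full/test/next/examplefolder/ --recursive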