I am running a website on an AWS S3 bucket. I have to update the website once in a while. At the moment, when I do a deployment I just copy the built files to my bucket and overwrite the existing ones.
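For context, the current deploy is essentially just something like this (bucket name and build directory are placeholders):
aws s3 sync ./build s3://my-website-bucket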
Is there a way to do some versioning on these deployments? I know there is built-in versioning in S3, but I think it only applies to individual files.
The best option would be for every deployment to be tagged with a git commit-id so that I could roll back to a particular commit-id if needed.
Any ideas? I already tried naming directories with a commit-id prefix, but the problem is that index.html has to live in the root directory.
If you want a solution for non-technical users who can roll back to the previous version just by clicking around in the AWS console, you can try changing the Index document config.
For example, say you have a structure in your bucket like this:
bucket/v1/index.html
bucket/v2/index.html
...
bucket/vN/index.html
That way, you only need to change the config under Bucket properties -> Static website hosting -> Index document from v2/index.html to v1/index.html.
That sounds like "just clicking around in the AWS console".
You can use AWS CodePipeline and use git reverts to manage rollbacks. See this GitHub repository for a CloudFormation stack that sets up a website on S3/CloudFront with something like this in place.
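With that setup, a rollback is roughly the following (the branch name is an assumption):
git revert <bad-commit-id>
git push origin master
CodePipeline then picks up the push and redeploys the site without the reverted changes.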
You can configure bucket versioning using any of the following methods:
Configure versioning using the Amazon S3 console.
Configure versioning programmatically using the AWS SDKs.
Both the console and the SDKs call the REST API Amazon S3 provides to manage versioning.
Note
If you need to, you can also make the Amazon S3 REST API calls directly from your code. However, this can be cumbersome because it requires you to write code to authenticate your requests.
Each bucket you create has a versioning subresource (see Bucket Configuration Options) associated with it. By default, your bucket is unversioned, and accordingly the versioning subresource stores an empty versioning configuration.
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
</VersioningConfiguration>
To enable versioning, you send a request to Amazon S3 with a versioning configuration that includes a status.
<VersioningConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Status>Enabled</Status>
</VersioningConfiguration>
To suspend versioning, you set the status value to Suspended.
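For example, the same configuration can be set with the AWS CLI (the bucket name is a placeholder):
aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled
aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Suspended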
More information here.
Note the architecture design below (taken from this deprecated AWS documentation page:
https://aws.amazon.com/blogs/compute/resize-images-on-the-fly-with-amazon-s3-aws-lambda-and-amazon-api-gateway/)
With each step described:
1. A user requests a resized asset from an S3 bucket through its static website hosting endpoint. The bucket has a routing rule configured to redirect to the resize API any request for an object that cannot be found.
2. Because the resized asset does not exist in the bucket, the request is temporarily redirected to the resize API method.
3. The user’s browser follows the redirect and requests the resize operation via API Gateway.
4. The API Gateway method is configured to trigger a Lambda function to serve the request.
5. The Lambda function downloads the original image from the S3 bucket, resizes it, and uploads the resized image back into the bucket as the originally requested key.
6. When the Lambda function completes, API Gateway permanently redirects the user to the file stored in S3.
7. The user’s browser requests the now-available resized image from the S3 bucket. Subsequent requests from this and other users will be served directly from S3 and bypass the resize operation. If the resized image is deleted in the future, the above process repeats and the resized image is re-created and replaced into the S3 bucket.
Steps 3-7 feel fairly straightforward... but how do you configure an S3 bucket with a routing rule that redirects when an object is missing?
Specifically, this needs to be done with the Serverless Framework.
In theory, an updated version of this concept is laid out in the CloudFormation template here: https://docs.aws.amazon.com/solutions/latest/serverless-image-handler/template.html but I'm not seeing any code in that template that configures an S3 bucket. Digging deeper into their GitHub repo, it seems they are deploying with the aws-sdk instead? https://github.com/aws-solutions/serverless-image-handler/blob/main/source/custom-resource/index.ts
It seems that you can configure a redirect on S3 itself. Here is a link that walks through the steps to do this.
To add redirection rules for a bucket that already has static website hosting enabled, follow these steps:
Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
In the Buckets list, choose the name of a bucket that you have configured as a static website.
Choose Properties.
Under Static website hosting, choose Edit.
In the Redirection rules box, enter your redirection rules in JSON.
In the S3 console you describe the rules using JSON. For JSON examples, see Redirection rules examples. Amazon S3 has a limitation of 50 routing rules per website configuration.
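To cover the "missing object" case from the question, a rule keyed on the 404 error code is what triggers the redirect. A rough sketch of what could go into that box, assuming an API Gateway endpoint; the host name, stage, and query prefix below are placeholders:
[
    {
        "Condition": {
            "HttpErrorCodeReturnedEquals": "404"
        },
        "Redirect": {
            "Protocol": "https",
            "HostName": "abcd1234.execute-api.us-east-1.amazonaws.com",
            "HttpRedirectCode": "307",
            "ReplaceKeyPrefixWith": "prod/resize?key="
        }
    }
]
With this in place, a request for a missing key returns a temporary (307) redirect to the resize endpoint, which matches step 2 of the architecture above.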
I'd recommend layering your automation by separating provisioning from application automation, using Terraform and/or CloudFormation for this.
I got into a situation working for a client that, as a security measure, does not provide an AWS_ACCESS_KEY_ID in any way. For development we only have the AWS web console available, so I started searching for another way to script (and speed up) my dev tasks programmatically.
Note: we cannot use the AWS CLI without an AWS_ACCESS_KEY_ID and secret.
My assumption: if the AWS web console can do the same things as the AWS CLI (e.g. create a bucket, load data into a bucket, etc.), why not take the web console auth mechanism (visible in the HTTP request headers) and bind it to the AWS CLI (or some other API-calling code) to make it work even without AWS keys?
Question: is this possible? I can certainly see the following artifacts in the HTTP headers:
aws-session-token
aws-session-id
awsccc
and dozens of others...
My idea is to automate this by:
Going to the web console, logging in, and having a script that automatically dumps the required parameters from the browser session to some text file
Using this extracted information from a dev script
If this is not supported or is impossible to achieve with the AWS CLI, can I use an SDK or raw AWS API calls with the extracted information?
I can extract the SAML content, which contains the above-mentioned AWS credential headers. I also see an OAuth client call with the following parameters:
https://signin.aws.amazon.com/oauth?
client_id=arn%3Aaws%3Asignin%3A%3A%3Aconsole%2Fcanvas&
code_challenge=bJNNw87gBewdsKnMCZU1OIKHB733RmD3p8cuhFoz2aw&
code_challenge_method=SHA-256&
response_type=code&
redirect_uri=https%3A%2F%2Fconsole.aws.amazon.com%2Fconsole%2Fhome%3Ffromtb%3Dtrue%26isauthcode%3Dtrue%26state%3DhashArgsFromTB_us-east-1_c63b804c7d804573&
X-Amz-Security-Token=hidden content&
X-Amz-Date=20211223T054052Z&
X-Amz-Algorithm=AWS4-HMAC-SHA256&
X-Amz-Credential=ASIAVHC3TML26B76NPS4%2F20211223%2Fus-east-1%2Fsignin%2Faws4_request&
X-Amz-SignedHeaders=host&
X-Amz-Signature=3142997fe7212f041ef90c1a87288f53cecca9236098653904bab36b17fa53ef
Can I use it with the AWS SDK somehow?
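What I'm hoping for is something along these lines, since the AWS CLI and SDKs accept temporary credentials through environment variables (the values below are placeholders I would extract from the browser session):
export AWS_ACCESS_KEY_ID=<extracted access key id>
export AWS_SECRET_ACCESS_KEY=<extracted secret key>
export AWS_SESSION_TOKEN=<extracted session token>
aws s3 ls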
To reset an S3 bucket to a known state, I would suggest looking at the AWS CLI s3 sync command and the --delete switch. Create a "template" bucket with your default contents, then sync that bucket into your dev bucket to reset it.
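For example (the bucket names are placeholders):
aws s3 sync s3://my-template-bucket s3://my-dev-bucket --delete
The --delete switch removes any objects in the dev bucket that are not present in the template bucket.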
As for your key problems, I would look at IAM roles rather than trying to hack the console auth.
As for how to run the AWS CLI, you have several options: it can be done from Lambda, ECS (containers running on your own EC2), or an EC2 instance. All three allow you to attach an IAM role. That role can have policies attached (for your S3 bucket), but there is no key to manage.
Thanks for the feedback, @MisterSmith! It helped with the follow-up.
While analyzing Chrome traffic from the login page to the AWS console I also found a SAML call, which led me to this project: https://github.com/Versent/saml2aws#linux
It extracted all the ~/.aws/credentials variables needed for the AWS CLI to work.
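For reference, what ends up in ~/.aws/credentials is a profile with temporary keys, roughly like this (the profile name and values are placeholders):
[saml]
aws_access_key_id     = <temporary access key id>
aws_secret_access_key = <temporary secret key>
aws_session_token     = <temporary session token>
After that, the CLI works with aws s3 ls --profile saml until the session expires.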
I've been using AWS CodeDeploy with GitHub as the revision source. I have a couple of configuration files that contain credentials (e.g. New Relic and other third-party license keys) which I do not want to add to my GitHub repository. But I need them on the EC2 instances.
What is a standard way of managing these configurations? Or, what tools do you use for the same purpose?
First, use IAM roles. That removes 90% of your credentials. Once you've done that, you can store (encrypted!) credentials in an S3 bucket and carefully control access. Here's a good primer from AWS:
https://blogs.aws.amazon.com/security/post/Tx1XG3FX6VMU6O5/A-safer-way-to-distribute-AWS-credentials-to-EC2
The previous answers are useful for managing AWS roles/credentials specifically. However, your question is more about general non-AWS credentials, and how to manage them securely using AWS.
What works well for us is to secure the credentials in a properties file in an S3 bucket. Using the same technique as suggested by tedder42 in A safer way to distribute AWS credentials to EC2, you can upload your credentials in a properties file to a highly secured S3 bucket, available only to your instance, which has been configured with the appropriate IAM role.
Then using CodeDeploy, you can add a BeforeInstall lifecycle hook to download the credential files to a local directory via the AWS CLI. For example:
aws s3 cp s3://credentials-example-com/credentials.properties c:\credentials
Then when the application starts, it can read those credentials from the local file.
Launch your EC2 instances with an instance profile and then give the associated role access to all the things your service needs. That's what the CodeDeploy agent uses to make its calls, but it's really there for any service you run to use.
http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
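A rough CLI sketch of that setup (the role, profile, and policy names are placeholders, and ec2-trust-policy.json would be a trust policy allowing ec2.amazonaws.com to assume the role):
aws iam create-role --role-name my-service-role --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name my-service-role --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam create-instance-profile --instance-profile-name my-service-profile
aws iam add-role-to-instance-profile --instance-profile-name my-service-profile --role-name my-service-role
Instances launched with --iam-instance-profile Name=my-service-profile then get credentials for that role automatically, with no keys to distribute.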
Is there a way to list the Amazon Resource Name (ARN) of an S3 Bucket from the web GUI?
I know I can piece it together myself, but that just seems unnecessary. Ideally, I could go to the S3 instance page and copy and paste the ARN. I've looked in the properties page of the bucket, but I'm not seeing anything that looks useful there.
No, the current S3 console does not expose bucket ARNs. You could probably add a feature to the S3 console page to yield ARNs with a simple GreaseMonkey script.
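If you do end up piecing it together yourself, a bucket ARN is simply arn:aws:s3:::<bucket-name>, so something like this prints one for every bucket in the account:
for b in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do echo "arn:aws:s3:::$b"; done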
I have to upload some static HTML and CSS files to Amazon S3, and have been given an Access Key ID as well as a Secret Access Key.
I've signed up for AWS; how do I upload stuff?
If you are comfortable using the command line, the most versatile (and enabling) approach for interacting with (almost) all things AWS is the excellent AWS Command Line Interface (AWS CLI). It covers most services' APIs, and it also features higher-level S3 commands that make dealing with your use case considerably easier; see the AWS CLI reference for S3 (the lower-level commands are in s3api). Specifically, you are likely interested in:
cp - Copies a local file or S3 object to another location locally or in S3
sync - Syncs directories and S3 prefixes.
I use the latter to deploy static websites hosted on S3 by simply syncing what's changed, which is convenient and fast. Your use case is covered by the first of several examples (more fine-grained usage with --exclude, --include, prefix handling, etc. is available):
The following sync command syncs objects under a specified prefix and
bucket to files in a local directory by uploading the local files to
s3. [...]
aws s3 sync . s3://mybucket
While the AWS CLI supports the regular AWS Credentials handling via environment variables, you can also configure Multiple Configuration Profiles for yourself and other AWS accounts and switch as needed:
The AWS CLI supports switching between multiple profiles stored within the configuration file. [...] Each profile uses different credentials—perhaps from two different IAM users—and also specifies a different region. The first profile, default, specifies the region us-east-1. The second profile, test-user, specifies us-west-2. Note that, for profiles other than default, you must prefix the profile name with the string, profile.
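A minimal sketch of that flow (directory and bucket names are placeholders):
aws configure
aws s3 sync ./site s3://mybucket
Or with a named profile:
aws configure --profile test-user
aws s3 sync ./site s3://mybucket --profile test-user
aws configure prompts for the Access Key ID, Secret Access Key, default region, and output format you were given.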
Assuming you want to upload to S3 storage, there are some good free apps out there. If you google "CloudBerry Labs", they have a free "S3 Explorer" application that lets you drag and drop your files to your S3 storage. When you first install and launch the app, there will be a place to configure your connection. That's where you'll put in your Access Key and Secret Key.
Apart from the AWS-CLI, there are a number of 'S3 browsers'. These act very much like FTP clients, showing folder structure and files on the remote store, and allow you to interact much like FTP by uploading and downloading.
This isn't the right forum for recommendations, but if you search the usual places for well-received S3 browsers you'll find plenty of options.
To upload a handful of files to S3 (the cloud storage and content distribution system), you can log in and use the S3 section of the AWS console.
https://console.aws.amazon.com/console/home?#
There's also a ton of documentation from AWS about the various APIs.