I'm trying to get automatic Let's Encrypt certificate renewal working on an AWS Lightsail WordPress instance with Route 53.
I used the official instructions for adding an SSL certificate to an AWS Lightsail WordPress website.
The site's SSL is working fine, but I was looking for a way to automate the re-issue and found the certbot plugin certbot-dns-route53.
I created a separate non-admin AWS user just for the updates and added the policy suggested by the official certbot docs:
{
  "Version": "2012-10-17",
  "Id": "certbot-dns-route53 sample policy",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:GetChange"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/MYZONEID"
      ]
    }
  ]
}
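To sanity-check the keys independently of certbot (assuming they live in the default profile of whichever user runs the command), listing the zones should succeed once the policy is in place:
aws route53 list-hosted-zones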
I placed the API access information in both an environment variable and the ~/.aws/config file.
I executed this command:
sudo certbot certonly --dns-route53 --dns-route53-propagation-seconds 30 --dry-run -d 'domain.com,*.domain.com'
And I get the following error:
An error occurred (AccessDenied) when calling the ListHostedZones operation: User: arn:aws:sts::548507530525:assumed-role/AmazonLightsailInstanceRole/i-00ff79ff762ac0576 is not authorized to perform: route53:ListHostedZones To use certbot-dns-route53, configure credentials as described at https://boto3.readthedocs.io/en/latest/guide/configuration.html#best-practices-for-configuring-credentials and add the necessary permissions for Route53 access
I tried a ~/.aws/config & credentials file as well.
config:
[profile cross-account]
role_arn=arn:aws:iam::XXXXXXXXXXX:user/domain_cert_update
source_profile=default
credentials:
[default]
aws_access_key_id=ACCESSKEY
aws_secret_access_key=SECRETKEYHERE
I'm not sure how to get the Lightsail instance /i-00ff79ff762ac0576 associated with the policy correctly. I've read through the configuration guide the error links to, and it doesn't help.
Better to use https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-enabling-https-on-wordpress
If you use Certbot, you have to renew the certificate manually. But follow the tutorial above and you can set up HTTPS using the bncert tool.
I have set up HTTPS and HTTP-to-HTTPS redirection on two of my websites using the above documentation.
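For reference, on Bitnami-based Lightsail images the tool from that tutorial is typically launched like this (the path is an assumption and may vary by image version):
sudo /opt/bitnami/bncert-tool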
Thanks for the question, it pointed me in the right direction to get this working myself.
I created the AWS CLI config files in the home directory at /home/bitnami/.aws. For me the issue was that certbot wasn't actually finding these config files: because it runs under sudo, it was looking in root's home directory instead of the bitnami user's.
I added a link using the root user:
sudo -s
ln -s /home/bitnami/.aws/ ~/.aws
After that certbot could find the config files and the command was successful.
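An alternative to the symlink, if you'd rather not touch root's home directory, is to point boto3 at the files explicitly through its standard environment variables. A sketch using the paths from this answer:
sudo AWS_CONFIG_FILE=/home/bitnami/.aws/config \
     AWS_SHARED_CREDENTIALS_FILE=/home/bitnami/.aws/credentials \
     certbot certonly --dns-route53 --dns-route53-propagation-seconds 30 -d 'domain.com,*.domain.com'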
I'm currently writing a Terraform module for EC2 ASGs with ECS. Everything about starting the instances works, including IAM-dependent actions such as KMS-encrypted volumes. However, the ECS agent fails with:
Error getting ECS instance credentials from default chain: NoCredentialProviders: no valid providers in chain.
Unfortunately, most posts I find about this concern the CLI being run without credentials configured; here, of course, the instance profile should be used.
The posts I do find regarding that are about missing IAM policies for the instance role. However, I do have this trust relationship
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com",
          "ecs.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
(Added ECS because someone on SO had it in there, I don't think that's right. Also removed some conditions.)
It has these policies attached:
AmazonSSMManagedInstanceCore
AmazonEC2ContainerServiceforEC2Role
When I connect via SSH and try to run awscli commands, any command fails with the error
Unable to locate credentials. You can configure credentials by running "aws configure".
But with
curl http://169.254.169.254/latest/meta-data/iam/info
I see the correct instance profile ARN and with
curl http://169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance
I see temporary credentials. With these in the awscli configuration,
aws sts get-caller-identity
returns the correct results. There are no iptables rules or routes blocking access to the metadata service and I've deactivated IMDSv2 token requirement for the time being.
I'm using the latest stable version of ECS-optimized Amazon Linux 2.
What might be the issue here?
I'm trying to host a web application (HTML) using a serverless approach on AWS Amplify, connected to an AWS CodeCommit repository (where the HTML code's version history is maintained). Saving and deploying the app on Amplify fails in the 'Build' step with the following error:
2020-08-17T01:32:37.631Z [INFO]: Cloning into 'Test'...
2020-08-17T01:32:42.406Z [INFO]: fatal: unable to access 'https://git-codecommit.us-east-1.amazonaws.com/v1/repos/Test/': The requested URL returned error: 403
2020-08-17T01:32:42.409Z [ERROR]: !!! Unable to clone repository
Steps followed: https://aws.amazon.com/getting-started/hands-on/build-serverless-web-app-lambda-apigateway-s3-dynamodb-cognito/module-1/
Step 1 (Host a static website, in the link above) only works if I give the repo name as exactly 'wildrydes-site'. If I just change the name to something else, with all the same files, it doesn't work. Am I missing something here?
If you are getting a 403 error, check the policy associated with the service role in IAM. You need to specify the CodeCommit repository within the policy attached to the service role you specified in Amplify.
(Screenshots in the original answer: the Amplify app detail page and the service role policy.)
You need to set a service role for your app.
If you don't have a service role for Amplify backend deployments, you have to create one.
The Amplify Console requires permissions to deploy backend resources with your front end; you use a service role to grant them.
The following should be helpful:
Adding a service role to the Amplify Console when you connect an app
Create a role for an AWS service, selecting the use cases "Amplify" and then "Backend Deployment".
Go to the Amplify console, open App settings > General, and set this role as your app's service role.
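If you prefer the CLI, creating such a role can be sketched as below; the role name and file name are placeholders, and AdministratorAccess-Amplify is assumed to be the managed policy the console-created role uses:
cat > amplify-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "amplify.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role --role-name amplify-backend-role --assume-role-policy-document file://amplify-trust.json
aws iam attach-role-policy --role-name amplify-backend-role --policy-arn arn:aws:iam::aws:policy/AdministratorAccess-Amplify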
The Amplify app is missing permissions to your Git repository. Make sure you connect your AWS Amplify app to your repository in AWS CodeCommit.
Please check the auto-generated policy "AWSAmplifyExecutionPolicy" created by AWS Amplify in the IAM console. The auto-generated AWSAmplifyExecutionPolicy specifies permission to access your repository in CodeCommit. The Resource in the CodeCommit statement should contain the ARN of your repository.
Add an inline policy granting access to CodeCommit so the repository can be cloned, and check the build logs for any further errors.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "codecommit:*",
      "Resource": "*"
    }
  ]
}
Check your role's policy JSON and verify that it grants access to your repository's ARN. The repo ARN must appear in the Resource list of the codecommit:GitPull statement; add it if it's missing:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Resource": [
        "arn:aws:logs:ap-south-1:<accountid>:log-group:/aws/amplify/xxxxxx",
        "arn:aws:logs:ap-south-1:<accountid>:log-group:/aws/amplify/xxxxxx:*"
      ],
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ]
    },
    {
      "Effect": "Allow",
      "Resource": [
        "arn:aws:codecommit:ap-south-1:<accountid>:<repo_name>"
      ],
      "Action": [
        "codecommit:GitPull"
      ]
    }
  ]
}
I encountered the same issue. As other answers mentioned, a service role is needed.
I want to give my detailed steps:
Go to the Amplify console.
Choose the application.
Click "General" under "App settings" in the left menu.
Click "Edit" at the top right.
Click "Create new role".
On the next page, some items will be chosen automatically, including "AWS production", "Amplify", and "Amplify - Backend Deployment".
Click Next through the remaining steps.
If this procedure fails, try to obtain more permissions or log in as an administrator.
I have a Tomcat instance that runs in Beanstalk, and in the Beanstalk configuration I pass in a config file as a parameter, like so:
-Dconfig.url=https://s3.amazonaws.com/my-bucket/my-file.txt
This file is in S3, but I have to set its permissions to 'Everyone': 'Open', which I don't like doing because it's unsafe, and I can't seem to find any other way of doing this. I've looked at the URL-signing method, and it isn't a good solution: both the file and the Beanstalk app are updated frequently, and I'd like to have all this automated, i.e., if the app breaks and restarts, it would not be able to read the file because the signing key would have expired.
I've looked at the docs regarding roles but cannot seem to get this working. I've added a custom policy to the aws-elasticbeanstalk-ec2-role (shown below) and it isn't doing anything: my app still cannot access files in the bucket. Could someone please tell me how / whether this can be fixed?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
Is there another way I can allow the Beanstalk application to read files in an S3 bucket? Any help is appreciated.
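For what it's worth, a quick way to check from the instance itself whether the role is being picked up at all (the bucket and file names here stand in for the ones from the question):
aws sts get-caller-identity
aws s3 cp s3://my-bucket/my-file.txt -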
I have files stored on S3 and wrote an .ebextensions config to automatically copy them to new instances. I'm receiving this error in the Elastic Beanstalk console:
[Instance: INSTANCEID Module: AWSEBAutoScalingGroup ConfigSet: null] Command failed on instance. Return code: 1 Output: [CMD-AppDeploy/AppDeployStage0/EbExtensionPreBuild] command failed with error code 1: Error occurred during build: Failed to retrieve https://s3-us-west-1.amazonaws.com/MyBucket/MyFolder/_MyFile.txt: HTTP Error 403 : AccessDenied
My .ebextensions config file has this section:
files:
  "/target/file/path":
    mode: "000777"
    owner: ec2-user
    group: ec2-user
    source: https://s3-us-west-1.amazonaws.com/_MyBucket_/_MyFolder_/_MyFile.txt
In attempting to make this file copying work, I've also relaxed permissions by giving the Elastic Beanstalk IAM role the standard read-only access policy to all of S3. Its policy is this:
{
  "Effect": "Allow",
  "Action": [
    "s3:Get*",
    "s3:List*"
  ],
  "Resource": "*"
}
Yet the prebuild copying step still fails. Did I give the source URL in the correct format? Is there another security entity/policy involved? Help please :)
The documentation is very sketchy on the subject (probably an ideal candidate for StackExchange Docs!).
To do this correctly with .ebextensions, you need to allow the Beanstalk instance's IAM role in the bucket policy, set up an AWS::CloudFormation::Authentication auth block, and attach that block to the remote sources. This is kind of a hybrid of all the other answers, which all failed for me in one way or another.
Assuming your IAM instance role is aws-elasticbeanstalk-ec2-role:
Set your S3 bucket to allow the Beanstalk IAM role. Edit the "bucket policy":
{
  "Version": "2012-10-17",
  "Id": "BeanstalkS3Copy",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "<beanstalk_iam_role_arn>"
      },
      "Action": [
        "s3:ListBucketVersions",
        "s3:ListBucket",
        "s3:GetObjectVersion",
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket_name>",
        "arn:aws:s3:::<bucket_name>/*"
      ]
    }
  ]
}
where:
beanstalk_iam_role_arn = the fully qualified instance IAM role ARN. See the "IAM role" associated with a running instance if available, or see the environment configuration. Example: arn:aws:iam::12345689:role/aws-elasticbeanstalk-ec2-role
bucket_name = your bucket name
In your .ebextensions/myconfig.config, add an S3 authentication block that uses your IAM instance role:
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Auth:
          type: "s3"
          buckets: ["bucket_name"]
          roleName:
            "Fn::GetOptionSetting":
              Namespace: "aws:asg:launchconfiguration"
              OptionName: "IamInstanceProfile"
              DefaultValue: "aws-elasticbeanstalk-ec2-role"
Set bucket_name appropriately
Define a remote file and attach the S3 Authentication block:
"/etc/myfile.txt" :
mode: "000400"
owner: root
group: root
authentication: "S3Auth" # Matches to auth block above.
source: https://s3-eu-west-1.amazonaws.com/mybucket/myfile.txt
Set your source URL appropriately
Similar to chaseadamsio's answer, you can configure the role given to the EC2 instance with a policy to access S3 resources, then use the pre-installed AWS CLI utilities to move files around.
The way I approached this is to create a role dedicated to the given EB application, then attach a policy similar to:
"Statement": [
{
"Sid": "<sid>",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::<your_bucket_path>/*"
]
}
]
This gives your instance access; then, to get the files, add a 'commands' block to your config such as:
commands:
  01-get-file:
    command: aws s3 cp s3://<your_bucket_path>/your-file.txt /home/ec2-user
  02-execute-actions:
    [unpack, run scripts, etc..]
Obviously you can use other AWS CLI utilities as needed. I found this solved a lot of problems I was having with S3 access and made deployment a lot easier.
I found a solution to overcome this error. It turns out adding a Resources section to the .ebextensions config file makes it work. The entire file becomes:
files:
  "/target/file/path":
    mode: "000777"
    owner: ec2-user
    group: ec2-user
    source: https://s3-us-west-1.amazonaws.com/_MyBucket_/_MyFolder_/_MyFile.txt
Resources:
  AWSEBAutoScalingGroup:
    Metadata:
      AWS::CloudFormation::Authentication:
        S3Access:
          type: S3
          roleName: aws-elasticbeanstalk-ec2-role
          buckets: _MyBucket
At this point, I don't know enough to grok why it has to be this way. Hopefully it can help someone who's stuck move forward and eventually gain a better understanding. I based my answer on this link: https://forums.aws.amazon.com/message.jspa?messageID=541634
An alternative to setting the .ebextensions config would be to set a policy on the aws-elasticbeanstalk-ec2-role within the IAM Manager (or to create a new role specifically for your Elastic Beanstalk environments, to sandbox your autoscaled EC2 instances).
To do so, go to the IAM manager within the web console and click on "Roles" on the left side. You should see your instance role name in the list; clicking on it will take you to the administration page for that particular role. Attach a new role policy to the role under "Permissions" with a policy document matching what you want your EC2 instances to have permission to do (in this case, you'd give it a policy to access an S3 bucket called _MyBucket_), and you should no longer need the Resources section in your .ebextensions config.
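The same attachment can be scripted; a sketch assuming the read-only policy JSON from the question is saved locally (the policy and file names are placeholders):
aws iam put-role-policy --role-name aws-elasticbeanstalk-ec2-role \
    --policy-name MyBucketRead \
    --policy-document file://s3-read-policy.json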
If you have the machine's IAM role configured with access to the file, you can do the following in .ebextensions:
commands:
  01a_copy_file:
    command: aws s3 cp s3://bucket/path/file /destination/
I'm trying to use a log rotation configuration for my nginx server that I'm using as a reverse proxy machine located on an EC2 Ubuntu instance.
I want to store those logs in an S3 bucket after each rotation, but I'm only getting "access denied, are you sure your keys have ListAllMyBuckets permissions" errors when I try to configure the s3cmd tools.
I'm pretty sure my credentials are correctly configured in IAM; I've tried at least five different credentials (even the root credentials) with the same result. Listing all of my buckets from my local computer with the AWS CLI tools works fine with the same credentials, so it puzzles me that I don't have any access only on my EC2 instance.
this is what I run:
which s3cmd
/usr/local/bin/s3cmd
s3cmd --configure --debug
Access Key: **************
Secret Key: *******************************
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
and this is the result
...
DEBUG: ConnMan.put(): connection put back to pool (http://s3.amazonaws.com#1)
DEBUG: S3Error: 403 (Forbidden)
DEBUG: HttpHeader: x-amz-id-2: nMI8DF+............
DEBUG: HttpHeader: server: AmazonS3
DEBUG: HttpHeader: transfer-encoding: chunked
DEBUG: HttpHeader: x-amz-request-id: 5912737605BB776C
DEBUG: HttpHeader: date: Wed, 23 Apr 2014 13:16:53 GMT
DEBUG: HttpHeader: content-type: application/xml
DEBUG: ErrorXML: Code: 'AccessDenied'
DEBUG: ErrorXML: Message: 'Access Denied'
DEBUG: ErrorXML: RequestId: '5912737605BB776C'
DEBUG: ErrorXML: HostId: 'nMI8DF+............
ERROR: Test failed: 403 (AccessDenied): Access Denied
ERROR: Are you sure your keys have ListAllMyBuckets permissions?
The only thing that is in front of my nginx server is a load balancer, but I can't see why it could interfere with my request.
Could it be something else that I've missed?
Please check the permissions of the IAM user whose keys you are using.
The steps would be:
In the AWS console, go to the IAM panel.
IAM user > select that user > in the bottom menu, the 2nd tab is Permissions.
Attach a user policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::YOU-Bucket-Name"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::YOU-Bucket-Name/*"
    }
  ]
}
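If you'd rather attach it from the command line, the equivalent is roughly the following (the user, policy, and file names are placeholders):
aws iam put-user-policy --user-name log-uploader \
    --policy-name s3cmd-log-upload \
    --policy-document file://s3cmd-policy.json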
Let me know how it goes
Please don't trust the --configure switch:
I was facing the same problem.
It showed 403 during --configure, but at the end I saved the settings anyway and then tried a real command:
ERROR: Test failed: 403 (AccessDenied): Access Denied
Retry configuration? [Y/n] n
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
# s3cmd put MyFile s3://MyBucket/
and it worked.
s3cmd creates a file called .s3cfg in your home directory when you set this up. I would make sure you put this file somewhere your logrotate script can read it, and use the -c flag.
For example to upload the logfile.txt file to the logbucket bucket:
/usr/local/bin/s3cmd -c /home/ubuntu/.s3cfg put logfile.txt s3://logbucket
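Tying this into logrotate itself might look roughly like the sketch below. The paths, bucket name, and rotation settings are assumptions; delaycompress is used so that the access.log.1 file uploaded in postrotate has not been compressed yet:
/var/log/nginx/access.log {
    daily
    rotate 14
    missingok
    compress
    delaycompress
    sharedscripts
    postrotate
        # Ask nginx to reopen its log files, then ship the rotated log.
        /usr/sbin/nginx -s reopen
        /usr/local/bin/s3cmd -c /home/ubuntu/.s3cfg put /var/log/nginx/access.log.1 s3://logbucket/
    endscript
}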
What version of s3cmd are you using?
I tried it using s3cmd 1.1, and it seems s3cmd 1.1 does not work with IAM roles.
But someone says s3cmd 1.5 alpha2 has support for IAM roles (http://t1983.file-systems-s3-s3tools.file-systemstalk.info/s3cmd-1-5-0-alpha2-iam-roles-supportincluded-t1983.html).
I have tried s3cmd 1.5 beta1 (https://github.com/s3tools/s3cmd/archive/v1.5.0-beta1.tar.gz), and it works fine with IAM roles.
So there are two ways for s3cmd to access an S3 bucket:
Using an access key and secret key.
You need to set up a config file at /root/.s3cfg (the default path) as below:
access_key=xxxxxxxx
secret_key=xxxxxxxxxxxxxxxxxxxx
Note that you only need to set these two key-value pairs in .s3cfg; no other keys are needed.
Using an IAM role with an S3 policy, with s3cmd > 1.5 alpha2.
You need to add an IAM role to the EC2 instance; the role may have a policy like the one below:
{
  "Effect": "Allow",
  "Action": [
    "s3:*"
  ],
  "Resource": "*"
}
I found a solution to my problem by deleting every installation of s3cmd, making sure apt-get was up to date, and installing s3cmd from apt-get again. After my configuration (the same as before) it worked out just fine!
I also had a similar problem. Even after associating my EC2 instance with an IAM role that had an S3 full-access policy, my s3cmd was failing because there wasn't any .s3cfg file. I fixed it by updating the version of my s3cmd:
sudo pip install s3cmd==1.6.1
Did the trick!