We've always used the following to assume a role for longer than an hour on a remote machine:
# Prep environment to use roles.
unset AWS_CONFIG_FILE
unset AWS_DEFAULT_REGION
unset AWS_DEFAULT_PROFILE
CONFIG_FILE=$(mktemp)
# Creates temp file with instance profile credentials as default
# AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, ROLE_ARN are available from the environment.
printf '[default]\naws_access_key_id=%s\naws_secret_access_key=%s\n[profile role_profile]\nrole_arn = %s\nsource_profile = default\n' "$AWS_ACCESS_KEY_ID" "$AWS_SECRET_ACCESS_KEY" "$ROLE_ARN" > "$CONFIG_FILE"
# make sure instance profile takes precedence
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN
export AWS_CONFIG_FILE=$CONFIG_FILE
export AWS_DEFAULT_REGION=us-east-1
export AWS_DEFAULT_PROFILE=role_profile
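As an aside, the config-file generation above can also be written with a heredoc, which avoids printf format-string surprises if a credential ever contains a % sign. The credential values and role ARN below are placeholders, not real values:

```shell
# Stand-in values (assumptions) so the sketch is self-contained.
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/EXAMPLEKEY
ROLE_ARN=arn:aws:iam::123456789012:role/example-role

CONFIG_FILE=$(mktemp)
# Unquoted EOF delimiter so the $VARS expand, exactly like the printf version.
cat > "$CONFIG_FILE" <<EOF
[default]
aws_access_key_id=$AWS_ACCESS_KEY_ID
aws_secret_access_key=$AWS_SECRET_ACCESS_KEY
[profile role_profile]
role_arn = $ROLE_ARN
source_profile = default
EOF
```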
Unfortunately, this method recently started to fail. We can reproduce the failure just by running:
aws sts get-caller-identity
Adding the --debug flag to the last command:
09:11:47 2018-06-21 14:11:47,731 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/1.15.40 Python/2.7.12 Linux/4.9.76-3.78.amzn1.x86_64 botocore/1.10.40
...
09:11:47 2018-06-21 14:11:47,811 - MainThread - botocore.hooks - DEBUG - Event choose-signer.sts.GetCallerIdentity: calling handler <function set_operation_specific_signer at 0x7f22d19a6ed8>
09:11:47 2018-06-21 14:11:47,812 - MainThread - botocore.credentials - WARNING - Refreshing temporary credentials failed during mandatory refresh period.
09:11:47 Traceback (most recent call last):
09:11:47 File "/var/lib/jenkins/.local/lib/python2.7/site-packages/botocore/credentials.py", line 432, in _protected_refresh
...
09:11:47 raise KeyError(cache_key)
09:11:47 KeyError: 'xxxx' (redacted)
09:11:47 2018-06-21 14:11:47,814 - MainThread - awscli.clidriver - DEBUG - Exiting with rc 255
Apparently a key is missing from botocore's credential cache.
The obvious workaround is to find and remove the cache:
rm ~/.aws/cli/cache/*
This doesn't explain how it started happening, though (or whether it will happen again). Can anyone explain what happened?
Most likely, the permissions inside ~/.aws/cli are wrong.
Check the permissions:
ls -la ~/.aws/cli
ls -la ~/.aws/cli/cache
If the files have the wrong permissions or ownership, correct them and the aws cli commands will work again.
The files inside ~/.aws/cli/cache need permissions -rw------- (600).
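A sketch of resetting those permissions, assuming the cache directory lives under your home directory as usual:

```shell
# Restore owner-only permissions on the CLI credential cache.
mkdir -p ~/.aws/cli/cache            # no-op if it already exists
chmod 700 ~/.aws/cli/cache
# -rw------- (600) on each cached credential file; no-op when the dir is empty.
find ~/.aws/cli/cache -type f -exec chmod 600 {} +
```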
Hope it helps.
I have an issue while trying to publish a Java library (jar) to an AWS CodeArtifact Maven repository: I get an HTTP 401 (Unauthorized) when I try to publish. That would suggest I'm doing something wrong, like a missing CODEARTIFACT_AUTH_TOKEN environment variable or using the wrong AWS credentials/profile. But AWS CodeArtifact is very straightforward; we just need to:
generate a new CODEARTIFACT_AUTH_TOKEN and set it as an Environment Variable,
update our local Maven .m2/settings.xml to point to the AWS CodeArtifact server using username=aws and password=${env.CODEARTIFACT_AUTH_TOKEN}
make sure that we generate that token from an account which has access to the AWS CodeArtifact Domain and Maven repo (it would error out if we didn't have access anyway).
...Super simple. Yet I get 401 Unauthorized when I try to "mvn deploy-file" with my setup... See my full setup below:
I set up an AWS CodeArtifact domain and Maven repository through a CloudFormation template (ignore the NPM and upstream repos if you want):
AWSTemplateFormatVersion: "2010-09-09"
Description: CodeArtifact Domain, Maven repo, NPM repo, and upstream repos
Resources:
  CodeArtifactDomain:
    Type: AWS::CodeArtifact::Domain
    Properties:
      DomainName: mydomain
      PermissionsPolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Action:
              - codeartifact:CreateRepository
              - codeartifact:DescribeDomain
              - codeartifact:GetAuthorizationToken
              - codeartifact:GetDomainPermissionsPolicy
              - codeartifact:ListRepositoriesInDomain
              - sts:GetServiceBearerToken
              - codeartifact:DescribePackageVersion
              - codeartifact:DescribeRepository
              - codeartifact:GetPackageVersionReadme
              - codeartifact:GetRepositoryEndpoint
              - codeartifact:ListPackageVersionAssets
              - codeartifact:ListPackageVersionDependencies
              - codeartifact:ListPackageVersions
              - codeartifact:ListPackages
              - codeartifact:ReadFromRepository
              - codeartifact:PublishPackageVersion
              - codeartifact:PutPackageMetadata
            Effect: Allow
            Principal:
              AWS:
                - "arn:aws:iam::123456788904:root"
                - "arn:aws:iam::123456789098:root"
                - "arn:aws:iam::123456789087:root"
            Resource: "*"
      Tags:
        - Key: Name
          Value: CodeArtifact Domain
  ArtifactUpstreamRepositoryMaven:
    Type: AWS::CodeArtifact::Repository
    Properties:
      RepositoryName: maven-upstream-repo
      DomainName: !GetAtt CodeArtifactDomain.Name
      ExternalConnections:
        - public:maven-central
  ArtifactRepositoryMaven:
    Type: AWS::CodeArtifact::Repository
    Properties:
      RepositoryName: maven-repo
      Description: Maven CodeArtifact Repository
      DomainName: !GetAtt CodeArtifactDomain.Name
      Upstreams:
        - !GetAtt ArtifactUpstreamRepositoryMaven.Name
      Tags:
        - Key: Name
          Value: Maven CodeArtifact Repository
  ArtifactUpstreamRepositoryNPM:
    Type: AWS::CodeArtifact::Repository
    Properties:
      RepositoryName: npm-upstream-repo
      DomainName: !GetAtt CodeArtifactDomain.Name
      ExternalConnections:
        - public:npmjs
  ArtifactRepositoryNPM:
    Type: AWS::CodeArtifact::Repository
    Properties:
      RepositoryName: npm-repo
      Description: NPM CodeArtifact Repository
      DomainName: !GetAtt CodeArtifactDomain.Name
      Upstreams:
        - !GetAtt ArtifactUpstreamRepositoryNPM.Name
      Tags:
        - Key: Name
          Value: NPM CodeArtifact Repository
Outputs:
  CodeArtifactDomain:
    Description: The CodeArtifact Domain
    Value: !Ref CodeArtifactDomain
    Export:
      Name: CodeArtifactDomain
I ran the above CloudFormation template, confirmed that it completed successfully, then navigated to CodeArtifact to check that the domain and repositories were created (they were). I then looked up the connection instructions for my repository and pasted the first one:
export CODEARTIFACT_AUTH_TOKEN=`aws codeartifact get-authorization-token --domain mydomain --domain-owner <MY_ACCOUNT_NUMBER> --query authorizationToken --output text`
I then set up my Maven settings in ~/.m2/settings.xml with all the settings shown in the connection instructions (in the AWS Console) for my repository:
<?xml version="1.0" encoding="UTF-8"?>
<settings xmlns="http://maven.apache.org/SETTINGS/1.2.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.2.0 https://maven.apache.org/xsd/settings-1.2.0.xsd">
  <servers>
    <server>
      <id>mydomain-maven-repo</id>
      <username>aws</username>
      <password>${env.CODEARTIFACT_AUTH_TOKEN}</password>
    </server>
  </servers>
  <profiles>
    <profile>
      <id>mydomain-maven-repo</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      <repositories>
        <repository>
          <id>mydomain-maven-repo</id>
          <url>https://mydomain-<MY_ACCOUNT_NUMBER>.d.codeartifact.us-east-1.amazonaws.com/maven/maven-repo/</url>
        </repository>
      </repositories>
    </profile>
  </profiles>
</settings>
Finally, I try to mvn:deploy one of my libraries to the AWS CodeArtifact maven repo:
mvn deploy:deploy-file \
-DgroupId=com.myorg \
-DartifactId=my-client_2.12 \
-Dversion=1.0.1-play28 \
-Dfile=./my-client_2.12-1.0.1-play28.jar \
-Dsources=./my-client_2.12-1.0.1-play28-sources.jar \
-Djavadoc=./my-client_2.12-1.0.1-play28-javadoc.jar \
-Dpackaging=jar \
-DrepositoryId=maven-repo \
-Durl=https://mydomain-<MY_ACCOUNT_NUMBER>.d.codeartifact.us-east-1.amazonaws.com/maven/maven-repo/
And I get this error:
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------< org.apache.maven:standalone-pom >-------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] --------------------------------[ pom ]---------------------------------
[INFO]
[INFO] --- maven-deploy-plugin:2.7:deploy-file (default-cli) @ standalone-pom ---
Uploading to maven-repo: https://my-domain-<MY_ACCOUNT_NUMBER>.d.codeartifact.us-east-1.amazonaws.com/maven/maven-repo/.../my-client_2.12/1.0.1-play28/my-client_2.12-1.0.1-play28.jar
Uploading to maven-repo: https://my-domain-<MY_ACCOUNT_NUMBER>.d.codeartifact.us-east-1.amazonaws.com/maven/maven-repo/.../my-client_2.12/1.0.1-play28/my-client_2.12-1.0.1-play28.pom
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.319 s
[INFO] Finished at: 2021-09-27T15:10:56-04:00
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy-file (default-cli) on project standalone-pom: Failed to deploy artifacts: Could not transfer artifact my-client_2.12:jar:1.0.1-play28 from/to maven-repo (https://my-domain-<MY_ACCOUNT_NUMBER>.d.codeartifact.us-east-1.amazonaws.com/maven/maven-repo/): Transfer failed for https://my-domain-<MY_ACCOUNT_NUMBER>.d.codeartifact.us-east-1.amazonaws.com/maven/maven-repo/.../my-client_2.12/1.0.1-play28/my-client_2.12-1.0.1-play28.jar 401 Unauthorized -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
I can confirm that I'm using the correct credentials in my ~/.aws/credentials by running
aws sts get-caller-identity
I also confirmed that I:
have the latest mvn executable
set M2_HOME to point to my ~/.m2
obtained a recent token (less than 12 hours old)
I have no idea why I get 401 unauthorized when I mvn deploy-file... Any ideas?
Arg, found it. The issue is in one of the "mvn deploy:deploy-file" arguments:
The:
-DrepositoryId=maven-repo
... needs to match the server id in ~/.m2/settings.xml:
<id>mydomain-maven-repo</id>
If I change my mvn command to put:
-DrepositoryId=mydomain-maven-repo
... the 401 Unauthorized error goes away!!! Argg AWS: shouldn't this be a 404, 400, or something else? To me this isn't an authorization failure, it's an unknown repository; it's pushing the definition of 401... (Though, to be fair, a repositoryId that doesn't match any <server> id means Maven finds no credentials to attach and sends the request unauthenticated, so the server does see an unauthorized request.)
Anyway, dear Internet: if CodeArtifact ever returns 401 on you, be aware you might have misconfigured something. It might not be an authorization issue.
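One way to catch this mismatch early is a quick sanity check that the -DrepositoryId value appears as an <id> in settings.xml. The sample file and the check_repo_id helper below are illustrative sketches, not part of the original setup:

```shell
# Minimal stand-in for ~/.m2/settings.xml containing the server id from the post.
SETTINGS=$(mktemp)
cat > "$SETTINGS" <<'EOF'
<settings>
  <servers>
    <server>
      <id>mydomain-maven-repo</id>
    </server>
  </servers>
</settings>
EOF

# Hypothetical helper: does the given repositoryId match a server <id>?
check_repo_id() {
  if grep -q "<id>$1</id>" "$SETTINGS"; then
    echo "match"
  else
    echo "mismatch: expect 401 from CodeArtifact"
  fi
}

check_repo_id maven-repo            # the broken value from the question
check_repo_id mydomain-maven-repo   # the fixed value
```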
I have a Serverless Python app and I am trying to deploy it using sls deploy. The serverless.yml is as follows.
service: update-register
provider:
  name: aws
  runtime: python3.8
  profile: haumi
  stage: ${opt:stage, 'staging'}
  environment: ${file(environment.yml):${self:provider.stage}}
  region: ${self:provider.environment.REGION}
  iamRoleStatements:
    # to be able to read and write to the bucket
    - Effect: "Allow"
      Action:
        - "sqs:SendMessage"
      Resource: "*"
functions:
  update:
    handler: handler.update
    events:
      - sqs: ${self:provider.environment.AWS_SQS_QUEUE}
The handler file is like this:
def update(event, context):
    print("=== event: ", event)
However, when I try to deploy and trigger the update function, the following error appears in AWS CloudWatch:
[ERROR] PermissionError: [Errno 13] Permission denied: '/var/task/handler.py'
Traceback (most recent call last):
  File "/var/lang/lib/python3.8/imp.py", line 300, in find_module
    with open(file_path, 'rb') as file:
I tried changing the permissions of this file but I can't. Any ideas?
This issue had nothing to do with Serverless but with the permissions of my mounted NTFS partition in Ubuntu 18.04.
tl;dr
Change the /etc/fstab entry for the mounted partition to
UUID=8646486646485957 /home/Data ntfs defaults,auto,umask=002,uid=1000,gid=1000 0 0
Use id -u and id -g to get the uid and gid.
The long explanation
What I found out is that on an NTFS partition you cannot simply change a file's permissions with chmod. You have to configure the mask when mounting the partition. Since I mount the partition when booting Ubuntu, the required change was in my fstab file. The umask parameter determines which permission bits cannot be set. You can find more information about this parameter here.
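The umask arithmetic can be sanity-checked in any shell: the effective mode is the requested mode with the umask bits cleared.

```shell
# mode & ~umask: with umask=002, a file requested as 0777 ends up 0775
# (rwxrwxr-x) and one requested as 0666 ends up 0664 (rw-rw-r--).
printf '%o\n' $(( 0777 & ~0002 ))   # prints 775
printf '%o\n' $(( 0666 & ~0002 ))   # prints 664
```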
After you do this, reboot. You will find that the files have different permissions. In my case, the permissions that allowed my deployed code to work were
-rwxrwxr-x 1 user group 59 jul 24 00:47 handler.py*
I'm sure there are concerns with allowing everyone to execute the file, but this solved the issue.
Another cause for the exact same error:
[ERROR] PermissionError: [Errno 13] Permission denied: '/var/task/<my_lambda_python_file>.py'
Traceback (most recent call last):
  File "/var/lang/lib/python3.8/imp.py", line 300, in find_module
    with open(file_path, 'rb') as file:
I was deploying the Lambda using Atlassian Bamboo, and Bamboo seemed to be messing with the permissions of the files that make up the lambda:
-rw-r-----# 1 <user> <group> 2.5K 19 Nov 18:10 <my_lambda_python_file>.py
I worked around this problem by adding to the Bamboo script:
chmod 755 <my_lambda_python_file>.py
just before bundling the code into a zip file.
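The workaround can be sketched as a pre-bundling step; the filename is the post's placeholder, and the zip step is left as a comment since the bundling details vary per build:

```shell
# Work in a scratch directory; create a stand-in for the real handler file.
WORKDIR=$(mktemp -d)
cd "$WORKDIR"
touch my_lambda_python_file.py
# World-readable/executable, so the Lambda runtime user can open it.
chmod 755 my_lambda_python_file.py
# zip -r lambda.zip my_lambda_python_file.py   # then bundle as before
stat -c '%a' my_lambda_python_file.py          # prints 755
```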
I have been working with AWS Systems Manager and have created a Document to run a command, but it appears there is no way to override the timeout for a Run Command invocation in SSM.
I changed the execution timeout here in the parameters, but it does not work.
I also added a timeoutSeconds in my Document, and it doesn't work either.
This is my Document (I'm using schema version 2.2):
schemaVersion: "2.2"
description: "Runs a Python command"
parameters:
  Params:
    type: "String"
    description: "Params after the python3 keyword."
mainSteps:
  - action: "aws:runShellScript"
    name: "Python3"
    inputs:
      timeoutSeconds: '300000'
      runCommand:
        - "sudo /usr/bin/python3 /opt/python/current/app/{{Params}}"
1: The setting that’s displayed in your screenshot in the Other parameters section is the Delivery Timeout, which is different from the execution timeout.
You must specify the execution timeout value in the Execution Timeout field, if available. Not all SSM documents require that you specify an execution timeout. If a Systems Manager document doesn't require that you explicitly specify an execution timeout value, then Systems Manager enforces the hard-coded default execution timeout.
2: In your document, the timeoutSeconds attribute is in the wrong place. It needs to be on the same level as the action.
...
mainSteps:
  - action: "aws:runShellScript"
    timeoutSeconds: 300000
    name: "Python3"
    inputs:
      runCommand:
        - "sudo /usr/bin/python3 /opt/python/current/app/{{Params}}"
timeoutSeconds: '300000'
Isn't this a string rather than an integer?
I'm trying to adapt my CircleCI config file to build my Node.js app into a Docker image and deploy it to AWS ECS. I started with this config.yml file from ricktbaker and I'm trying to make it work on Fargate.
When I initially ran these changes in CircleCI, I got this error:
An error occurred (InvalidParameterException) when calling the UpdateService operation: Task definition does not support launch_type FARGATE.
It looks like I should be able to modify line 71 with the requires-compatibilities option to change how the task definition is registered, but I keep getting an error I can't figure out.
json=$(aws ecs register-task-definition --container-definitions "$task_def" --family "$FAMILY" --requires-compatibilities "FARGATE")
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
Unknown options: --requires-compatibilities, FARGATE
Am I adding the option incorrectly? It seems to match AWS' docs... Thanks for any tips.
I tried adding the debug option as well, but I don't see anything particularly helpful in the log (slightly redacted, below).
2019-03-13 03:05:45,948 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/1.11.76 Python/2.7.15 Linux/4.4.0-141-generic botocore/1.5.39
2019-03-13 03:05:45,948 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['ecs', 'register-task-definition', '--container-definitions', 'MYCONTAINERDEFINITION', '--family', 'MYTASKNAME', '--debug', '--requires-compatibilities', 'FARGATE']
2019-03-13 03:05:45,948 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function add_scalar_parsers at 0x7fd7e93fbb90>
2019-03-13 03:05:45,948 - MainThread - botocore.hooks - DEBUG - Event session-initialized: calling handler <function inject_assume_role_provider_cache at 0x7fd7e985d398>
2019-03-13 03:05:45,949 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/lib/python2.7/site-packages/botocore/data/ecs/2014-11-13/service-2.json
2019-03-13 03:05:45,962 - MainThread - botocore.hooks - DEBUG - Event service-data-loaded.ecs: calling handler <function register_retries_for_service at 0x7fd7ea57ecf8>
2019-03-13 03:05:45,962 - MainThread - botocore.handlers - DEBUG - Registering retry handlers for service: ecs
2019-03-13 03:05:45,963 - MainThread - botocore.hooks - DEBUG - Event building-command-table.ecs: calling handler <function add_waiters at 0x7fd7e9381d70>
2019-03-13 03:05:45,966 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/lib/python2.7/site-packages/botocore/data/ecs/2014-11-13/waiters-2.json
2019-03-13 03:05:45,967 - MainThread - awscli.clidriver - DEBUG - OrderedDict([(u'family', <awscli.arguments.CLIArgument object at 0x7fd7e8f066d0>), (u'task-role-arn', <awscli.arguments.CLIArgument object at 0x7fd7e8f06950>), (u'network-mode', <awscli.arguments.CLIArgument object at 0x7fd7e8f06990>), (u'container-definitions', <awscli.arguments.ListArgument object at 0x7fd7e8f069d0>), (u'volumes', <awscli.arguments.ListArgument object at 0x7fd7e8f06a10>), (u'placement-constraints', <awscli.arguments.ListArgument object at 0x7fd7e8f06a50>)])
2019-03-13 03:05:45,967 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.ecs.register-task-definition: calling handler <function add_streaming_output_arg at 0x7fd7e9381140>
2019-03-13 03:05:45,968 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.ecs.register-task-definition: calling handler <function add_cli_input_json at 0x7fd7e98661b8>
2019-03-13 03:05:45,968 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.ecs.register-task-definition: calling handler <function unify_paging_params at 0x7fd7e9402ed8>
2019-03-13 03:05:45,971 - MainThread - botocore.loaders - DEBUG - Loading JSON file: /usr/lib/python2.7/site-packages/botocore/data/ecs/2014-11-13/paginators-1.json
2019-03-13 03:05:45,972 - MainThread - botocore.hooks - DEBUG - Event building-argument-table.ecs.register-task-definition: calling handler <function add_generate_skeleton at 0x7fd7e947e320>
2019-03-13 03:05:45,972 - MainThread - botocore.hooks - DEBUG - Event before-building-argument-table-parser.ecs.register-task-definition: calling handler <bound method CliInputJSONArgument.override_required_args of <awscli.customizations.cliinputjson.CliInputJSONArgument object at 0x7fd7e8f06a90>>
2019-03-13 03:05:45,972 - MainThread - botocore.hooks - DEBUG - Event before-building-argument-table-parser.ecs.register-task-definition: calling handler <bound method GenerateCliSkeletonArgument.override_required_args of <awscli.customizations.generatecliskeleton.GenerateCliSkeletonArgument object at 0x7fd7e8f1e890>>
Your command line format is correct, i.e. register-task-definition --requires-compatibilities "FARGATE".
Fargate is quite new, so you may need to make sure your awscli is a recent version; your debug log shows aws-cli/1.11.76, which predates Fargate support.
What is your installed awscli version? The latest version is 1.16.123.
The recommended way to upgrade is: pip3 install awscli --upgrade --user
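The version check can be sketched like this; version_ge is a hypothetical helper built on sort -V, and 1.11.76 is the version from the debug log above:

```shell
# version_ge INSTALLED REQUIRED -> succeeds when INSTALLED >= REQUIRED.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

version_ge 1.11.76 1.16.123 || echo "too old: pip3 install awscli --upgrade --user"
version_ge 1.16.123 1.16.123 && echo "recent enough"
```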
Hope this helps.
I am trying to use AWS CodeDeploy. I run the aws deploy push --debug command. The file to be uploaded is around 250 KB, but the upload doesn't finish. The following log is displayed:
2017-10-27 11:11:40,601 - MainThread - botocore.auth - DEBUG - CanonicalRequest:
PUT
/frontend-deployer/business-services-0.0.1-SNAPSHOT-classes.jar
partNumber=39&uploadId=.olvaJkxreDZf1ObaHCMtHmkQ5DFE.uZ9Om0sxZB08YG3tqRWBxmGLTFWSYQaj9mHl26LPJk..Stv_vPB5NMaV.zAqsYX6fZz_S3.uN5J4FlxHZFXoeTkMiBSYQB2C.g
content-md5:EDXgvJ8Tt5tHYZ6Nkh7epg==
host:s3.us-east-2.amazonaws.com
x-amz-content-sha256:UNSIGNED-PAYLOAD
x-amz-date:20171027T081140Z
content-md5;host;x-amz-content-sha256;x-amz-date
UNSIGNED-PAYLOAD
...
2017-10-27 11:12:12,035 - MainThread - botocore.endpoint - DEBUG - Sending http request: <PreparedRequest [PUT]>
2017-10-27 11:12:12,035 - MainThread - botocore.awsrequest - DEBUG - Waiting for 100 Continue response.
2017-10-27 11:12:12,189 - MainThread - botocore.awsrequest - DEBUG - 100 Continue response seen, now sending request body.
Even though the file is fairly small (250 KB), the upload doesn't finish.
On the other hand, an upload via the aws s3 cp command takes 1 second.
How can I increase the upload speed of aws deploy push?