I am new to Docker. In our dev environment we have an S3 access key ID and AWS secret access key, and we use DigitalOcean Spaces to offload our dev static files. We need to run the collectstatic command manually to update the media and static files. The whole process was working fine for the last couple of months, but I have recently started facing this error. After some research I updated the access key ID and AWS secret access key, but the error remains the same. Can anyone please help me with this issue?
[screenshot of the error]
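For context, the manual collectstatic step in a Docker dev environment is typically run along these lines (a sketch only; the docker-compose service name "web" here is a placeholder, not part of the original setup):

# run collectstatic inside the running container (service name "web" is an assumption)
docker-compose exec web python manage.py collectstatic --noinput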
I am facing this weird scenario. I generate my AWS AccessKeyId, SecretAccessKey and SessionToken by running the assume-role-with-saml command. After copying these values to the .aws\credentials file, I can run "aws s3 ls" and see all the S3 buckets. Similarly, I can run any AWS command to view objects and it works perfectly fine.
However, when I write a .NET Core application to list objects, it doesn't work on my computer. The same .NET application works fine on my colleagues' computers. We all have access to AWS through the same role. There are no users in the IAM console.
Here is the sample code, though I don't think anything is wrong with the code itself, because it works fine on other users' computers.
// The client picks up credentials from the default provider chain (credentials file, env vars, etc.)
var _ssmClient = new AmazonSimpleSystemsManagementClient();

// Synchronously fetch and decrypt a parameter from SSM Parameter Store
var r = _ssmClient.GetParameterAsync(new Amazon.SimpleSystemsManagement.Model.GetParameterRequest
{
    Name = "/KEY1/KEY2",
    WithDecryption = true
}).ConfigureAwait(false).GetAwaiter().GetResult();
Any idea why commands run through the CLI work while the API calls don't? Don't they both look at the same %USERPROFILE%\.aws\credentials file?
I found it. Posting it here since it can be useful for someone having the same issue.
Go to this folder: %USERPROFILE%\AppData\Local\AWSToolkit
Take a backup of all files and folders there, then delete everything from that location.
This solution applies only if you can run commands like "aws s3 ls" and get results successfully, but you get the error "The provided token has expired" when making the same calls through the .NET API libraries.
I am trying to run a demo project for uploading to S3 with Grails 3.
The project in question is this one; more specifically, the S3 upload is only used for the 'Hotel' example at the end.
When I run the project and try to upload the image, I get an 'updated' message but nothing actually happens: there is no inserted URL in the dbconsole table.
I think the issue lies with how I am running the project. I am using the command:
grails -Daws.accessKeyId=XXXXX -Daws.secretKey=XXXXX run-app
(where I am substituting the X's for my keys, obviously).
This method of running the project appears to be slightly different from the method shown in the example. I run my project from the command line and I do not use GGTS, just Sublime.
I have tried inserting my AWS keys into application.yml, but then I receive an internal server error.
Can anyone help me out here?
Check your bucket policy in S3. You need to grant the API user permission to upload objects.
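As a rough illustration (the bucket name, account ID and IAM user below are placeholders, not taken from the project in question), a minimal policy that lets an API user upload objects can be attached with the AWS CLI like this:

# policy.json - grants the uploading IAM user permission to put objects (all names are placeholders)
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:user/s3-uploader"},
    "Action": ["s3:PutObject", "s3:PutObjectAcl"],
    "Resource": "arn:aws:s3:::example-upload-bucket/*"
  }]
}
EOF

# attach the policy to the bucket
aws s3api put-bucket-policy --bucket example-upload-bucket --policy file://policy.json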
I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine: I can connect using environment variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION and then
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When changing to the live credentials in the environment variables and setting the endpoint_url to the DynamoDB endpoint on AWS, this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid, as they are used in a different app that talks to the same DynamoDB. I've also tried passing them directly to the method instead of using environment variables, but the error persisted. Furthermore, to rule out any issues with trailing spaces, I've even hard-coded the credentials in the code. I'm using Python 3.4.4.
Is there maybe a header that should also be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure they contain only alphanumeric characters), but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
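If you want to double-check the credentials and region outside the application, the same environment variables can be exercised from the CLI with something like this (the region is just an example):

# uses AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_DEFAULT_REGION from the environment
aws dynamodb list-tables --region us-west-2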
It's a sign that your system time is off. Maybe you can check your:
1. Time zone
2. Time settings
If automatic time synchronization is available, enable it so your clock stays correct.
"sudo hwclock --hctosys" should do the trick.
Just wanted to point out that, when accessing DynamoDB from a C# environment (using the AWS .NET SDK), I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
It worked immediately after I changed those keys in the code.
I'm trying to install a simple app on Amazon AWS. Since I'm really new to servers, I used Elastic Beanstalk.
Everything is OK, but when I run my app I get an error: "PDO error: could not find driver".
I tried mysqli_ping on the connection and got boolean true, so that part is OK.
I looked for help, but all I found is this:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_PHP.rds.html
2. If you plan to use PDO, install the PDO drivers. For more information, go to http://www.php.net/manual/pdo.installation.php.
But I really don't know what to do with this information. Any help?
So, it's quite a procedure. First you have to get SSH access to your instance:
1. Generate a key pair for your instance and download it in .pem format.
Go to https://console.aws.amazon.com/ec2/v2/home?region=eu-west-1#Instances (change for your region),
click Key Pairs, then Create Key Pair, create your new key pair and download it to your computer.
Associate your instance with the key: go to Elastic Beanstalk, select your application, select Configuration, then Instances, and pick your new key from the EC2 key pair drop-down.
2. Download PuTTY for Windows (the installer) and install it: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
3. Transform the key to .ppk format using PuTTYgen:
http://www.techrepublic.com/blog/the-enterprise-cloud/connect-to-amazon-ec2-with-a-private-key-using-putty-and-pageant/?tag=nl.e011#.
4. Set up PuTTY to use the key: http://docs.aws.amazon.com/gettingstarted/latest/wah-linux/getting-started-deploy-app-connect.html
5. Run PuTTY, find your instance's public DNS and add ec2-user@ in front of it, so it looks like this: ec2-user@ec2-54-76-47-0.eu-west-1.compute.amazonaws.com
Then it is as simple as: sudo yum install php-pdo
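Once you're connected over SSH, the install plus a quick sanity check could look roughly like this (package names assume the default Amazon Linux PHP stack, so adjust for your PHP version):

# install the PDO and MySQL drivers for PHP (package names vary per PHP version)
sudo yum install -y php-pdo php-mysql

# confirm the driver is now loaded
php -m | grep -i pdo

# restart the web server so PHP picks up the new extension
sudo service httpd restart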
My build on Travis is failing to deploy. I am using the same S3 access key ID as another build that is working. Do I have to use different access keys for each build project?
Every encrypted key in your .travis.yml must be unique per repo.
So even if key shjowjdjpakk19o works on test/test, it will not work on test/nother-test.
You can create a new key by deleting the old one, running travis encrypt SECRET-KEY with the travis tool, and copying in the new value.
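For example (the variable name and value below are placeholders), re-encrypting a secret for a specific repo and appending it to .travis.yml could look like this:

# encrypt the value for this repo only and add it to the env.global section of .travis.yml
travis encrypt AWS_SECRET_ACCESS_KEY="your-new-secret" --add env.global -r test/nother-test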