I am facing a weird scenario. I generate my AWS AccessKeyId, SecretAccessKey and SessionToken by running the assume-role-with-saml command. After copying these values into my .aws\credentials file, I can run "aws s3 ls" and see all the S3 buckets. Likewise, any other AWS CLI command to view objects works perfectly fine.
However, when I write a .NET Core application to list objects, it doesn't work on my computer. The same .NET application works fine on my colleagues' computers. We all have access to AWS through the same role, and there are no users in the IAM console.
Here is the sample code. I doubt anything is wrong with the code itself, since it works fine on other users' computers.
var _ssmClient = new AmazonSimpleSystemsManagementClient();
var r = _ssmClient.GetParameterAsync(new Amazon.SimpleSystemsManagement.Model.GetParameterRequest
{
    Name = "/KEY1/KEY2",
    WithDecryption = true
}).ConfigureAwait(false).GetAwaiter().GetResult();
Any idea why commands run through the CLI work but API calls don't? Don't they both read the same %USERPROFILE%\.aws\credentials file?
I found it. Posting here since it can be useful for someone having the same issue.
Go to this folder: %USERPROFILE%\AppData\Local\AWSToolkit
Take a backup of all files and folders in that location, then delete everything there.
This solution applies only if you can run commands like "aws s3 ls" and get results successfully, but you get the error "The provided token has expired" when making the same calls through the .NET SDK.
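If you want to script the backup-and-delete step, here is a minimal sketch (the helper name and the backup location are my own choices, not part of the toolkit):

```python
import shutil
from pathlib import Path

def clear_awstoolkit_cache(cache_dir: Path) -> Path:
    """Back up the AWSToolkit cache folder next to itself, then delete it.

    The toolkit caches temporary credentials here; a stale cache can make the
    .NET SDK report "The provided token has expired" while the CLI still works.
    """
    backup = cache_dir.with_name(cache_dir.name + ".bak")
    shutil.copytree(cache_dir, backup)  # keep a copy in case anything breaks
    shutil.rmtree(cache_dir)            # the toolkit recreates what it needs
    return backup
```

On Windows you would call it with `Path(os.environ["USERPROFILE"]) / "AppData" / "Local" / "AWSToolkit"`.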
I get the error "An error occurred (NoSuchResourceException) when calling the GetServiceQuota operation" when running the following boto3 Python code to get the value of the "Buckets" quota:
client_quota = boto3.client('service-quotas')
resp_s3 = client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
In the above code, the QuotaCode "L-89BABEE8" is for "Buckets". I presumed the ServiceCode for Amazon S3 would be "s3", so I put that there, but I guess it is wrong and is causing the error. I tried to find documentation on the ServiceCode for S3 but could not. I even tried "S3" (uppercase "S") and "Amazon S3", but those didn't work either.
What I tried?
client_quota = boto3.client('service-quotas')
resp_s3 = client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
What I expected?
Output in the format below for S3. The example below is for EC2 and is the output of resp_ec2 = client_quota.get_service_quota(ServiceCode='ec2', QuotaCode='L-6DA43717')
I just played around with this and I'm seeing the same thing you are: empty responses from any Service Quotas list or get command for the s3 service. However, s3 is definitely the correct service code, because it comes back from the Service Quotas list_services() call. I then noticed there are also list and get commands for AWS default service quotas, and when I tried those, they came back with data. I'm not entirely sure, but based on the docs I think any quota that can't be adjusted, and possibly any quota your account hasn't requested an adjustment for, will come back with an empty response from get_service_quota(), and you'll need to call get_aws_default_service_quota() instead.
So I believe what you need to do is probably run this first:
client_quota.get_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
And if that throws an exception, then run the following:
client_quota.get_aws_default_service_quota(ServiceCode='s3', QuotaCode='L-89BABEE8')
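That try-then-fall-back pattern can be wrapped in a small helper; a sketch under the assumption above (the function name is mine, and the client is the service-quotas client from the question):

```python
def get_quota(client, service_code, quota_code):
    """Return the applied quota if one exists, else the AWS default quota.

    Quotas that cannot be adjusted (or were never adjusted) appear to exist
    only as defaults, so get_service_quota raises NoSuchResourceException.
    """
    try:
        resp = client.get_service_quota(ServiceCode=service_code,
                                        QuotaCode=quota_code)
    except client.exceptions.NoSuchResourceException:
        resp = client.get_aws_default_service_quota(ServiceCode=service_code,
                                                    QuotaCode=quota_code)
    return resp["Quota"]
```

Usage with the question's client would be `get_quota(client_quota, 's3', 'L-89BABEE8')`.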
I have a React Amplify app. All I want to do is work on it, push the changes to Amplify, and so on, using standard, basic commands like amplify push.
The problem is that shortly after starting to work on my app (a month or two), I was no longer allowed to push, pull, or work on the app from the command line. There is no explanation, and the only error is this:
An error occurred during the push operation: /
Access Denied
✅ Report saved: /var/folders/8j/db7_b0d90tq8hgpfcxrdlr400000gq/T/storygraf/report-1658279884644.zip
✔ Done
The logs created from the error show this.
error.json
{
  "message": "Access Denied",
  "code": "AccessDenied",
  "region": null,
  "time": "2022-07-20T01:20:01.876Z",
  "requestId": "DRFVQWYWJAHWZ8JR",
  "extendedRequestId": "hFfxnwUjbtG/yBPYG+GW3B+XfzgNiI7KBqZ1vLLwDqs/D9Qo+YfIc9dVOxqpMo8NKDtHlw3Uglk=",
  "statusCode": 403,
  "retryable": false,
  "retryDelay": 60.622127086356855
}
I have two profiles in my .aws/credentials file. One is the default (my work account); the other is called "personal". I have tried to push with
amplify push
amplify push --profile default
amplify push --profile personal
It always results in the same.
I followed the procedure located here under the title "Create environment variables to assume the IAM role and verify access" and entered a new AWS_ACCESS_KEY_ID and a new AWS_SECRET_ACCESS_KEY. When I then run the command ...
aws sts get-caller-identity
It returns the correct ARN. However, there is an AWS_SESSION_TOKEN variable that the docs say needs to be set, and I have no idea what that is.
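For reference, AWS_SESSION_TOKEN is the temporary session token that STS returns alongside the access key pair whenever a role is assumed; long-lived IAM user keys don't have one, and all three values must be exported together. A sketch of how the pieces map (the helper name is mine):

```python
def session_env_vars(credentials):
    """Map the Credentials dict returned by an STS assume-role call to the
    three environment variables the docs ask you to export."""
    return {
        "AWS_ACCESS_KEY_ID": credentials["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": credentials["SecretAccessKey"],
        # Only present for temporary (assumed-role) credentials:
        "AWS_SESSION_TOKEN": credentials["SessionToken"],
    }
```

With boto3 the input would come from `boto3.client("sts").assume_role(RoleArn=..., RoleSessionName=...)["Credentials"]`.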
Running amplify push under this new profile still results in an error.
I have also tried
AWS_PROFILE=personal aws sts get-caller-identity
Again, this results in the correct settings, but the amplify push still fails for the same reasons.
At this point, I'm ready to drop it and move to something else. I've been debugging this for literally months now, and it would be far easier to set up a standard React app on S3 and stand up my resources manually than to keep dealing with this.
Any help is appreciated.
This is the same issue for me. There seems to be no way to reconfigure the CLI once its authentication method is set to "profile". I'm trying to change it back to Amplify Studio and have not been able to crack the code on updating it. Documentation in this area is awful.
In the amplify folder there is a .config directory. There are three files:
local-aws-info.json
local-env-info.json
project-config.json
project-config.json is required, but the local-* files maintain state for your local configuration. Delete those two and you can re-initialize the project and re-authenticate the Amplify CLI for the environment.
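A sketch of that reset in Python (the function name is mine; it removes only the local-* state files and leaves project-config.json in place):

```python
from pathlib import Path

def reset_amplify_local_state(project_root: Path) -> list:
    """Delete amplify/.config/local-*.json so the project can be re-inited
    and the Amplify CLI re-authenticated. Returns the files removed."""
    config_dir = project_root / "amplify" / ".config"
    removed = []
    for name in ("local-aws-info.json", "local-env-info.json"):
        target = config_dir / name
        if target.exists():
            target.unlink()      # safe to delete; regenerated on amplify init
            removed.append(name)
    return removed
```

After running it against the project root, re-run amplify init to reauthenticate.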
I'm following the instructions in the Cloud Data Fusion sample tutorial, and everything seems to work fine until I try to run the pipeline right at the end. Cloud Data Fusion Service API permissions are set for the Google-managed service account as per the instructions, and the pipeline preview function works without any issues.
However, when I deploy and run the pipeline, it fails after a couple of minutes. Shortly after the status changes from provisioning to running, the pipeline stops with the following permissions error:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "xxxxxxxxxxx-compute@developer.gserviceaccount.com does not have storage.buckets.create access to project X.",
"reason" : "forbidden"
} ],
"message" : "xxxxxxxxxxx-compute@developer.gserviceaccount.com does not have storage.buckets.create access to project X."
}
xxxxxxxxxxx-compute@developer.gserviceaccount.com is the default Compute Engine service account for my project.
"Project X" is not one of mine, though; I've no idea why the pipeline startup code is trying to create a bucket there. It does successfully create temporary buckets (one called df-xxx and one called dataproc-xxx) in my own project before it fails.
I've tried this with two separate accounts and get the same error in both. I had tried adding storage admin roles to the various service accounts, to no avail, but that was before I realized it was attempting to access a different project entirely.
I believe I was able to reproduce this. What's happening is that the BigQuery source plugin first creates a temporary working GCS bucket to export the data to, and I suspect it attempts to create that bucket in the dataset's project by default, instead of in your own project as it should.
As a workaround, create a GCS bucket in your own project, and then in the BigQuery source configuration of your pipeline, set the "Temporary Bucket Name" field to "gs://<your-bucket-name>".
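For illustration, creating that bucket with the google-cloud-storage client library might look like this (a sketch; the helper name, bucket name, and project are placeholders):

```python
def make_temp_bucket(client, name, location="US"):
    """Create the GCS bucket for the pipeline, then paste the returned
    gs:// URI into the BigQuery source's "Temporary Bucket Name" field."""
    bucket = client.create_bucket(name, location=location)
    return "gs://" + bucket.name
```

The client would come from `from google.cloud import storage; client = storage.Client(project="my-project")`.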
You are missing the permissions setup steps after you create an instance. The instructions for giving your service account the right permissions are on this page: https://cloud.google.com/data-fusion/docs/how-to/create-instance
I am trying to run a demo project for uploading to S3 with Grails 3.
The project in question is this, more specifically the S3 upload is only for the 'Hotel' example at the end.
When I run the project and go to upload the image, I get an "updated" message but nothing actually happens - there's no inserted URL in the dbconsole table.
I think the issue lies with how I am running the project, I am using the command:
grails -Daws.accessKeyId=XXXXX -Daws.secretKey=XXXXX run-app
(where I am substituting the X's for my keys, obviously).
This method of running the project appears to be slightly different from the method shown in the example. I run my project from the command line; I do not use GGTS, just Sublime.
I have also tried inserting my AWS keys into application.yml, but then I receive an internal server error.
Can anyone help me out here?
Check your bucket policy in S3. You need to grant the API user permission to upload objects.
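As a sketch of what such a grant could look like (the statement ID, bucket name, and user ARN are placeholders; verify the principal against your own setup before attaching it with put_bucket_policy or the console):

```python
import json

def upload_policy(bucket_name, user_arn):
    """Build a bucket policy JSON string granting the given IAM principal
    s3:PutObject on every key in the bucket."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowApiUserUploads",
            "Effect": "Allow",
            "Principal": {"AWS": user_arn},
            "Action": "s3:PutObject",
            # The /* suffix targets objects in the bucket, not the bucket itself
            "Resource": "arn:aws:s3:::%s/*" % bucket_name,
        }],
    }
    return json.dumps(policy)
```

With boto3 this could then be applied via `s3.put_bucket_policy(Bucket=bucket_name, Policy=upload_policy(...))`.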
I have set up my PC with Python and connections to AWS. This was successfully tested using the s3_sample.py file; I had to create an IAM user account with the credentials in a file, which worked fine for S3 buckets.
My next task was to create an MQTT bridge and put some data into a Kinesis stream using awslabs/mqtt-kinesis-bridge.
This all seems to be OK, except that I get an error when I run bridge.py. The error is:
Could not find ACTIVE stream:my_first_stream error:Stream my_first_stream under account 673480824415 not found.
Strangely, this is not the account in the .boto file that the bridge's instructions suggest setting up, which holds the same credentials I used for the S3 bucket:
[Credentials]
aws_access_key_id = AA1122BB
aws_secret_access_key = LlcKb61LTglis
It would seem to me that bridge.py has a hardcoded account, but I cannot see it, and I can't see where it points to the .boto file for credentials.
Thanks in advance.
So the issue of not finding the ACTIVE stream for the account is resolved by the following:
Ensure you are hooked into the us-east-1 region, as this is the default region for bridge.py.
Create your stream; you will only need one shard.
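The stream creation step can be scripted; here is a sketch using boto3 (the newer SDK; the bridge itself uses the older boto package), with the stream name taken from the error above:

```python
def create_bridge_stream(client, name="my_first_stream"):
    """Create a one-shard Kinesis stream and block until it is ACTIVE,
    so bridge.py's 'Could not find ACTIVE stream' check passes."""
    client.create_stream(StreamName=name, ShardCount=1)
    client.get_waiter("stream_exists").wait(StreamName=name)
    return name
```

The client would be `boto3.client("kinesis", region_name="us-east-1")`, matching the bridge's default region.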
The next problem stems from the specific versions of MQTT and the paho-mqtt Python library I installed. The bridge application was written against the API of MQTT 1.2.1 using paho-mqtt 0.4.91.
The newer version available for download on their website interacts with the paho-mqtt library differently: it passes an additional "flags" object to the on_connect callback. This generates the error I was experiencing, since the callback is not expecting the fifth argument.
You should be able to fix it by making the following change to bridge.py
Line 104 currently looks like this:
def on_connect(self, mqttc, userdata, msg):
Simply add flags after userdata, so that the callback function looks like this:
def on_connect(self, mqttc, userdata, flags, msg):
This should resolve the final error about the incorrect number of arguments being passed.
Hope this helps others; thanks for the efforts.
When you call the Python SDK for an AWS service, there is a line in bridge.py that imports the boto modules for AWS services:
import boto
By default, this import points boto at the .boto file for credentials.
Here is the explanation from the Boto Config documentation:
A boto config file is a text file formatted like an .ini configuration file that specifies values for options that control the behavior of the boto library. In Unix/Linux systems, on startup, the boto library looks for configuration files in the following locations and in the following order:
/etc/boto.cfg - for site-wide settings that all users on this machine will use
~/.boto - for user-specific settings
~/.aws/credentials - for credentials shared between SDKs
Of course, you can also set the environment variables directly:
export AWS_ACCESS_KEY_ID="Your AWS Access Key ID"
export AWS_SECRET_ACCESS_KEY="Your AWS Secret Access Key"