Uninstalled Amplify CLI but keep getting local IAM credentials

I have a Next.js app connected to Amplify. I logged in on my system using the Amplify CLI via
amplify configure
amplify init
I need to remove those credentials because I want to switch the Amplify project, so I ran
amplify remove
amplify uninstall
But the project is still connected to the credentials, and I can't log out. What's the problem here?
My project doesn't have any Amplify variables hardcoded. I tried the following command (AdminUserGlobalSignOutCommand comes from the @aws-sdk/client-cognito-identity-provider package):
new AdminUserGlobalSignOutCommand({
  UserPoolId: this.clientPoolId,
  Username: username,
})
And it works, when it shouldn't, because the credentials are supposed to be gone.
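Likely relevant here: amplify configure typically writes an access key into ~/.aws/credentials, and uninstalling the CLI does not delete that file, so any SDK that walks the default credential provider chain will keep finding those keys. A quick probe to confirm which identity is still being resolved (a sketch using boto3; any AWS SDK resolves broadly the same chain):

import boto3

# Prints the IAM identity the default credential chain resolves:
# env vars first, then ~/.aws/credentials, then container/instance metadata.
print(boto3.client("sts").get_caller_identity()["Arn"])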

Related

Amplify remove auth - Resource cannot be removed because it has a dependency on another resource

The original user pool was imported. I'm trying to remove the imported user pool and replace it with a new one. I've tried running amplify update auth and amplify remove auth, with no success:
Auth has already been imported to this project and cannot be modified
from the CLI. To modify, run "amplify remove auth" to unlink the
imported auth resource. Then run "amplify add auth".
Following the above, when I run amplify remove auth I get
Resource cannot be removed because it has a dependency on another
resource

Need to perform AWS calls for account xxx, but no credentials have been configured

I'm trying to deploy my stack to AWS using cdk deploy my-stack. When I do it in my terminal window it works perfectly, but when my pipeline does it I get this error: Need to perform AWS calls for account xxx, but no credentials have been configured. I have run aws configure and inserted the correct keys for the IAM user I'm using.
So again, it only works when I do it manually in the terminal, not when the pipeline does it. Anyone got a clue why I get this error?
I encountered the same error message on my Mac.
I had ~/.aws/config and credentials files set up, but my credentials file had a user that didn't exist in IAM.
For me, the solution was to go back into IAM in the AWS Console, create a new dev-admin user, and attach the AdministratorAccess policy to it.
Then I updated my ~/.aws/credentials file with the new [dev-admin] user, adding the keys that are available under the "Security Credentials" tab on the user's Summary page. The credentials entry looks like this:
[dev-admin]
aws_access_key_id=<your access key here>
aws_secret_access_key=<your secret access key here>
I then went back into my project root folder and ran
cdk deploy --profile dev-admin -v
Not sure if this is the 'correct' approach but it worked for me.
If you are using a named profile other than 'default', you might want to pass the name of the profile with the --profile flag.
For example:
cdk deploy --all --profile mynamedprofile
If you are deploying a stack or a stage, you can explicitly specify the environment your resources are deployed into. This is important for cdk-pipelines, because the AWS account where the Pipeline construct is created can be different from the one where the resources get deployed. For example (C#):
Env = new Amazon.CDK.Environment()
{
    Account = "123456789",
    Region = "us-east-1"
}
See the docs
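A similar sketch in Python, assuming aws-cdk-lib (CDK v2); MyStack is a placeholder for your own stack class and the account number is illustrative:

import aws_cdk as cdk

class MyStack(cdk.Stack):
    pass  # your resources would go here

app = cdk.App()
# Pin the stack to an explicit account/region rather than inheriting
# whatever environment the deploying credentials happen to resolve.
MyStack(app, "MyStack",
        env=cdk.Environment(account="123456789", region="us-east-1"))
app.synth()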
If you get this error you might need to bootstrap the account in question. And if you have a tools/ops account, you need to trust it from the "deployment" accounts.
Here is an example with dev, prod and tools:
cdk bootstrap <tools-account-no>/<region> --profile=tools;
cdk bootstrap <dev-account-no>/<region> --profile=dev;
cdk bootstrap <prod-account-no>/<region> --profile=prod;
cdk bootstrap --trust <tools-account-no> --profile=dev --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
cdk bootstrap --trust <tools-account-no> --profile=prod --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
cdk bootstrap --trust <tools-account-no> --profile=tools --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
Note that you need to commit the changes to cdk.context.json
The only thing that worked for me was to make sure that the ~/.aws/config and ~/.aws/credentials files don't both have a default profile section.
So if you remove the default profile from both files, it should work fine for you :)
Here is a sample of my ~/.aws/config (note: I don't use a default profile at all):
[profile myProfile]
sso_start_url = https://hostname/start#/
sso_region = REPLACE_ME_WITH_YOURS
sso_account_id = REPLACE_ME_WITH_YOURS
sso_role_name = REPLACE_ME_WITH_YOURS
region = REPLACE_ME_WITH_YOURS
output = yaml
And this is my ~/.aws/credentials (note: I don't use a default profile at all):
[myProfile]
aws_access_key_id=REPLACE_ME_WITH_YOURS
aws_secret_access_key=REPLACE_ME_WITH_YOURS
aws_session_token=REPLACE_ME_WITH_YOURS
Note: if it still doesn't work, try keeping only one profile in config and credentials, holding your AWS configuration and credentials.
I'm also new to this. I was adding sudo before the cdk bootstrap command; removing sudo made it work.
You can also run aws configure list to check which credentials are actually being picked up, and from where.
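To check the same thing programmatically, here is a boto3 sketch (boto3 walks broadly the same default chain the CLI and CDK use; method and access_key are real attributes of the resolved credentials object):

import boto3

# Resolve credentials the way the default chain would, and report the source.
creds = boto3.Session().get_credentials()
if creds is None:
    print("No credentials found in the default chain")
else:
    print(creds.method)      # e.g. 'env', 'shared-credentials-file', 'iam-role'
    print(creds.access_key)  # which access key was actually picked up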
If using a CI tool, check the output of cdk <command> --verbose for hints at the root cause of the credentials not being found.
In one case, the issue was simply that the ~/.aws/credentials file was missing (although it isn't technically required when running on EC2) - more details in this answer.
I too had this issue.
When I checked ~/.aws/credentials, it had some old account details, so I just deleted that file and ran
aws configure
cdk bootstrap aws://XXXXXX/ap-south-1
and it worked.

AWS Cognito: Getting error in Auth.signIn (Validate that amazon-cognito-identity-js has been linked)

I'm new to Amplify integration with Cognito and working on a react-native app using Amplify with Cognito for Authentication. I have configured the user pool and Federated Identity in the AWS console.
I have created my own signup and login interfaces with their respective screens, using react-navigation 5.x.
Below are the AWS-related modules I added to package.json:
"#aws-amplify/auth": "^3.4.24",
"#aws-amplify/core": "^3.8.16",
Here is the Amplify configuration in the App.js
Amplify.configure({
  Auth: {
    identityPoolId: 'eu-west-2:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx',
    region: 'eu-west-2',
    userPoolId: 'eu-west-2_xxxxxxxx',
    userPoolWebClientId: 'xxxxxxxxxxxxxxxxxxxxxx',
    authenticationFlowType: 'USER_PASSWORD_AUTH'
  }
});
I'm able to invoke Auth.signUp successfully, but I get the following error when I try to invoke Auth.signIn(username, password):
Validate that amazon-cognito-identity-js has been linked
How can I invoke Auth.signIn successfully? Please help in resolving the issue.
I had the same problem and fixed it by installing amazon-cognito-identity-js.
Run the following:
npm i amazon-cognito-identity-js
After installing, restart React Native:
npm start
If you're running on iOS, in addition to installing amazon-cognito-identity-js as mentioned, remember to also run pod install.

Google Cloud credentials with Terraform

This is a bit of a newbie question, but I've just gotten started with GCP provisioning using Terraform / Terragrunt, and I find the workflow for obtaining GCP credentials quite confusing. I've come from using AWS exclusively, where obtaining credentials and configuring them in the AWS CLI was quite straightforward.
Basically, the Google Cloud Provider documentation states that you should define a provider block like so:
provider "google" {
credentials = "${file("account.json")}"
project = "my-project-id"
region = "us-central1"
zone = "us-central1-c"
}
This credentials field suggests that I (apparently) must generate a service account key and keep the JSON file somewhere on my filesystem.
However, if I run the command gcloud auth application-default login, this generates a token located at ~/.config/gcloud/application_default_credentials.json; alternatively I can also use gcloud auth login <my-username>. From there I can access the Google API (which is what Terraform is doing under the hood as well) from the command line using a gcloud command.
So why does the Terraform provider require a JSON file of a service account? Why can't it just use the credentials that the gcloud CLI tool is already using?
By the way, if I configure Terraform to point to the application_default_credentials.json file, I get the following error:
Initializing modules...
Initializing the backend...
Error: Failed to get existing workspaces: querying Cloud Storage
failed: Get
https://www.googleapis.com/storage/v1/b/terraform-state-bucket/o?alt=json&delimiter=%2F&pageToken=&prefix=projects%2Fsomeproject%2F&prettyPrint=false&projection=full&versions=false:
private key should be a PEM or plain PKCS1 or PKCS8; parse error:
asn1: syntax error: sequence truncated
The credentials field in the provider config expects a path to a service account key file, not a user account credentials file. If you want to authenticate with your user account, try omitting credentials and then running gcloud auth application-default login; if Terraform doesn't find your credentials file, you can set the GOOGLE_APPLICATION_CREDENTIALS environment variable to point to ~/.config/gcloud/application_default_credentials.json.
Read here for more on the topic of service accounts vs user accounts. For what it's worth, the Terraform docs explicitly advise against using application-default login:
This approach isn't recommended- some APIs are not compatible with credentials obtained through gcloud
Similarly GCP docs state the following:
Important: For almost all cases, whether you are developing locally or in a production application, you should use service accounts, rather than user accounts or API keys.
Change the credentials field to point directly to the key file location. Everything else looks good.
Example: credentials = "/home/scott/gcp/FILE_NAME"
Still, it is not recommended to use gcloud auth application-default login; the best approaches are described here:
https://www.terraform.io/docs/providers/google/guides/provider_reference.html#credentials-1

What credentials does Boto3 use when running in AWS CodeBuild?

So I've written a set of deployment scripts that run in CodeBuild and use Boto3 to deploy some dockerised apps to ECS. The problem I'm having is when I want to deploy to our separate production account.
If I'm running the CodeBuild project from the dev account but want to create resources in the production account, it's my understanding that I should set up a role in the target account, allow the CodeBuild role to assume it, and then call:
import boto3

sts_client = boto3.client("sts")
response = sts_client.assume_role(
    RoleArn=arn_of_a_role_I_set_up,
    RoleSessionName=some_name,
)
This returns an access key, secret key, and session token. This works and returns what I'd expect.
Then what I want to do is just assign those values to these environment variables:
AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
This is because, according to the documentation here: http://boto3.readthedocs.io/en/latest/guide/configuration.html, Boto3 should defer to those environment variables if you don't explicitly pass credentials in the client or session methods.
However, when I do this the resources still get created in the same dev account.
Also, if I call printenv in the first part of my buildspec.yml before my scripts attempt to set the environment variables, those AWS key/secret/token variables aren't present at all.
So when it's running in CodeBuild, where is Boto3 getting its credentials from?
Is the solution just going to be to pass in a key/secret/token to every boto3.client() call to be perfectly sure?
The credentials in the CodeBuild environment are from the service role associated with your CodeBuild project. Boto and botocore will use the "ContainerProvider" automatically to grab those credentials in the CodeBuild environment.
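If you'd rather not depend on that resolution order, here is a sketch of the pattern suggested at the end of the question (the role ARN, session name, and the choice of an ECS client are placeholders): assume the role, then pass the returned credentials explicitly to each client, bypassing both the environment variables and the ContainerProvider:

import boto3

# Assume the cross-account role, then hand its credentials directly
# to the client that creates resources in the production account.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::PROD_ACCOUNT_ID:role/deploy-role",  # placeholder
    RoleSessionName="cross-account-deploy",                   # placeholder
)["Credentials"]

ecs = boto3.client(
    "ecs",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)

Setting AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN can also work, but only if they are set in the same process before the boto3 session or client is created.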