AWS CLI - How do I encrypt the credentials

When using aws configure, the credentials are stored on my workstation in clear text. This is a HUGE security violation. I tried opening an issue at the AWS CLI GitHub and it was summarily closed. I am using Terraform AND the AWS CLI directly, so a work-around needs to support both.
Example:
[MyProfile]
aws_access_key_id = xxxxxxxxxxxxxxx
aws_secret_access_key = yyyyyyyyyyyyyyyyyy
region=us-east-2
output=json

This is the simplest work-around I could find.
References:
https://devblogs.microsoft.com/powershell/secretmanagement-and-secretstore-are-generally-available/
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sourcing-external.html
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.secretmanagement/?view=ps-modules
The following PowerShell creates an encrypted vault.
#This will destroy existing AWS vault
#The Vault will be set accessible to the current User with no password.
#When AWS CLI invokes this there is no way to request a password.
Install-Module Microsoft.PowerShell.SecretManagement
Install-Module Microsoft.PowerShell.SecretStore
Set-SecretStoreConfiguration -Authentication None -Scope CurrentUser -Interaction None
Register-SecretVault -Name "AWS" -ModuleName Microsoft.PowerShell.SecretStore -DefaultVault -AllowClobber
Set-Secret -Vault "AWS" -Name "test" -Secret "test"
Get-SecretVault
Write-Host "Vault Created"
This PowerShell creates the secret. Notice that it is possible to expire the secret.
$profile = Read-Host -Prompt "Enter AWS Account Number"
$aws_access_key_id = Read-Host -Prompt "Enter AWS access key"
$aws_secret_access_key = Read-Host -Prompt "Enter AWS secret access key"
$secretIn = @{
Version=1;
AccessKeyId= $aws_access_key_id;
SecretAccessKey=$aws_secret_access_key;
SessionToken= $null; #"the AWS session token for temporary credentials";
#Expiration="ISO8601 timestamp when the credentials expire";
}
$secret = ConvertTo-Json -InputObject $secretIn
Set-Secret -Name $profile -Secret $secret
This file, named credential_process.cmd, needs to be located on the PATH or next to terraform.exe.
@echo off
REM This file needs to be accessible to the aws cli or programs using it.
REM To support other paths, copy it to C:\Program Files\Amazon\AWSCLIV2
Powershell.exe -Command "Get-Secret -Vault AWS -Name %1 -AsPlainText "
Finally, in your {user}\.aws\credentials file, place the following entry:
[XXXXX-us-east-1]
credential_process = credential_process.cmd "XXXXX"
region=us-east-1
output=json
Now you can run an aws cli command (or Terraform) using:
aws ec2 describe-vpcs --profile XXXXX-us-east-1
Drawbacks:
There is no way to prevent a user from using the simple aws configure statement and storing credentials in the clear.
There is no way to force an admin to use this method.
Like everything else AWS:
The complexity is unnecessary.
The documentation is very detailed, but somehow always missing important information.
Everything is a hack-job.
Possibilities:
It is possible to create a user (User1) that has access only to a certain secret in Secrets Manager (User2's credentials).
User1's credentials are stored in the local vault.
User1 would fetch the User2 credentials from Secrets Manager during invocation of credential_process.cmd, as sketched below.
The person is never given the User2 credentials directly.
This would force the user to use the method above.
However, the implementation of this should be in aws configure, not hacked together. That would allow other dependent tools to just work once the configuration is complete.
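A rough sketch of what such a credential_process helper could look like in Python (assuming boto3; the secret name and its JSON layout are hypothetical) — it authenticates as User1 and prints User2's keys in the JSON shape the credential_process contract expects:
import json
import sys

import boto3

# Hypothetical helper: User1's keys (picked up from the local vault/default
# profile) fetch User2's keys from Secrets Manager, then print them in the
# JSON shape the AWS CLI expects from a credential_process.
def main():
    secret_id = sys.argv[1]  # e.g. "User2-credentials" (hypothetical name)
    sm = boto3.client("secretsmanager")  # authenticates as User1
    stored = json.loads(sm.get_secret_value(SecretId=secret_id)["SecretString"])
    print(json.dumps({
        "Version": 1,
        "AccessKeyId": stored["AccessKeyId"],
        "SecretAccessKey": stored["SecretAccessKey"],
    }))

if __name__ == "__main__":
    main()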

I came across this link a while back and thought it was excellent in explaining all the different options that you can try to solve the problem that you described above.
Never put AWS temporary credentials in the credentials file (or env vars)—there’s a better way | by Ben Kehoe | Oct, 2021 | Medium

This is a HUGE security violation. I tried opening an issue at the aws cli github and it was summarily closed.
Running on AWS, you can use the instance role (for EC2, Lambda, or ECS).
Running outside AWS, there isn't a much better option. If someone gets access to your home directory, it's not your computer anymore. However, the credentials can also be passed as environment variables or CLI/API parameters.
These can be encrypted and decrypted, or requested at the time of use, but you still need access to the decryption key or service.

You can actually use something like aws-vault: it stores the secrets in the local keychain and basically creates a temporary shell with the creds as env variables, or you can just exec a specific command without creating a whole shell.
Another similar tool is vaulted, which stores credentials in an encrypted file and creates a temporary shell session when you want to use it.

Related

AWS - Extend Time to Live for Presigned URL by using IAM Credentials

I am running some code inside a docker container, in Lambda. In the code, I use AWS CLI to generate a presigned url:
report_s3 <- paste("MY-BUCKET/stash/", cognito_user_email, cloudwatch_uuid, "/report.html", sep = "")
cmd <- paste("aws s3 presign ", report_s3, " --expires-in 604800")
presigned_url <- system(cmd, intern = TRUE)
The above code is R. And it works fine. Essentially all I'm doing here is sending the aws cli command to a bash terminal, and reading back the response.
My issue is that while the resulting presigned url works fine, it is not valid for the 7 days I requested. This is because the token used to create it expires. In order to fix this, I believe the correct approach is to :
Use Secrets Manager to save the secret access key for a new IAM user with permissions. This would be done in CDK, the console, the CLI... wherever, really.
Retrieve the credentials from Secrets Manager in the code to use the IAM credentials. This would be done with the CLI.
Create the presigned URL. Again with the CLI.
I have a few questions:
How does this offer additional security over just hard-coding the credentials, since it's inside a Lambda that's only running for 300 ms?
How would you hard-code the credentials? Is it just a case of dumping a copy of the files into $HOME/.aws/credentials and specifying --profile in the aws s3 presign call?
Is it possible to do this with secrets manager etc using only CLI commands?
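For what it's worth, here is a hedged boto3 sketch of steps 2 and 3 (the secret name, bucket, and key are placeholders, and the secret is assumed to hold JSON with AccessKeyId/SecretAccessKey fields). Signing with long-lived IAM user keys instead of the Lambda role's temporary token is what lets the URL survive the full 7 days:
import json

import boto3

# Fetch long-lived IAM user keys from Secrets Manager (placeholder secret
# name and layout), then sign with them so the presigned URL is not tied to
# the Lambda role's expiring session token.
sm = boto3.client("secretsmanager")
keys = json.loads(sm.get_secret_value(SecretId="presign-user-keys")["SecretString"])

s3 = boto3.client(
    "s3",
    aws_access_key_id=keys["AccessKeyId"],
    aws_secret_access_key=keys["SecretAccessKey"],
)
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "MY-BUCKET", "Key": "stash/report.html"},
    ExpiresIn=604800,  # 7 days, the SigV4 maximum
)
print(url)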

Need to perform AWS calls for account xxx, but no credentials have been configured

I'm trying to deploy my stack to AWS using cdk deploy my-stack. When doing it in my terminal window it works perfectly, but when I'm doing it in my pipeline I get this error: Need to perform AWS calls for account xxx, but no credentials have been configured. I have run aws configure and inserted the correct keys for the IAM user I'm using.
So again, it only works when I'm doing it manually in the terminal, but not when the pipeline is doing it. Anyone got a clue as to why I get this error?
I encountered the same error message on my Mac.
I had ~/.aws/config and credentials files set up. My credentials file had a user that didn't exist in IAM.
For me, the solution was to go back into IAM in the AWS Console and create a new dev-admin user with AdministratorAccess privileges.
Then I updated my ~/.aws/credentials file with the new [dev-admin] user and added the keys, which are available under the "Security Credentials" tab on the user's Summary page. The credentials entry looks like this:
[dev-admin]
aws_access_key_id=<your access key here>
aws_secret_access_key=<your secret access key here>
I then went back into my project root folder and ran
cdk deploy --profile dev-admin -v
Not sure if this is the 'correct' approach but it worked for me.
If you are using a named profile other than 'default', you might want to pass the name of the profile with the --profile flag.
For example:
cdk deploy --all --profile mynamedprofile
If you are deploying a stack or a stage, you can explicitly specify the environment you are deploying resources in. This is important for cdk-pipelines because the AWS account where the Pipeline construct is created can be different from where the resources get deployed. For example (C#):
Env = new Amazon.CDK.Environment()
{
    Account = "123456789",
    Region = "us-east-1"
}
See the docs
If you get this error you might need to bootstrap the account in question. And if you have a tools/ops account you need to trust this from the "deployment" accounts.
Here is an example with dev, prod and tools:
cdk bootstrap <tools-account-no>/<region> --profile=tools;
cdk bootstrap <dev-account-no>/<region> --profile=dev;
cdk bootstrap <prod-account-no>/<region> --profile=prod;
cdk bootstrap --trust <tools-account-no> --profile=dev --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
cdk bootstrap --trust <tools-account-no> --profile=prod --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
cdk bootstrap --trust <tools-account-no> --profile=tools --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess';
Note that you need to commit the changes to cdk.context.json
The only way that worked for me was to make sure that the ~/.aws/config and ~/.aws/credentials files don't both have a default profile section.
So if you remove the default profile from both files, it should work fine for you :)
Here is a sample of my ~/.aws/config (note: I don't use a default profile at all):
[profile myProfile]
sso_start_url = https://hostname/start#/
sso_region = REPLACE_ME_WITH_YOURS
sso_account_id = REPLACE_ME_WITH_YOURS
sso_role_name = REPLACE_ME_WITH_YOURS
region = REPLACE_ME_WITH_YOURS
output = yaml
And this is ~/.aws/credentials (note: I don't use a default profile at all):
[myProfile]
aws_access_key_id=REPLACE_ME_WITH_YOURS
aws_secret_access_key=REPLACE_ME_WITH_YOURS
aws_session_token=REPLACE_ME_WITH_YOURS
source_profile=myProfile
Note: if it still doesn't work, try using only one profile in config and credentials to hold your AWS configuration and credentials.
I'm also new to this. I was adding sudo before the cdk bootstrap command. Removing sudo made it work.
You can also run aws configure list to list all the profiles and check whether the credentials are created and stored properly.
If using a CI tool, check the output of cdk <command> --verbose for hints at the root cause for credentials not found.
In one case, the issue was simply the ~/.aws/credentials file was missing (although not technically required if running on EC2) - more details in this answer.
I too had this issue.
When I checked ~/.aws/credentials, it had some older account details, so I just deleted that file and ran:
aws configure
cdk bootstrap aws://XXXXXX/ap-south-1
It worked.

AWS CLI access to S3 on Linux Machine

I am wanting to set up a recursive sync from a Linux machine (Fedora) to an AWS S3 bucket. I am logged into Linux as root and have an AWS Key and Secret associated with a specific AWS user "Lisa".
I have installed aws-cli and s3cmd, and attempted to configure both. I have verified that the ~/.aws/config and ~/.aws/credentials files both have a default user and a "Lisa" user with Access Key and Secret pairs. I receive errors stating that access is denied and that the access key and secret pair were not found. I have researched this on the web and verified that there are no environment variables that could be overriding the config and credentials files. I have also granted full access permissions on the bucket, created through the AWS Console, to all logged-in users. I have not rotated the keys, as they were first created a week ago, and I was able to log in and set up the AWS console using that same key pair.
What else should I be doing before rotating the keys?
It looks like you haven't configured AWS credentials correctly. Make sure that you have correct access keys in your credentials file. If you don't specify any profiles, awscli uses the default profile.
~/.aws/credentials
[default]
aws_access_key_id=AKIAIDEFAULTKEY
aws_secret_access_key=Mo9T7WNO….
[Lisa]
aws_access_key_id=AKIAILISASKEY
aws_secret_access_key=H0XevhnC….
This command uses the default profile:
aws s3 ls
This command uses Lisa profile:
aws s3 ls --profile Lisa
You can set an environment variable to override the default profile.
export AWS_DEFAULT_PROFILE=Lisa
Now this command uses the profile Lisa:
aws s3 ls
If you don't know which profile is active, you can just invoke the following command:
aws sts get-caller-identity
You seem to have several terms intermixed, so it's worth knowing the difference:
Username and password are used to log in to the web-based management console. They are short, to be human-readable and easy to remember.
Access Key (starting with AKIA) and Secret Key are used for making API calls. They are also used by the AWS CLI (which makes API calls on your behalf).
A key pair consists of a public and private key, used for authenticating SSH connections. It is a very long block of text.
You mention that an Access Key is not found. This could be because the wrong type of credential is being provided.

The AWS Access Key Id you provided does not exist in our records, but credentials were already set

Through the boto3 library, I uploaded and downloaded files from AWS S3 successfully.
But after a few hours, it suddenly shows InvalidAccessKeyId for the same code.
What I have done:
set ~/.aws/credentials
Set environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
I tried the following solutions, but the error still happens:
adding quotes on config values
Am I missing anything? Thanks for your help.
You do not need to configure both .aws/credentials AND environment variables.
From Credentials — Boto 3 documentation:
The order in which Boto3 searches for credentials is:
Passing credentials as parameters in the boto.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
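Given that search order, a quick way to see which credentials actually win is to pin a profile and ask STS who you are — a minimal boto3 sketch (the profile name is a placeholder):
import boto3

# Pin the profile explicitly so a higher-precedence source (e.g. stale
# environment variables) can't silently supply different keys, then ask
# STS which identity is actually in use.
session = boto3.Session(profile_name="myProfile")  # placeholder name
print(session.client("sts").get_caller_identity()["Arn"])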
The fact that your credentials stopped working after a period of time suggests that they were temporary credentials created via the AWS Security Token Service, with an expiry time.
If you have the credentials in ~/.aws/credentials, there is no need to set the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Environment variables are valid only for a session.
If you are using boto3, you can specify the credentials while creating the client itself, as sketched below.
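For example, a minimal sketch with placeholder values — credentials passed when creating the Session (or client) take precedence over environment variables and the shared files:
import boto3

# Placeholder values; programmatic credentials sit near the top of the
# search order quoted above, outranking env vars and ~/.aws/credentials.
session = boto3.Session(
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
)
s3 = session.client("s3")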
The best way to configure AWS credentials is to install the AWS Command-Line Interface (CLI) and run aws configure from the bash console:
~/.aws/credentials format
[default]
aws_access_key_id = ***********
aws_secret_access_key = ************
I found this article about the same issue.
Amazon suggests generating a new key, and I did.
Then it worked, but we don't know the root cause.
I suggest doing the same to save a lot of time when you have this problem.

The AWS Access Key Id does not exist in our records

I created a new Access Key and configured that in the AWS CLI with aws configure. It created the .ini file in ~/.aws/config. When I run aws s3 ls it gives:
A client error (InvalidAccessKeyId) occurred when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
AmazonS3FullAccess policy is also attached to the user. How to fix this?
It might be that you have the old keys exported via environment variables (bash_profile), and since environment variables take precedence over credential files, you get the error "the access key id does not exist".
Remove the old keys from the bash_profile and you should be good to go.
This happened to me once when I forgot I had credentials in bash_profile, and it gave me a headache for quite some time :)
It looks like some values have already been set for the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
If so, you will see values when executing the commands below.
echo $AWS_SECRET_ACCESS_KEY
echo $AWS_ACCESS_KEY_ID
You need to unset these variables if you are using aws configure.
To unset them, execute the commands below.
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
You may need to add aws_session_token in credentials, along with aws_access_key_id and aws_secret_access_key.
None of the up-voted answers worked for me. Finally, I passed the credentials inside the Python script, using the client API.
import boto3

client = boto3.client(
    's3',
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    aws_session_token=SESSION_TOKEN)
Please note that the aws_session_token argument is optional. This is not recommended for production work, but it makes life easier for a simple trial.
In my case, I was relying on IAM EC2 roles to give our machines access to specific resources.
I didn't even know there was a credentials file at ~/.aws/credentials until I rotated/removed some of our access keys in the IAM console to tighten our security, and that suddenly made one of the scripts stop working on a single machine.
Deleting that credentials file fixed it for me.
I made the mistake of setting my variables with quotation marks like this:
AWS_ACCESS_KEY_ID="..."
You may have configured AWS credentials correctly, but using these credentials, you may be connecting to some specific S3 endpoint (as was the case with me).
Instead of using:
aws s3 ls
try using:
aws --endpoint-url=https://<your_s3_endpoint_url> s3 ls
Hope this helps those facing a similar problem.
You can configure profiles in the ~/.aws/credentials file using:
[profile_name]
aws_access_key_id = <access_key>
aws_secret_access_key = <secret_access_key>
If you are using multiple profiles, then use:
aws s3 ls --profile <profile_name>
You may need to set the AWS_DEFAULT_REGION environment variable.
In my case, I was trying to provision a new bucket in the Hong Kong region, which is not enabled by default, according to this:
https://docs.aws.amazon.com/general/latest/gr/s3.html
It's not totally related to the OP's question, but it is related to the topic per se, so in case anyone else like myself gets trapped in this edge case:
I had to enable that region manually before operating on that AWS S3 region, following this guide: https://docs.aws.amazon.com/general/latest/gr/rande-manage.html
I have been looking for information about this problem and I have found this post. I know it is old, but I would like to leave this here in case anyone has problems.
I had installed the AWS CLI, and it seems that you need to run aws configure to add the current credentials. Once I changed them, I could access S3.
Looks like ~/.aws/credentials was not created. Try creating it manually with this content:
[default]
aws_access_key_id = sdfesdwedwedwrdf
aws_secret_access_key = wedfwedwerf3erfweaefdaefafefqaewfqewfqw
(on my test box, if I run aws command without having credentials file, the error is Unable to locate credentials. You can configure credentials by running "aws configure".)
Can you try running these two commands from the same shell you are trying to run aws:
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
and then try aws command.
Another thing that can cause this, even if everything is set up correctly, is running the command from a Makefile. For example, I had a rule:
awssetup:
	aws configure
	aws s3 sync s3://mybucket.whatever .
When I ran make awssetup I got the error: fatal error: An error occurred (InvalidAccessKeyId) when calling the ListObjects operation: The AWS Access Key Id you provided does not exist in our records.. But running it from the command line worked.
Adding one more answer since all the above cases didn't work for me.
In the AWS console, check your credentials (My Security Credentials) and see if you have entered the right credentials.
Thanks to this discussion:
https://forums.aws.amazon.com/message.jspa?messageID=771815
This could happen because there's an issue with your AWS Secret Access Key. After messing around with AWS Amplify, I ran into this issue. The quickest way is to create a new pair of AWS Access Key ID and AWS Secret Access Key and run aws configure again.
It works for me. I hope this helps.
To those of you who run aws s3 ls and get this exception: make sure you have permissions to all regions under the provided AWS account. When running aws s3 ls you try to pull all the S3 buckets under the AWS account; therefore, if you don't have permissions to all regions, you'll get this exception: An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
Follow Describing your Regions using the AWS CLI for more info.
I had the same problem on Windows, using the JavaScript aws-sdk module. I had changed my IAM credentials and the problem persisted even when I gave the new credentials through the update method, like this:
s3.config.update({
    accessKeyId: 'ACCESS_KEY_ID',
    secretAccessKey: 'SECRET_ACCESS_KEY',
    region: 'REGION',
});
After a while I found that the aws-sdk module had created a credentials file on Windows at C:\Users\User\.aws\credentials. The credentials inside this file take precedence over the other data passed through the update method.
The solution for me was to write the new credentials into C:\Users\User\.aws\credentials, not via s3.config.update.
Kindly export the below variables from the credentials file in the below directory.
path = .aws/
filename = credentials
export AWS_ACCESS_KEY_ID=AK###########GW
export AWS_SECRET_ACCESS_KEY=g#############################J
Hopefully this saves others from hours of frustration:
Call AWS.config.update(...) before initializing s3.
const AWS = require('aws-sdk');
AWS.config.update({
    accessKeyId: 'AKIAW...',
    secretAccessKey: 'ptUGSHS....'
});
const s3 = new AWS.S3();
Credits to this answer:
https://stackoverflow.com/a/61914974/11110509
I tried the steps below and it worked:
1. cd ~
2. cd .aws
3. vi credentials
4. Delete the lines
aws_access_key_id =
aws_secret_access_key =
by placing the cursor on each line and pressing dd (the vi command to delete a line).
Delete both lines and check again.
If you have an AWS Educate account and you get this problem:
An error occurred (InvalidAccessKeyId) when calling the ListBuckets operation: The AWS Access Key Id you provided does not exist in our records.
The solution is here:
Go to your C:\ drive and search for the .aws folder inside your main user folder in Windows.
Inside that folder you'll find the "credentials" file; open it with Notepad.
Paste the whole key credential from your AWS account into that file and save it.
Now you are ready to use your AWS Educate account.
Assuming you already checked your Access Key ID and Secret... you might want to check the file team-provider-info.json, which can be found under the amplify/ folder:
"awscloudformation": {
"AuthRoleName": "<role identifier>",
"UnauthRoleArn": "arn:aws:iam::<specific to your account and role>",
"AuthRoleArn": "arn:aws:iam::<specific to your account and role>",
"Region": "us-east-1",
"DeploymentBucketName": "<role identifier>",
"UnauthRoleName": "<role identifier>",
"StackName": "amplify-test-dev",
"StackId": "arn:aws:cloudformation:<stack identifier>",
"AmplifyAppId": "<id>"
}
The IAM role referred to here should be active in the IAM console.
If you get this error in an Amplify project, check that "awsConfigFilePath" is not configured in amplify/.config/local-aws-info.json
In my case I had to remove it, so my environment looked like the following:
{
    // **INCORRECT**
    // This will not use your profile in ~/.aws/credentials, but instead the
    // specified config file path
    // "dev": {
    //     "configLevel": "project",
    //     "useProfile": false,
    //     "awsConfigFilePath": "/Users/dev1/.amplify/awscloudformation/cEclTB7ddy"
    // },
    // **CORRECT**
    "dev": {
        "configLevel": "project",
        "useProfile": true,
        "profileName": "default"
    }
}
Maybe you need to activate your API keys in the web console; I just saw that mine were inactive for some reason...
Thanks, everyone. This helped me solve it.
Something somehow happened which changed the keys, and I didn't realize it since everything was working fine, until I connected to S3 from Spark... then the error started coming from the command line too, even on aws s3 ls.
Steps to solve:
Run aws configure to check if keys are set up (verify from the last 4 characters and just keep pressing Enter).
AWS console --> Users --> click on the user --> go to Security Credentials --> check if the key is the same one that shows up in aws configure.
If they are not the same, generate a new key and download the CSV.
Run aws configure and set up the new keys.
Try aws s3 ls now.
Change the keys in all places; in my case it was the configs in Cloudera.
I couldn't figure out how to get the system to accept my Vocareum credentials, so I took advantage of the fact that if you configure your instance to use IAM roles, the SDK automatically selects the IAM credentials for your application, eliminating the need to manually provide credentials.
Once a role with appropriate permissions was applied to the EC2 instance, I didn't need to provide any credentials.
Open the ~/.bash_profile file and edit the info with the new values that you received at the time of creating the new user:
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
export AWS_DEFAULT_REGION=us-east-1
Afterward, run the command:
source ~/.bash_profile
This will enable the new keys for the local machine. Now we need to configure the info in the terminal as well. Run the command:
aws configure
Provide the new values as requested and you are good to go.
In my case, I was using aws configure.
However, I had hand-edited the .aws/config file to export the key ID and key environment variables.
This apparently caused a silent error, and I saw the error listed above.
I solved this by deleting the .aws directory and running aws configure again.
I have encountered this issue when trying to export RDS Postgres data to S3 following this official guide.
TL;DR Troubleshooting tips:
Reset RDS credentials using:
DROP EXTENSION aws_s3 CASCADE;
DROP EXTENSION aws_commons CASCADE;
CREATE EXTENSION aws_s3 CASCADE;
Delete and re-add the DB instance role used for the s3Export feature. Optionally reset the RDS credentials (previous action point) once again after that.
Below you will find more details on my case.
In particular, I have encountered:
[XX000] ERROR: could not upload to Amazon S3
Details: Amazon S3 client returned 'The AWS Access Key Id you provided does not exist in our records.'.
To be able to perform an export to S3, the RDS DB instance should be configured to assume a role with permission to write to the S3 bucket; the guide describes these steps.
The reason for the error was the aws_s3.query_export_to_s3 Postgres procedure using some (cached?) invalid assumed credentials. I am still not aware which credentials it had been using, but I managed to reproduce the same behaviour using the AWS CLI:
I assumed a role (aws sts assume-role),
and then tried to perform another action (aws s3 cp in particular) with these credentials without the session token (only AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, without AWS_SESSION_TOKEN).
This resulted in the same error from the AWS CLI: An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
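The same reproduction as a hedged boto3 sketch (the role ARN and bucket are placeholders) — a temporary STS key pair is only recognized together with its session token:
import boto3

# Assume a role to obtain temporary credentials.
sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/s3-export-role",  # placeholder
    RoleSessionName="repro",
)["Credentials"]

# Using the temporary key pair WITHOUT its session token reproduces
# InvalidAccessKeyId: an ASIA... access key id is unknown on its own.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    # aws_session_token=creds["SessionToken"],  # required for this to work
)
s3.put_object(Bucket="my-bucket", Key="probe.txt", Body=b"test")  # fails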
In short: hard resetting RDS credentials helped.
I just found another cause/remedy for this error/situation. I was getting the error running a PowerShell script. The error was happening on an execution of Write-S3Object. I have been working with AWS for a while now and have been running this script with success, but had not run it in a while.
My usual method of setting AWS credentials is:
Set-AWSCredential -ProfileName <THE_PROFILE_NAME>
I tried the "aws configure" command and every other recommendation in this forum post. No luck.
Well, I am aware of the .aws\credentials file and took a look in there. I have only three profiles, with one being [default]. Everything was looking good, but then I noticed a new element in there, present in all 3 profiles, that I had not seen before:
toolkit_artifact_guid=64GUID3-GUID-GUID-GUID-004GUID236
(GUID redacting added by me)
Then I noticed that this element differed between the profile I was running with and the [default] profile, which was otherwise the same profile.
On a hunch, I changed the toolkit_artifact_guid in [default] to match my target profile, and the error was gone. I have no idea why.