aws-runas 3.3.1 validation issue

I am trying to run aws-runas to use AWS STS for some development work under a specific user role. I already have the required role access to the AWS account.
This is the command executed to run aws sts:
aws-runas.exe <profile_name>
[default]
output = json
region = <example1>
saml_auth_url = <url>
saml_username = <email>
saml_provider = <provider>
federated_username = <username>
credentials_duration = 8h
[profile <name>]
role_arn = arn:aws:iam::<account_id>:role/<role_name>
source_profile = <profile_name>
This is how I formed the config file, which is located at C:\Users\<NAME>\.aws
The debug output is as follows.
Does anyone have a clue what's wrong here?

Related

aws-runas issue: unable to determine client provider type

For some reason, aws-runas fails with the config below.
[default]
output = json
region = #####
aws_api_key_duration = 1h
saml_auth_url = #####
federated_username = #######
saml_provider = azure
saml_username = ######
[profile MyProd]
role_arn = ####
source_profile = default
region = ####
The error message it reports is:
unable to determine client provider type
I am looking for the correct configuration to run aws-runas and generate authentication tokens.

Creating a function to assume an AWS STS role in PowerShell $PROFILE

I have AWS credentials defined in my .aws/credentials as follows:
[profile source]
aws_access_key_id=...
aws_secret_access_key=...
[profile target]
role_arn = arn:aws:iam::123412341234:role/rolename
mfa_serial = arn:aws:iam::123412341234:mfa/mylogin
source_profile = source
...
and I would like to define functions in my $PROFILE to assume roles in those accounts using AWS Tools for PowerShell, because of MFA and the credential lifetime of 1 hour.
The function looks like this:
function Use-SomeAWS {
    Clear-AWSCredential
    $Response = (Use-STSRole arn:aws:iam::123412341234:role/rolename -ProfileName target -RoleSessionName "my email").Credentials
    $Creds = New-AWSCredentials -AccessKey $Response.AccessKeyId -SecretKey $Response.SecretAccessKey -SessionToken $Response.SessionToken
    Set-AWSCredential -Credential $Creds
}
Copying and pasting the lines within the function works just fine, but after sourcing the profile (. $PROFILE) and running the function (Use-SomeAWS), it asks for the MFA code and seems to do its job; however, the credentials do not get correctly set for the session.
What am I doing wrong?
EDIT: With some further testing, this does work if I add -StoreAs someprofilename to Set-AWSCredential and afterwards run Set-AWSCredential -ProfileName someprofilename, but that rather defeats the purpose.
Did you try the -Scope parameter of Set-AWSCredential? Like this:
Set-AWSCredential -Credential $Creds -Scope Global
https://docs.aws.amazon.com/powershell/latest/reference/items/Set-AWSCredential.html

AWS SAM Incorrect region

I am using AWS SAM to test my Lambda functions in the AWS cloud.
This is my code for testing Lambda:
import json

import boto3
import botocore
import botocore.client

# Set "running_locally" flag if you are running the integration test locally
running_locally = True

def test_data_extraction_validate():
    if running_locally:
        # Point boto3 at the local sam endpoint; signing and SSL are disabled
        lambda_client = boto3.client(
            "lambda",
            region_name="eu-west-1",
            endpoint_url="http://127.0.0.1:3001",
            use_ssl=False,
            verify=False,
            config=botocore.client.Config(
                signature_version=botocore.UNSIGNED,
                read_timeout=10,
                retries={'max_attempts': 1}
            )
        )
    else:
        lambda_client = boto3.client('lambda', region_name="eu-west-1")

    ####################################################
    # Test 1. Correct payload
    ####################################################
    with open("payloads/myfunction/ok.json", "r") as f:
        payload = f.read()

    # Correct payload
    response = lambda_client.invoke(
        FunctionName="myfunction",
        Payload=payload
    )
    result = json.loads(response['Payload'].read())
    assert result['status'] == True
    assert result['error'] == ""
This is the command I am using to start AWS SAM locally:
sam local start-lambda -t template.yaml --debug --region eu-west-1
Whenever I run the code, I get the following error:
botocore.exceptions.ClientError: An error occurred (ResourceNotFound) when calling the Invoke operation: Function not found: arn:aws:lambda:us-west-2:012345678901:function:myfunction
I don't understand why it's trying to invoke a function located in us-west-2 when I explicitly told the code to use the eu-west-1 region. I also tried an AWS profile with a hardcoded region; the same error occurs.
When I switch running_locally to False and run the code without AWS SAM, everything works fine.
===== Updated =====
The list of env variables:
# env | grep 'AWS'
AWS_PROFILE=production
My AWS configuration file:
# cat /Users/alexey/.aws/config
[profile production]
region = eu-west-1
My AWS Credentials file
# cat /Users/alexey/.aws/credentials
[production]
aws_access_key_id = <my_access_key>
aws_secret_access_key = <my_secret_key>
region=eu-west-1
Make sure you are actually running the correct local endpoint! In my case the problem was that I had previously started the Lambda client with an incorrect configuration, so my invocation was not invoking what I thought it was. Try killing the process on the port you have specified: kill $(lsof -ti:3001), run again and see if that helps!
This also assumes that you have built the function FunctionName="myfunction" correctly (make sure the function name is spelled correctly in the template file you use during sam build).
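As a quick sanity check for the advice above, you can confirm that something is actually listening on the sam local port before invoking anything. A minimal sketch (the helper name is mine; host and port match the endpoint_url from the question):

```python
import socket

def local_lambda_running(host="127.0.0.1", port=3001, timeout=2.0):
    """Return True if something accepts TCP connections on the sam local endpoint."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused or timed out: sam local start-lambda is not up here
        return False

if not local_lambda_running():
    print("sam local start-lambda does not appear to be running on 127.0.0.1:3001")
```

This fails fast with a clear message instead of the misleading ResourceNotFound error from the wrong endpoint.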

nomad: Pull docker image from ECR with AWS Access and Secret keys

My problem
I have successfully deployed a Nomad job with a few dozen Redis Docker containers on AWS, using the default Redis image from Docker Hub.
I've slightly altered the default config file created by nomad init to change the number of running containers, and everything works as expected.
The problem is that the actual image I would like to run is in ECR, which requires AWS permissions (an access key and secret key), and I don't know how to supply these.
Code
job "example" {
  datacenters = ["dc1"]
  type = "service"

  update {
    max_parallel = 1
    min_healthy_time = "10s"
    healthy_deadline = "3m"
    auto_revert = false
    canary = 0
  }

  group "cache" {
    count = 30

    restart {
      attempts = 10
      interval = "5m"
      delay = "25s"
      mode = "delay"
    }

    ephemeral_disk {
      size = 300
    }

    task "redis" {
      driver = "docker"

      config {
        # My problem here
        image = "https://-whatever-.dkr.ecr.us-east-1.amazonaws.com/-whatever-"

        port_map {
          db = 6379
        }
      }

      resources {
        network {
          mbits = 10
          port "db" {}
        }
      }

      service {
        name = "global-redis-check"
        tags = ["global", "cache"]
        port = "db"

        check {
          name = "alive"
          type = "tcp"
          interval = "10s"
          timeout = "2s"
        }
      }
    }
  }
}
What have I tried
Extensive Google Search
Reading the manual
Placing the AWS credentials on the machine that runs the Nomad file (using aws configure)
My question
How can nomad be configured to pull Docker containers from AWS ECR using the AWS credentials?
Pretty late for you, but AWS ECR does not handle authentication in the way that Docker expects. You need to run sudo $(aws ecr get-login --no-include-email --region ${your region}); running the returned command actually authenticates in a Docker-compliant way.
Note that the region is optional if the AWS CLI is configured. Personally, I attach an IAM role to the box (allowing ECR pull/list/etc.) so that I don't have to deal with credentials manually.
I don't use ECR, but if it acts like a normal Docker registry, this is what I do for my registry, and it works. Assuming the previous sentence holds, it should work fine for you as well:
config {
  image = "registry.service.consul:5000/MYDOCKERIMAGENAME:latest"

  auth {
    username = "MYMAGICUSER"
    password = "MYMAGICPASSWORD"
  }
}
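For completeness, the short-lived username and password for that auth block can be fetched programmatically: ECR's GetAuthorizationToken API returns a base64-encoded "user:password" string, which is exactly what Docker-style auth expects. A minimal sketch using boto3 (the helper names are mine; the API call is the real ECR one):

```python
import base64

def decode_ecr_token(authorization_token):
    """ECR returns base64("AWS:<password>"); split it into (username, password)."""
    user, _, password = base64.b64decode(authorization_token).decode("utf-8").partition(":")
    return user, password

def ecr_docker_auth(region="us-east-1"):
    """Fetch short-lived Docker credentials for ECR (requires AWS credentials)."""
    import boto3  # imported here so decode_ecr_token stays usable without boto3
    ecr = boto3.client("ecr", region_name=region)
    data = ecr.get_authorization_token()["authorizationData"][0]
    return decode_ecr_token(data["authorizationToken"])
```

The returned pair could then be templated into the job file's auth block; note that ECR tokens expire after a while, so they need periodic refresh.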

Sentry configuration with Amazon SES

There seems to be nothing on the web about this...
How do you parametrize sentry.conf.py to use Amazon SES backend for emails?
Right now, in a Django project, we use:
EMAIL_BACKEND = 'django_ses.SESBackend'
EMAIL_USE_SSL = True
AWS_ACCESS_KEY_ID = 'key'
AWS_SECRET_ACCESS_KEY = 'secret'
AWS_SES_REGION_NAME = 'eu-west-1'
AWS_SES_REGION_ENDPOINT = 'email.eu-west-1.amazonaws.com'
Sentry is a bit different; does anyone have insights?
Thanks a lot.
You can configure Sentry to send emails using an SMTP server, and you can obtain SMTP credentials from SES.
To set up SES for the SMTP interface, follow this guide: http://docs.aws.amazon.com/ses/latest/DeveloperGuide/send-email-smtp.html
Then configure your Sentry installation to use those credentials (see https://docs.sentry.io/server/config/#mail).
Example config.yml:
mail.backend: 'smtp'
mail.host: 'email-smtp.eu-west-1.amazonaws.com'
mail.port: 587
mail.username: 'myuser'
mail.password: 'mypassword'
mail.use-tls: true
# The email address to send on behalf of
mail.from: 'sentry@example.com'
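One caveat: the SMTP password is not the IAM secret access key itself. AWS documents a derivation of the SMTP password from a secret key and region; a sketch of that documented algorithm (the function name is mine):

```python
import base64
import hashlib
import hmac

def calculate_ses_smtp_password(secret_access_key, region):
    """Derive the SES SMTP password from an IAM secret key (AWS-documented algorithm)."""
    date = "11111111"
    service = "ses"
    message = "SendRawEmail"
    terminal = "aws4_request"
    version = 0x04

    def sign(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    # SigV4-style key derivation chain, ending with the fixed "SendRawEmail" message
    signature = sign(("AWS4" + secret_access_key).encode("utf-8"), date)
    signature = sign(signature, region)
    signature = sign(signature, service)
    signature = sign(signature, terminal)
    signature = sign(signature, message)
    return base64.b64encode(bytes([version]) + signature).decode("utf-8")
```

In practice it is simpler to create dedicated SMTP credentials in the SES console, which does this for you.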
Configuration with django-amazon-ses (different from django-ses), without SMTP credentials, is:
In sentry/config.yml :
mail.backend: 'django_amazon_ses.EmailBackend'
mail.from: 'from@example.com'
# Comment or delete the other `mail.*` options
In sentry/sentry.conf.py, if the region is not us-east-1:
AWS_DEFAULT_REGION = "eu-west-1"
In sentry/enhance-image.sh (since v22.6.0):
pip install django-amazon-ses
Then restart Sentry as usual with ./install.sh and docker compose up -d.
PS: this assumes the instance or the default profile has the IAM ses:SendEmail permission.
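For reference, a minimal IAM policy sketch granting that permission (the exact set of actions is an assumption; ses:SendRawEmail is included because SES email backends may use either API call):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ses:SendEmail", "ses:SendRawEmail"],
      "Resource": "*"
    }
  ]
}
```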