Fetch AWS credentials from assumed role with web identity

I am trying to fetch credentials to use for spark-submit. My Airflow task already assumes a role with a web identity provider. But in order to pass these credentials on to Spark, I need to fetch them and set them in the Spark context. How can I do that?
[2022-08-23, 11:07:42 UTC] {{subprocess.py:89}} INFO - + aws configure list
[2022-08-23, 11:07:43 UTC] {{subprocess.py:89}} INFO - Name Value Type Location
[2022-08-23, 11:07:43 UTC] {{subprocess.py:89}} INFO - ---- ----- ---- --------
[2022-08-23, 11:07:43 UTC] {{subprocess.py:89}} INFO - profile <not set> None None
[2022-08-23, 11:07:43 UTC] {{subprocess.py:89}} INFO - access_key ****************WWSO assume-role-with-web-identity
[2022-08-23, 11:07:43 UTC] {{subprocess.py:89}} INFO - secret_key ****************wZz0 assume-role-with-web-identity
As you can see above, the access keys are not stored in environment variables. However, a web identity access token is present, and authentication to AWS happens through it.

Once you have the access key and secret key in environment variables, you can set them with:
sc._jsc.hadoopConfiguration().set("fs.s3n.awsAccessKeyId", AWS_ACCESS_KEY)
sc._jsc.hadoopConfiguration().set("fs.s3n.awsSecretAccessKey", AWS_SECRET_KEY)

Related

aws configure list shows me nothing

Assume I am on a Mac and I have a ~/.aws/config file:
[profile cicd]
region = us-west-2
output = json
[profile prod]
region = us-west-2
output = json
And also a ~/.aws/credentials file:
[cicd]
aws_access_key_id = 12345
aws_secret_access_key = 12345
[prod]
aws_access_key_id = 12345
aws_secret_access_key = 12345
If I run:
aws configure list
I get:
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key                <not set>             None    None
secret_key                <not set>             None    None
    region                <not set>             None    None
What have I done wrong?
Also, the company I work for has multiple AWS accounts. The cicd profile runs in one AWS account, and prod runs in a different AWS account. Am I supposed to record that fact in the AWS config files?
aws configure list just lists the current AWS credentials that you are using. It doesn't list all the available credentials you have configured on your system. The name of the command is really misleading.
It is currently showing that you have no credentials configured, because you haven't done anything to specify that you want to use one of those profiles in your config/credential files.
If you did something to select a profile, like:
export AWS_PROFILE=cicd
Then you would see some details about that particular profile when you run aws configure list.
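Alternatively, most AWS CLI commands accept a --profile flag, so you can inspect a specific profile without exporting anything:

aws configure list --profile cicd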

Passing a SECRET KEY as an environment variable in gcloud

I have stored a key in Secret Manager on GCP and I'm trying to use that secret in cloudbuild.yaml, but every time I get this error:
ERROR: (gcloud.functions.deploy) argument --set-secrets: Secrets value configuration must match the pattern 'SECRET:VERSION' or 'projects/{PROJECT}/secrets/{SECRET}:{VERSION}' or 'projects/{PROJECT}/secrets/{SECRET}/versions/{VERSION}' where VERSION is a number or the label 'latest' [ 'projects/gcp-project/secrets/SECRETKEY/versions/latest' ]]
My cloud build file looks like this:
steps:
  - id: installing-dependencies
    name: 'python'
    entrypoint: pip
    args: ["install", "-r", "src/requirements.txt", "--user"]
  - id: deploy-function
    name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    args:
      - gcloud
      - functions
      - deploy
      - name_of_my_function
      - --region=us-central1
      - --source=./src
      - --trigger-topic=name_of_my_topic
      - --runtime=python37
      - --set-secrets=[ SECRETKEY = 'projects/gcp-project/secrets/SECRETKEY/versions/latest' ]
    waitFor: [ "installing-dependencies" ]
I have been reading the documentation, but I don't have any other clue that could help me.
As mentioned by al-dann, there should not be any spaces in the --set-secrets line, as you can see in the documentation.
The corrected line:
--set-secrets=[SECRETKEY='projects/gcp-project/secrets/SECRETKEY/versions/latest']
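With that fixed, the secret is exposed to the function at runtime as an environment variable named by the left-hand side of the assignment (SECRETKEY here). A minimal sketch of reading it in the python37 runtime:

import os

# The variable name matches the key used in --set-secrets.
secret = os.environ["SECRETKEY"]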
For more information, you can refer to the Stack Overflow thread and blog post where Secret Manager is explained in more detail.

aws-azure-login doesn't recognize my default region

When I try to connect with aws-azure-login, I get this error:
UnknownEndpoint: Inaccessible host: `sts.amazonaws.com' at port `undefined'. This service may not be available in the `us-east-1' region.
at Request.ENOTFOUND_ERROR (C:\Users\500000198\AppData\Roaming\npm\node_modules\aws-azure-login\node_modules\aws-sdk\lib\event_listeners.js:529:46)
at Request.callListeners (C:\Users\500000198\AppData\Roaming\npm\node_modules\aws-azure-login\node_modules\aws-sdk\lib\sequential_executor.js:106:20)
at Request.emit (C:\Users\500000198\AppData\Roaming\npm\node_modules\aws-azure-login\node_modules\aws-sdk\lib\sequential_executor.js:78:10)
at Request.emit (C:\Users\500000198\AppData\Roaming\npm\node_modules\aws-azure-login\node_modules\aws-sdk\lib\request.js:686:14)
at error (C:\Users\500000198\AppData\Roaming\npm\node_modules\aws-azure-login\node_modules\aws-sdk\lib\event_listeners.js:361:22)
at ClientRequest.<anonymous> (C:\Users\500000198\AppData\Roaming\npm\node_modules\aws-azure-login\node_modules\aws-sdk\lib\http\node.js:99:9)
at ClientRequest.emit (node:events:390:28)
at ClientRequest.emit (node:domain:475:12)
at TLSSocket.socketErrorListener (node:_http_client:447:9)
at TLSSocket.emit (node:events:390:28)
at TLSSocket.emit (node:domain:475:12)
at emitErrorNT (node:internal/streams/destroy:157:8)
at emitErrorCloseNT (node:internal/streams/destroy:122:3)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
code: 'UnknownEndpoint',
region: 'us-east-1',
But I want to connect to eu-west-3 instead of us-east-1; it seems that my configured region is never picked up.
> aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key                <not set>             None    None
secret_key                <not set>             None    None
    region                eu-west-3      config-file    ~/.aws/config
My ~/.aws/config file:
[default]
azure_tenant_id=d8f7***-**-**-9561de6
azure_app_id_uri=https://signin.aws.amazon.com/saml
azure_default_username=[my company mail]
azure_default_role_arn=
azure_default_duration_hours=12
azure_default_remember_me=false
region=eu-west-3
[profile dev_dom_role]
role_arn=[ my arn role: arn:aws:iam::****:role/dev_dom_role]
source_profile=default
azure_tenant_id=d8f7***-**-**-9561de6
azure_app_id_uri=https://signin.aws.amazon.com/saml
azure_default_username=[my company mail]
azure_default_role_arn=[ my arn role: arn:aws:iam::****:role/dev_dom_role]
azure_default_duration_hours=12
azure_default_remember_me=false
When I try to configure my profile with aws-azure-login --configure -p default, all the information is recognized correctly, but unfortunately it doesn't ask for the region.
How am I connecting? I tried with both roles, dev_dom_role and the default role:
aws-azure-login --mode=gui --profile dev_dom_role
aws-azure-login --mode=gui
sts.amazonaws.com wasn't resolved:
nslookup.exe sts.amazonaws.com
Server:  ad.intranet.mycompany.fr
Address:  10.10.9.9
*** ad.intranet.mycompany.com can't find sts.amazonaws.com: Non-existent domain
So the host could not be resolved from inside the corporate network: this was a connectivity problem, not a region problem. I set the proxy and was finally able to connect.
PROXY=http://proxy.net:10684
echo "SET PROXY : " $PROXY
export http_proxy=$PROXY
export HTTP_PROXY=$PROXY
export https_proxy=$PROXY
export HTTPS_PROXY=$PROXY
npm config set proxy $PROXY
npm config set https-proxy $PROXY
yarn config set proxy $PROXY
yarn config set https-proxy $PROXY
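To confirm that the proxy actually reaches the endpoint before retrying the login, a quick check along these lines may help (reusing the PROXY variable from above):

curl -x "$PROXY" https://sts.amazonaws.com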

Backup job of the Jenkins Helm chart failing for no reason

I am using the official Jenkins Helm chart. I have enabled backup and also provided backup credentials.
Here is the relevant config in values.yaml:
## Backup cronjob configuration
## Ref: https://github.com/maorfr/kube-tasks
backup:
  # Backup must use RBAC
  # So by enabling backup you are enabling RBAC specific for backup
  enabled: true
  # Used for label app.kubernetes.io/component
  componentName: "jenkins-backup"
  # Schedule to run jobs. Must be in cron time format
  # Ref: https://crontab.guru/
  schedule: "0 2 * * *"
  labels: {}
  annotations: {}
  #   Example for authorization to AWS S3 using kube2iam
  #   Can also be done using environment variables
  #   iam.amazonaws.com/role: "jenkins"
  image:
    repository: "maorfr/kube-tasks"
    tag: "0.2.0"
  # Additional arguments for kube-tasks
  # Ref: https://github.com/maorfr/kube-tasks#simple-backup
  extraArgs: []
  # Add existingSecret for AWS credentials
  existingSecret: {}
  # gcpcredentials: "credentials.json"
  ## Example for using an existing secret
  # jenkinsaws:
  ## Use this key for AWS access key ID
  awsaccesskey: "AAAAJJJJDDDDDDJJJJJ"
  ## Use this key for AWS secret access key
  awssecretkey: "frkmfrkmrlkmfrkmflkmlm"
  # Add additional environment variables
  # jenkinsgcp:
  ## Use this key for GCP credentials
  env: []
  # Example environment variable required for AWS credentials chain
  # - name: "AWS_REGION"
  #   value: "us-east-1"
  resources:
    requests:
      memory: 1Gi
      cpu: 1
    limits:
      memory: 1Gi
      cpu: 1
  # Destination to store the backup artifacts
  # Supported cloud storage services: AWS S3, Minio S3, Azure Blob Storage, Google Cloud Storage
  # Additional support can be added. Visit this repository for details
  # Ref: https://github.com/maorfr/skbn
  destination: "s3://jenkins-data/backup"
However, the backup job fails as follows:
2020/01/22 20:19:23 Backup started!
2020/01/22 20:19:23 Getting clients
2020/01/22 20:19:26 NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
What is missing?
You must create a secret which looks like this:
kubectl create secret generic jenkinsaws --from-literal=jenkins_aws_access_key=ACCESS_KEY --from-literal=jenkins_aws_secret_key=SECRET_KEY
Then consume it like this:
existingSecret:
  jenkinsaws:
    awsaccesskey: jenkins_aws_access_key
    awssecretkey: jenkins_aws_secret_key
where jenkins_aws_access_key / jenkins_aws_secret_key are the keys of the secret.
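Putting it together, the backup section of values.yaml would look roughly like this (a sketch that reuses the destination and schedule from the question):

backup:
  enabled: true
  schedule: "0 2 * * *"
  destination: "s3://jenkins-data/backup"
  # Reference the secret created above instead of inlining keys
  existingSecret:
    jenkinsaws:
      awsaccesskey: jenkins_aws_access_key
      awssecretkey: jenkins_aws_secret_key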
An alternative answer: pass the credentials as environment variables instead:
backup:
  enabled: true
  destination: "s3://jenkins-pumbala/backup"
  schedule: "15 1 * * *"
  env:
    - name: "AWS_ACCESS_KEY_ID"
      value: "AKIDFFERWT***D36G"
    - name: "AWS_SECRET_ACCESS_KEY"
      value: "5zGdfgdfgdf***************Isi"

Unable to run aws-nuke

I am trying to run aws-nuke to delete all the resources. I am running the command:
aws-nuke -c config/example.yaml --profile demo
config/example.yaml
---
regions:
  - "global" # This is for all global resource types e.g. IAM
  - "eu-west-1"

account-blacklist:
  - "999999999999" # production

# optional: restrict nuking to these resources
resource-types:
  targets:
    - IAMUser
    - IAMUserPolicyAttachment
    - IAMUserAccessKey
    - S3Bucket
    - S3Object
    - Route53HostedZone
    - EC2Instance
    - CloudFormationStack

accounts:
  555133742123: # demo
    filters:
      IAMUser:
        - "admin"
      IAMUserPolicyAttachment:
        - property: RoleName
          value: "admin"
      IAMUserAccessKey:
        - property: UserName
          value: "admin"
      S3Bucket:
        - "s3://my-bucket"
      S3Object:
        - type: "glob"
          value: "s3://my-bucket/*"
      Route53HostedZone:
        - property: Name
          type: "glob"
          value: "*.zone.loc."
      CloudFormationStack:
        - property: "tag:team"
          value: "myTeam"
Error screenshot below. What is missing?
Disclaimer: I am an author of aws-nuke.
This is not a configuration problem in your YAML file, but a missing setting in your AWS account.
The IAM Alias is a globally unique name for your AWS Account. aws-nuke requires this as a safety guard, so you do not accidentally destroy your production accounts. The idea is that every production account contains at least the substring prod.
Demanding this might sound a bit unnecessary, but we are very passionate about not nuking any production account.
You can follow the docs to specify the alias via the web console, or you can use the CLI:
aws iam create-account-alias --profile demo --account-alias my-test-account-8gmst3
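To confirm the alias is in place before rerunning aws-nuke, you can list it back:

aws iam list-account-aliases --profile demo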
I guess we need to improve the error message.