WARNING No validation for the AWS provider has been implemented

I believe I might be missing a piece here.
I've added the AWS account:
hal config provider aws account add spinnakermaster \
    --account-id XXXXXXXXXXXX --assume-role role/spinnakerManaged
I've added the credentials for the AWS user:
hal config provider aws edit --access-key-id XXXXXXXXXXXXXXXXXXXX --secret-access-key
and entered the corresponding secret access key when prompted.
I've edited the config file in the .hal directory:
aws:
  enabled: false
  accounts:
  - name: spinnakermaster
    requiredGroupMembership: []
    accountId: 'ZZZZZZZZZZZZZZZZZZ'
    regions: []
    assumeRole: role/spinnakerManaged
  primaryAccount: spinnakermaster
  accessKeyId: XXXXXXXXXXXXXXXXXXXX
  secretAccessKey: YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
  defaultKeyPairTemplate: '{{name}}-keypair'
  defaultRegions:
  - name: Canada
  defaults:
    iamRole: BaseIAMRole
And I am deploying Spinnaker with AWS support, which executes with one hiccup:
Problems in default.provider.aws.spinnakermaster:
- WARNING No validation for the AWS provider has been implemented.
Which step/info/config am I missing?

Updated: this warning is OK and will not affect your executions.
My suggestions after installing Spinnaker on EC2 (local Debian), Azure AKS, and Minnaker on EC2:
Don't install a microservice architecture in a monolithic environment such as a local Debian box. It doesn't work.
At all costs, focus on getting the AWS Managed and Managing IAM structure right. Follow the Armory Spinnaker instructions on how to achieve this Armory IAM structure.
Previous (misleading) answer: as of Spinnaker 1.16.4, and based on the official documentation, there are two ways to manage the AWS infrastructure:
- with an AWS key and secret
- with an IAM role attached to the AWS EC2 instance running Spinnaker.
This warning usually comes up when Halyard cannot recognize the key and secret for the corresponding account. Check the Halyard code documentation.
One way to resolve it, depending on your deployment type, is adding an AWS account with the corresponding key and secret values. See the Halyard add-account documentation for the AWS cloud provider, and the sketch below.
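A minimal sketch of that key-and-secret flow with Halyard, reusing the account name from the question (the placeholder values are illustrative):
hal config provider aws account add spinnakermaster \
    --account-id XXXXXXXXXXXX \
    --assume-role role/spinnakerManaged
hal config provider aws edit \
    --access-key-id XXXXXXXXXXXXXXXXXXXX \
    --secret-access-key   # Halyard prompts for the secret value
hal config provider aws enable
hal deploy apply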

Related

Error: checking AWS STS access – cannot get role ARN for current session: MissingEndpoint: 'Endpoint' configuration is required for this service

I created a cluster.yaml file which contains the below information:
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-litmus-demo
  region: ${AWS_REGION}
  version: "1.21"
managedNodeGroups:
  - instanceType: m5.large
    amiFamily: AmazonLinux2
    name: eks-litmus-demo-ng
    desiredCapacity: 2
    minSize: 2
    maxSize: 4
When I run $ eksctl create cluster -f cluster.yaml to create the cluster from my terminal, I get the below error:
Error: checking AWS STS access – cannot get role ARN for current session: MissingEndpoint: 'Endpoint' configuration is required for this service
How can I resolve this? Please help!
Note: I have the global and regional endpoints under STS set to "valid in all AWS regions".
In my case, it was a typo in the region. I had us-east1 as the value. When it was corrected to us-east-1, the error disappeared. So it is worth checking for typos in any of the fields.
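A quick way to check what region a profile resolves to (a sketch using the standard AWS CLI):
aws configure get region --profile default   # should print e.g. us-east-1, not us-east1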
Mention --profile if you use an AWS profile other than the default:
eksctl create cluster -f cluster.yaml --profile <profile-name>
My SSO session token had expired:
aws sts get-caller-identity --profile default
The SSO session associated with this profile has expired or is otherwise invalid. To refresh this SSO session run aws sso login with the corresponding profile.
Then I needed to refresh my SSO session token:
aws sso login
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:
https://device.sso.us-east-2.amazonaws.com/
Then enter the code:
XXXX-XXXX
Successfully logged into Start URL: https://XXXX.awsapps.com/start
Error: checking AWS STS access – cannot get role ARN for current session:
According to this, I think it's not able to get the role (in your case, the cluster creator's role) that is responsible for creating the cluster.
Create an IAM user with an appropriate role, and attach the policies necessary to create the EKS cluster to that role.
Then you can use the aws configure command to add the AWS Access Key ID, AWS Secret Access Key, and default region name.
[Make sure that the user has the appropriate access to create and access the EKS cluster in your AWS account. You can use the AWS CLI to verify that you have the appropriate access.]
It is important to configure the default profile for the AWS CLI correctly on the command line, e.g. on Windows:
set AWS_ACCESS_KEY_ID=<your_access_key>
set AWS_SECRET_ACCESS_KEY=<your_secret_key>
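A quick sanity check once credentials are set (a sketch; on macOS/Linux use export instead of set). The printed ARN should be the identity you expect to create the cluster with:
export AWS_ACCESS_KEY_ID=<your_access_key>
export AWS_SECRET_ACCESS_KEY=<your_secret_key>
aws sts get-caller-identity   # prints the account and user/role ARN in use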

AWS IAM Role - AccessDenied error in one pod

I have a service account which I am trying to use across multiple pods installed in the same namespace.
One of the pods is created by the Airflow KubernetesPodOperator.
The other is created via Helm through a Kubernetes Deployment.
In the Airflow deployment, I see the IAM role being assigned and DynamoDB tables being created, listed, etc. However, in the Helm chart deployment (or in a test pod created as shown here), I keep getting an AccessDenied error for CreateTable in DynamoDB.
I can see the AWS role ARN being assigned to the service account, the service account being applied to the pod, and the corresponding token file being created, but I still see the AccessDenied exception:
arn:aws:sts::1234567890:assumed-role/MyCustomRole/aws-sdk-java-1636152310195 is not authorized to perform: dynamodb:CreateTable on resource
ServiceAccount
Name:                mypipeline-service-account
Namespace:           abc-qa-daemons
Labels:              app.kubernetes.io/managed-by=Helm
                     chart=abc-pipeline-main.651
                     heritage=Helm
                     release=ab-qa-pipeline
                     tier=mypipeline
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::1234567890:role/MyCustomRole
                     meta.helm.sh/release-name: ab-qa-pipeline
                     meta.helm.sh/release-namespace: abc-qa-daemons
Image pull secrets:  <none>
Mountable secrets:   mypipeline-service-account-token-6gm5b
Tokens:              mypipeline-service-account-token-6gm5b
P.S.: The client code is the same in both the KubernetesPodOperator and the Helm chart deployment, i.e. the same Docker image. Other attributes like nodeSelector, tolerations, and volume mounts are also the same.
The describe pod output for both of them is similar, with just some name and label changes.
The KubernetesPodOperator pod has a QoS class of Burstable while the Helm chart one is BestEffort.
Why do I get AccessDenied in the Helm deployment but not in the KubernetesPodOperator? How can I debug this issue?
Whenever we get an AccessDenied exception, there can be two possible reasons:
- You have assigned the wrong role
- The assigned role doesn't have the necessary permissions
In my case, the latter was the issue. The permissions assigned to a particular role can be more granular than you expect.
For example, in my case, the DynamoDB tables which the role can create/describe were limited to only those starting with a specific prefix, not all DynamoDB tables. A sketch of such a policy is shown below.
So it is always advisable to check the IAM role permissions whenever you get this error.
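A minimal sketch of such a prefix-scoped policy (the account ID is taken from the error message; the "myprefix-" value is illustrative):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:CreateTable", "dynamodb:DescribeTable"],
      "Resource": "arn:aws:dynamodb:*:1234567890:table/myprefix-*"
    }
  ]
}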
As stated in the question, be sure to check the service account using the awscli image.
Keep in mind that there is a credential provider chain used in the AWS SDKs which determines the credentials to be used by the application. In most cases, the DefaultAWSCredentialsProviderChain is used, and its order is given below. Ensure that the SDK is picking up the intended provider (in our case it is WebIdentityTokenCredentialsProvider):
super(new EnvironmentVariableCredentialsProvider(),
      new SystemPropertiesCredentialsProvider(),
      new ProfileCredentialsProvider(),
      WebIdentityTokenCredentialsProvider.create(),
      new EC2ContainerCredentialsProviderWrapper());
Additionally, you might also want to set the AWS SDK classes to DEBUG mode in your logger to see which credentials provider is being picked up and why.
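For example, with the Java SDK v1 and log4j, a single logger line is enough (a sketch; adjust to your logging framework):
# log4j.properties: show which provider in the chain supplies credentials
log4j.logger.com.amazonaws.auth=DEBUG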
To check if the service account is applied to a pod, describe the pod and check whether the AWS environment variables such as AWS_REGION, AWS_DEFAULT_REGION, AWS_ROLE_ARN, and AWS_WEB_IDENTITY_TOKEN_FILE are set on it.
If not, check whether your service account has the AWS annotation eks.amazonaws.com/role-arn by describing that service account.
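A sketch of those checks, using the namespace and service account names from the question (the pod name is a placeholder):
kubectl describe pod <pod-name> -n abc-qa-daemons | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'
kubectl describe serviceaccount mypipeline-service-account -n abc-qa-daemons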

Serverless :: AWS profile "workflow" doesn't seem to be configured

I had added a new profile, workflow, using
aws configure
I have created a serverless application using
serverless create --template aws-nodejs --path ssm5
~/.aws/credentials
[workflow]
aws_access_key_id=<<My Access Key>>
aws_secret_access_key=<<My Secret Key>>
~/.aws/config
[profile workflow]
region = us-east-1
serverless.yml
service: ssm5
frameworkVersion: "2"
provider:
  name: aws
  runtime: nodejs12.x
  lambdaHashingVersion: 20201221
I tried to deploy the application using
serverless deploy --aws-profile workflow
Unfortunately, I am getting the below error.
Serverless: Packaging service...
Serverless: Excluding development dependencies...
Serverless Error ----------------------------------------
AWS profile "workflow" doesn't seem to be configured
I then set the below environment variables from the command prompt.
set AWS_PROFILE="workflow"
set AWS_ACCESS_KEY=<<My Access Key>>
set AWS_SECRET_ACCESS_KEY=<<My Secret Key>>
set AWS_SDK_LOAD_CONFIG=1
Unfortunately, that also didn't help, and the error persists.
Note: I used Terraform to provision the infrastructure. Terraform picks up the workflow profile successfully from the aforementioned config & credentials files. The problem is only with Serverless.
It would be really great if someone could help me with this.
I ran into this issue and, after debugging the code, I found this:
https://github.com/serverless/serverless/blob/29f0e9c840e4b1ae9949925bc5a2a9d2de742271/lib/plugins/aws/provider.js#L129
Since AWS.SharedIniFileCredentials does not return the roleArn by default, sls assumes the profile is invalid. The fix is to set AWS_SDK_LOAD_CONFIG=1 as suggested in the comments. That variable tells the AWS SDK to load the profile when you are using a shared config file.
Based on that, I can assume that setting AWS_SHARED_CREDENTIALS_FILE might work as well, since the other file should only contain the one profile.
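Concretely, before deploying (a sketch for a Unix shell; the profile name comes from the question):
export AWS_SDK_LOAD_CONFIG=1   # tell the AWS SDK to also read ~/.aws/config
serverless deploy --aws-profile workflow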

"kubectl" not connecting to aws EKS cluster from my local windows workstation

I am trying to set up an AWS EKS cluster and want to connect to that cluster from my local Windows workstation, but I am not able to connect to it. Here are the steps I took:
Created an AWS service role (AWS console -> IAM -> Roles -> click "Create role" -> select the AWS service role "EKS" -> give it the role name "eks-role-1").
Created another user in IAM named "eks" for programmatic access. This will help me connect to my EKS cluster from my local Windows workstation. The policies I attached to it are "AmazonEKSClusterPolicy", "AmazonEKSWorkerNodePolicy", "AmazonEKSServicePolicy", and "AmazonEKS_CNI_Policy".
Next, the EKS cluster was created with the role ARN created in step 1. Finally, the EKS cluster was created in the AWS console.
On my local Windows workstation, I downloaded "kubectl.exe" and "aws-iam-authenticator.exe" and ran 'aws configure' using the access key and secret from step 2 for the user "eks". After configuring "~/.kube/config", I ran the below command and got this error:
Command: kubectl.exe get svc
output:
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Unable to connect to the server: getting credentials: exec: exit status 1
Not sure what is wrong with the setup here. Can someone please help? I know some places say you have to use the same AWS user to connect to the cluster (EKS). But how can I get an access key and secret for the assumed role (step 1: eks-role-1)?
For people who ran into this: maybe you provisioned EKS with a profile.
EKS does not add the profile inside the kubeconfig.
Solution:
1. Export the AWS credentials:
$ export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxx
$ export AWS_SECRET_ACCESS_KEY=ssssssssss
2. If you've already configured the AWS credentials, try exporting AWS_PROFILE:
$ export AWS_PROFILE=ppppp
3. Similar to 2, but you only need to do it once: edit your kubeconfig.
users:
- name: eks  # This depends on your config.
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "general"
      env:
        - name: AWS_PROFILE
          value: "<YOUR_PROFILE_HERE>"
I think I got the answer for this issue; I want to write it down here so people will benefit from it.
When you create an EKS cluster for the first time, check which identity you are creating it as (check your user setting in the AWS web console). Even when creating from a CFN script, you can assign a different role to create the cluster. You have to get CLI access for that user to start accessing your cluster from the kubectl tool. Once you have that first-time access (that user has admin access by default), you may need to add other IAM users as cluster admins (or other roles) using the aws-auth ConfigMap; only then can you switch to or use an alternative IAM user to access the cluster from the kubectl command line. A sketch of that ConfigMap edit follows.
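A minimal sketch of that aws-auth edit, adding the "eks" IAM user from the question as a cluster admin (the account ID is illustrative):
$ kubectl edit configmap aws-auth -n kube-system
# under data, add a mapUsers entry like:
mapUsers: |
  - userarn: arn:aws:iam::123456789012:user/eks
    username: eks
    groups:
      - system:masters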
Make sure the file ~/.aws/credentials has an AWS key and secret key for an IAM account that can manage the cluster.
Alternatively you can set the AWS env parameters:
export AWS_ACCESS_KEY_ID=xxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=ssssssssss
Adding another option: instead of working with aws-iam-authenticator, you can change the command to aws and replace the args as below:
- name: my-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:          # <--- change the args
        - --region
        - <YOUR_REGION>
        - eks
        - get-token
        - --cluster-name
        - my-cluster
      command: aws   # <--- change the command to aws
      env:
        - name: AWS_PROFILE
          value: <YOUR_PROFILE_HERE>

Serverless Error: The security token included in the request is invalid

When I type serverless deploy, this error appears:
ServerlessError: The security token included in the request is invalid.
I had to specify --aws-profile in my serverless deploy commands, like this:
sls deploy --aws-profile common
Make sure that you've got the correct credentials in ~/.aws/config and ~/.aws/credentials. You can set these up by running aws configure. More info here: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-quick-configuration
Also make sure that the IAM user in question has an attached security policy that allows access to everything you need, such as CloudFormation.
Create a new user in AWS (don't use the root key).
Under the user's Security credentials, generate a new access key.
Copy the values and run this:
serverless config credentials --overwrite --provider aws --key bar --secret foo
sls deploy
In my case, the localstack entry was missing from the serverless file.
I had everything that should be inside it, but it was all directly under custom (instead of under custom.localstack). See the sketch below.
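A sketch of the nesting the serverless-localstack plugin expects (the stage name is illustrative):
custom:
  localstack:
    stages:
      - local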
In my case, I added the region to the provider. I suppose it's not read from the credentials file.
provider:
  name: aws
  runtime: nodejs12.x
  region: cn-northwest-1
In my case, multiple credentials were stored in the ~/.aws/credentials file, and serverless was picking up the default credentials.
So I kept the new credentials under [default] and removed the previous ones. That worked for me.
To run the function on AWS you need to configure AWS with the access_key_id and secret_access_key,
but you might get this error if you want to run the function locally.
For that, use this command:
sls invoke local -f functionName
It will run the function locally, not on AWS.
If none of these answers work, it may be because you need to add a provider in your Serverless dashboard account and add your AWS keys there.