aws kops create cluster errors out as InvalidClientTokenId - amazon-web-services

I am trying to deploy my application with Kubernetes on AWS using kops. For this I followed the steps given in the AWS workshop tutorial:
https://github.com/aws-samples/aws-workshop-for-kubernetes/tree/master/01-path-basics/101-start-here
I created an AWS Cloud9 environment by logging in as an IAM user and installed kops and the other required software as well. When I try to create the cluster using the following command:
kops create cluster --name cs.cluster.k8s.local --zones $AWS_AVAILABILITY_ZONES --yes
I get the error below in the Cloud9 IDE:
error running tasks: deadline exceeded executing task IAMRole/nodes.cs.cluster.k8s.local. Example error: error creating IAMRole: InvalidClientTokenId: The security token included in the request is invalid
status code: 403, request id: 30fe2a97-0fc4-11e8-8c48-0f8441e73bc3
I am not able to find a way to solve this issue. Any help on this would be appreciated.

I found the issue and fixed it. I had not exported the following two environment variables in the terminal where I was running kops create cluster. These two variables are required when creating a cluster using kops:
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)
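To confirm the exported keys are being picked up before retrying, a quick sanity check (assuming the AWS CLI is configured for the same account) is:

# Should print the IAM user's account and ARN instead of an authentication error
aws sts get-caller-identity
# Then retry the cluster creation
kops create cluster --name cs.cluster.k8s.local --zones $AWS_AVAILABILITY_ZONES --yes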

Related

GCP Cloud Composer Update Failed - Questionable Permissions

I am trying to update my GCP Cloud Composer environment from composer-1.18.0-airflow-2.2.3 to composer-1.19.4-airflow-2.2.5.
It fails with the error that I don't have the permissions to describe the tenant Cloud SQL instance (verbose error below). I know that the command is executed on a running pod in the Composer's kubernetes cluster.
I tested the failing command, outlined below, with the environment's service account credentials activated and it works. Why does the command fail when executed inside the pod? Is the kubernetes cluster using different credentials than the Composer's service account?
Verbose error message:
Failed to update image version.
Exporting sql database failed with error [Failed to run command ['gcloud', 'sql', 'instances', 'describe', 'my-tentant-sql-instance-...-sql', '--project', 'myTenantProject-tp', '--format', 'get(serviceAccountEmailAddress)'], details: b'ERROR: (gcloud.sql.instances.describe) There was no instance found at projects/myTenantProject-tp/instances/my-tentant-sql-instance-...-sql or you are not authorized to access it.\n'].
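For reference, the out-of-pod test mentioned above would look roughly like this (the key file, instance, and project values below are placeholders standing in for the redacted names):

# Activate the Composer environment's service account (key file path is a placeholder)
gcloud auth activate-service-account --key-file=composer-env-sa.json
# Re-run the command that fails inside the pod, substituting the tenant instance and project from the error
gcloud sql instances describe TENANT_SQL_INSTANCE --project TENANT_PROJECT --format 'get(serviceAccountEmailAddress)'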

Unable to deploy code on ec2 instance using codedeploy

I have a single EC2 instance running Ubuntu Server and I am trying to implement a CI/CD flow using CodeDeploy with Bitbucket as the source. I have also installed the codedeploy-agent on the EC2 instance and it is installed and running successfully, but whenever I deploy code to EC2 the deployment fails with the error shown below:
The overall deployment failed because too many individual instances failed deployment, too few
healthy instances are available for deployment, or some instances in your deployment group are
experiencing problems.
The CodeDeploy agent log file, which I am viewing with less /var/log/aws/codedeploy-agent/codedeploy-agent.log, shows the error below:
ERROR [codedeploy-agent(31598)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller:
Missing credentials - please check if this instance was started with an IAM instance profile
I am unable to understand how I can overcome this error; could someone let me know?
The CodeDeploy agent requires IAM permissions provided by the IAM role/instance profile of your instance. The exact permissions needed are given in the AWS docs:
Step 4: Create an IAM instance profile for your Amazon EC2 instances
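As a rough illustration (the profile, role, and instance ID below are placeholders, not values from the question), attaching an instance profile to the running instance and restarting the agent could look like this:

# Create an instance profile and add the role carrying the permissions from the doc above
aws iam create-instance-profile --instance-profile-name CodeDeployInstanceProfile
aws iam add-role-to-instance-profile --instance-profile-name CodeDeployInstanceProfile --role-name CodeDeployEC2Role
# Attach the profile to the running instance (instance ID is a placeholder)
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=CodeDeployInstanceProfile
# Restart the agent so it picks up the instance profile credentials
sudo service codedeploy-agent restart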

Unable to kubectl apply -f to my EKS cluster : Access denied (Roles may not be assumed by root accounts)

I created a Kubernetes cluster on AWS EKS and now I'm trying to deploy an app using the command:
kubectl apply -f deployment.yaml
But I'm getting this error:
An error occurred (AccessDenied) when calling the AssumeRole operation: Roles may not be assumed by root accounts.
Unable to connect to the server: getting credentials: exec: executable aws failed with exit code 254
I'm new to AWS and EKS, and from some Google research it seems this might be caused by the user authenticated in the AWS CLI.
For information, my CLI is configured with a user called aws_cli_user who has the AdministratorAccess policy.
And for creating the k8s cluster, I created a role called EksSocaClusterRole which has AmazonEKSClusterPolicy.
I think I'm missing something to link the role and the user so kubectl can push things to the cluster correctly.
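A minimal way to check which identity kubectl is actually using, assuming the kubeconfig was generated by the AWS CLI (the cluster name and region below are placeholders):

# Shows whether the CLI is using root credentials or aws_cli_user
aws sts get-caller-identity
# Inspect the kubeconfig for a --role-arn passed to the aws eks get-token exec plugin
kubectl config view --minify
# Regenerate the kubeconfig for the current IAM user (cluster name and region are placeholders)
aws eks update-kubeconfig --name my-eks-cluster --region eu-west-1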

Import manually created K8s cluster into KOps

For some time now I've visited every web page mentioning "KOps import", but I did not find a way to import my manually created K8s cluster. "Manually created" means the infrastructure was deployed on AWS using Terraform and Kubernetes was installed by Terraform's provisioner running a shell script. Since managing the environment manually is a pain, I would like to move it under KOps. For that I have done the following so far:
Installed aws cli, kubectl and kops in my local machine.
Created KOps user with policies AmazonEC2FullAccess,
AmazonRoute53FullAccess, AmazonS3FullAccess, IAMFullAccess,
AmazonVPCFullAccess and generated access and secret keys.
Configured credentials using aws configure.
Created S3 bucket to store state.
Set env variables like Region and Cluster name.
Finally, ran kops import command as below:
kops import cluster --region ${REGION} --name ${OLD_NAME}
But encountered below error:
Cluster.kops "jjm-prod-use1-kubernetes" not found
Verbose output:
$ kops import cluster --region ${REGION} --name ${OLD_NAME} -v 10
I0131 16:32:12.059651 25683 factory.go:68] state store s3://kops-state-store-jjm
I0131 16:32:13.133145 25683 s3context.go:194] found bucket in region "us-east-1"
I0131 16:32:13.133174 25683 s3fs.go:220] Reading file "s3://kops-state-store-jjm/jjm-prod-use1-kubernetes/config"
This made me decide to post this question. Is there any possible way a K8s cluster created by anything other than kube-up.sh can be brought under the control of KOps? Please advise.
Note: There's no way I can re-create (destroy and create) the clusters as they are running in production.
EDIT: I know this can be achieved only if the cluster was set up using kube-up.sh. But is there any other way?
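For context, the variables in the command above were set along these lines (values inferred from the verbose output above; treat them as illustrative):

export REGION=us-east-1
export OLD_NAME=jjm-prod-use1-kubernetes
export KOPS_STATE_STORE=s3://kops-state-store-jjm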
That is only possible with a cluster bootstrapped via the kube-up.sh script, as officially stated in the Kops documentation. Actually, kube-up.sh has since been excluded from the list of supported Kubernetes installation tools for AWS. Although a cluster built by kube-up.sh provides a lot of customization settings which are specifically applicable to AWS, the original script uses environment variables to define these settings. Therefore, I assume that it's quite hard to achieve in your case.

Chef on AWS - ERROR: Fog::Compute::AWS::Error: AuthFailure

Chef Workstation and Server are set up on AWS as follows:
Chef Development Kit Version: 0.10.0,
Chef-server 12.2,
chef-client version: 12.5
This setup has been working for around a year.
Today, I got the following error when creating an EC2 instance by executing the 'knife ec2 server create' command on the Chef workstation.
ERROR: Fog::Compute::AWS::Error: AuthFailure => AWS was not able to validate the provided access credentials
There is no change in the AWS auth keys or file permissions. I'm not able to understand why this error appears all of a sudden.
Thanks for any pointers.
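A couple of things worth checking when the keys themselves have not changed (general diagnostics, not a confirmed fix for this particular setup):

# Confirm which keys knife/fog is actually reading (knife-ec2 typically takes them from knife.rb)
grep -i aws_ ~/.chef/knife.rb
# If the AWS CLI is available, verify the same keys independently of Chef (placeholders)
AWS_ACCESS_KEY_ID=<key> AWS_SECRET_ACCESS_KEY=<secret> aws sts get-caller-identity
# A badly skewed system clock can also cause signature validation failures
date -u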