I'm having problems setting up the AWS integration on a Kubernetes cluster. I've already set the kubernetes.io/cluster/clustername = owned tag on all instances, subnets, the VPC, and a single security group. I've also passed the --cloud-provider=aws flag to both the API server and the controller manager, but the controller manager does not start.
Controller Manager Logs:
I0411 21:03:48.360194 1 aws.go:1026] Building AWS cloudprovider
I0411 21:03:48.360237 1 aws.go:988] Zone not specified in configuration file; querying AWS metadata service
F0411 21:03:48.363067 1 controllermanager.go:159] error building controller context: cloud provider could not be initialized: could not init cloud provider "aws": error finding instance i-0442e20b4a28b2274: "error listing AWS instances: \"NoCredentialProviders: no valid providers in chain. Deprecated.\\n\\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors\""
The policy attached to the master nodes is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "ec2:*" ],
            "Resource": [ "*" ]
        },
        {
            "Effect": "Allow",
            "Action": [ "elasticloadbalancing:*" ],
            "Resource": [ "*" ]
        },
        {
            "Effect": "Allow",
            "Action": [ "route53:*" ],
            "Resource": [ "*" ]
        }
    ]
}
Querying the AWS metadata service from a master via curl returns valid credentials.
Any help will be much appreciated!
P.S.: I'm not using kops or anything of that kind; I set up the control plane components myself.
I was able to fix this by passing the --cloud-provider=aws flag to the kubelets as well. I thought that wasn't needed on master nodes.
Thanks!
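For reference, a minimal sketch of how that flag can be passed to the kubelet in a self-managed setup. The drop-in path and variable name are assumptions; adjust them to however your kubelets are actually launched:

```ini
# Hypothetical systemd drop-in: /etc/systemd/system/kubelet.service.d/20-cloud-provider.conf
# Adds the AWS cloud provider flag to the kubelet on every node, masters included.
[Service]
Environment="KUBELET_EXTRA_ARGS=--cloud-provider=aws"
```

After adding the drop-in, reload systemd and restart the kubelet (systemctl daemon-reload && systemctl restart kubelet) on each node.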
I'm trying to restrict the ability to add new identity providers to an AWS account. I deploy my app via Bitbucket Pipelines and use OpenID Connect to secure the deployments.
I have created an SCP that denies creating/deleting IAM users and adding/deleting identity providers. In this SCP I want to make one exception: if the IdP URL matches a specific value, creating or deleting that provider should be allowed in all accounts.
The thing is, I don't understand why my condition is not working. Any hints? Thanks!
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Deny",
            "Action": [
                "iam:CreateGroup",
                "iam:CreateLoginProfile",
                "iam:CreateOpenIDConnectProvider",
                "iam:CreateSAMLProvider",
                "iam:CreateUser",
                "iam:DeleteAccountPasswordPolicy",
                "iam:DeleteSAMLProvider",
                "iam:UpdateSAMLProvider"
            ],
            "Resource": [
                "*"
            ],
            "Condition": {
                "StringNotEquals": {
                    "iam:OpenIDConnectProviderUrl": [
                        "https://api.bitbucket.org/2.0/workspaces/my-workspace-name/pipelines-config/identity/oidc"
                    ]
                }
            }
        }
    ]
}
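For what it's worth, one way to rule out interference between the condition and the non-provider actions is to split the SCP into two statements: an unconditional deny for the user/group actions, and a conditional deny covering only the provider actions. Note that iam:OpenIDConnectProviderUrl is only present on OIDC provider requests, so where the key is missing, StringNotEquals matches and the deny stays in effect. This is a sketch of that structure, not a confirmed fix:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUserAndGroupChanges",
            "Effect": "Deny",
            "Action": [
                "iam:CreateGroup",
                "iam:CreateLoginProfile",
                "iam:CreateUser",
                "iam:DeleteAccountPasswordPolicy"
            ],
            "Resource": "*"
        },
        {
            "Sid": "DenyProviderChangesExceptOneUrl",
            "Effect": "Deny",
            "Action": [
                "iam:CreateOpenIDConnectProvider",
                "iam:CreateSAMLProvider",
                "iam:DeleteSAMLProvider",
                "iam:UpdateSAMLProvider"
            ],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "iam:OpenIDConnectProviderUrl": "https://api.bitbucket.org/2.0/workspaces/my-workspace-name/pipelines-config/identity/oidc"
                }
            }
        }
    ]
}
```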
I have two AWS accounts, Prod and Staging. I need to migrate data from the prod ElastiCache Redis to staging. The Redis clusters in prod and staging both have 1 node and 0 shards. I believe you can't seed an already running cluster, so I'm trying to seed a new cluster from the RDB file with the same configuration that already exists in staging, with a view to deleting the original staging cluster after the new one stands up.
The problem is that every time I create a Redis cluster through the console, it stands up with 1 shard and 1 node, while the original cluster had 0 shards. I selected 0 replicas, no Multi-AZ, etc., so I'm not sure why it defaults to sharding. Am I missing an option somewhere? Can you stand up a 1-node, 0-shard cluster via the console?
I also tried creating a cluster via the AWS CLI to see if I get the same behaviour, but I get this error message:
An error occurred (InvalidParameterValue) when calling the CreateCacheCluster operation: No permission to access S3 object: mybucket/folder/file.rdb
I've set the bucket policy to the following:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::0123456789:root"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::mybucket"
        },
        {
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::0123456789:root"
            },
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::mybucket/*"
        },
        {
            "Sid": "ExampleStatement",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::0123456789:user/my-user"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::mybucket",
                "arn:aws:s3:::mybucket/*",
                "arn:aws:s3:::mybucket/folder/file.rdb"
            ]
        },
        {
            "Sid": "Stmt15399483",
            "Effect": "Allow",
            "Principal": {
                "Service": "eu-west-1.elasticache-snapshot.amazonaws.com"
            },
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:GetBucketAcl"
            ],
            "Resource": [
                "arn:aws:s3:::mybucket",
                "arn:aws:s3:::mybucket/*",
                "arn:aws:s3:::mybucket/folder/file.rdb"
            ]
        }
    ]
}
Note: this policy is on the Permissions tab of the top-level bucket. When I click on the file itself there is no area to add/edit a bucket policy, only an ACL edit option. I did grant the ElastiCache canonical ID read/write access, as the AWS docs suggest, but I still get permission denied.
I was able to get the desired cluster configuration (1 node, 0 shards) by using the CLI command:
aws elasticache delete-replication-group --replication-group-id my-cluster --retain-primary-cluster
although this requires running the command after cluster creation. I'm hoping there's a way to select this configuration at creation time.
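Assuming the S3 access issue gets resolved, a creation-time alternative may be the lower-level CreateCacheCluster API, which stands up a standalone node rather than a replication group. This is only a sketch; the cluster ID, node type, and S3 ARN are placeholders:

```shell
# Sketch only: IDs, node type, and the S3 ARN are placeholders.
# --num-cache-nodes 1 with create-cache-cluster stands up a single standalone
# Redis node (no replication group / 0 shards), seeded from the RDB file in S3.
aws elasticache create-cache-cluster \
    --cache-cluster-id my-staging-cluster \
    --engine redis \
    --cache-node-type cache.t3.micro \
    --num-cache-nodes 1 \
    --snapshot-arns arn:aws:s3:::mybucket/folder/file.rdb
```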
I've followed a great tutorial by Martin Thwaites outlining the process of logging to AWS CloudWatch using Serilog and .Net Core.
I've got the logging portion working well to text and console, but I just can't figure out the best way to authenticate to AWS CloudWatch from my application. He covers AWS's built-in authentication via an IAM policy, which is great, and supplies the JSON for it, but I feel like something is missing. I've created the IAM policy as per the example, with a LogGroup matching my appsettings.json, but nothing comes through on the CloudWatch screen.
My application is hosted on an EC2 instance. Are there more straightforward ways to authenticate, and/or is there a step missing where the EC2 and CloudWatch services are "joined" together?
More Info:
Policy EC2CloudWatch attached to role EC2Role.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                ALL EC2 READ ACTIONS HERE
            ],
            "Resource": "*"
        },
        {
            "Sid": "LogStreams",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:DescribeLogStreams",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:log-group:cloudwatch-analytics-staging:log-stream:*"
        },
        {
            "Sid": "LogGroups",
            "Effect": "Allow",
            "Action": [
                "logs:DescribeLogGroups"
            ],
            "Resource": "arn:aws:logs:*:*:log-group:cloudwatch-analytics-staging"
        }
    ]
}
In order to effectively apply the permissions, you need to assign the role to the EC2 instance.
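Assuming the role exists, you can verify from the CLI whether the instance actually has an instance profile associated, and attach one if not. The instance ID and profile name below are placeholders:

```shell
# Check which instance profile (if any) is associated with the instance
aws ec2 describe-iam-instance-profile-associations \
    --filters Name=instance-id,Values=i-0123456789abcdef0

# Attach the instance profile containing EC2Role if nothing is associated yet
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=EC2Role
```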
I'm trying to do a PoC of an AWS Systems Manager Session Manager port forwarding session, but I can't start the port forwarding session even though starting a normal session works.
A normal session starts and works as intended:
aws ssm start-session --target i-xxxxxxxxxxx
But the port forwarding session fails:
aws ssm start-session --target i-xxxxxxxxxxx \
    --document-name AWS-StartPortForwardingSession \
    --parameters '{"portNumber":["80"],"localPortNumber":["3001"]}'
The IAM role has the AWS managed policy AmazonSSMManagedInstanceCore attached, plus a Session Manager policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ssmmessages:CreateControlChannel",
                "ssmmessages:CreateDataChannel",
                "ssmmessages:OpenControlChannel",
                "ssmmessages:OpenDataChannel"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetEncryptionConfiguration"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt"
            ],
            "Resource": "arn:aws:kms:us-east-2:xxxxxxxxxxx:key/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxx"
        }
    ]
}
I expected the session to establish a tunnel and start forwarding port 80 to my local port 3001.
Instead I get the following error:
SessionId xxxx-xxxxxxxxxxx
----------ERROR-------
Encountered error while initiating handshake. SessionType failed on client with status 2 error: Failed to process action SessionType: Unknown session type Port
Here is what I am trying to accomplish:
https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/
I was having the same issue, and it turned out to be an outdated AWS Session Manager plugin for the AWS CLI. After updating the plugin it worked.
Instructions to install/update the plugin are here.
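To check whether the plugin is the culprit, you can print the locally installed version before reinstalling (port forwarding support was added in later plugin releases; the exact minimum version depends on your platform):

```shell
# Print the locally installed Session Manager plugin version
session-manager-plugin --version
```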
I've recently set up a TeamCity cloud agent on AWS EC2 for testing purposes. Everything is working great: the TeamCity server sees the new build agents being spun up and pushes queued builds to them. However, I'm having an issue getting the new instances tagged upon creation. I created a few custom tags like Name, Domain, Owner, etc. I added those tags to the AMI that the build servers are created from, and I have also granted the ec2:*Tags permission in the AWS policy. For some reason it's not tagging the servers upon creation; it just leaves everything blank. Is there something I may not have configured, or is this functionality it doesn't offer?
Also, is it possible to make the TeamCity server assign a specific name to the build servers, like buildserver1, buildserver2, and so on?
Here is the policy I am applying for the TeamCity cloud agents:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1469049271000",
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*",
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances",
                "ec2:RebootInstances",
                "ec2:RunInstances",
                "ec2:ModifyInstanceAttribute",
                "ec2:*Tags"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
Thanks!
Your tags are probably not appearing because of a permissions problem. You should have an IAM policy that looks like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1469498198000",
            "Effect": "Allow",
            "Action": [
                "ec2:*Tags"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
Also, make sure to apply that policy to the server IAM role, not the agents.
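As a quick sanity check that the credentials can tag at all, you can try applying the same tags manually with the credentials the server uses. The instance ID and tag values below are just examples:

```shell
# Manually apply the tags TeamCity should be setting at launch time
aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags Key=Name,Value=buildserver1 Key=Owner,Value=devops
```

If this call fails with an authorization error, the policy isn't reaching the right principal; if it succeeds, the problem is on the TeamCity configuration side.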
For your second question, it's entirely possible to assign a name to a build agent:
inside <TeamCity Agent Home>/conf/buildagent.properties:
## The unique name of the agent used to identify this agent on the TeamCity server
name=buildserver1