I've recently set up a TeamCity cloud agent on AWS EC2 for testing purposes. Everything is working great: the TeamCity server sees the new build agents being spun up and dispatches queued builds to them. However, I'm having an issue getting the new instances tagged upon creation. I defined a few custom tags (Name, Domain, Owner, etc.), added them to the AMI the build servers are created from, and granted the ec2:*Tags permission in the IAM policy. For some reason the servers are not tagged upon creation; everything is just left blank. Is there something I may not have configured, or is this functionality it doesn't offer?
Also, is it possible to make the TeamCity server assign a specific name to the build servers, like buildserver1, buildserver2 and so on?
Here is the policy I am applying for the TeamCity cloud profile:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1469049271000",
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances",
        "ec2:RebootInstances",
        "ec2:RunInstances",
        "ec2:ModifyInstanceAttribute",
        "ec2:*Tags"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Thanks!
The reason your tags are not appearing is probably a permissions problem. You should have an IAM policy that looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1469498198000",
      "Effect": "Allow",
      "Action": [
        "ec2:*Tags"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Also, make sure to apply that policy to the server IAM role, not the agents.
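A quick way to rule the policy in or out is to call the tagging API yourself with the same credentials the cloud profile uses. Here is a minimal boto3 sketch; the region, instance ID, and tag values are placeholders for your own:

import boto3

# Use the same access key/secret that the TeamCity cloud profile uses.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Tag a running build agent instance by hand; if this call is denied,
# the problem is the policy, not TeamCity.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],  # placeholder instance ID
    Tags=[
        {"Key": "Name", "Value": "buildserver1"},
        {"Key": "Owner", "Value": "devops"},
    ],
)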
For your second question, it's entirely possible to assign a name to a build agent. Inside <TeamCity Agent Home>/conf/buildagent.properties:
## The unique name of the agent used to identify this agent on the TeamCity server
name=buildserver1
I'm trying to limit the ability to add new identity providers to an AWS account. I'm also using Bitbucket to deploy my app via Bitbucket Pipelines, with OpenID Connect as a secure way to authenticate the deployments.
Now I have created an SCP to deny creation/deletion of IAM users and addition/deletion of providers. In this SCP I want to make an exception: if the URL of the IdP is a specific one, creating or deleting this provider should be allowed in all accounts.
Thing is, I don't understand why my condition is not working. Any hints?
Thanks!
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Deny",
      "Action": [
        "iam:CreateGroup",
        "iam:CreateLoginProfile",
        "iam:CreateOpenIDConnectProvider",
        "iam:CreateSAMLProvider",
        "iam:CreateUser",
        "iam:DeleteAccountPasswordPolicy",
        "iam:DeleteSAMLProvider",
        "iam:UpdateSAMLProvider"
      ],
      "Resource": [
        "*"
      ],
      "Condition": {
        "StringNotEquals": {
          "iam:OpenIDConnectProviderUrl": [
            "https://api.bitbucket.org/2.0/workspaces/my-workspace-name/pipelines-config/identity/oidc"
          ]
        }
      }
    }
  ]
}
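One way to check how that condition actually evaluates, without repeatedly editing the SCP, is the IAM policy simulator. Below is a boto3 sketch that feeds the statement above in as a custom policy and supplies the condition key; the file name is a placeholder, and simulating an SCP as a plain policy is only an approximation, but it is usually enough to see which branch of the condition fires:

import boto3

iam = boto3.client("iam")

# Load the SCP body from above; scp.json is a placeholder path.
with open("scp.json") as f:
    scp_json = f.read()

response = iam.simulate_custom_policy(
    PolicyInputList=[scp_json],
    ActionNames=["iam:CreateOpenIDConnectProvider"],
    ContextEntries=[
        {
            "ContextKeyName": "iam:OpenIDConnectProviderUrl",
            "ContextKeyValues": [
                "https://api.bitbucket.org/2.0/workspaces/my-workspace-name/pipelines-config/identity/oidc"
            ],
            "ContextKeyType": "string",
        }
    ],
)

# explicitDeny means the Deny statement matched despite the condition;
# implicitDeny means the condition exempted the action as intended.
for result in response["EvaluationResults"]:
    print(result["EvalActionName"], result["EvalDecision"])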
I've followed a great tutorial by Martin Thwaites outlining the process of logging to AWS CloudWatch using Serilog and .NET Core.
I've got the logging portion working well to text and console, but just can't figure out the best way to authenticate to AWS CloudWatch from my application. He talks about built-in AWS authentication, set up through an IAM policy, and supplies the JSON to do so, but I feel like something is missing. I've created the IAM policy as per the example, with a LogGroup matching my appsettings.json, but nothing comes through on the CloudWatch screen.
My application is hosted on an EC2 instance. Are there more straightforward ways to authenticate, and/or is there a step missing where the EC2 and CloudWatch services are "joined" together?
More info:
Policy EC2CloudWatch is attached to the role EC2Role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        ALL EC2 READ ACTIONS HERE
      ],
      "Resource": "*"
    },
    {
      "Sid": "LogStreams",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:cloudwatch-analytics-staging:log-stream:*"
    },
    {
      "Sid": "LogGroups",
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:cloudwatch-analytics-staging"
    }
  ]
}
In order to effectively apply the permissions, you need to assign the role to the EC2 instance.
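If the instance was launched without the role, it can also be attached to the running instance. A short boto3 sketch, assuming the instance profile is named EC2Role like the role; the region and instance ID are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Attach the instance profile wrapping EC2Role to the running instance
# that hosts the application; the instance ID is a placeholder.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "EC2Role"},
    InstanceId="i-0123456789abcdef0",
)

Once the role is associated, the AWS SDK on the instance (including the .NET SDK used by the Serilog CloudWatch sink) should pick up the credentials automatically through the default credential chain, with no keys in appsettings.json.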
I want to use an EC2 instance profile to allow my Python program to access a DynamoDB table. I have tested the policy by assigning it directly to the user. Now I assign this same policy, via an instance profile, to the EC2 instance where my job is running.
This is the policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:*"
      ],
      "Resource": "arn:aws:dynamodb:us-east-2:913580688765:table/users"
    }
  ]
}
Additionally, I assigned a policy to the user so that it is able to pass the EC2 role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AssociateIamInstanceProfile",
        "ec2:ReplaceIamInstanceProfileAssociation"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeIamInstanceProfileAssociations",
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "*"
    }
  ]
}
But this does not work.
What am I missing here?
I don't have the exact answer for you, but I have advice on how you can progress. First off, let's recap (and can you please double-check all of this):
1) You have an EC2 instance running, and it is assigned an IAM role.
2) The IAM role's trust relationship contains ec2.amazonaws.com.
3) The policy granting "dynamodb:*" is attached to the role.
If all of this is done, everything should be configured properly.
At this point, I would suggest you SSH to the EC2 instance and test out the permissions. This can be done by using the AWS CLI's DynamoDB commands to make list/describe/get API calls and confirm they work on the instance. If they work, it means the instance has permission to access DynamoDB, and there might be something wrong with how you're using the instance profile.
It's worth noting that not all operations will work on "arn:aws:dynamodb:us-east-2:913580688765:table/users", since it's a specific table rather than all tables, e.g. "arn:aws:dynamodb:us-east-2:913580688765:table/*". API calls such as list-tables won't work if the resource is a specific table. You can find a list of DynamoDB API calls, and whether or not they support being restricted to a specific table, in the documentation here.
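Since the job is a Python program, the same check can be done with boto3 directly on the instance. A sketch against the table and region from the policy above:

import boto3

# Run this on the EC2 instance itself, with no access keys configured,
# so that boto3 falls back to the instance profile credentials.
dynamodb = boto3.client("dynamodb", region_name="us-east-2")

# DescribeTable can be scoped to the table ARN in the policy, so this
# should succeed if the instance profile is wired up correctly.
print(dynamodb.describe_table(TableName="users")["Table"]["TableStatus"])

# ListTables cannot be scoped to a single table, so it is expected to
# fail with AccessDenied under the table-specific policy above.
print(dynamodb.list_tables()["TableNames"])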
I'm having an issue with the seemingly trivial task of getting CodeDeploy to deploy GitHub code to an Auto Scaling group in a blue/green deployment.
I have a pipeline set up, a deployment group set up, and the Auto Scaling group, but it fails when it gets to the actual deployment:
I went to my role, and it seems like it has sufficient policies to go through with the blue/green deployment:
Is there a policy that I'm not considering that needs to be attached to this role?
I found the answer at this link:
https://h2ik.co/2019/02/28/aws-codedeploy-blue-green/
Without wanting to take the credit: only one statement was missing from #PeskyGnat's policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "iam:PassRole",
        "ec2:CreateTags",
        "ec2:RunInstances"
      ],
      "Resource": "*"
    }
  ]
}
I was also getting the error:
"The IAM role does not give you permission to perform operations in the following AWS service: AmazonAutoScaling. Contact your AWS administrator if you need help. If you are an AWS administrator, you can grant permissions to your users or groups by creating IAM policies."
I figured out the two permissions needed to get past this error, created the policy below, and attached it to the CodeDeploy role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:PassRole",
        "ec2:RunInstances",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}
I want to grant just one user permission to create, delete, and modify records in a subdomain I have created on Route 53.
I have created a new group, added the new user to it, and attached a new policy as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:*"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/*id_subdomain_zone*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:DescribeLoadBalancers"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Then when I try to log in to the AWS console, it doesn't work: I'm not able to see any hosted zone with my new user.
The zone ID is correct.
Any help?
Thanks
Check this gist out - https://gist.github.com/dfox/1677191
I've tried it now: after you add permission to list all hosted zones, you will be able to see your hosted zones.
It's a bit problematic, since the users can see all other hosted zones, but they cannot alter them.
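To see which of the two calls is failing, you can try both directly as the restricted user. A small boto3 sketch, where the hosted zone ID is a placeholder for your subdomain's zone:

import boto3

route53 = boto3.client("route53")

# ListHostedZones is account-wide; it only succeeds once the user has
# route53:ListHostedZones on "*", which is what the gist adds.
for zone in route53.list_hosted_zones()["HostedZones"]:
    print(zone["Id"], zone["Name"])

# Zone-scoped calls like GetHostedZone work with just the zone-scoped
# statement from the policy above; the zone ID is a placeholder.
print(route53.get_hosted_zone(Id="Z0000000EXAMPLE"))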