AWS CodeGuru profiler under different account - amazon-web-services

We are trying to build a centralised CodeGuru profiler dashboard as described by the documentation at https://aws.amazon.com/blogs/devops/building-a-centralized-codeguru-profiler-dashboard-multi-account/.
So in effect, we have our CodeGuru profiling group under a central aws-code-analysis account and the actual application running under aws-application account. We are facing an issue with the cross-account connectivity. It appears the agent running under the aws-application account is trying to look for the profiling group under the local aws-application account instead of connecting to the central aws-code-analysis account.
Both the command-line invocation of the agent (as documented here) and integration in code (as documented here) accept only the profiling-group-name as input, not the full ARN or an account-id/profiling-group-name combination. So I'm not sure how the agent determines which account to connect to. I couldn't find a way to explicitly specify the account ID anywhere.
Appreciate any pointers.

You should be able to pass in the role from your centralised account using awsCredentialsProvider, e.g. "arn:aws:iam::<CODEGURU_CENTRAL_ACCOUNT_ID>:role/CodeGuruCrossAccountRole". This will configure the agent to send profiling data to this account.
I would also check that the region is set to the region of the profiling group in the centralised account. So it should look something like this:
static String roleArn = "arn:aws:iam::<CODEGURU_CENTRAL_ACCOUNT_ID>:role/CodeGuruCrossAccountRole";
static String sessionName = "codeguru-java-session";

Profiler.builder()
        .profilingGroupName("JavaAppProfilingGroup")
        .awsCredentialsProvider(AwsCredsProvider.getCredentials(
                roleArn,
                sessionName))
        .region(<CODEGURU_CENTRAL_ACCOUNT_REGION>)
        .withHeapSummary(true)
        .build()
        .start();
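The same cross-account idea can be sketched in Python: build the ARN of the role in the central account and the parameters you would pass to STS AssumeRole, whose temporary credentials the agent then uses to report to the central account. The account ID, role name, and session name below are placeholders, not values from your setup.

```python
# Sketch of the cross-account role assumption, assuming a hypothetical
# central account ID and the role name used in the answer above.

def cross_account_role_arn(account_id: str, role_name: str) -> str:
    """ARN of the role in the central (CodeGuru) account to assume."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"

def assume_role_request(account_id: str, role_name: str, session_name: str) -> dict:
    """Keyword arguments for sts.assume_role(); the temporary credentials it
    returns are what the profiling agent uses to talk to the central account."""
    return {
        "RoleArn": cross_account_role_arn(account_id, role_name),
        "RoleSessionName": session_name,
    }
```

With boto3, you would call `boto3.client("sts").assume_role(**assume_role_request(...))` and feed the returned credentials to the profiler's credentials provider.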

Related

I can't find and disable AWS resources

My free AWS tier is going to expire in 8 days. I removed every EC2 resource and Elastic IP associated with it, because that is what I recall initializing and experimenting with. I deleted all the roles I created because, as I understand it, roles permit AWS to perform actions for AWS services. And yet, when I go to the billing page it shows I have these three services in current usage.
(Screenshot of the billing page: https://i.stack.imgur.com/RvKZc.png)
I used the script as recommended by AWS documentation to check for all instances and it shows "no resources found".
Link for script: https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-awssupport-listec2resources.html
I tried searching for each service using the dashboard and didn't get anywhere. I found an S3 bucket; I don't remember creating it, but I deleted it anyway, and still I get the same output.
Any help is much appreciated.
OK, I was able to get in touch with AWS Support via live chat, and they informed me that the services shown in my billing were usage generated before the services were terminated. AWS Support was much faster than I expected.

List of services used in AWS

How can I get a list of all the services I am using?
I have gone to Service Quotas at
https://ap-east-1.console.aws.amazon.com/servicequotas/home?region=ap-east-1
on the dashboard. I could see a list of items, e.g. EC2, VPC, RDS, DynamoDB, etc., but I did not understand what was there.
Since I did not request some of the services I am seeing, I even went into Budgets at
https://console.aws.amazon.com/billing/home?region=ap-east-1#/budgets
and also looked at credits, thinking I might be able to see which services I have been given credits to use:
https://console.aws.amazon.com/billing/home?region=ap-east-1#/budgets?
Also, how can I stop any service which I do not want?
The Billing service is not giving me tangible information either. I do not want the bill to pile up before I start taking the needed steps.
Is there a location where I can see all the services I am using, or maybe a command I can run somewhere that would produce such a result?
You can use the AWS Config Resource Inventory feature.
AWS Config will discover resources that exist in your account, record their current configuration, and capture any changes to these configurations. Config will also retain configuration details for resources that have been deleted. A comprehensive snapshot of all resources and their configuration attributes provides a complete inventory of resources in your account.
https://aws.amazon.com/config/
There is no easy answer on this one, as there is no AWS service that does this out of the box (yet).
There are some AWS services that can get you close, like:
AWS Config (as suggested by @kepils)
Another option is to use Resource Groups and Tagging to list all resources within a region in an account (as described in this answer).
In both cases, however, Config and Resource Groups come with the same limitation: neither can see all AWS services on its own.
Another option would be to use a third-party tool such as aws-inventory or cloudmapper, if your end goal is to find out what you currently have running in your account.
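As a sketch of the Resource Groups and Tagging approach: the Tagging API returns resource ARNs, and since every ARN has the form `arn:partition:service:region:account-id:resource`, you can group them by service to get a rough per-service inventory. The region name and bucket/instance ARNs below are illustrative only, and the API only sees resources that support tagging.

```python
from collections import Counter

def services_from_arns(arns):
    """Count resources per AWS service, given ARNs of the form
    arn:partition:service:region:account-id:resource."""
    return Counter(arn.split(":")[2] for arn in arns)

def list_resource_arns(region_name="us-east-1"):
    """Fetch ARNs of taggable resources in one region via the Resource
    Groups Tagging API; requires AWS credentials and boto3."""
    import boto3
    client = boto3.client("resourcegroupstaggingapi", region_name=region_name)
    paginator = client.get_paginator("get_resources")
    return [mapping["ResourceARN"]
            for page in paginator.paginate()
            for mapping in page["ResourceTagMappingList"]]
```

You would run `services_from_arns(list_resource_arns("ap-east-1"))` once per region of interest; remember this still misses services the Tagging API cannot see.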
On the second part of your question, how to stop any services which you don't want, you can do the following:
Don't grant excessive permissions to your users. If someone needs to work on EC2 instances, their IAM role and respective policy should allow only that, instead of, for example, full access.
You can limit the scope and services permitted for use within an account by creating Service Control Policies which allow only the specific resources you plan to use.
Set up AWS Budget notifications and potentially AWS Budget actions.
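For the Service Control Policy suggestion, a common pattern is an allow-list SCP: deny every action except those belonging to the services you intend to use. The services named below are placeholders; a sketch might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllOutsideAllowedServices",
      "Effect": "Deny",
      "NotAction": ["ec2:*", "s3:*", "cloudwatch:*", "iam:*"],
      "Resource": "*"
    }
  ]
}
```

Attached to an organizational unit, this blocks use of any service not in the `NotAction` list, regardless of the IAM permissions inside member accounts.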

AWS X-Ray cross-account data collection

I have an application that is distributed over two AWS accounts.
One part of the application ingests data from one account into the other account.
The producer part is realised as Python Lambda microservices.
The consumer part is a Spring Boot app in Elastic Beanstalk, plus additional Python Lambdas that further distribute data to external systems after it has been processed by the Spring Boot app in Elastic Beanstalk.
I don't have an explicit X-Ray daemon running anywhere.
I am wondering if it is possible to send the X-Ray traces of one account to the other account so I can monitor my application in one place.
I could not find any hints in the documentation regarding cross-account usage. Is this even doable?
If you are running the X-Ray daemon, you can provide a RoleARN to the daemon, so it assumes the role and sends the data it receives from the X-Ray SDK from Account 1 to Account 2.
However, if you have enabled X-Ray on API Gateway or AWS Lambda, segments generated by these services are sent to the account they run in, and it's not possible to send data cross-account for these services.
Please let me know if you have questions. If yes, include the architecture flow and solution stack you are using to better guide you.
Thanks,
Yogi
It is possible, but you'd have to run your own X-Ray daemon as a service.
By default, Lambda uses its own X-Ray daemon process to send traces to the account it is running in. However, the X-Ray SDK supports environment variables which can be used to point at a custom X-Ray daemon process instead. These environment variables apply even if the microservice is running inside a Lambda function.
Since your Lambdas are written in Python, you can refer to this AWS doc, which describes the environment variable. Set its value to the address of the custom X-Ray daemon service:
AWS_XRAY_DAEMON_ADDRESS = x.x.x.x:2000
Let's say you want to send traces from Account 1 to Account 2. You can do that by configuring your daemon to assume a role. This role must exist in Account 2 (where you want to send your traces). Then pass this role's ARN in the options when running your X-Ray daemon service in Account 1 (from which you want the traces to be sent). The options to use are described in this AWS doc.
--role-arn arn:aws:iam::123456789012:role/xray-cross-account
Make sure you attach permissions in Account 1 as well to send traces to Account 2.
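Putting the pieces above together, the setup might look like the following. The account ID, daemon host address, and region are placeholders for your own values:

```shell
# In Account 1: run the daemon, assuming the role that lives in Account 2.
./xray -o -n us-east-1 \
    --role-arn arn:aws:iam::123456789012:role/xray-cross-account

# In each Lambda/microservice: point the X-Ray SDK at that daemon.
export AWS_XRAY_DAEMON_ADDRESS=10.0.0.12:2000
```

The `-o` flag runs the daemon in local mode and `-n` sets the region; traces from the SDK then flow through the daemon into Account 2.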
This is now possible with this recent launch: https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-cloudwatch-cross-account-observability-multiple-aws-accounts/.
You can link accounts to another account to share traces, metrics, and logs.

Network default is not accessible to Dataflow Service account

Having issues starting a Dataflow job (2018-07-16_04_25_02-6605099454046602382) in a project without a local VPC network, I get this error:
Workflow failed. Causes: Network default is not accessible to Dataflow
Service account
There is a shared VPC connected to the project with a network called default and a subnet default in us-central1; however, the service account used to run the Dataflow job doesn't seem to have access to it. I have given the dataflow-service-producer service account the Compute Network User role, without any noticeable effect. Any ideas on how I can proceed?
Using subnetworks in Cloud Dataflow requires specifying the subnetwork parameter when running the pipeline; however, for a subnetwork located in a Shared VPC network, you must use the complete URL in the following format:
https://www.googleapis.com/compute/v1/projects/<HOST_PROJECT>/regions/<REGION>/subnetworks/<SUBNETWORK>
Additionally, in these cases it is recommended to verify that you have added the project's Dataflow service account to the Shared VPC project's IAM table and given it the "Compute Network User" role, to ensure the service has the required access scope.
Finally, it seems that the official Google documentation for the subnetwork parameter is already available, with detailed information about this matter.
Using the --subnetwork option with the following (undocumented) fully qualified subnetwork format made the Dataflow job run, where {PROJECT} is the name of the project hosting the shared VPC and {REGION} matches the region you run your Dataflow job in:
--subnetwork=https://www.googleapis.com/compute/alpha/projects/{PROJECT}/regions/{REGION}/subnetworks/{SUBNETWORK}
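A small helper makes the required URL shape explicit; it uses the `v1` compute endpoint from the documented format above (the answer's `alpha` variant reportedly also works). The project, region, and subnetwork names passed to it are of course your own:

```python
def shared_vpc_subnetwork_url(host_project: str, region: str, subnetwork: str) -> str:
    """Build the fully qualified subnetwork URL that Dataflow's --subnetwork
    flag expects when the subnet lives in a Shared VPC host project."""
    return (
        "https://www.googleapis.com/compute/v1/projects/"
        f"{host_project}/regions/{region}/subnetworks/{subnetwork}"
    )
```

For example, `shared_vpc_subnetwork_url("my-host-project", "us-central1", "default")` yields the value to pass as `--subnetwork=...`.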

Take backup of AWS configuration across all services

Having spent a couple of days setting up and configuring a new AWS account I would like to grab an export of the account configuration across all services. I've Googled around for existing scripts, etc, but have yet to find anything that would automate this process.
Primarily this would be a backup in case the account was corrupted in some way (including user error!), but it would also be useful for documenting the system.
From an account-administration perspective, there are various parts of the AWS console that don't display friendly names for various resources. Being able to cross-reference against offline documentation would simplify these scenarios. For example, friendly names for VPCs and subnets aren't always displayed when configuring resources to use them.
Lastly I would like to be able to use this to spot suspicious changes to the configuration as part of intrusion detection. For example, looking out for security group changes to protected resources.
To clarify, I am looking to back up the configuration of AWS resources, not the actual resources themselves. Resource backups (e.g. EC2 instances) are already covered.
The closest I've seen to that is CloudFormer.
It would create a CloudFormation template from your account's resources. Mind that this template would be only a starting point, not meant to be reproducible out of the box. For example, it won't log into your instances or anything like that.
As for the intrusion detection part, see CloudTrail.
Check out AWS Config: https://aws.amazon.com/config/
AWS Config records the configuration of AWS resources automatically, allowing you to query and react to configuration changes. As AWS Config stores data on S3, that is probably enough backup, but you can also sync the bucket elsewhere for paranoid redundancy.
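For the "sync the bucket elsewhere" part, a plain `aws s3 sync` is enough; the bucket names below are hypothetical placeholders for your Config delivery bucket and backup destination:

```shell
# Copy the AWS Config delivery bucket to a second bucket (or a local
# directory) for redundancy; run it on a schedule, e.g. from cron.
aws s3 sync s3://my-config-delivery-bucket s3://my-config-backup-bucket
```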