I have an application that is distributed over two AWS accounts.
One part of the application ingests data from one account into the other account.
The producer part is realised as Python Lambda microservices.
The consumer part is a Spring Boot app in Elastic Beanstalk, plus additional Python Lambdas that further distribute data to external systems after it has been processed by the Spring Boot app in Elastic Beanstalk.
I don't have an explicit X-Ray daemon running anywhere.
I am wondering if it is possible to send the X-Ray traces of one account to the other account so I can monitor my application in one place.
I could not find any hints in the documentation regarding cross-account usage. Is this even doable?
If you are running the X-Ray daemon, you can provide a RoleARN to the daemon so that it assumes the role and sends the data it receives from the X-Ray SDK in Account 1 to Account 2.
However, if you have enabled X-Ray on API Gateway or AWS Lambda, segments generated by these services are sent to the account they run in, and it is not possible to send data cross-account for these services.
Please let me know if you have questions. If so, include the architecture flow and solution stack you are using so I can guide you better.
Thanks,
Yogi
It is possible, but you'd have to run your own X-Ray daemon as a service.
By default, Lambda uses its own X-Ray daemon process to send traces to the account it is running in. However, the X-Ray SDK supports environment variables that can be used to point it at a custom X-Ray daemon process instead. These environment variables apply even if the microservice is running inside a Lambda function.
Since your Lambda is written in Python, you can refer to this AWS doc, which talks about an environment variable. You can set its value to the address of the custom X-Ray daemon service.
AWS_XRAY_DAEMON_ADDRESS = x.x.x.x:2000
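For example, with the X-Ray SDK for Python you can achieve the same thing programmatically; a minimal sketch (the daemon address and service name are placeholders, not values from your setup):

    from aws_xray_sdk.core import xray_recorder

    # Point the SDK at the custom daemon instead of the one managed by Lambda.
    # The address and service name below are purely illustrative.
    xray_recorder.configure(
        daemon_address="10.0.0.50:2000",
        service="producer-ingest",
    )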
Let's say you want to send traces from Account 1 to Account 2. You can do that by configuring your daemon to assume a role. This role must be present in Account 2 (where you want to send your traces). Then use this role's ARN by passing it in the options while running your X-Ray daemon service in Account 1 (from where you want the traces to be sent). The options to use are mentioned in this AWS doc.
--role-arn, arn:aws:iam::123456789012:role/xray-cross-account
Make sure you also attach permissions in Account 1 so the daemon can assume that role and send traces to Account 2.
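Putting it together, launching the daemon in Account 1 might look roughly like this (a sketch; the binary path, region, bind address, and role ARN are placeholders):

    ./xray --bind 0.0.0.0:2000 --region us-east-1 --role-arn arn:aws:iam::222222222222:role/xray-cross-account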
This is now possible with this recent launch: https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-cloudwatch-cross-account-observability-multiple-aws-accounts/.
You can link source accounts to a central monitoring account to share traces, metrics, and logs.
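If you script the setup, the link on the source-account side can be created with boto3's oam client; a hedged sketch (the sink ARN is a placeholder, and it assumes the sink and its policy have already been created in the monitoring account):

    import boto3

    # Run this in the source account whose traces/metrics/logs you want to share.
    oam = boto3.client("oam")

    # ARN of the sink created in the monitoring account (placeholder value).
    SINK_ARN = "arn:aws:oam:us-east-1:222222222222:sink/EXAMPLE-SINK-ID"

    oam.create_link(
        LabelTemplate="$AccountName",
        ResourceTypes=["AWS::XRay::Trace", "AWS::CloudWatch::Metric", "AWS::Logs::LogGroup"],
        SinkIdentifier=SINK_ARN,
    )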
CONTEXT:
We have a platform where users can create their own projects - multiple projects per user. We need to provide them with a browser-based IDE to edit those projects.
We decided to go with code-server. For this we need to configure an auto-scalable cluster on AWS. When the user clicks "Edit Project" we will bring up a new container each time.
https://hub.docker.com/r/codercom/code-server
QUESTION:
How can I pass parameters from the URL query (my-site.com/edit?project=1234) into a startup script to pre-configure the workspace in a Docker container when it starts?
Let's say the stack is AWS + ECS + Fargate. We could use Kubernetes instead of ECS if it helps.
I don't have any experience in cluster configuration. I will appreciate any help, or at least a direction to dig further.
The above can be achieved in multiple ways in AWS ECS. The basic requirements for such a system are to launch and terminate containers on the fly while persisting changes in the files. (I will focus on launching the containers.)
Using AWS SDKs:
The task can be achieved easily using the AWS SDKs with a base task definition. The SDKs allow starting tasks with overrides on the base task definition.
E.g. if the task definition specifies 2 GB of memory, the SDK can override the memory with a parameterised value while launching a task from that task definition.
Refer to the boto3 (AWS SDK for Python) docs.
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ecs.html#ECS.Client.run_task
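A rough sketch of how such an override might look with boto3 (the cluster, task definition, subnet, container name, and the PROJECT_ID environment variable are all assumptions for illustration; the container's startup script would read PROJECT_ID to pre-configure the workspace):

    import boto3

    ecs = boto3.client("ecs")

    def launch_editor_task(project_id: str) -> str:
        # Start a Fargate task from the base task definition, overriding only
        # what differs per user/project (here: an environment variable).
        response = ecs.run_task(
            cluster="code-server-cluster",          # placeholder names
            taskDefinition="code-server-base",
            launchType="FARGATE",
            networkConfiguration={
                "awsvpcConfiguration": {
                    "subnets": ["subnet-aaaa1111"],
                    "assignPublicIp": "DISABLED",
                }
            },
            overrides={
                "containerOverrides": [
                    {
                        "name": "code-server",
                        "environment": [{"name": "PROJECT_ID", "value": project_id}],
                    }
                ]
            },
        )
        return response["tasks"][0]["taskArn"]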
Overall Solution
Now that we know how to run custom tasks with the Python SDK on demand, the overall flow for your application is: your API calls an AWS Lambda function with parameters, the function spins up a task, keeps checking the task status, and routes traffic to it once the status is healthy.
1. The API calls an AWS Lambda function with parameters.
2. The Lambda function uses the AWS SDK to create a new task with overrides from the base task definition (assuming the base task definition already exists).
3. Keep checking the status of the new task in the same function call and set a flag in your database so your front end can react to it.
4. Once the status is healthy, add a rule in the Application Load Balancer using the AWS SDK to route traffic to the task IP without exposing the IP address to the end client. (An AWS Application Load Balancer can get expensive; I'd advise using Nginx or HAProxy on EC2 to manage dynamic routing.)
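A minimal sketch of steps 3 and 4 with boto3, assuming an ip-type target group already attached to the load balancer (the port and all names are placeholders):

    import time
    import boto3

    ecs = boto3.client("ecs")
    elbv2 = boto3.client("elbv2")

    def wait_and_register(cluster, task_arn, target_group_arn):
        # Step 3: poll the task until it reaches RUNNING.
        while True:
            task = ecs.describe_tasks(cluster=cluster, tasks=[task_arn])["tasks"][0]
            if task["lastStatus"] == "RUNNING":
                break
            time.sleep(5)

        # For awsvpc/Fargate tasks the private IP is exposed in the ENI attachment details.
        details = task["attachments"][0]["details"]
        private_ip = next(d["value"] for d in details if d["name"] == "privateIPv4Address")

        # Step 4: register that IP with the target group the ALB rule forwards to.
        elbv2.register_targets(
            TargetGroupArn=target_group_arn,
            Targets=[{"Id": private_ip, "Port": 8080}],
        )
        return private_ip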
Note:
Ensure your image is lightweight and the startup time is less than 15 minutes, as Lambda cannot execute beyond that. If the startup takes longer than that, create a microservice for launching ad-hoc containers and host it on EC2 instead.
Using Terraform:
If you're looking for infrastructure provisioning, Terraform is the way to go. It has a learning curve, so I recommend it as a secondary option.
Terraform is popular for parametrising using variables, and it can be plugged in easily as a backend for an API. The flow of your application remains the same from step 1, but instead of AWS Lambda, the API calls your ad-hoc container microservice, which in turn calls the Terraform script, passing variables to it (a sketch follows below).
Refer to the Terraform docs for AWS
https://registry.terraform.io/providers/hashicorp/aws/latest
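If that microservice is written in Python as well, invoking Terraform with the per-project variable can be as simple as shelling out; a sketch (the variable name and working directory are assumptions):

    import subprocess

    def provision_workspace(project_id: str):
        # Pass the URL query parameter straight through as a Terraform variable.
        subprocess.run(
            ["terraform", "apply", "-auto-approve", f"-var=project_id={project_id}"],
            cwd="/opt/terraform/code-server",  # placeholder working directory
            check=True,
        )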
First-time asker.
So I've been trying to implement AWS CloudWatch to monitor disk usage on an EC2 Linux instance. I'm interested in doing this using just the CloudWatch agent, and I've installed it according to the how-to found here. The install runs fine and I've made sure I've created an IAM role for the instance as described here. Unfortunately, whenever I run the amazon-cloudwatch-agent.service it only sends log files and not the custom used_percent measurement specified. I receive this error when I tail the logs.
2021-06-18T15:41:37Z E! WriteToCloudWatch failure, err: RequestError: send request failed
caused by: Post "https://monitoring.us-west-2.amazonaws.com/": dial tcp 172.17.1.25:443: i/o timeout
I've done my best google-fu but have gotten nowhere thus far. If you've got any advice it would be appreciated.
Thank you
Belated answer to my own question. I had to create a security group that would accept traffic from that same security group!
I was having the same issue; it definitely wasn't a network restriction, as I was still able to telnet to the monitoring endpoint.
From AWS docs: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/create-iam-roles-for-cloudwatch-agent.html
One role or user enables CloudWatch agent to be installed on a server and send metrics to CloudWatch. The other role or user is needed to store your CloudWatch agent configuration in Systems Manager Parameter Store. Parameter Store enables multiple servers to use one CloudWatch agent configuration.
If you're using the default CloudWatch agent configuration wizard, you may need the extra CloudWatchAgentAdminRole policy in your role for the agent to connect to the monitoring service.
I am stuck on one point. I have created an EC2 Linux-based instance in AWS.
Now I want to send the EC2 metrics data to a managed Elasticsearch domain for monitoring purposes in Kibana. I went through the CloudWatch console and can see that the instance metrics are present, but I didn't figure out how to connect them to the Elasticsearch domain that I have created.
Can anyone please help me with this situation?
There is no built-in mechanism for extraction/streaming of metric data points in real time. You have to develop a custom solution for that, for example a Lambda function which is invoked every minute and which reads data points using get_metric_data. The Lambda would then inject the points into your ES.
To invoke a Lambda function periodically, e.g. every minute, you would have to set up a CloudWatch Events rule with a schedule expression. The Lambda function would also need permissions to interact with CloudWatch metrics.
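A rough sketch of such a Lambda with boto3 (the instance ID, index name, and ES endpoint are placeholders; it assumes the requests library is bundled with the deployment package and that the domain's access policy allows the Lambda's role):

    import datetime
    import boto3
    import requests

    cloudwatch = boto3.client("cloudwatch")
    ES_ENDPOINT = "https://my-domain.us-east-1.es.amazonaws.com"  # placeholder

    def handler(event, context):
        end = datetime.datetime.utcnow()
        start = end - datetime.timedelta(minutes=1)

        # Pull the last minute of CPU utilization for one instance.
        data = cloudwatch.get_metric_data(
            MetricDataQueries=[{
                "Id": "cpu",
                "MetricStat": {
                    "Metric": {
                        "Namespace": "AWS/EC2",
                        "MetricName": "CPUUtilization",
                        "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
                    },
                    "Period": 60,
                    "Stat": "Average",
                },
            }],
            StartTime=start,
            EndTime=end,
        )

        result = data["MetricDataResults"][0]
        for ts, value in zip(result["Timestamps"], result["Values"]):
            # Index each data point as a document (request signing/auth omitted for brevity).
            requests.post(
                f"{ES_ENDPOINT}/ec2-metrics/_doc",
                json={"timestamp": ts.isoformat(), "cpu_utilization": value},
            )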
Welcome to SO :)
An alternative to the solution suggested by Marcin is to install Metricbeat on the EC2 instance and configure the Metricbeat config file to send metrics to your managed AWS ES domain.
This is pretty simple and you should be able to do this fairly quickly.
I am kind of confused about the difference between AWS Systems Manager and AWS CloudWatch.
Could someone help me get clear on the difference?
Thank you very much.
They have different purposes.
AWS Systems Manager, at the core of its functionality, allows you to manage a fleet of instances as well as on-premise servers. Using the manager you can update hundreds of instances with just a single command, execute custom scripts on all of them, monitor their patch compliance (i.e. whether all your instances of interest have the latest updates), and so on.
AWS CloudWatch is primarily used as a central location for storing a variety of logs from your applications (e.g. Lambda execution logs), AWS services, and so on. It also allows you to monitor performance metrics of your instances (e.g. CPU utilization) as well as other resources. Other functionality includes responding to live events from resources (e.g. executing a Lambda whenever an instance is terminated).
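To make the distinction concrete, a small boto3 illustration (the instance ID and command are placeholders): Systems Manager acts on instances, while CloudWatch reads data about them.

    import datetime
    import boto3

    ssm = boto3.client("ssm")
    cloudwatch = boto3.client("cloudwatch")

    # Systems Manager: run a command on a fleet of instances.
    ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["yum update -y"]},
    )

    # CloudWatch: read a performance metric for the same instance.
    cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
        EndTime=datetime.datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )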
In short, AWS Systems Manager is a centralized tool to automate management of AWS resources.
AWS CloudWatch, on the other hand, is a centralized tool for monitoring AWS resource logs and metrics.
These short video resources might help -
AWS Systems Manager -
https://www.youtube.com/watch?v=MK4ZoCs-muo&ab_channel=AmazonWebServices
AWS Cloudwatch -
https://www.youtube.com/watch?v=a4dhoTQCyRA&ab_channel=AmazonWebServices
I need to check who has created an instance, or who has stopped/terminated/rebooted an instance, along with the time.
Use AWS CloudTrail.
Please see the documentation: AWS CloudTrail.
You can get a complete history of API calls to your account.
It is not expensive. Check pricing at: AWS CloudTrail Pricing.
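For example, with boto3 you could look up the lifecycle events for a specific instance like this (the instance ID is a placeholder; CloudTrail's event history keeps roughly the last 90 days of management events queryable this way):

    import boto3

    cloudtrail = boto3.client("cloudtrail")

    # Find who did what to a particular instance (placeholder instance ID).
    events = cloudtrail.lookup_events(
        LookupAttributes=[
            {"AttributeKey": "ResourceName", "AttributeValue": "i-0123456789abcdef0"}
        ],
        MaxResults=50,
    )

    for e in events["Events"]:
        # EventName is e.g. RunInstances, StopInstances, TerminateInstances, RebootInstances.
        print(e["EventTime"], e["EventName"], e.get("Username"))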
For Linux, log files are located under the /var/log directory and its subdirectories. Within this directory there are several log files with different names which record different types of info. Some examples include, but are not limited to:
/var/log/messages
Contains global system messages, including the messages that are logged during system startup. Includes mail, cron, daemon, kern, auth, etc.
/var/log/auth.log
Authentication logs
/var/log/kern.log
Kernel logs
/var/log/cron.log
Cron daemon logs
https://blog.logentries.com/2013/11/where-are-my-aws-logs/
You will be able to access the details of the EC2 instance status from the console dashboard only for a short period of time.
Unless you enable CloudTrail, you won't be able to access the logs and the activity of what happened in the AWS console some days back.
CloudTrail requires you to use an S3 bucket to store the logs, and the cost you incur for the CloudTrail service is the cost of the space used to store the logs in S3.