Elastic Beanstalk deploy hooks: getting user's username - amazon-web-services

Is there a way to get the username of the AWS user who initiated EB deployment from within a deploy hook?

I believe that's possible with an AWS CLI command that filters CloudTrail logs, but it has a few limitations.
CloudTrail is not real-time; events typically show up roughly 15 minutes after the API call, which would ultimately make for a very slow deployment.
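For what it's worth, a minimal sketch of that CLI approach, assuming the instance profile is allowed to call cloudtrail:LookupEvents and that the deployment was started via the UpdateEnvironment API (the event can lag the deployment by several minutes):

    # Look up the most recent UpdateEnvironment call recorded by CloudTrail
    # and print the username that made it. With several environments in one
    # account you would need to inspect the CloudTrailEvent JSON to match
    # the environment name; the region is a placeholder.
    aws cloudtrail lookup-events \
        --region us-east-1 \
        --lookup-attributes AttributeKey=EventName,AttributeValue=UpdateEnvironment \
        --max-results 5 \
        --query 'Events[0].Username' \
        --output text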

Related

How to audit changes to the AWS account

I wanted to know if there is a way to track alerts or audit anything that happens with the AWS account, like who changed what and why. I did find this https://docs.aws.amazon.com/opensearch-service/latest/developerguide/audit-logs.html where they use a command line for enabling audit logs on an existing domain:

    aws opensearch update-domain-config --domain-name my-domain --log-publishing-options "AUDIT_LOGS={CloudWatchLogsLogGroupArn=arn:aws:logs:us-east-1:123456789012:log-group:my-log-group,Enabled=true}"

but this is in regard to Amazon OpenSearch Service, which I believe is only free for 12 months if you haven't used it already. There is also AWS Audit Manager. I am aware there are services that can do this but they require a fee, and I wanted to know if there were any free options.
From the AWS documentation:
With AWS CloudTrail, you can monitor your AWS deployments in the cloud by getting a history of AWS API calls for your account, including API calls made by using the AWS Management Console, the AWS SDKs, the command line tools, and higher-level AWS services. You can also identify which users and accounts called AWS APIs for services that support CloudTrail, the source IP address from which the calls were made, and when the calls occurred. You can integrate CloudTrail into applications using the API, automate trail creation for your organization, check the status of your trails, and control how administrators turn CloudTrail logging on and off.
AWS Config provides a detailed view of the resources associated with your AWS account, including how they are configured, how they are related to one another, and how the configurations and their relationships have changed over time.
Basically, AWS CloudTrail keeps a log of API calls (requests to AWS to do or change something), while AWS Config tracks how individual configurations have changed over time (for a limited range of resources, such as a Security Group rule being changed).
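As a rough illustration (assuming AWS Config is already recording in the region; the security group ID is a placeholder), you can pull the change history of a single resource like this:

    # Ask AWS Config for the configuration history of one security group,
    # which shows when its configuration (e.g. rules) changed over time.
    aws configservice get-resource-config-history \
        --resource-type AWS::EC2::SecurityGroup \
        --resource-id sg-0123456789abcdef0 \
        --query 'configurationItems[].[configurationItemCaptureTime,configurationItemStatus]' \
        --output table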

Is there a logging service in AWS for debug information?

I'm trying out AWS. I created an app that runs on an EC2 instance. I want to send debug/diagnostic logs to stdout or syslog and have some way to easily collect and read them.
Currently I use Stackdriver Logging: I install the google-fluentd plugin on the EC2 instance and it picks up the syslog and sends it to Stackdriver. I'm wondering whether there is a similar offering in AWS so that I don't need to create a GCP project just for reading logs?
Thanks!
AWS lets you ship all your logs to CloudWatch, where you can store them; see the corresponding AWS documentation, which explains how to set up the EC2 machine to send its logs to AWS.
You can install the AWS CloudWatch agent on your EC2 instance. The agent then allows you to ship custom log files to AWS CloudWatch, where you can analyze them. You can also ship system and application logs through the agent. Here is a blog post explaining how it can be done on a Windows machine not hosted in AWS; it's pretty much the same approach for an EC2 instance.
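A minimal sketch of the agent approach on Amazon Linux 2; the log file path, log group name, and config file location are placeholders, so adjust them for your app:

    # Install the CloudWatch agent, point it at a custom application log
    # file, and start it.
    sudo yum install -y amazon-cloudwatch-agent

    sudo tee /opt/aws/amazon-cloudwatch-agent/etc/app-logs.json > /dev/null <<'EOF'
    {
      "logs": {
        "logs_collected": {
          "files": {
            "collect_list": [
              {
                "file_path": "/var/log/myapp/debug.log",
                "log_group_name": "myapp-debug",
                "log_stream_name": "{instance_id}"
              }
            ]
          }
        }
      }
    }
    EOF

    # Load the config and (re)start the agent.
    sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
        -a fetch-config -m ec2 \
        -c file:/opt/aws/amazon-cloudwatch-agent/etc/app-logs.json -s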
You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS CloudTrail, Route 53, and other sources. You can then retrieve the associated log data from CloudWatch Logs.

How can I set retention policy for logs exported from Elastic beanstalk into a bucket?

I am using Elastic Beanstalk to serve up our app. I have enabled the "Enable log file rotation to Amazon S3" option and I see the logs being written to a bucket. It looks like the logs are going to the default bucket for the Elastic Beanstalk instance.
How can I tell the app to write logs to a different bucket?
I want to make sure we set a retention policy (say 10 days) for the logs in that bucket so that the bucket stays groomed.
Thank you in advance for your help. Much appreciated.
It is not possible to configure the Elastic Beanstalk app to publish logs to a different bucket; log rotation to S3 can only be turned on or off.
To delete logs after 10 days, you can add a lifecycle rule to your bucket that will delete the logs after 10 days: http://docs.aws.amazon.com/AmazonS3/latest/UG/lifecycle-configuration-bucket-no-versioning.html
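For example, a sketch of setting that rule from the CLI; the bucket name is a placeholder and the key prefix is an assumption, so check what prefix Elastic Beanstalk actually uses in your bucket first:

    # Expire objects under the rotated-logs prefix after 10 days.
    aws s3api put-bucket-lifecycle-configuration \
        --bucket elasticbeanstalk-us-east-1-123456789012 \
        --lifecycle-configuration '{
          "Rules": [
            {
              "ID": "expire-eb-logs-after-10-days",
              "Filter": { "Prefix": "resources/environments/logs/" },
              "Status": "Enabled",
              "Expiration": { "Days": 10 }
            }
          ]
        }'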
However, it may also be possible to configure your Elastic Beanstalk application to publish logs to CloudWatch Logs:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.managing.cw.html
The docs show a console screenshot, but I don't see anywhere in the console to set this. The docs do reference settings you can set via configuration files, the CLI, or an SDK.
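For reference, a sketch of doing it through the CLI with the aws:elasticbeanstalk:cloudwatch:logs option namespace; the environment name and retention value are placeholders:

    # Turn on instance log streaming to CloudWatch Logs for an existing
    # environment; RetentionInDays controls cleanup on the CloudWatch side.
    aws elasticbeanstalk update-environment \
        --environment-name my-env \
        --option-settings \
          Namespace=aws:elasticbeanstalk:cloudwatch:logs,OptionName=StreamLogs,Value=true \
          Namespace=aws:elasticbeanstalk:cloudwatch:logs,OptionName=RetentionInDays,Value=10 \
          Namespace=aws:elasticbeanstalk:cloudwatch:logs,OptionName=DeleteOnTerminate,Value=false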

Use aws cloudtrail to collect application logs

Is it possible to use CloudTrail to receive custom logs like application logs, access logs, and security logs?
And how long does CloudTrail keep the logs?
You might be thinking of CloudWatch Logs, which does capture custom logs from EC2 instances, provide search over them, and groom them. The retention (grooming) rules are configurable.
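For example, retention in CloudWatch Logs is set per log group; the group name here is a placeholder:

    # Keep log events in this group for 14 days, then let CloudWatch
    # delete them automatically.
    aws logs put-retention-policy \
        --log-group-name my-app-logs \
        --retention-in-days 14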
No. CloudTrail is for AWS API activity only. It logs the last 7 days of API activity for supported services, and the list only includes create, modify, and delete API calls. You can optionally save the logs to S3 buckets for historic API activity.
You could send VPC Flow Logs, CloudTrail logs, and AWS Config logs to CloudWatch. You can also set up an S3 bucket with lifecycle policies enabled to retain logs forever.
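As a sketch, archiving CloudTrail events to S3 looks roughly like this; the names are placeholders, and the bucket needs a policy that lets CloudTrail write to it:

    # Create a trail that delivers API activity to an S3 bucket, then
    # start logging so history is kept beyond the default event-history window.
    aws cloudtrail create-trail --name audit-trail --s3-bucket-name my-cloudtrail-archive
    aws cloudtrail start-logging --name audit-trail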

Continuous deploys on elastic beanstalk

I have everything set up and working with rolling deploys and being able to do git aws.push, but how do I add an authorized key to the EB server so my CI server can deploy as well?
Since you are using Shippable, I found this guide on Continuous Delivery using Shippable and Amazon Elastic Beanstalk that shows how to set it up on their end. Specifically, step 3 is what you are looking for.
It doesn't look like you need an authorized key; instead, you just need to provide an AWS access key ID and AWS secret key that will allow Shippable to make API calls on your behalf. To do this, I recommend creating an IAM user specifically for Shippable. That way you can revoke it if you ever need to, and only give it the permissions that it needs.
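A rough sketch of that setup with the CLI; the user name is a placeholder and the managed policy ARN is an assumption, so scope it down to what your deployments actually need:

    # Create a dedicated IAM user for the CI service, attach a policy that
    # allows Elastic Beanstalk deployments, and generate the access keys
    # to paste into Shippable.
    aws iam create-user --user-name shippable-deploy
    aws iam attach-user-policy \
        --user-name shippable-deploy \
        --policy-arn arn:aws:iam::aws:policy/AWSElasticBeanstalkFullAccess
    aws iam create-access-key --user-name shippable-deploy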