CloudWatch alert when a new deploy is made (AWS)

There are some issues in a company I'm working for. Basically the dev team is pushing new deploys to the API Gateway before consulting with the security guy.
This means the security person only notices that a new endpoint was released once security issues start to arise.
I was wondering if there's any simple way of creating an alert that pops up in AWS CloudWatch when a new deploy is created. If I recall correctly, these are called "alarms".
I have looked a bit into alarms but they seem to be based on metrics and I was not able to find a metric that shows a new endpoint being created on deploy.
This is certainly not the best approach to the problem, but it should work for now until the deploy process is changed.

I was thinking you could come up with a script that runs aws cloudformation list-stacks and checks whether the output has more stacks than last time. This method will only catch new stacks, though, not modifications to existing stacks.
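A minimal sketch of that script idea in Python with boto3 (the state-file name and the stack-status filters are my own assumptions, and as noted it only catches new stacks, not modifications):

    import json
    import boto3

    STATE_FILE = "stack_count.json"  # hypothetical local file holding the last count

    def current_stack_count():
        cfn = boto3.client("cloudformation")
        paginator = cfn.get_paginator("list_stacks")
        count = 0
        # Only count stacks that still exist (skip deleted ones).
        for page in paginator.paginate(
            StackStatusFilter=["CREATE_COMPLETE", "UPDATE_COMPLETE"]
        ):
            count += len(page["StackSummaries"])
        return count

    def main():
        count = current_stack_count()
        try:
            with open(STATE_FILE) as f:
                previous = json.load(f)["count"]
        except FileNotFoundError:
            previous = count
        if count > previous:
            # Hook your alert here (SNS, email, Slack, ...).
            print(f"New stack(s) detected: {previous} -> {count}")
        with open(STATE_FILE, "w") as f:
            json.dump({"count": count}, f)

    if __name__ == "__main__":
        main()

You would run it on a schedule (cron or a scheduled Lambda) and wire the print statement up to whatever alerting you use.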

Related

What is the difference between AWS Lambda & AWS Elastic Beanstalk

I am studying for my AWS Cloud Practitioner certification and I am confused about the difference between AWS Lambda and AWS Elastic Beanstalk. From my understanding, with both services you upload your code to AWS and AWS essentially manages the underlying infrastructure for you.
I know with Lambda you upload your code to a 'Lambda Function' and set triggers for when the code executes.
With AWS EB you upload your application code and EB automatically handles the deployment, capacity, provisioning, etc...
They both sound very similar as you upload your code to both and both handle underlying instances/environments.
Thanks!
Elastic Beanstalk and Lambda are very different, even though some of their features may look similar. At a high level, Elastic Beanstalk deploys a long-running application, whereas Lambda deploys a short-running code function.
A Lambda invocation can run for at most 15 minutes, whereas EB can run continuously. Generally, we deploy websites/apps on EB, whereas Lambda is used for triggered functionality, such as processing an image when it gets uploaded to S3.
A single Lambda instance handles one request at a time, whereas the number of concurrent requests EB can handle depends on your underlying infrastructure. So if you have, say, 100 concurrent requests, 100 Lambda instances will be created, whereas those 100 requests might be handled by a single underlying EC2 instance in EB.
Lambda is serverless (the underlying infrastructure is entirely abstracted from the developer), whereas EB is automation on top of infrastructure provisioning. You can still see your EC2 instances, load balancer, auto scaling group, etc. in your AWS console, and you can even SSH/RDP into your instances and change the running services. EB also lets you use custom AMIs.
Lambda suffers from cold starts, since its infrastructure has to be provisioned on demand by AWS, whereas with EB you generally have EC2 instances already provisioned and waiting to handle your requests.
All great (and exam-specific) points by SmartCoder. If I may add a general ancillary comment:
Wittgenstein said, "In most cases, the meaning of a word is its use." I think this maxim is remarkably apt for software engineering too. In the context of your question, those two AWS services are used for significantly different purposes.
Lambda - Say you developed a photo uploading application with Node.js that uploads some processed images to an S3 bucket. The core logic for this is probably quite straightforward, and it has a singular, distinct task: take in an image, do some processing and, barring any exception, store it in a bucket. In this case, it's inefficient to waste time spinning up servers, configuring them with a runtime environment, downloading dependencies, handling maintenance, etc. A literal copy and paste of your code into the Lambda console, plus a few configuration settings, should get the job done. Plus, you save a lot of money, as infrastructure is "provisioned" only when your Node.js function is invoked. Again, keep in mind the principle of this code performing a singular task.
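As a hedged sketch of what such a single-purpose function might look like (the poster's example is Node.js; this is just the shape of the idea in Python, and the destination bucket and "processing" step are placeholders):

    import boto3

    s3 = boto3.client("s3")
    DEST_BUCKET = "processed-photos-example"  # hypothetical destination bucket

    def handler(event, context):
        # Triggered by an S3 upload event: read the object, process it, store the result.
        for record in event.get("Records", []):
            src_bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=src_bucket, Key=key)["Body"].read()
            processed = body  # real image processing would happen here
            s3.put_object(Bucket=DEST_BUCKET, Key=key, Body=processed)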
Elastic Beanstalk - This same photo uploading system mentioned above might now mature into a more complex full-fledged software application that requires user management, authentication, and further processing of the images, which certainly requires more provisioning of resources. This application will probably do a lot of things with multiple code repositories for you to manage and deploy. And yet, you don't want to spend money on a DevOps engineer or learn to use an IaC (Infrastructure as Code) platform like CloudFormation or Terraform. In this case, Elastic Beanstalk is useful for a developer without too much in-depth DevOps knowledge as it's a PaaS (Platform as a Service) tool; it pretty much gives you a clear interface to spin up whole new production-ready systems.
Here are two good whitepapers I read a while back on the above topics.
https://docs.aws.amazon.com/whitepapers/latest/serverless-architectures-lambda/serverless-architectures-lambda.pdf
https://docs.aws.amazon.com/whitepapers/latest/introduction-devops-aws/introduction-devops-aws.pdf
Lambda runs in response to specific trigger events, and it exits as soon as its work is done.

How to update an AWS Fargate service outside AWS CodeDeploy in order to change the desired task count

When setting up AWS CodeDeploy to deploy an ECS/Fargate service, we have to provide two target groups, let's say
TargetGroupBlue and TargetGroupGreen.
In the CloudFormation template we use TargetGroupBlue when linking the service to the load balancer.
TargetGroupGreen is created only to be used by CodeDeploy during a deployment.
Step 1: We execute the create-stack command to create the service and load balancer. We now have a working service, and traffic is routed via TargetGroupBlue.
Step 2: We then use CodeDeploy to do another deployment, which swaps the active target group to TargetGroupGreen once done.
Step 3: Now we need to update the desired task count on the service, so we use the CloudFormation update-stack command. This fails because the active target group is TargetGroupGreen (CodeDeploy changed it in step 2), while our CloudFormation template still links the service to the load balancer via TargetGroupBlue.
A workaround could be to do all service-related updates outside CodeDeploy in an even-numbered release (i.e. always run CodeDeploy twice so that we know traffic is routed via TargetGroupBlue again).
Is this how we are supposed to handle service updates with CloudFormation and CodeDeploy?
Any help figuring this out would be appreciated.
Even though AWS provides many cool ways to work with blue/green deployments, the combination of CodeDeploy and CloudFormation really falls short here.
The workaround they suggested was to use Custom Resources in CloudFormation, which trigger a Lambda function to update the service, effectively working around the CloudFormation stack update. Sample.
But there are no proper samples for doing that, so it would take a lot of time to get it working the way you need.
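For reference, the update such a custom-resource Lambda (or a manual script) would perform outside the stack is a single ECS API call; a hedged boto3 sketch with placeholder cluster/service names:

    import boto3

    ecs = boto3.client("ecs")
    ecs.update_service(
        cluster="my-cluster",            # placeholder cluster name
        service="my-fargate-service",    # placeholder service name
        desiredCount=4,                  # the new desired task count
    )

Keep in mind that changing the service this way makes it drift from what the CloudFormation stack believes, which is exactly the tension described above.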
Furthermore, CloudFormation with hooks does not really work for bigger projects, as the load balancers cannot be shared.
So here is the open ticket; please give it a thumbs-up so that AWS will prioritize this on their roadmap.
https://github.com/aws-cloudformation/aws-cloudformation-coverage-roadmap/issues/483

AWS - Log aggregation and visualization

We have a couple of applications running on AWS. Currently we are redirecting all our logs to a single bucket. However, for ease of access for users, I am thinking of installing an ELK stack on an EC2 instance.
I would like to check whether there is an alternative where I don't have to maintain this stack myself.
Scaling won't be an issue, as this is only for logs generated by applications running on AWS, so no heavy ingestion or processing is required; they are mostly log4j logs.
You can go for either the managed Elasticsearch service available in AWS or set up your own on an EC2 instance.
It usually comes down to the price involved and the amount of time you have on hand for setting up and maintaining your own cluster.
With your own setup you can do a lot more configuration than the managed service allows, and it can also help reduce cost.
You can find more info on this blog
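If you do go with the managed service, creating a small domain is roughly one API call; a hedged boto3 sketch (the domain name, version and sizing are placeholders, not a production configuration):

    import boto3

    es = boto3.client("es")
    es.create_elasticsearch_domain(
        DomainName="app-logs",
        ElasticsearchVersion="7.10",
        ElasticsearchClusterConfig={
            "InstanceType": "t3.small.elasticsearch",
            "InstanceCount": 1,
        },
        EBSOptions={"EBSEnabled": True, "VolumeType": "gp2", "VolumeSize": 10},
    )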

How to deploy code on multiple instances in an Amazon EC2 Auto Scaling group?

So we are launching an ecommerce store built on Magento. We are looking to deploy it on Amazon EC2 instances, using RDS as the database service and Amazon Auto Scaling plus an Elastic Load Balancer to scale the application when needed.
What I don't understand is this:
I have installed and configured my production Magento environment on an EC2 instance (the database is in RDS). This much is working fine. But now, when I want to dynamically scale the number of instances:
How will I deploy the code on the dynamically generated instances each time?
Will AWS copy the whole instance, assign it a new IP and spawn it as a new instance, or will I have to write some code to automate this process?
Plus, won't it be an overhead to pull the code from git and deploy it every time a new instance is spawned?
A detailed explanation or direction towards some resources on the topic will be greatly appreciated.
You do this in the Auto Scaling group's launch configuration. There is a UserData section in the LaunchConfiguration in CloudFormation where you write a script that is run whenever the ASG scales up and launches a new instance.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html#cfn-as-launchconfig-userdata
This is the same as the UserData section on an EC2 instance. You can also use lifecycle hooks to tell the ASG not to put the EC2 instance into service until everything you want configured has been set up.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-as-lifecyclehook.html
I linked the CloudFormation pages, but you may be using some other CI/CD tool for deploying your infrastructure; hopefully this gets you started.
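If it helps to see the same UserData idea outside a CloudFormation template, here is a hedged boto3 sketch of a launch configuration; the AMI ID, instance type and the deploy commands inside the user-data script are all placeholders:

    import boto3

    # Placeholder user-data script: pull the latest code and restart the web server
    # on every new instance the ASG launches.
    user_data = """#!/bin/bash
    cd /var/www/app
    git pull origin main
    systemctl restart httpd
    """

    autoscaling = boto3.client("autoscaling")
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="magento-app-lc",   # placeholder name
        ImageId="ami-0123456789abcdef0",            # placeholder AMI with the base environment baked in
        InstanceType="t3.medium",
        UserData=user_data,
    )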
To start, do check AWS CloudFormation. You will be creating templates to describe how the infrastructure of your application works ~ infrastructure as code. With these templates in place, you can roll out an update to your infrastructure by pushing changes to your templates and/or to your application code.
In my current project, we have a GitHub repository dedicated to these infrastructure templates and a separate repository for our application code. Create a pipeline for creating AWS resources that rolls out an update to AWS every time you push to a specific branch of the repository.
Create an infrastructure pipeline
Have the first stage of the pipeline trigger a build whenever there are code changes to your infrastructure templates. See AWS CodePipeline and AWS CodeBuild. These aren't the only AWS resources you'll need, but they are probably the main ones, aside from everything being defined in a CloudFormation template as mentioned earlier.
how will I deploy the code on the dynamically generated instances each time?
Check how containers work; it will greatly supplement your understanding of how launching a new version of an application works. To begin, see Docker, but feel free to check any resources at your disposal.
Continuing with my current project: we do have a separate pipeline dedicated to our application, which also gets triggered after an infrastructure pipeline update. Our application pipeline builds a new version of our application via AWS CodeBuild; this creates an image that will become a container ~ see the Docker documentation.
We have two sources that trigger an update rollout through our application pipeline: one is when there are changes to the infrastructure pipeline and it builds successfully, and the other is when there are code changes in our GitHub repository connected via AWS CodeBuild.
Check AWS Auto Scaling; this area covers dynamically launching new instances, shutting down instances when needed, and replacing unhealthy instances. See also AWS CloudWatch: you can design criteria with it to trigger scaling up/down and/or in/out.
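For example, a hedged sketch of a target-tracking scaling policy that keeps average CPU around a chosen value for the group (the group name, policy name and target value are placeholders):

    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="magento-asg",     # placeholder ASG name
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 50.0,                # keep average CPU around 50%
        },
    )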
Will aws copy the whole instance assign it a new ip and spawn it as a new instance or will I have to write some code to automate this process?
See AWS Elastic Load Balancing and also check out AWS Auto Scaling. Regarding the automation process: if you go ahead with CloudFormation, instances and/or containers (depending on your design) will be managed gracefully.
Plus will it not be an overhead to pull code from git and deploy every time a new instance is spawned?
As mentioned earlier, having a pipeline roll out new versions of your application via CodeBuild creates an image with the new code changes, and when everything is ready it is deployed ~ it becomes a container. The old EC2 instance or old container (depending on how you want your application deployed) is gracefully shut down after the new version of your application is up and running. This gives you zero downtime.

Using CloudWatch API to get statistics

I have deployed a LAMP stack application on AWS. I need to monitor that using CloudWatch.
Can someone guide me on how to use the CloudWatch API for GetMetrics for CPU utilization? The AWS documentation is very scarce.
I see that the putmetrics call will let me create my own metrics.
My requirement is that I need to display those metric results in a mobile app.
My app monitors a project deployed on AWS, and the alerts and metrics must stream into the app.
I don't want the metric data only in the AWS console; I want it viewable in my mobile app, which is built on the MEAN stack.
I should also add that the app itself is deployed on AWS, and the application being monitored is also there (it's a LAMP stack). I have managed to set up two endpoints (HTTP and DB) and have written simple scripts in JavaScript to monitor them, but ideally this should happen via CloudWatch.
Providing a piece of code that replicates the issue you are seeing normally lets whoever reads the question help you better than guessing at what you're doing.
Are you using an SDK to do this? What language/version?
here are links to the API docs:
http://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricStatistics.html
http://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_ListMetrics.html
The pattern is to list the metrics and then feed the result into GetMetricStatistics.
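A minimal boto3 version of that pattern for EC2 CPU utilization might look like this (the one-hour window, 5-minute period and Average statistic are my own assumptions):

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch")

    # Step 1: list the CPUUtilization metrics available in the account/region.
    metrics = cloudwatch.list_metrics(
        Namespace="AWS/EC2", MetricName="CPUUtilization"
    )["Metrics"]

    # Step 2: feed each metric's dimensions into GetMetricStatistics.
    for metric in metrics:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=metric["Dimensions"],  # e.g. [{"Name": "InstanceId", "Value": "i-..."}]
            StartTime=datetime.utcnow() - timedelta(hours=1),
            EndTime=datetime.utcnow(),
            Period=300,
            Statistics=["Average"],
        )
        print(metric["Dimensions"], stats["Datapoints"])

The JSON this returns is what you would expose to the mobile app through your own backend.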
In your specific case, googling the issue a bit might answer the question before you ask it on SO. For example:
https://forums.aws.amazon.com/thread.jspa?messageID=295740
This can happen when you are hitting the wrong endpoint. Check whether you are hitting the endpoint of the right AWS service.
For example, trying to hit DynamoDB's endpoint when you want to access CloudWatch APIs.