Check AWS CloudFront function status

I'm updating an AWS CloudFront function using the commands
aws cloudfront update-function
aws cloudfront publish-function
and then in the web console I see that the status of my function becomes "Updating", saying that "The function is published to the live stage and its associated distributions are deploying the most recent changes."
Is there a way to check, using the AWS CLI, when the function reaches the "Deployed" status after "Updating"? When I run
aws cloudfront list-functions
I get my function listed twice, with both LIVE and DEVELOPMENT stages; that's it, no "Updating".

You can use aws cloudfront wait distribution-deployed to check the deployment status of the associated distribution.
See: https://docs.aws.amazon.com/cli/latest/reference/cloudfront/wait/distribution-deployed.html
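The wait command above is the simplest option. If you want to see the status as it changes, a small polling sketch along the same lines (assuming a configured aws CLI; the distribution ID is a placeholder, and the function's "Updating" state roughly corresponds to the distribution's "InProgress" status):

```python
# Sketch: poll the associated distribution's status via the AWS CLI until it
# reports "Deployed". Assumes the aws CLI is installed and configured.
import json
import subprocess
import time

def get_distribution_status(dist_id):
    """Return the distribution's Status field ("InProgress" or "Deployed")."""
    out = subprocess.check_output(
        ["aws", "cloudfront", "get-distribution", "--id", dist_id]
    )
    return json.loads(out)["Distribution"]["Status"]

def wait_until_deployed(status_fn, poll_seconds=30):
    """Poll status_fn until it returns "Deployed"."""
    while status_fn() != "Deployed":
        time.sleep(poll_seconds)

# Usage (distribution ID is a placeholder):
#   wait_until_deployed(lambda: get_distribution_status("E1234567890ABC"))
```

Separating the polling loop from the status lookup keeps the loop reusable, e.g. if you later switch to the SDK instead of shelling out to the CLI.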

Related

How to have a static URL for serverless framework AWS deployments?

When I use the Serverless Framework to deploy (sls deploy) to an AWS endpoint with Lambdas and DynamoDB, the host of the endpoint changes every time, with a different prefix. This is a problem because if I release a client, I won't be able to deploy with serverless again.
For example, a host might look like this: 9svhw8numd.execute-api.us-east-1.amazonaws.com
That 9svhw8numd part changes each time there is a new deployment.
I've checked the serverless documentation and I can't seem to find anything that tells me how to configure it to have a static URL. How do I keep the host static for each serverless deployment?
What you're seeing is the URL of an AWS API Gateway instance. If you delete and re-create your serverless stack, a new endpoint will be generated. If you don't remove the stack, it stays the same across multiple serverless deploy commands.
If you'd like a custom domain instead of one generated by API Gateway, you'll need to configure a domain name via AWS Route 53. If you're using the Serverless Framework, here's a good guide to do that.

Trigger Gitlab-ci from aws lambda

I'm looking for a Lambda that can trigger a GitLab CI pipeline to deploy specific branches and send the results to Slack.
Thanks.
Trigger a pipeline
As per GitLab Trigger API manual:
To trigger a job you need to send a POST request to GitLab’s API endpoint:
curl -X POST <API url>/projects/<your_awesome_gitlab_project>/trigger/pipeline
The required parameters are the trigger's token and the Git ref on which the trigger will be performed. Valid refs are branches and tags. The :id of a project can be found by querying the API or by visiting the CI/CD settings page, which provides self-explanatory examples.
Watching a pipeline
To check pipeline results, use CloudWatch Events:
You can set up a rule to run an AWS Lambda function on a schedule. This tutorial shows how to use the AWS Management Console or the AWS CLI to create the rule. If you would like to use the AWS CLI but have not installed it, see the AWS Command Line Interface User Guide.
To check job status, use the Get a single pipeline or List project pipelines API calls.
curl --header "PRIVATE-TOKEN: " "https://gitlab.example.com/api/v4/projects/1/pipelines/46"
Inform on Slack
To send Slack notifications from Lambda, follow this tutorial:
Creating an AWS Lambda Function and API Endpoint | Slack
Two cents about endpoint security
The CI trigger is secured by a token; in general, that's enough to secure your endpoints.
But if that isn't enough, there are some techniques to "hide" endpoints:
client IP whitelisting with GitLab
AWS Security Groups for Lambda or for EC2
Securing URLs with Nginx or with HAProxy
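Putting the trigger and Slack pieces together, a Lambda handler might look roughly like this. This is a stdlib-only sketch: the GitLab URL, project id, trigger token and Slack webhook are all placeholders you would normally read from Lambda environment variables.

```python
# Sketch of a Lambda handler that triggers a GitLab pipeline and posts the
# result to Slack. All constants below are placeholders.
import json
import urllib.parse
import urllib.request

GITLAB_API = "https://gitlab.example.com/api/v4"   # placeholder
PROJECT_ID = "1"                                   # placeholder project id
TRIGGER_TOKEN = "your-trigger-token"               # placeholder
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def build_trigger_request(ref):
    """Build the POST request for GitLab's pipeline trigger endpoint."""
    data = urllib.parse.urlencode({"token": TRIGGER_TOKEN, "ref": ref}).encode()
    url = f"{GITLAB_API}/projects/{PROJECT_ID}/trigger/pipeline"
    return urllib.request.Request(url, data=data, method="POST")

def handler(event, context):
    ref = event.get("ref", "master")  # branch or tag to deploy
    with urllib.request.urlopen(build_trigger_request(ref)) as resp:
        pipeline = json.load(resp)
    # Notify Slack via an incoming webhook.
    payload = json.dumps(
        {"text": f"Pipeline {pipeline['id']} started for {ref}"}
    ).encode()
    slack = urllib.request.Request(
        SLACK_WEBHOOK, data=payload,
        headers={"Content-Type": "application/json"}, method="POST",
    )
    urllib.request.urlopen(slack)
    return {"pipeline_id": pipeline["id"]}
```

For watching the pipeline to completion you would still pair this with the scheduled CloudWatch Events rule described above, polling the pipelines API with the same urllib pattern.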

How to get BitBucket Server v5.15.1 (on-premise) webhook to trigger Lambda via API Gateway to get into S3?

I'm working with an older, on-premise version of BitBucket Server (v5.15.1) that does not have the Bitbucket Pipelines feature. I need its webhooks to notify AWS Lambda via an HTTPS POST through AWS API Gateway after a commit is made to the master branch. Lambda then downloads a copy of the repo, zips it up and places it into an S3 bucket, and of course this is where CodePipeline can finally be triggered. But I'm having issues getting this on-premise BitBucket Server, located within my AWS account, to connect its webhook to Lambda.
I tried following the documentation below and launched the CloudFormation template with all the needed resources, but I'm assuming it is for Bitbucket Cloud, not Bitbucket Server on-premise.
https://aws.amazon.com/blogs/devops/integrating-git-with-aws-codepipeline/
Anyone's help with this would be really appreciated.
I suppose you are following this AWS blog:
https://aws.amazon.com/blogs/devops/integrating-codepipeline-with-on-premises-bitbucket-server/
We have also implemented it. If the event is reaching Lambda, then make sure your Lambda is inside a VPC and has the correct outbound rules to connect to the Bitbucket server over HTTPS. Also make sure the Bitbucket server accepts inbound traffic from the VPC IP range.

AWS trying to use Lambda

Sorry for asking this kind of question, but I'm a bit lost here...
I have an app consisting of an Angular 4 frontend and a Java backend.
I'm planning to use AWS Lambda, as I got interested after seeing the videos from Amazon.
The issue is that I don't know how to get the best from AWS.
My Java app has a very time-consuming task that processes images (it takes several seconds).
I'm not sure if I can deploy my whole app on Lambda, or if the idea is to use an EC2 server and run only the specific image-processing task in Lambda. Can anyone please shed some light here?
Also, can the frontend app be deployed in a Lambda, or is Lambda just for specific tasks?
EDIT:
The application flow would be:
The user uploads an image in the Angular app; the image goes to the backend server in Java and is stored in (maybe) an AWS bucket. Then the Java app processes the image with ImageMagick and the result is stored in (maybe) another bucket.
So the question is: where do I need to use Lambda? Just to convert the image, or would the full backend (and maybe frontend) app live there?
I'm asking because I cannot find enough information about that...
First of all, you can deploy your Angular frontend to Amazon S3. You can also use AWS CloudFront to add custom domains and free SSL certificates (from Amazon Certificate Manager) for your domain. For more details refer to the article Deploying Angular/React Apps in AWS.
If you don't need to show the image processing results immediately in the frontend
For the image processing backend you can use AWS API Gateway and Lambda along with S3. The recommended flow is to use the API backend to get a signed URL, or use AWS STS in Lambda (or Cognito Federated Identities) to get temporary access to the S3 bucket, and upload the image directly to S3 from the Angular app. For more details refer to the article Upload Files Securely to AWS S3 Directly from Browser.
Note: AWS recently released a JavaScript Library called AWS Amplify to simplify the implementation of the above tasks.
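A minimal sketch of the signed-URL step, assuming an API Gateway proxy integration in front of the Lambda; the bucket name and the request body shape are placeholders:

```python
# Sketch of a Lambda that hands the browser a presigned PUT URL so the Angular
# app can upload the image straight to S3. UPLOAD_BUCKET is a placeholder.
import json

UPLOAD_BUCKET = "my-upload-bucket"  # placeholder

def make_response(url, key):
    """API Gateway proxy-style response carrying the presigned URL."""
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},
        "body": json.dumps({"uploadUrl": url, "key": key}),
    }

def handler(event, context):
    import boto3  # preinstalled in the Lambda runtime
    s3 = boto3.client("s3")
    # Assumed request body shape: {"filename": "cat.jpg"}
    key = json.loads(event["body"])["filename"]
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": UPLOAD_BUCKET, "Key": key},
        ExpiresIn=300,  # URL valid for 5 minutes
    )
    return make_response(url, key)
```

The browser then does a plain HTTP PUT of the file to uploadUrl, so the image bytes never pass through Lambda.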
After uploading the image to S3, you can set up an event-driven workflow by using an Amazon S3 trigger to invoke a Lambda function that performs the image processing and saves the processed image back to S3 (if you need to store the result).
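The S3-triggered processing step could look roughly like this; the result bucket and the processing itself are placeholders:

```python
# Sketch of an S3-triggered Lambda for image processing. RESULT_BUCKET and
# process_image are placeholders; boto3 is preinstalled in the Lambda runtime.
import os.path
import urllib.parse

RESULT_BUCKET = "my-processed-images"  # placeholder output bucket

def result_key(key):
    """Map an uploaded key to the key the processed image is stored under."""
    base, ext = os.path.splitext(key)
    return f"{base}-processed{ext}"

def process_image(data):
    # Placeholder: run ImageMagick / Pillow here and return the new bytes.
    return data

def handler(event, context):
    import boto3  # preinstalled in the Lambda runtime
    s3 = boto3.client("s3")
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys are URL-encoded, so decode before use.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        s3.put_object(Bucket=RESULT_BUCKET,
                      Key=result_key(key),
                      Body=process_image(body))
```

Writing results to a separate bucket (rather than the upload bucket) avoids the trigger re-invoking the function on its own output.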
If you need to show the result immediately
Still use the previous approach up to the upload to S3 from the frontend, and then invoke an API Gateway Lambda function, passing the file's S3 path, to process the image.
To understand the details of connecting both frontend and backend with AWS serverless technologies, refer to the article Full Stack Serverless Web Apps with AWS.
As a side note, you should be able to implement the required functionality with AWS Lambda without using AWS EC2.

Deploying AWS Global infrastructure with API Gateway, Lambda, Cognito, S3, Dynamodb

Let's say I need an API Gateway that is going to run Lambdas, and I want to build the best-performing globally distributed infrastructure. I will also use Cognito for authentication, and DynamoDB and S3 for user data and frontend statics.
My app is located at myapp.com
First the user gets the static frontend from the nearest location:
user ===> edge location at CloudFront <--- S3 at any region (with static front end)
After that we need to communicate with API Gateway.
user ===> API Gateway ---> Lambda ---> S3 || Cognito || Dynamodb
API Gateway can be located in several regions, and even though it is distributed with CloudFront, each endpoint points to a Lambda located in a given region. Say I deploy an API at eu-west-1: if a request is sent from the USA, even though my API is on CloudFront, the Lambda it runs is located at eu-west-1, so latency will be high anyway.
To avoid that, I would need to deploy another API at us-east-1, along with all my Lambdas, and have that API point to those Lambdas.
If I deploy one API for every single region, I would need one endpoint for each of them, and the frontend would have to decide which one to request. But how could it know which location is the nearest?
The ideal scenario is a single global endpoint at api.myapp.com that goes to the nearest API Gateway, which runs Lambdas located in that region too. Can I configure that using Route 53 latency routing with multiple A records, one pointing to each API Gateway?
If this is not the right way to do this, can you point me in the right direction?
AWS recently announced support for regional API endpoints, with which you can achieve this.
Below is an AWS blog post which explains how to achieve it:
Building a Multi-region Serverless Application with Amazon API Gateway and AWS Lambda
Excerpt from the blog:
The default API endpoint type in API Gateway is the edge-optimized API
endpoint, which enables clients to access an API through an Amazon
CloudFront distribution. This typically improves connection time for
geographically diverse clients. By default, a custom domain name is
globally unique and the edge-optimized API endpoint would invoke a
Lambda function in a single region in the case of Lambda integration.
You can’t use this type of endpoint with a Route 53 active-active
setup and fail-over.
The new regional API endpoint in API Gateway moves the API endpoint
into the region and the custom domain name is unique per region. This
makes it possible to run a full copy of an API in each region and then
use Route 53 to use an active-active setup and failover.
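Concretely, once a regional endpoint (with a regional custom domain) exists in each region, the Route 53 side of the active-active setup is one latency-based alias record per region, all under the same name. A sketch of building such a record; every ID and domain name here is a placeholder:

```python
# Sketch: build a Route 53 latency-based alias record pointing api.myapp.com
# at one regional API Gateway custom domain. Create one such record per
# region, distinguished by SetIdentifier.
def latency_record_change(region, apigw_domain, apigw_zone_id):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.myapp.com.",
            "Type": "A",
            "SetIdentifier": region,   # must be unique per record in the set
            "Region": region,          # enables latency-based routing
            "AliasTarget": {
                "HostedZoneId": apigw_zone_id,  # zone of the regional endpoint
                "DNSName": apigw_domain,
                "EvaluateTargetHealth": True,
            },
        },
    }

# Usage sketch (requires boto3 and AWS credentials; IDs are placeholders):
#   import boto3
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z123EXAMPLE",  # hosted zone of myapp.com
#       ChangeBatch={"Changes": [latency_record_change(
#           "eu-west-1",
#           "d-abc123.execute-api.eu-west-1.amazonaws.com",
#           "ZEXAMPLEZONE")]})
```

Route 53 then answers each client with the record whose region has the lowest measured latency, which is exactly the single global api.myapp.com endpoint the question asks for.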
Unfortunately, this is not currently possible. The primary blocker here is CloudFront.
MikeD@AWS provides the info on their forums:
When you create a custom domain name, it creates an associated CloudFront distribution for the domain name, and CloudFront enforces global uniqueness on the domain name.
If a CloudFront distribution with the domain name already exists, then the CreateCloudFrontDistribution call will fail and API Gateway will return an error without saving the domain name or allowing you to define its associated API(s).
Thus, there is currently (Jun 29, 2016) no way to get API Gateway in multiple regions to handle the same domain name.
AWS has provided no update since confirming the existence of an open feature request on July 4, 2016. See the AWS forum thread for updates.
Check out Lambda@Edge
Q: What is Lambda@Edge? Lambda@Edge allows you to run code across AWS locations globally without provisioning or managing servers, responding to end users at the lowest network latency. You just upload your Node.js code to AWS Lambda and configure your function to be triggered in response to Amazon CloudFront requests (i.e., when a viewer request lands, when a request is forwarded to or received back from the origin, and right before responding back to the end user). The code is then ready to execute across AWS locations globally when a request for content is received, and scales with the volume of CloudFront requests globally. Learn more in our documentation.
Use case: minimizing latency for globally distributed users
Q: When should I use Lambda@Edge? Lambda@Edge is optimized for latency-sensitive use cases where your end viewers are distributed globally. Ideally, all the information you need to make a decision is available at the CloudFront edge, within the function and the request. This means that use cases where you are looking to make decisions on how to serve content based on user characteristics (e.g., location, client device, etc.) can now be executed and served right from the edge in Node.js 6.10 without having to be routed back to a centralized server.
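The quoted FAQ mentions Node.js, but newer Lambda@Edge runtimes also allow Python. A sketch of a viewer-request handler that routes by viewer country; the /de prefix is illustrative, and CloudFront must be configured to add the CloudFront-Viewer-Country header:

```python
# Sketch of a Lambda@Edge viewer-request handler that rewrites the URI based
# on the viewer's country. The /de prefix is a made-up example path.
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    # CloudFront header names are lowercased; values are lists of dicts.
    country = headers.get(
        "cloudfront-viewer-country", [{"value": ""}]
    )[0]["value"]
    if country == "DE":
        request["uri"] = "/de" + request["uri"]  # serve localized content
    # Returning the (possibly modified) request lets CloudFront continue.
    return request
```

Because the decision is made entirely from data already present at the edge (the request and its headers), no round trip to a centralized origin is needed, which is exactly the use case the FAQ describes.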