How to re-deploy (update) a Cloud Function via CLI? - google-cloud-platform

I have created an HTTP-triggered Cloud Function via the GUI, which uses source code from a repository of mine. If the source code changes, I can fairly easily re-deploy (update) the Cloud Function manually via the UI by clicking on Edit -> Code -> Deploy.
I would like to set up a CI/CD pipeline, using Cloud Build, with a trigger on the repo, so that when the source code changes (on the master branch), the Cloud Function is re-deployed. In order to do this, I need to figure out how to re-deploy the Cloud Function via the CLI. Reading the cloud functions deploy docs, it says "create or update a Google Cloud Function". However much I try, though, I don't manage to update my existing Cloud Function; I only create new ones.
I've tried updating it as specified in this SO answer, by running:
gcloud functions deploy <MY-FUNCTION-NAME> --source=https://source.developers.google.com/projects/<MY-PROJECT>/repos/<MY-REPO>/paths/<FOLDER-PATH>
which gives the error One of arguments [--trigger-topic, --trigger-bucket, --trigger-http, --trigger-event] is required: You must specify a trigger when deploying a new function. Notice the "...when deploying a new function" part.
Any ideas of how I can re-deploy (update) my existing one, and then (automatically), also use the latest source code?

After a lot of testing different things, I finally figured it out. In order to re-deploy the same Cloud Function, I needed to specify all the arguments that define my Cloud Function; the required ones alone were not enough. To re-deploy:
gcloud functions deploy <MY-FUNCTION-NAME> --source=https://source.developers.google.com/projects/<MY-PROJECT>/repos/<MY-REPO>/moveable-aliases/<MY-BRANCH>/paths/<FOLDER-PATH> --trigger-http --runtime=<RUNTIME> --allow-unauthenticated --entry-point=<ENTRY-POINT> --region=<REGION> --memory=<MEMORY>

You need to pass all the arguments you used in the first deploy again so it concludes it's the same function. Take a look here for more on creating CI pipelines for functions with Cloud Build.
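For the Cloud Build trigger itself, here is a minimal sketch, assuming a cloudbuild.yaml committed next to the function source. The builder image, the trigger flags (which can differ between gcloud versions), and the use of --source=. to deploy the checked-out workspace are my assumptions, and the Cloud Build service account needs the Cloud Functions Developer role first:
# Hypothetical cloudbuild.yaml written inline for illustration
cat > cloudbuild.yaml <<'EOF'
steps:
  - name: gcr.io/google.com/cloudsdktool/cloud-sdk
    args:
      - gcloud
      - functions
      - deploy
      - <MY-FUNCTION-NAME>
      - --source=.
      - --trigger-http
      - --runtime=<RUNTIME>
      - --allow-unauthenticated
      - --entry-point=<ENTRY-POINT>
      - --region=<REGION>
      - --memory=<MEMORY>
EOF
# Point a trigger on the master branch of the repo at that file
gcloud builds triggers create cloud-source-repositories --repo=<MY-REPO> --branch-pattern='^master$' --build-config=cloudbuild.yaml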

Related

How to manage automatic deployment to ECS using Terraform Cloud and CircleCI?

I have an ECS task which has 2 containers using 2 different images, both hosted in ECR. There are 2 GitHub repos for the two images (app and api), and a third repo for my IaC code (infra). I am managing my AWS infrastructure using Terraform Cloud. The ECS task definition is defined there using Cloudposse's ecs-alb-service-task, with the containers defined using ecs-container-definition. Presently I'm using latest as the image tag in the task definition defined in Terraform.
I am using CircleCI to build the Docker containers when I push changes to GitHub. I am tagging each image with latest and the variable ${CIRCLE_SHA1}. Both repos also update the task definition using the aws-ecs orb's deploy-service-update job, setting the tag used by each container image to the SHA1 (not latest). Example:
container-image-name-updates: "container=api,tag=${CIRCLE_SHA1}"
When I push code to the repo for e.g. api, a new version of the task definition is created, the service's version is updated, and the existing task is restarted using the new version. So far so good.
The problem is that when I update the infrastructure with Terraform, the service isn't behaving as I would expect. The ecs-alb-service-task has a boolean called ignore_changes_task_definition, which is true by default.
When I leave it as true, Terraform Cloud successfully creates a new version whenever I apply changes to the task definition. (A recent example was to update environment variables.) BUT it doesn't update the version used by the service, so the service carries on using the old version. Even if I stop a task, it will respawn using the old version. I have to manually go in and use the Update flow, or push changes to one of the code repos. Then CircleCI will create yet another version of the task definition and update the service.
If I instead set this to false, Terraform Cloud will undo the changes to the service performed by CircleCI. It will reset the task definition version to the last version it created itself!
So I have three questions:
How can I get Terraform to play nice with the task definitions created by CircleCI, while also updating the service itself if I ever change it via Terraform?
Is it a problem to be making changes to the task definition from THREE different places?
Is it a problem that the image tag is latest in Terraform (because I don't know what the SHA1 is)?
I'd really appreciate some guidance on how to properly set up this CI flow. I have found next to nothing online about how to use Terraform Cloud with CI products.
I have learned a bit more about this problem. It seems like the right solution is to use a CircleCI workflow to manage Terraform Cloud, instead of having the two services effectively competing with each other. By default Terraform Cloud will expect you to link a repo with it and it will auto-plan every time you push. But you can turn that off and use the terraform orb instead to run plan/apply via CircleCI.
You would still leave ignore_changes_task_definition set to true. Instead, you'd add another step to the workflow after the terraform/apply step has made the change. This would be aws-ecs/run-task, which should relaunch the service using the most recent task definition, which was (possibly) just created by the previous step. (See the task-definition parameter.)
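For reference, a hedged CLI equivalent of that post-apply step (the cluster, service, and family names are placeholders): updating the service with only the task-definition family, no revision number, makes ECS pick the latest ACTIVE revision, whichever pipeline registered it.
aws ecs update-service --cluster <MY-CLUSTER> --service <MY-SERVICE> --task-definition <MY-TASK-FAMILY> --force-new-deployment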
I have decided that this isn't worth the effort for me, at least not at this time. The conflict between Terraform Cloud and CircleCI is annoying, but isn't that acute.

Is there a way of running AWS Step Functions locally when defined by CDK?

AWS Step Functions may be run in a local Docker environment using Step Functions Local Docker. However, the step functions need to be defined using the JSON-based Amazon States Language. This is not at all convenient if your AWS infrastructure (Step Functions plus lambdas) is defined using AWS CDK/CloudFormation.
Is there a way to create the Amazon States Language definition of a state machine from the CDK or CloudFormation output, such that it’s possible to run the step functions locally?
My development cycle is currently taking me 30 minutes to build/deploy/run my Lambda-based step functions in AWS in order to test them and there must surely be a better/faster way of testing them than this.
We have been able to achieve this by the following:
Download:
https://docs.aws.amazon.com/step-functions/latest/dg/sfn-local.html
To run Step Functions Local, run the following in the directory where you extracted the Step Functions Local files:
java -jar StepFunctionsLocal.jar --lambda-endpoint http://localhost:3003
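If you'd rather use the Docker route mentioned in the question, an equivalent (as far as I know) is the amazon/aws-stepfunctions-local image configured through environment variables; host.docker.internal assumes Docker Desktop:
docker run -p 8083:8083 -e LAMBDA_ENDPOINT=http://host.docker.internal:3003 amazon/aws-stepfunctions-local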
To create a state machine, you need a JSON definition. It can be pulled from the generated template, or you can get the AWS Toolkit plugin for VS Code, type "step functions", and select from a template as your starter. You can also get it from the Definition tab of the step function in the AWS console.
Run this command in the same directory as the definition json:
aws stepfunctions --endpoint-url http://localhost:8083 create-state-machine --definition file://step-function.json --name "local-state-machine" --role-arn "arn:aws:iam::012345678901:role/DummyRole"
You should be able to hit the SF now (hopefully) :)
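To actually run an execution against it, something like this should work; the ARN below is the placeholder-account ARN that Step Functions Local typically returns from create-state-machine, so use whatever ARN that call printed:
aws stepfunctions --endpoint-url http://localhost:8083 start-execution --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:local-state-machine --input '{}'
aws stepfunctions --endpoint-url http://localhost:8083 list-executions --state-machine-arn arn:aws:states:us-east-1:123456789012:stateMachine:local-state-machine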
You can use cdk watch or the --hotswap option to deploy your updated state machine or Lambda functions without a CloudFormation deployment.
https://aws.amazon.com/blogs/developer/increasing-development-speed-with-cdk-watch/
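For reference, those commands are simply (the stack name is a placeholder):
# One-off deploy that hotswaps supported resources (e.g. Lambda code, state machine definitions) instead of a full CloudFormation update
cdk deploy --hotswap <MY-STACK>
# Keep watching the source and hotswap on every change
cdk watch <MY-STACK>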
If you want to test with Step Functions local, cdk synth generates the CloudFormation code containing the state machine's ASL JSON definition. If you get that and replace the CloudFormation references and intrinsic functions, you can use it to create and execute the state machine in Step Functions Local.
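A hedged sketch of that extraction (the jq filter and the DefinitionString property name are my assumptions about what CDK synthesizes; the output still contains Fn::Join/Ref placeholders you must resolve before feeding it to Step Functions Local):
cdk synth --json > template.json
jq '.Resources | to_entries[] | select(.value.Type == "AWS::StepFunctions::StateMachine") | .value.Properties.DefinitionString' template.json > step-function.json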
How some people have automated this:
https://nathanagez.com/blog/mocking-service-integration-step-functions-local-cdk/
https://github.com/kenfdev/step-functions-testing
Another solution that might help is to use LocalStack, which supports many tools such as CDK and CloudFormation and lets developers run the stack locally.
There are a variety of ways to run it; one of them is to run it manually in a Docker container, according to the Get Started instructions.
Next, following the What's Next instructions, configure the AWS CLI or use awslocal.
All subsequent steps and templates should be the same as for the AWS API in the cloud.
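A minimal sketch of that route, assuming Docker plus the awslocal wrapper and LocalStack's default edge port; the role ARN is a dummy:
docker run -d -p 4566:4566 localstack/localstack
awslocal stepfunctions create-state-machine --name local-state-machine --definition file://step-function.json --role-arn arn:aws:iam::000000000000:role/DummyRole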

In GCP, how can I trigger the automatic deployment of a Cloud Function into the Production project from the source files present in the repo of the DEV project?

I want to automate the deployment of a Cloud Function into the Production project through Cloud Build, where the source files are present in the Cloud Source Repository of the DEV project. How can I ensure that the moment I push the code to the production branch of the DEV project's Cloud Source Repository, the Cloud Function gets created in the Production project?
If I understand correctly, you are trying to trigger a build from a repository stored in another project.
This is not possible; the build triggers must be in the same project as the repositories.
I think my answer will help here: How to pass API parameters to GCP cloud build triggers
Basically, as Claudio recommended, use the examples to build your steps. I believe what you want to do is create a step that triggers the Cloud Function when you push changes to the DEV project's production branch. When the trigger is called and run, you then add a step to either run the cloud function or use the REST API to trigger the build by its ID. See my example above.
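One hedged way to make the cross-project deployment itself work from a build running in the DEV project (project IDs and numbers are placeholders, and further roles such as Service Account User on the runtime service account may also be required):
# Allow DEV's Cloud Build service account to deploy functions in the production project
gcloud projects add-iam-policy-binding <PROD-PROJECT> --member=serviceAccount:<DEV-PROJECT-NUMBER>@cloudbuild.gserviceaccount.com --role=roles/cloudfunctions.developer
# Inside the build step, target the production project explicitly
gcloud functions deploy <MY-FUNCTION-NAME> --project=<PROD-PROJECT> --source=. --trigger-http --runtime=<RUNTIME> --region=<REGION>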

Google Cloud Function - What is the default update behaviour for functions that rely on code in a Repo?

I have a Google Cloud Function that runs from source code stored in a Google Cloud Source Repository. If I update the source code in the repo, do I have to manually update the cloud function or is this done automatically?
There is no automatic deployment. You will have to run whatever command line you would normally run to deploy the code to Cloud Functions.

How to avoid deployment of all five functions in a Serverless Framework service if only one function is changed

I have a Serverless Framework service with (say) five AWS Lambda functions using Python. Using GitHub, I have created a CodePipeline for CI/CD.
When I push code changes, it deploys all the functions, even if only one function is changed.
I want to avoid deploying all the functions; the CI/CD should determine the changed function and deploy only that one. The rest of the functions should not be deployed again.
Moreover, is there any way to deal with such problems using AWS SAM? At this stage I have the option of switching to SAM and quitting the Serverless Framework.
Unfortunately there is no "native" way to do it. You would need to write a bash script that loops through the changed files and calls sls deploy -s production -f <function-name> for each one of them.
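A rough sketch of such a script, assuming each function lives in its own directory under functions/ and the Serverless function name matches that directory (the layout is an assumption, not something from the question):
# Deploy only the functions whose files changed in the last commit
for fn in $(git diff --name-only HEAD~1 HEAD | grep '^functions/' | cut -d/ -f2 | sort -u); do
  sls deploy -s production -f "$fn"
done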
I also faced this issue, and eventually it drove me to create an alternative.
Rocketsam takes advantage of sam local to allow deploying only changed functions instead of the entire microservice.
It also supports other cool features such as:
Fetching live logs for each function
Sharing code between functions
Template per function instead of one big template file
Hope it solves your issue :)