Until now, I've used Cloud Build as a vanilla CI/CD tool for running Terraform and building infrastructure (sometimes I have Docker containers to build, sometimes I don't).
Now that Cloud Workflows is available, I'm wondering whether it could be a better tool for pipelining the execution of atomic steps, for ease of use and better control (e.g. conditional execution, error handling, centralized log pushing, and so on).
I think all of the above can be done in Cloud Build, but it's usually not trivial.
Is Workflows suited for that and, if not, what is the best use case for this new tool instead?
There are similarities if, for example, your Cloud Build pipeline only calls APIs to run/deploy/configure things.
However, keep in mind two things:
Cloud Workflows can only call APIs and sleep. You can't build a container image (with Docker, for example) in Workflows; it's not a runtime environment, just a service that calls APIs.
Cloud Build can be triggered on push, tag, and pull request. You can't do that with Workflows.
So, yes, sometimes you can ask yourself whether you could swap one for the other, but personally, I think you have to use the right product for the right job.
API call orchestration -> Workflows
CI/CD -> Cloud Build
Hi, I am using the Serverless Framework to develop my application and I need to set it up in a local environment. I am using API Gateway, Lambda, VPC, SNS, SQS, and a DB connected via VPC peering. Currently, every time I deploy and test my code it's a tedious process that takes 5 minutes per deploy. Is there any way to set up a local environment that has everything in one place?
It should be possible in theory, but it is not an easy thing to do. There are products like LocalStack that offer exactly this.
But I would not recommend going that route. Ultimately, by design, this will always be a huge cat-and-mouse game. AWS introduces a new feature or changes some minor detail of their implementation, and products like LocalStack need to catch up. Furthermore, you will always only get an "approximation" of the "actual cloud". It will never be a 100% match.
I would also expect a lot of work to get products like LocalStack working properly with your setup and to keep them running well.
Therefore, I would propose to invest the same time into proper developer experience within the "actual cloud". That is what we do: every developer deploys their version of the project to AWS.
This is also not trivial, but the end result is not a "fake version" of the cloud that might or might not reflect the "real cloud".
The key to achieving this is Infrastructure as Code and as much automation as possible. We use Terraform and Makefiles, which work very well for us. If done properly, you only ever build and deploy what you changed. The result is that changes can be deployed to AWS in seconds, and the developer can test the result either through the Makefile itself or using the AWS console.
Another upside is that, in theory, you need to do all of this work anyway for your continuous deployment, so ultimately you are reducing work by not having to maintain both local deployments and cloud deployments.
I am quite new to GCP. My requirement is to implement a DevOps solution on GCP. We are going to use Python scripts and BigQuery.
I want to know: what is the most cost-effective DevOps solution to implement on GCP?
The built-in CI/CD solution on Google Cloud is Cloud Build. I like this tool and strongly recommend it. In summary, you define the steps of your build. Each step is based on a container: load it, use it, go to the next one. Only the /workspace directory is kept between steps (which sometimes creates challenges). You can redefine your entrypoint and your env vars for a step, and so on. There are a lot of capabilities, and there is plenty of help/tips on Stack Overflow and elsewhere.
As for pricing, it's interesting: you get 120 minutes of build time free per day, and that's PER BILLING ACCOUNT.
I'm not a Jenkins expert; I last used it 6 years ago!
The main differences are the GUI and plugins. With Jenkins you can do everything through the GUI; with Cloud Build, only the triggers and the running/terminated jobs (plus logs) are viewable in the console. The steps' configuration is done only in code (a YAML or JSON file). The equivalent of plugins are custom builders, but you don't have the same library as with Jenkins.
On the other hand, Jenkins needs to be hosted on a VM, which has to be upgraded and patched. And you have a minimum fee for Jenkins even if you don't run any builds.
Opinionated answers are difficult, because it depends on many factors!
I'm presently looking into GCP's Deployment Manager to deploy new projects, VMs and Cloud Storage buckets.
We need a web front end that authenticated users can connect to in order to deploy the required infrastructure, though I'm not sure what DevOps tools are recommended to work with this system. We have an instance of Jenkins and Octopus Deploy, though I see on Google's Configuration Management page (https://cloud.google.com/solutions/configuration-management) that they suggest other tools like Ansible, Chef, Puppet and Saltstack.
I'm supposing that through one of these I can update something simple like a name variable in the config.yaml file and deploy a project.
Could I also ensure a chosen name for a project, VM or Cloud Storage bucket fits with a specific naming convention with one of these systems?
Which system do others use and why?
I use Deployment Manager, as all 3rd party tools are reliant upon the presence of GCP APIs, as well as trusting that those APIs are in line with the actual functionality of the underlying GCP tech.
GCP is decidedly behind the curve on API development, which means that even if you wanted to use TF or whatever, at some point you're going to be stuck inside the SDK, anyway. So that's why I went with Deployment Manager, as much as I wanted to have my whole infra/app deployment use other tools that I was more comfortable with.
To specifically answer your question about validating naming schema, what you would probably want to do is write a wrapper script that uses the gcloud deployment-manager subcommand. Do your validation in the wrapper script, then run the gcloud deployment-manager stuff.
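As a rough illustration, a minimal wrapper along those lines could look like the Python sketch below. The naming-convention regex, the team/environment prefixes and the argument handling are purely hypothetical placeholders; only the `gcloud deployment-manager deployments create` invocation is the real command.

```python
import re
import subprocess
import sys

# Hypothetical naming convention: "<team>-<env>-<resource>", lowercase,
# e.g. "myteam-prod-bucket01".
NAME_PATTERN = re.compile(r"^myteam-(dev|staging|prod)-[a-z0-9]+$")

def deploy(deployment_name: str, config_path: str) -> None:
    # Validate the deployment name before anything reaches Deployment Manager.
    if not NAME_PATTERN.match(deployment_name):
        sys.exit(f"'{deployment_name}' does not match the naming convention")

    # Hand off to the gcloud deployment-manager subcommand.
    subprocess.run(
        ["gcloud", "deployment-manager", "deployments", "create",
         deployment_name, "--config", config_path],
        check=True,
    )

if __name__ == "__main__":
    deploy(sys.argv[1], sys.argv[2])
```

You would then run something like `python deploy.py myteam-prod-bucket01 config.yaml`, and the same pattern works for updates via `deployments update`.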
Word of warning about Deployment Manager: it makes troubleshooting very difficult. Very often it will obscure the error that could help you actually establish the root cause of a problem. I can't tell you how many times somebody in my office has shouted "UGGH! Shut UP with your Error 400!" I hope that Google takes note of my pointed survey feedback and refactors DM to pass the original error through.
Anyway, hope this helps. GCP has come a long way, but they've still got work to do.
I have some software that processes files. What I need is:
- Start a default image on Google Cloud (I think Docker would be a good solution) using an API or a run command
- Download files from Google Storage
- Process them: run my software using those downloaded files
- Upload the result to Google Storage
- Shut the image down, expecting not to be billed anymore
What I do know is how to create my image, hehe. But I can't find any info telling me which Google Cloud service I should use, or even whether I can do it the way I'm thinking. I think I'm not using the right keywords to find what I need.
I was looking at Kubernetes, but I couldn't figure out how to manipulate those instances to execute one-time processing.
[EDIT]
To explain the process better: I have an app that receives images and sends them to Google Storage. After that, I need to process those images: apply filters, georeference them, split them, etc. So I want to start a Docker image to process them and upload the results to Google Cloud again.
If you are using any of the runtimes supported by Google Cloud Functions, they are the easiest way to do those kinds of operations (i.e. fetch something from Google Cloud Storage, perform some actions on those files, and upload them again). The Cloud Function will be triggered by an event of your choice and, after the job, it will die.
The next option in terms of complexity would be to deploy a Google App Engine application in the standard environment. It allows you to deploy your own application written in any of the languages supported by this environment. While there is traffic to your application you will have instances serving it, but the number of running instances can go down to 0 when they are not serving, which means lower cost.
Another option would be Google App Engine in the flexible environment. This product allows you to deploy your application in any custom runtime. This option always has at least one instance running, so it would never shut down completely.
Lastly, you can use Google Compute Engine to "create and run virtual machines on Google infrastructure". Unlike GAE, this is not as managed by Google, which means that most of the configuration is up to you. In this case, you would need to programmatically tell your VM to shut down after you have finished your operations.
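If you go the Compute Engine route, the "shut down when done" step can be as simple as the sketch below (the processing itself is elided, and it assumes your software runs directly on a Linux VM). A stopped instance is no longer billed for vCPU/RAM, although its persistent disk still is.

```python
import subprocess

def main() -> None:
    # ... download inputs from Cloud Storage, run the processing software,
    #     upload the results back to Cloud Storage ...

    # Power the VM off once the work is done. When the guest OS halts,
    # Compute Engine marks the instance as stopped (TERMINATED).
    subprocess.run(["sudo", "shutdown", "-h", "now"], check=True)

if __name__ == "__main__":
    main()
```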
Based on your edit where you stated that you already have an app that is inserting images into Google Cloud Storage, your easiest option would be to use Cloud Functions that are triggered by additions, changes, or deletions to objects in Cloud Storage buckets.
You can follow the Cloud Functions tutorial for Cloud Storage to get an idea of the generic process and then implement your own code that handles your specific tasks. There are other tutorials, like the ImageMagick tutorial for Cloud Functions, that might also be relevant to the type of processing you intend to do.
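To give an idea of the shape of such a function (this is a rough sketch, not the tutorial code): a background Cloud Function in Python, triggered when an object is finalized in your upload bucket, could look roughly like this. The output bucket name and the processing step are placeholders.

```python
from google.cloud import storage

storage_client = storage.Client()
OUTPUT_BUCKET = "my-processed-images"  # hypothetical destination bucket

def process_image(event, context):
    """Triggered by google.storage.object.finalize on the upload bucket."""
    bucket_name = event["bucket"]
    blob_name = event["name"]

    # Download the newly uploaded object to the function's temp filesystem.
    local_path = "/tmp/" + blob_name.replace("/", "_")
    storage_client.bucket(bucket_name).blob(blob_name).download_to_filename(local_path)

    # ... apply filters, georeferencing, splitting, etc. to local_path ...

    # Upload the processed result to another bucket.
    storage_client.bucket(OUTPUT_BUCKET).blob(blob_name).upload_from_filename(local_path)
```

You would deploy it with something along the lines of `gcloud functions deploy process_image --runtime python39 --trigger-resource YOUR_UPLOAD_BUCKET --trigger-event google.storage.object.finalize`.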
Cloud Functions is probably your lightest-weight approach. You could of course build more full-scale applications, but that is likely overkill, more expensive, and more complex. You can write your processing code in Node.js, Python, or Go.
I am currently looking for a solution to run arbitrary scripts on a cloud instance (AWS, DigitalOcean, Rackspace, I'm not picky). I am not doing anything shady; I simply want to use it for performance testing and need reproducible results (deploy a service, set specific test data, run the performance tests, kill everything, repeat if necessary).
Of course I can use the API of these providers and build a custom solution, but I'm wondering if there is a framework or a bunch of tools that will help me with that.
What I need is:
- Only using an instance for the runtime of the script
- Possibility to store data outside of the instance for result analysis
There are a lot of tools to automate setting up cloud instances but they all seem targeted for deployment purposes. What I need is a cloud script runner.
From your description it sounds like you might be looking for something like AWS's new(ish) Lambda service.
This allows you to define scripts and triggers to run them in the cloud without the overhead of spinning up and having to manage cloud compute 'servers'.
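For a sense of what such a "script" looks like, a Lambda function is just a handler. The sketch below is a placeholder outline of a performance-test run, not working benchmark code.

```python
import json

def lambda_handler(event, context):
    """Runs whenever the configured trigger fires (a schedule, an S3 upload,
    an API call, ...) and stops incurring cost as soon as it returns."""
    # ... deploy the service under test, load the test data, run the benchmark ...
    results = {"requests_per_second": 0}  # placeholder for real measurements

    # Return the results (or write them to S3/DynamoDB) so they survive
    # after the function instance goes away.
    return {"statusCode": 200, "body": json.dumps(results)}
```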
More info:
https://aws.amazon.com/lambda/
One thing to be careful of when using the cloud for performance testing: you have no real control over the actual hardware that your code will run on, and different runs may run on different hardware. This is true even for server- or instance-based cloud testing.