How can I dev locally with GCP Cloud Run without an IDE - google-cloud-platform

Google has a Cloud Run emulator, but the only documentation I can find binds it tightly to an IDE, mostly VS Code. That seems like a strange design decision, and I don't want my whole team to need to use VS Code to test this.
https://cloud.google.com/code/docs/vscode/develop-service
Why did Google decide to have an IDE dependency for their emulator?
How do I get around this dependency?

The IDE is the preferred place for developers to write their code. VS Code and IntelliJ are the most popular, and Google Cloud has focused its effort on them.
However, an IDE is not mandatory for running a Cloud Run container on a local K3s cluster. The command gcloud beta code dev (documentation here) allows you to achieve the same thing locally.
You have fewer visual features and possibilities, but you can script your deployment, and even your tests, locally with that command.
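For instance, here is a minimal smoke-test sketch in Python. It assumes gcloud beta code dev is already running in another terminal and that the emulated service is reachable on http://localhost:8080 (the command prints the actual URL it serves on, so adjust accordingly):

    import sys
    import time
    import urllib.request

    SERVICE_URL = "http://localhost:8080/"  # assumed default; use the URL printed by gcloud

    def wait_for_service(url, attempts=30, delay=2):
        """Poll the locally emulated Cloud Run service until it answers."""
        for _ in range(attempts):
            try:
                with urllib.request.urlopen(url, timeout=5) as resp:
                    return resp.status, resp.read().decode()
            except OSError:
                time.sleep(delay)
        return None, None

    status, body = wait_for_service(SERVICE_URL)
    if status != 200:
        sys.exit("Local Cloud Run service did not come up")
    print("Service replied:", body[:200])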

Related

SonarQube integration with GCP Cloud Build

I have a task to use SonarQube.
My builds are done using Google Cloud Build. How can I integrate SonarQube with Google Cloud Build?
Thanks for your help
You can use custom builders. In the end, each build step is a container image:
Cloud builders are container images with common languages and tools installed in them. You can configure Cloud Build to run a specific command within the context of these builders.
The GCP documentation provides a guide on how to create a custom builder. However, note that it's intended to be general and doesn't include any specific functionality that you might require. Nevertheless, it is a great starting point for understanding how custom builders work and for creating your own.
Aside from this approach, there is a community builder for SonarQube that you can use as a reference, or it might even suit your needs as-is.
Edit:
In case your question is about code analysis with SonarQube: the community builder is still relevant, as it allows you to run static code analysis for your project from sonarcloud.io.

Using cloud functions vs cloud run as webhook for dialogflow

I don't know much about web development and cloud computing. From what I've read, when using Cloud Functions as the webhook service for Dialogflow, you are limited to writing code in just one source file. I would like to create a really complex Dialogflow agent, so it would be handy to have an organized code structure to make development easier.
I've recently discovered Cloud run which seems like it can also handle webhook requests and makes it possible to develop a complex code structure.
I don't want to use Cloud Run just because it is inconvenient to write everything in one file, but on the other hand it would be strange to have a cloud function with a single file with thousands of lines of code.
Is it possible to have multiple files in a single cloud function?
Is cloud run suitable for my problem? (create a complex dialogflow agent)
Is it possible to have multiple files in a single cloud function?
Yes. When you deploy to Google Cloud Functions you create a bundle with all your source files or have it pull from a source repository.
But Dialogflow only allows index.js and package.json in the Built-In Editor
For simplicity, the built-in code editor only allows you to edit those two files. But the built-in editor is mostly just meant for basic testing. If you're doing serious coding, you probably already have an environment you prefer to use to code and deploy that code.
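To make the multi-file point concrete, here is a hypothetical Python layout, shown as one listing for brevity (the same idea applies to Node.js). The function name, file names and the "name" parameter are placeholders; gcloud functions deploy uploads every file in the directory:

    # Directory layout (everything is deployed together):
    #   my-function/
    #     main.py            <- entry point (selected with --entry-point)
    #     intents.py         <- any number of helper modules
    #     requirements.txt
    #
    # --- intents.py ---
    def welcome_response(name):
        """Build a reply for a hypothetical welcome intent."""
        return {"fulfillmentText": f"Hello, {name}!"}

    # --- main.py ---
    # from intents import welcome_response   # local imports work once deployed together

    def webhook(request):
        """HTTP Cloud Function entry point."""
        body = request.get_json(silent=True) or {}
        name = body.get("queryResult", {}).get("parameters", {}).get("name", "there")
        return welcome_response(name)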
Is Cloud Run suitable?
Certainly. The biggest thing Cloud Run will get you is complete control over your runtime environment, since you're specifying the details of that environment in addition to the code.
The biggest downside, however, is that you also have to determine the details of that environment. Cloud Functions provide an HTTPS server without you having to worry about those details, as long as the rest of the environment is suitable.
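For comparison, here is a minimal sketch of what the same webhook could look like as a Cloud Run service, using Flask. The route and reply text are placeholders; the only hard requirement from Cloud Run is that the container listens on the port given in the PORT environment variable:

    import os
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/webhook", methods=["POST"])  # hypothetical path; set it as the fulfillment URL
    def webhook():
        req = request.get_json(silent=True) or {}
        intent = req.get("queryResult", {}).get("intent", {}).get("displayName", "unknown")
        # Dialogflow accepts a plain-text reply via fulfillmentText
        return jsonify({"fulfillmentText": f"You triggered the '{intent}' intent."})

    if __name__ == "__main__":
        # Cloud Run tells the container which port to listen on via PORT
        app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))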
What other options do I have?
Anywhere you want! Dialogflow only requires that your webhook
Be at a public address (i.e., one that Google can resolve and reach)
Run an HTTPS server at that address with a non-self-signed certificate
During testing, it is common to run it on your own machine via a tunnel such as ngrok, but this isn't a good idea in production. If you're already familiar with running an HTTPS server in another environment, and you wish to continue using that environment, you should be fine.

Machine Learning (NLP) on AWS. Cloud9? SageMaker? EC2-AMI?

I have finally arrived in the cloud to take my NLP work to the next level, but I am a bit overwhelmed with all the possibilities I have. So I am coming to you for advice.
Currently I see three possibilities:
SageMaker
Jupyter Notebooks are great
It's quick and simple
saves a lot of time spent on managing everything, you can very easily get the model into production
costs more
no version control
Cloud9
EC2(-AMI)
Well, that's where I am for now. I really like SageMaker, although I don't like the lack of version control (at least I haven't found anything for now).
Cloud9 just seems to be an IDE on top of an EC2 instance. I haven't found any comparisons of Cloud9 vs. SageMaker for machine learning, maybe because Cloud9 is not advertised as an ML solution. But it seems to be an option.
What is your take on that question? What have I missed? What would you advise me to go for? What is your workflow and why?
I am looking for an easy work environment where I can quickly test my models. And it won't be only me working on it; it's a team effort.
Since you are working as a team, I would recommend using SageMaker with custom Docker images. That way you have complete freedom over your algorithm. The Docker images are stored in ECR, where you can upload many versions of the same image and tag them to keep track of the different versions (which you build from a Git repo).
SageMaker also passes the execution role into the Docker image, so you still have full access to other AWS resources (provided the execution role has the right permissions).
https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb
In my opinion this is a good example to start with, because it shows how SageMaker interacts with your image.
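As a rough sketch of what that interaction looks like from the SageMaker Python SDK (v2 parameter names; the account ID, region, image tag, role ARN, bucket and instance types below are all placeholders):

    import sagemaker
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()
    role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder execution role

    estimator = Estimator(
        image_uri="123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-nlp-model:v1.2",  # tagged image in ECR
        role=role,
        instance_count=1,
        instance_type="ml.p3.2xlarge",  # you pay for the GPU only while the job runs
        sagemaker_session=session,
    )

    # Runs your container's training entry point against the given S3 channel
    estimator.fit({"train": "s3://my-bucket/nlp-train-data/"})

    # Turns the same image into an HTTPS inference endpoint
    predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")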
Some notes on other solutions:
The problem with every other solution you posted is that you build and execute on the same machine. Sure, you can do this, but keep in mind that GPU instances are expensive, so you may want to switch to the cloud only when the code is ready to run.
Some other notes
Jupyter Notebooks in general are not made for collaborative programming. I think they want to change this with JupyterLab, but that is still in development, and SageMaker only uses the classic notebook at the moment.
EC2 is cheaper than SageMaker, but you have to do more work, especially if you want to run your model as Docker images. Also, with SageMaker you can easily build an endpoint for model inference, which would be far more complex to realize with EC2.
Cloud9: I have never used this service, but at first glance it seems good to develop on; the question remains whether you want to do this on a GPU machine. Because it is backed by an EC2 instance, you get the same advantages/disadvantages.
One thing I'd like to call out first is that the SageMaker notebook is not the only IDE environment from which you can interact with other components of SageMaker, such as training and hosting. In fact, you can make API calls to SageMaker training/hosting from Cloud9 or any IDE you've installed on EC2, or even from your laptop, as long as you have the AWS SDK or the SageMaker Python SDK installed.
Regarding the choice of the IDE, it's really up to your particular needs. SageMaker notebook is Jupyter based (now also supports JupyterLab beta), ML focused, and fully managed. Hundreds of Python packages that are commonly used in ML, as well as Tensorflow, Keras, MxNet, SageMaker Python SDK, etc., are preinstalled and automatically maintained for you. It also integrates more closely with other components of SageMaker as one can imagine.
Cloud9 is a managed IDE too, but it is general-purpose rather than ML-specific. If you want to use Jupyter on Cloud9, it requires extra work on your side; it does not preinstall and maintain the versions of common ML/DL packages the way the SageMaker notebook does.

Usage of Cloud Foundry Spaces in the development chain

I am currently evaluating the possibility of introducing a private Java PaaS cloud. So far I am quite excited about the whole solution, especially combining Cloud Foundry with OpenStack.
What I am wondering though, is how this can be combined with development. I obviously want the developer to run the developed code on the cloud and no longer on his unmanaged workstation.
Is it possible to do the following:
The developer writes application code on the local host OS, and a virtual machine is used to build and run the application. I have seen this with Vagrant and liked it a lot. Ideally, the local Vagrant box is a Cloud Foundry space.
If the developer is OK with his code, he should push his application out of the local VM to a developer-specific acceptance space run by Cloud Foundry on the network. Here the application runs in a more production-like environment, and automated acceptance / disaster-recovery tests can be executed.
If the developer decides this is OK and merges his changes to the trunk (SVN/Git), a CI tool should deploy the application to the "global" test, acceptance and production spaces.
I assume the last point is no problem. I just cannot find a way, how the first steps can be achieved.
Any ideas?
Are you actually looking for a complete Cloud Foundry deployment on top of OpenStack?
That can be achieved using the BOSH Cloud Foundry deployment for OpenStack.
http://docs.cloudfoundry.com/docs/running/deploying-cf/openstack/
You can have different spaces in the CF deployment (test, production, etc.) and can move an application from one space to another after testing is done.

Utilizing EC2 or Azure to scale out a distributed C# build system?

Quite a few build and CI systems support steps for pushing build output to Azure, but I haven't seen any which can actually run on Azure (or EC2). Ideally I would like to be able to spin up an arbitrary number of instances (depending on the # of pending submits) to deal with the actual build + quality gates (UTs, FXCop, other static analysis tools) + source repository checkin process.
Are there existing tools which can do this, or has anyone built something which they can discuss?
Thanks!
[Edit: I found this question which is quite similar but didn't have any informative answers, so I'll keep my question alive]
If you're using Git or Mercurial for source control, AppHarbor might be what you're looking for. It's a CI build/deploy environment that runs exclusively in the cloud (EC2), and can deploy build output to Azure.
Here are some links for reference:
http://sourcecodebean.com/archives/appharbor-heroku-for-net/987
http://lostechies.com/chrismissal/2011/03/12/using-appharbor-for-continuous-integration
http://haacked.com/archive/2011/05/12/making-let-me-bing-that-for-you-open-source.aspx
http://appharbor.com/page/pricing
The open source Jenkins CI server has an EC2 plugin that will spin up EC2 instances automatically depending on your build load. I couldn't find anything for Azure, but I highly recommend Jenkins - it's easy to configure, well maintained and has stacks of features.
Continuous Integration on Windows Azure http://code.google.com/p/cassis/ (over Mercurial)
Disclaimer: work produced by my 1st year CS students
TeamCity also has support for this: http://www.jetbrains.com/teamcity/features/amazon_ec2.html