How do I use the timeout flag in gcloud run deploy? - google-cloud-platform

I am trying to deploy a Node app to Cloud Run using the reference https://cloud.google.com/sdk/gcloud/reference/run/deploy. The build takes around 12 minutes, so I am including the timeout flag mentioned here: https://cloud.google.com/sdk/gcloud/reference/run/deploy#--timeout.
My command looks like this:
gcloud run deploy my-node-project --timeout 900 --source .
My build still times out after 10 minutes though. How do I adjust it to give me the 12 minutes I need?
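As far as I can tell, the --timeout flag on gcloud run deploy sets the maximum request time of the deployed Cloud Run service, not the time allowed for the build, so a --source deploy still hits Cloud Build's default 10-minute build limit. A rough sketch of the distinction:
# request timeout of the running service (what --timeout on run deploy controls)
gcloud run deploy my-node-project --source . --timeout 900
# the image build itself is handled by Cloud Build, whose own timeout defaults to 600s;
# see the answers further down for ways to raise it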

Related

Cron Job is not running on EC2 Instance

I have searched the community questions before posting this and tried the suggestions to no avail.
I am confused as to why my cron jobs on my EC2 instance are not getting triggered.
I ran the following on my EC2 instance:
sudo service crond start
sudo crontab -e
Here I added the following:
30 7 * * * root /usr/bin/python3 /home/ec2-user/hello-world.py
The Python script creates a file when it runs, but the job is not running as scheduled.
Please suggest what I am missing here.
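For comparison, a crontab opened with sudo crontab -e (root's own crontab) takes only five time fields followed directly by the command; the extra user column belongs in /etc/crontab or /etc/cron.d files. A minimal sketch, assuming the same script path:
# root's crontab (sudo crontab -e): no user field before the command
30 7 * * * /usr/bin/python3 /home/ec2-user/hello-world.py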

How to increase the Cloud Build timeout when using gcloud run deploy?

When attempting to deploy to Cloud Run using gcloud run deploy, I am hitting the 10m Cloud Build timeout limit. gcloud run deploy works well as long as the build step does not exceed 10m; when it does, the build fails with a "Timed out" status. AFAIK there are no arguments to gcloud run deploy that can set the Cloud Build timeout limit. The gcloud run deploy docs are here: https://cloud.google.com/sdk/gcloud/reference/run/deploy
I've attempted to increase the Cloud Build timeout limit using gcloud config set builds/timeout 20m and gcloud config set container/build_timeout 20m, but these settings are not reflected in the execution details of the Cloud Build process when using gcloud run deploy.
In the GUI, the build timeout is the setting I want to change.
Is it possible to increase the Cloud Build timeout limit using gcloud run deploy?
How about splitting the command into (more easily configured) constituents?
[I've not tried this]
Build the container image, specifying the timeout:
gcloud builds submit --source=.... --timeout=...
Then reference the resulting image when you run gcloud run deploy:
gcloud run deploy ... --image=...
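Untested, but with placeholder project and image names that two-step flow could look like this:
# build with a longer Cloud Build timeout
gcloud builds submit --tag gcr.io/my-project/my-node-project --timeout=900s .
# deploy the prebuilt image instead of --source
gcloud run deploy my-node-project --image gcr.io/my-project/my-node-project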
I know this is answered and confirmed, but @DazWikin's solution is a harder way to solve this problem than @SimonKarman's solution.
For those who, like myself, do not have a cloudbuild.yaml file, this solution is still a valid one; you just need to edit the one created by Google itself. You can find it under Builds > Triggers > desired trigger (Edit).
Then, when you open the editor, you can apply the timeout. If you want to make other changes to the YAML file, you can also check out the schema here:
https://cloud.google.com/build/docs/build-config-file-schema#yaml
Note: I am using Cloud Run and this worked for me, so I am not 100% sure whether it works with all builds generated by Google.
Hope it will be helpful for someone else in the future :)
If you're using a cloudbuild.yaml for your --source build, you can add the following top-level property to alter the timeout (in seconds):
...
timeout: "1800s"
...
You can find this in the documentation.
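For context, a minimal cloudbuild.yaml with that top-level property might look like this (the builder and image name here are just placeholders):
steps:
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']
images:
- gcr.io/$PROJECT_ID/my-app
timeout: "1800s"   # whole-build timeout; the default is 600s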

Run command from terminal window in AWS instance at specified time or on startup

I have an AWS Cloud9 instance that starts running at 11:52 PM MST and stops running at 11:59 PM MST. I have a Dockerfile within the instance that, when run with the correct mount, will run a set of C++ .cpp files that collect live web data. The ultimate goal of this instance is to be fully automatic, so that every night it collects the live web data for that date, hence why the instance is open at the very end of the day each night. Is it possible to have my AWS instance run a given command in a terminal window at a certain time, say 11:55 PM, or even upon startup? So at that time, or at startup, the command "docker run -it...." is run within the instance.
Is automating this process possible? I have looked into CloudWatch Events and think that might be the best way to go about automating this process, but I am not quite sure how I would create a rule to fulfill the job. If it is not possible to automate a certain command within a terminal window, could I automate the Dockerfile to run at a certain time?
Of course you can automate running commands (not just Docker but in fact any command) using the cron daemon. All you need to do is place your command in a shell script file, say doc.sh, in your desired directory.
SSH into your instance.
Open a terminal and type crontab -e.
Enter the details in this manner: a b c d e /directory/command
where a = minute, b = hour, c = day of month, d = month, e = day of week,
and /directory/command specifies the location of the script you want to run (see the example sketched below).
For more cron examples, see https://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
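A sketch of such an entry for the 11:55 PM case, assuming the wrapper script lives at /home/ec2-user/doc.sh (cron uses the instance's local clock, and the docker run inside the script should drop -it since cron provides no interactive terminal):
55 23 * * * /home/ec2-user/doc.sh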
If you have a container that you want to run for a few minutes a day, you should look into Fargate. You can schedule an event with CloudWatch, run the container, and then shut it down when it's done.
It will probably cost around $0.01/day to run this.
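If you do go the CloudWatch Events / EventBridge route, the schedule itself is just a cron expression on a rule (note the times are UTC); a sketch with a placeholder rule name, to which you would still attach the Fargate task as a target:
$ aws events put-rule --name nightly-data-collector --schedule-expression "cron(55 23 * * ? *)"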

Invalid Cloud Build timeout?

We have a build that takes anywhere from 1 minute to 15 minutes (it's a monobuild that is not parallelized yet, so it may build 8 servers or just 1). It was timing out, so I modified the build file to:
steps:
- name: gcr.io/$PROJECT_ID/continuous-deploy
  timeout: 1200s
I also ran these commands (the last one failed, though, even though I got it from another post where it apparently worked for them):
Deans-MacBook-Pro:orderly dean$ gcloud config set app/cloud_build_timeout 1250
Updated property [app/cloud_build_timeout].
Deans-MacBook-Pro:orderly dean$ gcloud config set builds/timeout 1300
Updated property [builds/timeout].
Deans-MacBook-Pro:orderly dean$ gcloud config set container/build-timeout 1350
ERROR: (gcloud.config.set) Section [container] has no property [build-timeout].
Deans-MacBook-Pro:orderly dean$
I get the following error saying that anything greater than 10 minutes is invalid:
invalid build: invalid timeout in build step #0: build step timeout "20m0s" must be <= build timeout "10m0s"
Why MUST it be less than 10m0s? I really need our builds to be about 20 minutes.
I was going off of
Why can't I override the timeout on my Google Cloud Build?
and
GCP Cloud build ignores timeout settings
thanks,
Dean
The timeout of each step must be less than or equal to the timeout of the whole build.
Setting the timeout at the step level to 20 minutes causes the error because the timeout for the whole build defaults to 10 minutes.
The way to avoid this is to set the timeout of the full build to be greater than or equal to the timeout of the individual steps.
Here is a small example of how to define this:
steps:
- name: gcr.io/$PROJECT_ID/continuous-deploy
  timeout: 1200s # step timeout
timeout: 1200s # timeout for the whole build
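If the build is submitted manually rather than through a trigger, the same whole-build timeout can, as far as I know, also be passed on the command line:
$ gcloud builds submit --timeout=1200s .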

Getting Data From A Specific Website Using Google Cloud

I have a machine learning project and I have to get data from a website every 15 minutes, and I cannot use my own computer, so I will use Google Cloud. I am trying to use Google Compute Engine and I have a script for getting the data (here is the link: https://github.com/BurkayKirnik/Automatic-Crypto-Currency-Data-Getter/blob/master/code.py). This script gets data every 15 minutes and writes it to CSV files. I can run this code by opening an SSH terminal and executing it from there, but it stops working when I close the terminal. I tried to run it by executing it in a startup script, but it doesn't work this way either. How can I run this and save the CSV files? BTW, I have to install an API to run the code and I am doing that in the startup script; there is no problem with that part.
Instances running in Google Cloud Platform can be configured with the same tools available in the operating system they are running. If your instance is a Linux instance, the best method would be to use a cron job to execute your script repeatedly at your chosen interval.
Once you have accessed the instance via SSH, you can open the crontab configuration file by running the following command:
$ crontab -e
The above command will provide access to your personal crontab configuration (for the user you are logged in as). If you want to run the script as root you can use this instead:
$ sudo crontab -e
You can now edit the crontab configuration and add an entry that tells cron to execute your script at your required interval (in your case every 15 minutes).
Therefore, your crontab entry should look something like this:
*/15 * * * * /path/to/your/script.sh
Notice the first field is for minutes, so by using */15 you are telling the cron daemon to execute the script once every 15 minutes.
Once you have edited the crontab configuration file, it is a good idea to restart the cron daemon to ensure the change you made will take place. To do this you can run:
$ sudo service cron restart
If you would like to check the status to ensure the cron service is running you can run:
$ sudo service cron status
Your script will now execute every 15 minutes.
In terms of storing the CSV files, you could either program your script to store them on the instance, or an alternative would be to use a Google Cloud Storage bucket. Files can be copied to buckets easily by making use of the gsutil command (part of the Cloud SDK) as described here. It's also possible to mount buckets as a file system as described here.
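For the bucket option, a rough sketch of copying the generated files from the instance (the bucket name and path here are placeholders):
$ gsutil cp /home/your-user/*.csv gs://your-bucket-name/crypto-data/
You could even append a command like this to the script itself so each 15-minute run uploads its output.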