After my app is successfully pushed via cf I usually need to manually SSH into the container and execute a couple of PHP scripts to clear and warm up my cache, potentially execute some DB schema updates, etc.
Today I found out about Cloud Foundry tasks, which seem to offer a neat way to do exactly this kind of thing, and I wanted to test whether I can integrate them into my build & deploy script.
So I used cf login, connected successfully to the right org and space, the app was pushed and is running, and I tried this command:
cf run-task MYAPP "bin/console doctrine:schema:update --dump-sql --env=prod" --name dumpsql
(I tried it with a couple of path variations like app/bin/console, etc.)
and this was the output:
Creating task for app MYAPP in org MYORG / space MYSPACE as me#myemail...
Unexpected Response
Response Code: 404
FAILED
CF CLI version: 6.32.0
cf logs ArcticTenTestBackend --recent does not output anything (this might be because I have an ELK instance enabled for logging - when I wanted to service-connect to ELK to look up the logs, I found out that the service-connector cf plugin is gone, for which I will open a new ticket).
Created a new issue for that: https://github.com/cloudfoundry/cli/issues/1242
This is not a CF CLI issue. Swisscom Application Cloud does not yet support Cloud Foundry tasks. This explains the 404 you are currently receiving. We will expose this feature of Cloud Foundry in an upcoming release of Swisscom Application Cloud.
In the meantime, maybe you can find a way to execute your one-off tasks (cache warming, DB migrations) at application startup.
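For example, one possible approach (just a sketch, not an official recommendation, and assuming a standard buildpack app where a .profile script in the application root is executed before the start command):
#!/bin/sh
# Hypothetical .profile in the app root: Cloud Foundry runs this right before
# the start command, so one-off work can happen at application startup.
php bin/console cache:clear --env=prod
php bin/console cache:warmup --env=prod
# --force actually applies the schema changes (unlike --dump-sql, which only prints them)
php bin/console doctrine:schema:update --force --env=prod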
As mentioned by @Mathis Kretz, Swisscom has gotten around to enabling cf run-task since this question was posted. They sent out e-mails on 22 November 2018 to announce the feature.
As described in the linked documentation, you use the following commands to manage tasks:
cf tasks [APP_NAME]
cf run-task [APP_NAME] [COMMAND]
cf terminate-task [APP_NAME] [TASK_ID]
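For example, a hypothetical run based on the command from the question:
# Run the schema update as a one-off task and give it a name
cf run-task MYAPP "bin/console doctrine:schema:update --dump-sql --env=prod" --name dumpsql
# List the app's tasks to see whether the task is RUNNING, SUCCEEDED or FAILED
cf tasks MYAPP
# If needed, stop a running task by its id
cf terminate-task MYAPP 1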
I have Apache2 installed in one of the VMs in Google Cloud Platform. I installed the Ops Agent and configured it like below, per the docs:
logging:
  receivers:
    mywebserver:
      type: files
      include_paths:
      - /var/log/apache*/access_log
      - /var/log/apache*/error_log
  service:
    pipelines:
      default_pipeline:
        receivers:
        - mywebserver
But then the Logs view in GCP isn't showing the logs of this web server. I don't see the service mywebserver as a filter option in the logs dropdown, even for this VM instance.
OS: Ubuntu 18.x LTS
Ops Agent Version : Latest as of today
What am I missing? Your help is much appreciated.
When I tried to debug using the command cat /var/log/google-cloud-ops-agent/subagents/*.log | grep apache, it returned nothing. It should have shown something similar to the following:
[ info] [input:tail:tail.0] inotify_fs_add(): inode=268631 watch_fd=1 name=/var/log/apache2/access.log
[input:tail:tail.0] inotify_fs_add(): inode=268633 watch_fd=2 name=/var/log/apache2/error.log
This prompted me to go back to the logs, where I realized that the Google docs had a typo that I had copy-pasted in good faith: if you look at my configuration, the lines contain access_log instead of access.log.
As trivial as it sounds, this cost me a good many hours. :Facepalm:
Lesson: even Google's docs can have errors as trivial as this that can cost you hours of debugging.
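For anyone hitting the same thing, a minimal sketch of the fix (assuming the default config location /etc/google-cloud-ops-agent/config.yaml and that the files on your machine really are named access.log and error.log):
# Point the receiver at the actual Apache log file names...
sudo sed -i 's|access_log|access.log|; s|error_log|error.log|' /etc/google-cloud-ops-agent/config.yaml
# ...and restart the Ops Agent so the corrected config is picked up.
sudo systemctl restart google-cloud-ops-agent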
Google Cloud Run allows for using Cloud SQL. But what if you need Cloud SQL when building your container in Google Cloud Build? Is that possible?
Background
I have a Next.js project that runs in a container on Google Cloud Run. Pushing my code to Cloud Build (installing the stuff, generating static pages and putting everything in a container) and deploying to Cloud Run works perfectly. 👌
Cloud SQL
But I just added some functionality which also needs some data from my PostgreSQL instance that runs on Google Cloud SQL. This data is used when building the project (generating the static pages).
Locally, on my machine, this works fine as the project can connect to my Cloud SQL Proxy. While running in Cloud Run this should also work, as Cloud Run allows connecting to my Postgres instance on Cloud SQL.
My problem
When building my project with Cloud Build, I need access to my database to be able to generate my static pages. I am looking for a way to connect my Docker cloud builder to Cloud SQL, perhaps just like Cloud Run (fully managed) provides a mechanism that connects using the Cloud SQL Proxy.
That way I could be connecting to /cloudsql/INSTANCE_CONNECTION_NAME while building my project!
Question
So my question is: How do I connect to my PostgreSQL instance on Google Cloud SQL via the Cloud SQL Proxy while building my project on Google Cloud Build?
Things like my database credentials already live in Secret Manager, so I should be able to use those details, I guess 🤔
You can use whichever container you want (and need) to generate your static pages, and download the Cloud SQL Proxy in the build step to open a tunnel to the database:
- name: '<YOUR CONTAINER>'
  entrypoint: 'sh'
  args:
    - -c
    - |
      wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
      chmod +x cloud_sql_proxy
      ./cloud_sql_proxy -instances=<my-project-id:us-central1:myPostgresInstance>=tcp:5432 &
      <YOUR SCRIPT>
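One detail to keep in mind (my own note, not part of the original answer): the proxy is started in the background, so the script may need to wait until the tunnel is actually listening before connecting, for example (assuming netcat is available in the build image):
# Wait up to 30 seconds for the proxy to accept connections on 127.0.0.1:5432
timeout 30 sh -c 'until nc -z 127.0.0.1 5432; do sleep 1; done'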
App Engine has an exec wrapper which has the benefit of proxying your Cloud SQL connection for you, so I use that to connect to the DB in Cloud Build (so do some Google tutorials).
However, be warned of trouble ahead: Cloud Build runs exclusively* in us-central1, which means it'll be pathologically slow to connect from anywhere else. For one or two operations I don't care, but if you're running a whole suite of integration tests, that simply will not work.
Also, you'll need to grant permission for GCB to access GCSQL.
steps:
- id: 'Connect to DB using appengine wrapper to help'
  name: gcr.io/google-appengine/exec-wrapper
  args:
    [
      '-i',  # The image you want to connect to the db from
      '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME:$SHORT_SHA',
      '-s',  # The postgres instance
      '${PROJECT_ID}:${_POSTGRES_REGION}:${_POSTGRES_INSTANCE_NAME}',
      '-e',  # Get your secrets here...
      'GCLOUD_ENV_SECRET_NAME=${_GCLOUD_ENV_SECRET_NAME}',
      '--',  # And then the command you want to run, in my case a database migration
      'python',
      'manage.py',
      'migrate',
    ]
substitutions:
  _GCLOUD_ENV_SECRET_NAME: mysecret
  _GCR_HOSTNAME: eu.gcr.io
  _POSTGRES_INSTANCE_NAME: my-instance
  _POSTGRES_REGION: europe-west1
* unless you're willing to pay more and get very stung by beta software, in which case you can use Cloud Build workers (which are in beta at the time of writing, anyway... I'll come back and update if they make it into production and the issues get fixed)
The ENV VARS (including DB connections) are not available during build steps.
However, you can use Docker's ENTRYPOINT to run commands when the container runs (after the build steps have completed).
I needed to run DB migrations when a new build was deployed (i.e. when the container starts running), and using ENTRYPOINT (pointing to a file/command) I was able to run the migrations (which require DB connection details that are not available during the build process).
"How to" part is pretty brief and is located here : https://stackoverflow.com/a/69088911/867451
We recently ran into a known issue on Airflow:
Airflow "This DAG isnt available in the webserver DagBag object "
For now we used a temporary workaround of restarting the whole environment by changing configurations, but this is not an efficient method.
The best workaround we can think of is to restart the webservers on Cloud Composer, but we didn't find any command to restart the webserver. Is that possible?
Thanks!
For those who wander in and find this thread: currently, for versions >= 1.13.1, Composer has a preview of a web server restart.
Only certain types of updates will cause the webserver container to be restarted, like adding, removing, or upgrading one of the PyPI packages or changing an Airflow setting.
You can do for example:
# Set some arbitrary Airflow config value to force a webserver rebuild.
gcloud composer environments update ${ENVIRONMENT_NAME} \
--location=${ENV_LOCATION} \
--update-airflow-configs=dummy=true
# Remove the previously set config value.
gcloud composer environments update ${ENVIRONMENT_NAME} \
--location=${ENV_LOCATION} \
--remove-airflow-configs=dummy
From Google Cloud Docs:
gcloud beta composer environments restart-web-server ENVIRONMENT_NAME --location=LOCATION
I finally found an alternative solution!
Based on this document:
https://cloud.google.com/composer/docs/how-to/managing/deploy-webserver
We can run self-managed Airflow webservers on Kubernetes (yes, please throw away the built-in webserver), so we can kill the webserver pods to force a restart =)
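Roughly like this (a hypothetical sketch; the cluster name, zone and pod name are placeholders and depend on how you deployed the webserver per the document above):
# Point kubectl at the GKE cluster that hosts the self-managed webserver
gcloud container clusters get-credentials <CLUSTER_NAME> --zone <ZONE>
# Find the webserver pod(s) and delete one; the Deployment will recreate it
kubectl get pods | grep airflow-webserver
kubectl delete pod <airflow-webserver-pod-name>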
From the console, DAGs can be retrieved and we can list all DAGs that are present. There are other commands too.
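For example, a hypothetical invocation (assuming the Airflow 1.x CLI, where the sub-command is list_dags):
# List the DAGs that the environment currently knows about
gcloud composer environments run ${ENVIRONMENT_NAME} \
    --location=${ENV_LOCATION} list_dags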
I have started a peer and a membersrvc container with Docker Compose. They started successfully. I am deploying the example02 chaincode from the CLI (I tried REST as well) and get a success message. When I try to query the chaincode, I get: Error when querying chaincode: Error:Failed to launch chaincode spec(Could not get deployment transaction for mycc - LedgerError - ResourceNotFound: ledger: resource not found)
If you are trying to deploy the chaincode in dev mode, you first need to register the chaincode.
(Registration is only required in dev mode and not for production mode)
To register your chaincode on a Windows 10 machine in a Docker container:
Open a command prompt and get a bash shell in the peer container using the docker command
docker exec -it [peer container id] /bin/bash
Browse to the chaincode directory and register it using
CORE_CHAINCODE_ID_NAME=mycc CORE_PEER_ADDRESS=127.0.0.1:7051 ./chaincode_example02
Now you should see the registration success message “Received REGISTERED, ready for invocations”, and the chaincode is ready to deploy, invoke and query in dev mode.
Note: leave the window with the register handler open; closing it would deregister the chaincode.
Waiting for a few minutes after the chaincode deployment might produce different results when querying. As described here, it could take a couple of minutes for chaincode to deploy. Another suggestion mentioned is to review the chaincode container log to determine if there are problems communicating with a peer.
It is also possible that the chaincode deployment was not successful. The log for the peer where the chaincode deployment was initiated could be reviewed to determine if this provides any insight.
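If you want to check that, a quick way (a generic sketch, not from the original answer; container names depend on your setup, but Fabric chaincode containers are typically prefixed with dev-):
# List containers, including stopped ones, and look for the chaincode container
docker ps -a | grep dev-
# Inspect its output for errors while it tried to register with the peer
docker logs <chaincode-container-id>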
There are also a couple of prior posts that are similar and might help.
How to debug chaincode? LedgerError - ResourceNotFound
Hyperledger : Deploying chaincode successful. But, cannot query - says ResourceNotFound
I have a problem with my application logs on my Cloudfoundry deployment.
I've deployed Cloud Foundry in a somewhat minimized design based on the tiny-aws deployment of https://github.com/cloudfoundry-community/cf-boshworkspace.
I further minimized the deployment and put everything from the VMs "api", "backbone", "health" and "services" together on the api-machines.
So I have the following VMs:
api (2 instances)
data (1 instance)
runner (2 instances)
haproxy (1 public and 1 private proxy)
Cloudfoundry version is 212.
The deployment itself seems to work. I can deploy apps and they start up.
But the logs from my applications don't show up when I run
"cf logs my-app --recent"
I've tried several log configurations in my Spring Boot app:
the default without modifications, which should log to STDOUT according to the Spring Boot documentation
an explicitly set log4j.properties file, which was configured to log to STDOUT as well
a Log4j 2 configuration for logging to STDOUT
a Spring Boot configuration which logs to a file
With the last configuration, the file was created and my logs were shown when I ran "cf files my-app log/my-app.log".
I tried to debug where my logs get lost, but I couldn't find anything.
The dea_logging_agent seems to be running and has the correct NATS location configured, as does the DEA itself.
Loggregator also seems to run fine on the api host and to be connected to NATS.
So my question is: In which locations should I search to find out where my logs go?
Thank you very much.