I created a Redis instance on GCP. Note that it is the managed GCP service, not Redis installed on a Compute Engine VM. But when I run redis-cli and then run a CLIENT command, such as "client list" or "client kill", it shows:
(error) ERR unknown command 'client', with args beginning with:
It looks like it does not support the CLIENT commands, and maybe some other commands as well.
If I run "info" in redis-cli, it shows the version: redis_version:4.0.14
How can I get the GCP managed Redis to run the "client" command? I need to disconnect all clients, so I want to run "client kill".
If "client kill" is not available, can I use "shutdown" as a workaround? I am not sure whether shutdown terminates Redis entirely or just stops it. If it just stops Redis and no data is lost, then I can use it as well.
Your issue is with the CLIENT command, which is currently not available in Memorystore for Redis. Memorystore for Redis is a managed service and comes with some constraints, so commands that could interfere with a managed Redis service are blocked, as per the documentation.
As a workaround you can use the MONITOR command, which is available for instances created after November 4, 2019. Per the Redis documentation, MONITOR is a debugging command that streams back every command processed by the Redis server.
MONITOR will show the clients sending traffic to the Redis server, as mentioned in this SO answer.
There is a feature request for this on the Google Cloud Public Issue Tracker. Please feel free to post there if you have further concerns about this issue.
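If you capture a stream of MONITOR output (for example with redis-cli monitor), the client addresses can be pulled out of it. A minimal sketch, assuming the standard MONITOR line format (timestamp, database, addr:port, then the command); the sample lines below are illustrative:

```python
import re

# MONITOR lines look like: 1587393919.557412 [0 10.0.0.5:60866] "GET" "mykey"
MONITOR_LINE = re.compile(r'^\d+\.\d+ \[\d+ (?P<addr>[\d.]+:\d+)\]')

def client_addresses(monitor_lines):
    """Return the unique client addr:port pairs seen in MONITOR output."""
    addrs = set()
    for line in monitor_lines:
        m = MONITOR_LINE.match(line)
        if m:
            addrs.add(m.group('addr'))
    return sorted(addrs)

sample = [
    '1587393919.557412 [0 10.0.0.5:60866] "GET" "mykey"',
    '1587393920.103291 [0 10.0.0.7:41234] "SET" "k" "v"',
    '1587393921.000001 [0 10.0.0.5:60866] "PING"',
]
print(client_addresses(sample))  # ['10.0.0.5:60866', '10.0.0.7:41234']
```

This only lists clients that actually send commands while MONITOR is running, so it is a partial substitute for CLIENT LIST, not an equivalent.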
I am working on a project where I can see that all of the DAGs are queued up and not moving (for approximately 24 hours or more).
It looks like the scheduler is broken, but I need to confirm that.
So here are my questions:
How do I see if the scheduler is broken?
How do I reset my Airflow (web server) scheduler?
I'm expecting some help regarding how to reset Airflow schedulers.
The answer will depend a lot on how you are running Airflow (standalone, in Docker, Astro CLI, managed solution...?).
If your scheduler is broken the Airflow UI will usually tell you the time since the last heartbeat like this:
There is also an API endpoint for a scheduler health check at http://localhost:8080/health (if Airflow is running locally).
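The /health endpoint reports a status for both the metadatabase and the scheduler, including the latest scheduler heartbeat. A minimal sketch of interpreting that payload (the JSON shape follows the Airflow docs; the timestamp value and the 60-second staleness threshold are illustrative assumptions):

```python
import json
from datetime import datetime, timezone

# Example payload as returned by GET /health (shape per the Airflow docs;
# the heartbeat value here is deliberately stale for illustration).
payload = json.loads("""
{
  "metadatabase": {"status": "healthy"},
  "scheduler": {
    "status": "healthy",
    "latest_scheduler_heartbeat": "2023-01-01T12:00:00+00:00"
  }
}
""")

def scheduler_ok(health, max_staleness_s=60):
    """True if the scheduler reports healthy and its heartbeat is recent."""
    sched = health["scheduler"]
    if sched["status"] != "healthy":
        return False
    beat = datetime.fromisoformat(sched["latest_scheduler_heartbeat"])
    age = (datetime.now(timezone.utc) - beat).total_seconds()
    return age < max_staleness_s

print(scheduler_ok(payload))  # False: the sample heartbeat is long stale
```

A "healthy" status with a stale heartbeat is exactly the broken-scheduler case, which is why checking the heartbeat age matters.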
Check the scheduler logs. By default they are under $AIRFLOW_HOME/logs/scheduler.
You might also want to look at how to do health checks in Airflow in general.
In terms of resetting, it is usually best to restart the scheduler, and again this will depend on how you started it in the first place. If you are using a standalone instance and have the processes in the foreground, simply press Ctrl+C or close the terminal to stop it. If you are running Airflow in Docker, restart the container; for the Astro CLI, there is astro dev restart.
I'm trying to run my application on a GCE VM. It uses Node.js for the frontend and Java for the backend. I use this server to communicate with my local computer over MQTT. This works, but after some time (about an hour and a half), the server seems to go to sleep (or the ports close?).
Both the MQTT and SSH terminal connections are lost.
When I connect back, the application is no longer running; it seems like the VM restarted.
Do you have any idea how to keep my server alive? I can give further details.
Answering my own question as John Hanley explained the solution in comments:
"By what method are you running your frontend and backend code. VMs do not go to sleep. If you are starting your software via an SSH session, that will be an issue. Your code needs to run as a service. Edit your question with more details."
I was indeed running my application via the SSH terminal, which caused the problem. The solution for me was to remotely access the VM via a VNC server and launch the application from the VM's own terminal.
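More generally, the fix John Hanley describes is to run the code as a service so it survives SSH disconnects and reboots. A hypothetical systemd unit for the Java backend (the file name, user, paths and jar name are all placeholders), saved as /etc/systemd/system/myapp.service, might look like:

```ini
[Unit]
Description=My application backend
After=network.target

[Service]
User=myuser
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/java -jar /opt/myapp/backend.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then sudo systemctl daemon-reload followed by sudo systemctl enable --now myapp.service starts it and keeps it running independently of any terminal session.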
I am attempting to take my existing Cloud Composer environment and connect it to a remote SQL database (Azure SQL). I've been banging my head against this for a few days, and I'm hoping someone can point out where my problem lies.
Following the documentation found here, I've spun up a GKE service and SQL proxy workload. I then created a new Airflow connection as shown here, using the full name of the service, azure-sqlproxy-service:
I test-run one of my DAG tasks and get the following:
Unable to connect: Adaptive Server is unavailable or does not exist
Not sure of the issue, I decided to remote directly into one of the workers, whitelist that worker's IP on the remote DB firewall, and try to connect to the server. With no command-line MSSQL client installed, I launched Python on the worker and attempted to connect to the database with the following:
import pymssql

connection = pymssql.connect(host='database.url.net', user='sa', password='password', database='database')
This gives the same error as above with both the service name and the remote IP entered as host. Even ignoring the service/proxy, shouldn't this Airflow worker be able to reach the remote database? I can ping websites, but checking the remote logs, the DB doesn't show any failed logins. With such a generic error and not many ideas on what to do next, I'm stuck. A few Google results have suggested switching libraries, but I'm not quite sure how, or whether I even need to, within Airflow.
What troubleshooting steps could I take next to get at least a single worker communicating with the DB before moving on to the service/proxy?
After much pain I've found that Cloud Composer uses Ubuntu 18.04, which currently breaks pymssql, as per:
https://github.com/pymssql/pymssql/issues/687
I tried downgrading to 2.1.4 with no success. Needing to get this done, I followed the instructions outlined in this post to use pyodbc instead:
Google Composer- How do I install Microsoft SQL Server ODBC drivers on environments
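For anyone hitting the same "Adaptive Server is unavailable" error, two quick checks can be sketched in Python: a raw TCP reachability test against the SQL Server port (1433 by default), and building the connection string to pass to pyodbc. The host, credentials, and the driver name ("ODBC Driver 17 for SQL Server") are placeholders; adjust them to whatever the linked post installs:

```python
import socket

def can_reach(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def mssql_odbc_conn_str(server, database, user, password,
                        driver="ODBC Driver 17 for SQL Server", port=1433):
    """Build the connection string to pass to pyodbc.connect()."""
    return (f"DRIVER={{{driver}}};SERVER={server},{port};"
            f"DATABASE={database};UID={user};PWD={password}")

if can_reach("database.url.net", 1433):
    # import pyodbc
    # conn = pyodbc.connect(
    #     mssql_odbc_conn_str("database.url.net", "database", "sa", "password"))
    print("port 1433 reachable")
else:
    print("port 1433 blocked or host unreachable")
```

If the TCP check fails, the problem is network/firewall rather than the driver, which matches the DB showing no failed logins.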
I reset the instance after some code hung up the server, and then I was not able to log in with the SSH tool.
In the serial port log, I found this:
.......................
[*] A start job is running for /etc/rc....atibility (12hours 57s / no limit)
I have a serial port console now, but which commands should I use?
Please help me.
Port 22 is listening, but I have not set any passwords or SSH keys, so I can only log in via the Google SSH web tool.
This can be caused by long-running commands in rc.local, in which case it is expected that the server will take a long time to boot; such commands should be backgrounded or moved out of rc.local. See the Stack Exchange question in [1].
I would give the VM some time (minutes) to let it finish those processes.
If the issue persists you should contact Google Cloud Support to assist you with your VM instance. See link [2] for information on how to contact support.
[1] https://askubuntu.com/questions/616757/a-start-job-is-running-for-etc-rc-local-compatibility-how-to-fix
[2] https://support.google.com/cloud/answer/6282346?
Due to some technical issues, we stopped the AWS server, and when we started it again, all of the delayed jobs were showing in the queue but none of them was running, so I need to start the delayed job server. When I used the following command, I got the issues shown in this picture: