Redis(Memorystore) clearing out keys - google-cloud-platform

I am using express-session and Redis (Memorystore) 5.0 to save sessions, and it appears Redis is clearing out all the keys (randomly, not at fixed intervals) well before the TTL on a key runs out, leaving only a couple of backup entries.
This entry should be valid for another week judging from the TTL.
I have never configured a Redis instance before and it is likely I misconfigured this one; some insight would be appreciated.
Also, this is what I get when I run MONITOR:

Avoid exposing your Memorystore instance through the external IP of your Compute Engine instance. Combined with the fact that
Memorystore instances currently require no authentication, this leads to a vulnerability that
allows anyone to read, write, and execute scripts on your instance. In the worst case, it allows someone to mine cryptocurrency using your instance, which can get your project or account suspended.
There are multiple guides online for connecting to Memorystore remotely. I suggest connecting through an SSH tunnel by following this thread: Accessing GCP Memorystore from local machines
For reference, I will paste the answer here (credit to the person who posted it):
"If your Redis machine has internal IP address 10.0.0.3 you'd do:
gcloud compute instances create redis-forwarder --machine-type=f1-micro
gcloud compute ssh redis-forwarder -- -N -L 6379:10.0.0.3:6379
As long as you keep the ssh tunnel open you can connect to localhost:6379"

Related

How can I deploy and connect to a PostgreSQL instance in AlloyDB without utilizing a VM?

Currently, I have followed Google's quickstart docs for deploying a simple Cloud Run web server connected to AlloyDB. However, the docs all seem to point towards having to use a VM as a PostgreSQL client, which is then connected to my AlloyDB cluster instance. I believe a connection can only be made within the same VPC and/or via a proxy service on the VM (please correct me if I'm wrong).
I was wondering: if I only want to give access to services within the same VPC, is having a VM a must, or is there another way?
You're correct. AlloyDB currently only allows connecting via private IP, so the only way to talk directly to the instances is from within the same VPC. The reason all the tutorials (e.g. https://cloud.google.com/alloydb/docs/quickstart/integrate-cloud-run, which is likely the quickstart you mention) talk about a VM is that in order to create your databases within the AlloyDB cluster, set user grants, etc., you need to be able to talk to it from inside the VPC. Another option, for example, would be to set up Cloud VPN to connect your LAN to the VPC directly. But that's slow, costly, and kind of a pain.
Cloud Run itself does not require the VM piece; the quickstart I linked above walks through setting up the Serverless VPC Connector, which is the piece required to connect Cloud Run to AlloyDB. The VM in those instructions is only for configuring the PG database itself. So once you've done all the configuration you need, you can shut down the VM so it isn't costing you anything. If you need to step back in to make configuration changes, you can spin the VM back up, but it's not something that needs to be running for the Cloud Run -> AlloyDB connection.
Public IP functionality for AlloyDB is on the roadmap, but I don't have any kind of timeframe for when it will be implemented.

AWS ECS Task can't connect to RDS Database

I'm a newer AWS user and today I got stuck while working on a sample project. I successfully created a Docker container that runs a simple R script that connects to my AWS RDS MySQL database and creates and writes some basic files to it. I built a public ECR repository, pushed my Docker image there, and built an ECS cluster & task, choosing Fargate and using the container image from my repository. My task ran and I could see the R code being executed when I went through the logs, but it was never able to connect to the SQL database and exited afterwards.
I've had to whitelist my own IP address in the security group for the RDS database so that I can connect to it, so I'm aware I probably have to do that for my ECS task to establish that connection too. But won't that IP address constantly change, since I won't have a static IP for the Fargate task that is executing my code? I'm trying to stay on the free tier, so I'm not sure I want to set up an Elastic IP address for this.
These two articles seem close to, if not the same as, the issue I'm having, but I can't figure out a solution. I haven't found any other info.
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-task-database-connection/
https://aws.amazon.com/premiumsupport/knowledge-center/ecs-fargate-static-elastic-ip-address/
The end goal is to get this sample project running successfully on a fixed schedule, and then to run actual scripts there to help automate things and make my life easier, so this sample project is a first step towards that. Any help or info on these questions would be appreciated!
Yes, your task is ephemeral (whether you launch it manually or as part of an ECS service) and its private/public IP address may change over time if it gets replaced. The way to make the connectivity rules stick is to assign a security group to the task (inbound access on whatever specific port you need, I assume, and outbound to everything), and assign another security group to the RDS database that allows inbound access on port 3306 from the security group you assigned to the task. This is the trick: the SG will not change, and you are telling RDS to allow ALL traffic coming from that SG. I see the first article you posted doesn't talk about this part (it should).
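The reasoning above can be sketched as a toy model (plain Python with hypothetical classes and IDs, not the AWS API): a rule that references a source security group keeps matching no matter which IP the replacement task gets.

```python
# Toy model of an SG-referencing inbound rule (hypothetical names,
# not the AWS API): the rule matches on the source security group,
# not on the task's IP address.

class Task:
    def __init__(self, sg_id, ip):
        self.sg_id = sg_id   # stays fixed across task replacements
        self.ip = ip         # changes whenever Fargate replaces the task

def rds_allows(task, rds_inbound_source_sg, port):
    """The RDS rule allows port 3306 from a specific source SG."""
    return port == 3306 and task.sg_id == rds_inbound_source_sg

# The task's IP changes across restarts, but its SG does not:
old_task = Task(sg_id="sg-task", ip="10.0.1.17")
new_task = Task(sg_id="sg-task", ip="10.0.3.82")

assert rds_allows(old_task, "sg-task", 3306)
assert rds_allows(new_task, "sg-task", 3306)  # still allowed after IP change
```

This is why no Elastic IP is needed: the security group ID is the stable identity, not the address.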

Google Cloud Redis - IP Address changed without warning

TL;DR: I could use some advice on how to set up Redis for production use on GCP. It just switched IP addresses on us randomly, there is nothing in the documentation about that, and I have no idea how to build a stable solution with that possibility.
Background:
We've been using google cloud for a few years and had a stable Redis Memorystore instance on the 'Standard' Tier.
In the past few days, our web servers started crashing every so often. After investigating, something was locking up when connecting to Celery / Redis, and we found that all our config files had 10.0.0.3 as the Redis instance, while the IP address for the server was listed as 10.0.0.4. This had never changed before, and our configs are in git, so we're sure they were unchanged.
Since Celery won't boot up with a bad connection, we know it was correct on Tuesday when we pushed up new code. It seems like the server failed over and somehow issued an IP address change on us. As evidence:
Our usage graphs bizarrely change color at a specific point,
which matches our error logs: "[2020-06-16 03:09:21,873: ERROR/MainProcess] Error in timer: ReadOnlyError("You can't write against a read-only slave.",)"
All the documentation we have found says the IP address should stay the same, but given that it didn't, I'm hoping for some feedback on how one would work around a non-static IP in this case on GCP.
Memorystore does not support static IP addresses. Some scenarios where an IP address change can occur are restarts or changes of connection mode.
From a review of the Memorystore for Redis networking page: when using a direct access connection via IP address, your project will set up a VPC network peering connection with Google's internal project, where the instance is managed. This creates an allocated IP range for Memorystore to use for its instances; the range can either be provided by you or picked from the available space (a /29 block by default).
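As a quick illustration of the size of that default allocation, Python's stdlib ipaddress module shows what a /29 block contains (the 10.0.0.0/29 network here is just an example, not the range your project actually received):

```python
import ipaddress

# A /29 block, like the default range Memorystore reserves via VPC peering
block = ipaddress.ip_network("10.0.0.0/29")

print(block.num_addresses)                 # 8
print([str(h) for h in block.hosts()])     # 10.0.0.1 through 10.0.0.6
```

So the instance's address can move anywhere within that small reserved range, which is consistent with seeing 10.0.0.3 become 10.0.0.4.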
On the other hand, Memorystore for Redis exposes uptime as a metric that is available through Cloud Monitoring (formerly Stackdriver). This can be used as a health check for the instance, as you will be able to determine whether there has been a restart or a period of unavailability.
Following the point above, you can set up an alert on the uptime metric directly in Cloud Monitoring. Unfortunately, there is nothing specific to IP address changes.
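One way to act on that uptime metric is to flag any sample where uptime decreased, which indicates a restart (and hence a moment when the IP may have changed). A minimal sketch; real samples would come from the Cloud Monitoring API, which is not shown here:

```python
def restarts(uptime_samples):
    """Return the indices where uptime dropped, i.e. the instance restarted.

    uptime_samples: uptime in seconds, in chronological order.
    """
    return [i for i in range(1, len(uptime_samples))
            if uptime_samples[i] < uptime_samples[i - 1]]

# Uptime sampled once a minute; the drop from 3720s to 45s marks a restart
samples = [3600, 3660, 3720, 45, 105]
print(restarts(samples))  # [3]
```

On each flagged restart you would re-resolve the instance's current IP (or reload your config) rather than assuming the old address is still valid.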

redis-cli unable to connect to AWS ElastiCache due to Redis memory usage, but application still able to communicate

Is it possible that redis-cli is given lower priority to connect when memory consumption is high, but the application is still allowed to communicate?
I am unable to connect via the CLI, so I can't check anything. I also don't have access to the Redis server itself.
We connect without authentication:
redis-cli -h <hostname>
I ran a process which inserted too many Redis keys, and that caused this situation. Now I am not able to delete those keys. I am afraid the other necessary keys will get evicted as old, and the system will start doing processing for things not available in Redis.
I am not able to connect via telnet either.
Is it possible to connect via a Python script at this point?
If I restart the Java application, will it be able to connect any more?
Would Redis server access via the AWS console allow deleting keys by pattern? I don't currently have that access, so I am not able to confirm this myself; I have never used it either.
Update
The following are graphs taken from the AWS console over the last day, since this issue happened:
Update
I went through the ElastiCache FAQ, but did not find any mention of being able to manage the data at the key-value-pair level, or of some specially privileged user (like root in MySQL) which is able to connect when no other users can.
All I found is cluster-level management capabilities.
From the question it is not clear whether the redis-cli -h <host> command you're running is from within the EC2 instance or from your local machine (outside the AWS VPC).
Accessing from EC2
You will have to ensure the following:
Both the EC2 instance and the Redis instance are in the same VPC.
The security group on the EC2 instance should allow port 6379. (It should already, if an application on the same EC2 instance is able to access Redis.)
Accessing from outside Amazon VPC
This is not preconfigured; I suggest you go through the Accessing Your Cluster docs under the heading "How to Access ElastiCache Resources from Outside AWS".
First, check the connectivity from the source (generally an EC2 instance) to the target (the Redis host).
We can use a simple command for that, like:
curl -v <host-or-dns>:<port>
curl -v myredis.com:6379 or curl -v 192.17.37.42:6379
If you see "Connected" in the output, there is no issue with the network; otherwise you have to look into network configuration such as firewalls.
Next, you can connect to Redis using redis-cli with the command below:
redis-cli -h myredis.com -p 6379
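On the question of connecting via a Python script: yes, anything that can reach the port can speak the Redis protocol directly, even without a client library. A stdlib-only sketch (myredis.com is the placeholder host from the example above; the encoding shown is the standard RESP format):

```python
import socket

def encode_command(*parts):
    """Encode a Redis command as a RESP array of bulk strings."""
    out = f"*{len(parts)}\r\n"
    for p in parts:
        out += f"${len(p)}\r\n{p}\r\n"
    return out.encode()

def ping(host, port=6379, timeout=5):
    """Send PING and return the raw first line of the server's reply."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(encode_command("PING"))
        return s.recv(64).split(b"\r\n")[0]

# ping("myredis.com")  # returns b'+PONG' if the instance is reachable
```

If even this raw socket PING fails from the same EC2 instance where the application works, the problem is more likely the server refusing connections (e.g. maxclients or memory pressure) than your network path.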

Unable to create server groups (create without template) in GCP

I am following the codelab https://www.spinnaker.io/guides/tutorials/codelabs/gce-source-to-prod/ but I am facing a problem creating a server group in Step 1 (the Deploy stage). The page keeps loading indefinitely; it is not getting beyond that point. I am able to get to this step only if I am using Azure or other local machines. If I use a Google Cloud instance to do the SSH tunneling, I am not even able to create an application. Can you please help me?
You might need to do some additional troubleshooting to determine where the problem is. For example, run netstat on the machine to see whether it's listening on port 9000. See if you can create a firewall rule allowing inbound traffic on that port and then try to connect directly without using the SSH tunnel.
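The listening-port check suggested above can also be done from a short script: attempt a TCP connection and see whether anything accepts it (a sketch only; host and port are the ones from the question, and a successful connect says nothing about whether the service behind it is healthy):

```python
import socket

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_open("localhost", 9000) on the machine running Spinnaker
```

If this returns False locally, the service isn't listening at all and the firewall rule won't help; if it returns True locally but False remotely, the firewall or tunnel is the place to look.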