Cloud Foundry service down - cloud-foundry

I have an issue with a Postgres database in Cloud Foundry. I created the service through the Cloud Foundry marketplace. The application worked fine until yesterday, but today it cannot connect to the database or even ping it. I am not sure where to check the service logs or how to restart the Postgres service in Cloud Foundry. If anyone has ideas, please respond to this thread. Thanks. I can see the error below in the application logs.
Caused by: org.postgresql.util.PSQLException: The connection attempt failed.
I tried to ping the database host IP address, but it shows as down.
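In case it helps others suggest answers, here is what I can run from the cf CLI to check the service (the app and service instance names below are placeholders for mine):
# List service instances in the current space and their state
cf services
# Show details and the last operation status for the Postgres instance
# ("my-postgres" is a placeholder for the actual service instance name)
cf service my-postgres
# Pull recent app logs to capture the connection errors
cf logs my-app --recent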

Related

dataproc hadoop/spark job can not connect to cloudSQL via Private IP

I am facing an issue setting up private IP access between Dataproc and Cloud SQL with a VPC network and peering. I would really appreciate help, since I have not been able to figure this out after two days of debugging and following pretty much all the docs.
The setup I have tried so far (with internal IP only):
enabled "Private Google Access" on the default subnet and used the default subnetwork for both Dataproc and Cloud SQL.
created a new VPC network/subnetwork, used it to create the Dataproc cluster, and updated Cloud SQL to use that network.
created an IP range and a "private service connection" to the "Google Cloud Platform" service provider and enabled it, along with VPC network peering to "servicenetworking".
explicitly added the Cloud SQL Client role to the default Dataproc compute service account (even though I didn't need this for other VM connectivity to Cloud SQL using the same role, because it's an admin ("editor") role anyway).
All according to the doc: https://cloud.google.com/sql/docs/mysql/private-ip and the other links there.
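For reference, the peering side of this setup can be double-checked from the command line (the network name below is a placeholder):
# The servicenetworking peering should show state ACTIVE
gcloud compute networks peerings list --network=my-vpc-network
# List the private service connections on the network
gcloud services vpc-peerings list --network=my-vpc-network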
Problem:
when I submit a Spark job on Dataproc that connects to this Cloud SQL instance, it fails with the following error: Communications link failure....
Caused by: java.net.ConnectException: Connection refused (Connection refused)
Test & debug:
connectivity tests all pass from the exact internal IP addresses on both sides (Dataproc node and Cloud SQL node)
the mysql command-line client can connect fine from the Dataproc master node
Cloud Logging does not show any deny entries or issues when connecting to MySQL
screenshots of the connectivity test on both the default and the new VPC network.
other stackoverflow questions I referred on using private ip:
Cannot connect to Cloud SQL from Cloud Run after enabling private IP and turning off public iP
How to access Cloud SQL from dataproc?
PS: I want to avoid the Cloud SQL proxy route for connecting to Cloud SQL from Dataproc, so I don't want to install the cloud_sql_proxy service via an initialization action.
A "Connection refused" normally means that nothing is listening on the other end. The logs also contain hints that the database connection is attempted to localhost, port 3307. This is the right port for the CloudSQL proxy, one higher than the usual MySQL port.
Check whether the metadata configuration for your cluster is correct:
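For example, you can dump it with (cluster name and region below are placeholders):
# Show the metadata that the initialization actions read on this cluster
gcloud dataproc clusters describe my-cluster --region=us-central1 --format='value(config.gceClusterConfig.metadata)'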
Workaround 1 :
Check whether the proxy on the cluster that is having issues is a different version (1.xx). A mismatch in Cloud SQL proxy versions appears to be involved in this issue, so you can pin the suitable 1.xx version of the Cloud SQL proxy.
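A minimal sketch of pinning the proxy manually on an affected node, using a hypothetical v1.19.1 as the suitable version (substitute whichever 1.xx release matches your working clusters; the install path may differ on your image):
# Download a specific v1 release of the proxy (the version here is only an example)
wget https://storage.googleapis.com/cloudsql-proxy/v1.19.1/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
chmod +x cloud_sql_proxy
sudo mv cloud_sql_proxy /usr/local/bin/cloud_sql_proxy
sudo systemctl restart cloud-sql-proxy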
Workaround 2:
Run the command: journalctl -r -u cloud-sql-proxy.service | grep -i err
Based on the logs, check which SQL proxy instance is causing issues.
Check whether the root cause is the project hitting the "SQL queries per 100 seconds per user" quota.
Actions:
Increase the quota and restart the affected Cloud SQL proxy services (monitoring the jobs running on the master nodes that failed).
This is similar to the linked issue, but with the quota error preventing startup instead of the network errors described there. With the updated quota, the Cloud SQL proxy should not hit this again.
Here's a recommended set of next steps:
Reboot any nodes that appear to have a defunct/broken Cloud SQL proxy. systemd won't report the truth, but running mysql --host ... --port ... against the Cloud SQL proxy on the bad nodes will detect this.
Bump up the API quota immediately: in the Cloud Console go to "IAM & Admin" > "Quotas", search for the "Cloud SQL Admin API", and click through to it; then click the pencil to edit. You should be able to raise the limit to 300 as self-service without approval. If you want more than 300 per 100s, you may need to file an approval request.
If the quota usage is approaching 100 per 100s from time to time, update the quota to 300.
It's possible that the extra Cloud SQL proxy instances on the worker nodes cause more load than necessary compared to running the proxy only on the master node. If the cluster only uses a driver that runs on the master node, the worker nodes don't need to run the proxy and it can be stopped there (see the sketch below).
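A minimal sketch for that, assuming the same systemd unit name used elsewhere in this answer; run it on each worker node:
# Stop the proxy on a worker and keep it from starting again on reboot
sudo systemctl stop cloud-sql-proxy
sudo systemctl disable cloud-sql-proxy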
To find the broken nodes, check which ones respond on the Cloud SQL proxy port.
You can loop over each hostname, SSH to it, and run this command (a fuller loop sketch appears at the end of this answer):
nc -zv localhost 3307 || sudo systemctl restart cloud-sql-proxy
or you could check the logs on each to see which ones have logged a quota message like this:
grep cloud_sql_proxy /var/log/syslog | tail
and see whether the very last message logged is "Error 429: Quota exceeded for quota group 'default' and limit 'USER-100s' of service 'sqladmin.googleapis.com' for consumer ..."
The nodes that aren't running the Cloud SQL proxy can be rebooted to start from scratch, or the proxy can be restarted on each with:
sudo systemctl restart cloud-sql-proxy

Is there any way to get the DNS address of an AWS Ubuntu server instance if I do not have the login for the AWS account, but only the SSH key for it?

I got a task to deploy a static website on an AWS Ubuntu server, and I was given the username and the SSH key for it. Using PuTTY I got access to the server and set up Django, Postgres, Nginx, and Gunicorn. Now I need to check my progress, but every tutorial I looked up checks deployment progress via a DNS address, and since I only connected to the server remotely, I do not have one. Please help me check my deployment status. I am attaching some screenshots of the PuTTY terminal below.
[Image: the final Gunicorn command to finish the deployment]
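One way to recover the public DNS name from inside the server itself, assuming the EC2 instance metadata service (IMDSv1) is reachable, is to query it over the link-local address:
# Ask the EC2 metadata service for this instance's public hostname and IP
curl http://169.254.169.254/latest/meta-data/public-hostname
curl http://169.254.169.254/latest/meta-data/public-ipv4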

Connecting to Cloud SQL private IP from GCE VM application

I am checking Cloud SQL Private IP connections from different types of clients. I could successfully make a connection from an application hosted in a GKE cluster that was created as a VPC-native cluster, as described here. Having already done this, I expected it would be easier to connect to the private IP from the same application (a simple Spring Boot application) hosted in a GCE VM. Contrary to my expectations, this does not appear to be so. It is the same Spring Boot application that I am trying to run inside a VM, but it does not seem to be able to connect to the database. I was expecting some connection error, but nothing shows up and no exception is thrown. What is strange is that I am able to connect to the Cloud SQL private IP via the mysql command line from the same VM, but not from within the Spring Boot application. Has anyone out there faced this before?
The issue was not related to Cloud SQL Private IP. As mentioned in my earlier comment, I was passing the active profile info via the Kubernetes pod configuration, so the Dockerfile did not have it. To fix the issue, I had to pass the active profile info when the program was started outside Kubernetes. This question has a lot of helpful answers on how to do this. If the program is being started via a docker run command, the active profile info can be passed as a command-line argument. See here for a useful reference.
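For example (a minimal sketch; the jar, image, and profile names are placeholders), the profile can be passed either directly to the jar or through the environment in docker run:
# Pass the active profile as a command-line argument to the jar
java -jar app.jar --spring.profiles.active=prod
# Or pass it as an environment variable when the container starts
docker run -e SPRING_PROFILES_ACTIVE=prod my-spring-app:latest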
So to summarize, Cloud SQL Private IP works fine from a GCE VM. No special configuration is required at the GCE VM end to get this working.

Unable to log in using cf login in SAP Cloud Foundry

I am getting started with Cloud Foundry and have a trial account in SAP Cloud Foundry. I have installed the cf CLI on my system, but unfortunately I am not able to log in. Whenever I try to log in, or even run cf api, I get this error:
Request error: Get https://api.xx.xxxx.xxxx.xxxxxxxx.com/v2/info: dial tcp: i/o timeout
TIP: If you are behind a firewall and require an HTTP proxy, verify the https_proxy environment variable is correctly set. Else, check your network connection.
I have tried setting and unsetting the proxy, but it doesn't help.
Any suggestions?
Vishesh.
This is happening because SAP CF is not available publicly and requires you to configure special proxy settings.
You can check the SAP Cloud Foundry JAM page for more information.
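A minimal sketch of pointing the cf CLI at such a proxy (the proxy host/port and API endpoint below are placeholders):
# The cf CLI honors the standard proxy environment variables
export https_proxy=http://proxy.example.com:8080
cf api https://api.<your-cf-endpoint>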

Deploying cloud foundry on openstack: no route to host

I've been following the instructions for deploying Cloud Foundry on OpenStack and am having a problem with the step that uploads the BOSH stemcell:
$ bosh upload stemcell http://bosh-jenkins-artifacts.s3.amazonaws.com/bosh-stemcell/openstack/bosh-stemcell-latest-openstack-kvm-ubuntu.tgz
...
Error 100: Unable to connect to the OpenStack Compute API. Check task debug log for details.
...
E, [2013-09-21T09:02:11.359958 #2587] [task:1] ERROR -- : No route to host - connect(2) (Errno::EHOSTUNREACH) (Excon::Errors::SocketError)
I can SSH to the instance running Micro BOSH and confirm that it can ping the compute host, but it can't connect via TCP/HTTP.
I've described the error in more detail here:
http://openstack.redhat.com/forum/discussion/625/ingress-issue-from-spawned-instance-to-compute-host#Item_1
It basically appears to be an OpenStack firewall/iptables configuration issue between the spawned instance running Micro BOSH and the controller/compute host running the Compute API, which I can only fix temporarily via iptables. I was surprised not to find any other Cloud Foundry related posts pointing to this issue, and was wondering whether anyone has seen it and found a workaround?
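For reference, the kind of temporary iptables rule involved looks roughly like this (a sketch: 8774 is the default nova Compute API port, and the source subnet is a placeholder), applied on the controller/compute host:
# Allow TCP to the Compute API from the spawned instances' subnet
sudo iptables -I INPUT -p tcp --dport 8774 -s 10.0.0.0/24 -j ACCEPT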