CLI plugin installation problem on Grafana - google-cloud-platform

I'm getting the following error when trying to install a plugin with the Grafana CLI on a Grafana instance installed on Kubernetes. I deleted and rebuilt the pod and it comes up fine, but the error persists. Other Grafana features are working fine. What can I do?
Failed to send request error: Get "https://grafana.com/api/plugins/repo": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Error: ✗ Failed to send request: Get "https://grafana.com/api/plugins/repo": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

You say Grafana itself works and Kubernetes reports no error for the pod. The error content says "Failed to send request: Get", so most likely the pod can reach the internet but cannot resolve DNS. If ping 8.8.8.8 works but ping google.com does not, you need to add a nameserver.
To do that, you can add something like the following to the /etc/resolv.conf file.
nameserver 8.8.8.8
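
If you want to confirm it really is DNS before changing anything, you can run the same checks from inside the Grafana pod. A minimal sketch, assuming a pod named grafana-0 in the default namespace and an image that ships busybox ping/nslookup (adjust names to your deployment):

# raw connectivity vs. name resolution from inside the pod
kubectl exec -it grafana-0 -- ping -c 2 8.8.8.8      # succeeds -> network path is fine
kubectl exec -it grafana-0 -- nslookup grafana.com   # fails    -> DNS is the culprit
kubectl exec -it grafana-0 -- cat /etc/resolv.conf   # see which resolver the pod is using

Keep in mind that editing /etc/resolv.conf inside the container does not survive a pod restart; the persistent equivalent on Kubernetes is adding a dnsConfig section to the pod spec or fixing the cluster DNS (CoreDNS/kube-dns) itself.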

Related

gcloud init ERROR: gcloud crashed (ConnectionError)

The Google Cloud SDK installer downloaded successfully.
After installation, it ran the gcloud init command.
It asked me to sign in.
After I provided the sign-in details, the following error occurred:
ERROR: gcloud crashed (ConnectionError):
HTTPSConnectionPool(host='oauth2.googleapis.com', port=443):
Max retries exceeded with url: /token (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000016A45426A08>:
Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))
How do I handle this error?
I was trying gcloud init on a remote desktop, and received this error: gcloud crashed (ConnectionError)
The following solution worked for me.
I used the command gcloud init --console-only instead of gcloud init
SSL verification mostly fails if you are behind a proxy. You can disable SSL validation for the gcloud client using the command below:
gcloud config set auth/disable_ssl_validation True
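If a proxy really is in the way, an alternative to disabling SSL validation is to tell gcloud about the proxy explicitly. A hedged sketch with a made-up proxy host and port:

gcloud config set proxy/type http
gcloud config set proxy/address proxy.example.com   # placeholder: your proxy host
gcloud config set proxy/port 3128                   # placeholder: your proxy port
# only if the proxy requires authentication
gcloud config set proxy/username myuser
gcloud config set proxy/password mypassword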
Google will eventually block your IP when you exceed a certain number of requests.
You can try creating another superuser from the command line; maybe that will resolve the issue.
When I was trying over broadband I was getting the same error, but when I switched to mobile data everything worked fine for me.

Cannot reach containers from codebuild

I've been having issues reaching containers from within CodeBuild. I have an exposed GraphQL service with a downstream auth service and a PostgreSQL database, all started through Docker Compose. Running and testing them works fine locally; however, I cannot get the right combination of host names to work in CodeBuild.
It looks like my test is able to run if I hit the GraphQL endpoint at 0.0.0.0:8000; however, once my GraphQL container attempts to reach the downstream service, I get a connection refused. I've tried reaching the auth service from inside the GraphQL service at auth:8001 and 0.0.0.0:8001, with port 8001 exposed, and by setting up a bridged network. I always get a connection refused error.
I've attached part of my codebuild logs.
Any ideas what I might be missing?
Container 2018/08/28 05:37:17 Running command docker ps
CONTAINER ID   IMAGE                    COMMAND                  CREATED         STATUS                  PORTS                    NAMES
6c4ab1fdc980   docker-compose_graphql   "app"                    1 second ago    Up Less than a second   0.0.0.0:8000->8000/tcp   docker-compose_graphql_1
5c665f5f812d   docker-compose_auth      "/bin/sh -c app"         2 seconds ago   Up Less than a second   0.0.0.0:8001->8001/tcp   docker-compose_auth_1
b28148784c04   postgres:10.4            "docker-entrypoint..."   2 seconds ago   Up 1 second             0.0.0.0:5432->5432/tcp   docker-compose_psql_1
Container 2018/08/28 05:37:17 Running command go test ; cd ../..
Register panic: [{"message":"rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = \"transport: Error while dialing dial tcp 0.0.0.0:8001: connect: connection refused\"","path":
From the "host" machine my exposed GraphQL service could only be reached using the IP address 0.0.0.0. The internal networking was set up correctly and each service could be reached at <NAME>:<PORT> as expected, however, upon error the IP address would be shown (172.27.0.1) instead of the host name.
My problem was that all internal connections were not yet ready, leading to the "connection refused" error. The command sleep 5 after docker-compose up gave my services time to fully initialize before testing.
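
As an alternative to a fixed sleep, the buildspec can poll the published ports until they accept connections, which holds up better when start-up time varies. A rough sketch, assuming nc is available in the build image and the ports are 8000 and 8001 as in the docker ps output above:

docker-compose up -d
# wait up to ~30 seconds for each service to start accepting TCP connections
for port in 8000 8001; do
  for i in $(seq 1 30); do
    nc -z 127.0.0.1 "$port" && break
    sleep 1
  done
done
go test ./...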

cf create-service-broker fails with connection refused

I'm experimenting with CF in my local bosh-lite setup.
The apps that I deploy into it work well. I am now trying to follow the steps here
https://github.com/cf-platform-eng/cf-community-workshop/blob/master/demos/service-broker-lab.adoc
to try out the custom service broker setup.
The https://github.com/mstine/haash-broker application starts and is running fine:
$ cf apps
name requested state instances memory disk urls
haash-broker started 1/1 768M 1G haash-broker.vbox.mojito, haash-broker.192.168.50.6.xip.io
I can access it from my host machine browser well:
http://haash-broker.192.168.50.6.xip.io/v2/catalog
But when I execute the
cf create-service-broker haash-broker warreng natedogg http://haash-broker.192.168.50.6.xip.io
I get
$ cf create-service-broker haash-broker warreng natedogg http://haash-broker.192.168.50.6.xip.io
Creating service broker haash-broker as admin...
FAILED
Server error, status code: 502, error code: 10001, message: The service broker could not be reached: http://haash-broker.192.168.50.6.xip.io/v2/catalog
When I log into the CC VM:
$ bosh -e vbox -f cf ssh api/eb4cec99-bab1-4513-a980-fb92775ac2d8
I can ping the hostname:
api/eb4cec99-bab1-4513-a980-fb92775ac2d8:~$ sudo ping haash-broker.192.168.50.6.xip.io
PING haash-broker.192.168.50.6.xip.io (192.168.50.6) 56(84) bytes of data.
64 bytes from 192.168.50.6: icmp_seq=1 ttl=64 time=0.080 ms
But wget connection gets refused:
api/eb4cec99-bab1-4513-a980-fb92775ac2d8:~$ wget http://warreng:natedogg@haash-broker.192.168.50.6.xip.io/v2/catalog
--2018-04-06 04:19:05-- http://warreng:*password*@haash-broker.192.168.50.6.xip.io/v2/catalog
Resolving haash-broker.192.168.50.6.xip.io (haash-broker.192.168.50.6.xip.io)... 192.168.50.6
Connecting to haash-broker.192.168.50.6.xip.io (haash-broker.192.168.50.6.xip.io)|192.168.50.6|:80... failed: Connection refused.
The firewall permits everything on that VM (sudo iptables -L).
The hostname gets resolved properly, the ping works, and port 80 is open on the target IP, since I can reach it from my host browser.
How can it be that wget doesn't work in this situation?
This also seems to be the reason why cf create-service-broker fails to create the service broker.
UPDATE
I've managed to execute the cf create-service-broker command with the URL of an nginx reverse proxy running outside of my bosh-lite environment. The proxy redirects to the same initial URL http://haash-broker.192.168.50.6.xip.io
and the command succeeds this way.
But the subsequent
cf create-service-broker haash-broker warreng natedogg http://haash-broker.192.168.50.1.xip.io:9999
cf enable-service-access haash
cf create-service HaaSh basic my-hash
(where haash-broker.192.168.50.1.xip.io:9999 is my nginx proxy) fails with
Server error, status code: 502, error code: 10001, message: The service broker rejected the request to http://haash-broker.192.168.50.1.xip.io:9999/v2/service_instances/4ef19154-d238-4cb3-8003-803fba53af3f?accepts_incomplete=true. Status Code: 400 Bad Request, Body: {"timestamp":1523008856993,"error":"Bad Request","status":400,"message":""}
I can see in both the nginx and the broker app logs that the request reaches the broker and it answers with 400.
I'm now debugging why.
Can you post the result of the --server-response option used with wget? Also, what happens when you try to curl the broker?
The broker requires credentials, but it would be interesting to see whether it responds with 401 or 500 to the first request that wget makes without credentials.
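For reference, the checks asked for above could look roughly like this from inside the CC VM (the credentials are the example ones from the question):

# show the HTTP status line and headers that wget receives
wget --server-response -O - http://warreng:natedogg@haash-broker.192.168.50.6.xip.io/v2/catalog
# the same with curl, verbose, with and then without credentials
curl -v -u warreng:natedogg http://haash-broker.192.168.50.6.xip.io/v2/catalog
curl -v http://haash-broker.192.168.50.6.xip.io/v2/catalog   # should answer 401 if the broker is reachable at all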

*10 upstream timed out (110: Connection timed out) while reading response header from upstream with uwsgi

I currently have a server set up with nginx and uWSGI with Django.
This error didn't happen until I tried to change my RDS instance.
My full error message is:
*10 upstream timed out (110: Connection timed out) while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: xxx.xxx.xxx.xxx, request: "GET /load/ HTTP/1.1", upstream: "uwsgi://unix:/tmp/load.sock", host: "example.com", referrer: "https://example.com/"
I was using AWS RDS (Postgres), which worked perfectly fine. The only change I made was switching from the regular Postgres service to Aurora Postgres. I didn't upgrade the existing database from regular to Aurora; I created a new Aurora Postgres instance, got everything set up, and changed the host and the other values in my Django database settings. Running runserver locally works fine: it connects to the database with read and write access and works perfectly. But when I deploy to the server and open my domain, anything UI-related looks fine, but anything database-related does not. It takes a while and then, of course, the 504 gateway timeout appears. I checked the nginx error log, and that is the error message I found. I googled and tried a few settings other Stack Overflow posts suggested, such as adding single-interpreter = true to the uwsgi.ini file. No luck.
Can someone please give me an idea of where I should look for this problem?
Thanks in advance.
Try going to your RDS instance and checking its security group settings. This happened to me once, and it took me a while to find out that the security group was the problem. I didn't recall setting up the security group, but it restricted access to my local IP.
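A quick way to confirm that theory from the web server itself before touching the console; the endpoint, user, and database names below are placeholders for your Aurora instance:

# does the Aurora endpoint accept TCP connections from this box?
nc -vz my-aurora-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com 5432
# or test an actual login with psql (prompts for the password)
psql -h my-aurora-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com -U mydbuser -d mydbname -c 'SELECT 1'

If this hangs or is refused here but works from your local machine, the security group on the new Aurora instance is most likely allowing only your local IP; add an inbound rule for port 5432 from the web server's security group or IP.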

Chef Server Installation error

I am a newbie to Chef. I am trying to install Chef server on an EC2 CentOS instance.
I am following this link to install the Chef server.
But I am getting an error at step 8 of the installation.
[root@ip-10-105-203-174 ~]# knife configure -i
/usr/lib/ruby/gems/1.8/gems/ohai-7.0.4/lib/ohai/loader.rb:188: warning: character class has `[' without escape
/usr/lib/ruby/gems/1.8/gems/ohai-7.0.4/lib/ohai/loader.rb:188: warning: regexp has `]' without escape
Overwrite /root/.chef/knife.rb? (Y/N)Y
Please enter the chef server URL: [https://ip-10-105-23-174:443] https://10.105.23.174
Please enter a name for the new user: [root]
Please enter the existing admin name: [admin]
Please enter the location of the existing admin's private key: [/etc/chef-server/admin.pem]
Please enter the validation clientname: [chef-validator]
Please enter the location of the validation key: [/etc/chef-server/chef-validator.pem] /root/.chef/chef-server/chef-validator.pem
Please enter the path to a chef repository (or leave blank):
Creating initial API user...
Please enter a password for the new user:
ERROR: Connection refused connecting to https://10.105.203.174/users, retry 1/5
ERROR: Connection refused connecting to https://10.105.203.174/users, retry 2/5
ERROR: Connection refused connecting to https://10.105.203.174/users, retry 3/5
ERROR: Connection refused connecting to https://10.105.203.174/users, retry 4/5
ERROR: Connection refused connecting to https://10.105.203.174/users, retry 5/5
ERROR: Network Error: Connection refused - Connection refused connecting to https://10.105.23.174/users, giving up
Check your knife configuration and network settings
Is this the right tutorial? Please help me resolve the issue.
You seem to be doing the right thing for Chef, but it looks like a fundamental networking issue. If I understand properly, you are logged into the Chef server and trying to set up knife on the same box to contact itself. Is the Chef server actually running? Use 'netstat -an' and verify that something is listening on port 443. Can you use a browser from another host to contact it? You could also consider installing "knife" on another machine and running the same thing.
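A short sketch of those checks on the Chef server box (the chef-server-ctl command assumes the omnibus open-source Chef server 11 that the older tutorials use):

# is anything listening on port 443?
netstat -an | grep ':443'
# are all Chef server components running?
chef-server-ctl status
# can the API be reached locally? -k skips the self-signed certificate check
curl -k https://127.0.0.1:443/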
Are you providing a valid chef-server URL? At first glance, it looks like you are giving the workstation IP address.