My website is bannedadsense.com, and I was able to access it yesterday. Now I can't access it from my laptop using Chrome or Firefox.
This is the notice from Chrome:
bannedadsense.com refused to connect.
But ping bannedadsense.com still works fine:
[dang@centos-512mb-nyc3 ~]$ ping bannedadsense.com
PING bannedadsense.com (45.55.213.4) 56(84) bytes of data.
64 bytes from centos-512mb-nyc3-01 (45.55.213.4): icmp_seq=1 ttl=64 time=0.062 ms
64 bytes from centos-512mb-nyc3-01 (45.55.213.4): icmp_seq=2 ttl=64 time=0.137 ms
64 bytes from centos-512mb-nyc3-01 (45.55.213.4): icmp_seq=3 ttl=64 time=0.140 ms
64 bytes from centos-512mb-nyc3-01 (45.55.213.4): icmp_seq=4 ttl=64 time=0.067 ms
64 bytes from centos-512mb-nyc3-01 (45.55.213.4): icmp_seq=5 ttl=64 time=0.103 ms
64 bytes from centos-512mb-nyc3-01 (45.55.213.4): icmp_seq=6 ttl=64 time=0.124 ms
64 bytes from centos-512mb-nyc3-01 (45.55.213.4): icmp_seq=7 ttl=64 time=0.078 ms
64 bytes from centos-512mb-nyc3-01 (45.55.213.4): icmp_seq=8 ttl=64 time=0.133 ms
64 bytes from centos-512mb-nyc3-01 (45.55.213.4): icmp_seq=9 ttl=64 time=0.139 ms
64 bytes from centos-512mb-nyc3-01 (45.55.213.4): icmp_seq=10 ttl=64 time=0.127 ms
64 bytes from centos-512mb-nyc3-01 (45.55.213.4): icmp_seq=11 ttl=64 time=0.103 ms
64 bytes from centos-512mb-nyc3-01 (45.55.213.4): icmp_seq=12 ttl=64 time=0.191 ms
Accessing FTP still works fine.
I don't know what the problem is.
Checked from a proxy:
The requested resource could not be loaded. libcurl returned the error:
Failed to connect to bannedadsense.com port 80: Connection refused
I have stopped the firewall on the server, but the problem is still there.
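For reference, a quick way to check whether anything is still listening on port 80 is something like the following (generic CentOS commands, not output from my server; "connection refused" while ping works usually means the service itself is not listening):
sudo systemctl status firewalld     # confirm the firewall service really is stopped
sudo ss -tlnp | grep ':80'          # check whether any process is listening on port 80
curl -v http://127.0.0.1/           # test the web server locally, bypassing DNS and the network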
Please help me, thank you very much.
Edit: I see the problem now. I can't restart Apache; the problem is here:
sudo apachectl restart
apachectl status
This is part of the output from the command line:
Dec 22 05:31:55 centos-512mb-nyc3-01 httpd[728]: (2)No such file or directory: AH02291: Cannot access directory '/var/www/xosopowerball.net/' for error log ...et.conf:1
Dec 22 05:31:55 centos-512mb-nyc3-01 httpd[728]: (2)No such file or directory: AH02291: Cannot access directory '/var/www/xosomegamillions.com/' for error l...om.conf:1
I simply created two new folders named xosopowerball.net and xosomegamillions.com, restarted Apache again, and all the problems were fixed.
New solution:
Remove the files in /etc/httpd/sites-available/ and /etc/httpd/sites-enabled/ and then restart. After that you can remove the two folders in /var/www; I created them when I set up the virtual hosts (see Create New Virtual Host Files).
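As a rough sketch (the .conf file names below are a guess based on the error messages; adjust them to whatever you created for the virtual hosts):
# remove the leftover virtual host configs
sudo rm /etc/httpd/sites-enabled/xosopowerball.net.conf /etc/httpd/sites-enabled/xosomegamillions.com.conf
sudo rm /etc/httpd/sites-available/xosopowerball.net.conf /etc/httpd/sites-available/xosomegamillions.com.conf
sudo apachectl configtest   # should now report "Syntax OK"
sudo apachectl restart
# once Apache is back up, the placeholder folders can be removed as well
sudo rm -r /var/www/xosopowerball.net /var/www/xosomegamillions.com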
Thank you for reading my problem.
I am trying to connect two Erlang VMs (running on CentOS 8, Erlang/OTP 23), one located in GCP us-east1-b and the other in GCP europe-west6. Both are inside the same VPC, running on separate subnets: us-east on 10.33.0.0/16 and eur-west on 10.88.0.0/16. The GCP routes and firewalls should be set to allow traffic across those subnets and throughout the VPC. Ping works from VM to VM (see below), and telnet works to the Erlang epmd port 4369. ISSUE: when connecting machine to machine using the Erlang ping utility net_adm:ping/1, it returns a "pang", meaning it does not connect.
Any suggestions or thoughts on what the issue might be are much appreciated!
Here are additional research and facts regarding the setups.
NOTE: see the GCP firewall rules, the GCP network "blocked" result on the connectivity test, and the telnet responses for ports 35539 and 42257, which do not connect (which may explain why the VMs return a "pang" and can't connect).
[g@app-server1-east ~]$ erl -name ack1@10.33.0.2 -setcookie whale
Erlang/OTP 23 [erts-11.1.3] [source] [64-bit] [smp:2:2] [ds:2:2:10] [async-threads:1] [hipe]
Eshell V11.1.3 (abort with ^G)
(ack1@10.33.0.2)1> net_adm:ping('ack2@10.88.0.2').
pang
(ack1@10.33.0.2)2>
[g@app-server1-east ~]$ ping 10.88.0.2
PING 10.88.0.2 (10.88.0.2) 56(84) bytes of data.
64 bytes from 10.88.0.2: icmp_seq=1 ttl=64 time=105 ms
64 bytes from 10.88.0.2: icmp_seq=2 ttl=64 time=103 ms
64 bytes from 10.88.0.2: icmp_seq=3 ttl=64 time=104 ms
64 bytes from 10.88.0.2: icmp_seq=4 ttl=64 time=103 ms
64 bytes from 10.88.0.2: icmp_seq=5 ttl=64 time=103 ms
^C
--- 10.88.0.2 ping statistics ---
6 packets transmitted, 5 received, 16.6667% packet loss, time 12ms
rtt min/avg/max/mdev = 103.290/103.749/105.243/0.754 ms
[g@app-server1-east ~]$ epmd -names
epmd: up and running on port 4369 with data:
name ack1 at port 35539
[gbaird@app-server1-east ~]$ telnet 10.88.0.2 4369
Trying 10.88.0.2...
Connected to 10.88.0.2.
Escape character is '^]'.
exit
Connection closed by foreign host.
[g@app-server1-east ~]$ telnet 10.88.0.2 42257
Trying 10.88.0.2...
telnet: connect to address 10.88.0.2: Connection timed out
[g@app-server2-eur ~]$ erl -name ack2@10.88.0.2 -setcookie whale
Erlang/OTP 23 [erts-11.1.3] [source] [64-bit] [smp:2:2] [ds:2:2:10] [async-threads:1] [hipe]
Eshell V11.1.3 (abort with ^G)
(ack2@10.88.0.2)1> node
(ack2@10.88.0.2)1> .
node
(ack2@10.88.0.2)2>
[g@app-server2-eur ~]$ ping 10.33.0.2
PING 10.33.0.2 (10.33.0.2) 56(84) bytes of data.
64 bytes from 10.33.0.2: icmp_seq=1 ttl=64 time=105 ms
64 bytes from 10.33.0.2: icmp_seq=2 ttl=64 time=103 ms
64 bytes from 10.33.0.2: icmp_seq=3 ttl=64 time=103 ms
64 bytes from 10.33.0.2: icmp_seq=4 ttl=64 time=103 ms
^C
--- 10.33.0.2 ping statistics ---
5 packets transmitted, 4 received, 20% packet loss, time 10ms
rtt min/avg/max/mdev = 103.194/103.601/104.685/0.666 ms
[g@app-server2-eur ~]$ epmd -names
epmd: up and running on port 4369 with data:
name ack2 at port 42257
[gbaird@app-server2-eur ~]$ telnet 10.33.0.2 4369
Trying 10.33.0.2...
Connected to 10.33.0.2.
Escape character is '^]'.
exit
Connection closed by foreign host.
[g@app-server2-eur ~]$ telnet 10.33.0.2 35539
Trying 10.33.0.2...
telnet: connect to address 10.33.0.2: Connection timed out
[gbaird@app-server2-eur ~]$
Erlang's EPMD (Erlang Port Mapper Daemon) is what listens on port 4369, but the node itself listens on a random port.
When setting up a cluster, each node registers its port with the local host's EPMD and contacts the remote host's EPMD to query the actual port of the remote node; the traffic then goes directly to that port. EPMD is not a relay.
You can control the range of ports the nodes listen on through the kernel application configuration, in particular inet_dist_listen_min and inet_dist_listen_max, and then allow that range in the firewall.
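A minimal sketch of what that could look like (the port range 9100-9105 and the firewall rule name are arbitrary examples, and YOUR_VPC is a placeholder, not values from this setup):
# start each node with a fixed distribution port range
erl -name ack1@10.33.0.2 -setcookie whale \
    -kernel inet_dist_listen_min 9100 \
    -kernel inet_dist_listen_max 9105

# then allow epmd (4369) plus that range between the subnets, e.g.:
gcloud compute firewall-rules create allow-erlang-dist \
    --network=YOUR_VPC --direction=INGRESS --action=ALLOW \
    --rules=tcp:4369,tcp:9100-9105 \
    --source-ranges=10.33.0.0/16,10.88.0.0/16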
I've built a Python application that runs smoothly when deployed directly on the OS (Raspberry Pi OS). I decided to dockerize the application and deployed it to about 20 endpoints with no problem. But now, on one single endpoint, when the app is uploading a file to S3, the first upload usually works as it should, but from the second file onwards I get this error message:
"Error:Connection was closed before we received a valid response from endpoint URL: ..."
I have already checked the container, and its internet link seems to be OK; it can even ping the endpoint the data is supposed to be uploaded to.
I have a second application that streams video to KVS, and it does not connect from inside the container either.
The Docker network configuration seems to be OK as well.
Where or what should I look at from here? I've been stuck on this for almost 2 days, and I don't even know what to google anymore.
I guess it is some configuration of the Docker daemon or the OS...
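For reference, by "Docker daemon or OS configuration" I mean generic checks like these (nothing here is specific to my endpoint):
docker info                      # daemon-level settings currently in use
docker network inspect bridge    # subnet, gateway and options of the default bridge network
cat /etc/docker/daemon.json      # local daemon overrides, if the file exists
ip link show docker0             # MTU of the Docker bridge
ip link show eth0                # MTU of the host interface, for comparison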
I did some testing from inside and outside the container for comparison; does it give any clue?
INSIDE CONTAINER
root@mycontainer# busybox ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=118 time=10.943 ms
64 bytes from 8.8.8.8: seq=1 ttl=118 time=8.586 ms
64 bytes from 8.8.8.8: seq=2 ttl=118 time=8.108 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 8.108/9.212/10.943 ms
root@mycontainer# wget google.com
--2020-10-01 13:04:41-- http://google.com/
Resolving google.com (google.com)... 172.217.30.46, 2800:3f0:4001:809::200e
Connecting to google.com (google.com)|172.217.30.46|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://www.google.com/ [following]
--2020-10-01 13:04:41-- http://www.google.com/
Resolving www.google.com (www.google.com)... 172.217.162.164, 2800:3f0:4001:810::2004
Connecting to www.google.com (www.google.com)|172.217.162.164|:80... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.
--2020-10-01 13:04:43-- (try: 2) http://www.google.com/
Connecting to www.google.com (www.google.com)|172.217.162.164|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: 'index.html'
index.html [ <=> ] 12.27K --.-KB/s in 0s
2020-10-01 13:04:43 (45.9 MB/s) - 'index.html' saved [12567]
OUTSIDE CONTAINER
pi@raspberrypi:~/ $ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=119 time=7.55 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=119 time=8.09 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=119 time=7.78 ms
^C
--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 5ms
rtt min/avg/max/mdev = 7.549/7.807/8.090/0.221 ms
pi@raspberrypi:~/ $ wget google.com
--2020-10-01 13:05:51-- http://google.com/
Resolving google.com (google.com)... 172.217.30.46, 2800:3f0:4001:809::200e
Connecting to google.com (google.com)|172.217.30.46|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: http://www.google.com/ [following]
--2020-10-01 13:05:51-- http://www.google.com/
Resolving www.google.com (www.google.com)... 172.217.28.4, 2800:3f0:4001:810::2004
Connecting to www.google.com (www.google.com)|172.217.28.4|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’
index.html [ <=> ] 12.27K --.-KB/s in 0s
2020-10-01 13:05:52 (60.0 MB/s) - ‘index.html’ saved [12566]
I just deployed a function. (Note, this works perfectly when testing locally with the Functions Framework.)
Deploying function...
gcloud functions deploy quantumjs-api --runtime nodejs10 --trigger-http --project qunatumvue --region europe-west2
Deploying function (may take a while - up to 2 minutes)...done.
availableMemoryMb: 256
entryPoint: quantumjs-api
environmentVariables:
  location: production
httpsTrigger:
  url: https://europe-west2-qunatumvue.cloudfunctions.net/quantumjs-api
labels:
  deployment-tool: cli-gcloud
name: projects/qunatumvue/locations/europe-west2/functions/quantumjs-api
runtime: nodejs10
Edit ---- thanks Doug Stevenson for the ping pointer
However, when posting data to it, I get no response back, just this error:
"Error: Network Error
at createError (webpack-internal:///./node_modules/axios/lib/core/createError.js:16:15)
at XMLHttpRequest.handleError (webpack-internal:///./node_modules/axios/lib/adapters/xhr.js:87:14)"
You can't ping a URL. You ping a hostname. The hostname in the URL you've given is "europe-west2-qunatumvue.cloudfunctions.net". When I ping that, it's fine:
user@host 18:26 $ ping europe-west2-qunatumvue.cloudfunctions.net
PING www3.l.google.com (173.194.202.138) 56(84) bytes of data.
64 bytes from pf-in-f138.1e100.net (173.194.202.138): icmp_seq=1 ttl=42 time=29.3 ms
64 bytes from pf-in-f138.1e100.net (173.194.202.138): icmp_seq=2 ttl=42 time=29.3 ms
64 bytes from pf-in-f138.1e100.net (173.194.202.138): icmp_seq=3 ttl=42 time=29.3 ms
If you want to check if the URL works, you should instead access it with curl or some HTTP library.
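For example (using the URL from the question; the JSON body below is just a placeholder):
# simple reachability check
curl -i https://europe-west2-qunatumvue.cloudfunctions.net/quantumjs-api

# or, closer to what the question is doing, POST some data
curl -i -X POST \
     -H "Content-Type: application/json" \
     -d '{"test": true}' \
     https://europe-west2-qunatumvue.cloudfunctions.net/quantumjs-api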
I figured out my issue: I had to redirect all traffic to HTTPS in the CORS policy file, since that is what my domain starts with.
My hard drive crashed earlier. It looks like my Clojure install was affected. When I try to run lein repl I am greeted with the following error:
$ lein repl
Exception in thread "Thread-5" java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:382)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:241)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:228)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:431)
at java.net.Socket.connect(Socket.java:527)
at java.net.Socket.connect(Socket.java:476)
at java.net.Socket.<init>(Socket.java:373)
at java.net.Socket.<init>(Socket.java:187)
at clojure.tools.nrepl$connect.doInvoke(nrepl.clj:184)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.tools.nrepl.ack$send_ack.invoke(ack.clj:47)
at clojure.tools.nrepl.server$start_server.doInvoke(server.clj:146)
at clojure.lang.RestFn.invoke(RestFn.java:619)
at user$eval597.invoke(NO_SOURCE_FILE:0)
at clojure.lang.Compiler.eval(Compiler.java:6703)
at clojure.lang.Compiler.eval(Compiler.java:6693)
at clojure.lang.Compiler.eval(Compiler.java:6666)
at clojure.core$eval.invoke(core.clj:2927)
at leiningen.core.eval$fn__4815.invoke(eval.clj:314)
at clojure.lang.MultiFn.invoke(MultiFn.java:231)
at leiningen.core.eval$eval_in_project.invoke(eval.clj:337)
at clojure.lang.AFn.applyToHelper(AFn.java:160)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invoke(core.clj:626)
at leiningen.repl$server$fn__8776.invoke(repl.clj:203)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.core$apply.invoke(core.clj:624)
at clojure.core$with_bindings_STAR_.doInvoke(core.clj:1862)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at clojure.lang.AFn.applyToHelper(AFn.java:156)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.core$apply.invoke(core.clj:628)
at clojure.core$bound_fn_STAR_$fn__4140.doInvoke(core.clj:1884)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at clojure.lang.AFn.run(AFn.java:22)
at java.lang.Thread.run(Thread.java:695)
REPL server launch timed out.
My Leiningen install is via Homebrew, so I tried uninstalling and then reinstalling.
$ brew rm --force leiningen
Uninstalling leiningen...
$ brew install leiningen
==> Downloading https://github.com/technomancy/leiningen/archive/2.4.2.tar.gz
Already downloaded: /Library/Caches/Homebrew/leiningen-2.4.2.tar.gz
==> Downloading https://github.com/technomancy/leiningen/releases/download/2.4.2/leining
Already downloaded: /Library/Caches/Homebrew/leiningen--jar-2.4.2.jar
==> Caveats
Dependencies will be installed to:
$HOME/.m2/repository
To play around with Clojure run `lein repl` or `lein help`.
Bash completion has been installed to:
/usr/local/etc/bash_completion.d
zsh completion has been installed to:
/usr/local/share/zsh/site-functions
==> Summary
🍺 /usr/local/Cellar/leiningen/2.4.2: 8 files, 13M, built in 2 seconds
No dice. I still get the same error. What broke and how do I fix it?
Edit:
Additional diagnostic info:
$ lein repl :headless
nREPL server started on port 56785 on host 127.0.0.1 - nrepl://127.0.0.1:56785
$ lein repl :connect localhost:56785
Connecting to nREPL at localhost:56785
ConnectException Connection refused
java.net.PlainSocketImpl.socketConnect (PlainSocketImpl.java:-2)
java.net.PlainSocketImpl.doConnect (PlainSocketImpl.java:382)
java.net.PlainSocketImpl.connectToAddress (PlainSocketImpl.java:241)
java.net.PlainSocketImpl.connect (PlainSocketImpl.java:228)
java.net.SocksSocketImpl.connect (SocksSocketImpl.java:431)
java.net.Socket.connect (Socket.java:527)
java.net.Socket.connect (Socket.java:476)
java.net.Socket.<init> (Socket.java:373)
java.net.Socket.<init> (Socket.java:187)
clojure.tools.nrepl/connect (nrepl.clj:184)
clojure.core/apply (core.clj:624)
clojure.tools.nrepl/add-socket-connect-method!/fn--5686 (nrepl.clj:226)
Bye for now!
$ ping localhost
PING localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.059 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.141 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.149 ms
64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.119 ms
64 bytes from 127.0.0.1: icmp_seq=5 ttl=64 time=0.078 ms
^C
--- localhost ping statistics ---
6 packets transmitted, 6 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.059/0.113/0.149/0.033 ms
$ lein repl :connect 127.0.0.1:56785
Connecting to nREPL at 127.0.0.1:56785
REPL-y 0.3.1
Clojure 1.6.0
Docs: (doc function-name-here)
(find-doc "part-of-name-here")
Source: (source function-name-here)
Javadoc: (javadoc java-object-or-class-here)
Exit: Control+D or (exit) or (quit)
Results: Stored in vars *1, *2, *3, an exception in *e
I wanted to use the SSL support provided by Thrift in my C++ server and client. My own Thrift client was always hanging in SSL_connect after it passed through transport->open(). So I built the official thrift\thrift-0.9.1\test\cpp\src\TestServer.cpp and TestClient.cpp for Windows. The same thing happened to me there as well.
I really could use any help or pointers.
Update:
I also tried using the latest sources at https://github.com/apache/thrift
Previously I was working with 0.9.1.
Since I saw that TestServer.cpp was doing the following:
sslSocketFactory->loadCertificate("./server-certificate.pem");
sslSocketFactory->loadPrivateKey("./server-private-key.pem");
sslSocketFactory->ciphers("ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH");
and TestClient.cpp was doing the following:
factory->ciphers("ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH");
factory->loadTrustedCertificates("./trusted-ca-certificate.pem");
factory->authenticate(true);
So I took the following steps to build the certs:
openssl genrsa -out ca-private-key.pem 2048
openssl req -new -x509 -nodes -days 3600 -key ca-private-key.pem -out ca-certificate.pem
openssl req -newkey rsa:2048 -days 3600 -nodes -keyout server-private-key.pem -out server-request.pem
openssl rsa -in server-private-key.pem -out server-private-key.pem
openssl x509 -req -in server-request.pem -days 3600 -CA ca-certificate.pem -CAkey ca-private-key.pem -set_serial 01 -out server-certificate.pem
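As an optional sanity check on the resulting files (assuming the file names used above, and that the client's trusted-ca-certificate.pem is simply a copy of ca-certificate.pem):
openssl verify -CAfile ca-certificate.pem server-certificate.pem    # should print "server-certificate.pem: OK"
openssl x509 -in server-certificate.pem -noout -subject -issuer -dates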
Output for the different test cases:
TestServer.exe --ssl
TestClient.exe --host 192.168.0.4 --ssl
I saw TestClient.exe hang on SSL_connect while running
testClient.testVoid();
Server-side call stack during the hang.
Client-side call stack during the hang. Clearly both sides are stuck reading!
Wireshark debug trace for the above-mentioned client-server communication.
Debug output from "openssl s_client" run against the Thrift server:
openssl s_client -connect 192.168.0.4:9090 -state -debug
Loading 'screen' into random state - done
CONNECTED(00000100)
SSL_connect:before/connect initialization
write to 0x1e2b5c0 [0x1e2bf50] (321 bytes => 321 (0x141))
0000 - 16 03 01 01 3c 01 00 01-38 03 03 52 dc 25 39 ad ....<...8..R.%9.
SSL_connect:SSLv2/v3 write client hello A
TestServer.exe --ssl --server-type nonblocking
TestClient.exe --ssl
I saw TestClient.exe failed on SSL_connect (10054) while running
testClient.testVoid();
Server stderr was saying
Thrift: Sat Jan 18 19:31:21 2014 TNonblockingServer: frame size too large (369295616 > 268435456)
from client <Host: ::1 Port: 22869>. Remote side not using TFramedTransport?
openssl.exe s_client -connect localhost:9090 -state -debug
Loading 'screen' into random state - done
CONNECTED(0000018C)
SSL_connect:before/connect initialization
write to 0x6db5c0 [0x6dbf50] (321 bytes => 321 (0x141))
0000 - 16 03 01 01 3c 01 00 01-38 03 03 52 db 4b 8a dd ....<...8..R.K..
SSL_connect:SSLv2/v3 write client hello A
read from 0x6db5c0 [0x6e14b0] (7 bytes => -1 (0xFFFFFFFF))
SSL_connect:error in SSLv2/v3 read server hello A
write:errno=10054
TestServer.exe --ssl --server-type nonblocking --transport framed
TestClient.exe --ssl --transport framed
Server stderr was saying
Thrift: Sat Jan 18 19:36:01 2014 TNonblockingServer: frame size too large (369295616 > 268435456) from client <Host: ::1 Port: 23087>. Remote side not using TFramedTransport?
By stepping through, I definitely confirmed that the test client was using the framed transport.
I think I know what may be going on and might have discovered the bug.
After debugging further, I saw that the virtual function createSocket is declared with an "int" parameter:
boost::shared_ptr<TSocket> createSocket(int socket);
https://github.com/apache/thrift/blob/master/lib/cpp/src/thrift/transport/TSSLServerSocket.h
https://github.com/apache/thrift/blob/master/lib/cpp/src/thrift/transport/TSSLServerSocket.cpp
However, the base class TServerSocket.h declares it with "THRIFT_SOCKET", which on Windows is ULONG_PTR:
virtual boost::shared_ptr<TSocket> createSocket(THRIFT_SOCKET client);
https://github.com/apache/thrift/blob/master/lib/cpp/src/thrift/transport/TServerSocket.h
Because the parameter types differ, the derived-class function does not override the base-class virtual, so the correct createSocket was not being called under the hood.
After making this change I was able to move forward, which I confirmed again with openssl s_client -connect localhost:9090 -state -debug.
I will send my patch to the Thrift dev list in case they would like to accept it.