Connection timeout expired in CentOS 7 without load

I have a VPS on which I run my Telegram bot. The bot handles about 10 messages a second. Normally my CPU load is between 50% and 70% and everything works fine. Sometimes, like now, the CPU load drops to 0% or 1% (as if there were nothing to do, while a queue of 10K messages is pending) and the server doesn't accept any new requests. Even my home page won't open when I enter my domain address (or it opens after about 60 seconds). What is causing this, and how can I solve it?
By the way, I contacted the company I bought the server from, and they said there are no connection issues. They're right, because at the same time I can open the netstat page or the Kloxo control panel without any problem.
Web server: Apache
PHP: 5.6
DNS server: BIND
Thanks

Related

Access to Amazon EC2 takes LONG time

I have a t2.small machine with a weird problem that cripples my site.
Fetching even a single small logo image can take anywhere from under a second to more than a minute. I just press F5 in the browser and it takes a varying amount of time to load a small PNG!
I have more than 100 CPU credits.
There are no errors in the Apache error log.
In my tests I access the instance by IP address to bypass the ELB, but some browser refreshes still take a random amount of time, from immediate to a minute; sometimes it returns a 504 error because the request took more than 60 seconds.
It is an Ubuntu machine that used to work fine, running Apache 2.4 with KeepAliveTimeout=5.
Any ideas?
Thanks
This answer assumes that the site "used to work ok":
1) Check for any changes you have made to the site's configuration. For example, you might have installed a misconfigured Apache module. If you have an old snapshot, restore it and compare it with what you have now.
2) Look in /var/log/syslog. If there are any mysterious messages that look like a potential hardware fault, do an EC2 stop and start to move the VM to a different physical host.
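Step 2 can be scripted. Below is a minimal sketch of the kind of scan you might run, demonstrated on a hypothetical sample log so it is self-contained; on a live instance you would point the grep at /var/log/syslog instead, and the patterns shown are just common examples, not an exhaustive list.

```shell
# Hypothetical sample standing in for /var/log/syslog
cat > /tmp/sample_syslog <<'EOF'
Jan  1 00:00:01 host kernel: usb 1-1: new high-speed USB device
Jan  1 00:00:02 host kernel: blk_update_request: I/O error, dev xvda, sector 2048
Jan  1 00:00:03 host CRON[123]: (root) CMD (run-parts /etc/cron.hourly)
EOF

# Count lines matching patterns that often accompany failing hardware
# or memory pressure (-i: case-insensitive, -c: count matching lines)
grep -icE 'i/o error|mce|segfault|oom-killer' /tmp/sample_syslog
```

If lines like these show up on the real instance, an EC2 stop/start (not a reboot) is what migrates the VM to a different physical host.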

How to resolve "421 Too many connections" in FileZilla without setting up Site Manager manually?

Response: 421 Too many connections (8) from this IP
Error: Could not connect to server
I was able to resolve my issue by changing the encryption setting in Site Manager, under the General tab, to only use plain FTP (insecure). Prior to that, no matter what settings I selected I would get a timeout error. I had been able to connect once or twice, but while transferring I would get a "421 Too many connections (8) from this IP" error.
Sometimes FTP connections do not terminate properly and must be disconnected manually. You can do this in cPanel under FTP Session Control.
Instructions:
Log in to your cPanel.
Scroll down the menu until you find "FTP Session Control" or "FTP Connections".
You will see a list of your connections.
Click the button in the DISCONNECT column as long as the status of the connection says IDLE. You might have to click the DISCONNECT button several times to remove all of the connections.
Just make sure that you clear out all of the connections listed, then try logging in again. I went in myself and cleared the connections; they should clear out when you disconnect from FTP.
Also, see a full article on the "421 Too many connections" error. It explains how to view, close, and limit simultaneous connections within FileZilla.
When you get the message "Response: 421 Too many connections (8) from this IP
Error: Could not connect to server"
in FileZilla, it means that multiple connections are open to your FTP account.
To avoid this problem you need to change the FileZilla settings.
Follow these steps:
Open FileZilla.
Open the Site Manager.
Click the Transfer Settings tab.
Check the box for "Limit number of simultaneous connections", and set the maximum number of connections to no more than your server's limit (8 in this case).
Click the OK button. Now when you are using FileZilla, it will not allow you to go over the limit of 8 simultaneous FTP connections.
Most probably there's a firewall or NAT router interfering with orderly connection shutdown, silently dropping connections instead of informing both peers.
Go to the Site Manager, click the Transfer Settings tab, then check "Limit number of simultaneous connections". Make sure the maximum number of connections is larger than 1 and smaller than 8.
The problem may be something like Romano mentioned. In my case, it happened while trying to connect to the FTP server after multiple "20 seconds of inactivity" timeouts. FileZilla may have kept all those attempts open as connections, so the server said "too many connections."
The solution I found was to unplug the network cable and reconnect it. That closed all the FTP connections "stuck" behind the scenes. I am in no way a pro in this field, so I can't explain exactly what happened, but it makes sense and it worked.
There may be other solutions as well.
Edit: It kept reporting the "20 second" timeout, so I tried different settings. The one that worked was the "Use plain FTP (insecure)" option. The host this website uses is Bluehost.
Dropping the internet connection cleared the problem: I power-cycled the Wi-Fi router and it all worked fine.
FileZilla 3.49.1
I know this is an old thread, but I thought I would provide the following update based on a more recent version of FileZilla.
This is how I ended up resolving the issue:
Edit > Settings > Transfers > Concurrent transfers >
Maximum simultaneous transfers: 8
Limit for concurrent downloads: 8
Limit for concurrent uploads: 8
I found that setting only the Maximum simultaneous transfers to 8 did not resolve the issue when uploading files. My previous settings were:
Edit > Settings > Transfers > Concurrent transfers >
Maximum simultaneous transfers: 8
Limit for concurrent downloads: 0 (no limit)
Limit for concurrent uploads: 0 (no limit)
My understanding of these settings is that the configuration above should resolve the issue, as others have stated, but it's possible there is a bug in, or I have misread, how this feature behaves in the version I am using.
I'm not sure what other test conditions, such as downloading and uploading simultaneously, would do, but I believe the most reliable settings would likely be the following:
Edit > Settings > Transfers > Concurrent transfers >
Maximum simultaneous transfers: 8
Limit for concurrent downloads: 4
Limit for concurrent uploads: 4
Let me know if I'm wrong; I'm just reporting what seemed to work for me and the assumptions I've made based on those experiences.
I was facing the same issue.
FileZilla was showing that a new version was available.
The error was resolved after installing the new update, version 3.22.2.2.
Just change your IP via a proxy or, if you are using a dynamic IP, restart your internet device. No need to change settings in FileZilla or on the hosting server. :)
You can increase "MaxClientsPerIP" in the pure-ftpd config file, located at "/usr/local/apps/pureftpd/etc/pure-ftpd.conf", to whatever limit you want. Don't forget to restart your pure-ftpd service afterwards.
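As a sketch of that edit, using a throwaway copy of the file so it can be run anywhere (the real path, per the answer above, is /usr/local/apps/pureftpd/etc/pure-ftpd.conf, and the sample directives below are illustrative):

```shell
# Throwaway copy of the config for illustration
cat > /tmp/pure-ftpd.conf <<'EOF'
MaxClientsNumber 50
MaxClientsPerIP 8
EOF

# Raise the per-IP connection cap from 8 to 20
sed -i 's/^MaxClientsPerIP .*/MaxClientsPerIP 20/' /tmp/pure-ftpd.conf
grep '^MaxClientsPerIP' /tmp/pure-ftpd.conf

# then restart so the change takes effect, e.g.:
# service pure-ftpd restart
```

The grep at the end just confirms the new value before you restart the service.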
It means that 8 users have connected to the same FTP account. Since the limit for simultaneous user connections is 8, anything beyond that is blocked by the FTP server. As a result, you can't connect to the FTP server.

CF Service continuously restarts every 25 seconds

After my laptop froze yesterday I rebooted, and now the ColdFusion 9 Application Server service starts and restarts endlessly! Although it's set to start automatically, it does not start on boot, so I start it manually. Then I look at the Windows Event Viewer, which reports:
The ColdFusion 9 Application Server service for the "coldfusion" server was started. PID is 5788.
That's good. But about 9 seconds later, Event Viewer reports:
The ColdFusion 9 Application Server service for the "coldfusion" server is restarting.
16 seconds later, the Event Viewer reports that the service has, in fact, started again. And 9 seconds after that, it reports that the service is restarting again.
Unless I manually stop the service, this cycle continues, with the CF service restarting itself every 25 seconds, give or take a second. Needless to say, I can't use ColdFusion. When I try to reach a page, I get error 500: There is no web application configured to service your request
I am running the developer edition of CF9 on Windows. The computer behaves normally otherwise.
[Edit]
The coldfusion-out.log includes this:
Server coldfusion ready (startup time: 9 seconds)
A fatal error has been detected by the Java Runtime Environment:
EXCEPTION_IN_PAGE_ERROR (0xc0000006) at pc=0x6d6e2424, pid=5456, tid=7724
JRE version: 6.0_17-b04
Java VM: Java HotSpot(TM) Server VM (14.3-b01 mixed mode windows-x86 )
Problematic frame:
C [nio.dll+0x2424]
An error report file with more information is saved as:
C:\ColdFusion9\runtime\bin\hs_err_pid5456.log
If you would like to submit a bug report, please visit:
http://java.sun.com/webapps/bugreport/crash.jsp
The crash happened outside the Java Virtual Machine in native code.
See problematic frame for where to report the bug.
The detailed error report, C:\ColdFusion9\runtime\bin\hs_err_pid5456.log, has a ton of information, but I don’t understand much of it. I’ll be happy to post it all if you think you might be able to make heads or tails of it.
In the event that the only solution is to reinstall CF, can you tell me where to find the config files? I know I will need jrun-web.xml (and I know where it is), but where, for example, are the datasource definitions found? I can’t seem to find any folder or file with the settings from the CF Admin. (The CF Admin won’t run, so I can’t view them that way.)

How would Zeus (load balancer) handle a closed connection?

We have run into a timeout problem. Our application is based on Jetty and uses Zeus for load balancing. The maxIdleTime is set to its default value of 30000 ms in jetty.xml. When a request/connection exceeds 30 seconds, the connection status changes to TIME_WAIT, but we get an HTTP 500 Internal Error on the browser side.
I suspect the HTTP 500 comes from Zeus, but I want to confirm this: how does Zeus handle the closed connection?
Or does the Jetty service send the 500 to Zeus? If so, how can I confirm this?
The surefire way to work out what is happening here is to sniff the packets between the load balancer and the Jetty server using something like Wireshark (formerly Ethereal) or tcpdump, and to use the network tooling in Firebug or the Chrome developer tools to see what is happening on the browser side of the connection. You can also turn on debug logging on the Jetty side to see what it is doing specifically.
Regardless, if you're hitting your timeout settings then you need to either increase those settings or decide on a proper strategy to deal with them, assuming you don't want that 500 error in the browser.
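If you decide to raise the timeout, the maxIdleTime mentioned in the question is set on the connector in jetty.xml. A sketch for a Jetty 6-era SelectChannelConnector follows; the class name, port, and element layout here are illustrative and vary by Jetty version, so check them against your own jetty.xml:

```xml
<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.nio.SelectChannelConnector">
      <Set name="port">8080</Set>
      <!-- default is 30000 ms; raise it if requests legitimately run longer -->
      <Set name="maxIdleTime">60000</Set>
    </New>
  </Arg>
</Call>
```

Whatever value you pick, keep the Zeus-side timeout at least as large as Jetty's, or the load balancer will still give up first.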

Winsock IOCP Server Stress Test Issue

I have a Winsock IOCP server written in C++ using TCP/IP connections. I have tested this server locally, using the loopback address with a client simulator, and have been able to get upwards of 60,000 clients, no sweat. The issue I am having is when I run the server at my house and the client simulator at a friend's house. Everything works fine until we hit around 3700 connections; after that, every call to connect() from the client side fails with a return of 10060 (the Winsock timed-out error). Last night this number was 3700, but it has been around 300 before, and we have also seen it near 1000. Whatever the number is, every time we try to simulate it, it fails right around that number (within 10 or so).
Both computers are running Windows 7 Ultimate. We have also both raised the TCP/IP registry setting MaxTcpConnections to around 16 million, and changed the MaxUserPort setting from its default of 5000 to 65K. No useful information is showing up in the Event Viewer. We also both watched Resource Monitor: we haven't even reached 1% network utilization, and CPU usage is close to 0% as well.
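For reference, the MaxUserPort value mentioned above lives under the Tcpip parameters key and can be set with a .reg file like the sketch below. The 0xfffe (65534) value is the documented maximum for MaxUserPort; note that a reboot is required before the change takes effect, which matters for the answer further down:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
; 0xfffe = 65534, the documented maximum; reboot required to apply
"MaxUserPort"=dword:0000fffe
```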
We just got off the phone with our ISP, and they say they are not limiting us in any way, but the guy seemed unsure and ended up hanging up on us after a 30-minute hold time...
We are trying everything to figure this out but cannot come up with a solution. I would be very grateful if someone out there could give us a hand with this issue.
P.S. Both computers are on Verizon FiOS with the same Verizon router. Another thing to note: the server is using WSAAccept and NOT AcceptEx. The client simulator spreads its connection attempts over many seconds, so I am pretty sure the connects are not getting backlogged. We have tried changing the speed at which the client simulator connects, and no matter what speed we set, it fails right around the same number each time.
UPDATE
We simulated 2 separate clients (on 2 separate machines) on network A, with the server running on network B. Each client was only able to make about half (roughly 1600) of the connections to the server. We were initially using a port below 1,000; this has been changed to above 50,000. The router log on both machines showed nothing. We are both using the Actiontec MI424WR Verizon FiOS router. This leads me to believe the problem is not with the client code. The server throws no errors and shows no unexpected behavior. Could this be an ISP/router issue?
UPDATE
The solution has been found. The Verizon router we were using (MI424WR revision C) cannot handle more than about 3700 connections; we confirmed this with a separate set of networks. Thanks for the help, guys!
Thanks
- Rick
I would have guessed that this was a MaxUserPort issue, but you say you've changed that. Did you reboot after changing it?
Run the test on the exact same computers on your local network (this takes the computers out of the equation).
The issue could be that one of your routers is not up to the job.