My SNMP service is using 3% of CPU and about 600 kbit/s of bandwidth.
Using "iftop", I can see my server sending data to an unknown IP on the HTTP port, but the destination IP does not respond to ping and has no HTTP port open.
myhostname.com.br:snmp => 144.168.68.43:http 520Kb 487Kb 487Kb
<= 40.2Kb 37.6Kb 37.6Kb
snmpd.conf is all defaults; I only use it for local MRTG.
It's CentOS 7 under OpenVZ. Any ideas?
Are these notifications/traps, or responses to Get requests?
Responses
Someone is polling your SNMP service, it's as simple as that.
If you don't want them to do that, firewall them (or it) off.
It's common for public services to be polled by random strangers, sometimes as a result of an automated probe but sometimes for explicitly malicious purposes. That's why we have firewalls.
Neither ICMP ping nor HTTP has anything to do with it; SNMP responses go back to the same address (IP and port) that the requests came from, and that source port is effectively arbitrary. It would seem the originator has specifically decided to use port 80 because it is commonly open and does not draw much attention. This in itself is frankly suspicious: unless there are strange technical constraints, an authentic and authorised SNMP manager would be using a port more conventionally allocated to SNMP traffic (like UDP 162).
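For example, assuming plain iptables on the CentOS 7 box and that only the local MRTG poller (running on the same host) ever needs SNMP, a minimal sketch would be:
# Allow SNMP queries only from loopback (local MRTG), drop everything else
iptables -A INPUT -p udp --dport 161 -i lo -j ACCEPT
iptables -A INPUT -p udp --dport 161 -j DROP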
Traps (notifications)
If there is no evidence of incoming requests triggering this traffic, then your SNMP Agent is doing this on its own. Did you configure it to do so? If not, you may have been hacked and somebody else has configured it this way instead.
You can still firewall it off (the firewall goes both ways!) though you really should be checking into what happened.
Otherwise
If there are no incoming requests, and your SNMP agent is not configured to send notifications to 144.168.68.43, then do you have another SNMP agent you're not aware of? Some piece of software you installed that has SNMP support? Otherwise you're really in trouble.
I think I found the problem. This IP is related to the MIB; something is reading it from an unreachable IP.
If I block it with iptables, the journal starts logging this every 2-3 seconds:
Nov 19 10:33:44 loki snmpd[18942]: send response: Failure in sendto
Nov 19 10:33:44 loki snmpd[18942]: -- SNMPv2-MIB::sysDescr.0
Nov 19 10:33:44 loki snmpd[18942]: -- SNMPv2-MIB::sysORDescr.1
Nov 19 10:33:44 loki snmpd[18942]: -- SNMPv2-MIB::sysObjectID.0
Nov 19 10:33:44 loki snmpd[18942]: -- SNMPv2-MIB::sysORDescr.2
Nov 19 10:33:44 loki snmpd[18942]: -- DISMAN-EVENT-MIB::sysUpTimeInstance
Nov 19 10:33:44 loki snmpd[18942]: -- SNMPv2-MIB::sysORDescr.3
Nov 19 10:33:44 loki snmpd[18942]: -- SNMPv2-MIB::sysContact.0
(....)
I just don't know how to fix it... at least the server is not compromised.
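For now, since only local MRTG needs it, my plan (assuming this is the stock net-snmp snmpd that ships with CentOS 7) is to bind the agent to loopback in /etc/snmp/snmpd.conf, so nothing outside the box can poll it at all:
# Listen on loopback only; local MRTG keeps working, external pollers can no longer reach the agent
agentAddress udp:127.0.0.1:161
Then restart the service with "systemctl restart snmpd".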
Related
I have a problem with a custom (1 vCPU, 2 GB memory) compute instance running Apache and 3 Python scripts which essentially wait for messages, run some SQL queries, and create reports. Once in a while, the entire instance becomes unresponsive to Apache, SSH, and even the serial console. It looks like the entire instance is frozen. The only solution is to log in to my Google Cloud account and restart the instance.
I have checked the disk space, because Google suggested on one of their pages that it might lead to the instance freezing, but I still have 6 GB of available disk space, so that shouldn't be the issue.
I have added logs from "Serial port 1 (console)" in case it might help with diagnosing the issue.
Could someone please assist me with finding out why this is happening? Thank you in advance.
Serial console logs output:
https://pastebin.com/raw/Z9gADmCn
Nov 18 19:14:24 web-server systemd[1]: Stopping System Logging Service...
Nov 18 19:14:24 web-server systemd[1]: Stopped System Logging Service.
Nov 18 19:14:24 web-server systemd[1]: Starting System Logging Service...
Nov 18 19:14:24 web-server systemd[1]: Started System Logging Service.
Nov 18 19:14:25 web-server dhclient[558]: bound to 10.166.0.10 -- renewal in 1434 seconds.
Nov 18 19:14:25 web-server ifup[516]: bound to 10.166.0.10 -- renewal in 1434 seconds.
This question would be better asked on Server Fault to get attention from sysadmins instead of devs here.
Before you use the suggestion above in Kolban's comment, I'd recommend checking some simple things.
1. Check if the instance was under maintenance (on the instance details page you can find your maintenance window).
2. Also on the instance details page, you should be able to check CPU and memory utilization and see if there was a spike at the time of the freeze. That should put you in the right direction.
3. Check system/app logs: I'd recommend checking /var/log/syslog and, if applicable, /var/log/nginx/error.log for example; a quick sketch follows below.
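As a rough example of that last check (log paths differ by distro, and these are only guesses at where to look), memory pressure is a common cause of this kind of freeze on a 2 GB instance, so it's worth grepping for OOM-killer activity:
# Look for OOM-killer activity and kernel errors around the time of the freeze
grep -iE 'out of memory|oom-killer' /var/log/syslog /var/log/kern.log
# Apache's error log may also show requests piling up just before the hang
tail -n 100 /var/log/apache2/error.log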
I faced the same issue in one of my Google Compute Engine instances, where it would freeze some time after starting.
When I reset the instance, it started working fine again.
The issue I found was too little CPU/RAM on the instance: the processes on it required more CPU/RAM. When I changed the machine from 1 vCPU / 3.75 GB RAM to 4 vCPUs / 16 GB RAM, it started working fine permanently.
At the core of this issue, the machine was created from a disk snapshot on which different applications like Tomcat and Postgres were configured for high CPU/memory. So it looks like when the machine became fully operational, it had too little memory for the required processes, which led to slowness and freezes in the instance.
I am trying to create an audio streaming server in Java. There are several protocols for streaming media, such as RTP, and I'm a little confused by all the protocols.
What are the differences between RTP and Shoutcast? Do they use TCP, UDP or HTTP? Does anyone have a clear explanation on this point?
SHOUTcast and Icecast use a client protocol very similar to HTTP. (In fact, Icecast is compliant with HTTP as spec'ed in RFC2616, and most HTTP clients work with SHOUTcast without modification.) A request comes in for a stream, and they return the stream audio data in the same way as an HTTP response, along with some extra metadata.
GET /radioreddit/main_mp3_128k HTTP/1.1
HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: X-Requested-With
Server: AudioPump Server/0.8.1 (http://audiopump.co)
Content-Type: audio/mpeg
Cache-Control: no-cache
Pragma: no-cache
Expires: Sat, 15 Aug 2009 22:00:00 GMT
Connection: close
icy-genre: Indie,Rock,Talk
icy-name: Radio Reddit - Main
icy-pub: 1
icy-url: http://radioreddit.com
Date: Tue, 05 Aug 2014 13:40:55 GMT
In this example, the response is purely HTTP. If this were a SHOUTcast server, instead of seeing HTTP/1.1 200 OK in the status line, you would see ICY 200 OK. The headers that start with icy- are those that describe the station. Sometimes there are more, sometimes they don't exist at all. A client would be able to play MP3 data from this station as-is.
Now, sometimes the client will request metadata to be sent in the stream. This allows the player to tell you what is playing. A client does this by sending an icy-metadata: 1 header. The server will respond with icy-metaint: 8192, which means that every 8192 bytes there will be a chunk of metadata. You can read more about the format of that metadata in a previous answer.
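To make that interleaving concrete, here is a rough client-side sketch in Java (not production code; the class and method names are my own) of how you could strip the metadata out of the stream, assuming you have already read the icy-metaint value from the response headers:
import java.io.DataInputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

class IcyMetadataReader {
    // Reads one audio block plus its trailing metadata chunk from a SHOUTcast-style stream.
    // metaInt is the value of the icy-metaint response header; in wraps the response body.
    static String readBlock(DataInputStream in, int metaInt, OutputStream audioOut) throws IOException {
        byte[] audio = new byte[metaInt];
        in.readFully(audio);                      // metaInt bytes of raw audio data
        audioOut.write(audio);                    // hand the audio to the decoder/player

        int lengthByte = in.readUnsignedByte();   // metadata length, in 16-byte units
        if (lengthByte == 0) {
            return null;                          // no metadata in this block
        }
        byte[] meta = new byte[lengthByte * 16];
        in.readFully(meta);                       // e.g. "StreamTitle='Artist - Song';" padded with NULs
        return new String(meta, StandardCharsets.ISO_8859_1).trim();
    }
}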
I should also note that this form of streaming is often called HTTP Progressive Streaming. To the client, it's no different than playing a media file as it is being downloaded... except that the file is infinite in size.
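If you go the HTTP route in Java, a bare-bones sketch looks something like the following. This is only an illustration, not how Icecast or SHOUTcast actually do it: one hard-coded client, no request parsing, no error handling, and "station.mp3" and port 8000 are placeholders.
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TinyStreamServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8000)) {
            Socket client = server.accept();              // one client only, for illustration
            OutputStream out = client.getOutputStream();
            // (A real server would read and parse the request line and headers here; skipped for brevity.)

            // Minimal HTTP-style response; a SHOUTcast server would send "ICY 200 OK" instead.
            String headers = "HTTP/1.1 200 OK\r\n"
                    + "Content-Type: audio/mpeg\r\n"
                    + "icy-name: My Test Station\r\n"
                    + "Connection: close\r\n"
                    + "\r\n";
            out.write(headers.getBytes(StandardCharsets.ISO_8859_1));

            // Stream pre-encoded MP3 bytes; a real server would read from a live encoder,
            // throttle writes to the stream bitrate, and fan the data out to every listener.
            try (InputStream audio = new FileInputStream("station.mp3")) {
                byte[] buf = new byte[4096];
                int n;
                while ((n = audio.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
            client.close();
        }
    }
}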
Now, RTP is a protocol used in conjunction with RTSP: RTP carries the actual media data, while RTSP is used for control. These protocols are much more complicated, as they are meant for true streaming. That is, if the client can't handle the bandwidth, it can drop down to a lower bitrate. If the client needs to control the remote end (seek, pause, and so on), that can be done as well. This complexity comes at a price: the servers are complicated to implement, and there isn't great client compatibility.
A couple of years back, when I started to create my own streaming server, I asked myself the same question: do I implement a real streaming protocol that has not-so-great client support and will take a long time to figure out, or do I implement a protocol that everything can play and that is easy to build for? I went the HTTP route.
These days, you should also consider HLS, which is nothing more than chunking your source stream into pieces at several bitrates and serving them over HTTP. If the client wants to change bitrates due to lack of bandwidth, it can just start requesting the lower-bitrate chunks. HLS doesn't have great client support either, but it is getting better. I suspect it will surpass all others for media delivery on websites in time.
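To give a feel for what that chunking looks like, the client first fetches a master playlist, which is just a text file pointing at one media playlist per bitrate (the URLs below are made up):
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=128000,CODECS="mp4a.40.2"
audio_128k/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=64000,CODECS="mp4a.40.2"
audio_64k/playlist.m3u8
Each of those media playlists then lists the individual audio chunks, and the client simply downloads them over plain HTTP.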
The comparison between RTP (a protocol) and Shoutcast (an application) is not applicable.
RTP is the Internet standard for transporting multimedia over IP.
It is an application layer protocol; it typically runs over UDP but can also be carried over TCP, depending on the nature of the communication. It also needs to be implemented by an application before it can be of any use, much like HTTP has to be implemented by a browser so that a user can access web pages. When implemented by an application, it is used for specifying the media format and numerous other characteristics, with basic control information handled via RTSP.
Shoutcast is a streaming application created by Nullsoft (the Winamp authors). It uses its own HTTP-like protocol for media transport, running over TCP.
There are numerous other streaming protocols, so a Google search on the subject is, as usual, a good idea. Wikipedia also hosts a comparison of streaming media systems.
During the last week I spent all my time trying to access the MCF in offline mode. I'm working behind a company network (proxy), and MCF tries to do things that conflict with the local network.
I've followed several different tutorials, such as 1. Working offline with MCF and 2. Working offline with MCF, but the result stays the same, even if I change all sorts of configuration on my Ubuntu.
Trying to set up the target.
vm target http://api.mycloud.me
HTTP exception: Errno::ECONNREFUSED:Connection refused - connect(2)
The MCF console shows the following information:
Identity: mycloud.me (ok)
Admin: admin@mycloud.me
IP address 10.0.x.x (network up / offline)
When I ping the IP address, I get a positive response.
PING 10.0.x.x (10.0.x.x) 56(84) bytes of data.
64 bytes from 10.0.x.x: icmp_req=1 ttl=62 time=1.06 ms
64 bytes from 10.0.x.x: icmp_req=2 ttl=62 time=0.896 ms
64 bytes from 10.0.x.x: icmp_req=3 ttl=62 time=0.880 ms
--- 10.0.2.15 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.880/0.948/1.069/0.089 ms
But if I try to telnet to port 80 or SSH in, I get a connection refused error.
ssh: connect to host mycloud.me port 22: Connection refused
I don't know what I need to do to fix this; if anyone has a tip that helps me figure out a solution, I'll be very thankful.
Cheers!
OK dudes! I fixed it!
So, after some trouble understanding what was happening, I could finally connect to the Micro Cloud. I'm still validating the information from the two tutorials above, because there could be some conflicting data.
I didn't test whether it is necessary to set a nameserver for dhclient, but the second tutorial seems to be more reliable. Just one tip: run the ssh -L tunnel in a separate terminal and leave it open. This wasn't so clear for people like me who are not used to working with network administration.
Thanks for the help.
Given the assigned IP address, it looks like you are using bridged networking. Have you tried changing the VM configuration to use NAT instead?
This will use an interface exclusive to your local machine and the VM and shouldn't be affected by your corporate network.
Currently, we are running into a timeout issue. Our application is based on Jetty and uses Zeus for load balancing. maxIdleTime is set to its default value of 30000 in jetty.xml. When a request/connection exceeds 30 seconds, the connection status changes to TIME_WAIT, but we get an HTTP 500 Internal Error on the browser side.
I guess the HTTP 500 error comes from Zeus but I want to confirm this: how would Zeus handle the closed connection?
OR
Does the Jetty service send the 500 to Zeus? If so, how can I confirm this?
The surefire way to work out what is happening here is to sniff the packets with something like Ethereal/Wireshark or tcpdump between the load balancer and the Jetty server; you can also use the network tooling in something like Firebug or the Chrome developer tools to see what is happening on the browser side of the connection. You can also turn on debug logging on the Jetty side to see what it is doing specifically.
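For example, a capture on the Jetty host that you can open later in Wireshark might look like this (the interface name, Zeus address and Jetty port are placeholders):
# Capture full packets between the Zeus load balancer and Jetty for later inspection
tcpdump -i eth0 -s 0 -w zeus-jetty.pcap host 10.0.0.5 and port 8080
# Read the capture back and watch for resets or 500 responses around the 30s idle timeout
tcpdump -r zeus-jetty.pcap -A | less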
Regardless, if you're hitting your timeout settings then you need to either increase those settings or decide on a proper strategy for dealing with them, assuming you don't want that 500 error in the browser.
OK, so I created a very simple WAR which serves a simple Hello World .jsp. With all the HTML it's about 200 bytes.
I deployed it on my server running Jetty 7.5.x and JDK 6u27.
On my client computer I created a simple JMeter test plan with: Thread Group, HTTP Request, Response Assertion, Summary Report. The client is also running JDK 6u27.
I set up the thread group with 5 threads running for 60 secs and got 5800 requests/sec.
Then I set up 10 threads and got 6800 requests/sec.
The moment I disable Keep-Alive in JMeter on the HTTP Request sampler, I get lots of big pauses, on the client side I suppose; it doesn't seem the server is receiving anything. I get fewer pauses (or barely any) at 5 threads, but at 10 threads it hangs pretty much all the time.
What does this mean exactly?
Keep in mind I'm technically creating a REST service, and I was getting the same issue there, so I thought maybe I was doing something funky in my service, until I figured out it's a Keep-Alive issue, as it happens even on a static web app. So in reality it will be 1 client request, 1 server response; the client will not be keeping the connection open.
My guess is that since Keep-Alive is what allows HTTP Connection (and thereby, socket) reuse, you are running out of available ephemeral port numbers -- there are only 64k port numbers, and since connections must have unique client/server port combos (and server port is fixed), you can quickly go through those. Now, if ports were reusable as soon as connection was closed by one side, it would not matter: however, as per TCP spec, both sides MUST wait for configurable amount of time (default: 2 minutes) until reuse is considered safe.
For more details you can read a TCP book (like "Stevens book"); above is a simplification.
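If you want to confirm that theory, a quick check on the client machine while the test is stalling is to count sockets stuck in TIME_WAIT (the ss command is Linux-specific; use the netstat variant elsewhere):
# Count client-side sockets still waiting out the TIME_WAIT period
ss -tan state time-wait | wc -l
# Roughly equivalent with netstat
netstat -an | grep -c TIME_WAIT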