AmazonMQ RabbitMQ shows less RAM than expected

I created an AmazonMQ RabbitMQ broker of the mq.m5.large instance size. According to the documentation it should have 8 GB of memory, but when I go to the process statistics of the node in the RabbitMQ Management Console, I see that the memory limit is 3 GB. What's the problem here? Is there anything wrong with vm_memory_calculation_strategy?
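For context: RabbitMQ's default memory high watermark is 0.4, i.e. the broker caps itself at 40% of detected RAM, and 8 GiB x 0.4 ≈ 3.2 GiB, which matches the ~3 GB limit being reported. A minimal shell sketch to verify this on a broker where you can run rabbitmqctl (on Amazon MQ the setting is managed by AWS, so treat this as illustrative):
# The reported limit should be ~40% of instance RAM (default
# vm_memory_high_watermark = 0.4), not the full 8 GiB.
rabbitmqctl status    # look for the memory high watermark and computed limit
# On a self-managed broker you could raise the threshold in rabbitmq.conf, e.g.:
# vm_memory_high_watermark.relative = 0.6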

Related

WSO2 ESB 5.0.0 Open Parallel Connection Limitation

We are running wso2esb-5.0.0 and intermittently see CPU usage climb gradually until the APIs slow down and the server finally stops responding. To make it work again we restart the ESB servers, which then return to their normal working state. Could anyone please let me know what the issue could be?
Is there a limitation such that the ESB can handle only some fixed number of API calls per second, or only some fixed number of open connections per second? Any input or suggestions would be helpful.
Configuration -
We have 2 ESBs and 2 MBs running in cluster mode. The issue is seen on both ESBs.
ESB: 16 GB RAM, 8 GB cache
We can see the ESTABLISHED connection count varying from 100 to 500 based on the number of incoming requests.
Thanks
There are limits to the number of requests that can be handled by the ESB server. This depends on a number of factors such as backend latency, mediation implementations, request payloads, etc.
For example, consider a scenario where you use a mediator such as the script mediator to process a large payload (which is not recommended). The transformation may take a considerable amount of time, leaving threads blocked at the script mediator. By default, the passthrough message processor thread pool size is 500, so this can result in a situation where no threads are left to process new requests, causing delayed responses and, in the worst case, an out-of-memory scenario.
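For reference, a hedged sketch of where that ceiling is set on ESB 5.0.0 (the file path and property names are from memory; verify them against your installation):
# Inspect the passthrough transport's worker pool limits:
grep worker_pool_size repository/conf/passthru-http.properties
# Typical defaults (illustrative):
#   worker_pool_size_core=400
#   worker_pool_size_max=500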
Therefore, with the available information we are unable to determine the exact cause of the issue. From the description above, however, we can suspect a problem with the available threads (given the slow responses). You can capture thread dumps and thread-usage data in your environment and analyze the possible cause. Please refer to the documentation [1], [2] for how to capture a thread dump and thread usage, and to [3] for how to analyze a thread dump.
Also, capture and analyze a heap dump in your environment.
[1]-https://docs.wso2.com/display/CLUSTER420/Troubleshooting+in+Production+Environments
[2]-https://gist.github.com/bsenduran/02e8bf024fcaaa7707a6bb2321e097a8
[3]-https://medium.com/@prabushi/analyse-thread-dump-with-process-instructions-c5490b97e2d1
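As a concrete illustration of the dump-capture steps described above, here is a minimal shell sketch (the pgrep pattern is an assumption about how the ESB process is named on your host):
# Find the ESB's JVM, then take two thread dumps a few seconds apart.
ESB_PID=$(pgrep -f wso2esb)
jstack -l "$ESB_PID" > "thread-dump-$(date +%s).txt"
sleep 10
jstack -l "$ESB_PID" > "thread-dump-$(date +%s).txt"
# Heap dump of live objects for memory analysis (can be a large file):
jmap -dump:live,format=b,file=heap.hprof "$ESB_PID"
Comparing the two thread dumps should show whether many threads are parked at the same mediator, as the answer suggests.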

Geth service gets killed on multiple concurrent requests of web3.personal.importRawKey function

I have an Ethereum PoA node set up on a VM with the configuration mentioned below. Using the NodeJS Web3 client, I am trying to create new wallets with the web3.personal.importRawKey function.
VM configuration: Azure VM - Standard D2s v3 (2 vCPUs, 8 GiB memory)
As part of our stress testing, I tried creating wallets concurrently for 5-10 users and it worked. But when I try to create 15-20 wallets concurrently, the geth process gets killed abruptly and the node stops. On a 1 vCPU, 4 GB memory VM, I was able to create at most 4 wallets concurrently, while on the 2 vCPU, 8 GiB memory VM I could process at most 10-12 concurrent users.
My concern is that the number of concurrent wallet creations seems very low relative to the RAM, and I can't understand why the geth process gets killed. One thing I observed was that CPU usage goes to 200% just before the geth process is killed.
How would I be able to handle at least 1000 concurrent requests to the above-mentioned function to create Blockchain wallets?
Any help will be appreciated.
Thanks in advance!
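One plausible reading, offered as an assumption rather than a diagnosis: each personal.importRawKey call runs geth's scrypt key derivation, which at its default strength needs on the order of 256 MiB of RAM, so N concurrent imports need roughly N x 256 MiB and can exhaust an 8 GiB VM, at which point the kernel OOM killer ends the geth process. A shell sketch to confirm and partially mitigate (the data directory is a placeholder):
# Confirm the kernel OOM killer ended geth:
dmesg | grep -iE "out of memory|killed process"
# Mitigation, at the cost of weaker key encryption: geth's --lightkdf flag
# reduces the scrypt parameters and therefore the per-import RAM:
geth --datadir /data/poa --lightkdf    # other flags elided
For the 1000-concurrent-requests goal, queueing the importRawKey calls client-side so only a bounded number run at once would be the more robust fix.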

Unexplainable changes to Ignite cluster membership

I am running a 12-node Ignite cluster; each JVM runs on its own VMware node. I am using ZooKeeper to keep these Ignite nodes in sync via TCP discovery. I have been seeing a lot of node failures in the ZooKeeper logs.
Although the Java processes are running, I don't know why some Ignite nodes leave the cluster with "node failed" kinds of errors. VMware uses vMotion to do something they call "migration"; I am assuming that is some kind of filesystem sync process between VMware nodes.
I am also seeing fairly frequent "dumping pending object" and "Failed to wait for partition map exchange" messages in the Ignite JVM logs.
My env setup is as follows:
Apache Ignite 1.9.0
RHEL 7.2 (Maipo) runs on each of the 12 nodes
Oracle JDK 1.8
Zookeeper 3.4.9
Please let me know your thoughts.
TIA
There are generally two possible reasons:
Memory issues. For example, if a node goes into a long GC pause, it can become unresponsive and therefore be removed from the topology. For more details, read here: https://apacheignite.readme.io/docs/jvm-and-system-tuning
Network connectivity issues. Check whether the network between your VMs is stable. You may also want to try increasing the failure detection timeout: https://apacheignite.readme.io/docs/cluster-config#failure-detection-timeout
VM migrations sometimes involve suspending the VM. While suspended, the VM has no clean way to communicate with the rest of the cluster and will appear to be down.
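To check the first possibility concretely, here is a sketch of enabling GC logging on each Ignite JVM (flag names are for the Oracle JDK 1.8 listed above; the log path is illustrative):
# Add to the JVM options of every Ignite node:
export JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/ignite-gc.log"
# After the next "node failed" event, look for multi-second pauses near it:
grep "Total time for which application threads were stopped" /var/log/ignite-gc.log
A stop-the-world pause (or a vMotion suspension) longer than the failure detection timeout looks identical to a dead node from the cluster's point of view.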

What could be causing a seemingly random AWS EC2 server crash? (Error establishing database connection)

To begin, I am running a WordPress site on an AWS EC2 Ubuntu micro instance. I have already confirmed that this is NOT an error with WordPress/MySQL.
Seemingly at random, the site will go down and I'll get the "Error establishing database connection" message. The server reports that it is running just fine, and rebooting usually fixes the issue; however, I'd like to find the cause so this can stop happening (for the past 2 weeks it has gone down almost every other day).
It's not a spike in traffic, or at least Google Analytics hasn't shown any spikes (the site averages about 300 visits per day).
What's the cause, and how can this be fixed?
Sounds like you might be running into the CPU throttling that is a limitation of t1.micro instances. If you use too many CPU cycles, you will be throttled.
See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts_micro_instances.html#available-cpu-resources-during-spikes
The next time this happens, I would check some general stats on the health of the instance. You can get a feel for its high-level health using the 'top' command (http://linuxaria.com/howto/understanding-the-top-command-on-linux?lang=en). Be sure to look at CPU and memory usage. You may find a process (pid) that is consuming a lot of resources and starving your app.
More likely, something within your application (how did you come to the conclusion that this is not a WordPress/MySQL issue?) is going out of control. Possibly a database connection is not being released? To see what your app is doing, find the process id (pid) for your app:
ps aux | grep "php"
and get a thread dump for that process (kill -3 <pid> produces a thread dump for Java processes). This will help you see where your application's threads are stuck (if they are).
Typically it's good practice to take two thread dumps a few seconds apart and compare the trends in both. If there is an issue in the application, you should see a lot of threads stuck at the same point.
You might also want to check what MySQL is seeing (https://dev.mysql.com/doc/refman/5.1/en/show-processlist.html).
mysql> SHOW FULL PROCESSLIST;
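One further check, offered as a guess beyond the original answer: on micro instances a classic cause of "Error establishing database connection" is the kernel OOM killer ending mysqld (a t1.micro has roughly 613 MiB of RAM and no swap by default). A minimal sketch to confirm and buy breathing room (the swap size is illustrative):
# Did the OOM killer take out mysqld?
grep -iE "out of memory|killed process" /var/log/syslog
# Stopgap: add a 1 GiB swap file so mysqld gets paged out instead of killed.
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile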
Hope this helps, let us know what you find!

ColdFusion server crashing on an hourly basis

I am facing a serious ColdFusion server crashing issue. I have many live sites on that server, so this is serious and urgent.
Following are the system specs:
Windows Server 2003 R2, Enterprise X64 Edition, Service Pack 2
ColdFusion (8,0,1,195765) Enterprise Edition
Following are the hardware specs:
Intel(R) Xeon(R) CPU E7320 @ 2.13 GHz
31.9 GB of RAM
It is crashing on an hourly basis. Can somebody help me find out the exact issue? I tried to find it through the ColdFusion log files, but I did not find anything there. Every time it crashes, I have to restart the ColdFusion services to get it back.
Edit1
When I looked at the runtime log file "ColdFusion-out165.log", I found the following errors:
error ROOT CAUSE:
java.lang.OutOfMemoryError: Java heap space
javax.servlet.ServletException: ROOT CAUSE:
java.lang.OutOfMemoryError: Java heap space
04/18 16:19:44 error ROOT CAUSE:
java.lang.OutOfMemoryError: GC overhead limit exceeded
javax.servlet.ServletException: ROOT CAUSE:
java.lang.OutOfMemoryError: GC overhead limit exceeded
Here are my current JVM settings:
Minimum JVM Heap Size (MB): 512
Maximum JVM Heap Size (MB): 1024
JVM Arguments
-server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib
Note: when I tried to increase the maximum JVM heap size to 1536 MB and restart the ColdFusion services, they would not start, and I got the following error:
"Windows could not start the ColdFusion MX Application Server on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 2."
Shouldn't I be able to set my maximum heap size to 1.8 GB, since I am using a 64-bit operating system?
How much memory you can give to your JVM is predicated on the bitness of your JVM, not your OS. Are you running a 64-bit CF install? That was an uncommon thing to do back in the CF8 days, so it's worth asking.
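A quick way to check the bitness, sketched with the CF8-era default install path (adjust to your server):
REM Run the JVM that ColdFusion itself uses and read the banner:
"C:\ColdFusion8\runtime\jre\bin\java.exe" -version
REM A 64-bit build prints "64-Bit Server VM". A 32-bit JVM on Windows
REM usually cannot allocate a contiguous heap much past ~1.2-1.5 GB, which
REM would explain the service refusing to start once the maximum heap
REM reached 1536 MB.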
Basically the error is stating that you're using more RAM than you have available (which you know). I'd have a look at how much stuff you're putting into the session and application scopes, and cull anything that's not necessary.
Objects in session scope are particularly bad: they have a far bigger footprint than one might think, and cause more trouble than they're worth.
I'd also look at how many inactive but not-yet-timed-out sessions you have, with a view to being far more aggressive with your session time-outs.
Have a look at your queries, get rid of any SELECT * you have, and cut them back to just the columns you need. Push data processing into the DB rather than doing it in CF.
Farm scheduled tasks off onto a different CF instance.
Are you doing anything with large files? Either reading and processing them, or serving them via <cfcontent>? That can chew memory very quickly.
Are all your function-local variables in CFCs properly VARed? Especially ones in CFCs which end up in shared scopes.
Do you accidentally have debugging switched on?
Are you making heavy use of custom tags or files called in with <cfmodule>? I have heard apocryphal stories of custom tags causing memory leaks.
Get hold of Mike Brunt or Charlie Arehart to have a look at your server config / app (they will obviously charge consultancy fees).
I will update this as I think of more things to look out for.
Turn on ColdFusion monitor in the administrator. Use it to observe behavior. Find long running processes and errors.
Also, make sure that memory monitoring is turned off in the ColdFusion Server Monitor. That will bring down a production server easily.
@Adil,
I had the same kind of issue, but instead of crashing, CPU usage went up to 100%. Not sure it's relevant to your issue, but it's at least worth a look.
See the question at the URL below:
Strange JRUN issue. JRUN eating up 50% of memory for every two hours
My blog entry on this:
http://www.thecfguy.com/post.cfm/strange-coldfusion-issue-jrun-eating-up-to-50-of-cpu
For me it was a high-traffic site storing client variables in the registry, which was what was going wrong.
Hope this helps.