How much memory does RethinkDB need? - digital-ocean

I have a Horizon app running on DigitalOcean on a 1 GB RAM machine.
I'm trying to set permissions with:
hz set-schema ./.hz/schema.toml
but I get the following error:
error: rethinkdb stderr: warn: Cache size does not leave much memory for server and query overhead (available memory: 779 MB).
I tried the "cache-size" option in the RethinkDB config file, but I still get the same error (I restarted the service).
Do I need to enlarge my DigitalOcean machine, or can I do something with the existing one?

It would be better to give RethinkDB more memory (I usually give it at least 2 GB), but if your load is light, that size should be fine. I think the recommendation is to set the cache size to about half of your available memory, but you can also lower it if that turns out to be unstable.
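For reference, a minimal sketch of what that might look like on a 1 GB droplet, assuming the Debian/Ubuntu package layout (the config path and instance name below are just placeholders for your setup):

# /etc/rethinkdb/instances.d/default.conf
# cap the page cache at roughly half of the ~779 MB the server reports as available
cache-size=384

# restart so the new value is picked up
sudo service rethinkdb restart

Lower cache-size further if the server still feels memory-starved; the message you're seeing is only a warning, not a fatal error.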

Related

How to minimize Google Cloud launch latency

I have a persistent server that unpredictably receives new data from users; each request needs about 10 GPU instances to crank on the problem for about 5 minutes, and then I send the answer back to the users. The server itself is a cheap, always-on, single-CPU Google Cloud instance. When a user request comes in, my code launches my 10 created-but-stopped Google Cloud GPU instances with
gcloud compute instances start (instance list)
In the rare case that the stopped instances don't exist (sometimes they get wiped), that's detected and they're recreated with
gcloud beta compute instances create (...)
This system all works fine. My only complaint is that even with created-but-stopped instances, the launch time before my GPU code finally starts to run is about 5 minutes. Most of this is just the time for the instance itself to launch its Ubuntu host and call my code; the delay between Ubuntu running and the GPU code starting is only about 10 seconds.
How can I reduce this 5-minute delay? I imagine most of it comes from Google having to copy over the 4 GB of instance data to the target machine, but the startup time of (vanilla) Ubuntu probably adds about 1 more minute. I'm not even sure whether I could quantify these two numbers independently; I can only measure the combined 3-7 minute delay from launch until my code starts responding.
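One way I could probably separate the two numbers is something like this (a rough sketch; gpu-worker-1 and the zone are placeholders for my real instances):

time gcloud compute instances start gpu-worker-1 --zone us-central1-b
gcloud compute instances get-serial-port-output gpu-worker-1 --zone us-central1-b | tail -n 40

The first line times how long GCE takes before it reports the instance as RUNNING; the serial-port output then shows how long the Ubuntu guest itself took to boot and reach my startup code.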
I don't think the Ubuntu OS startup time is the major contributor to the launch latency, since I timed an actual machine with the same Ubuntu and same GPU on my desk from power-on, and it began running my GPU code in 46 seconds.
My goal is to get results back to my users as soon as possible, and that 5 minute startup delay is a bottleneck.
Would making a smaller instance SIZE of say 2GB help? What else can I do to reduce the latency?
2GB is large. That's a heckuva big image. You should be able to cut that down to 100MB, perhaps using Alpine instead of Ubuntu.
Copying 4GB of data is also less than ideal. Given that, I suspect the solution will be more of an architecture change than a code change.
But if you want to take a whack at everything that is NOT about your 4GB of data, there is a way to prepare a custom image for your VMs. If you can build a slim custom image, that will help (a rough sketch of the image-creation step follows the links below).
There are good resources for learning more; the two I would start with are:
- Improve GCE Boot Times with Custom Images
- Three steps to Compute Engine startup-time bliss: Google Cloud Performance Atlas
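If it helps, the image-baking step is roughly this (a sketch only; the disk, image, and instance names are placeholders):

gcloud compute images create gpu-worker-image \
    --source-disk gpu-worker-template-disk \
    --source-disk-zone us-central1-b

gcloud beta compute instances create gpu-worker-1 \
    --image gpu-worker-image \
    --zone us-central1-b

Bake the slimmed-down OS plus your GPU code into that image once, and each subsequent start/create has far less to copy and boot.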

Sudden 503s on OpenShift Django. Need help debugging

Today my Django 1.4 app on OpenShift started throwing 503 errors 99% of the time (yeah, ~1% of the time it loads fine). htop doesn't show any huge workload and the logs don't show any errors.
Any recommendations on how to debug this?
./manage.py shell works fine on the server, and even the PostgreSQL 9.2 db is fine.
I know you mentioned that there's nothing in the logs, but I would still try tailing all the logs with rhc tail <yourApp> and watching in real time for any clues when the 503s are returned.
To check whether your gear is restarting due to insufficient memory, I recommend this.
Having your ssh connection closed unexpectedly may be another indicator of unexpected gear restarts.
Note that htop displays only your own tasks, which use only a small share of resources in the context of the whole node; using e.g. 3% of 16 GB of memory (about 480 MB) may already be nearing the small gear's limit of 512 MB.
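For example (a sketch, with myapp standing in for your actual app name), you can capture the log stream and grep it for restart or out-of-memory hints after you reproduce a 503:

rhc tail myapp 2>&1 | tee 503-debug.log
grep -iE 'oom|killed|memory|restart' 503-debug.log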

ColdFusion server crashing on hourly basis

I am facing a serious ColdFusion Server crashing issue. I have many live sites on that server, so this is serious and urgent.
Following are the system specs:
Windows Server 2003 R2, Enterprise X64 Edition, Service Pack 2
ColdFusion (8,0,1,195765) Enterprise Edition
Following are the hardware specs:
Intel(R) Xeon(R) CPU E7320 @ 2.13 GHz, 2.13 GHz
31.9 GB of RAM
It is crashing on an hourly basis. Can somebody help me find the exact issue? I tried to find it in the ColdFusion log files, but I don't see anything there. Every time it crashes, I have to restart the ColdFusion services to get it back.
Edit 1
When I looked at the runtime log file "ColdFusion-out165.log", I found the following errors:
error ROOT CAUSE:
java.lang.OutOfMemoryError: Java heap space
javax.servlet.ServletException: ROOT CAUSE:
java.lang.OutOfMemoryError: Java heap space
04/18 16:19:44 error ROOT CAUSE:
java.lang.OutOfMemoryError: GC overhead limit exceeded
javax.servlet.ServletException: ROOT CAUSE:
java.lang.OutOfMemoryError: GC overhead limit exceeded
Here are my current JVM settings:
Minimum JVM Heap Size (MB): 512
Maximum JVM Heap Size (MB): 1024
JVM Arguments
-server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib
Note: when I tried to increase the Maximum JVM Heap Size to 1536 and restart the ColdFusion services, they would not start and gave the following error:
"Windows could not start the ColdFusion MX Application Server on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 2."
Shouldn't I be able to set my maximum heap size to 1.8 GB, since I am using a 64-bit operating system?
How much memory you can give to your JVM is predicated on the bitness of your JVM, not your OS. Are you running a 64-bit CF install? It was an uncommon thing to do back in the CF8 days, so it's worth asking.
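A quick way to check is to run the JRE that CF itself uses (the path below assumes a default CF8 server-configuration install; adjust it to your own layout):

"C:\ColdFusion8\runtime\jre\bin\java" -version

A 64-bit JVM will report something like "64-Bit Server VM" in that output. If it's 32-bit, you generally can't push the maximum heap much past roughly 1.2-1.5 GB on 32-bit Windows, which would also explain the service refusing to start when you set it to 1536 MB.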
Basically the error is stating you're using too much RAM for how much you have available (which you know). I'd be having a look at how much stuff you're putting into session and application scope, and culling back stuff that's not necessary.
Objects in session scope are particularly bad: they have a far bigger footprint than one might think, and cause more trouble than they're worth.
I'd also look at how many inactive but not-yet-timed-out sessions you have, with a view to being far more aggressive with your session time-outs.
Have a look at your queries and get rid of any SELECT * you have; cut them back to just the columns you need. Push data processing back into the DB rather than doing it in CF.
Farm scheduled tasks off onto a different CF instance.
Are you doing anything with large files? Either reading and processing them, or serving them via <cfcontent>? That can chew memory very quickly.
Are all your function-local variables in CFCs properly VARed? Especially ones in CFCs which end up in shared scopes.
Do you accidentally have debugging switched on?
Are you making heavy use of custom tags or files called in with <cfmodule>? I have heard apocryphal stories of custom tags causing memory leaks.
Get hold of Mike Brunt or Charlie Arehart to have a look at your server config / app (they will obviously charge consultancy fees).
I will update this as I think of more things to look out for.
Turn on ColdFusion monitor in the administrator. Use it to observe behavior. Find long running processes and errors.
Also, make sure that memory monitoring is turned off in the ColdFusion Server Monitor. That will bring down a production server easily.
@Adil,
I had the same kind of issue, but instead of crashing, CPU usage was going up to 100%. I'm not sure it's relevant to your issue, but it's at least worth a look.
See the question at the URL below:
Strange JRUN issue. JRUN eating up 50% of memory for every two hours
My blog entry about this:
http://www.thecfguy.com/post.cfm/strange-coldfusion-issue-jrun-eating-up-to-50-of-cpu
For me it was a high-traffic site storing client variables in the registry, which was making things go wrong.
Hope this helps.

VisualVM and Coldfusion 8: why no memory sampling available?

We are trying to use VisualVM to track down some memory leakage in CF8; however, we cannot get the tool to work 100%. Basically, everything comes up except the memory sampling, which says "JVM is not supported".
However, all the other features work (we can do CPU sampling, just not memory). We found it kind of weird that we can do everything else except the memory stuff, so we're wondering whether we need to specify another JVM argument to allow this.
Some other info:
We are connecting locally via 127.0.0.1 or localhost.
I installed the Visual GC plugin, and it cannot connect either.
VisualVM and JRun/CF8 are both using the same Java version (1.6.0_31); however, they are not run from the same location (maybe this matters). VisualVM uses the installed JDK, whereas JRun/CF8 uses the binaries that we copied locally into the CF8 installation folder.
We installed another plugin that shows JVM properties, and it says that the JVM is not "attachable". We don't know what that means, but wanted to mention it.
Any help with this would be greatly appreciated. If we can just get that memory sampling, I think we can get on top of our performance issues that have plagued us here recently. Thanks in advance!
EDIT:
Also, just checked, and JRUN is being started under "administrator", whereas I am launching VisualVM under a different user. Maybe this is relevant?
Yes, it is relevant that you are running VisualVM under a different user. Memory sampling uses the Attach API, which only works if the monitored application and VisualVM run as the same user. This is also the reason the JVM-properties plugin reports that your application is not attachable. If you run VisualVM as "administrator", it will automatically detect your ColdFusion 8 application and the memory sampler will work.
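On Windows that can be as simple as launching VisualVM from a command prompt under that account (a sketch; adjust the path to wherever your JDK's jvisualvm.exe actually lives):

runas /user:administrator "C:\Program Files\Java\jdk1.6.0_31\bin\jvisualvm.exe"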

Need help to convert PSD to PNG in GraphicsMagick

I am trying to convert a PSD to PNG in GraphicsMagick using the following command:
#gm convert file.psd -flatten file.png
Everything works fine while the web application is on the staging server.
When I move the web application to the production server, some of the PSD files produce the following error message:
In GraphicsMagick 1.3.12 : gm convert: Too much image data in file.
or this one:
In GraphicsMagick 1.2.10 : gm convert: Memory allocation failed (unable to allocate cache info) [Cannot allocate memory].
The most ridiculous part is that it works on the staging server but not on the production server.
The staging servers are FreeBSD in VMware, and the production servers are physical machines.
There is very little documentation on this that I could find on the Internet; only a thread from a few months ago here:
http://sourceforge.net/mailarchive/forum.php?thread_name=20110301013714.GC15521%40node99.net&forum_name=graphicsmagick-help
mentions the same problem, but it got no reply.
I am wondering if I can get help here. Or maybe I am wrong and should choose ImageMagick instead.
It is likely the production server has a lower per-process memory limit than your staging VM. That is, the limitation is probably imposed by software rather than the actual hardware.
The limits can be per user (or, rather, per user-class) as well as system-wide (using sysctl). Try running
% sysctl -Aa|fgrep kern.max
and compare the outputs between the two servers (kern.maxdsiz and kern.maxssiz are of particular interest). Also, try simply running limits as the web-server user:
% su # become root
% su -m www # become www
% limits
and, again, compare the output.
Finally, could it be that your production server is simply using more memory (for other things, such as serving other content), whereas the staging system is only processing a single file conversion and thus is not hitting a limit?
I've run up against this exact problem. It appears to be a limitation of GraphicsMagick: if you use the -debug all switch, you will see that the PSD decoder is trying to allocate more memory than is available per process.
For instance, I had a 6 MB PSD that I wanted to thumbnail into a JPG. GM was unable to do it because it was trying to inflate the whole thing at once and would have needed 64 GB of RAM (this is on my development machine, which only has 8 GB).
I ran the same command through ImageMagick and it worked with no problem (on my development machine).
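For anyone hitting the same thing, the debug run looks roughly like this, and GM's -limit option can raise the resource ceilings it works within (the values below are only placeholders, and whether raising them gets you past the per-process limit will depend on your build; check your GM documentation for the exact types and syntax it accepts):

gm convert -debug all file.psd -flatten file.png 2>&1 | grep -iE 'alloc|cache|memory'
gm convert -limit memory 512MB -limit map 1GB file.psd -flatten file.png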