I am trying to convert PSD to PNG in GraphicsMagick by using the following command:
# gm convert file.psd -flatten file.png
Everything works fine while the web application is on the staging server.
When I move the web application to a production server, some of the PSD files produce the following error message:
In GraphicsMagick 1.3.12 : gm convert: Too much image data in file.
or this one:
In GraphicsMagick 1.2.10 : gm convert: Memory allocation failed (unable to allocate cache info) [Cannot allocate memory].
The most ridiculous part is that it works on the staging server but does not work on the production server.
The staging servers are FreeBSD in VMware, and the production servers are physical servers.
I found very little documentation on the Internet; only a thread from a few months ago here:
http://sourceforge.net/mailarchive/forum.php?thread_name=20110301013714.GC15521%40node99.net&forum_name=graphicsmagick-help
mentions the same problem, but has no reply.
I am wondering if I can get help here. Or maybe I am wrong and I should choose ImageMagick.
It is likely the production server has a lower per-process memory limit than your staging VM. That is, the limitation is probably imposed by software rather than by the actual hardware.
The limits can be per user (or, rather, per user-class) as well as system-wide (using sysctl). Try running
% sysctl -Aa|fgrep kern.max
and compare the outputs between the two servers (kern.maxdsiz and kern.maxssiz are of particular interest). Also, simply try running limits as the web-server user:
% su # become root
% su -m www # become www
% limits
and, again, compare the output.
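If the comparison shows tighter limits on the production box, raising them is a FreeBSD-tuning exercise rather than a GraphicsMagick one. A rough sketch only, with purely illustrative values: kern.maxdsiz and kern.maxssiz are loader tunables, so they would go into /boot/loader.conf and take effect after a reboot:
kern.maxdsiz="1610612736"  # max data segment size per process, in bytes
kern.maxssiz="134217728"   # max stack size per process, in bytes
Per-class limits (datasize, stacksize, etc. for the class your www user belongs to) live in /etc/login.conf; rebuild the capability database after editing it:
% cap_mkdb /etc/login.conf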
Finally, could it be that your production server is simply using more memory (for other things, such as serving other content), whereas the staging system is only processing a single file conversion and thus is not hitting a limit?
I've run up against this exact problem. It appears to be a limitation of GraphicsMagick. If you use the -debug all switch you will see that the decoder for PSD is trying to allocate more memory than is available per process.
For instance, I had a 6MB PSD that I wanted to thumbnail into a JPG. GM was unable to do it because it was trying to inflate the image all at once and would have needed 64GB of RAM (this is on my development machine, which only has 8GB).
I ran the same command on ImageMagick and it worked with no problem (on my development machine).
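For reference, the two knobs mentioned above look roughly like this; the limit names and values are illustrative, so check gm's own documentation for what your version accepts:
% gm convert -debug all file.psd -flatten file.png
% gm convert -limit memory 256MB -limit map 512MB file.psd -flatten file.png
The first traces what the PSD decoder is trying to allocate; the second caps the resources GM may use so it falls back to a disk-backed pixel cache instead of exhausting RAM.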
Related
We are using django-eventstream for sending out events to clients. You can think of our workflow as a Celery-like use case, but a very simple one. Things were working flawlessly until we hit the 'too many open files' error (Red Hat 7.4). We tracked which processes were opening the files using 'lsof' and found that Python was spawning several threads, each of which loaded the required libraries (mostly .so files). We are using gunicorn as our server, which spawns uvicorn workers. We tried to fall back to 'runserver', but faced the same issue.
On trying out the 'time' and 'chat' examples, we saw the same behavior. On every refresh of the page (same machine, same browser, same tab) a new thread is spawned and 'lsof' lists an increment of about 2k files on every refresh of the page.
We tried to recreate the same issue on two other machines with the same OS. We saw the same behavior, except on one machine. That one was a laptop with 4GB of RAM; the rest are servers with 256GB of RAM. Interestingly, everything works absolutely fine on the laptop, but not on the servers. Maybe because of the relative scarcity of resources, the OS is closing the files on the laptop but not on the servers, which is causing the 'too many open files' error?
Any idea how to resolve this issue?
Cheers!
Going ahead with the threads assumption, I tried to limit the number of threads by setting ASGI_THREADS. The number of threads is now limited, and thus the number of open files too. I don't know what will happen if more users than ASGI_THREADS try to connect to the server. I guess now I need to read up on load balancing...
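In case it helps anyone else, the workaround looks roughly like this; the module path and worker counts are made up for illustration (ASGI_THREADS is read by asgiref to size the thread pool it uses for synchronous code):
% export ASGI_THREADS=4
% gunicorn myproject.asgi:application -k uvicorn.workers.UvicornWorker --workers 2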
I have a Horizon app running on DigitalOcean on a 1GB RAM machine.
I try to set permissions with:
hz set-schema ./.hz/schema.toml
but I am getting the following error:
error: rethinkdb stderr: warn: Cache size does not leave much memory for server and query overhead (available memory: 779 MB).
I tried the "cache-size" option in the RethinkDB config file, but I am still getting the same error (I restarted the service).
Do I need to enlarge my DigitalOcean machine, or can I do something with the existing one?
It would be better to give RethinkDB more memory (I usually give it at least 2 gigs), but if your load is light that size should be fine. I think the recommendation is to make the cache size about half your available memory, but you can also lower it if that turns out to be unstable.
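If you do want to set it explicitly rather than rely on the default, a sketch (the instance name and value are illustrative; the setting is in megabytes):
# /etc/rethinkdb/instances.d/horizon.conf
cache-size=512
or on the command line:
% rethinkdb --cache-size 512
Also note the message you quoted is only a warning, so with a light load it should be safe to ignore.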
I am facing a serious ColdFusion server crashing issue. I have many live sites on that server, so this is serious and urgent.
Following are the system specs:
Windows Server 2003 R2, Enterprise X64 Edition, Service Pack 2
ColdFusion (8,0,1,195765) Enterprise Edition
Following are the hardware specs:
Intel(R) Xeon(R) CPU E7320 @ 2.13 GHz, 2.13 GHz
31.9 GB of RAM
It is crashing on an hourly basis. Can somebody help me find out the exact issue? I tried to find it through the ColdFusion log files but did not find anything there. Every time it crashes, I have to restart the ColdFusion services to get it back.
Edit 1
When I looked at the runtime log file "ColdFusion-out165.log", I found the following errors:
error ROOT CAUSE:
java.lang.OutOfMemoryError: Java heap space
javax.servlet.ServletException: ROOT CAUSE:
java.lang.OutOfMemoryError: Java heap space
04/18 16:19:44 error ROOT CAUSE:
java.lang.OutOfMemoryError: GC overhead limit exceeded
javax.servlet.ServletException: ROOT CAUSE:
java.lang.OutOfMemoryError: GC overhead limit exceeded
Here are my current JVM settings:
Minimum JVM Heap Size (MB): 512
Maximum JVM Heap Size (MB): 1024
JVM Arguments
-server -Dsun.io.useCanonCaches=false -XX:MaxPermSize=512m -XX:+UseParallelGC -Dcoldfusion.rootDir={application.home}/../ -Dcoldfusion.libPath={application.home}/../lib
Note: when I tried to increase the maximum JVM heap size to 1536 and restart the ColdFusion services, they would not start and gave the following error:
"Windows could not start the ColdFusion MX Application Server on Local Computer. For more information, review the System Event Log. If this is a non-Microsoft service, contact the service vendor, and refer to service-specific error code 2."
Shouldn't I be able to set my maximum heap size to 1.8 GB, since I am using a 64-bit operating system?
How much memory you can give to your JVM is predicated on the bitness of your JVM, not your OS. Are you running a 64-bit CF install? It was an uncommon thing to do back in the CF8 days, so it is worth asking.
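One quick way to check, sketched with an illustrative install path: run the JVM that CF actually uses (the one pointed at by java.home in runtime/bin/jvm.config) with -version; a 64-bit JVM identifies itself as a "64-Bit Server VM" in its output:
C:\ColdFusion8\runtime\jre\bin\java -version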
Basically the error is stating you're using too much RAM for how much you have available (which you know). I'd be having a look at how much stuff you're putting into session and application scope, and culling back stuff that's not necessary.
Objects in session scope are particularly bad: they have a far bigger footprint than one might think, and cause more trouble than they're worth.
I'd also look at how many inactive but not timed-out sessions you have, with a view to being far more aggressive with your session time-outs.
Have a look at your queries, and get rid of any SELECT * you have; cut them back to just the columns you need. Push data processing back into the DB rather than doing it in CF.
Farm scheduled tasks off onto a different CF instance.
Are you doing anything with large files? Either reading and processing them, or serving them via <cfcontent>? That can chew memory very quickly.
Are all your function-local variables in CFCs properly VARed? Especially ones in CFCs which end up in shared scopes.
Do you accidentally have debugging switched on?
Are you making heavy use of custom tags or files called in with <cfmodule>? I have heard apocryphal stories of custom tags causing memory leaks.
Get hold of Mike Brunt or Charlie Arehart to have a look at your server config / app (they will obviously charge consultancy fees).
I will update this as I think of more things to look out for.
Turn on the ColdFusion server monitor in the Administrator. Use it to observe behavior and find long-running processes and errors.
Also, make sure that memory monitoring is turned off in the ColdFusion Server Monitor. That will bring down a production server easily.
@Adil,
I had the same kind of issue, but it wasn't crashing the server; CPU usage was going up to 100%. I am not sure it is relevant to your issue, but it is at least worth a look.
See the question at the URL below:
Strange JRUN issue. JRUN eating up 50% of memory for every two hours
My blog entry for this:
http://www.thecfguy.com/post.cfm/strange-coldfusion-issue-jrun-eating-up-to-50-of-cpu
For me it was a high-traffic site storing client variables in the registry, which was what was going wrong.
Hope this helps.
Does anyone have a good way to set up multiple CFML engines, and multiple versions of them, together in a suitable environment for cross-testing a CFML-based application?
Ideally, I'd like this to be Ubuntu Server based, as I'm using it with VirtualBox (under Windows 7). Plus it'd be helpful if it were possible to switch between them, so my laptop can cope with one at a time rather than all running at once. I'm thinking of the following:
Adobe ColdFusion 9
Adobe ColdFusion 10
Railo 3.3.x
Railo 4.x
OpenBD 2.x
I'd also like to get them serving from the same shared directory, so I don't have to have a copy of the code for each engine. Cheers
You mentioned being able to "switch between them, so my laptop can cope with one at a time rather than all running at once". I'm guessing that you are thinking that each one will run on a different VM, or that they might require a huge amount of memory. I don't think you need to worry about that. Unless you require that they be on different machines, I think you could do this all on one VM and with one instance of a servlet container (like Tomcat).
From a high-level view, here is how I would do it.
Install Tomcat
Create or download .wars for each of the engines.
Deploy said .wars to that one instance of Tomcat
Set up Tomcat to use each of those servlets from a different host name (server.xml)
Create a code directory outside of Tomcat for your one copy of the code
Set up a symbolic link in each webapp to link the code folder into that servlet context
You should then be able to hit the same source from each engine by visiting the different host names in the browser.
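The symbolic-link step might look something like this; every path and host name below is an assumption for illustration:
% mkdir -p /srv/cfml-code
% ln -s /srv/cfml-code /var/lib/tomcat7/webapps/railo4/code
% ln -s /srv/cfml-code /var/lib/tomcat7/webapps/openbd2/code
with one <Host> entry per engine in Tomcat's conf/server.xml (e.g. railo4.local, openbd2.local) and matching names in /etc/hosts on the machine you browse from.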
I may be missing something; it has been a long time since I set something like this up. You'll likely need to make a bunch of tweaks (JVM settings, switching to the Sun/Oracle JVM vs. OpenJDK, etc.).
I don't think running this many engines will cause you great trouble. In my experience, for development, I have had 3 instances of CF9 running on Tomcat using only 189MB of RAM, and each additional instance did not increase that number by a third; far less. It would not surprise me if you could run all of those handily with less than 512MB of RAM, possibly even 256MB if you are really hurting for memory.
I hope this helps.
For ColdFusion 10, Railo and OpenBD you would be looking at deploying with standalone installations of Tomcat, Jetty or JBoss.
For ColdFusion 9, probably the easiest solution is the "Enterprise Multiserver configuration" setup.
With these kinds of installations they are pretty much platform-agnostic.
The things to be aware of are the web server, proxy, and JNDI ports used by each installation, but only if you want to run more than one server at a time.
After that, it's a question of whether you are bothered about proxying from Apache or Nginx to the server instances, and which connector you want to use.
No idea if this helps...
Since you've mentioned VirtualBox, I'll share my personal approach to this task. It includes a few fairly simple steps:
Install Ubuntu Server as a VirtualBox guest (the host is also Ubuntu).
Set up only basic software like the JVM and updates. Set up the virtual machine's networking as a bridged adapter to use my Wi-Fi connection.
Configure my Wi-Fi router's DHCP to assign a static IP to the MAC address of the virtual machine.
Add an entry to my (host) system hosts file: ip_assigned_to_vm virtual.ubuntu
Set up guest additions and mount my ~/www directory inside the machine to access web applications.
Now, when I need another machine for experiments, or some other configuration of software (I've tested ACF 10 and Railo 4 this way), I do two things:
Clone the existing clean machine.
Make sure it is using the same MAC address with the bridged interface.
That's it.
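For what it's worth, the clone step can also be scripted from the host; the VM names and MAC address here are illustrative only:
% VBoxManage clonevm "ubuntu-clean" --name "cfml-test" --register
% VBoxManage modifyvm "cfml-test" --macaddress1 080027AABBCC
The second command reuses the bridged adapter's MAC so the router's DHCP reservation and the hosts entry keep working.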
It doesn't matter which of the machines I run; they can all be accessed as http://virtual.ubuntu (of course, this requires proper web-server configuration on the guest). At the same time they are independent, and it is completely safe to do anything I wish and test anything that runs on Ubuntu.
The obvious downsides are that I can run just one machine at a time, and that much more disk space is used. Not a problem for me.
I've tried the approach with Tomcat and multiple WARs, but it has a couple of issues: I can't use different JVM and Tomcat settings, and if I break the setup, all the Tomcat hosts go down.
Hope this helps.
I've got some code that runs in Enterprise Guide (SAS Enterprise build, Windows locally, Unix server) which imports a large table via a local install of PC Files Server. It runs fine for me, but is slow to the point of uselessness for the system tester.
When I use his SAS identity on my Windows PC, the code works; but when I use my SAS identity on his machine it doesn't, so it appears to be a problem with the local machine. We have the same version of EG (with the same hot fixes installed) connecting to the same server (with the same roles), running the same code in the same project, connecting to the same Access database.
Even a suggestion of what to test next would be greatly appreciated!
libname ACCESS_DB pcfiles path="&db_path"
server=&_CLIENTMACHINE
port=9621;
data permanent.&output_table (keep=[lots of vars]);
format [lots of vars];
length [lots of vars];
set ACCESS_DB.&source_table (rename=([some awkward vars]));
if [var]=[value];
[build some new vars, nothing scary];
;
run;
Addenda: The PC Files Server is running on the same machine where the EG project is being run in both cases; we both have the same version installed. &db_path is the location of the Access database, on a network file store both users can access (in fact other, smaller tables can be retrieved by both users in a sensible amount of time). The server is administered by IT and is not one we as the business can get software installed on.
The resolution of your problem will require more details and is best handled by a dialog with SAS Tech Support. The "online ticket" form is here, or you can call them by phone.
For example, is the PC Files Server running locally on both your machine and your tester's machine? If yes, is the file referenced by &db_path on a network file server, and does your tester have similar access (meaning both of you can reach it the same way)? Have you considered installing the PC Files Server on your file server rather than on your local PC? Too many questions, I think, for a forum like this. But I could be wrong (it's happened before); perhaps others will have a great answer.