How can we handle an out of memory error?
One of our customers contacted us and said that when he uploads images, he gets this error:
out of memory
I asked about the size of the images and he said 4 MB, but I could upload even a 7 MB image successfully.
So I tried uploading big images until I hit the error. Most of them uploaded successfully, but I got the same error with some of them.
I found that the error is not related to the file size of the image; it's related to the width and height of the image.
You can understand it better by looking at this link:
memory error
But I don't know exactly what maximum width and height I should validate against to prevent this error.
I forgot to say that I also resize the images dynamically.
But I can't tell users to resize their images, because they take large pictures with their phones and some of them may not know how to resize images with Photoshop.
Please help.

Ideally you should restructure your code to use less memory.
There are ways to reduce memory pressure for any JVM process; the bluntest is to just give the JVM more memory with the -Xmx option.
For image decoding specifically, you should decode with the inSampleSize option to reduce memory consumption.
Another option, inJustDecodeBounds, can help you find the correct inSampleSize value; see the sketch below.
Also have a look at this article.
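To make the last two suggestions concrete, here is a minimal sketch, assuming the images are decoded on Android with BitmapFactory (which is where inSampleSize and inJustDecodeBounds live); the class name, helper name, and target dimensions are placeholders for illustration, not code from the original answer.

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class BitmapLoader {

    // Decode a file scaled down so it fits roughly within reqWidth x reqHeight
    // without ever loading the full-resolution pixels into memory.
    public static Bitmap decodeSampledBitmap(String path, int reqWidth, int reqHeight) {
        BitmapFactory.Options options = new BitmapFactory.Options();

        // First pass: inJustDecodeBounds reads only the dimensions; no pixel data is allocated.
        options.inJustDecodeBounds = true;
        BitmapFactory.decodeFile(path, options);

        // Pick a power-of-two sample size so the decoded bitmap lands close to the target size.
        options.inSampleSize = calculateInSampleSize(options, reqWidth, reqHeight);

        // Second pass: decode the sub-sampled bitmap for real.
        options.inJustDecodeBounds = false;
        return BitmapFactory.decodeFile(path, options);
    }

    private static int calculateInSampleSize(BitmapFactory.Options options, int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        while (options.outHeight / inSampleSize > reqHeight
                || options.outWidth / inSampleSize > reqWidth) {
            inSampleSize *= 2;
        }
        return inSampleSize;
    }
}

For example, if a photo only needs to be handled at about 1024x1024, decodeSampledBitmap(path, 1024, 1024) would decode a 4000x3000 camera picture with inSampleSize = 4, cutting the decoded bitmap's memory use by roughly a factor of 16 compared to decoding it at full resolution.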

Related

Following reinstall of failed hard drive, RMarkdown graphics now vary in size

My Mac's hard drive failed and I had a replacement drive installed. All R-related programs were reinstalled as if my computer were brand new.
Prior to the failure, if I inserted a graphic into an RMarkdown document, the rendered size of the graphic was consistent and directly related to the actual size of the screenshot image.
Now, relatively small screen captures render large in size. I have attempted to add a pic to show the problem. I am not certain what I need to do to fix this situation. This is a book that will soon go to publication. As it stands, the publication is delayed unless I am able to somehow return to 'normal.' A Time Machine restore is out of the question for reasons I cannot go into, even though that was attempted. Several Apple senior advisors with whom I spoke do not recommend the 'Time Machine' restore option. As such, I seek another solution - hopefully not individually resizing 1,200+ text images. pic of mis-sizing

Style transfer on a large image (in chunks?)

I am looking into various style transfer models and I noticed that they all have limited resolution (when running on a Pixel 3, for example, I couldn't go beyond 1,024x1,024; I got an OOM otherwise).
I've noticed a few apps (e.g. this app) which appear to do style transfer on images up to ~10 MP. These apps also show a progress bar, which I guess means they don't just call a single TensorFlow "run" method for the entire image, as otherwise they wouldn't know how much had been processed.
I would guess they are using some sort of tiling, but naively splitting the image into 256x256 tiles produces an inconsistent style (not just at the tile borders).
As this seems like an obvious problem, I tried to find publications about it, but I couldn't find any. Am I missing something?
Thanks!
I would guess people split the model into multiple sub-models (for VGG it is easy to do manually, e.g. by layers) and then use Keras's model.summary() (or benchmarks) to estimate the relative time each step takes and thus drive the progress bar. Such a separation probably also saves memory, as TensorFlow Lite might not be clever enough to reuse the memory holding intermediate activations from lower layers once they are no longer needed.

CKFinder: How to create small, medium, and large images automatically from image upload

I am looking for a way to make small, medium, and large copies of an image automatically when it is uploaded through CKFinder 2.5. Basically, I want the same image copied and resized to different sizes so that I can use different images for my responsive site. The imageresize plugin does something similar, but not quite what I am looking for, since it is still initiated by the user.
I would prefer to add the code to the config file over adjusting code in the core folder. I am using ColdFusion, but I would appreciate ideas in any language so I can make something work. Thanks in advance!

Import large CSV file into TileMill for use in MapBox

I am beginning to experiment with MapBox and TileMill, and what I would like to do is map 400,000 addresses in a CSV file which have been pre-geocoded. When I try to add this 100 MB CSV file as a layer in MapBox, I receive an error telling me that the CSV file is greater than 20 MB, and apparently this is a problem.
Can someone point me in the right direction in terms of the best way to get these 400k records into TileMill? Eventually, I want to publish the map to the web, and I was planning to do that using MapBox. I saw a program for converting CSV to a shapefile, and I am wondering whether this is the best approach.
Hundreds of thousands of markers is a lot. On the free tier of Mapbox, there is a limit of two thousand features. Such a limit would not stop you from displaying them in TileMill, but it would stop you from uploading them to mapbox.com.
For discussion of that limit, see here.
A simple strategy for reducing the number of markers is to restrict yourself to the subset of features that lies within a smaller bounding box (see the sketch below).
I don't think it will matter whether your features are expressed as GeoJSON, shapefiles, CSV, or other formats. The number of features is what's stopping you.
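To make the bounding-box idea concrete, here is a minimal sketch in Java, assuming the pre-geocoded CSV has a header row and plain latitude/longitude values in its second and third columns; the file names, column positions, and box coordinates are placeholders, not anything from the original question.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;

// Keep only the CSV rows whose coordinates fall inside a bounding box,
// so the resulting layer stays well under the upload limits.
public class BoundingBoxFilter {
    // Placeholder box; replace with the area you actually care about.
    static final double MIN_LAT = 40.5, MAX_LAT = 40.9;
    static final double MIN_LON = -74.3, MAX_LON = -73.7;

    public static void main(String[] args) throws IOException {
        try (BufferedReader in = Files.newBufferedReader(Paths.get("addresses.csv"));
             PrintWriter out = new PrintWriter(Files.newBufferedWriter(Paths.get("addresses_subset.csv")))) {
            String header = in.readLine();
            if (header != null) {
                out.println(header);
            }
            String line;
            while ((line = in.readLine()) != null) {
                // Naive split: assumes no quoted fields containing commas.
                String[] cols = line.split(",");
                double lat = Double.parseDouble(cols[1]);
                double lon = Double.parseDouble(cols[2]);
                if (lat >= MIN_LAT && lat <= MAX_LAT && lon >= MIN_LON && lon <= MAX_LON) {
                    out.println(line);
                }
            }
        }
    }
}

The same filtering could of course be done in any language or with a spatial query; the point is only that trimming the feature count, rather than changing the file format, is what gets you under the limit.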
I have the same problem: I had to import a 22 MB CSV file into TileMill and got the same error.
Although I don't have a working answer for you, I would try either of the following:
Convert the CSV to SQLite export files: http://www.mapbox.com/tilemill/docs/tutorials/sqlite-work/
Configure the buffer for TileMill (though I doubt this would be the best option, because my TileMill can take 5 GB of memory when rendering points/markers, and increasing the buffer would make things worse).
I will keep experimenting with these ideas and update this thread as soon as I find something. I am also looking forward to the TileMill pros out here giving the best working answer!
Best

og:image is not being fetched correctly by Facebook on liking or sharing

I'll try to be direct in explaining my problem:
When liking/sharing an article from my website (http://www.radiopico.com), og:image does not work well (you can try with this URL, for example: http://www.radiopico.com/index.php?n=noticias&menu=noticias&id_noticia=14350).
In the debugger I always get the correct image URL.
It never shows the image.
If I click the image I get a "not found" error, but if I copy the URL and paste it into the address bar I get the image.
Sometimes it says the image is too small. That is wrong, because it is much larger than the minimum sizes Facebook encourages. There is also no aspect-ratio problem.
When I try to post the address on Facebook, it never shows the picture.
I have tried adding a time variable (?T=...) after the address to make sure it is not a caching issue.
I have put in multiple og:image tags; it still does not work.
I have read and tried all the "tricks" I could find here (Stack Overflow) or on Google.
Thanks for your support and best wishes on resolving this mystery.
I notice that your image is technically not defined at 72 dpi, but rather at 300 dpi.
I cannot say definitively that this is causing your problem. Browsers are generally smart enough to display an image at 72 dpi (using the pixel dimensions) regardless of the "Resolution" defined within the image. However, it might be worth re-saving the image with the resolution correctly set for the web at 72 dpi. Perhaps Facebook is getting confused about the image size because of the resolution setting.
Also, just to eliminate potential problems with the image: if you have Photoshop, use the Save for Web command to re-save the image, which will automatically define the re-saved image at 72 dpi AND save it without a built-in preview, which you should always do for the web anyway to reduce file size. Then try again. Again, I'm not sure these suggestions will work, but it seems like a good idea to eliminate the resolution, the embedded preview, and any potential issues with the original file by re-saving.
Debugging your link, I see the linter response is 206.
I don't know if it can help you, but I've found this:
Check it out ;)