I am creating a theme for OpenCart and facing an issue: when I upload a small image in the image manager it shows up in the folder, but when I upload a large image (4 MB) the folder does not open.
I am using OpenCart 2.3, and all image folders are chmod 777.
I do not know what is causing this error.
I think you are getting the following "Allowed memory size" error:
Fatal error: Allowed memory size of 67108864 bytes exhausted (tried to
allocate 4000 bytes) in /home/public_html/system/library/image.php on
line 34
You will need to increase the memory limit in the php.ini file for that.
Go to your site source > admin folder > and open the php.ini file.
Then change memory_limit = 64M to memory_limit = 128M, and check again.
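For reference, after the edit the relevant line should read as below (this assumes OpenCart's bundled admin/php.ini; on some hosts the server-wide php.ini governs instead, so check which one your server actually loads):

; admin/php.ini -- maximum memory a single PHP request may allocate
memory_limit = 128M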
I have a container that is built to run selenium-chromedriver with Python to download an Excel (.xlsx) file from a website.
I am using SAM to build and deploy this image to be run in AWS Lambda.
When I build the container and invoke it locally, the program executes as expected: The download occurs and I can see the file placed in the root directory of the container.
The problem is: when I deploy this image to AWS and invoke my Lambda function, I get no errors; however, the download is never executed. The file never appears in my root directory.
My first thought was that maybe I didn't allocate enough memory to the Lambda instance. I gave it 512 MB, and the logs said it was using 416 MB. Maybe there wasn't enough room to fit another file inside? So I increased the memory provided to 1024 MB, but still no luck.
My next thought was that maybe the download was just taking a long time, so I also allowed the program to wait for 5 minutes after clicking the download, to ensure the download is given time to complete. Still no luck.
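For reference, the wait is implemented roughly like this (a sketch of the idea rather than my exact code; the .xlsx suffix and /tmp directory are assumptions about my setup):

import os
import time

def wait_for_download(directory="/tmp", suffix=".xlsx", timeout=300):
    # Chrome writes in-progress downloads as *.crdownload, so wait until
    # a finished .xlsx exists and no partial file remains.
    deadline = time.time() + timeout
    while time.time() < deadline:
        names = os.listdir(directory)
        done = any(n.endswith(suffix) for n in names)
        partial = any(n.endswith(".crdownload") for n in names)
        if done and not partial:
            return True
        time.sleep(1)
    return False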
I have also tried setting the following options for chromedriver (full list of chromedriver options posted at bottom):
options.add_argument(f"--user-data-dir={'/tmp'}"),
options.add_argument(f"--data-path={'/tmp'}"),
options.add_argument(f"--disk-cache-dir={'/tmp'}")
and also setting tempfolder = mkdtemp() and passing that into the chrome options as above in place of /tmp. Still no luck.
Since this application is in a container, it should run the same locally as it does on AWS. So I am wondering if something in the configuration outside of the container is blocking my ability to download a file? Maybe the request is going out but the response is not being allowed back in?
Please let me know if there is anything I need to clarify -- Any help on this issue is greatly appreciated!
Full list of Chromedriver options
options.binary_location = '/opt/chrome/chrome'
options.headless = True
options.add_argument('--disable-extensions')
options.add_argument('--no-first-run')
options.add_argument('--ignore-certificate-errors')
options.add_argument('--disable-client-side-phishing-detection')
options.add_argument('--allow-running-insecure-content')
options.add_argument('--disable-web-security')
options.add_argument('--lang=' + random.choice(language_list))
options.add_argument('--user-agent=' + fake_user_agent.user_agent())
options.add_argument('--no-sandbox')
options.add_argument("--window-size=1920x1080")
options.add_argument("--single-process")
options.add_argument("--disable-dev-shm-usage")
options.add_argument("--disable-dev-tools")
options.add_argument("--no-zygote")
options.add_argument(f"--user-data-dir={'/tmp'}")
options.add_argument(f"--data-path={'/tmp'}")
options.add_argument(f"--disk-cache-dir={'/tmp'}")
options.add_argument("--remote-debugging-port=9222")
options.add_argument("start-maximized")
options.add_argument("enable-automation")
options.add_argument("--headless")
options.add_argument("--disable-browser-side-navigation")
options.add_argument("--disable-gpu")
driver = webdriver.Chrome("/opt/chromedriver", options=options)
Just in case anybody stumbles across this question in the future: adding the following to the Chrome options solved my issue.
prefs = {
    "profile.default_content_settings.popups": 0,
    "download.default_directory": r"/tmp",
    "directory_upgrade": True
}
options.add_experimental_option("prefs", prefs)
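My reading of why this works (an assumption on my part, not something the Chrome docs spell out for Lambda): a Lambda container's filesystem is read-only except for /tmp, and Chrome's default download directory is not /tmp, so the download was failing silently; setting download.default_directory gives Chrome a writable target.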
I am doing distributed training using the GCP Vertex AI platform. The model is trained in parallel across 4 GPUs using PyTorch and Hugging Face. After training, when I save the model from the local container to a GCP bucket, it throws an error.
Here is the code:
I launch the train.py this way:
python -m torch.distributed.launch --nproc_per_node 4 train.py
After training is complete I save the model files using this. There are 3 files that need to be saved.
trainer.save_model("model_mlm") #Saves in local directory
subprocess.call('gsutil -o GSUtil:parallel_composite_upload_threshold=0 cp -r /pythonPackage/trainer/model_mlm gs://*****/model_mlm', shell=True, stdout=subprocess.PIPE) #from local to GCP
Error:
ResumableUploadAbortException: Upload complete with 1141101995 additional bytes left in stream; this can happen if a file changes size while being uploaded
And sometimes I get this error:
ResumableUploadAbortException: 409 The object has already been created in an earlier attempt and was overwritten, possibly due to a race condition.
As per the documentation, this is a name conflict: you are trying to overwrite an object that has already been created.
So I would recommend changing the destination location to include a unique identifier per training run, so you don't receive this type of error. For example, add a timestamp in string format at the end of your bucket path, like:
- gs://pypl_bkt_prd_row_std_aiml_vertexai/model_mlm_vocab_exp2_50epocs/20220407150000
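A minimal sketch of that change, reusing your subprocess call (the bucket name here is illustrative, not your real one):

import subprocess
from datetime import datetime

# Build a unique destination per run so repeated attempts never collide
stamp = datetime.utcnow().strftime("%Y%m%d%H%M%S")
dest = f"gs://my-bucket/model_mlm/{stamp}"
subprocess.call(
    f"gsutil -o GSUtil:parallel_composite_upload_threshold=0 cp -r /pythonPackage/trainer/model_mlm {dest}",
    shell=True, stdout=subprocess.PIPE)

One more thing worth checking (an assumption about your setup): with --nproc_per_node 4, all four launched processes run the save and upload code, so four gsutil copies can race on the same object; guarding the upload so only rank 0 performs it would also remove the race.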
I would like to mention that this kind of error is retryable, as mentioned in the error docs.
At work I had the bad luck of having to fix a badly written URL validator script in Python, done by someone else. It's really messy code, and while trying to fix one of the bugs, I found some behavior that I don't understand.
The script has to process a file with around 10 thousand URLs in it; it has to check each URL to see if it's valid, not only in its structure but also whether it exists (using pycurl for this). In one part of the code, this is done:
for li in lineas:
    liNew = "http://" + li
    parsedUrl = urlparse.urlparse(liNew)
In this case the bug was the addition of "http://" at the beginning of the line, as that was already being done earlier in the script. So I changed the code to this:
for li in lineas:
    liNew = li
    parsedUrl = urlparse.urlparse(liNew)
Now, with the same input file the script fails with the error:
IOError: [Errno 24] Too many open files:/path/to/file/being/written/to.txt
With liNew = "http://" + li, file descriptors don't go over the default limit of 1024, but changing that line to liNew = li makes them go over 8000. Why?
Before the change: broken URLs (the duplicated "http://" made every URL invalid), so nothing gets downloaded and no files are opened.
After the change: correct URLs, so the responses get saved to files (and there are 10K URLs).
It probably doesn't make sense to download more than a few hundred URLs concurrently (bandwidth, disk). Make sure that all files (sockets, disk files) are properly disposed of after the download (that the close() method is called in time).
The default limit (1024) is low, but don't increase it unless you understand what the code does.
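A minimal sketch of one way to do that, in modern Python 3 (urllib stands in for the original pycurl calls; file names are illustrative):

import concurrent.futures
import urllib.request

def fetch(url, dest):
    # 'with' guarantees both the socket and the output file are closed,
    # even if the request raises.
    with urllib.request.urlopen(url, timeout=10) as resp, open(dest, "wb") as out:
        out.write(resp.read())

with open("urls.txt") as f:
    lineas = [line.strip() for line in f if line.strip()]

# A bounded pool keeps at most 100 downloads (and descriptors) in flight
with concurrent.futures.ThreadPoolExecutor(max_workers=100) as pool:
    futures = [pool.submit(fetch, url, "out_%05d.txt" % i)
               for i, url in enumerate(lineas)]
    for fut in concurrent.futures.as_completed(futures):
        try:
            fut.result()
        except Exception:
            pass  # invalid/unreachable URLs land here; log them in real code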
I'm trying to set up the sorl-thumbnail Django app to provide thumbnails of PDF files for a web site, running on Windows Server 2008 R2 with the Apache web server.
I've had sorl-thumbnail working with the PIL backend for thumbnail generation of JPEG images, which was fine.
Since PIL cannot read PDF files, I wanted to switch to the GraphicsMagick backend.
I've installed and tested the GraphicsMagick/Ghostscript combination. From the command line,
gm convert foo.pdf -resize 400x400 bar.jpg
generates the expected JPEG thumbnail. It also works for JPEG-to-JPEG thumbnail generation.
However, when called from sorl-thumbnail, Ghostscript crashes.
From the Django Python shell (python manage.py shell) I use the low-level command described in the sorl docs, pass in a FieldFile instance (ff) pointing to foo.pdf, and get the following error:
In [8]: im = get_thumbnail(ff, '400x400', quality=95)
**** Warning: stream operator isn't terminated by valid EOL.
**** Warning: stream Length incorrect.
**** Warning: An error occurred while reading an XREF table.
**** The file has been damaged. This may have been caused
**** by a problem while converting or transfering the file.
**** Ghostscript will attempt to recover the data.
**** Error: Trailer is not found.
GPL Ghostscript 9.07: Unrecoverable error, exit code 1
Note that ff is pointing to the same file that converts fine when using gm convert from the command line.
I've also tried passing an ImageFieldFile instance (iff) and get the following error:
In [5]: im = get_thumbnail(iff, '400x400', quality=95)
identify.exe: Corrupt JPEG data: 1 extraneous bytes before marker 0xdb `c:\users\thin\appdata\local\temp\tmpxs7m5p' # warning/jpeg.c/JPEGWarningHandler/348.
identify.exe: Corrupt JPEG data: 1 extraneous bytes before marker 0xc4 `c:\users\thin\appdata\local\temp\tmpxs7m5p' # warning/jpeg.c/JPEGWarningHandler/348.
identify.exe: Corrupt JPEG data: 1 extraneous bytes before marker 0xda `c:\users\thin\appdata\local\temp\tmpxs7m5p' # warning/jpeg.c/JPEGWarningHandler/348.
Invalid Parameter - -auto-orient
Changing the sorl settings back to the default PIL backend and repeating the command for JPEG-to-JPEG conversion, the thumbnail image is generated without errors/warnings and is available through the cache.
It seems that sorl is copying the source file to a temporary file before passing it to gm - and that the problem originates in this copy operation.
I've found what I believe to be the copy operation in the sources of sorl_thumbnail-11.12-py2.7.egg\sorl\thumbnail\engines\convert_engine.py lines 47-55:
class Engine(EngineBase):
    ...
    def get_image(self, source):
        """
        Returns the backend image objects from a ImageFile instance
        """
        handle, tmp = mkstemp()
        with open(tmp, 'w') as fp:
            fp.write(source.read())
        os.close(handle)
        return {'source': tmp, 'options': SortedDict(), 'size': None}
Could the problem be here? I don't see it!
Any suggestions of how to overcome this problem would be greatly appreciated!
I'm using Django 1.4, sorl-thumbnail 11.12 with memcached, and Ghostscript 9.07.
After some trial and error, I found that the problem could be solved by changing the write mode from 'w' to 'wb', so that the sources of sorl_thumbnail-11.12-py2.7.egg\sorl\thumbnail\engines\convert_engine.py lines 47-55 now read:
class Engine(EngineBase):
    ...
    def get_image(self, source):
        """
        Returns the backend image objects from a ImageFile instance
        """
        handle, tmp = mkstemp()
        # 'wb' is essential on Windows: text mode ('w') translates newline
        # bytes and corrupts binary data such as PDFs.
        with open(tmp, 'wb') as fp:
            fp.write(source.read())
        os.close(handle)
        return {'source': tmp, 'options': SortedDict(), 'size': None}
There are, I believe, two other locations in the convert_engine.py file where the same change should be made.
After that, the gm convert command was able to process the file.
However, since my PDFs are fairly large multipage PDFs, I then ran into other problems, the most important being that the get_image method makes a full copy of the file before the thumbnail is generated. With file sizes around 50 MB this turns out to be a very slow process, so I've finally opted for bypassing sorl and calling gm directly; the thumbnail is then stored in a standard ImageField. Not so elegant, but much faster.
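A minimal sketch of that bypass (function and file names are illustrative; [0] restricts the conversion to the first page, and jpg:- makes gm write the JPEG to stdout):

import subprocess
from django.core.files.base import ContentFile

def pdf_thumbnail(pdf_path, size="400x400"):
    # Convert only the first page and stream the JPEG back, avoiding the
    # full temporary copy that sorl's convert engine makes.
    data = subprocess.check_output(
        ["gm", "convert", pdf_path + "[0]", "-resize", size, "jpg:-"])
    return ContentFile(data, name="thumb.jpg")

# e.g. document.thumbnail.save("thumb.jpg", pdf_thumbnail(document.file.path))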
I've spent all night researching this without a solution.
I'm trying to verify the digital signature of a file in the drivers folder (C:\Windows\System32\drivers\*.sys); pick whichever one you want. I know that the code is correct because if you move the file from that folder to C:\ the test works.
WinVerifyTrust gives error 80092003
http://pastebin.com/nLR7rvZe
CryptQueryObject gives error 80092009
http://pastebin.com/45Ra6eL4
What's the deal?
0x80092003 = CRYPT_E_FILE_ERROR = An error occurred while reading or writing to the file.
0x80092009 = CRYPT_E_NO_MATCH = No match when trying to find the object.
I'm guessing you're running on a 64-bit machine and WOW64 file system redirection is redirecting you to syswow64\drivers, which is empty. You can disable redirection with Wow64DisableWow64FsRedirection().
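A minimal sketch of that check from Python via ctypes (the same two kernel32 calls apply from C/C++; the driver path is just an example):

import ctypes

kernel32 = ctypes.windll.kernel32
old_value = ctypes.c_void_p()

# Disable WOW64 redirection for this thread so System32 really means System32
if kernel32.Wow64DisableWow64FsRedirection(ctypes.byref(old_value)):
    try:
        path = r"C:\Windows\System32\drivers\acpi.sys"  # example driver
        print(kernel32.GetFileAttributesW(path))  # -1 means the file wasn't found
    finally:
        # Always restore redirection; other code on this thread may rely on it
        kernel32.Wow64RevertWow64FsRedirection(old_value)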
If you right-click and view the properties of the file, can you see a digital signature? Most likely your file is signed as part of a catalogue, and you need to use the catalogue API to look up the certificate in the catalogue database and verify it.