Can anyone suggest why vQmod fails to write cache files and instead writes an empty file named vq2-C to /vqmod/vqcache?
Environment
Windows Server 2012
Plesk Panel 12.5
PHP 5.3.29
MySQL 5.6.26
OpenCart 2.1.0.1
vQmod 2.6.1
Issue
vQmod fails to save modifications to /vqmod/vqcache/vq2-*.php
On each page load it re-applies the modifications specified in /vqmod/xml/*.xml and writes an empty file named vq2-C to /vqmod/vqcache.
Background
The affected website was migrated from another Windows box with similar configuration.
In short, the old server ran Plesk Panel 12.0 and the new server runs Plesk 12.5, so software versions differ slightly.
Both servers run PHP 5.3.29, but the new server follows OWASP recommendations more closely and therefore has more PHP functions disabled, e.g. fopen_with_path.
Investigation so far
Running the vQmod installer again reports: VQMOD ALREADY INSTALLED!
File permissions
/vqmod/logs and /vqmod/vqcache have modify permissions, and files are written to both. Permissions are applied through Plesk Panel and have been checked over Remote Desktop; enabling global write permissions on the web root through Plesk does not change anything.
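To rule out permissions independently of Plesk, a quick probe script (a minimal sketch; the paths are assumed from the layout above) can be dropped into the web root and opened in a browser:

    <?php
    // write-test.php - run from the web root; adjust the path if your layout differs.
    $dir = __DIR__ . '/vqmod/vqcache';
    var_dump(is_writable($dir));                          // can the PHP user write here?
    $ok = @file_put_contents($dir . '/write-test.txt', 'ok');
    var_dump($ok !== false);                              // did an actual write succeed?

If both checks pass, the problem is unlikely to be NTFS permissions.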
Logs
The vQmod logs contain no useful information; only skipped files are noted, e.g. VQModObject::parseMods - Could not resolve path for [ catalog/language/english/module/featured.php] (SKIPPED).
No php_error.log files are generated.
Failed Request Tracing does not pick up any issues.
Tests
All /vqmod/xml files have been removed except vqmod_opencart.xml and one that modifies column_left.tpl. These modifications are applied successfully but no cache files are generated in /vqmod/vqcache.
If I remove /vqmod/checked.cache and /vqmod/mods.cache, the files are regenerated on the next page load.
vQmod versions - rolled back to 2.5.1 but the issue persists.
Other considerations
When one particular vQmod modification is enabled, page load time is unacceptably slow (up to 20 seconds). The modification displays the first 4 products from subcategories on the parent category page. I've not gone through the code yet but assume it's hitting the database pretty hard.
On the original server page load was sub 2 seconds. I doubt this is related to the cache issue as that seems to be a permissions problem.
I had a similar issue and later found that it was caused by directory permissions; in this case it might be caused by the move. Set the permissions of the vqcache folder recursively to 777. It worked for me.
My apologies, I should have updated this sooner.
The issue relates to a case-sensitive preg_replace.
In short, changing line #120 of vqmod.php from
$stripped_filename = preg_replace('~^' . preg_quote(self::getCwd(), '~i') . '~', '', $sourcePath);
to
$stripped_filename = preg_replace('~^' . preg_quote(self::getCwd(), '~i') . '~i', '', $sourcePath);
means the /vqmod/vqcache/vq2-*.php cache files are written and the website runs as normal.
Explained in more detail at https://github.com/vqmod/vqmod/issues/81
I don't think the preg_quote call should have the i argument, but it's in the original code so I left it in.
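For anyone hitting the same thing, here is a minimal sketch of what goes wrong (the paths are invented for illustration). On Windows the path case reported by getcwd() can differ from the case in the script path, so without the /i modifier the prefix is never stripped and the cache filename is built from the full absolute path; NTFS then treats everything after the drive-letter colon as an alternate data stream, which would explain the empty vq2-C file. As far as I can tell, preg_quote only looks at the first character of its delimiter argument, so the stray i there is simply ignored.

    <?php
    // Illustration only: example paths, not the real server layout.
    $cwd        = 'c:\inetpub\vhosts\example.com\httpdocs\\';         // what self::getCwd() returns
    $sourcePath = 'C:\inetpub\vhosts\example.com\httpdocs\index.php'; // note the uppercase C:

    // Original line #120: the match is case-sensitive, so 'c:' never matches 'C:'
    // and nothing is stripped from the path.
    $stripped = preg_replace('~^' . preg_quote($cwd, '~') . '~', '', $sourcePath);
    var_dump($stripped); // the full path, unchanged

    // Fixed line: the /i modifier makes the prefix match case-insensitively.
    $stripped = preg_replace('~^' . preg_quote($cwd, '~') . '~i', '', $sourcePath);
    var_dump($stripped); // 'index.php'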
Related
We have recently begun migrating to ColdFusion 2018 Enterprise, but have found that scheduled tasks do not work. The relevant cfm file works if run in the browser on the same server, but if we try to run it as a scheduled task it does not work (although the screen will say it has run successfully).
The log file just contains a single line for each run:
Information","DefaultQuartzScheduler_Worker-5","11/20/20","12:48:18","","Task default.takename triggered."
From what I understand, there should be additional lines for the HTTP request and so on, but there are none.
We have tried various usernames and passwords, including admin accounts to make sure it is not a permissions issue but nothing seems to make any difference.
We have also tried outputting to a file, but nothing ever populates it, although the file's modified date is updated with the date/time the task ran (or a new file is created if necessary).
Does anyone have any experience with this type of problem?
This ended up being an IIS permissions issue. We resolved it by enabling anonymous authentication both for the directory containing the relevant cfm files and for the "jakarta" directory that I believe ColdFusion uses for some integration requirements. Scheduled tasks then ran as expected.
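For anyone who prefers to script this, anonymous authentication can also be set in a web.config for the relevant directories (standard IIS schema; shown as a sketch). Note that this section is usually locked at the server level, so it may need to be unlocked first or set in applicationHost.config or IIS Manager instead:

    <configuration>
      <system.webServer>
        <security>
          <authentication>
            <anonymousAuthentication enabled="true" />
          </authentication>
        </security>
      </system.webServer>
    </configuration>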
I upgraded from MAMP PRO 5 to 6 and now all the websites are marked red.
In the "document root" field it says "restricted folder", and when I try to add a new host I can't select a folder; they're all greyed out.
My files are located in /myUser/Library/Webserver/Documents/…
I gave apachectl, httpd and MAMP PRO Full Disk Access in System Preferences.
Any idea if I can fix this without moving the whole folder to a different location?
The existing websites are working, though (e.g. I can start Apache and the local websites respond).
(running MacOS Catalina)
It's possible that changing the name of htdocs to localhost will do the trick. However, here are the steps I took to correct the problem:
Stopped MAMP Pro servers
Copied Applications/MAMP/htdocs to my Dropbox folder (I'm guessing it could go in the documents folder or anywhere else)
Clicked on localhost in MAMP and chose the new document root location (Dropbox/htdocs in my case)
Chose new document roots for all my hosts at the new location
Of note, MAMP automatically changed the name of htdocs to localhost. This is the basis for my assertion at the beginning of this post.
I am having the same problem on Mojave. I filed a bug report with the developer and will respond here if I hear something.
My files are located in /Applications/MAMP/htdocs and I cannot choose any folder within that folder.
Interestingly, launching the existing (red-marked) hosts works; just not editing them or creating new ones.
It is possible to use the same document root with multiple hostnames. This can be done via the "Aliases" option in the "General" section.
This answer comes a little late and is not the answer to the original question, but maybe this will help others to work around the limitation of MAMP PRO 6.
Unfortunately, I didn't get a tip in the right direction from MAMP PRO support or via a Google search.
I have over 30 Leaflet maps hosted in my Google Cloud Platform bucket (for example), and it has always been an easy process to upload my folder (which includes an HTML file, with sub-folders containing .js and .css files) and share the map publicly.
I tried uploading another map today, but within the folder there are no files showing and I get the following message "There are no live objects in this folder. If you have object versioning enabled, this folder may contain archived versions of objects, which aren't visible in the console. You can list archived object versions using gsutil or the APIs."
Does anyone know what is going on here?
We have also seen this problem, and it seems that the issue is limited to buckets that have spaces in the name.
It's also not reproducible through the web console, but if you use gsutil to upload a file to a bucket with a space in the name, it won't be visible in the web UI.
I can see from your screenshot that your bucket also has spaces (%20 in the url).
If you need a workaround asap, you could rename your bucket...
But Google should fix this soon, I hope.
There is currently an open issue with the GCS/Console integration.
If file names contain any symbols that need URL encoding, they are not visible in the console but are still accessible via gsutil or the API (which is currently the recommended workaround).
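As an example of the API route, listing the affected objects with the google/cloud-storage PHP client (bucket name and prefix are placeholders) might look like this:

    <?php
    // composer require google/cloud-storage
    require 'vendor/autoload.php';

    use Google\Cloud\Storage\StorageClient;

    $storage = new StorageClient();                  // uses Application Default Credentials
    $bucket  = $storage->bucket('example-bucket');   // placeholder bucket name

    // Objects whose names need URL encoding still list fine via the API:
    foreach ($bucket->objects(['prefix' => 'My Maps/']) as $object) {
        echo $object->name(), PHP_EOL;
    }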
Update: the issue has been resolved as of 8 May 2018, 10:00 UTC.
This can happen if the file doesn't have an extension: the UI treats it as a folder and lets you navigate into it, showing a blank folder instead of the file contents.
We had the same symptom (files show up via the API but are invisible on the web and via the CLI).
The issue turned out to be that we were saving files to "./uploads", which Google interprets as "create a directory literally called '.' and then a subdirectory called uploads."
The fix was to upload to "uploads/" instead of "./uploads". We also just ran a mass copy operation via the API for everything under "./uploads". All visible now!
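For illustration, with the google/cloud-storage PHP client (names are placeholders) the difference is just the object name you pass:

    <?php
    require 'vendor/autoload.php';

    use Google\Cloud\Storage\StorageClient;

    $bucket = (new StorageClient())->bucket('example-bucket'); // placeholder

    // Bad: the "./" creates a literal "." folder that the console UI hides.
    // $bucket->upload(fopen('/tmp/report.csv', 'r'), ['name' => './uploads/report.csv']);

    // Good: a plain prefix shows up in the console as an "uploads/" folder.
    $bucket->upload(fopen('/tmp/report.csv', 'r'), ['name' => 'uploads/report.csv']);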
I also had spaces in my URL and it was not working properly yesterday. I checked this morning and everything is working as expected. I still have the spaces in my URL, by the way.
This is not the first time I've seen this infinite loading after copying my OpenCart shop. If I view the page source in the browser I can see the code, and the resources show as loaded in Chrome's Network tab. After maybe 10 minutes of loading I can sometimes see my site, but it looks as if it has no CSS.
My steps: create a new VM on Google Cloud, install MySQL (plus create a user and GRANT ALL), install apache2 and phpMyAdmin, copy the files over from the old hosting, and change the internal paths in config.php.
There is not a single error in /var/log/apache2/error.log. Apache and MySQL are working. The VM is fast enough, and neither CPU nor RAM is heavily loaded. Restarting the VM does not help.
Where could the problem be?
Often this is caused by a typo in the HTTP_SERVER value in the config file. Make sure it's formatted as http://www.domain.co.uk/ (with the trailing slash); anything else can cause the loop.
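For reference, the relevant lines in OpenCart's config.php should look like this (the domain is an example; note the trailing slashes). The same values appear in admin/config.php and need the same treatment:

    <?php
    // config.php - the domain shown is an example
    define('HTTP_SERVER',  'http://www.domain.co.uk/');
    define('HTTPS_SERVER', 'https://www.domain.co.uk/');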
http://www.keciadesign.dk
I am trying to set up table rates in Magento 1.6.2.0. The problem occurs when I try to upload the table rates file (a CSV file): the error "Unable to list current working directory" appears and I can't go any further.
The tmp, media and var folders have 777 permissions.
I have read everything I could find on the Internet about this problem; many people seem to have had it, but I have yet to see a solution.
Note:
Probably not very relevant, but I am on Unoeuro hosting, on a shared server.
With some extensions (e.g. Wyomind Simple Google Shopping), the error shows up when var/tmp is missing from the Magento directory structure.
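A quick way to recreate it, relative to the Magento root (a minimal sketch; adjust the permissions to your host's policy):

    <?php
    // Recreate the directory the extension expects.
    if (!is_dir('var/tmp')) {
        mkdir('var/tmp', 0777, true);
    }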
The most common cause of this problem is wrong permissions on the media directory: it should be writable by the web server. More information can be found here.
Look at your php.ini and find the upload_tmp_dir option (or use echo ini_get('upload_tmp_dir'); in your code). It seems PHP can't list files in the directory where Apache puts uploaded files. I'm afraid you can't change the permissions of that folder on shared hosting.
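A small check script (a sketch; run it on the affected host) shows where uploads are staged and whether PHP can actually use that directory:

    <?php
    // Where does PHP stage uploaded files, and is that directory usable?
    $tmp = ini_get('upload_tmp_dir');
    if ($tmp === '' || $tmp === false) {
        $tmp = sys_get_temp_dir();   // PHP falls back to the system default
    }
    echo $tmp, PHP_EOL;
    var_dump(is_dir($tmp), is_writable($tmp));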
This error can also be reported if you have run out of disk space.