Cannot send session cache limiter - headers already sent - OpenCart issue

I am getting the following error on our customer's OpenCart website after a recent transfer from one server to another. Using OpenCart v1.5.6.4.
Warning: session_start(): Cannot send session cache limiter - headers already sent (output started at /home/sites/10b/3/33a32a4f67/public_html/vqmod/vqcache/vq2-system_library_language.php:88) in /home/sites/10b/3/33a32a4f67/public_html/system/library/session.php on line 21
Warning: Cannot modify header information - headers already sent by (output started at /home/sites/10b/3/33a32a4f67/public_html/vqmod/vqcache/vq2-system_library_language.php:88) in /home/sites/10b/3/33a32a4f67/public_html/index.php on line 357
I believe this issue is preventing the site from functioning properly, as we are currently having issues adding and removing products from the cart.
Any ideas how to fix?
Thanks

No idea what I did, but I have fixed the issue. I simply deleted the vqmod folder and then re-uploaded all the other folders from the original server.

Related

H18 Error: Django app Media Upload failing on Heroku

Our Django app is failing media uploads. This has been an off-and-on issue for us for a while; however, for about a week now, it's been consistently failing to upload media. Our media files are stored on S3.
On inspection, the uploaded files were found in the S3 buckets... However, the logs display the message below while the app throws an application error...
Found this answer on GitHub (https://github.com/benoitc/gunicorn/issues/840)
Hi, we hit this issue in production using Flask + Gunicorn + Heroku and couldn't find a cause or a workaround.
For one particular POST request with POST parameters, the request would fail with an H18 error (sock=backend) in Heroku's router indicating that the server closed the socket when it shouldn't have.
We started decreasing the response size of that failing endpoint until we narrowed it down to around the 13k mark. If we sent less than 13k, the response would always work. If we sent more than 13k, the response would almost always not work.
Code to reproduce this is available at https://github.com/erjiang/gunicorn-issue - just deploy the repo to Heroku as is and follow the instructions in the README.
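A minimal sketch of the kind of endpoint that reproduces this, assuming Flask behind Gunicorn on Heroku (this is not the linked repo's exact code; the route and the "size" parameter are made up for illustration):

from flask import Flask, request

app = Flask(__name__)

@app.route("/repro", methods=["POST"])
def repro():
    # Response body size in bytes, passed as a POST parameter.
    size = int(request.form.get("size", 14000))
    # Per the description above: bodies under ~13k always worked,
    # bodies over ~13k almost always failed with H18 at Heroku's router.
    return "x" * size

if __name__ == "__main__":
    app.run()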

Delete Postman cache

I use the Postman extension to check out my RESTful APIs.
I am trying to make a request to my "localhost", but it seems to have cached one of the query parameters.
I tried clearing the cache of my Chrome browser, but this does not seem to work. I even went to the extent of changing the API resource name.
Has anyone come across such an issue?
The Cache-Control request header can be used, but one thing to clarify:
no-cache does not mean "do not cache". In fact, it means "revalidate with the server" on every HTTP request before using any cached response. If the server says the resource is still valid, the cache will still use the cached version.
no-store, on the other hand, is effectively asking not to cache at all, and is intended to prevent anything from being stored in the cache.
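As a hedged illustration (outside Postman), this is how those two request headers look when sent from code; the URL is a placeholder for your localhost API:

import requests

# no-cache: any cache must revalidate with the origin server
# before reusing a stored response.
r1 = requests.get("http://localhost:8080/api/items",
                  headers={"Cache-Control": "no-cache"})

# no-store: ask caches not to store the request or response at all.
r2 = requests.get("http://localhost:8080/api/items",
                  headers={"Cache-Control": "no-store"})

print(r1.status_code, r2.status_code)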
I tried the solution above and it didn't work for me. What worked was restarting the application. I'm using Eclipse and running a Spring Boot application.
In case someone is using the same environment and facing the same problem, it may help.
I suggest using the Postman app rather than the extension, because with the app you can do a lot more cool things: you can use the console to debug your APIs, and create/delete cookies and cache through an excellent GUI.
I came across the same situation where requests were cached in Postman. Rather than closing the Postman app, I deleted the JSESSIONID cookie from the Cookies section; that solved my problem (the call reached my localhost app) and I got an accurate response. Please try it if you need this solution.
I usually just request the data in a Chrome incognito tab or Firefox private tab; I guess this just resets the cache, and then it appears in my Postman app.
(I would recommend using the Postman app instead of the website as it has many more features!)

IE7 & IE8, JSESSIONID cookie breaks file download

Is there a way to prevent WebSphere from sending cookies in a response on a per-request/URL basis?
Our users get a link which allows them to download a file. This works fine in all major browsers except IE7 & IE8, where the file download breaks when cookies are sent with the response.
When a new session is created, WebSphere sends a JSESSIONID cookie and sets Cache-Control to no-cache=set-cookie. This causes the download process to break in IE8 and lower.
Things I tried:
1) I know that no-cache=set-cookie can be turned off in the WebSphere admin console, but that's not an option.
2) WebSphere is fronted by a web server, so the response headers could be changed there, but that's not really an option either.
3) I created a servlet filter, but it seems like whatever WebSphere does happens after the filter runs.
4) I created a JSP page that would prompt the file download on load. The idea was that the cookie would be exchanged on page load, so it wouldn't interfere with the download. Unfortunately, because the download is triggered through JavaScript, IE blocks it, and the user needs to manually approve it.
Is there any way to make it work?
IE8 has a bug that may be connected with your problem; see the bug description on Stack Overflow.
I solved a similar problem using a good article.

Server blacklisted from Facebook?

Yesterday we installed a new plugin on our site to allow our posts to be posted to our Facebook page automatically. Well, early Monday morning some testing code was left in that made the call a couple of thousand times in about an hour. Over 24 hours later, Facebook still returns:
{"error":{"message":"(#1) An error occured while creating the share","type":"OAuthException","code":1}}
The post is going to {page_id}/feed and using the extended access_token, which doesn't expire for pages.
I know this is related to the IP of the server, as I can perform a post via curl on our other servers without any problems (just copying and pasting over the curl arguments). So I was wondering if anyone has experienced this before, and whether there is a process to get your server removed from the blacklist, or is it just a wait-it-out type of thing?
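For reference, the call looks roughly like this (a sketch only; the page ID, message, and token values are placeholders):

import requests

page_id = "PAGE_ID"                    # placeholder
access_token = "EXTENDED_PAGE_TOKEN"   # placeholder, long-lived page token

resp = requests.post(
    "https://graph.facebook.com/%s/feed" % page_id,
    data={"message": "New post from our plugin", "access_token": access_token},
)
# On the affected server this returns the (#1) OAuthException above;
# the same call from our other servers succeeds.
print(resp.json())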
TIA!

Receiving error: The file you are attempting to save or retrieve has been blocked from this Web site by the server administrators.<nativehr>0x800401e6

After building and deploying, I checked Solution Management in Central Administration and the solution is up. It's a simple web service method that only creates a Document Library list with a few columns. When trying to retrieve the WSDL, or even just calling the web service from its address (since it's a void method), I receive this error:
The file you are attempting to save or retrieve has been blocked from this Web site by the server administrators.<nativehr>0x800401e6</nativehr><nativestack></nativestack>
The very same method runs fine when called from another web service project that is already deployed, so there's nothing wrong with the code. I'm most probably doing something wrong but can't figure out what.
The system is running on Windows Server 2008 with SharePoint 2010, .NET Framework 3.5, and "Any CPU" build mode.
Thank you!
[edit]
Managed to get rid of the previous error by removing the asmx extension from the blocked file types list in Central Administration. Now, instead, I'm receiving a 404 error:
The resource cannot be found.
Description: HTTP 404. The resource you are looking for (or one of its dependencies) could have been removed, had its name changed, or is temporarily unavailable. Please review the following URL and make sure that it is spelled correctly.
Requested URL: /_layouts1/my2claims/tt_claims.asmx
The web service must run under the same application pool as SharePoint.