I’ve set up Cookiebot as a cookie consent manager in order to comply with a set of legal obligations.
But it seems to make instantiation of the New Relic script impossible. The cookie manager is only active on my homepage if you reach the following URL: https://www.vetorino.com/?cookie=1
As you can see, there is an error in the console, as shown in the attached screenshot. Any solution?
Related
I've just noticed my console is littered with this warning, appearing for every single linked resource. This includes all referenced CSS files, JavaScript files, SVG images, and even URLs from Ajax calls (which respond with JSON), but not regular images.
The warning, for example in the case of a style.css file, says:
Cookie “PHPSESSID” will be soon treated as cross-site cookie against “http://localhost/style.css” because the scheme does not match.
But the scheme doesn't match what? The document? Because it does match the document.
The URL of my site is http://localhost/.
The site and its resources are all on http (no https on localhost)
The domain name is definitely not different because everything is referenced relative to the domain name (meaning the filepaths start with a slash href="/style.css")
The Network inspector just reports a green 200 OK response, showing everything as normal.
It's only Mozilla Firefox that complains about this; Chromium doesn't seem concerned at all. I don't have any browser add-ons. The warnings seem to originate from the browser itself, and each warning links to the corresponding file source in the Debugger.
Why is this appearing?
Exactly the same thing was happening to me. The issue was that Firefox keeps showing cookies of different websites hosted at the same URL ("localhost:port number") that are stored in the browser's memory.
In my case, I have two projects configured to run at http://localhost:62601. When I run the first project, it saves its cookie in the browser. When I then run the second project at the same URL, that cookie is also visible in the second project's console.
What you can do is delete all of the cookies from the browser.
Paramjot Singh's answer is correct and got me most of the way to where I needed to be. I also wasted a lot of time staring at those warnings.
But to clarify a little, you don't have to delete ALL of your cookies to resolve this. In Firefox, you can delete individual site cookies, which will keep your settings on other sites.
To do so, click the hamburger menu in the top right, then Options -> Privacy & Security (or Settings -> Privacy & Security).
From here, scroll down about halfway and find Cookies and Site Data. Don't click Clear Data. Instead, click Manage Data. Then search for the site you are having the notices on, highlight it, and click Remove Selected.
Simple, I know, but I made the mistake of clearing everything the first time; maybe this will prevent someone from doing the same.
The warning is given because, according to MDN web docs:
Standards related to the Cookie SameSite attribute recently changed such that:
The cookie-sending behaviour if SameSite is not specified is SameSite=Lax. Previously the default was that cookies were sent for all requests.
Cookies with SameSite=None must now also specify the Secure attribute (they require a secure context/HTTPS).
This indicates that a secure context/HTTPS is required in order to allow cross-site cookies, by setting SameSite=None; Secure on the cookie.
According to Mozilla, you should explicitly communicate the intended SameSite policy for your cookie (rather than relying on browsers to apply SameSite=Lax automatically), otherwise you might get a warning like this:
Cookie “myCookie” has “SameSite” policy set to “Lax” because it is missing a “SameSite” attribute, and “SameSite=Lax” is the default value for this attribute.
The suggestion to simply delete localhost cookies does not actually solve the problem. The solution is to properly set the SameSite attribute of cookies being set by the server, and to use HTTPS if needed.
Firefox is not the only browser making these changes. Apparently the version of Chrome I am using (84.0.4147.125) has already implemented the changes, as I got a similar message in its console.
The previously mentioned MDN article and this article by Mike Conca have great information about changes to SameSite cookie behavior.
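If it helps, here is a minimal sketch of what "properly set the SameSite attribute" can look like on the server side in PHP (this assumes PHP 7.3 or newer for the options-array form of setcookie; the cookie names and values are placeholders, not anything from the question):

    <?php
    // First-party cookie: an explicit SameSite=Lax avoids relying on the browser default.
    setcookie('myCookie', 'value', [
        'expires'  => time() + 3600,
        'path'     => '/',
        'secure'   => false,        // acceptable for plain http://localhost
        'httponly' => true,
        'samesite' => 'Lax',
    ]);

    // Cross-site cookie: SameSite=None must be combined with Secure (HTTPS only).
    setcookie('crossSiteCookie', 'value', [
        'expires'  => time() + 3600,
        'path'     => '/',
        'secure'   => true,         // required when SameSite=None
        'httponly' => true,
        'samesite' => 'None',
    ]);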
I guess you are using WAMP or LAMP etc. The first thing you need to do is enable SSL on WAMP, as you will find many references saying you need to adjust the cookie settings to SameSite=None; Secure, and that entails your local connection being secure. There are instructions at https://articlebin.michaelmilette.com/how-to-add-ssl-https-to-wampserver/ as well as some YouTube videos.
The important thing to note is that when creating the SSL certificate you should use SHA-256, as SHA-1 is now deprecated and will throw another warning.
There is a good explanation of SameSite cookies on https://web.dev/samesite-cookies-explained/
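Since the cookie in the question is PHPSESSID, here is a minimal sketch (again assuming PHP 7.3 or newer) of giving the session cookie an explicit SameSite attribute once local HTTPS is in place; treat the exact values as placeholders:

    <?php
    // Configure the PHPSESSID cookie before starting the session.
    // SameSite=None only works together with Secure, i.e. over HTTPS.
    session_set_cookie_params([
        'lifetime' => 0,
        'path'     => '/',
        'secure'   => true,      // requires the local HTTPS setup described above
        'httponly' => true,
        'samesite' => 'None',    // use 'Lax' if the cookie only needs first-party requests
    ]);
    session_start();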
I was struggling with the same issue and solved it by making sure the Apache 2.4 headers module was enabled and then adding one line of configuration:
Header always edit Set-Cookie ^(.*)$ "$1;HttpOnly;Secure"
I wasted lots of time staring at the same sets of warnings in the Inspector until it dawned on me that the cookies were persisting and needed purging.
Apparently Chrome was going to introduce the new rules by now, but Covid-19 meant a lot of websites might have been broken while people worked from home. The major browsers are working together on the SameSite attribute, so it will be in force soon.
My Google App Engine based website suddenly started failing with the following error:
file_exists(): open_basedir restriction in effect. File(/base/data/home/.config/gcloud/application_default_credentials.json) is not within the allowed path(s):
Any help/pointers will be appreciated as my website is currently down because of it.
I'm posting another possible solution here, as I had exactly the same error message even though the reason was different. This is why this question came up while I searched for it.
I am using the Google Translation API:
https://googleapis.github.io/google-cloud-php/#/docs/google-cloud/v0.153.0/translate/v2/translateclient
In my case I didn't pass the key parameter when calling $translate = new TranslateClient();
Providing the Google API key with $translate = new TranslateClient([ "key" => "my-key" ]); fixed it.
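For completeness, a minimal sketch of that fix (assuming the google/cloud-translate package; the key and text are placeholders):

    <?php
    require 'vendor/autoload.php';

    // Older releases expose the class as Google\Cloud\Translate\TranslateClient instead.
    use Google\Cloud\Translate\V2\TranslateClient;

    // Passing an API key explicitly means the client does not try to load
    // application_default_credentials.json, which open_basedir blocks here.
    $translate = new TranslateClient([
        'key' => 'my-key',   // placeholder for your real API key
    ]);

    $result = $translate->translate('Hello world', ['target' => 'fr']);
    echo $result['text'];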
I'm trying to post a feed on my wall, or on the wall of some of my friends, using the Graph API. I gave all the permissions that this application needs and allowed them when making the request from my page. I have a valid access token, but this exception still occurs and no feed is posted. My POST request looks fine and the permissions are granted. What do I need to do to show the Facebook app that I'm not an abusive person? The last thing I did was dig into my application's Auth Dialog to set all the permissions I need there, and to explain why I need those permissions.
I would be very grateful if you could tell me what is going on and point me in the right direction to fix this problem.
Had the same problem. I figured out that Facebook was refusing my shortlinks, which makes me a bit mad... but I get the point, because it's possible that shortlinks can be used to promote malicious content. So if you have shortlinks as part of your test, replace them with the full URL.
I believe this message is encountered for one of two reasons:
Your post contains malicious links
You are trying to make a POST request over a non-https connection.
The second one is not confirmed, but I have seen that behavior. While the same code worked fine in my Heroku-hosted app, it gave this #368 error on my 000webhost-hosted .tk domain, which wasn't secured by SSL.
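To illustrate both points, here is a minimal sketch of publishing a post with a full HTTPS link instead of a shortlink, sent to the Graph API over HTTPS (the access token and link are placeholders, and this assumes the plain POST /me/feed call from the question's era):

    <?php
    // Publish a feed post with a full, non-shortened HTTPS link.
    $accessToken = 'YOUR_VALID_ACCESS_TOKEN';   // placeholder

    $ch = curl_init('https://graph.facebook.com/me/feed');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POSTFIELDS     => http_build_query([
            'message'      => 'Testing a post with a full URL',
            'link'         => 'https://www.example.com/article/123', // full URL, not a shortlink
            'access_token' => $accessToken,
        ]),
    ]);

    $response = curl_exec($ch);
    curl_close($ch);
    echo $response; // the new post id on success, or an error such as (#368)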
Just in case anyone is still struggling with this: the problem occurs when you include URLs or "action links" that are not in your own app domain. If you really need to post to an external page, you'll have to post to your app first, then redirect from there using a script or something. Hope that helps.
Also, in my opinion it's better to use HTTPS links, as I've sometimes seen behaviour where HTTP links would be rejected, but that's intermittent.
I started noticing that recently as well when running my unit tests. One of the tests I run is submitting a link that I know Facebook has blocked to verify that I handle the error correctly. I used to get this error:
Warning: This Message Contains Blocked Content: Some content in this message has been reported as abusive by Facebook...
But starting on July 4th, I started receiving this error instead:
(#368) The action attempted has been deemed abusive or is otherwise disallowed
Both errors indicate that Facebook doesn't like what you're publishing.
I have been banging my head against a wall with this for a few hours now.
I have checked all of our Facebook applications in IE and I get the following error when the permissions dialogue box has been accepted:
SCRIPT70: Permission denied
all.js, line 22 character 4321
I have looked at past posts, but they seem to be from a while back and Facebook has said that the issue is closed. It seems to have resurfaced.
I am using the correct https code and it all works fine in Chrome/Safari/Firefox.
Has anyone got any ideas on this?
Many thanks
The channelUrl solution works fine except in one specific case. If you have a script on your page that shortens the document.domain, then the file you create for the channelUrl must also shorten the document.domain to match.
For instance, if my host page is "foo.bar.com" and I have JavaScript shorten the document.domain to "bar.com" (which is legal, not advised, but legal), then the file I specify in channelUrl must do the same.
I know that Facebook states the file for channelUrl must contain just one line, and that must be the script tag they specify, but that is really not the case. As long as the script tag is in the head of the page you create, all is well. Also, the document.domain shortening must happen before the Facebook code is called on both the host page and the channelUrl page.
I hope this helps others out, it sure was a pain to figure out on our site. Oh, and we have to shorten our document.domain because of our ads server, so it's something we have no control over.
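To make that concrete, here is a hypothetical channel file (served here as channel.php, reusing the foo.bar.com / bar.com names from the example above); the document.domain line has to run before the Facebook script tag is loaded:

    <?php
    // Hypothetical channel.php with an optional long cache lifetime.
    $cache_expire = 60 * 60 * 24 * 365;
    header('Pragma: public');
    header('Cache-Control: max-age=' . $cache_expire);
    header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $cache_expire) . ' GMT');
    ?>
    <html>
    <head>
        <script>
            // Must match the shortening done on the host page (foo.bar.com -> bar.com).
            document.domain = 'bar.com';
        </script>
        <script src="//connect.facebook.net/en_US/all.js"></script>
    </head>
    </html>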
One reported cause of this is the channelUrl you defined in the SDK init method not matching the page itself (protocol and domain),
i.e. if you use an HTTP channel URL and the user is on HTTPS, or vice versa, it may not work.
Try changing that and see if it helps. If you don't have a channelURL defined, you should add one (note that it's case sensitive).
I came across the following error when my client tries to edit list data through the datasheet view from a terminal machine.
The Web application at xxx could not be found. Verify that you have typed the URL
correctly. If the URL should be serving existing content, the system administrator may
need to add a new request URL mapping to the intended application.
Note: this error occurs with only one list. All other lists are working fine. I'm using SharePoint 2007, 32-bit.
This may be related to alternate access mappings.
I had this issue, and the clue was that the datasheet was referencing a URL of the form:
http://hostname/site/...
instead of
http://hostname.domain/site/...
i.e. the datasheet was not referencing the fully qualified domain name (FQDN).
If the error message states "The Web application at http://hostname/site/..." (i.e. the error doesn't use the FQDN), an alternate access mapping may resolve it. The end of the error message seems to hint at alternate access mappings, although it is not entirely explicit.
I resolved this by adding an alternate access mapping as follows:
internal url: http://hostname
public url: http://hostname.domain (FQDN)
Default zone in my case; it should work for other zones.
hope this helps :)