Since mid-August I've seen a sharp drop in the number of people opening our e-newsletters, and I wonder if the error below is related, what it means, and whether there's a solution. I'm not getting bounced emails, and the people I've checked with have said the messages aren't going into their spam folders; the emails aren't reaching some mailboxes at all. I tried to look at the source mentioned in the trace, but I couldn't figure out anything from it.
-----------------------------ERROR MESSAGE IN LOG-------------------------------------
Sep 17 13:56:26 [info] $Fatal Error Details = Array
(
[message] => We can't load the requested web page. This page requires cookies to be enabled in your browser settings. Please check this setting and enable cookies (if they are not enabled). Then try again. If this error persists, contact the site adminstrator for assistance.<br /><br />Site Administrators: This error may indicate that users are accessing this page using a domain or URL other than the configured Base URL. EXAMPLE: Base URL is http://example.org, but some users are accessing the page via http://www.example.org or a domain alias like http://myotherexample.org.<br /><br />Error type: Could not find a valid session key.
[code] =>
)
Sep 17 13:56:26 [info] $backTrace = #0 /home/afaeus/public_html/wp-content/plugins/civicrm/civicrm/CRM/Core/Error.php(315): CRM_Core_Error::backtrace("backTrace", TRUE)
#1 /home/afaeus/public_html/wp-content/plugins/civicrm/civicrm/CRM/Core/Controller.php(278): CRM_Core_Error::fatal("We can't load the requested web page. This page requires cookies to be enable...")
#2 /home/afaeus/public_html/wp-content/plugins/civicrm/civicrm/CRM/Core/Controller.php(186): CRM_Core_Controller->key("CRM_Mailing_Controller_Send", TRUE, FALSE)
#3 /home/afaeus/public_html/wp-content/plugins/civicrm/civicrm/CRM/Mailing/Controller/Send.php(41): CRM_Core_Controller->__construct("New Mailing", "null", NULL, FALSE, TRUE)
#4 /home/afaeus/public_html/wp-content/plugins/civicrm/civicrm/CRM/Core/Invoke.php(287): CRM_Mailing_Controller_Send->__construct("New Mailing", TRUE, "null", NULL, "false")
#5 /home/afaeus/public_html/wp-content/plugins/civicrm/civicrm/CRM/Core/Invoke.php(70): CRM_Core_Invoke::runItem((Array:14))
#6 /home/afaeus/public_html/wp-content/plugins/civicrm/civicrm/CRM/Core/Invoke.php(52): CRM_Core_Invoke::_invoke((Array:3))
#7 /home/afaeus/public_html/wp-content/plugins/civicrm/civicrm.php(344): CRM_Core_Invoke::invoke((Array:3))
#8 [internal function](): civicrm_wp_invoke("")
#9 /home/afaeus/public_html/wp-includes/plugin.php(505): call_user_func_array("civicrm_wp_invoke", (Array:1))
#10 /home/afaeus/public_html/wp-admin/admin.php(212): do_action("toplevel_page_CiviCRM")
#11 {main}
It makes a big difference whether nobody is getting emails, or fewer people are getting emails.
If it's the latter, and they're not getting them in spam or anything else, you might try looking in your mail log. On a Debian/Ubuntu machine with Postfix, that's usually /var/log/mail.log. On other VPS/dedicated setups, it should be someplace similar. You might find that some servers are rejecting the messages.
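If you want to script that check, here is a minimal sketch in Python; it assumes a Postfix-style log at /var/log/mail.log and just counts log lines mentioning rejections, bounces, or deferrals (adjust the path and keywords for your own setup):

import collections

LOG_PATH = "/var/log/mail.log"                    # adjust for your MTA/distro
KEYWORDS = ("reject", "bounced", "deferred")      # rough indicators, not exhaustive

counts = collections.Counter()
samples = {}

with open(LOG_PATH, errors="replace") as log:
    for line in log:
        lowered = line.lower()
        for word in KEYWORDS:
            if word in lowered:
                counts[word] += 1
                samples.setdefault(word, line.strip())   # keep the first example of each

for word, n in counts.most_common():
    print(f"{word}: {n} lines, e.g. {samples[word]}")

If particular receiving domains (Gmail, Yahoo, etc.) dominate the rejected lines, that tells you where to focus.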
You should also use a blacklist search to see if your server is listed on a blacklist somewhere.
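If you'd like to automate that too, the usual trick is a DNSBL lookup: reverse the octets of your sending IP and query the result against a blocklist zone; getting an answer back means the IP is listed. A rough sketch (the IP and the zones are placeholders to adjust):

import socket

SENDING_IP = "203.0.113.45"                            # placeholder: your mail server's public IP
DNSBL_ZONES = ["zen.spamhaus.org", "bl.spamcop.net"]   # a couple of commonly used blocklists

reversed_ip = ".".join(reversed(SENDING_IP.split(".")))

for zone in DNSBL_ZONES:
    query = f"{reversed_ip}.{zone}"
    try:
        answer = socket.gethostbyname(query)           # a 127.0.0.x answer means "listed"
        print(f"LISTED on {zone}: {answer}")
    except socket.gaierror:
        print(f"not listed on {zone}")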
Finally, you should know that if your "from" address is a Yahoo or AOL address (or possibly another third-party service), you're likely to get rejected by many providers because of those providers' strict DMARC policies. They'll effectively say, "We know Yahoo's servers, and this is coming from somewhere else, so it must be a scam."
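If you want to see that policy for yourself, you can look up the From domain's DMARC record. A small sketch using the dnspython package (an extra install, not part of the standard library):

import dns.resolver                     # pip install dnspython

FROM_DOMAIN = "yahoo.com"               # the domain of your newsletter's From address

try:
    answers = dns.resolver.resolve(f"_dmarc.{FROM_DOMAIN}", "TXT")
    for rdata in answers:
        record = rdata.to_text().strip('"')
        if record.startswith("v=DMARC1"):
            print(record)               # p=reject or p=quarantine means spoofed From domains get blocked
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print("no DMARC record published")

If the policy is p=reject, use a From address on your own domain instead and keep the third-party address as a Reply-To.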
Now, on the other hand, if you have no email going out, the CiviCRM error is likely related. I don't know what could be causing that one, however.
The error suggests a few things, none of which would be related to deliverability of the emails.
People may be clicking a link in an email, or perhaps a bookmark, that includes a session key which has since expired. Search results and multi-stage actions typically include a key-value pair in the URL, like qfKey=0fe0c51c4024538bb34d5c84305ffb8a_8786, which is a giveaway that the URL cannot be shared and will not work after you sign out of the site.
As the error description indicates, you may have more than one domain configured for the site, and the session is not being carried from one to the other. Check that your CiviCRM base URL is correct, both in civicrm.settings.php and through the browser at the following URLs (there is also a small diagnostic sketch after this list):
See CiviCRM Menu: Administer >> System Settings >> Cleanup Caches and Update Paths
Drupal sites: http://<yoursite>/index.php?q=civicrm/admin/setting/updateConfigBackend&reset=1
Joomla 1.5 sites: http://<yoursite>/administrator/index2.php?option=com_civicrm&task=civicrm/admin/setting/updateConfigBackend&reset=1
Joomla 1.6 sites: http://<yoursite>/administrator/index.php?option=com_civicrm&task=civicrm/admin/setting/updateConfigBackend&reset=1
WordPress sites: http://<yoursite>/wp-admin/admin.php?page=CiviCRM&q=civicrm/admin/setting/updateConfigBackend&reset=1
NB: Prior to 4.3.3, the WordPress implementation mistakenly drops everything after the domain in its suggestion for a new URL. Given the default location of a WordPress install relative to the docroot, the URL should normally be http://<yoursite>/wp-content/plugins/civicrm/civicrm/
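As a quick way to see whether two hostnames for the site really behave differently, you could fetch the page under each name and compare the redirects and Set-Cookie headers that come back. This is only a diagnostic sketch, with placeholder domains:

import urllib.request

# Placeholders: the configured Base URL and the alias some users may be hitting.
CANDIDATES = ["http://example.org/", "http://www.example.org/"]

for url in CANDIDATES:
    req = urllib.request.Request(url, headers={"User-Agent": "baseurl-check"})
    with urllib.request.urlopen(req) as resp:
        print(url)
        print("  final URL :", resp.geturl())                   # was a redirect applied?
        for cookie in resp.headers.get_all("Set-Cookie") or []:
            print("  Set-Cookie:", cookie)                      # compare any Domain/Path attributes

If the two names hand out differently scoped session cookies (or one never redirects to the other), that matches the "Could not find a valid session key" symptom.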
I've just noticed my console is littered with this warning, appearing for every single linked resource. This includes all referenced CSS files, JavaScript files, SVG images, and even URLs from AJAX calls (which respond with JSON). But not images.
The warning, for example in case of a style.css file, will say:
Cookie “PHPSESSID” will be soon treated as cross-site cookie against “http://localhost/style.css” because the scheme does not match.
But the scheme doesn't match what? The document's? Because that does match.
The URL of my site is http://localhost/.
The site and its resources are all on http (no https on localhost)
The domain name is definitely not different because everything is referenced relative to the domain name (meaning the filepaths start with a slash href="/style.css")
The Network inspector just reports a green 200 OK response, showing everything as normal.
It's only Mozilla Firefox that is complaining about this. Chromium seems to not be concerned by anything. I don't have any browser add-ons. The warnings seem to originate from the browser, and each warning links to view the corresponding file source in Debugger.
Why is this appearing?
That was exactly what was happening to me. The issue was that Firefox shows cookies from different websites hosted on the same URL ("localhost:<port number>") that are stored in the browser.
In my case, I have two projects configured to run at http://localhost:62601. When I run the first project, it saves its cookie in the browser; when I run the second project at the same URL, that cookie is also visible in the second project's console.
What you can do is delete all of the cookies for that site from the browser.
Paramjot Singh's answer is correct and got me most of the way to where I needed to be. I also wasted a lot of time staring at those warnings.
But to clarify a little, you don't have to delete ALL of your cookies to resolve this. In Firefox, you can delete individual site cookies, which will keep your settings on other sites.
To do so, click the hamburger menu in the top right, then Options -> Privacy & Security (or Settings -> Privacy & Security).
From here, scroll down about half-way and find Cookies and Site Data. Don't click Clear Data. Instead, click Manage Data. Then search for the site you are having the notices on, highlight it, and click Remove Selected.
Simple, I know, but I made the mistake of clearing everything the first time; maybe this will prevent someone from doing the same.
The warning is given because, according to MDN web docs:
Standards related to the Cookie SameSite attribute recently changed such that:
The cookie-sending behaviour if SameSite is not specified is SameSite=Lax. Previously the default was that cookies were sent for all requests.
Cookies with SameSite=None must now also specify the Secure attribute (they require a secure context/HTTPS).
Which indicates that a secure context/HTTPS is required in order to allow cross site cookies by setting SameSite=None Secure for the cookie.
According to Mozilla, you should explicitly communicate the intended SameSite policy for your cookie (rather than relying on browsers to apply SameSite=Lax automatically), otherwise you might get a warning like this:
Cookie “myCookie” has “SameSite” policy set to “Lax” because it is missing a “SameSite” attribute, and “SameSite=Lax” is the default value for this attribute.
The suggestion to simply delete the localhost cookies does not actually solve the problem. The solution is to properly set the SameSite attribute on the cookies being set by the server, and to use HTTPS if needed.
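For example, if the server side happened to be Django (just to illustrate the idea; PHP and other frameworks have equivalent session-cookie settings), you would declare the policy explicitly instead of relying on the browser's new default:

# settings.py -- declare the cookie policy explicitly rather than relying on the
# browser's SameSite=Lax default. SameSite="None" is only honoured together with
# Secure=True, i.e. over HTTPS.
SESSION_COOKIE_SAMESITE = "Lax"
SESSION_COOKIE_SECURE = True      # requires HTTPS; plain-HTTP local dev would need False
CSRF_COOKIE_SAMESITE = "Lax"
CSRF_COOKIE_SECURE = True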
Firefox is not the only browser making these changes. Apparently the version of Chrome I am using (84.0.4147.125) has already implemented the changes, as I got a similar message in its console.
The previously mentioned MDN article and this article by Mike Conca have great information about changes to SameSite cookie behavior.
I guess you are using WAMP or LAMP, etc. The first thing you need to do is enable SSL on WAMP, as you will find many references saying you need to set the cookie to SameSite=None; Secure, which requires your local connection to be secure. There are instructions at https://articlebin.michaelmilette.com/how-to-add-ssl-https-to-wampserver/ as well as some YouTube videos.
The important thing to note is that when creating the SSL certificate you should use a SHA-256 signature, as SHA-1 is now deprecated and will trigger another warning.
There is a good explanation of SameSite cookies on https://web.dev/samesite-cookies-explained/
I was struggling with the same issue and solved it by making sure the Apache 2.4 headers module was enabled and then adding one line of configuration:
Header always edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
I wasted lots of time staring at the same sets of warnings in the Inspector until it dawned on me that the cookies were persisting and needed purging.
Apparently Chrome was going to enforce the new rules by now, but COVID-19 meant a lot of websites might have broken while people worked from home. The major browsers are working together on the SameSite change, so it will be in force soon.
I'm trying to configure CubesViewer and try out the setup.
I've got the app installed and running, along with the Cubes Slicer app too.
However, when I visit the home page
http://127.0.0.1:8000/cubesviewer/
it fails popping up an error "Error occurred while accessing the data server"
Debugging with the browser console, shows a http status 403 error with the url http://localhost:8000/cubesviewer/view/list/
After some googling and reading, I figured I'd need to add REST framework auth settings (as mentioned here).
Now, after running migrate and runserver, I get a 401 error on that URL.
Clearly I'm missing something in settings.py. Can somebody help me out?
I'm using the cubesviewer tag v0.10 from the github repo.
You can find my settings here: http://dpaste.com/2G5VB5K
P.S.: I've verified that Cubes Slicer works separately on its own.
I have reproduced this. This error may occur when you use different URLs to access a website and its related resources. For security reasons, browsers only allow access to resources from exactly the same host as the page you are viewing.
It seems you are accessing the app via http://127.0.0.1:8000, but you have configured CubesViewer to tell clients to access the data backend via http://localhost:8000. While they resolve to the same IP address, they are different strings.
Try accessing the app as http://localhost:8000.
If you deploy to a different server, you need to adjust settings. Here are the relevant configuration options, now with more comments:
# Base Cubes Server URL.
# Your Cubes Server needs to be running and listening on this URL, and it needs
# to be accessible to clients of the application.
CUBESVIEWER_CUBES_URL="http://localhost:5000"
# CubesViewer Store backend URL. It should point to this application.
# Note that this must match the URL that you use to access the application,
# otherwise you may hit security issues. If you access your server
# via http://localhost:8000, use the same here. Note that 127.0.0.1 and
# 'localhost' are different strings for this purpose. (If you wish to accept
# requests from different URLs, you may need to add CORS support).
CUBESVIEWER_BACKEND_URL="http://localhost:8000/cubesviewer"
Alternatively, you could change CUBESVIEWER_BACKEND_URL to "http://127.0.0.1:8000/cubesviewer", but I recommend using hostnames rather than IP addresses for this.
Finally, I haven't yet tested with CORS support, but check this pull request if you wish to try that approach.
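For what it's worth, if you do need to accept requests from more than one origin (say both localhost and 127.0.0.1), the usual route in a Django project is the django-cors-headers package. A rough sketch of the settings, which I have not tested with CubesViewer itself:

# settings.py additions -- assumes django-cors-headers is installed
# (pip install django-cors-headers); untested with CubesViewer.
INSTALLED_APPS += ["corsheaders"]

# The CORS middleware should sit as high as possible in the middleware chain.
MIDDLEWARE = ["corsheaders.middleware.CorsMiddleware"] + MIDDLEWARE

# Origins allowed to call the backend (older versions of the package name this
# setting CORS_ORIGIN_WHITELIST).
CORS_ALLOWED_ORIGINS = [
    "http://localhost:8000",
    "http://127.0.0.1:8000",
]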
I'm writing an application that listens to HTTP traffic and tries to recognize which requests were initiated by a human.
For example:
The user types cnn.com in their address bar, which starts a request. Then I want to find
CNN's server response while discarding any other requests (such as XHR, etc.).
How can you tell from the header information which is which?
After doing some research I've found that the relevant responses come with:
Content-Type: text/html
HTML that comes with a meaningful title
Status 200 OK
There is no way to tell from the bits on the wire. The HTTP protocol has a defined format, which all (non-broken) user agents adhere to.
You are probably thinking that the translation of a user's typing of just 'cnn.com' into 'http://www.cnn.com/' on the wire can be detected from the protocol payload. The answer is no, it can't.
To detect the user agent allowing the user such shorthand, you would have to snoop the user agent application (e.g. a browser) itself.
Actually, detecting non-human agency is the interesting problem (with spam detection as one obvious motivation). This is because HTTP belongs to the family of NVT protocols, where the basic idea, believe it or not, is that a human should be able to run the protocol "by hand" in a network terminal/console program (such as a telnet client.) In other words, the protocol is basically designed as if a human were using it.
I don't think header information is sufficient to distinguish real users from bots, since bots are made to mimic real users and headers are very easy to imitate.
One thing you can do is track the path (sequence of clicks) followed by a user, which is likely to differ from one made by a bot, and do some analysis on the posted information (e.g. Bayesian filters).
A very easy-to-implement check is based on the source IP. There are databases of blacklisted IP addresses, see Project Honeypot, and if you are writing your software in Java, here is an example of how to check an IP address: How to query HTTP:BL for spamming IP addresses.
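For reference, the same HTTP:BL lookup is easy to do without Java: it is just a DNS query against dnsbl.httpbl.org, with your (free) Project Honeypot access key prepended to the reversed visitor IP. A rough Python sketch, with the key and the IP as placeholders:

import socket

ACCESS_KEY = "abcdefghijkl"        # placeholder: your Project Honeypot HTTP:BL access key
VISITOR_IP = "203.0.113.7"         # placeholder: the client IP you want to check

reversed_ip = ".".join(reversed(VISITOR_IP.split(".")))
query = f"{ACCESS_KEY}.{reversed_ip}.dnsbl.httpbl.org"

try:
    answer = socket.gethostbyname(query)
    # The answer encodes 127.<days since last activity>.<threat score>.<visitor type>
    _, days, threat, visitor_type = answer.split(".")
    print(f"listed: last seen {days} days ago, threat score {threat}, visitor type {visitor_type}")
except socket.gaierror:
    print("not listed (or the access key is invalid)")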
What I do on my blog is this (using WordPress plugins):
check if an IP address is in the HTTP:BL; if it is, the user is shown an HTML page where they can take action to whitelist their IP address. This is done in WordPress by the Bad Behavior plugin.
when the user submits some content, a Bayesian filter checks the content of the submission, and if the comment is identified as spam, a CAPTCHA is displayed before completing the submission. This is done with Akismet and Conditional CAPTCHA, and the comment is also enqueued for manual approval.
After being approved once, the same user is considered safe, and can post without restrictions/checks.
Applying the above rules, I have no more spam on my blog, and I think a similar logic can be used for any website.
The advantage of this approach is that most users don't even notice any security mechanism, since no CAPTCHA is displayed and nothing unusual happens 99% of the time. But there are still quite restrictive, and effective, checks going on under the hood.
I can't offer any code to help, but I'd say look at the Referer HTTP header. The initial GET request shouldn't have a Referer, but when you start loading the resources on the page (such as JavaScript, CSS, and so on) the Referer will be set to the URL that requested those resources.
So when I type in "stackoverflow.com" in my browser and hit enter, the browser will send a GET request with no Referer, like this:
GET / HTTP/1.1
Host: stackoverflow.com
# ... other Headers
When the browser loads the supporting static resources on the page, though, each request will have a Referer header, like this:
GET /style.css HTTP/1.1
Host: stackoverflow.com
Referer: http://www.stackoverflow.com
# ... other Headers
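Putting the Referer and Content-Type signals together, a classifier could look roughly like the sketch below. It operates on plain dicts of headers and is only a heuristic; nothing in the HTTP spec guarantees these patterns:

def looks_user_initiated(request_headers, response_headers, status):
    """Heuristic: top-level navigations usually carry no Referer, ask for HTML,
    and receive an HTML document back; XHR, CSS, JS and image requests usually don't."""
    req = {k.lower(): v for k, v in request_headers.items()}
    resp = {k.lower(): v for k, v in response_headers.items()}

    no_referer = "referer" not in req
    wants_html = "text/html" in req.get("accept", "")
    got_html = resp.get("content-type", "").startswith("text/html")
    return status == 200 and no_referer and wants_html and got_html

# Example: the initial "stackoverflow.com" navigation from the answer above
print(looks_user_initiated(
    {"Host": "stackoverflow.com", "Accept": "text/html,application/xhtml+xml"},
    {"Content-Type": "text/html; charset=utf-8"},
    200,
))  # -> True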
For a number of sites that are functioning normally, when I run them through the OpenGraph debugger at developers.facebook.com/tools/debug, Facebook reports that the server returned a 502 or 503 response code.
These sites are clearly working fine on servers that are not under heavy load. URLs I've tried include but are not limited to:
http://ac.mediatemple.net
http://freespeechforpeople.org
These are in fact all sites hosted by MediaTemple. After talking to people at MediaTemple, though, they've insisted that it must be a bug in the API and is not an issue on their end. Anyone else getting unexpected 500/502/503 HTTP response codes from the Facebook Debug tool, with sites hosted by MediaTemple or anyone else? Is there a fix?
Note that I've reviewed the Apache logs on one of these and could find no evidence of Apache receiving the request from Facebook, or of a 502 response etc.
Got this response from them:
At this time, it would appear that (mt) Media Temple servers are returning 200 response codes to all requests from Facebook, including the debugger. This can be confirmed by searching your access logs for hits from the debugger. For additional information regarding viewing access logs, please review the following KnowledgeBase article:
Where are the access_log and error_log files for my server?
http://kb.mediatemple.net/questions/732/Where+are+the+access_log+and+error_log+files+for+my+server%3F#gs
You can check your access logs for hits from Facebook by using the following command:
cat <name of access log> | grep 'facebook'
This will return all hits from Facebook. In general, the debugger will specify the user-agent 'facebookplatform/1.0 (+http://developers.facebook.com),' while general hits from Facebook will specify 'facebookexternalhit/1.0 (+http://www.facebook.com/externalhit_uatext.php).'
Using this information, you can perform even further testing by using 'curl' to emulate a request from Facebook, like so:
curl -Iv -A "facebookplatform/1.0 (+http://developers.facebook.com)" http://domain.com
This should return a 200 or 206 response code.
In summary, all indications are that our servers are returning 200 response codes, so it would seem that the issue is with the way that the debugger is interpreting this response code. Bug reports have been filed with Facebook, and we are still working to obtain more information regarding this issue. We will be sure to update you as more information becomes available.
So the good news is that they are busy solving it. The bad news is that it's out of our control.
There's a forum post on the matter here:
https://forum.mediatemple.net/topic/6759-facebook-503-502-same-html-different-servers-different-results/
With more than 800 views, and recent activity, it states that they are working hard on it.
I noticed that HTTPS MT sites don't even give a return code:
Error parsing input URL, no data was scraped.
RESOLUTION
MT admitted it was their fault and fixed it:
During our investigation of the Facebook debugger issue, we have found that multiple IPs used by this tool were being filtered by our firewall due to malformed requests. We have whitelisted the range of IP addresses used by the Facebook debugger tool at this time, as listed on their website, which should prevent this from occurring again.
We believe our auto-banning system has been blocking several Facebook IP addresses. This was not immediately clear upon our initial investigation and we apologize this was not caught earlier.
The reason API requests may intermittently fail is because only a handful of the many Facebook IP addresses were blocked. The API is load-balanced across several IP ranges. When our system picks up abusive patterns, like HTTP requests resulting in 404 responses or invalid PUT requests, a global firewall rule is added to mitigate the behavior. More often than not, this system works wonderfully and protects our customers from constant threats.
So, that being said, we've been in the process of whitelisting the Facebook API ranges today and confirming our system is no longer blocking these requests. We'd still like those affected to confirm if the issue persists. If for any reason you're still having problems, please open up or respond to your existing support request
In my Django login flow I always rewrite a logged-in user's URL to include their username. So if the username is "joe", I rewrite the URL to be "joe.example.com". This works great except on IE8 for usernames with underscores, like "joe_schmoe". IE8 won't log in users when the URL is like "joe_schmoe.example.com". In my settings file I have wildcard subdomains for example.com turned on.
Is this a bug in IE8 or django? How can I work around it other than removing all underscores from usernames?
It's an IE issue. IBM Lotus Sametime has a support page about this:
Error "Cookies are not enabled" in Internet Explorer if underscore in the hostname
This error message is displayed when using Internet Explorer 5.5 and 6.0 or later with the Microsoft Patch MS01-055 (or a Service Pack that also includes this patch). When Internet Explorer is updated, it then becomes compliant with Request for Comments (RFC) 952, which defines and restricts host and domain naming conventions. This compliance is to avoid certain security vulnerabilities with session cookies [...]
You can read more (including reference to Microsoft's Knowledge Base Article and RFC 952) on the above-mentioned support page.
I know that LiveJournal always rewrites such usernames using a dash, so "joe-schmoe". I think they do it on purpose :)
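If you go the rewriting route in Django, a small helper that maps usernames onto RFC 952-safe subdomain labels (letters, digits, and hyphens only) could look like this sketch; the function name is mine, not anything built into Django:

import re

def subdomain_label(username):
    # Lowercase, turn underscores into hyphens, drop anything else that is not
    # a letter, digit or hyphen, and trim hyphens from the ends.
    label = username.lower().replace("_", "-")
    label = re.sub(r"[^a-z0-9-]", "", label)
    return label.strip("-")

print(subdomain_label("joe_schmoe"))   # -> "joe-schmoe"

One thing to watch: "joe_schmoe" and "joe-schmoe" collapse to the same label, so you would want a uniqueness check at registration time.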
I suspect the same is true if the hostname has four parts instead of three: we have no trouble with sitename.ourdomain.net, but IE8 for one customer is refusing cookies coming from test.sitename.ourdomain.net. But I can't reproduce it on other IEs yet.