Accessing a wildcard page in Flask via the naked domain from a phone redirects to searchvity.com

Hi, I'm new to domains and registrars and just stumbled upon some really weird behaviour that I don't know how to tackle.
I built a website with Flask (hosted on PythonAnywhere, with domain.com as my registrar). I set things up at domain.com so that the naked domain redirects to the www. version, and it works well for any page on the site that I've defined explicitly in my Flask app, like @app.route('/something/').
I had to tweak things a bit so that the naked domain also accepts routes without the trailing slash, like this...
@app.route('/something/')
@app.route('/something')
def something():
    # actual code
...but when I try to access a page that doesn't exist through the naked domain, on my computer it doesn't work (404 error; it doesn't even show a simple HTML page), and on my phone it shows a weird random page that, after digging around a bit, I realized is served by searchvity.com. I have absolutely no clue how on earth that's possible.
Also, the weirdest part of all this is that I actually have a route in Flask that should handle this (@app.route('/<randomurl>/'), also with and without the slash), but as I said, that only works when accessing the www. version of the domain.
I know it's kind of a minor issue (why would anyone access, on purpose, a page that doesn't exist specifically on the naked domain?). But it bothers me quite a bit that someone could be redirected to that random site if the conditions are met and they're coming from a phone... and in any case it's an issue that shouldn't be there, and I don't even know where to start in order to fix it.
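As an aside on the duplicate-decorator tweak (this doesn't address the DNS issue itself): Flask can match a route both with and without the trailing slash using the strict_slashes option, so the two stacked decorators aren't needed. A minimal sketch, with placeholder route names:

```python
from flask import Flask

app = Flask(__name__)

# strict_slashes=False makes this one rule match both /something and
# /something/, so two stacked @app.route decorators are not needed.
@app.route('/something', strict_slashes=False)
def something():
    return 'something page'

# Catch-all for any other single path segment, answered with a 404
# so missing pages still render an app-controlled response.
@app.route('/<randomurl>', strict_slashes=False)
def catch_all(randomurl):
    return 'no page called %s' % randomurl, 404
```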
EDIT: now apparently the desktop version also shows that same weird page.
EDIT2: The reason I had only 404 on my desktop and not the weird (DNS spoofing?) page was AdBlock.
EDIT3: When the issue happens, the server at PythonAnywhere doesn't even see the request (it's as if nothing happened).

Finally I found NakedSSL, which lets you redirect people from your naked domain to the https version.
I added a free SSL certificate to my PythonAnywhere page (which is as easy as two clicks), and then on NakedSSL everything is quite straightforward too.
Now I get the proper pages in all cases (404, wildcard, etc.) and there are no more weird spoofing issues.

Related

Having 2 website on the same server using ember-simple-auth logs both out

I currently have two copies of the same site (one for production, the other a development version) on one server, and I have an issue with ember-simple-auth on both. Whenever I log in on one of the sites, it works perfectly fine: the session works and everything behaves as expected. However, when I have both sites open in different tabs (in the same browser and window) and I try logging in on one of them, they both log out, producing this error in the console:
"The authenticator "authenticator:oauth2" rejected to restore the session - invalidating…"
On the other hand, when I have one of the sites open in a regular browser window and the other in incognito (no caches), they both work perfectly fine (i.e. neither logs out and everything works as expected). It also works perfectly fine if I open one site in one browser (such as Chrome) and the other in a different browser (such as Safari).
My first guess is that these two sites share the same cached session, but I could be wrong. If you have any idea why this occurs, or you have a solution, please let me know.
Both sites are probably on the same origin, and you're using the local-storage session store, so both use the ember_simple_auth:session localStorage key.
The easiest fix is probably to override the session store and define a custom key that encodes whether it's the dev or the production build.
Alternatively, host the two sites on different ports and/or domains so they have different origins.

How to keep the root Rails 4 app from accepting subdomains?

When I originally wrote the API for my website, I wanted to have it as a subdomain, like so: api.example.com. That worked. Now I am changing it so the API can only be reached as part of the URL path: example.com/api/.... This works, except that locally api.example.com/api... still works. More interestingly, some random subdomains do not work. On the production site, though, any subdomain works, which is not what I want. Is there a way to ensure that subdomains aren't caught by the root in Rails 4?
As an extra question that hopefully gets answered (but doesn't have to be): how can I redirect to another subdomain using the routes? I haven't been able to figure it out, and I'm wondering if it's possible.

In Django, after I forward a domain with masking, how do I get the internal links to point to the masked domain rather than the ip address?

Example:
I have a domain name on GoDaddy, www.example.com, which I want to forward with masking to 200.200.200.200, a server hosted on Amazon EC2. When I go to www.example.com in my browser, I see my site just fine. But all the links on my site point to 200.200.200.200/home. How do I make the links point to www.example.com/home instead? I'm using Django as my web framework. Thanks!
edit:
an example of the linking I'm using is a plain relative link, which gets rendered as <a href="/home/">home</a>
Are you sure you need to use masking instead of forwarding? With forwarded URLs this doesn't happen; since masking just puts a new URL in the address bar while actually referencing the original page, any relative links still refer to the original URL. In addition, I understand that masking changes the way Google's crawlers behave, so your site might not rank as high as it should in search results, which is something to look into if that matters to you. If for whatever reason you do need masking, I think you'll have to use absolute URLs in all your links (there may be some setting in GoDaddy to avoid this, but I have no idea; if there is, hopefully someone else will answer).
The easiest way to use absolute URLs in Django is probably to define a ROOT_URL variable (i.e. ROOT_URL = 'http://www.example.com') in settings.py. Then your home link would be:
<a href="{{ ROOT_URL }}/home/">home</a>
You'll also need to pass {'ROOT_URL': settings.ROOT_URL} in the context of the view's HttpResponse (or pass a context_instance instead) so that the template has access to the ROOT_URL variable.
Hope that helps!
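If absolute URLs end up being needed in more places than just templates, a tiny helper keeps the joining logic in one spot. A minimal sketch; ROOT_URL and absolute are assumed names for this project, not Django built-ins:

```python
# Would normally live in settings.py; hardcoded here for illustration.
ROOT_URL = 'http://www.example.com'

def absolute(path):
    """Join ROOT_URL with a site-relative path like '/home/'."""
    # Normalize slashes so callers can pass '/home/' or 'home'.
    return ROOT_URL.rstrip('/') + '/' + path.lstrip('/')

print(absolute('/home/'))  # http://www.example.com/home/
```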

Is someone trying to hack my Django website?

I have a website that I built using Django. Using the settings.py file, I send myself error messages that are generated from the site, partly so that I can see if I made any errors.
From time to time I get rather strange errors, and they seem to mostly be around about the same area of the site (where I wrote a little tutorial trying to explain how I set up a Django Blog Engine).
The errors I'm getting all look like something I could have caused with a typo.
For example, these two errors are very close together. I never had 'x' or 'post' as a variable on those pages.
'/blog_engine/page/step-10-sub-templates/{{+x.get_absolute_url+}}/'
'/blog_engine/page/step-10-sub-templates/{{+post.get_absolute_url+}}/'
The user agent is:
'HTTP_USER_AGENT': 'Mozilla/5.0 (compatible; Purebot/1.1; +http://www.puritysearch.net/)',
which I take to be a scraper bot, but I can't figure out what they'd be able to get with this kind of attack.
At the risk of sounding stupid, what should I do? Is it a hack attempt or are they simply trying to copy my site?
Edit: I'll follow the advice already given, but I'm really curious as to why someone would run a script like this. Are they just trying to copy the site? It isn't hitting admin pages or even any of the forms. It seems like a harmless (aside from potential plagiarism) attempt to dig in and find content.
From your USER_AGENT info it looks like this is a web spider from puritysearch.net.
What I suggest you do is put a CAPTCHA on your website. Program it to trigger when something tries to access 10 pages in 10 seconds (almost no human would do this), or figure out other suitable criteria for triggering your CAPTCHA.
Also, maintain a robots.txt file, which most crawlers honor. State your rules in robots.txt; you can tell crawlers to keep away from certain busy sections of your site, etc.
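A robots.txt along these lines would ask well-behaved crawlers to stay out of the busy sections (the disallowed path reuses the tutorial prefix from the error reports as an example; Crawl-delay is a non-standard directive that only some crawlers honor):

```
User-agent: *
Disallow: /blog_engine/page/
Crawl-delay: 10
```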
If the problem persists, you might want to contact that particular site's system admin and try to figure out what's going on.
This way you will not be completely blocking crawlers (which your website needs in order to become popular), and at the same time you make sure your users get a fast experience on your site.
Project HoneyPot lists this bot as malicious: http://www.projecthoneypot.org/ip_174.133.177.66 (check the comments there). What you should probably do is ban that IP and/or User-Agent.
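The "10 pages in 10 seconds" trigger above can be sketched as a sliding-window counter per client IP. A framework-agnostic sketch; the function and variable names are illustrative, not from any particular library:

```python
import time
from collections import defaultdict, deque

WINDOW = 10.0   # seconds in the sliding window
LIMIT = 10      # requests allowed inside the window

_hits = defaultdict(deque)  # client IP -> timestamps of recent requests

def should_challenge(ip, now=None):
    """Return True when `ip` has exceeded LIMIT requests inside WINDOW
    seconds, i.e. when the CAPTCHA (or a block) should kick in."""
    now = time.time() if now is None else now
    q = _hits[ip]
    q.append(now)
    # Drop timestamps that have fallen out of the window.
    while q and now - q[0] > WINDOW:
        q.popleft()
    return len(q) > LIMIT
```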

Django: from PHP to Django

I have a website done with Django that was previously done with PHP and CodeIgniter. I've moved the website to WebFaction and changed the DNS and all other configuration, but now my email is full of errors like this:
Error (EXTERNAL IP): /index.php/main/leer/7497
I don't know why the Django app is looking for pages from the PHP app, especially since the PHP app was on another host.
Are those URLs from your old site? That's probably a case of people with stale bookmarks trying to navigate to them and getting 404s. You might want to consider catching those and redirecting to the new URLs with response code 302.
I can't imagine those errors are caused by Django (except in the sense that the reports are from Django reporting 404s, which it does for free).
I agree with the above. I just want to add that you should use django.contrib.redirects to manage the redirects.
You can read more about it in the Django documentation on the redirects app.
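Wiring up django.contrib.redirects is mostly configuration. A sketch of the settings changes, assuming a recent Django layout (the new_path below is made up for illustration; the old_path is one of the error URLs from the question):

```python
# settings.py -- enable the redirects app (it requires the sites framework)
INSTALLED_APPS += ['django.contrib.sites', 'django.contrib.redirects']
MIDDLEWARE += ['django.contrib.redirects.middleware.RedirectFallbackMiddleware']
SITE_ID = 1

# Then map each stale PHP URL to its new home, e.g. in a shell or data
# migration (new_path here is a hypothetical example):
# from django.contrib.redirects.models import Redirect
# from django.contrib.sites.models import Site
# Redirect.objects.create(site=Site.objects.get_current(),
#                         old_path='/index.php/main/leer/7497',
#                         new_path='/articles/7497/')
```

The middleware only kicks in on responses that would otherwise be 404s, so it won't interfere with pages that already resolve.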