So I have a Django backend deployed on Google App Engine. This backend supports an iOS app. In my server logs I can see all the requests coming in and where they were made. It used to be that I would only get requests from Joon/7.** (which is the iOS app name + version). However, recently I've been getting requests from Chrome 72, which doesn't make sense because the app shouldn't be usable from Chrome. Furthermore, these requests are creating a lot of errors in my backend because they don't include an authentication token. Does anyone know what is going on here? Are my servers being hacked?
Looks like someone discovered the URL to your App Engine app. You can use ingress controls to allow access only via Cloud Load Balancing, and then put Google Cloud Armor in front of the load balancer to protect it with rules that look like:
has(request.headers['user-agent']) && request.headers['user-agent'].contains('Godzilla')
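If you go that route, attaching a rule like that to a Cloud Armor security policy might look roughly like the sketch below; the policy name and priority are placeholders, not something from your project:

    # Hypothetical example: deny requests whose user-agent matches the expression above
    gcloud compute security-policies rules create 1000 \
        --security-policy my-policy \
        --expression "has(request.headers['user-agent']) && request.headers['user-agent'].contains('Godzilla')" \
        --action deny-403

The policy then gets attached to the load balancer's backend service, so once ingress is locked down, requests that try to bypass the load balancer never reach the app at all.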
It is quite common to see all sorts of hits (from what I call spam bots) to an App Engine app. Technically, GCP expects you to use its firewall rules to block these. The challenge, though, is that these bots usually change their IP addresses frequently or use multiple ones. I don't have a 'perfect' solution.
a) You can try the method suggested by @jeff-williams (I've never tried that)
b) You can also try GCP's firewall rules (I use this, but I try to block a range of IPs instead of blocking them one by one; see the sketch after this list)
c) Sometimes I also put my service behind a specific non-intuitive path. This way, the spam bots only hit the default/base URL, and I have a separate service which returns 404 for all calls to that base URL
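For (b), a minimal sketch of what blocking a range looks like with App Engine's firewall from the CLI; the priority and the range shown here are placeholders:

    # Hypothetical example: deny a whole range instead of single addresses
    gcloud app firewall-rules create 100 \
        --action deny \
        --source-range "192.0.2.0/24"

Rules are evaluated in priority order (lowest number first), so you can keep the default allow rule in place and stack deny rules on top for whatever ranges keep showing up in your logs.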
Hello everyone! I am building a web application, i.e. a client-server application. For the interaction between the two, I have to define the URLs twice (as hard-coded strings), once on the backend and once on the frontend, which makes future changes hard because they require editing the code in two places rather than just one.
I am using Django and Angular, so I am looking for a way to specify the backend endpoints once, then ideally read them and use them for the Angular production build. Changes to the endpoints would then only require a new build, but no further code changes.
Should these be defined in some .cfg file that is read by the backend on server startup and somehow fed into Angular's build process? Any suggestion would help, because this redundancy comes up in almost every webapp project, and there has to be a cleverer solution!
Thanks for the help in advance!
Here, it is the backend application that owns and defines the URL mappings to entities. Multiple clients may consume the same API: a web client, an Android client, and an iOS client. In this setup, your backend is the single source of truth for the URL mappings, and client applications should be configured to use the mappings defined in the backend application.
One possible way to do this is to serve the defined URLs on a path of the backend application and have your client applications configure themselves using the data provided there. For example, if you use Django REST Framework, by default the root path of the API ("/") serves the resources along with their URL mappings. You can use such a mechanism to configure your client applications at build time.
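A minimal sketch of that setup using Django REST Framework's DefaultRouter; UserViewSet and the route names are hypothetical:

    # urls.py -- the router's API root lists every registered resource
    from rest_framework.routers import DefaultRouter
    from myapp.views import UserViewSet  # hypothetical viewset

    router = DefaultRouter()
    router.register(r'users', UserViewSet, basename='user')

    urlpatterns = router.urls

A GET request to "/" then returns a JSON map of resource names to absolute URLs, e.g. {"users": "https://api.example.com/users/"}, which a small build script can fetch and write into Angular's environment files before ng build runs.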
How many endpoints do you have, and how likely are you to alter them? Most likely you will always have to make changes in more than one place anyway, because the usual reason for changing an endpoint is that you are trying to POST or GET new data structures. That means you would have to alter the request-handling code anyway to deal with the new data type or payload.
Also, consider some of the publicly available APIs out there: they don't give you an endpoint that serves a config file of available routes. When they make a change to their endpoints, they usually create a versioned API so that consumers can upgrade in their own time.
In my opinion, unless you are planning a large scale web app, I wouldn't be too worried about trying to implement something like this.
I am currently developing an instant-messaging feature for my apps (ideally a cross-platform mobile app/web app), and I am out of ideas for fixing my issue.
So far, I have been able to make everything work locally, using a Node.js server with socket.io, Django, and Redis, following what most online tutorials suggest.
The step I am now at consists of putting all that in the cloud using Amazon AWS. My Django server is up and running, I created a new separate Node.js server, and I am using ElastiCache to handle the Redis part. I launch the different parts, and no error shows up.
However, whenever I try using my messaging feature on the web, I keep getting a 500 error:

    handshake error
I then used the console to check the request headers, and I observed that the cookies are not there, unlike when I am on localhost. I know the cookies are necessary to authorize the handshake, so I guess that's where my error is coming from.
Furthermore, I have also checked that the cookies do exist; they are just not set in the request header.
My question is then: how can I make sure Django or the socket client (not sure which is responsible here) puts the cookies in the header?
One of my ideas was that maybe I am supposed to put everything on the same server with different ports, instead of on two separate servers? Documentation on that specific architecture question is surprisingly scarce, compared to the number of tutorials describing how to make it work locally.
I hope I described the problem accurately enough! :)
Important note: I am using socket.io v0.9.1-1, the only version compatible with a Titanium mobile app.
Thank you for any help!
All right, so I've made some progress.
The cookie problem came from the fact that I was making cross-domain requests. Adding a few lines to enable CORS didn't solve the cookie issue, but it did allow me to communicate between the servers (basically I set the headers of the response using Express). I then passed the necessary data in the connection query; even if that's not the most secure way to do it, I'm just building an MVP, and it's enough for now.
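For reference, the relevant lines looked roughly like the sketch below; the origins, port, and token variable are placeholders, not my real values:

    // Server side (Express): allow the web app's origin to talk to the chat server
    app.use(function (req, res, next) {
      res.header('Access-Control-Allow-Origin', 'http://myapp.example.com'); // assumed web-app origin
      res.header('Access-Control-Allow-Credentials', 'true');
      next();
    });

    // Client side (socket.io 0.9.x): pass auth data in the query string
    // instead of relying on cookies being forwarded cross-domain
    var socket = io.connect('http://chat.example.com:3000', {
      query: 'token=' + encodeURIComponent(authToken) // authToken is hypothetical
    });

Passing the token in the query works around the missing cookies entirely, at the cost of exposing it in the URL.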
I haven't been able to make the chat work from my Titanium mobile app, but since I can use a webview to handle it, I will be fine.
Hopefully that will help someone. If anyone needs me to post some code snippets, I will gladly do so upon request!
Cheers
I've designed a desktop app using the PyQt GUI toolkit, and now I need to embed this app in my Django website. Do I need to rebuild it using Django's own logic, or is there a way to get it up on the website through some interface? I need this to work on my website the same way it works on the desktop. Do I need to find packages in Django to remake it for the web, or is there a way to simplify the task?
Please help.
I'm not aware of any libraries to port a PyQt desktop app to a Django webapp, and Django certainly does nothing to enable this one way or another. I think you'll find that you have to rewrite it for the web. Django is a great framework, and depending on the complexity of your app, it might not be too difficult. If you haven't done much web development, there is a lot to learn!
If it seemed like common sense to you that you should be able to run a desktop app as a webapp, consider this:
Almost all web communication that you are likely to encounter is done via HTTP, a protocol for passing data between servers and clients (often browsers). What this means is that any communication that takes place must be resolved into discrete chunks. Consider an example flow:
1. You go to Google in your browser.
2. Your browser hits a DNS server (or cache) that resolves the name google.com to some IP address.
3. Cool, now your browser makes a request to that IP address and says "get me some stuff".
4. Google decides to send you back a minimal amount of HTML and lots of minified JavaScript in the page.
5. Your browser sees that there are some image links in the HTML, so it makes additional requests to Google to get each of the images so that it can display them.
6. Now all the content is loaded in your browser, so it starts executing the JavaScript code, and that code needs some more data from Google, so it starts sending requests too.
This is just a small example of how fundamentally differently a web application operates from a desktop application. In a desktop app, an operation doesn't need to be "packaged up" and sent, then have an action taken, etc. (unless you're using a messaging architecture, but that's relatively uncommon outside of enterprise apps).
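To make the "discrete chunks" point concrete, a single step in that flow is literally one request/response pair of plain text on the wire, roughly like this (headers trimmed):

    GET /search?q=kittens HTTP/1.1
    Host: www.google.com

    HTTP/1.1 200 OK
    Content-Type: text/html; charset=UTF-8

    <html>...page content...</html>

Everything your PyQt app currently does through direct function calls and shared in-process state would have to be re-expressed as exchanges like this between the browser and your Django views.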
I've got a little Django site in which users can link to images on other sites in their comments. It's by no means a core feature.
I've just moved the entire site to SSL. That has worked fine for the most part, but remote images are obviously not always going to be available over SSL. Only a small fraction of domains serve them with valid certificates.
What's the best way to funnel images through, then?
Download them when the user posts and alter the URL to a local one?
Make a proxy that just proxies another URL?
The second seems like less work (I feel like it would be possible with NGINX rules alone), but it would also open the site up to people using my proxy for their own nefarious gain, which I'd like to avoid.
What's the best compromise here?
GitHub ran into this same issue when they moved to HTTPS everywhere and detailed it on their blog: https://github.com/blog/743-sidejack-prevention-phase-3-ssl-proxied-assets
Their solution was to create a proxy server, which they open-sourced as https://github.com/atmos/camo. To address the same concerns about abuse of the proxy, it is deployed with a shared secret known to the application server. Integrating this with a Django project would be straightforward, as you would just need to generate the digest from the shared key for the given image URL.
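A minimal sketch of that digest generation on the Django side, assuming camo's HMAC-SHA1 scheme; the key and host values here are placeholders you'd pull from your settings:

    import hashlib
    import hmac
    from urllib.parse import quote

    CAMO_KEY = b"shared-secret"               # assumption: same secret the camo server runs with
    CAMO_HOST = "https://camo.example.com"    # hypothetical camo deployment

    def camo_url(image_url: str) -> str:
        """Rewrite a remote image URL to go through the camo proxy."""
        digest = hmac.new(CAMO_KEY, image_url.encode(), hashlib.sha1).hexdigest()
        return f"{CAMO_HOST}/{digest}?url={quote(image_url, safe='')}"

Because the digest is derived from the shared secret, random third parties can't mint valid proxy URLs, which addresses the abuse concern from the question. You'd call something like camo_url() from a template filter when rendering user comments.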
Hello smart people on stackoverflow,
I would be very happy if someone could point me to the right libraries/frameworks to do what I want.
We have the following web architecture set up.
1. We have a Tomcat server that offers REST services.
2. We have an apache2 server that serves up PHP pages to users.
a. Some of these PHP pages make REST calls to Tomcat for data.
b. Other PHP pages contain JavaScript that makes REST calls routed through apache2 via mod_proxy to Tomcat, e.g. all requests to http://myapache.com/PASSTOTOMCAT/rest/getSecureData go to Tomcat (see the sketch after this list).
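For reference, the mod_proxy rule looks roughly like this (the Tomcat host and port are simplified placeholders):

    # Route /PASSTOTOMCAT/* through to the Tomcat instance
    ProxyPass        /PASSTOTOMCAT/ http://localhost:8080/
    ProxyPassReverse /PASSTOTOMCAT/ http://localhost:8080/

Anything under that path is handed straight to Tomcat, so PHP never sees those requests, which is what makes the authentication question below tricky.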
Now, I'm asked to add authentication to everything, both the user pages and the REST calls. It would obviously be ideal for the user to sign in once for access to both.
What library can I use for this? I don't think I can use any PHP-based solution (i.e. one that involves adding a PHP auth check at the top of each page) because the pass-through URLs won't have a chance to run this code and check for authentication. I think I need to use something built into apache2 itself.
One minor requirement is that I would like the user credentials stored in a MySQL database as opposed to a file.
Am I over-thinking this?
Thanks in advance
Well it's been 5 days, so I guess I'll answer my own question...
I ended up using the new mod_auth_form for authentication because it lets you use a nicely styled web page to log users in.
I also used mod_dbd to access the user credentials in MySQL. The core of the configuration is sketched below.
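The directives ended up looking roughly like this; the paths, table, and credentials are illustrative rather than exact (this assumes Apache 2.4 with mod_auth_form, mod_authn_dbd, mod_dbd, mod_session, mod_session_cookie, and mod_request loaded):

    DBDriver mysql
    DBDParams "host=localhost dbname=auth user=apache pass=secret"   # illustrative credentials

    <Location "/PASSTOTOMCAT">
        AuthType form
        AuthName "Protected Area"
        AuthFormProvider dbd
        # Table and column names here are hypothetical
        AuthDBDUserPWQuery "SELECT password FROM users WHERE username = %s"
        Session On
        SessionCookieName session path=/
        Require valid-user
        # Send unauthenticated users to the styled login page
        ErrorDocument 401 /login.html
    </Location>

Since mod_auth_form keeps the session in a cookie and the check happens inside Apache itself, the same Location-based protection covers both the PHP pages and the proxied REST paths without any PHP involvement.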
I couldn't find a good tutorial on this, so I struggled through the installation and setup a bit, but if anyone cares, I created a set of instructions on my blog in case anyone else tries to do the same thing:
Installation
Setup