I have a Django application that puts a task in a queue. Another service reads that queue and processes some files. At the end, the processed files need to be saved in the database managed by the Django application.
I do not want to give the microservice direct access to the database, since I want its only responsibility to be processing the files.
So I wanted to post the results back to Django using an HTTP request. The problem is that the service has no authorization credentials at that point, even though I know that requests from this kind of machine should be accepted.
The Django application uses JWT as an authorization token. What is the best way to approach this type of problem? Maybe just send a token along with the queue message? But how would such a token be created? It is not certain when the task will be executed.
When you really think about it, there is no need for your internal services to authenticate themselves if they are in the same network.
In that case you can put Django behind an API gateway (don't write your own; pick a highly rated open source project). You can then control via the gateway which endpoints are reachable from which traffic sources, and cleanly separate the endpoints that are meant for internal services from the endpoints that need authentication by an external entity.
If they aren't in the same network (which means they are separated by the great gulf of the cloudy net), then the usual way two machines communicate is with an API key. In that case, you can configure your services with symmetric keys or a public/private key pair; it doesn't really matter. Machines can be trusted with secret keys. Why would you need to send the token in the queue? If the service is allowed to post results to Django, it's allowed to do so for all requests, so it needs to be configured with an API key that tells your API it is allowed to post processed files.
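A minimal sketch of that pattern on the Django side, assuming the shared key lives in an `INTERNAL_API_KEY` environment variable and the worker posts to a hypothetical `/internal/processed-files/` endpoint:

```python
import hmac
import json
import os

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt

# Shared secret, configured on both Django and the worker (illustrative name).
INTERNAL_API_KEY = os.environ["INTERNAL_API_KEY"]

@csrf_exempt
def processed_files(request):
    sent_key = request.headers.get("X-Internal-Api-Key", "")
    # Constant-time comparison so the key can't be guessed via timing.
    if not hmac.compare_digest(sent_key, INTERNAL_API_KEY):
        return JsonResponse({"detail": "unauthorized"}, status=401)
    payload = json.loads(request.body)
    # ... save the processed file records here ...
    return JsonResponse({"status": "saved"})
```

The worker then sends the same header with its result, e.g. `requests.post(url, headers={"X-Internal-Api-Key": key}, json=payload)`; no per-request token from the queue is needed.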
Related
I have a web application where the regular user login is handled via SSO. (This prevents me from creating a service user.)
In this application I have some web service endpoints that are not in the scope of any user. They will be triggered by another application and do some stuff.
Is the following the right way to do it?
A token string is created by hand (for simplicity).
The token string is stored in the environment variables of the system that provides the web service endpoints as well as in the system that calls those endpoints.
On every call a simple equality check is performed; if the token is missing or does not match, the endpoint returns a 401.
Is my approach too simple?
I have not found much on this topic. My approach comes from the Moodle web service handling, where you generate a web service token in Moodle and place it in the application that calls the web service as well.
For a basic application with no high security requirements, this might be ok.
A few things you could do (all of which will increase complexity and/or cost):
The service could store a proper password hash (bcrypt, PBKDF2 or Argon2) of the token instead of the token itself. This would help because, if the service's configuration were compromised, the actual key would not be disclosed, so the attacker still could not easily call the service. (Then again, that service is already compromised at that point, and this is not a user password that might be reused elsewhere, so whether it is worth it depends on your choices and threat model.)
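A minimal sketch of that idea, assuming the caller keeps the raw token and the receiving service stores only a salt and a PBKDF2 hash of it (variable names and iteration count are illustrative):

```python
import hashlib
import hmac
import os

# Stored on the receiving service instead of the raw token.
SALT = bytes.fromhex(os.environ["TOKEN_SALT_HEX"])
TOKEN_HASH = bytes.fromhex(os.environ["TOKEN_PBKDF2_HEX"])
ITERATIONS = 200_000

def token_is_valid(presented_token: str) -> bool:
    candidate = hashlib.pbkdf2_hmac(
        "sha256", presented_token.encode(), SALT, ITERATIONS
    )
    # Constant-time comparison of the derived hashes.
    return hmac.compare_digest(candidate, TOKEN_HASH)
```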
You could store this secret in a proper vault like AWS Secrets Manager or HashiCorp Vault. This would let you control access to the key in one place and audit key usage (maybe alerting on failed attempts and so on). Access to the vault still has to be managed, but that is easy via roles on AWS, for example, where instances with the right role can read the secret while others cannot.
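For example, the token could be fetched from AWS Secrets Manager at startup instead of living in an environment variable (the secret name here is an assumption):

```python
import boto3

def load_webservice_token(secret_name: str = "internal/webservice-token") -> str:
    # The instance's IAM role must allow secretsmanager:GetSecretValue
    # on this secret; no credentials are hard-coded.
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    return response["SecretString"]
```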
I have a client app that connects to an Elastic Beanstalk server app. Some of my users need to register. But when the registration form loads it needs to get some data from DynamoDB so the user can choose between a few options.
The problem is that I set up my server in a way that any request to the server that is not authenticated (no auth tokens previously obtained by the client app from Cognito) gets denied. Of course, if a person is going to register they are not authenticated, which means they do not have access to the information from DynamoDB they need to register. It is only a couple of pieces of information I need, so it is very frustrating.
What I have thought about how to solve this:
Putting a long string of characters in the client app that gets sent to the server when a request is made for ONLY the couple of pieces of information I need. The server would also have that same string stored somewhere and would then compare them. If they match, then it returns the info requested. As I said, this would be done only for the 2 pieces of info I need, everything else would still be secure.
Leaving the two routes in my API that serve these pieces of info public (I know, it is a bad idea).
What would be the best way to go about this?
Assuming you're using Cognito, there is also the concept of an anonymous guest user, which can have its own role assigned.
You can treat the anonymous guest user like a regular Cognito user (it can have a role assigned); however, you would scope its permissions down to the minimum required to perform these operations.
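A minimal sketch of how an unauthenticated client could obtain those scoped guest credentials from a Cognito identity pool (the pool ID and region are placeholders; the pool must have unauthenticated identities enabled with a suitably restricted IAM role):

```python
import boto3

IDENTITY_POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"  # placeholder

cognito = boto3.client("cognito-identity", region_name="us-east-1")

# No logins map is passed, so this yields an *unauthenticated* guest identity.
identity = cognito.get_id(IdentityPoolId=IDENTITY_POOL_ID)
creds = cognito.get_credentials_for_identity(IdentityId=identity["IdentityId"])

# Temporary credentials that carry only the guest role's permissions,
# e.g. read access to the one or two items the registration form needs.
session = boto3.Session(
    aws_access_key_id=creds["Credentials"]["AccessKeyId"],
    aws_secret_access_key=creds["Credentials"]["SecretKey"],
    aws_session_token=creds["Credentials"]["SessionToken"],
)
```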
Alternatively, with option 2, the API could call a Lambda that simply reads and returns the necessary information. You would possibly want to look at caching the results as well, to avoid your API Gateway endpoint being abused.
We have a REST endpoint that provides some back end services to our publicly available Web site. The web site does not require any user authentication to access its content. Anyone can access it anonymously.
Given this scenario, we would still like the back-end REST API to be somewhat secured, in the sense that only users browsing our web site can call it.
We don't want a malicious user to run a script outside the browser that bombards it, for example.
We don't even want them to run a script that automates the UI to access the endpoint.
I understand that a fully public endpoint without user authentication is somewhat impossible to secure. But can we restrict usage to valid scenarios?
Some ideas:
Use TLS/SSL for the communication - this protects the channel only.
Use some API key (that periodically expires) that the client/browser needs to pass to the server (a malicious user can still use the key).
Use the key to throttle the number of requests (a simple sketch follows this list).
Use it in conjunction with a CSRF token?
Use a CAPTCHA on the web site to ensure a human user (adds an element of annoyance for the end user).
Use IP whitelisting.
Use load balancing and server scaling to handle the load.
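As a rough illustration of the throttling idea, a minimal in-memory, per-key sliding-window limiter (window size, limit, and key handling are all assumptions; in practice this would usually live in the gateway or a shared store such as Redis):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 30

_recent_requests: dict[str, deque] = defaultdict(deque)

def allow_request(api_key: str) -> bool:
    """Return True if this key is still within its request budget."""
    now = time.monotonic()
    window = _recent_requests[api_key]
    # Drop timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_REQUESTS_PER_WINDOW:
        return False
    window.append(now)
    return True
```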
I suppose this is a scenario that occurs all the time in the wild.
What security steps are prevalent?
Is it possible to restrict usage to the website only, and not via a script?
If it's not possible to secure, what kinds of mitigations are used with such public REST endpoints?
"Instead of using cookies for authorization, server operators might
wish to consider entangling designation and authorization by treating
URLs as capabilities. Instead of storing secrets in cookies, this
approach stores secrets in URLs, requiring the remote entity to
supply the secret itself. Although this approach is not a panacea,
judicious application of these principles can lead to more robust
security." A. Barth
https://www.rfc-editor.org/rfc/rfc6265
What is meant by storing secrets in URLs? How would this be done in practice?
One technique that I believe fits this description is requiring clients to request URLs that are signed with HMAC. Amazon Web Services offers this technique for some operations, and I have seen it implemented in internal APIs of web companies as well. It would be possible to sign URLs server side with this or a similar technique and deliver them securely to the client (over HTTPS) embedded in HTML or in responses to XMLHttpRequests against an API.
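A minimal sketch of the idea, signing a URL with an HMAC and an expiry timestamp (the path, parameter names, and secret handling are illustrative; AWS's real pre-signing schemes are more elaborate):

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SIGNING_KEY = b"server-side-signing-secret"  # never leaves the server

def sign_url(path: str, expires_in: int = 300) -> str:
    expires = int(time.time()) + expires_in
    message = f"{path}?expires={expires}".encode()
    signature = hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'expires': expires, 'signature': signature})}"

def verify_url(path: str, expires: str, signature: str) -> bool:
    if int(expires) < time.time():
        return False  # the capability has expired
    message = f"{path}?expires={expires}".encode()
    expected = hmac.new(SIGNING_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```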
As an alternative to session cookies, I'm not sure what advantage such a technique would offer. However, in some situations it is convenient, or even the best available way, to solve a problem. For example, I've used similar techniques when:
Cross Domain
You need to give the browser access to a URL that is on another domain, so cookies are not useful, and you have the capability to sign a URL server side to give access, either on a redirect or with a long enough expiration that the browser has time to load the URL.
Examples: Downloading files from S3. Progressive playback of video from CloudFront.
Closed Source Limitations
You can't control what the browser or other client is sending, aside from the URL, because you are working with a closed source plugin of some kind and can't change its behavior. Again you sign the URL server side so that all the client has to do is GET the URL.
Examples: Loading video captioning and/or sprite files via WEBVTT, into a closed-source Flash video player. Sending a payload along with a federated single sign-on callback URL, when you need to ensure that the payload can't be changed in transit.
Credential-less Task Worker
You are sending a URL to something other than a browser, and that something needs to access the resource at that URL, and on top of that you don't want to give it actual credentials.
Example: You are running a queue consumer or task-based worker daemon or maybe an AWS Lambda function, which needs to download a file, process it, and send an email. Simply pre-sign all the URLs it will use, with a reasonable expiration, so that it can perform all the requests it needs to without any additional credentials.
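For the AWS case specifically, pre-signing a download might look like this with boto3 (the bucket, key, and expiry are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# A time-limited GET URL the worker can use without holding AWS credentials.
download_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-processing-bucket", "Key": "incoming/report.pdf"},
    ExpiresIn=3600,  # one hour
)
# Put download_url in the queue message; the worker just issues a plain GET.
```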
I have created a web service API, and its architecture is such that the server requires a client to sign the request with a secret key assigned to it (the signature is different for every request).
The server compares the client's signature with its own computed signature. If they match, the server returns the response.
I am wondering if a client should check the response coming back from the server to see if it's from the same application to which the request was made.
Is any kind of attack possible between HTTP request and HTTP response?
Do we need a security signature for the web service response?
It depends. There are a few types of web service APIs out there. Some need strict security, others might not. You could have a few types of APIs:
(1) Completely open API. Say you have a blog where you post about writing RESTful services and clients. You host a complete working REST service based on one of your posts so that people can give it a spin. You don't care who calls your service, the service returns some dummy data, etc. It's just a demo, a toy; no security here, no request signing, nada. It's just plain HTTP calls.
(2) Service with an API key. Say you have a web service and you want to know who calls it. This kind of service needs pre-registration, and each client who wants to call your service needs to register and obtain a key first. Do note that the registration is not about authentication or authorization; you just want to know who's using your API (e.g. what business sector they operate in, how many clients they have, why they are using your API, etc.) so that you can later do some analysis of your own and make some decisions (marketing, maybe) based on the data you get back.
There is nothing secret about this API key. It's just a UUID of some sort, the most basic way of differentiating between calls. This again involves only plain HTTP calls, with the key as an additional request parameter.
(3) Service with an API key and a secret key. This is similar to number (2), but you need to be absolutely sure that the calls are coming from the client that presents some API key. You need this because you probably want to bill the client for how much they have used your service, and you want to make sure the calls actually come from that client and not from someone ill-intentioned who wants to inflate the client's bill.
So the client uses its key for identification and a signature of the request, computed with the secret key, to actually vouch for its identity. This again can be plain HTTP calls, with the key and signature as additional request parameters.
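A minimal sketch of that kind of request signing on the client side, assuming the signature covers the HTTP method, path, a timestamp, and the body (header names and canonicalization are illustrative; real schemes such as AWS Signature v4 are more involved):

```python
import hashlib
import hmac
import time

API_KEY = "client-api-key"          # public identifier
SECRET_KEY = b"client-secret-key"   # shared secret, never sent on the wire

def sign_request(method: str, path: str, body: bytes) -> dict:
    timestamp = str(int(time.time()))
    message = "\n".join([method, path, timestamp]).encode() + b"\n" + body
    signature = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return {
        "X-Api-Key": API_KEY,
        "X-Timestamp": timestamp,   # lets the server reject stale requests
        "X-Signature": signature,
    }

# The server looks up the secret for X-Api-Key, recomputes the HMAC over the
# same fields, and compares it to X-Signature using hmac.compare_digest().
```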
(4) "Tamper-proof" web services. For numbers (1), (2) and (3) above I haven't considered any message security issues, because most APIs don't need it. What's exchanged isn't confidential and not all that important to protect. But sometimes, although the data isn't confidential, you need to make sure it wasn't tampered with in transit.
Say you are the owner of a shop that builds some product and you want to advertise your product on some partner web sites. You expose a service with the product details and your partners just use this data to display your product details on their sites. Everybody knows what products you are building, so you don't need to hide that, but you are paranoid about your competition trying to ruin you, so you want to prevent them from intercepting the request and multiplying all your prices by 10 in the responses, just to scare potential buyers away.
Number (3) above, although it uses signing as a way to prove the identity of the caller, also ensures the request was not tampered with (the server will reject the request if the signature does not match). So if you need to guarantee an unmodified response, you can also sign the response.
For this, the service can provide the client with an API key and two secret keys. One secret key is used by the client to sign its requests, while the second secret key is used by the client to verify the signature of the response (using a single secret key for the whole server isn't all that safe, so the server issues a server secret key specific to each client).
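A minimal sketch of the client-side check under that two-key scheme (the header name and exactly what gets signed are assumptions):

```python
import hashlib
import hmac

SERVER_SECRET_KEY = b"per-client-server-secret"  # issued alongside the API key

def response_is_authentic(body: bytes, signature_header: str) -> bool:
    expected = hmac.new(SERVER_SECRET_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# The client refuses to use data whose X-Response-Signature header does not
# match the HMAC it computes over the raw response body.
```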
But this has a weak point: you would need to trust your partners to actually validate the response signature before displaying the information on their site, and not just bluntly display it. Paranoid as you are, you want to protect against this, and for that you need HTTPS.
As @SilverlightFox mentioned, this proves the validity of the response. The data cannot be tampered with because the channel is encrypted and integrity-protected. The client does not need an extra step to verify a response signature because that verification is already done at a lower (transport) level.
(5) Secure services. And now we reach the last type of service, where the data is actually confidential. HTTPS is a must for these services. HTTPS keeps the data confidential, ensures it isn't tampered with in transit, identifies the server, and can also identify the client if client-side certificates are used.
So, in conclusion, it depends on what type of service you have.
Make the request over HTTPS to ensure the validity of the response.
This will ensure your data is not vulnerable to a MITM attack. Rolling your own untested encryption/hashing methods is a sure way to open up your application to attack, so you should use TLS/SSL which means that you should connect to your web service API over HTTPS. TLS is the proven and secure way to ensure the response is coming from the application that the request was made to.