I have a REST web service that accepts a bunch of fields. These fields will be processed and eventually become part of an email.
When I am building up the email, the field called message.image will later become rc.image and it will be added to the HTML email via
...
var body &= "<p><img src='#EncodeForHTMLAttribute(rc.image)#' alt='#EncodeForHTMLAttribute(rc.image_name)#'></p>";
...
My concern is that this could still be a vulnerability.
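The service above is CFML, but the underlying idea can be sketched in any language: EncodeForHTMLAttribute prevents the value from breaking out of the attribute, but it does not validate that the value is a benign URL, so a scheme allow-list is a reasonable extra check before the value ever reaches the template. A minimal Python sketch (the function name and allowed schemes are illustrative, not from the original code):

```python
from urllib.parse import urlparse

# Only plain web URLs are allowed; everything else is rejected.
ALLOWED_SCHEMES = {"http", "https"}

def is_safe_image_url(url: str) -> bool:
    """Reject URLs whose scheme could carry active content (javascript:, data:, ...).

    Attribute encoding stops the value from escaping the src="..." attribute,
    but it does not stop a dangerous URL that is syntactically valid.
    """
    scheme = urlparse(url.strip()).scheme.lower()
    return scheme in ALLOWED_SCHEMES
```

A relative URL (empty scheme) is also rejected here; relax that if your emails legitimately use relative image paths.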
I have been making websites in Django for 2 years now. A client gave me an ethical-hack report which mentioned that all passwords in my website are clear-text.
I confirmed this by checking the request headers in the 'Network' tab of the browser developer console. I can clearly see my username and password in clear text in the POST requests. This is true for all the password fields, even in Django's admin login.
I am using Django's built-in UserCreationForm and AuthenticationForm with views from django.contrib.auth, since I thought this was the safest practice.
So should I be worried? Of course Django's developers surely know what they are doing, but is this really safe, passing cleartext passwords in POST requests? Should I enable the Django admin in a production environment or not?
It is common practice to send the password in plain text. Not only in Django, but in a lot of authentication frameworks. As long as it uses a secure channel (and that channel is not compromised), that should be sufficient.
Nowadays you normally communicate with a server over an encrypted layer like HTTPS. This means the browser and the server first negotiate encryption, and all subsequent requests are submitted over that encrypted channel. So the POST request you make to authenticate is encrypted. The browser does not show this, since the request itself does indeed contain the password in plain text, but the entire message is encrypted on the wire.
Adding extra encryption on top of that would not add much value. If you encrypt the password, then a hacker who can somehow intercept and decrypt the traffic can simply replay the encrypted password to the server as well.
HTTPS normally aims to prevent a man-in-the-middle attack through certificates. Sophisticated attacks exist to strip the TLS layer from a connection, so technologies like HSTS [wiki] should be used to prevent protocol downgrades.
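Since the question is about Django: the framework ships settings that enforce HTTPS and HSTS at the application level. A minimal production sketch (the one-year max-age below is a common choice, not a requirement; pick a value deliberately, since browsers cache it):

```python
# settings.py (production) -- enforce HTTPS so credentials never travel in clear
SECURE_SSL_REDIRECT = True             # redirect all HTTP requests to HTTPS
SECURE_HSTS_SECONDS = 31536000         # tell browsers to use HTTPS for one year
SECURE_HSTS_INCLUDE_SUBDOMAINS = True  # apply HSTS to subdomains as well
SECURE_HSTS_PRELOAD = True             # allow submission to browser preload lists
SESSION_COOKIE_SECURE = True           # session cookie only sent over HTTPS
CSRF_COOKIE_SECURE = True              # CSRF cookie only sent over HTTPS
```

With these in place, the password still crosses the wire as plain text *inside* the TLS tunnel, which is the normal and accepted practice the answer describes.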
Is it possible to implement Rails CSRF protection through cookie_store while at the same time using ember-simple-auth with Devise?
Guides like this one always deactivate Rails.application.config.session_store, which from my understanding does not allow Rails to keep track of CSRF tokens and so causes Rails to lose track of sessions. I have attempted many solutions, including:
Requiring jquery_ujs in the Rails asset manifest.
Rails.application.config.session_store :disabled.
https://github.com/abuiles/rails-csrf.
Changing the Ember.js adapter to append the CSRF token.
The end result is still pretty much the same:
Can't verify CSRF token authenticity followed by Completed 422 Unprocessable Entity if protect_from_forgery is set with :exception instead of :null_session.
Example Transaction:
Partial Request HEADER:
X-CSRF-Token:1vGIZ6MFV4kdJ0yYGFiDq54DV2RjEIaq57O05PSdNdLaqsXMzEGdQIOeSyAWG1bZ+dg7oI6I2xXaBABSOWQbrQ==
Partial Response HEADER:
HTTP/1.1 422 Unprocessable Entity
Content-Type: text/plain; charset=utf-8
X-Request-Id: 71e94632-ad98-4b3f-97fb-e274a2ec1c7e
X-Runtime: 0.050747
Content-Length: 74162
The response also attaches the following:
Session dump
_csrf_token: "jFjdzKn/kodNnJM0DXLutMSsemidQxj7U/hrGmsD3DE="
The rails-csrf response from my csrf branch (branch has been deleted).
beforeModel() {
  return this.csrf.fetchToken();
},
Partial dump of the return statement:
_result: Object
param: "authenticity_token"
token: "1vGIZ6MFV4kdJ0yYGFiDq54DV2RjEIaq57O05PSdNdLaqsXMzEGdQIOeSyAWG1bZ+dg7oI6I2xXaBABSOWQbrQ=="
From my understanding, all of these attempted solutions have the common root: session_store is disabled...
Update!
The answer below turned out to be wrong after I learned more about CSRF protection and Ember-Cli-Rails.
The idea here is to have two stores: a cookie-based store, maintained by Rails, used only for the CSRF token; and localStorage, maintained by ember-simple-auth, holding the user's authenticity token, email, and id, while a custom SessionAccount session inherits those values and validates them against the server before setting the user that will be available to the entire Ember.js app.
The SessionAccount performs this validation in order to detect any tampering with localStorage. Validation happens every time the SessionAccount queries localStorage (e.g. on page reload), communicating with the server through a Token model (token, email, and id). The server responds with 200 or 404 through a TokenSerializer that only renders the email or the validation error, so the frontend cannot see other authentication tokens unless the user signs in through the login form, which requires email and password.
From my understanding, the weak spots in this approach are hard to exploit unless:
Someone breaks into the server and gets the database content. Although the passwords are salted, anyone who has a database dump can change the localStorage token, email, and id to those of the person they want to impersonate, and the server-side validation will pass. However, this can be mitigated by a worker that rotates the authentication token of non-logged-in users every 24 hours (or any other timeframe). The code example section currently does not include the worker, since I still have not learned about them.
Someone knows the password and email of the person they want to attack. Not much I can do about that one at the moment.
Someone intercepts the data being passed around through the JSON API. A strong SSL implementation should do a good job here.
If your SessionAccount has something along the lines of is_Admin, then the token should be sent alongside POST requests for admin-only actions for further backend validation, since you can never trust the frontend.
Something else? Those are the ones I am aware of.
Now onto the practical approach:
Set
Rails.application.config.session_store :csrf_cookie_store, key: '_xxxxx_id', httponly: false in config/initializers/session_store.rb.
Create the csrf_cookie_store under lib and require it in application.rb with require 'csrf_cookie_store'.
Set protect_from_forgery with: :exception in your ApplicationController.
Create a TokenController to handle validation.
Create a TokenSerializer so that only the email is sent back.
Update your Devise Session controller to change the token upon login and to skip authenticity token validation on session destroy.
Check your routes so that tokens are create-only and the custom Devise session is used.
Create the Ember.js Token Model
Match your SessionAccount to what I created.
Update your Devise Authenticator to send a delete request to the server when the session is being invalidated.
Test it out.
The use case:
A user places an order, his payment gets accepted, and his details are POSTed to a Django view. Using these details, the Django view creates the user and everything else that is necessary (username and password are provided by me). Then, before returning, it sends an email to the client's address with his data (username and password for now).
But sometimes I get a gateway timeout error from Apache (the app is deployed on OpenShift). Because the user does get created, I assume the timeout comes from the email-sending part. How can I make sure everything went OK and inform the user? How can I make sure that if the email isn't sent I can resend it? What is the best practice for this?
If you have timeouts with an API or Service, you should fire your POST / sendmail request with AJAX...
Serialize the whole form (like jQuery's serialize())
Send that data via AJAX (with jQuery's ajax())
Inform the User of success or error (alert() or jQuery UI dialog)
You can find a lot of examples on this website.
Another "dirty" approach would be to add the attribute target="_blank" to your form tag, which opens your slow request in a new tab / window.
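A more robust practice than firing the request from the browser is to move the email send out of the request/response cycle entirely, e.g. into a task queue such as Celery, and retry on failure. As a minimal illustration of just the retry half (function names and retry counts below are made up, not from the question's code):

```python
import time

def send_with_retry(send_fn, attempts=3, delay=1.0):
    """Call send_fn(); on failure, wait and retry with exponential backoff.

    send_fn is any zero-argument callable, e.g. a wrapper around
    django.core.mail.send_mail. Re-raises the last error if all attempts fail,
    so the caller can record the email as "pending" and resend it later.
    """
    last_exc = None
    for attempt in range(attempts):
        try:
            return send_fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay * (2 ** attempt))  # back off: delay, 2*delay, ...
    raise last_exc
```

In a real deployment a queue worker, not the Django view, would run this, so the HTTP response returns immediately after the user is created and a failed send never surfaces as a gateway timeout.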
I'm currently designing a solution with this pretty standard pattern:
1 web-app using Django (it hosts the one and only DB)
1 client mobile app using AngularJS
This client app uses a REST API (implemented on the Django Server with Tastypie) to get and set data.
As a beginner in these architectures, I'm just asking myself where the logic should go and I'd like to use a simple example case to answer my concerns:
On the mobile client App, a client is asked to subscribe by entering only an email address in a form.
a) If the address is unused, inscription is done (stuff is written on the DB).
b) If the address is used, an error is raised, and the user is asked to try again.
What is the workflow to perform these simple operations?
I'm asking, for example, how to compare the entered e-mail address in the mobile app with the existing e-mail addresses in my DB:
Should I GET the list of all email addresses from the server, then run the logic in my client app to decide whether the entered address already exists? This seems like a really bad way to do it, because fetching lots of elements is not performant over a web service, and the client should not be able to see all email addresses.
Should I send the entered e-mail address to the server and let it make the comparison? If so, how am I supposed to send the data? As far as I know, PUT/POST are meant to write to the DB, not just to send data to the server so it can analyse it and run some logic.
I have the feeling I am clearly missing something here...
Thanks a lot for help.
PUT and POST are designed to be used to create and update resources. The server may or may not have a database behind it. It might use a local filesystem, or it might handle anything in memory. It's none of the client's business. It is certainly common to have business logic on most servers which provide APIs.
Use PUT/POST to send up the email address to the server. The server checks to see if the email address is (a) valid, and (b) allowed. If it fails either check, return a relevant response to the client as documented in the RFC. I would go with 403 Forbidden, which indicates a problem with the data being sent up to the server. Use the entity in the response to detail what the problem was with the request.
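As a sketch of what that server-side check might look like, stripped of any framework (all names here are hypothetical; an in-memory set stands in for the database query, and 409 Conflict is used as one reasonable status for the duplicate case, alongside the 403 the answer suggests):

```python
import re

# Stand-in for "SELECT 1 FROM users WHERE email = ?" against the real DB.
EXISTING = {"alice@example.com"}

# Deliberately loose validity check; real validation is stricter.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def subscribe(email: str):
    """Handle POST /subscriptions: the client sends only the address and
    learns only the outcome -- it never sees the full address list."""
    if not EMAIL_RE.match(email):
        return 400, "invalid address"
    if email in EXISTING:
        return 409, "address already registered"
    EXISTING.add(email)  # the actual write to the DB
    return 201, "created"
```

The mobile app then POSTs `{"email": ...}` and branches on the status code, which is exactly the business-logic-on-the-server split the answer describes.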
I did a similar thing in an Angular web app:
I disabled the submit button and added a "check availability" button beside the email field.
I sent the email to the server, checked whether it already exists, and returned the result to the client,
then asked the user to enter an alternate email if it was not valid, or enabled the form's submit button.
Alternatively
when the user leaves the email field, you can send the email to a service that validates it, get the response, and either show a message that this email already exists and disable the submit button, or enable the submit button otherwise.
I need to come up with a scheme for remote devices running linux to push data to a web service via https. I'm not sure how I want to handle authentication. Can anyone see any security risks by including some kind of authentication in the body of the request itself? I'm thinking of having the request body be JSON, and it would look like this:
{
    "id": "some unique id",
    "password": "my password",
    "data": 1234
}
If the id and password in the JSON don't match what is in my database, the request gets rejected.
Is there a problem with this? Is there a better way to ensure that only my clients can push data?
That scheme is primitive, but it works.
Usually a real session is preferred since it offers some advantages:
separation of authentication and request
history of requests in a session
credentials get sent only once for multiple requests
flexible change of authentication strategy
...
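If a full session is overkill for headless devices, another option in the same spirit is a per-device shared secret used to sign each request, so the password itself never travels with the data at all. A minimal sketch (the key store and field names are hypothetical):

```python
import hashlib
import hmac

# Stand-in for a server-side table of per-device shared secrets.
DEVICE_KEYS = {"device-42": b"shared-secret"}

def sign(device_id: str, payload: bytes) -> str:
    """Device side: HMAC the request body with the device's secret."""
    return hmac.new(DEVICE_KEYS[device_id], payload, hashlib.sha256).hexdigest()

def verify(device_id: str, payload: bytes, signature: str) -> bool:
    """Server side: recompute the HMAC and compare in constant time."""
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

The device would then send something like `{"id": ..., "data": ..., "sig": ...}`; the server recomputes the HMAC over the payload, and `hmac.compare_digest` avoids timing side channels. Tampering with the data invalidates the signature, which plain id/password in the body does not give you.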