I am consuming a webservice in my mobile app which requires hashing, among other information, a timestamp of when the request was generated. If the time is out by a few minutes then my requests fail.
Since I cannot rely on my users having the correct time on their devices, I have changed the app to first request the current time from my server and then make the actual request to the webservice. This works, but it seems messy. Am I approaching this the wrong way? Is there a common way of solving this issue?
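Concretely, what I'm doing looks roughly like this (a sketch only; the /api/time endpoint returning epoch seconds is my own server's, and caching the offset is just how I avoid asking for the time before every single request):

```python
import time

import requests  # third-party HTTP client

def fetch_clock_offset(time_url="https://myserver.example/api/time"):
    # Ask the server for its time once and remember the offset from the device clock.
    t0 = time.time()
    server_now = float(requests.get(time_url, timeout=5).text)
    t1 = time.time()
    # Treat the server timestamp as the midpoint of the round trip.
    return server_now - (t0 + t1) / 2.0

offset = fetch_clock_offset()

def signing_timestamp():
    # Use this instead of the raw device clock when hashing the request.
    return int(time.time() + offset)
```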
Related
What kinds of existing options are there to make the client's GET "myserver://api/download/12345.jpg" download from some_cloudfront_server://files/12345.jpg without redirecting the client to that CloudFront path? I.e., I want the client to see only myserver://api/download/12345.jpg the whole time.
It needs to be some kind of channeling solution, as downloading the full file to the Django server first and then sending it on to the client is not an option (it takes so long that the client times out before a response to its query arrives). Are there any existing libraries for this? If I have to build one myself, I'd welcome even just tips on where to start, as Django's communication layer is not too familiar to me.
The problem is that we create CloudFront signatures with wildcards for a certain set of files, in the format files/<object_id>*, thus allowing a client direct CloudFront access only to all files of a given object. This works fine as long as file-access traffic from clients is low, but once we start creating separate access signatures for a hundred different files at the same time, CloudFront starts throttling our requests. The solution I came up with is to create and store on the Django server one generic allow-all signature for files/*, which is used only by the Django server and never given to any client, and then let the Django server decide whether or not it should fetch files for the client. So I can't give the client a CloudFront path with the allow-all signature, but I can channel CloudFront data through Django's endpoint to the client without ever exposing the signature.
Environment I'm working with is Django v1.11, and Django REST Framework v3.4's ViewSets.
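To make the idea concrete, the kind of channeling I have in mind would look roughly like this in a plain Django view (a sketch only; build_signed_cloudfront_url is a hypothetical helper that applies the server-side allow-all signature):

```python
import requests  # third-party HTTP client used for the upstream fetch
from django.http import StreamingHttpResponse

def download(request, object_id):
    # Hypothetical helper: builds the CloudFront URL with the server-only
    # allow-all signature; the client never sees this URL.
    cf_url = build_signed_cloudfront_url(object_id)

    upstream = requests.get(cf_url, stream=True)
    # Stream the body through in chunks instead of buffering the whole file.
    return StreamingHttpResponse(
        upstream.iter_content(chunk_size=64 * 1024),
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type", "application/octet-stream"),
    )
```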
One colleague heard about my problem and mentioned that signatures can be created locally on the server, so no CloudFront connection is needed each time a signature is created and we can keep signatures file-specific. He took that task for himself, so I don't know the details yet.
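From what I gather, the local signing he means is something like botocore's CloudFrontSigner, which computes the signature entirely on our server using the CloudFront key pair, so no call to CloudFront is made per signature (a sketch; the key file path, key-pair ID, and domain below are placeholders):

```python
from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_signer(message):
    # Sign with the private half of the CloudFront key pair (placeholder path).
    with open("cloudfront_private_key.pem", "rb") as key_file:
        key = serialization.load_pem_private_key(key_file.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner("APKAEXAMPLEKEYPAIRID", rsa_signer)  # placeholder key-pair ID

# Signed locally: no request to CloudFront is needed to produce this URL.
signed_url = signer.generate_presigned_url(
    "https://dXXXXXXXXXXXX.cloudfront.net/files/12345.jpg",  # placeholder domain
    date_less_than=datetime.utcnow() + timedelta(minutes=10),
)
```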
What would be a best practice to secure a web service call over HTTP communication channel?
I am aiming to use extra query-string parameters carrying time-limited hash values (generated by a calculation on the client side) to mark a request as valid. The server-side application, which knows the client's algorithm, will understand these values. But this approach is also risky if the client application is decompiled!
So what is the best way? I'm looking for a dynamic algorithm; any thoughts?
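To illustrate what I mean by time-limited hash values, here is a rough sketch (the shared secret, parameter names, and 5-minute window are all my own assumptions, and of course the secret is exposed if the client is decompiled):

```python
import hashlib
import hmac
import time

SECRET_KEY = b"shared-secret"  # assumption: baked into the client, hence the decompiling risk

def sign_request(params):
    # Client side: add a timestamp and an HMAC over the sorted query parameters.
    signed = dict(params, ts=str(int(time.time())))
    payload = "&".join(f"{k}={signed[k]}" for k in sorted(signed))
    signed["sig"] = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return signed

def verify_request(params, max_age=300):
    # Server side: recompute the HMAC and reject stale or tampered requests.
    params = dict(params)
    sig = params.pop("sig", "")
    payload = "&".join(f"{k}={params[k]}" for k in sorted(params))
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    fresh = abs(time.time() - int(params.get("ts", "0"))) <= max_age
    return fresh and hmac.compare_digest(sig, expected)
```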
From your comment it sounds like your need is to guard against "replay attacks" (someone sending the same request twice, when you only want them to be able to send it once). A common way to handle this is to use a nonce. At a basic level, it works like this:
1) The client wants to make a request; before doing that, it asks the server for a nonce, which is a value that will not be reused and shouldn't be predictable (e.g. not simply derived from the time).
2) The client gets the nonce from the server, then makes its request, sending back to the server the nonce it just received.
3) If the client (or another client) then tries to make the request already made in step 2 above, the server won't accept it, because it'll contain a nonce which the server has already marked as "used."
That's a slight oversimplification, but that's how nonces work to prevent replay attacks.
Note: "nonce" comes from "number used once." But nonces don't actually need to be numbers.
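A bare-bones sketch of steps 1-3 (in-memory only; a real server would keep issued nonces in a shared store such as Redis with an expiry):

```python
import secrets

_issued_nonces = set()  # in-memory stand-in for a shared store

def issue_nonce():
    # Steps 1-2: hand the client an unpredictable, single-use value.
    nonce = secrets.token_urlsafe(32)
    _issued_nonces.add(nonce)
    return nonce

def accept_request(nonce):
    # Step 3: each nonce can be consumed exactly once; replays are rejected.
    if nonce not in _issued_nonces:
        return False
    _issued_nonces.discard(nonce)
    return True
```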
I want to prevent "time cheating" via setting the mobile device's clock forward. It's ok for the game to require a network connection. After looking over the numerous "time web service" related resources I'm a bit lost. I just want to submit a request to something like www.gettime.com/utc and parse the result to use in my game.
What's a good web service to use for this purpose?
Note the game only requests UTC time once, on start-up. It looks like I should use an NTP server, but I'm not sure which one is a good choice. Since the URL will be hard-coded in the app, it's important to make a good choice.
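If NTP is the right route, this is roughly what I have in mind (a sketch using the third-party ntplib package and the public pool.ntp.org pool; any host you trust could be substituted):

```python
from datetime import datetime, timezone

import ntplib  # third-party: pip install ntplib

# One NTP query at start-up; pool.ntp.org is the public NTP pool project.
client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)
utc_now = datetime.fromtimestamp(response.tx_time, tz=timezone.utc)
print(utc_now.isoformat())
```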
You have here a 'web service' that offers you the current UTC time in millis. A simple GET request will suffice... If you go to the homepage you will see a similar link that gives you the seconds since the epoch.
I've been assigned a small project and directed to use Mirth Connect as part of the solution. We currently do not use Mirth but because we have an upcoming project that will require an interface engine, I was asked to use it for this project so I can gain experience with it. However, I think it's a poor suggestion for this project; I also know my boss would not want me to implement something that adds unnecessary complexity just for the sake of learning.
With that said, I want to make sure I have valid reasons for suggesting that Mirth Connect should not be used for this project. Neither of us knows much about it, but I think he's been convinced it is the end-all solution for all things interface/webservice related. I appreciate any input I can get from those of you who have more experience with the product than I have.
This is a very simple project in that we have a client needing to make a handful of requests into our system from theirs in order to retrieve and update data. For example, they will make a request to get patient demographics, a request to add an admission for a patient, a request to get a list of possible care settings from our application, etc. For this project we will not use HL7 but a set of predefined XML messages.
Both the client's application and our application reside on the client's network.
They do not want to build any services of their own, so the services we build need to handle all of the work. The results returned in response to their calls to the services will be returned as XML.
There are no plans to integrate any other applications with theirs or ours in the foreseeable future.
It seems to me the best option would be for us to build a standalone web service that would take their request and send back an XML response. I just don't see any reason to include Mirth Connect in the picture (other than for learning but that can be gained in other ways).
What are your thoughts? Is it true that the interface engine is not a good choice if the client wants to receive data from our system without having a receiving mechanism on their end? In other words, they want to make a web service call such as GetCareSettings and to get a response back with an XML representation of all the possible care settings in our system. It seems to me they would need a web service on their end for Mirth to use as a destination to send the results. All Mirth is going to send back is an ACK message, correct? (Unless of course it wrote the data to another webservice on the client end, which they have said they do not want to do.)
Thanks for taking the time to read this. I hope my lack of knowledge and understanding of Mirth Connect and the use of interface engines hasn't made this question difficult to answer.
From what I understand, your client appears to be either a lab or a third-party service vendor who will take inputs from your application such as patient demographic charts, appointments, provider details, etc. Basically, they want to query your application.
A) HL7: It has the capacity to handle query requests and responses with demographics. I am assuming you already know about QRY messages.
B) XML/web services/SOAP: still a viable solution, a little more concrete, and it can be expanded to handle custom requests like GetCareSettings or anything else. The vendor is not just interested in fetching patient-related data but also in other inputs for which HL7 might not be enough.
As for the approach, my professional advice is to use an interface engine. You are not limited to Mirth Connect; you could also use Iguana if you want. One good reason that comes instantly to mind is that an engine gives you an advantage in troubleshooting, support, and maintenance.
Your webservice responses can be handled easily by the HTTP Sender connector type and through RESTful webservices.
The engine is also capable of handling large volumes of requests and responses at the same time, which may not be required right now but, I think, will be later on. Your source connector in the channel would then be a Web Service Listener.
Another good approach is to do away with XML and use JSON for handling requests and responses; it is much more lightweight than XML, which saves you network overhead. We are doing some similar work, sending requests to a webservice as JSON.
Overall, Mirth is there to make your life easier.
Good Luck!
From what I know, when seeding or leeching a torrent, your IP is recorded on the tracker and remains there for a few hours or days. How do I manually tell the tracker, using libtorrent, that I am no longer going to be connected and that it should forget my IP, since I am neither seeding nor leeching? Any code bits or advice would be appreciated; currently I am using the Python bindings provided by Rasterbar, but I am okay with C++ code too.
Trackers are just HTTP services (although poorly designed). See the BitTorrent tracker protocol, in particular the event query parameter. In Python, you can use urllib.
libtorrent automatically does this when stopping a torrent or stopping the session. If it seems to fail, you might want to increase the tracker timeout when shutting down. This will add to the shutdown delay, but it gives more heavily loaded trackers extra time to respond. See session_settings::stop_tracker_timeout. By default this is 5 seconds, but sometimes trackers take much longer than that to respond, up to 30 seconds.
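For example, with the Python bindings something like this raises the timeout (a sketch assuming a libtorrent version whose bindings accept a settings dict; on older versions the same field lives on session_settings and is applied with set_settings):

```python
import libtorrent as lt

ses = lt.session()
# Give trackers up to 30 seconds to acknowledge the "stopped" announce on
# shutdown (the default is 5 seconds).
ses.apply_settings({"stop_tracker_timeout": 30})
```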
Trackers typically time out peers in about an hour, and you need to re-announce every 30 minutes to stay alive.
If you're just trying to send the stopped event to trackers with a separate bittorrent client (in this case, assuming whatever client you're using fails to send stopped events to the trackers itself), it might be a bit less reliable.
You're supposed to include the info-hash (i.e. the unique identifier for the torrent), your key (which the client generates on startup), the peer-id (also generated by the client), and transfer statistics in the tracker request.
You can get away with omitting the statistics, but if you don't know the info-hash or the client's key (and in some cases the peer-id), the tracker won't be able to match your request to your client's original announce, and it won't remove your IP.
In practice, for the most part you might be able to get it to work just by knowing the info-hash and the tracker URL; you can get both by loading the .torrent file and pulling them out of it.
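If you do go the manual route, a minimal sketch of a "stopped" announce with urllib looks like this (the tracker URL, info-hash, peer-id, and key all have to match what your client originally announced with):

```python
import urllib.parse
import urllib.request

def announce_stopped(tracker_url, info_hash, peer_id, key, port=6881):
    # info_hash and peer_id are the raw 20-byte values; urlencode percent-encodes them.
    params = {
        "info_hash": info_hash,
        "peer_id": peer_id,
        "key": key,
        "port": port,
        "uploaded": 0,
        "downloaded": 0,
        "left": 0,
        "event": "stopped",  # tells the tracker to drop this peer
    }
    url = tracker_url + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url, timeout=30) as resp:
        return resp.read()   # bencoded tracker response
```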