From what I know, when seeding or leeching a torrent, your IP is on the tracker and remains there for a few hours or days. How do I manually tell the tracker, using libtorrent, that I am no longer going to be connected and that it should forget my IP, since I am neither seeding nor leeching? Any code bits or advice would be appreciated. Currently I am using the Python bindings provided by Rasterbar, but I am okay with C++ code too.
Trackers are just HTTP services (although poorly designed). See BitTorrent Tracker Protocol, in particular, the event query parameter. In Python, you can use urllib.
libtorrent automatically does this when stopping a torrent, or stopping the session. If it seems to fail, you might want to increase the tracker timeout when shutting down. This will add to the shutdown delay, but will give overloaded trackers some more time to respond. See session_settings::stop_tracker_timeout. By default this is 5 seconds, but sometimes trackers take much longer than that to respond, up to 30 seconds.
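Something along these lines should work, as a rough sketch (it uses the older session_settings API named above; recent libtorrent versions expose the same setting through settings_pack):

    #include <libtorrent/session.hpp>
    #include <libtorrent/session_settings.hpp>

    int main()
    {
        libtorrent::session ses;

        // give slow or overloaded trackers up to 30 seconds to acknowledge the
        // "stopped" announce that is sent when a torrent or the session is stopped
        libtorrent::session_settings s = ses.settings();
        s.stop_tracker_timeout = 30;
        ses.set_settings(s);

        // ... add torrents, seed/leech ...

        // destroying the session (or stopping a torrent) sends the "stopped"
        // event to every tracker and waits up to stop_tracker_timeout for replies
        return 0;
    }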
Trackers typically time out peers in about an hour, and you need to re-announce every 30 minutes to stay alive.
If you're trying to send the stopped event to the trackers from a separate bittorrent client or tool (in this case, assuming whatever client you're using fails to send stopped events to the trackers itself), it might be a bit less reliable.
You're supposed to include the info-hash (i.e. the unique identifier for the torrent), your key (which the client generates on startup), the peer-id (also generated by the client) and transfer statistics in the tracker request.
You can get away with omitting the statistics, but if you don't know the info-hash or the client key, and in some cases the peer-id, the tracker won't be able to tell that your request refers to your client's earlier announce, and it won't remove your IP.
In practice you can often get it to work knowing just the info-hash and the tracker URL, both of which you can get by loading the .torrent file and reading them out of it.
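For illustration, here is a rough C++ sketch of such a hand-rolled "stopped" announce. It assumes libcurl is available, and the peer_id and port are placeholders that would have to match whatever your client originally announced with:

    #include <curl/curl.h>
    #include <libtorrent/torrent_info.hpp>
    #include <sstream>

    int main()
    {
        // read the info-hash and tracker URL out of the .torrent file
        libtorrent::torrent_info ti("example.torrent");
        libtorrent::sha1_hash ih = ti.info_hash();

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();

        // the raw 20-byte info-hash and the peer-id must be percent-encoded
        // (sha1_hash::data() exists in libtorrent 1.1+; older versions use begin())
        char* info_hash = curl_easy_escape(curl, ih.data(), 20);
        char* peer_id   = curl_easy_escape(curl, "-XX0001-012345678901", 20);  // placeholder

        std::ostringstream url;
        url << ti.trackers().front().url
            << "?info_hash=" << info_hash
            << "&peer_id=" << peer_id
            << "&port=6881&uploaded=0&downloaded=0&left=0"
            << "&event=stopped";   // some trackers also require the client's &key=...

        curl_easy_setopt(curl, CURLOPT_URL, url.str().c_str());
        curl_easy_perform(curl);   // the tracker's reply is a bencoded dictionary

        curl_free(info_hash);
        curl_free(peer_id);
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }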
I am writing a (Django-based) website which is working just fine. It displays a list of sensors and their status. If a new sensor is attached, the user needs to wait for a certain amount of time until it is warmed up and ready to use. Also, when the sensors are updated (which the user can trigger, but can also be done automatically by the system) - the user needs to wait.
On the server side I have all signals/status updates/whatever available. Now I want to create an overlay for the current web page where the status change is displayed for x seconds and user input is disabled.
I have no clue what technology to use. I could frequently ask for updates client -> server but that doesn't feel like the correct way. Any suggestions on what to search for?
No code here because the answer is probably independent of my website code.
The standard solution is to use Ajax (JavaScript) or similar to poll state from your backend at regular intervals; that is the approach you're mentioning.
You can also "push" changes from your backend to the frontend using WebSockets, but that is a bit more complex. A popular framework is socket.io; I recommend you take a look at it.
I am currently developing a tool that automatically connects and authenticates users to certain wireless hotspots under given circumstances.
To test if the device is behind a captive portal I send an HTTP request via WinInet and check if it gets redirected (yes, I am aware of NCSI, but it does not work correctly in this case).
If I do that directly after I get the callback for a successful WLAN connection, I receive error 12007 (name not resolved), which I assume is because the IP configuration is not fully applied at that point. If I put in a Sleep() for 2-3 seconds I don't receive the error (since I have one of the faster devices in our hardware line-up, it might vary on other target devices).
Is there a way I can programmatically check if the config has been fully applied to the interface?
Target OS is Windows 7.
Retrying as Jon suggests is not really a feasible option in this case, since I have to enable a hotspot registration mode in the firewall which closes again after a certain number of network operations, which is why I would like to avoid it.
Normally for a situation like this, if your error is catchable, you would retry for a certain amount of time and then give up (time out) with the most recent error. This is simpler, and it's the same logic the OS would be implementing anyway.
So, in this case I would:
For X(default 30) seconds at most {
test if I can get a dns resolution
delay 1 second
}
Quite possibly the easiest solution would be to do a DNS lookup, using a randomly generated name within a domain that you control. E.g. 79BF2DA7-EE45-4E11-89A4-45EEF2838003.guid.example.com. This should of course fail, but it has to fail by returning a negative response from the DNS server. And that DNS server has to be reachable to return a negative response.
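A rough Win32 sketch of that check (the GUID-style host name is just a placeholder for a name in a domain you control): WSAHOST_NOT_FOUND from getaddrinfo means the DNS server actually answered, with a negative response, so name resolution is working; WSATRY_AGAIN or other errors mean the interface isn't ready yet.

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #pragma comment(lib, "ws2_32.lib")

    static bool dns_ready()
    {
        addrinfo* res = NULL;
        int rc = getaddrinfo("79bf2da7-ee45-4e11-89a4-45eef2838003.guid.example.com",
                             NULL, NULL, &res);
        if (rc == 0) { freeaddrinfo(res); return true; }   // unexpectedly resolved
        return rc == WSAHOST_NOT_FOUND;                    // negative answer => resolver reachable
    }

    // poll for up to 30 seconds before giving up
    static bool wait_for_dns()
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);
        for (int i = 0; i < 30; ++i)
        {
            if (dns_ready()) return true;
            Sleep(1000);
        }
        return false;
    }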
I'm putting together a website that will track user-defined events with time limits. Every user would be free to create events, and when the time limit expired, the server would need to take some action based on the outcome of the event. The specific component I'm struggling with is the time-keeping: think like eBay's auction clock -- it's set to expire at a certain time, clearly runs server-side, and takes some action when the time runs out. Searches for a "server side timer," unfortunately, just bring back results for a timer that gets the time from the server instead of the client. :(
The most obvious solution is to run a script on the server, some program that would watch all the clocks and take action when any of them expired. Tragically, I'll be using free web hosting, and sincerely doubt that I'll be able to find someone who'll let me run arbitrary stuff on their servers.
The solutions that I've looked into:
Major concept option 1: persuade each user's browser to run the necessary timers (trivial javascript), and when the timers expire, take necessary action. The problem with this approach is obvious: there could be hundreds, if not thousands, of simultaneous expiring timers (they'll tend to expire in clusters), and the worst case is that every possible user could be viewing their timer expire. That's a server overload waiting to happen at the worst possible instant.
Major concept option 2: have one really trusted browser, say, a user logged in to the website as "cron" which could run all of the timers at once. The action would all happen in that browser's javascript, and would work great, as long as that browser never crashed, that machine never failed, and that internet connection never went down.
As you can see, I feel like I'm barking up the wrong forest on this problem. Some other ideas that have presented themselves:
AJAX: I'm not seeing anything here that will do quite what I need. It's all browser-run stuff, nothing like a server-side process that could run independent of the user's browser.
PHP: Runs neatly on the server, but only in response to client requests. I'm not seeing any clean way to make PHP fork off a process and run a timer independent of the user's browser.
JS: same problems as PHP, but easier to read. ;)
Ruby: There may be some multi-threading with Ruby, but it isn't readily apparent to me. Would it be possible to have each user's browser check to see if a timer process was running for their event, and spawn a new server-side ruby process if it wasn't?
I'm wide open for ideas -- I've started playing with concepts in JS and PHP, but I'm not tied to any language, particularly. The only constraint, really, is that I won't own the server that I'm running the site on, so I can't just run a neat little local process that does what I need it to do. :(
Any thoughts? Thanks in advance,
Dan
ASP.NET has multi-threading. You can have a static variable to collect the event data, and use a thread to do whatever is needed when the time comes. Afterwards you can empty the static variable so it's ready for future use.
http://leedale.wordpress.com/2007/07/22/multithreading-with-aspnet-20/
You might want to take a look at the Quartz scheduler for Java which also has a .NET version. With a friendly open source license (Apache 2.0) this is probably a very good starting point.
If you can control cron jobs, which at least I could on HostPapa's shared hosting, you could run a PHP file every minute (cron's finest granularity) which checks the timers and takes action based on them.
I would suggest AJAX anyway. What we did on a game server was emulate "server connects to client" via an AJAX request to the server without any time-out (an asynchronous, hanging connection). Basically you create one extra connection for each client that hangs on the server and waits for the server to take self-invoked action. After the action is done you start a new hanging connection immediately, so you have one hanging all the time (so the server can talk to your client any time it wants). You can send JavaScript code from the server that decides what happens next. On the server side you can check that clients have these hanging connections to count them as valid, and of course run your timers on the server.
I've got a short-lived client process that talks to a server over SSL. The process is invoked frequently and only runs for a short time (typically for less than 1 second). This process is intended to be used as part of a shell script used to perform larger tasks and may be invoked pretty frequently.
The SSL handshaking it performs each time it starts up is showing up as a significant performance bottleneck in my tests and I'd like to reduce this if possible.
One thing that comes to mind is taking the session ID and storing it somewhere (kind of like a cookie), and then re-using it on the next invocation; however, this makes me feel uneasy, as I think there would be some security concerns around doing this.
So, I've got a couple of questions,
Is this a bad idea?
Is this even possible using OpenSSL?
Are there any better ways to speed up the SSL handshaking process?
After the handshake, you can get the SSL session information from your connection with SSL_get_session(). You can then use i2d_SSL_SESSION() to serialise it into a form that can be written to disk.
When you next want to connect to the same server, you can load the session information from disk, then unserialise it with d2i_SSL_SESSION() and use SSL_set_session() to set it (prior to SSL_connect()).
The on-disk SSL session should be readable only by the user that the tool runs as, and stale sessions should be overwritten and removed frequently.
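A rough sketch of that save/restore cycle (error handling is omitted, the file path is arbitrary, and a real implementation should ask i2d_SSL_SESSION for the required length instead of using a fixed buffer):

    #include <openssl/ssl.h>
    #include <stdio.h>

    // after a successful SSL_connect(ssl): serialise the session to disk
    static void save_session(SSL *ssl, const char *path)
    {
        SSL_SESSION *sess = SSL_get_session(ssl);
        unsigned char buf[8192], *p = buf;
        int len = i2d_SSL_SESSION(sess, &p);        // DER-encode into buf
        FILE *f = fopen(path, "wb");
        fwrite(buf, 1, len, f);
        fclose(f);
    }

    // before SSL_connect(ssl) on the next run: load and offer the saved session
    static void load_session(SSL *ssl, const char *path)
    {
        unsigned char buf[8192];
        FILE *f = fopen(path, "rb");
        if (!f) return;                             // no cached session yet
        int len = (int)fread(buf, 1, sizeof(buf), f);
        fclose(f);
        const unsigned char *p = buf;
        SSL_SESSION *sess = d2i_SSL_SESSION(NULL, &p, len);
        if (sess)
        {
            SSL_set_session(ssl, sess);             // ask to resume this session
            SSL_SESSION_free(sess);                 // SSL_set_session keeps its own reference
        }
    }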
You should be able to use a session cache securely (which OpenSSL supports); see the documentation on SSL_CTX_set_session_cache_mode, SSL_set_session and SSL_session_reused for more information on how this is achieved.
Could you perhaps use a persistent connection, so the setup is a one-time cost?
You could abstract away the connection logic so your client code still thinks it's doing a connect/process/disconnect cycle.
Interestingly enough I encountered an issue with OpenSSL handshakes just today. The implementation of RAND_poll, on Windows, uses the Windows heap APIs as a source of random entropy.
Unfortunately, due to a "bug fix" in Windows 7 (and Server 2008) the heap enumeration APIs (which are debugging APIs, after all) can now take over a second per call once the heap is full of allocations. This means that both SSL connects and accepts can take anywhere from 1 second to more than a few minutes.
The ticket contains some good suggestions on how to patch OpenSSL to achieve far, FAR faster handshakes.
(Edited to try to explain better)
We have an agent, written in C++ for Win32. It needs to periodically post information to a server. It must support disconnected operation. That is: the client doesn't always have a connection to the server.
Note: This is for communication between an agent running on desktop PCs, to communicate with a server running somewhere in the enterprise.
This means that the messages to be sent to the server must be queued (so that they can be sent once the connection is available).
We currently use an in-house system that queues messages as individual files on disk, and uses HTTP POST to send them to the server when it's available.
It's starting to show its age, and I'd like to investigate alternatives before I consider updating it.
It must be available by default on Windows XP SP2, Windows Vista and Windows 7, or must be simple to include in our installer.
This product will be installed (by administrators) on a couple of hundred thousand PCs. They'll probably use something like Microsoft SMS or ConfigMgr. In this scenario, "frivolous" prerequisites are frowned upon. This means that, unless the client-side code (or a redistributable) can be included in our installer, the administrator won't be happy. This makes MSMQ a particularly hard sell, because it's not installed by default with XP.
It must be relatively simple to use from C++ on Win32.
Our client is an unmanaged C++ Win32 application. No .NET or Java on the client.
The transport should be HTTP or HTTPS. That is: it must go through firewalls easily; no RPC or DCOM.
It should be relatively reliable, with retries, etc. Protection against replays is a must-have.
It must be scalable -- there's a lot of traffic. Per-message impact on the server should be minimal.
The server end is C#, currently using ASP.NET to implement a simple HTTP POST mechanism.
(The slightly odd one). It must support client-side in-memory queues, so that we can avoid spinning up the hard disk. It must allow flushing to disk periodically.
It must be suitable for use in a proprietary product (i.e. no GPL, etc.).
How is your current solution showing its age?
I would push the logic on to the back end, and make the clients extremely simple.
Messages are simply stored in the file system. Have the client write to c:/queue/{uuid}.tmp. When the file is written, rename it to c:/queue/{uuid}.msg. This makes writing messages to the queue on the client "atomic".
A C++ thread wakes up, scans c:\queue for "*.msg" files, and if it finds one it checks for the server and HTTP POSTs the message to it. When it receives the 200 status back from the server (i.e. it has got the message), it can delete the file. It only scans for *.msg files; the *.tmp files may still be being written to, and you'd have a race condition trying to send a .msg file that was still being written. That's what the rename from .tmp is for. I'd also suggest scanning by creation date so early messages go first.
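A rough C++ sketch of that client-side half (names are mine, and post_to_server is a hypothetical helper that does the HTTP POST and returns true on a 200 response):

    #include <windows.h>
    #include <fstream>
    #include <string>

    // hypothetical helper: HTTP POST the file's contents, return true on a 200 response
    bool post_to_server(const std::string& path);

    // enqueue: write to a .tmp file, then rename so it appears atomically as a .msg file
    void enqueue(const std::string& dir, const std::string& uuid, const std::string& body)
    {
        std::string tmp = dir + "\\" + uuid + ".tmp";
        std::string msg = dir + "\\" + uuid + ".msg";
        std::ofstream out(tmp.c_str(), std::ios::binary);
        out << body;
        out.close();
        MoveFileExA(tmp.c_str(), msg.c_str(), MOVEFILE_REPLACE_EXISTING);
    }

    // sender thread: scan for *.msg files, POST each one, delete only after the 200 ack
    // (a real implementation would sort the files by creation time first)
    void drain(const std::string& dir)
    {
        WIN32_FIND_DATAA fd;
        HANDLE h = FindFirstFileA((dir + "\\*.msg").c_str(), &fd);
        if (h == INVALID_HANDLE_VALUE) return;      // nothing queued yet
        do
        {
            std::string path = dir + "\\" + fd.cFileName;
            if (post_to_server(path))
                DeleteFileA(path.c_str());          // ack received; safe to drop the file
        } while (FindNextFileA(h, &fd));
        FindClose(h);
    }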
Your server receives the message, and here it can do any necessary dupe checking. Push this burden onto the server to centralize it. You could simply record every uuid for every message to do duplicate elimination. If that list gets too long (I don't know your traffic volume), perhaps you can cull items older than 30 days (I also don't know how long your clients can remain offline).
This system is simple, but pretty robust. If the file sending thread gets an error, it will simply try to send the file next time. The only time you should be getting a duplicate message is in the window between when the client gets the 200 ack from the server and when it deletes the file. If the client shuts down or crashes at that point, you will have a file that has been sent but not removed from the queue.
If your clients are stable, this is a pretty low risk. With dupe checking based on the message ID, you can mitigate that at the cost of some bookkeeping; maintaining a list of uuids isn't spectacularly daunting, but again it does depend on your message volume and other performance requirements.
The fact that you are allowed to work "offline" suggests you have some "slack" in your absolute messaging performance.
To be honest, the requirements listed don't make a lot of sense and show you have a long way to go in your MQ learning. Given that, if you don't want to use MSMQ (probably the easiest overall on Windows -- but with [IMO severe] limitations), then you should look into:
qpid - Decent use of AMQP standard
zeromq - (the best, IMO, technically but also requires the most familiarity with MQ technologies)
I'd recommend rabbitmq too, but that's an Erlang server and last I looked it didn't have usable C or C++ libraries. Still, if you are shopping for an MQ, take a look at it...
[EDIT]
I've gone back and reread your reqs as well as some of your comments and think, for you, that perhaps client MQ -> server is not your best option. I would maybe consider letting your client -> server operations be HTTP POST or SOAP and allow the HTTP endpoint in turn queue messages on your MQ backend. IOW, abstract away the MQ client into an architecture you have more control over. Then your C++ client would simply be HTTP (easy), and your HTTP service (likely C# / .Net from reading your comments) can interact with any MQ backend of your choice. If all your HTTP endpoint does is spawn MQ messages, it'll be pretty darned lightweight and can scale through all the traditional load balancing techniques.
Last time I wanted to do any messaging I used C# and MSMQ. There are MSMQ libraries available that make using MSMQ very easy. It's free to install on your servers and has never lost a message to this day. It handles reboots etc. all by itself. It's a thing of beauty, and hundreds of thousands of messages are processed daily.
I'm not sure why you ruled out MSMQ and I didn't get point 2.
Quite often for queues we just dump record data into a database table and another process lifts rows out of the table periodically.
How about using the Asynchronous Agents Library that ships with Visual Studio 2010 (the Concurrency Runtime)? It is still in beta, though.
http://msdn.microsoft.com/en-us/library/dd492627(VS.100).aspx