I'm new to server technology and don't really understand how servers work (I hope you can shed some light on this as well).
My problem is basically this: I have a Firebase Database that I need to update every 20 seconds, all day long. The way I think I should solve it is to send an HTTP POST request to the Firebase database every 20 seconds, which means I need a server running a piece of code that sends that HTTP request every 20 seconds. I'm not sure whether this is the right approach, and even if it is, I don't know how to implement it.
Some questions I have are:
Do I definitely need to create a server for this? If so, what platform is recommended for writing my server code (preferably a free one)?
I have tried reading up on the available platforms such as AWS and Google Cloud, but I don't really get the terminology used. Are there any tutorials available for this?
I am really lost and have been stuck on this for some time; any help is deeply appreciated.
This is achievable by leveraging CloudWatch Events: specifically, a rule with a rate expression that publishes to an SNS topic, which can then hit your HTTP endpoint.
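A rough sketch of that wiring with boto3 might look like the following; note that rate expressions have a minimum granularity of one minute, so an exact 20-second interval isn't possible with this mechanism alone, and the rule name and topic ARN below are placeholders.
import boto3

events = boto3.client("events")
sns_topic_arn = "arn:aws:sns:us-east-1:123456789012:my-update-topic"  # placeholder ARN

# Create a scheduled rule; rate expressions cannot go below one minute.
events.put_rule(
    Name="firebase-update-schedule",  # placeholder rule name
    ScheduleExpression="rate(1 minute)",
    State="ENABLED",
)

# Point the rule at the SNS topic. The topic's access policy must allow
# events.amazonaws.com to publish, and its subscription then hits your endpoint.
events.put_targets(
    Rule="firebase-update-schedule",
    Targets=[{"Id": "sns-target", "Arn": sns_topic_arn}],
)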
Hope that helps!
I would suggest that you try to keep everything within Firebase. Create a Firebase Cloud Function that sends the HTTP request for the update, and use functions-cron, a cron-like scheduler for Firebase, to schedule it.
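To make the moving parts concrete: the function's job each cycle is just a small write. Here is a hedged Python sketch of that write using the Firebase Admin SDK in a plain loop; the credentials file, database URL, path and payload are all placeholders, and the actual Cloud Function would be triggered by the scheduler rather than a while loop.
import time

import firebase_admin
from firebase_admin import credentials, db

cred = firebase_admin.credentials.Certificate("serviceAccountKey.json")  # placeholder
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://your-project.firebaseio.com",  # placeholder
})

ref = db.reference("status/latest")  # placeholder path

while True:
    # Write the periodic update; replace the payload with your real data.
    ref.set({"updated_at": int(time.time())})
    time.sleep(20)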
I would like to implement a queuing mechanism for sending out email via PHPMailer on Amazon EC2. I have set up Beanstalkd correctly on the server and can access it via a console. The mail doesn't seem to go through (I have tried various combinations of the sample code). In addition, do I also need to set up a cron job that would call one of the producer or consumer files?
Does anyone have working code for sending out email via PHPMailer/Pheanstalk on Amazon EC2?
Thanks.
Beanstalkd is great, and I use it myself; however, don't use it for this. It's reinventing the wheel in a bad way. Instead, install a local mail server such as Postfix and let it do your queuing for you. This is also much simpler, faster, and easier to control. Mail servers are built for managing queues, and they are extremely good at it.
Before you do so, get your mail sending script working – there's no point in even attempting to get something more complex working until you've done that. Also be aware that sending email from EC2 is difficult – Amazon wants you to use their SES service rather than sending directly – you may find sending is blocked altogether. Read the PHPMailer troubleshooting guide to see how to diagnose that.
Background:
I have a local application that processes the user input for about 3 seconds and then returns an answer (output) to the user.
(I don't want to go into details about my application, in order not to complicate the question and to keep it a purely architectural one.)
My Goal:
I want to turn my application into a service in the cloud and expose an API
(for the upcoming website and for clients that will connect to the service without installing the software locally).
Possible Solutions:
Deploy a WCF service in the cloud and host my application there, so clients can invoke the service and use my application in the cloud (RPC style).
Use a Web API that inserts the request into a queue; a worker role then dequeues requests and posts the results to a DB. The client sends one request to create the job in the queue and another request to get the result (which the Web API reads from the DB).
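To make this concrete, a minimal sketch of that flow from the client's side; the endpoint URLs and JSON fields are invented for illustration.
import time

import requests

BASE = "https://api.example.com"  # placeholder service URL

# One request to enqueue the work...
job = requests.post(f"{BASE}/jobs", json={"input": "user data"}).json()
job_id = job["id"]

# ...and further requests to fetch the result once the worker role is done.
while True:
    result = requests.get(f"{BASE}/jobs/{job_id}").json()
    if result["status"] == "done":
        print(result["output"])
        break
    time.sleep(1)  # the ~3 second processing time makes short polling tolerable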
The Problems:
If I go with the WCF solution (#1), I can't handle large loads of requests, maybe 10-20 simultaneous ones.
If I go with the WebAPI-Queue-WorkerRole solution (#2), the client will sometimes need to request the results multiple times, which can be a problem.
If I go with the WebAPI-Queue-WorkerRole solution (#2), the process isn't synchronous: the client will not get the result as soon as his request has been processed; he needs to ask for it.
Questions:
In the WebAPI-Queue-WorkerRole solution (#2), can I somehow alert the client once his request has been processed, so I can save the client multiple requests for the result?
Isn't asking multiple times for the result outdated? I remember that 10-15 years ago it was accepted, but is it still? I know that the VirusTotal API uses this kind of design.
Is there a better solution? One that will handle large loads and will be synchronous or asynchronous (returning the result to the client once it is done)?
Thank you.
If you're using Azure, why not simply fire up more servers and use load balancing to handle more load? In that way, as your load increases, you have more servers to handle the requests.
Microsoft recently made available the Azure Service Fabric, which gives you a lot of control over spinning up and shutting down these services.
I am writing an ASP.NET web page which calls an API to update my client's property website using XML data. The data from the API is real-time, so I would like to run the page every 10 minutes.
Clearly I don't want to load my page manually to keep my client's property website up-to-date. There is a lot of help on Stack Overflow and elsewhere on this type of question, but I have become a little overwhelmed by the options. I think that one way to go would be:
Windows Task Scheduler to fire every ten minutes (to trigger a VB.Net Service)
VB.Net Service (to run the web page)
My page runs.
That feels like overkill: I haven't written a Windows service or used the Task Scheduler, and it feels like there should be two steps, not three.
Now, if I do use a VB.Net service, then I think it might be better to give more of the work to the service rather than put my script in a web page, but I am used to writing web pages!
I can't help feeling that if I just keep the page open in a browser somewhere I can easily use JavaScript to run the page every 10 minutes, but that means ensuring it's open in a browser. A bad solution, I think...
What I need is an overview of my options so I can make an informed decision, and if it means learning something new, then fine. Thanks in advance!
You can use JavaScript/jQuery to call a page or web method continuously at a timed interval:
var x = 10; // your time interval in minutes, in your case 10
setInterval(function () {
    // call your page or web method here (e.g. with $.ajax or fetch)
}, 1000 * 60 * x);
In my opinion, the best approach would be to create a Windows service and have the service call the web page. A Windows service is more reliable than the Task Scheduler, because scheduled tasks can overlap if the previous run did not finish. Using a Windows service also gives you more control over error handling and logging.
Get started with this link:
http://code.msdn.microsoft.com/windowsdesktop/CSWindowsService-9f2f568e
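Whichever host you pick (Windows service, scheduled task, or something else), the body of the worker is essentially a loop that requests the page on an interval and logs failures. Here is a rough illustration of that shape, sketched in Python with a placeholder URL; a real Windows service written in VB.Net would host the equivalent loop.
import logging
import time

import requests

logging.basicConfig(filename="updater.log", level=logging.INFO)
URL = "https://example.com/update-properties.aspx"  # placeholder page URL

while True:
    try:
        # Hit the page that pulls the XML data and updates the property site.
        response = requests.get(URL, timeout=60)
        response.raise_for_status()
        logging.info("Update succeeded: %s", response.status_code)
    except requests.RequestException as exc:
        logging.error("Update failed: %s", exc)
    time.sleep(10 * 60)  # run every 10 minutes, runs never overlap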
What should I use to implement Comet on Django?
Everything I've found on Google seems outdated. Some people point to orbited.org or hookbox.org, but both of them are dead now. How do people solve this problem nowadays?
I would check out Pusher which is a third party service that will allow you to push events to your browser with a drop-dead simple API. They have a free sandbox plan that comes with 100k messages per day and 20 connections.
Alternatively, you could run APE on your server to push events down.
Django isn't really designed for long polling and comet.
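If you go the Pusher route, the server side is tiny. Here is a hedged sketch using the official pusher Python package; the app credentials, channel and event names are placeholders, and the browser would subscribe to the same channel with Pusher's JavaScript client.
import pusher

pusher_client = pusher.Pusher(
    app_id="your-app-id",  # placeholder
    key="your-key",        # placeholder
    secret="your-secret",  # placeholder
    cluster="eu",          # placeholder
)

def notify_new_comment(comment_text):
    # Push the event to every browser subscribed to the "comments" channel.
    pusher_client.trigger("comments", "new-comment", {"text": comment_text})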
While building this web service and the app that calls it, we have noticed that the first call to the web service each day is extremely slow, and on some days it even times out. However, every call after that works great. Can anybody shed light on why this might be and how we can get rid of this pain?
Thanks in advance!
If it's an ASP.NET web service, it may be the CLR initializing, loading, and verifying the assemblies for the first time. You may want to consider pre-compilation.
I agree with the other answers on caching, initialization, etc. As for a workaround, one possibility may be to set up some sort of daily task (SQL Server job, Windows service, something else?) to simulate a hit to the service each day, so that your users don't experience this first slow request.
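A daily warm-up like that can be as small as a scheduled script that makes one throwaway request before your users arrive. A hedged Python sketch, with a placeholder service URL, meant to be run from Task Scheduler, cron, or a SQL Server job:
import requests

WARMUP_URL = "https://example.com/MyService.asmx"  # placeholder service URL

try:
    # Generous timeout, since the cold call is exactly the slow one.
    requests.get(WARMUP_URL, timeout=300)
except requests.RequestException:
    pass  # the point is only to wake the service up; ignore failures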
If it is an ASP.NET web service, then you might want to check the settings of the application pool the web service is running in, especially the idle timeout, which defaults to 20 minutes in IIS 7.
Configuring IIS7 idle-timeout
Even if it is not an ASP.NET web service, other web servers will have equivalent configuration settings you have to tweak to keep your web service alive overnight.
Can you reproduce the same behavior on your database? It could just be the DB needing to optimise the query for the first run (maybe the parameter is today's date?).
Are there a lot of static constructors or much set-up code in the Global.asax class? Because IIS recycles worker processes periodically, the start-up code may be running again.
The rule for optimization is: don't guess. Put in profiling to find out exactly what is slow, and then work to make that faster. Everything already posted provides excellent tips on where to start looking for slowness.