Best option to schedule payments: Azure Scheduler, WebJob, Azure Functions, or a Worker Role?

I've hosted my website on Azure and now I want to schedule payments on a monthly basis. I am using Authorize.net for payments, but I cannot use their recurring billing feature because it gives very little control. I have to perform checks in the database, make payments, and update records. What should I use: Azure Scheduler, an Azure WebJob, Azure Functions, or a Worker Role?

Definitely not a Worker Role. They are very heavyweight and generally not worth the effort for a single, simple job like this.
A WebJob might be a good solution. It can run in the context of your web app, so you can use it at no additional cost. But you'll need to do some development here - you have to create an app that calls Authorize.net.
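For illustration, here's a rough sketch of what that WebJob's logic could look like, in Python (a WebJob can be any executable or script). The table schema, SQL, and Authorize.net payload fields are illustrative assumptions, not a definitive implementation - check the Authorize.net API reference for the exact request schema.

```python
# Sketch of a monthly-billing WebJob: check the database, charge due
# subscriptions via Authorize.net, and update the records.
import os
import pyodbc    # assumes an ODBC driver is available on the App Service
import requests

AUTHNET_URL = "https://apitest.authorize.net/xml/v1/request.api"  # sandbox endpoint

def charge(customer_profile_id: str, amount: str) -> dict:
    # Illustrative createTransactionRequest; consult the Authorize.net docs
    # for the exact schema your integration requires.
    payload = {
        "createTransactionRequest": {
            "merchantAuthentication": {
                "name": os.environ["AUTHNET_LOGIN_ID"],
                "transactionKey": os.environ["AUTHNET_TRANSACTION_KEY"],
            },
            "transactionRequest": {
                "transactionType": "authCaptureTransaction",
                "amount": amount,
                "profile": {"customerProfileId": customer_profile_id},
            },
        }
    }
    resp = requests.post(AUTHNET_URL, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

def run() -> None:
    conn = pyodbc.connect(os.environ["SQL_CONNECTION_STRING"])
    cursor = conn.cursor()
    # Hypothetical schema: find subscriptions due for billing.
    cursor.execute(
        "SELECT Id, ProfileId, Amount FROM Subscriptions "
        "WHERE NextBillingDate <= GETDATE()"
    )
    for sub_id, profile_id, amount in cursor.fetchall():
        charge(profile_id, str(amount))
        cursor.execute(
            "UPDATE Subscriptions "
            "SET NextBillingDate = DATEADD(month, 1, NextBillingDate) "
            "WHERE Id = ?",
            sub_id,
        )
        conn.commit()

if __name__ == "__main__":
    run()
```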
If you only need to fire a single HTTP request, then using Azure Scheduler to schedule this HTTP action might be a good choice. You can configure the request itself (headers, payload) and it has error handling as well. But you might have to store sensitive information in the Azure portal, in the configuration of the scheduled job.
So I'd say forget about the Worker Role, then weigh simplicity against flexibility and development effort. That being said, I would probably try it with the scheduler first, and then move on to the WebJob if I encountered something that is not feasible with the scheduler.
Edit:
Azure Functions can also be a good option - I'd say it's sort of a middle ground between the WebJob and the simple scheduled option. It is part of the App Service feature set, so it can run in the same App Service plan as the web app, at no extra cost. You still have to code the HTTP request to Authorize.net yourself, but Azure Functions is a lot more lightweight than WebJobs: you don't have to create an exe (or PowerShell script or whatever), you can just write the HTTP request in a script editor inside the Azure portal. It is also a bit more flexible than the simple scheduled option, which is something to consider when it comes to error handling.
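As a hedged sketch of what that could look like today with the Python programming model (the internal billing endpoint is hypothetical; you could equally put the Authorize.net call inside the function itself):

```python
# Timer-triggered Azure Function; the NCRONTAB expression "0 0 0 1 * *"
# fires at midnight on the 1st of each month.
import azure.functions as func
import requests

app = func.FunctionApp()

@app.timer_trigger(schedule="0 0 0 1 * *", arg_name="timer")
def monthly_billing(timer: func.TimerRequest) -> None:
    # Hypothetical internal endpoint that runs the checks and payments.
    resp = requests.post("https://mysite.example.com/api/run-billing", timeout=60)
    resp.raise_for_status()
```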
So this is a good middle ground, but I think it may still be more work than the task (firing a single HTTP request) really warrants.

To get it working quickly, Logic Apps is a good choice. With Logic Apps, you can trigger it with a timer based on a schedule you define, and use the out-of-the-box SQL/DocumentDB connectors (depending on your exact scenario) to connect to your database. Although there's currently no Authorize.net connector available, you should be able to use the generic HTTP action to talk to its RESTful APIs. Most likely, you can get this working very quickly. I'd also recommend submitting a suggestion on aka.ms/logicapps-wish so we can track the request for an Authorize.net connector, which, when available, will make this even easier.

Related

Multiple web jobs on same App Service or single Web Job on multiple app services?

I need to deploy multiple web jobs for my current project. However, none of them will be too resource intensive, so I will be using the same App Service plan. I am trying to decide between:
1. Creating separate App Services per Web Job and deploying them all into the same App Service Plan, or
2. Creating a single App Service and deploying multiple instances of the Web Job into that same app service, with different web job names to make them unique.
I know this answer may be based on different opinions, but I am hoping you could share some things to consider. For example, one advantage of #1 is that it is easier for me to deploy each web job separately. Any other things that I should consider?
Thanks!
@Snooks, Web Apps are designed to run multiple Web Jobs, and there is no platform limit (although there might be practical limits depending on what your web jobs are doing). My suggestion would be to go with route 1 from a management perspective. This will make life easier when navigating through the portal, etc.
A third option to consider would be Azure Functions (serverless). While many customers create App Services that aren't hosting any web site just to use the web job capability, the initial intention was for App Services to serve web sites and then also provide automation capability via web jobs.
Since web jobs became so popular, Azure Functions was born to provide a more focused product on task/job automation (and much more) while running on a serverless platform. This means you only pay for what you consume, whereas with a web app, you pay a flat rate each month, no matter how much or how little you consume.
If your requirements state you must go the route of Web Jobs, I would certainly go with route 1 just to avoid having to manage multiple apps. If you are willing, I think it's worth looking into Azure Functions to see if it meets your needs.

Is there any feature equivalent to the BPMN UserTask available in AWS Step Functions?

We have an old 'Camunda-based Spring Boot application' which we currently deploy into Kubernetes running on an AWS EC2 instance. This application acts as a backend for an Angular-based UI application.
Now we need to develop a new application similar to the above, which also needs to interact with a UI.
Our process flow will contain some UserTasks (BPMN) which wait until a manual interaction is performed by a human user via the Angular UI.
We are evaluating the possibility of using AWS Step Functions instead of Camunda, if possible.
I googled but was unable to find a concrete answer.
Do AWS Step Functions have any feature similar to BPMN/Camunda's UserTask?
Short answer: No.
====================================
Long answer:
After a whole day of study, I decided to continue with Camunda BPM for the reasons below.
AWS Step Functions have no equivalent of the BPMN UserTask.
Step Functions support minimal human intervention by sending emails/messages using AWS SQS (Simple Queue Service) and AWS SNS (Simple Notification Service).
Refer to this link for a full example. This manual interaction is based on a 'task token', so it is limited to a basic conversational style, as the sketch below illustrates.
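For context, here's a rough sketch of the task-token callback in Python with boto3: a state started with the `.waitForTaskToken` integration pattern pauses until something calls send_task_success or send_task_failure with the token. The function name and output shape are illustrative.

```python
# Hedged sketch: completing a paused Step Functions state from the outside
# (e.g. from the endpoint behind an "Approve" link sent by email).
import json
import boto3

sfn = boto3.client("stepfunctions")

def complete_user_task(task_token: str, approved: bool) -> None:
    if approved:
        sfn.send_task_success(
            taskToken=task_token,
            output=json.dumps({"decision": "approved"}),
        )
    else:
        sfn.send_task_failure(
            taskToken=task_token,
            error="Rejected",
            cause="User rejected the request",
        )
```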
Step Functions do NOT come with built-in database & data management support; the developer has to take care of designing the database schema, creating tables and their relationships, etc.
Camunda, on the other hand, takes care of creating tables and their relationships, and of saving & fetching data.
No GUI modeler is available for Step Functions; instead you have to describe the workflow in a JSON-based language. This becomes very difficult as your workflow grows complex.
Drawing a workflow in Camunda is just drag-and-drop using its modeler.

GCP: Best way to manage a flow of multiple cloud functions

I'm studying GCP, and while reading about the different ways to connect and manage cloud functions, I ended up wondering when to use each of the services that GCP offers.
So, I have been reading about GCP Composer, GCP Workflows, and Cloud Pub/Sub, and I don't see clearly when to use each one, or when to just use simple HTTP calls.
I understand that it depends a lot on the application you are building, but take an example: I'm building a payment gateway, and some functions should be fired after the payment is verified - sending emails, running other business logic, adding the purchase to a sales platform. Which one should I use to manage this flow, and in which cases would the others be better? Should I use events to create an async flow with Pub/Sub, use complex solutions like Composer and Workflows, or just simple HTTP calls?
As always, it depends!! Even in your use case, it depends! OK, after a payment you want to send an email, run some business logic, add the order to your databases, ...
But can all these actions be done in parallel, or do you need to execute them in a certain order, stopping the process if a step fails?
In the first case, you can use Cloud Pub/Sub with one message published (payment OK) and then a fan-out to several functions in parallel, as sketched below. Otherwise, you can use Workflows to test the response of each function and then call, or not call, the following functions. With Composer you can perform many more checks and actions.
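As a rough sketch of the fan-out case in Python (the project ID, topic name, and payload are illustrative; each downstream function would have its own subscription on the topic):

```python
# Minimal sketch: publish one "payment confirmed" event; every subscriber
# (send-email, update-sales-platform, ...) receives it and runs in parallel.
import json
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "payment-confirmed")  # hypothetical

def on_payment_verified(order_id: str) -> None:
    data = json.dumps({"order_id": order_id}).encode("utf-8")
    future = publisher.publish(topic_path, data)
    future.result()  # block until Pub/Sub has accepted the message
```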
You can also imagine sending another email 24h later to thank the customer for their order, and use Cloud Tasks to delay an action.
You talked about Cloud Functions, but you also have other solutions for hosting code on GCP: App Engine and Cloud Run. A Cloud Function is, most of the time, single-purpose. Sending an email is perfect for a function.
Now, if you have a 'set of functions' to browse your stock, view an object's details, review the price, and book an object (validating an order 'books' the order content in your warehouse), the 'functions' are all single-purpose but related to the same domain: warehouse management. So you can create a web server that exposes different paths to manage the warehouse (a microservice for the warehouse, if you prefer) and host it on Cloud Run or App Engine.
Each product has its strengths and weaknesses. You will also see this when you learn about storage on GCP. Most of the time, you can achieve things with several products, but if you don't use the right one, it will be slower or cost much more.

Display real-time data on a website that scales?

I am starting a project where I want to create a website which will display LIVE flight information and status. We have all seen this at airports. An example is given here - http://www.computronics.biz/productimages/prodairport4.jpg. As you can see, this information changes continuously. The website will talk to a backend API, and this backend API will talk to a database. Now the important part is that the flight information in the database will be updated by the airlines themselves. There could be several airlines, each updating its own data. I have drawn a diagram and uploaded it here - https://imgur.com/a/ssw1S.
Now those airlines will obviously have an interface (website talking to some backend API) through which they will update the database.
Now here is my attempt to solve it. We need some sort of trigger such that if any airline updates a flight detail in the database between current time - 1 hour and current time + 4 hours (the website will only display a few hours of flights), we call the web API and push the update to the website in real time. The user must not have to refresh the page at all. At the same time, the website needs to scale well: if 1 million users are on the website and there is an update in the database in the relevant time range, all 1 million users' pages should be updated within a decent amount of time.
I did some research and it looks like we need an event-based approach. For example, we could create a function (AWS Lambda or Azure Function) that is called whenever there is an update in the database (DynamoDB, for example) within the relevant time range. This function would then call an API which updates the website through WebSocket technology, for example.
I am not looking for any code, just some alternative suggestions on how this can be solved in a scalable way. Also, how do we test scalability?
Don't use serverless functions (Lambda/Azure Functions)
Although I am a huge fan of serverless functions, and am currently running a full web app in Lambda, I don't think it's needed for your use case and it doesn't make sense economically. As you've answered in the comments, each airline will not write directly to the database; they'll push to an API, meaning you are explicitly told when flights have changed. When an airline has sent you new data, you can simply propagate it to all the browser endpoints via websockets. This keeps the design very simple. There is no need to artificially create a database event that then triggers a function that then tells you a flight has been updated. That's like removing your doorbell and replacing it with a motion detector that triggers a doorbell :)
Cost
Money always deserves its own section. Lambda is more of an economic breakthrough than a technological one. You have to know when it's cost-effective. You pay per request, so if you're dealing with a process that handles 10,000 operations a month, or something that only fires 1,000 times a day, then Lambda is dirt cheap and practically free. You also pay for the length of time the function executes and the memory consumed while executing. Generally, it makes sense to use Lambda functions where a dedicated server would be sitting idle most of the time. So instead of a whole EC2 instance, AWS provides you with a container on demand. There are points at which high request rates and constantly running processes make Lambda more expensive than EC2. This article discusses how it is generally cheaper to use Lambda only up to a point -> https://www.trek10.com/blog/lambda-cost/ The same applies to Azure Functions and Google's equivalent. They are all just containers offered on demand.
If you're dealing with flight information, I would imagine you will have thousands of flights being updated every minute, so your Lambda functions would be firing constantly, as if you were running an EC2 instance. You would end up paying a lot more than for EC2. When you have a service that needs to stay up and run 24/7 with high activity, that is most certainly a valid use case for a dedicated server or servers.
Proposed Solution
These are the components I would use below:
Message Queue of some sort (RabbitMQ or AWS SQS with SNS perhaps)
Web Socket Backend (The choice will depend on programming language)
Airline input API (REST, GraphQL, or maybe AWS Kinesis Data Firehose)
The airlines publish their data to a back-end API. The updates are stored on a message queue, and the web application that actually displays the results to users, via websockets, reads from the queue.
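As a rough sketch of the websocket piece in Python with the `websockets` library (`get_next_update` is a stand-in for the real SQS/RabbitMQ consumer; here it just emits fake data so the sketch runs on its own):

```python
# Toy websocket fan-out: every connected browser receives every flight update.
import asyncio
import websockets

connected = set()  # all currently open browser connections

async def handler(ws, path=None):  # `path` kept for older websockets versions
    connected.add(ws)
    try:
        await ws.wait_closed()  # stay registered until the browser disconnects
    finally:
        connected.discard(ws)

async def get_next_update() -> str:
    # Stand-in for the real queue consumer (SQS/RabbitMQ).
    await asyncio.sleep(5)
    return '{"flight": "BA123", "status": "Boarding"}'

async def pump_updates():
    while True:
        update = await get_next_update()
        websockets.broadcast(connected, update)  # fan out to all browsers

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await pump_updates()

if __name__ == "__main__":
    asyncio.run(main())
```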
Scalability
For scalability, you can run the websocket application on multiple EC2 instances (all reading from the same queuing service) in an autoscaling group, so with extra load more instances will be created automatically, hence the name "autoscaling". Those instances can sit behind an elastic load balancer. There is lots of AWS documentation on how to do this, and it's their flagship design pattern. If you use AWS SQS, you don't have to manage the scalability details yourself; AWS handles that. The only real components to scale are your websocket application and the flight data input endpoint. You can run the flight API in an autoscaling group as well, but AWS does offer an additional tool for high-traffic data processing. I detail that below.
Testing Scalability
It would be fairly easy to have a mock airline blast your service with thousands and thousands of fake updates, and on the other end you can run multiple threads of Selenium tests simulating browser clicks and validating that the UI is still operational.
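The "mock airline" half could be as simple as this sketch (the ingestion endpoint is hypothetical):

```python
# Toy load generator: many threads blasting fake flight updates at the API.
import random
import threading
import requests

API = "https://api.example.com/flights"  # hypothetical ingestion endpoint

def blast(n: int) -> None:
    for _ in range(n):
        requests.post(
            API,
            json={
                "flight": "XX%d" % random.randint(100, 999),
                "status": random.choice(["On Time", "Delayed", "Boarding"]),
            },
            timeout=10,
        )

# 20 threads x 1,000 updates each = 20,000 fake updates.
threads = [threading.Thread(target=blast, args=(1000,)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```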
Additional tools
If it ends up being a large amount of data, rather than using a conventional REST API for your flight update service, you could consider a service AWS offers specifically for dealing with large amounts of real-time updates (Kinesis Data Firehose) https://aws.amazon.com/kinesis/data-firehose/ But I've never used it.
First, please don't overthink this. This is a trivial problem to solve and doesn't require any special techniques, technologies or trendy patterns & frameworks.
You actually have three functional areas you can address almost separately.
Ingestion - Collection and normalization of the data from the various sources. For this, you'll need a process and transformation engine, Logic Apps or such.
Your databases. You'll quickly learn that not all flights are the same ;). While it might seem so, the amount of data isn't that much. Instances of MySQL/SQL Server tuned for a particular function will work just fine. Hint: you don't need to have data for every movement ready to present all the time.
Presentation. The data API and UIs. This, really, is the easy part. I would suggest you use basic polling at first. For reasons you will never have any control over, the SLA for flight data is ~5 minutes, so a real-time client notification system is something to spend time on later, not first.
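A first iteration of that polling could be as simple as this sketch (the endpoint and interval are illustrative):

```python
# Bare-bones polling loop; with a ~5 minute SLA on the data, polling every
# 60 seconds is more than enough for a first version.
import time
import requests

POLL_URL = "https://api.example.com/flights?window=5h"  # hypothetical API

while True:
    flights = requests.get(POLL_URL, timeout=10).json()
    print(flights)  # stand-in for pushing the data into the UI
    time.sleep(60)
```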

Can I open a website through an Amazon Web Service?

Is it possible to open a website, like facebook.com for example, on an Amazon Web Service?
My objective is to automate a certain task in a game, and to do so without having to be online on my computer. The point is to spend less time on that game but not be left behind on the progress. (I'm building a bot to automate the daily tasks there; I just need to know if I can leave everything running on Amazon.)
Another project I want to do is to automate access to my email account and perform certain tasks depending on the emails I receive.
You get the point. I tried searching on Google, but I only find results about creating or hosting your own website there, not about accessing existing websites and automating them.
It sounds like what you want is a virtual private server - basically a computer in the cloud that you control and that is always on.
AWS has a service called Lightsail for this kind of purpose. Under the hood, Lightsail just uses EC2, but it takes away a lot of the options and configuration to provide a simpler 'click and go' kind of service.
Once you have a server, you can schedule regular tasks. Depending on the complexity of your needs, you could look at using cron as a scheduler and curl for your HTTP requests.
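For example, a minimal sketch assuming a Linux Lightsail instance (the game endpoint and payload are entirely hypothetical):

```python
#!/usr/bin/env python3
# Hypothetical daily task runner. Schedule it with a crontab entry such as:
#   0 6 * * * /usr/bin/python3 /home/ubuntu/daily_task.py
# (runs every day at 06:00 server time)
import requests  # assumes `pip install requests` on the server

def run_daily_task() -> None:
    # Stand-in for whatever requests your bot needs to make.
    resp = requests.post(
        "https://example.com/api/daily-task",  # hypothetical endpoint
        json={"action": "collect_reward"},
        timeout=30,
    )
    resp.raise_for_status()
    print("Task completed:", resp.status_code)

if __name__ == "__main__":
    run_daily_task()
```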
For the specifics of any project you have I would suggest opening a new question with details of what you are trying to do, the reading you have done, and examples of any code you have tried.