What are the issues with making asynchronous requests to DynamoDB? Is it advisable when we are doing a lot of write operations?
Using the AWS SDK in any of its supported languages, submitting an async request simply means the call into the SDK is non-blocking. This is implemented entirely client-side. It also means it is your application code's responsibility to make sure the write request actually succeeded.
It is really more of a programming style choice, and it depends on the language and framework you are using.
Generally speaking, using async requests can yield better throughput for applications making a large number of write requests to DynamoDB, but the same thing can be accomplished with synchronous requests and multithreading.
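As a rough illustration, here is what a non-blocking write looks like with the AWS SDK for JavaScript (v3); the table name, region, and item shape are made up. The SDK returning control immediately does not mean the write landed, so the error handling is the part your application owns:

```typescript
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({ region: "us-east-1" });

// Fire the write without blocking the caller. The promise resolves (or
// rejects) later, so success must be checked explicitly.
async function saveEvent(id: string, payload: string): Promise<void> {
  try {
    await client.send(new PutItemCommand({
      TableName: "events", // hypothetical table
      Item: { id: { S: id }, payload: { S: payload } },
    }));
  } catch (err) {
    // This part is the application's responsibility: retry, dead-letter, or log.
    console.error(`write for ${id} failed`, err);
    throw err;
  }
}

// Many writes in flight at once -- the async analogue of multithreading.
await Promise.all(["a", "b", "c"].map((id) => saveEvent(id, "data")));
```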
Why did we start using JavaScript on the server side? And which is the best JavaScript platform for server-side work, and why, e.g. Node.js?
By using JavaScript on both client and server, you reduce the number of different concepts required for web development, gain the ability to reuse code between client and server, and reduce the need for context switching.
Node.js uses an event-driven architecture, which integrates very well with JavaScript (callbacks!). Using an event loop, Node handles requests asynchronously on a single thread rather than sequentially, avoiding locks. This makes it very fast for I/O-bound work and well suited to handling a very high number of concurrent requests, one of the main advantages that has excited so many developers about this technology.
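To make the single-threaded, non-blocking model concrete, here is a minimal Node.js server (the simulated query and its latency are invented): while one request waits on I/O, the event loop is free to accept others.

```typescript
import { createServer } from "node:http";

// One thread, no locks: each request registers a callback and yields.
createServer(async (req, res) => {
  // Simulate a slow I/O call (e.g. a database query). While this request
  // "waits", the event loop keeps handling other incoming requests.
  const data = await new Promise<string>((resolve) =>
    setTimeout(() => resolve("hello"), 100));
  res.end(data);
}).listen(3000);
```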
We have a monolithic Rails API that was also serving our Websockets. Recently, we outgrew ActionCable and we decided to move our Websockets to Elixir's Phoenix.
In this model clients still interact with the Rails application for HTTP requests, but Phoenix handles all the Websocket traffic. Rails tells Phoenix what data to send and on what channel, and Phoenix then acts essentially as a passthrough for whatever Rails sends it.
I had initially set this up using Redis PubSub for communication from Rails to Phoenix (the publish/subscribe shape is sketched after the list below). It works well at its current scale, but I'm starting to think it may have been the inferior choice. Here is my list of pros and cons:
Redis
Pros:
Ordered messages (not important in our case)
Acts as a proper queuing mechanism
Wicked fast and dead simple publishing from Rails
Cons:
No competitive consumers - I would have to manually implement balancing if I had multiple Phoenix consumers (a real possibility)
Concurrency is more difficult to implement well (which really acts against Elixir's strengths)
HTTP
Pros:
Concurrency comes for free
Load balancing comes for free - a request will only be fulfilled by a single Phoenix consumer
Slightly simpler to implement
Cons:
Unordered messages (not important for us)
Much slower to send a message from Rails
Would have to manually implement retry and timeouts on the HTTP requests from Rails
If a message is lost (due to server restarts or similar), it's gone for good
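For reference, this is the pub/sub shape I described above, sketched in TypeScript with the node-redis client rather than in Ruby/Elixir (channel name and payload are invented): the publisher stands in for Rails, the subscriber for Phoenix.

```typescript
import { createClient } from "redis";

// "Rails" side: publish what to send and on which channel.
const pub = createClient();
await pub.connect();
await pub.publish("websocket:room:42",
  JSON.stringify({ event: "new_msg", body: "hi" }));

// "Phoenix" side: note there is no competitive consumption -- every
// subscriber to the channel receives every message.
const sub = createClient();
await sub.connect();
await sub.subscribe("websocket:room:42", (raw) => {
  const msg = JSON.parse(raw);
  // ...push msg out over the matching websocket channel
  console.log("forwarding", msg);
});
```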
Even after weighing it out, I still find it hard to claim one as being the clear choice. Are there patterns for Redis or HTTP communication between services that alleviate some of my problems? If not, which of these two would be preferred, considering the cons?
Is there another simple alternative that I'm overlooking? I don't want to involve something like RabbitMQ if it can be avoided.
I am doing some research on SOAP, for a personal project, and I came across a website with a list of pros and cons for using SOAP, and I understood what most of them meant, except for this one under disadvantages:
SOAP is typically limited to pooling, and not event notifications, when leveraging HTTP for transport. What's more, only one client can use the services of one server in typical situations.
From my understanding of pooling, there should be no issue pooling a SOAP object for reusability. Pooling is simply a way to use the same resources over and over again, like a connection to a database. I'm also not entirely certain about the context of event notifications.
So my two questions are: what does the block-quoted text above actually mean, and is this information correct?
Website: http://searchsoa.techtarget.com/definition/SOAP
SOAP is RPC, and in RPC some local client invokes a method on some remote target and receives a result. That's how it works, so SOAP works that way too. A client invokes a service asking for something and the service just responds.
If you want "events" in this type of communication the most simple approach is to invoke the service more often (i.e. polling). This has the advantage that nothing changes for the server or the client. It's the same RPC call but done more frequently.
These days everyone is connected to the web and everyone is subscribed to all sorts of services. People want to get notified as soon as something happens in the world around them. Polling becomes inefficient in this sea of users and services because you are wasting resources: you might poll a service a hundred times just to get back one notification. For this reason technology is evolving to minimize resource use, and the direction it is moving in is push services.
Now almost everything happens in the browser. Every browser manufacturer rushes to implement the latest technology changes and the HTML5 spec. This means pages that actually push notifications to users instead of faking it with Ajax, Comet, etc.
SOAP has been around since 1998 and it's not moving as fast as the rest of the web, mainly because SOAP is mostly an enterprise player and because it's a protocol. Because it's a protocol you have to make new technology available to it without breaking that protocol. Things move slower so people have abandoned SOAP in favor of other ways of doing server-client communication.
SOAP is typically limited to pooling, and not event notifications...
That is correct. But be aware that "typically" does not mean "always".
You can have events, but it's harder. It involves using WS-* specifications like WS-Eventing and WS-Addressing. This changes the way SOAP clients operate, because a client now becomes a sort of service itself: it needs to receive calls, not just initiate them. If your technology stack implements these specifications, good for you; if it doesn't, you have to build it yourself, and that's a real pain.
So for these reasons, if you don't have blocking performance or resource-usage issues, you "typically" choose polling with SOAP rather than event notifications.
I would like to use Amazon SQS in my application to queue requests from other external systems that don't belong to me.
What is the better way of doing this: directly expose the SQS queue and the required message format, or publish a web service (WCF) that queues the request?
Also, I read that SQS is relatively slow for a single access, but am I right that it can easily handle a lot of concurrent accesses from different clients?
Best
Thomas
This is largely a matter of preference and depends a bit on your situation. But my recommendation would be to wrap it with your own web-service.
Building your web-service allows you to do things like validation, throttling, schema versioning etc. E.g. you can reject invalid messages with immediate synchronous feedback to the sender. If the external systems are publishing directly to your queue, then invalid messages become your problem not theirs, and if you revise your schema and want to reject old-schema messages then you either have to drop them or set up a separate back-channel to feed back information to the publisher. That adds unnecessary complexity to your system. Having a web-service would even let you switch to other queuing technologies later if you need to.
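To make the wrapper idea concrete, here is a minimal sketch in TypeScript rather than WCF (the queue URL, port, and schema check are all made up): the service validates the message and gives the sender synchronous feedback before anything reaches the queue.

```typescript
import { createServer } from "node:http";
import { SQSClient, SendMessageCommand } from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "eu-west-1" });
// Hypothetical queue URL.
const QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/requests";

createServer(async (req, res) => {
  let raw = "";
  for await (const chunk of req) raw += chunk;

  // Validation happens here, in front of the queue, so the sender gets an
  // immediate synchronous rejection instead of the bad message becoming
  // our problem downstream.
  try {
    const msg = JSON.parse(raw);
    if (msg.version !== 2 || !msg.body) throw new Error("bad schema");
  } catch {
    res.writeHead(400).end("invalid message");
    return;
  }

  await sqs.send(new SendMessageCommand({ QueueUrl: QUEUE_URL, MessageBody: raw }));
  res.writeHead(202).end("queued");
}).listen(8080);
```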
But building your own web-service has downsides too: will your own service be able to handle the same load as the SQS API with the same low latency? It won't scale infinitely like SQS, so how responsive will you need to be to changes in load? Have you got the resources to manage a separate service? And it's more work than just giving a client's AWS account permission to publish to your queue.
If you're happy with the extra work involved, and you want a more future-proof system, IMHO it's worth building the web-service wrapper.
I am trying to build a realtime messaging application. There will be two distinct servers (Node.js and Django). When a user sends a message to another user, the message will be stored in the database, then Node.js will send the receiver a message like "You have a new message!". For that I am planning to call a URL served by Node.js, so Node.js and Django will interact with each other. And what is the best way to send a message to a specific client? (I keep clients with their IDs in an associative array.)
What do you think about that? Is it efficient, or do you suggest a better way to do this?
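On sending to a specific client: the associative-array idea works. Here is a minimal version using the ws package (how the user id is obtained here is made up; a real app would take it from an authenticated session):

```typescript
import { WebSocketServer, WebSocket } from "ws";

// The "associative array": user id -> open socket.
const clients = new Map<string, WebSocket>();

const wss = new WebSocketServer({ port: 8081 });
wss.on("connection", (socket, req) => {
  // Hypothetical: read the user id from the query string; in reality this
  // should come from an authenticated session.
  const userId = new URL(req.url ?? "/", "http://x").searchParams.get("user") ?? "";
  clients.set(userId, socket);
  socket.on("close", () => clients.delete(userId));
});

// Called when Django reports a stored message, e.g. via the HTTP endpoint.
function notify(userId: string): void {
  clients.get(userId)?.send("You have a new message!");
}
```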
Now that I understand more about what you're trying to do, here's my answer. Just keep in mind that this only reflects my opinion, and I bet many others would argue about it.
It all comes down to how much traffic you expect your application to have. If it's not a high-traffic application, then run-time efficiency is insignificant compared to development efficiency, so choose the technology you feel most comfortable with.
If, though, you are aiming for a high-traffic application, then I believe this setup is not a good one.
First of all, while HTTP-based communication between servers might seem comfortable, you are paying the overhead of HTTP on top of TCP (since HTTP runs over TCP), so plain TCP sockets scale better. On the other hand, if you write the socket server in Python, you can run it in the same process as Django and use it as an object from Django (you're entering the realm of threads here). But that's problematic if you have several web instances; again, it depends on how much traffic you expect.
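For comparison, a raw TCP channel between the two servers avoids the per-request HTTP overhead. A minimal sketch with Node's net module (the port and the newline framing are invented, and the sending side is shown in TypeScript for symmetry even though it would be Python in your setup):

```typescript
import { createServer, connect } from "node:net";

// Node side: one long-lived socket per peer instead of a request per message.
createServer((socket) => {
  let buf = "";
  socket.on("data", (chunk) => {
    buf += chunk;
    let i;
    while ((i = buf.indexOf("\n")) >= 0) { // newline-framed messages
      const msg = buf.slice(0, i);
      buf = buf.slice(i + 1);
      console.log("from django:", msg);
    }
  });
}).listen(9000);

// "Django" side: connect once, then pushing a message is a single write,
// with no HTTP headers and no new connection per message.
const peer = connect(9000, "localhost");
peer.write("user:42 has a new message\n");
```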
As for your choice for implementing the messaging server, I've never tested Node.js, but I believe that in benchmark tests it won't compare to something written in Erlang or with Java NIO. For example: JAVA AIO (NIO.2) VS NODEJS