My application's users are geographically dispersed and data is stored in various regions. Each region has its own data center and database server.
I would like to include a route value to indicate the region that the user wants to access and connect to, as follows:
/api/region/1/locations/
/api/region/2/locations/
/api/region/3/locations/
Depending on the region passed in, I would like to change the connection string being used. I assume this can be performed somewhere in the middleware chain, but don't know where/how. Any help is appreciated!
What should not be done
Loopback provides a method MyModel.attachTo (it doesn't seem to be documented, but a reference to it is made there).
But since it is a static method, it affects the entire Model, not a single instance.
So for this to work on a per-request basis, you must switch the DB right before the call to the datasource method, to make sure nothing async starts in between. I don't think this is possible.
Here is an example using an operation hook (all datasources, including dbRegion1 used below, must be defined in datasources.json).
This is bad, don't do it - it's shown only for reference:
Region.observe('loaded', function filterProperties(ctx, next) {
  // Re-attaches the whole Region model to the region-1 datasource (affects every request, not just this one!)
  app.models.Region.attachTo(app.dataSources.dbRegion1);
  next();
});
But then you will most likely face concurrency issues when your API receives multiple requests in a short time.
(Another way to see it: the server is no longer truly stateless; execution no longer depends only on inputs but also on shared state.)
The hook may set region2 for request 2 while the method called after the hook was expecting to use region1 for request 1. This will be the case if something async is triggered between the hook and the actual call to the datasource method.
So ultimately, I don't think you should do that. I'm just including it here because some people have recommended it in other SO posts, but it's just bad.
Potential option 1
Build an external re-routing server, that will re-route the requests from the API server to the appropriate region database.
Use the loopback-connector-rest in your API server to consume this microservice, and use it as a single datasource for all your models. This provides abstraction over database selection.
Then of course there is still the matter of implementing the microservice, but maybe you can find some other ORM than loopback's that will support database sharding, and use it in that microservice.
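For the API-server side, a datasources.json entry for loopback-connector-rest could look roughly like the sketch below (the routing-service URL and the getLocations operation are made up for illustration, not part of the question):

"regionRest": {
  "connector": "rest",
  "operations": [
    {
      "template": {
        "method": "GET",
        "url": "http://routing-service/api/region/{region}/locations"
      },
      "functions": {
        "getLocations": ["region"]
      }
    }
  ]
}

Models attached to this single datasource can then call the generated getLocations(region) method without knowing which regional database ultimately serves the request.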
Potential option 2
Create a custom LoopBack connector that acts as a router for MySQL queries. Depending on the region value passed inside the query, it re-routes the query to the appropriate DB.
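Very roughly, such a connector would sit in front of several pre-configured per-region MySQL datasources and delegate each call based on the region found in the query. The sketch below is only an outline of that idea - the dbRegion datasource names, the way the region is read from the filter, and the exact juggler method signatures are assumptions, and a real connector would have to implement the full CRUD interface:

// region-router-connector.js - hypothetical sketch, not a working connector
exports.initialize = function initialize(dataSource, callback) {
  var app = dataSource.settings.app; // assumption: the LoopBack app is passed in via the datasource settings

  function pick(region) {
    // e.g. dbRegion1, dbRegion2, dbRegion3 defined in datasources.json
    return app.dataSources['dbRegion' + region];
  }

  dataSource.connector = {
    all: function (model, filter, options, cb) {
      var region = filter && filter.where && filter.where.region;
      pick(region).connector.all(model, filter, options, cb);
    },
    create: function (model, data, options, cb) {
      pick(data.region).connector.create(model, data, options, cb);
    }
    // update, destroyAll, count, ... would need the same per-region delegation
  };
  callback();
};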
Option 3
Use a more distributed architecture.
Write a region-specific server to persist region-specific data.
Run, for instance, 3 different servers, each one configured for one region, plus 1 common server for routing.
Then build a routing middleware for your single user-facing REST API server.
Basic example:
var express = require('express');
var request = require('request');

var app = express();

// Map each region id to the base URL of its region-specific server
var ips = { 1: 'http://127.0.0.1', 2: 'http://127.0.0.2', 3: 'http://127.0.0.3' };

app.all('/api/region/:id/*', function (req, res, next) {
  console.log('Reroute to region server ' + req.params.id);
  request(ips[req.params.id] + req.url, function (error, response, body) {
    if (error) return next(error);
    res.send(body);
  });
});
This option is probably the easiest to implement.
Related
So I need to build a WebSocket API for my org. The requirements from the business are pretty typical websocket pattern stuff except for one detail:
This websocket API will be used by different teams in our org, and each team needs its own separate active-connections DynamoDB table.
Now in a typical websocket API, there would be a single connections table that the connect and disconnect lambda functions write to/delete from. Also, the hooks in the websocket API ensure that the connectionId needed to identify a connection/session is always in the event.requestContext. Easy peasy for a single connections table.
However, in my approach of having a separate active-connections DB/table for each team, it gets more complicated. Yes, it's true that for the connect lambda, it is very easy to write it so that it expects a "TeamDatabaseID" from somewhere in the initial connection request - Headers, queryStringParameters, etc.
The problem is in the subsequent disconnect that could be triggered from either client or server. The disconnect hook will run the disconnect function, and pass in the default requestContext with the connectionId, but with no TeamDatabaseID - which the disconnect lambda needs to have access to in order to know which database to delete from.
Is there a way to do this? Is there some notion of a context object that I can set values in from the initial connection, so that when the disconnect happens, the teamDatabaseID is propagated in some way to the subsequent disconnect lambda? I tried writing to the requestContext - and that seems to only be alive for the execution of the given lambda.
Instead of having a single Amazon API Gateway Web Socket API for multiple teams, could you instead have one Web Socket API per team?
We've created a Google Cloud Function that is essentially an internal API. Is there any way that other internal Google Cloud Functions can talk to the API function without exposing an HTTP endpoint for that function?
We've looked at PubSub but as far as we can see, you can send a request (so to speak!) but you can't receive a response.
Ideally, we don't want to expose an HTTP endpoint due to the extra security ramifications, and we are trying to follow a microservice approach so every function is its own entity.
I sympathize with your microservices approach and trying to keep your services independent. You can accomplish this without opening all your functions to HTTP. Chris Richardson describes a similar case on his excellent website microservices.io:
You have applied the Database per Service pattern. Each service has its own database. Some business transactions, however, span multiple services, so you need a mechanism to ensure data consistency across services. For example, let's imagine that you are building an e-commerce store where customers have a credit limit. The application must ensure that a new order will not exceed the customer's credit limit. Since Orders and Customers are in different databases, the application cannot simply use a local ACID transaction.
He then goes on:
An e-commerce application that uses this approach would create an order using a choreography-based saga that consists of the following steps:
The Order Service creates an Order in a pending state and publishes an OrderCreated event.
The Customer Service receives the event and attempts to reserve credit for that Order. It publishes either a Credit Reserved event or a Credit Limit Exceeded event.
The Order Service receives the event and changes the state of the order to either approved or cancelled.
Basically, instead of a direct function call that returns a value synchronously, the first microservice sends an asynchronous "request event" to the second microservice which issues a "response event" that the first service picks up. You would use Cloud PubSub to send and receive the messages.
You can read more about this under the Saga pattern on his website.
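A minimal sketch of that request-event / response-event flow with Cloud Pub/Sub might look like the following (topic names, payloads, and the placeholder credit check are all assumptions for illustration):

// Assumes the @google-cloud/pubsub client and hypothetical topic names
const { PubSub } = require('@google-cloud/pubsub');
const pubsub = new PubSub();

// Service A (HTTP-triggered): publish a "request event" instead of calling service B directly
exports.createOrder = async (req, res) => {
  const order = { id: 'order-123', status: 'pending' };
  await pubsub.topic('order-created').publishMessage({ json: order });
  res.status(202).send(order); // respond immediately; the saga completes asynchronously
};

// Service B (Pub/Sub-triggered on 'order-created'): do the work, then emit a "response event"
exports.reserveCredit = async (message) => {
  const order = JSON.parse(Buffer.from(message.data, 'base64').toString());
  const creditOk = true; // placeholder for the real credit check
  const topic = creditOk ? 'credit-reserved' : 'credit-limit-exceeded';
  await pubsub.topic(topic).publishMessage({ json: { orderId: order.id } });
};

Service A would then have a third, Pub/Sub-triggered function subscribed to credit-reserved / credit-limit-exceeded that approves or cancels the order.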
The most straightforward thing to do is wrap your API up into a regular function or object, and deploy that extra code along with each function that needs to use it. You may even wish to fully modularize the code, as you would expect from an npm module.
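For example (file and function names here are made up for illustration), the "internal API" becomes a plain module that each Cloud Function bundles and calls in-process:

// lib/pricing.js - the logic that used to sit behind an HTTP-triggered function
exports.quote = function quote(items) {
  return items.reduce((total, item) => total + item.price * item.qty, 0);
};

// index.js of any function that needs it
const pricing = require('./lib/pricing');

exports.checkout = (req, res) => {
  const total = pricing.quote(req.body.items); // a normal function call, no HTTP hop
  res.send({ total });
};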
We are moving from a monolithic application to a microservice architecture. We're still in the planning phase and want to know the best practices for building it.
Suppose we have two services:
User
Device
getUserDevices(UserId)
addDevice(DeviceInfo, UserId)
...
Each user has multiple devices
What is the most common, cleanest, and most proper way of asking the server to get all of a user's devices?
1- {api-url}/User/{UserId}/devices
needs another HTTP request to communicate with Device service.
for user X, get linked devices from User service.
// OR
2- {api-url}/Device/{UserId}/devices
for user X, get linked devices from Device service.
There are a lot of classic patterns available to solve such problems in microservices. You have 2 microservices - 1 for User (Microservice A) and 1 for Device (Microservice B). A fundamental principle of microservices is to have a separate database for each microservice. If one microservice wants to talk to another (or to get data from it), it can, but it would do so using an API. Another way for 2 microservices to communicate is through events: when something happens in Microservice A, it raises an event and pushes it to a central event store or a message queue, and Microservice B subscribes to some or all of the events emitted by A.
I guess in your domain, A would have methods like Add/Update/Delete a User and B would have Add/Update/Delete a Device. Each user can have its own unique id and other data fields like Name, Address, Email etc. Each device can have its own unique id, a user id, and other data fields like Name, Type, Manufacturer, Price etc. Whenever you "Add" a device, you can send a POST request or a command (if you use CQRS) to the Device microservice with the request containing data about the device + user-id, and it could raise an event called "DeviceAdded". It can also have events corresponding to Update and Delete, like "DeviceUpdated" and "DeviceRemoved". Microservice A can subscribe to the "DeviceAdded", "DeviceRemoved", and "DeviceUpdated" events emitted by B, and whenever any such event is raised, it will handle that event and denormalize it into its own little database of devices (which you can call UserRelationships). In the future, it can listen to events from other microservices too (so your pattern here would be extensible and scalable).
So now, to get all devices owned by a user, all you have to do is make an endpoint in the User microservice like "http://{microservice-A-host}:{port}/user/{user-id}/devices", and it will return the list of devices by querying for the user-id in its own little UserRelationships database, which you have been maintaining through events.
A good reference is here: https://www.nginx.com/blog/event-driven-data-management-microservices/
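To make the flow concrete, here is a self-contained sketch of the idea; the EventEmitter and the in-memory arrays are stand-ins for the real message broker and the two private databases:

const { EventEmitter } = require('events');
const broker = new EventEmitter();   // stand-in for the event store / message queue
const deviceDb = [];                 // owned by the Device microservice (B)
const userRelationships = [];        // owned by the User microservice (A)

// Device service: persist the device, then raise a DeviceAdded event
function addDevice(device, userId) {
  deviceDb.push({ ...device, userId });
  broker.emit('DeviceAdded', { deviceId: device.id, userId, name: device.name });
}

// User service: denormalize DeviceAdded events into its own UserRelationships store
broker.on('DeviceAdded', (event) => {
  userRelationships.push({ userId: event.userId, deviceId: event.deviceId, name: event.name });
});

// User service endpoint /user/{user-id}/devices reads only its local copy
function getUserDevices(userId) {
  return userRelationships.filter((r) => r.userId === userId);
}

addDevice({ id: 'd1', name: 'Thermostat' }, 'u42');
console.log(getUserDevices('u42')); // [ { userId: 'u42', deviceId: 'd1', name: 'Thermostat' } ]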
It may really be either way, but to my liking, I would choose to put it under /Devices/{userId}/devices, as you are looking for the devices given the user id. I hope this helps. Have a nice one!
You are requesting a resource from a service, resource being a device and service being a device service.
From a rest standpoint, you are looking for a resource and your service is providing various methods to manipulate that resource.
The following url can be used.
[GET] ../device?user_id=xyz
And device information can be fetched via ../device/{device_id}
Having said that, if you had one service that provided both user and device data, then the following would have made sense.
[GET] ../user/{userId}/device
Do note that this is just a naming convention and you can pick what suits you best; the thing is to pick one and stick with it.
When exposing the API, consistency is what matters most.
One core principle of the microservice architecture is
defining clear boundaries and responsibilities of each microservice.
I can say that it's the same Single Responsibility Principle from SOLID, but at the macro level.
Considering this principle, we get:
Users service is responsible for user management/operations
Devices service is responsible for operations with devices
Your question is
..proper way of asking the server to get all user devices
It's 100% the responsibility of the Devices service; the Users service knows nothing about devices.
As far as I can see, you are thinking only in routing terms (yes, API consistency is also important).
On one side, the better and more logical URL is /api/users/{userId}/devices - you are trying to get the user's devices, and these devices belong to the user.
On the other side, you can use routes like /api/devices/user/{userId} (or /api/devices/{deviceId}), which can be more easily processed by the routing system to send the request to the Devices service.
Taking into account other constraints you can choose the option that is right for your design.
And also a small addition to:
needs another HTTP request to communicate with Device service.
In the architecture of your solution, you can create an additional, separate component that routes requests to the desired microservice; direct calls from one microservice to another are not the only option.
You should query the device service only.
And treat the user id as a filter in the device service. For example, you should search on userid just as you would search devices by device type - it's just another filter.
E.g.: /devices?userid=
Also, you could cache some basic user information in the device service to save round trips when getting user data.
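A small Express sketch of both points - userid as just another filter, plus a local cache of basic user data (the in-memory device list and the fetchUserFromUserService stub are placeholders for the real database and the real call to the User service):

const express = require('express');
const app = express();

// Placeholders: the Device service's own data and the (normally HTTP) call to the User service
const devices = [{ id: 'd1', type: 'sensor', userId: 'u42' }];
const fetchUserFromUserService = async (userId) => ({ id: userId, name: 'Jane' });

const userCache = new Map(); // basic user info cached locally to save round trips

// GET /devices?userid=u42 - userid is treated like any other filter
app.get('/devices', async (req, res) => {
  const userId = req.query.userid;
  if (!userCache.has(userId)) {
    userCache.set(userId, await fetchUserFromUserService(userId));
  }
  res.send({
    user: userCache.get(userId),
    devices: devices.filter((d) => d.userId === userId),
  });
});

app.listen(3000);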
With microservices there is nothing wrong with either option. However, the device API makes more sense, and furthermore I'd prefer
GET ../device/{userId}/devices
over
GET ../device?user_id=123
There are two reasons:
As the userId should already be there with the devices service, you'll save one call to the user service. Otherwise it'll go like Requester -> User service -> Device service.
You can use POST ../device/{userId}/devices to create a new device for a particular user, which looks more RESTful than a parameterized URL.
I have a use case where the client app will make a REST call with MULTIPLE product identifiers and get their details. I have a lambda function (exposed by API Gateway) which can take a SINGLE product id and get its details. I need to get them to work together. What is a good solution for this?
Modify the client app so it makes single-product-id requests. No change in the Lambda function is required then. But this increases the client app's network calls, as it will invoke the Lambda for each productId separately.
Modify the Lambda function so that it can handle multiple product ids in the same call. But this increases the Lambda's response time.
I was also thinking about creating a new Lambda function which takes in the multiple productIds and then calls the single-product Lambda function. But I'm not sure how to aggregate the responses before sending them back to the client app.
Looking for suggestions.
Option 1 is less optimal, as it moves the chatty protocol onto the client-to-server connection.
If there are no consumers of the single-id call, then I would not keep that one alive (so not option #3).
Depending on your language of choice, it might be best to implement option 2 (or 3, for that matter). Node.js (and C# too, btw) makes it very easy to perform multiple calls in parallel (async calls), wait for all results, and then return to the client.
This means that you will not wait X times longer - just a bit more, aligned with your slowest call.
ES6 (modern JS, supported by Lambda) now supports Promise.all() for this purpose.
C# also natively supports these patterns with Task.WaitAll()
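A sketch of option 2 in Node.js (fetchProduct is a placeholder for whatever the existing single-id lookup does - a database query or an invocation of the existing single-product Lambda/endpoint):

// Placeholder for the existing single-product lookup
const fetchProduct = async (productId) => ({ id: productId /* ...details... */ });

exports.handler = async (event) => {
  const { productIds } = JSON.parse(event.body);                    // e.g. ["p1", "p2", "p3"]
  const products = await Promise.all(productIds.map(fetchProduct)); // lookups run in parallel
  return { statusCode: 200, body: JSON.stringify(products) };       // total time ~ the slowest single call
};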
I want to write my own "ServletContainerInitializer" that adds my local filter to the ServletContext. I also want to manage the ordering of ServletContainerInitializer invocation so that my local filter gets registered and hit by the request before the websocket upgrade filter.
How do I initialize my local ServletContainerInitializer?
First, ServletContainerInitializers are not ordered; that feature is not part of the Servlet spec. You can't accomplish that part of your question (maybe in a future version of the Servlet spec).
Next, filtering on WebSocket upgrade requests is highly discouraged and a cause of a large number of problems in WebSocket. You have to be very careful not to do any of the following:
Do not access anything on the Response object
Do not wrap the Request or Response objects
Do not access the Request input streams or readers
Do not access the Response output streams or writers
Do not add headers
Do not change headers
Do not access request HttpSession
Do not access request user principal
Do not access request authentication / authorization methods
Do not access request parts (multipart/form-data)
Do not access request parameters
Do not access ServletContext
Do not access request.getScheme or isSecure
Do not remove things from the request (attributes, headers, parameters, etc)
In short, the only safe things you can do are
request.getAttribute(String name)
request.getContextPath()
request.getCookies()
request.getHeader(String name)
request.getIntHeader(String name)
request.getLocalName()
request.getLocalPort()
request.getPathInfo()
request.getPathTranslated()
request.getQueryString()
request.getRemoteAddr()
request.getRemotePort()
request.getRequestURI()
request.getRequestURL()
request.getServerName()
request.getServerPort()
All other accesses on the request or response objects will change the state of the request and prevent an upgrade.
The fact that Jetty has a WebSocketUpgradeFilter is just our implementation choice for the JSR-356 (aka javax.websocket) spec. It is added by a server-side ServletContainerInitializer and is forced to be first, always.
In practice you should work with the expectation that upgrades occur before the Servlet processing (and this includes filters), as this is how the spec is written. There are open bugs against the spec about how interactions with filters and whatnot should be treated, but those are currently unanswered and loosely scheduled for a future version of the javax.websocket spec.
Future versions of Jetty will likely change from using a filter to using something internal that cooperates at the path mapping level, merging the logic from the Servlet spec and the WebSocket spec into a single new set of rules.
Since this question comes up often, I've ticked the community wiki flag.
The number one reason this gets asked is because there is some authentication or authorization logic built into a filter on your project.
If this is the case, you have 2 options.
Refactor out the authentication and/or authorization logic into a standalone class, unassociated with your filter.
Build a new Filter and a new ServerEndpointConfig.Configurator that uses this now-common logic to accomplish the end results you need. Note that you do not have access to the entire HttpServletRequest object when under a potential WebSocket upgrade; you only have access to the HandshakeRequest object contents. (You can see the restrictions now.)
Use the Servlet spec and containers properly, and implement/configure security at the container level, which will always execute before websockets, servlets, or filters - thus dropping your security-based Filters entirely.