REST API that needs a connection - web-services

I have a system where the user first needs to connect, and then fetch some data based on that connection. For example, you connect to a database and then fetch metadata about a table.
I was planning to expose this via a REST API, so you would first connect and then use that connection to fetch the metadata.
Two options come to my mind:
a. Have a URL, say /connect, where you POST the connection parameters and get back a connection id. This id is then encoded in subsequent URLs to identify the connection.
b. POST the connection parameters with every request.
What are the pros/cons of these approaches? Are there any other alternatives?
One constraint is that the authentication mechanism for connecting to the system is not under my control; I am just exposing some data from the systems via web services, and I am exploring REST for this.

Do you really need to expose the connection?
I think it may just be semantic prejudice - but usually connection details are hidden by the service.
Does the connection have business value?!
If the connection does have business value, then treat it like a resource:
i.e.
do a POST on /connections to create a new connection and get back its id,
then do a GET on /connections/{id}/metadata to get the metadata about that connection.
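To make this concrete, here is a minimal sketch of the connection-as-a-resource idea in JAX-RS; ConnectionParams, BackendConnection, and Metadata are hypothetical stand-ins for whatever your system actually uses:

import java.net.URI;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.ws.rs.*;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/connections")
public class ConnectionResource {
    // Open backend connections, keyed by the id handed back to the client.
    private static final ConcurrentMap<String, BackendConnection> OPEN =
        new ConcurrentHashMap<>();

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response connect(ConnectionParams params) {
        String id = UUID.randomUUID().toString();
        OPEN.put(id, BackendConnection.open(params)); // hypothetical helper
        // 201 Created with the new resource's URI; the id identifies the connection.
        return Response.created(URI.create("/connections/" + id)).build();
    }

    @GET
    @Path("{id}/metadata")
    @Produces(MediaType.APPLICATION_JSON)
    public Metadata metadata(@PathParam("id") String id) {
        BackendConnection conn = OPEN.get(id);
        if (conn == null) {
            throw new NotFoundException(); // unknown or expired connection id
        }
        return conn.fetchMetadata(); // hypothetical call into the backend
    }
}

With this shape, option (a) from the question keeps server-side state, so you would also want an expiry/cleanup policy for abandoned connections; option (b) stays stateless at the cost of re-sending credentials on every call.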

Web API that itself sends data to client on availability

I am currently working with Django but I am stuck as I don't know if I am pursuing the right model given the nature of my application.
Problem Statement:
I have to build a REST API for a client such that, whenever I get a trigger for new entries in my database, I send those to the client. The client listens at a URL, has asked for data only once, and is then open to receiving data whenever it becomes available.
The client does not keep sending GET requests now and then.
There will be different API endpoints: one where I push all new data available to me, and another where the client asks for specific data (e.g. '/myAPI/givemethis').
I can easily implement the second requirement as it is a simple request-response case.
I am not sure how to send data to the client as it becomes available, without the client making repeated requests.
It appears a publisher-subscriber model is better suited to my use case, but I don't know how to implement it in Django.
I came across several concepts like streaming services and MQTT, but I am not sure which would be the right choice to go with.
Kindly provide some suggestions.

Luminus -- multiple requests within the same db connection

In my Luminus app I have this:
(defn page1 [id]
  (layout/render "page1.html"
                 {:article (db/get-single-article {:id (Integer/parseInt id)})}))
I want to perform multiple different requests to the db within the same db connection. How can I do that?
From your question it's not clear whether you want to reuse the same DB connection to handle multiple HTTP requests, or to have a single HTTP request call multiple functions using the JDBC API so that all those JDBC calls share the same DB connection.
If it is the latter, you can use with-db-connection to wrap all your functions that call the JDBC API. You can also use with-db-transaction if all SQL operations should be part of one DB transaction.
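A minimal sketch, assuming a Luminus-style setup where db/*db* is the pooled db-spec and the HugSQL query functions accept a connection as their first argument (get-comments, update-article!, and insert-audit-row! are made-up names):

(require '[clojure.java.jdbc :as jdbc])

;; Several queries on one connection, taken from the pool once.
(defn article-with-comments [id-str]
  (let [id (Integer/parseInt id-str)]
    (jdbc/with-db-connection [conn db/*db*]
      {:article  (db/get-single-article conn {:id id})
       :comments (db/get-comments conn {:article-id id})})))

;; Several statements in one transaction; rolled back if an exception is thrown.
(defn save-article! [id title]
  (jdbc/with-db-transaction [tx db/*db*]
    (db/update-article! tx {:id id :title title})
    (db/insert-audit-row! tx {:article-id id})))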
In the former case, I am not sure why you would want to reuse the same connection across multiple HTTP requests; it is not a common idiom, since HTTP is stateless by definition, and it causes multiple issues.
You could store the connection in your Ring HTTP session so that you can fetch it whenever you get a request associated with that session and use it for your JDBC logic.
However, such a solution has the following drawbacks:
you have to make sure that the connection gets released to the pool (or closed, if you don't use pooling) when it is no longer needed. How would you detect that? What if the client fails and never finishes the workflow at the end of which you would clean up the DB connection?
how many concurrent 'sessions' do you need to handle? If there are many (like hundreds), keeping a dedicated connection per session won't scale (DB connections are expensive resources on both sides, client and server)

Send data from server with Java EE 6 to client

Problem
We have a client-server application; the server side is Glassfish 3.1.2. This app has many users, as well as many modules (e.g. View Transactions, View Banks). There are some long-running processes invoked by the client which run on the server. So far we have not found a nice solution for showing the user what is going on on the server side. We want the users to get updated messages from the server at a given frequency. What would you suggest to use?
What we have done/tried
We (independently) used an approach with a Singleton bean and a Map of client IDs similar to this, and it works, of course. But then on the server side every method doSomething(Object... vars) must be converted to doSomething(Object... vars, String clientID), or whatever type the ID is. The client pulls data from the server, say, once per second. I would like to avoid adding facades between server and client.
I was thinking about JAX-WS or JAX-RS, but I'm not deeply familiar with these technologies and not sure what they can do.
Sockets
I should note that on the server side we have only Stateless beans (there is a reason for that); that is why I did not mention the use of a Stateful bean (which I think is a very good candidate).
Regards, Oleg
WebSocket could be a suitable choice. It allows the server to send unsolicited data to clients with no strong coupling; you just have to store a client id that maps client connections to running tasks, so you can push updates to the right connection.
The client id/socket connection mapping can be maintained in a singleton bean using an in-memory structure (e.g. a hash map), or in a permanent datastore if you need scalability or a more robust solution.
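A minimal sketch of that mapping using the standard JSR 356 API (note that this API ships with Java EE 7 containers such as Glassfish 4; Glassfish 3.1.2 would need its Grizzly-specific WebSocket support instead; the endpoint path and method names are illustrative):

import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.PathParam;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/progress/{clientId}")
public class ProgressEndpoint {

    // clientId -> open socket; the long-running task looks up its client here.
    private static final ConcurrentMap<String, Session> SESSIONS =
        new ConcurrentHashMap<>();

    @OnOpen
    public void onOpen(Session session, @PathParam("clientId") String clientId) {
        SESSIONS.put(clientId, session);
    }

    @OnClose
    public void onClose(@PathParam("clientId") String clientId) {
        SESSIONS.remove(clientId);
    }

    // Called from the server-side task to push a status update to one client.
    public static void pushUpdate(String clientId, String message) {
        Session session = SESSIONS.get(clientId);
        if (session != null && session.isOpen()) {
            try {
                session.getBasicRemote().sendText(message);
            } catch (IOException e) {
                SESSIONS.remove(clientId); // client went away; drop the mapping
            }
        }
    }
}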

Distributing an application server

I have an application server. At a high level, this application server has users and groups. Users are part of one or more groups, and the server keeps all users aware of the state of their groups and other users in their groups. There are three major functions:
Updating and broadcasting meta-data relating to users and their groups; for example, a user logs in and the server updates this user's status and broadcasts it to all online users in this user's groups.
Acting as a proxy between two or more users; the client takes advantage of peer-to-peer transfer, but in the case that two users are unable to directly connect to each other, the server will act as a proxy between them.
Storing data for offline users; if a client needs to send some data to a user who isn't online, the server will store that data for a period of time and then send it when the user next comes online.
I'm trying to modify this application to allow it to be distributed across multiple servers, not necessarily all on the same local network. However, I have a requirement that backwards compatibility with old clients cannot be broken; essentially, the distribution needs to be transparent to the client.
The biggest problem I'm having is handling the case of a user connected to Server A making an update that needs to be broadcast to a user on Server B.
By extension, an even bigger problem is when a user on Server A needs the server to act as a proxy between them and a user on Server B.
My initial idea was to try to assign each user a preferred server, using some algorithm that takes which users they need to communicate with into account. This could reduce the number of users who may need to communicate with users on other servers.
However, this only minimizes how often users on different servers will need to communicate. I still have the problem of achieving the communication between users on different servers.
The only solution I could come up with is having the servers connect to each other when they need to deal with a user connected to a different server.
For example, if I'm connected to Server A and I need a proxy with another user connected to Server B, I would ask Server A for a proxy connection to this user. Server A would see that the other user is connected to Server B, so it would make a 'relay' connection to Server B. This connection would just forward my requests to Server B and the responses to me.
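For illustration, a bare-bones relay on Server A could simply pipe bytes in both directions between the client's socket and a socket to Server B; this is only a sketch (Java 9+ for InputStream.transferTo), omitting error handling and connection management:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class Relay {

    // Forward traffic both ways until either side closes.
    public static void relay(Socket client, Socket upstream) {
        pipe(client, upstream); // client -> Server B
        pipe(upstream, client); // Server B -> client
    }

    private static void pipe(Socket from, Socket to) {
        new Thread(() -> {
            try (InputStream in = from.getInputStream();
                 OutputStream out = to.getOutputStream()) {
                in.transferTo(out); // copy until EOF
            } catch (IOException ignored) {
                // one side dropped; the other pipe will hit EOF and stop
            }
        }).start();
    }
}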
The problem with this is that it would increase bandwidth usage, which is already extremely high. Unfortunately, I don't see any other solution.
Are there any well known or better solutions to this problem? It doesn't seem like it's very common for a distributed system to have the requirement of communication between users on different servers.
I don't know how much flexibility you have in modifying the existing server. The way I did this a long time ago was to have all the servers keep a TCP connection open to each other. I used a UDP broadcast which told the other servers about each other and allowed them to connect to new servers and remove servers that stopped sending the broadcast.
Then, every time a user connects to a server, that server unicasts a TCP message to all the servers it is connected to, and every server keeps a list of users and which server each user is on.
Then, as you suggest, if you get a message from one user to another user on another server, you have to relay it to that user's server. The servers really need to be on the same LAN for this to work well.
You can run the server-to-server communication in a thread and effectively simulate the user being on the same server.
However, maintaining the user lists and sending messages is prone to race conditions (e.g. a user drops off while you are relaying a message from one server to another).
Maintaining the server code was a nightmare, and this is really not the most efficient way to implement scalable servers. But if you have to use the legacy server code base, then you really do not have too many options.
If you can, look into using a language that supports remote processes and nodes, like Erlang.
An alternative might be to use a message queue system like RabbitMQ or ActiveMQ and have the servers talk to each other through that. Those systems are designed to be scalable and usually work off a publish/subscribe mechanism.
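For instance, each server could publish user events to a fanout exchange and consume them from its own queue, so every server can maintain its user-to-server map; a rough sketch with the RabbitMQ Java client, where the broker host, exchange name, and message format are made up for illustration:

import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ServerBus {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("broker.example.com"); // assumed broker address

        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            // Fanout exchange: every bound queue receives every event.
            channel.exchangeDeclare("user-events", "fanout");

            // Each server consumes from its own broker-generated queue.
            String queue = channel.queueDeclare().getQueue();
            channel.queueBind(queue, "user-events", "");
            channel.basicConsume(queue, true,
                (tag, delivery) -> {
                    String event = new String(delivery.getBody(), StandardCharsets.UTF_8);
                    System.out.println("peer event: " + event); // update local user list
                },
                tag -> { });

            // Announce e.g. "user alice is now on serverA" to all peers.
            channel.basicPublish("user-events", "", null,
                "connected:alice@serverA".getBytes(StandardCharsets.UTF_8));

            Thread.sleep(1000); // give the consumer a moment in this demo
        }
    }
}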

Filtering data with Microsoft Sync Framework

Context: I'm working on a project that uses an offline application architecture. Our client program has two modes: connected and disconnected. In disconnected mode, users work against their local database (SQL CE) for retrieving and storing data. When a user connects to the application server again, the local database is synchronized with the central database. The transport layer in this project is WCF; we implement a proxy class to expose SqlSyncProvider on the client for Sync Framework to sync data.
Question: How can I implement data filtering with MSF? In our project, each client has a role, and each role has access to a different set of tables as well as different rows within a table. As far as I know, MSF allows us to filter data on a parameter column; however, the provisioning would then be the same for every user. In my case, provisioning differs from user to user, depending on the user's role.
Thanks.
You can use adapter filters on the server side, and send a parameter from the client to fetch data on a per-client basis.
Client
this.Configuration.SyncParameters.Add(
    new SyncParameter("@CustomerName", "Sharp Bikes"));
Server
// Filter on the server's sync adapter; table and column names illustrative:
SqlSyncAdapterBuilder builder = new SqlSyncAdapterBuilder(serverConnection);
builder.FilterClause = "CustomerName = @CustomerName";
builder.FilterParameters.Add(new SqlParameter("@CustomerName", SqlDbType.NVarChar));
SyncAdapter customerAdapter = builder.ToSyncAdapter();