There are numerous articles, such as this one, which insist that when implementing microservices, each team should build its own UI fragment. These fragments can then be composed into a single page using iFrames or divs.
My question is: how do I implement this on AWS?
This could be done at deployment time or at runtime.
1. Deployment time: the HTML fragments are collected as part of the CI/CD process and assembled in S3, which serves as the web server for static pages (with API Gateway for dynamic JSON content).
2. Runtime: the HTML fragment (div) has to be delivered to the client browser on demand, as the client clicks through (it can be cached in the client browser and/or API Gateway).
What is the right technology for this? I don't think Lambdas can deliver HTML fragments through API Gateway. Also, the CI/CD approach still seems fuzzy to me.
By the way, is there any other way? Thanks.
Use an SPA framework like Angular/React (but not limited to those; the same method can be implemented in an MPA).
An SPA helps us segregate the UI from the back end, and both run independently.
In the service layer of the application (UI), call the respective microservice, say M1 or M2.
In monolithic applications like an MPA (multi-page application: Spring/Struts/JSP), where the UI is generated from the back end and the two are tightly coupled, you need to call the different REST APIs using jQuery/AJAX and process the JSON responses.
In an SPA, multiple scenarios, such as triggering all microservices in parallel, waiting for M1 and passing its payload to M2, or racing M1 against M2, can be achieved using promises in JavaScript.
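To make those three scenarios concrete, here is a minimal sketch of the same orchestration patterns. It is written with Python's asyncio and hypothetical call_m1/call_m2 helpers rather than the browser-side JavaScript described above, but each step maps one-to-one to Promise.all, promise chaining, and Promise.race.

```python
# Hypothetical sketch of the three orchestration patterns using asyncio;
# in the SPA the equivalents are Promise.all, chained then/await, and Promise.race.
import asyncio

async def call_m1(payload):      # placeholder for an HTTP call to microservice M1
    await asyncio.sleep(0.1)
    return {"from": "M1", "payload": payload}

async def call_m2(payload):      # placeholder for an HTTP call to microservice M2
    await asyncio.sleep(0.2)
    return {"from": "M2", "payload": payload}

async def main():
    # 1. Trigger both microservices in parallel (Promise.all).
    m1, m2 = await asyncio.gather(call_m1("a"), call_m2("b"))

    # 2. Wait for M1, then pass its payload to M2 (promise chaining).
    m2_after_m1 = await call_m2(await call_m1("a"))

    # 3. Race M1 against M2 and take whichever finishes first (Promise.race).
    done, pending = await asyncio.wait(
        [asyncio.create_task(call_m1("a")), asyncio.create_task(call_m2("b"))],
        return_when=asyncio.FIRST_COMPLETED,
    )
    first = next(iter(done)).result()
    for task in pending:
        task.cancel()
    print(m1, m2, m2_after_m1, first)

asyncio.run(main())
```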
I am familiar with the Firebase platform, but I am relatively new to the Google Cloud Platform as a whole.
I am working on a project built using a microservices structure, and I have many questions for which I cannot find an answer, or rather, cannot find any examples.
Unfortunately, all the examples I am able to find are way too simple to extrapolate a viable answer for my issues.
I adopted the new Cloud Run offering and decided to use the fully managed version (not Kubernetes). I built a few microservices (each built using Express for Node or Flask for Python, depending on what the service does). Each microservice exposes its own endpoint and has its own API to call its methods, and I use a service account to allow the application to perform the internal calls.
I now want to expose the application externally (specifically to my client built with Vue.js), and I was trying to leverage another Google product to create and expose an API: Cloud Endpoints.
My question (specific to the Cloud Run setup) is how it is possible, and what I need to do, to create an API endpoint that communicates with the client app and internally calls multiple services, combining their responses into one.
Just to be clear, let's take an example:
Cloud Run service 1 -> CRUD user API
Cloud Run service 2 -> CRUD product API
Cloud Endpoints externally visible API -> get the user from service 1, then get the products from service 2, and return the combined response: all green products for user Jane Doe.
How can I aggregate the responses directly in the Endpoints gateway, check for failures, and, if everything goes smoothly, send the aggregated response to the client?
Do I need to build the aggregation endpoint in something else, like a Cloud Function, or can I do it directly in the Endpoints gateway?
Note that for Cloud Run, the Endpoints gateway is itself another Cloud Run container.
Thanks for any help; I'm pretty much running out of options here.
As I understand it, an API gateway should just work as a proxy, presenting all microservices as a single endpoint. For this scenario, I think you have the following two approaches:
1. Implement a new microservice (or extend one of the existing ones) that performs the invocations and aggregates the responses.
2. The client (e.g. the UI) can invoke the services and do the aggregation on its side.
I feel it is not a good idea to do it at the API gateway.
In my opinion, from an architectural point of view, the best option is to create a new microservice that takes the responses from the other two services and aggregates them.
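A minimal sketch of such an aggregating service, assuming Flask and the requests library, with invented URLs and routes for the two Cloud Run services (service-to-service authentication, e.g. fetching an identity token for the service account, is left out here):

```python
# Hypothetical aggregation service: calls the user and product services and merges the results.
import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Assumed internal URLs of the two Cloud Run services.
USER_SERVICE = "https://user-service-xyz.a.run.app"
PRODUCT_SERVICE = "https://product-service-xyz.a.run.app"

@app.route("/users/<user_id>/products")
def user_with_products(user_id):
    user_resp = requests.get(f"{USER_SERVICE}/users/{user_id}", timeout=5)
    if user_resp.status_code != 200:
        return jsonify({"error": "user lookup failed"}), 502

    products_resp = requests.get(
        f"{PRODUCT_SERVICE}/products",
        params={"owner": user_id, "color": "green"},
        timeout=5,
    )
    if products_resp.status_code != 200:
        return jsonify({"error": "product lookup failed"}), 502

    # Combine the two payloads into a single response for the client.
    return jsonify({"user": user_resp.json(), "products": products_resp.json()})
```

The gateway (Endpoints) then only needs to route the external request to this one service.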
I understand that you want to aggregate the responses in an API gateway and are not able to find code examples for it. Here I was able to find a guide on what you want to implement. The full code implementation can be found in this repository.
Keep in mind, though, that this implementation approach is not a best practice.
It is OK only if the two services being combined are independent, meaning there is no functional/business relation between them, so no concurrency or inconsistency problems will occur while aggregating.
I'm developing a web service using Django. In addition to the web application, I have a separate module with about 40 functions that take some parameters, perform network-bound tasks, and return results. These functions (or an entry-point function) can be called from Django views.
Here is the flow I’m trying to achieve.
From the web application, users can submit a URL to start the operation.
That request should initiate those functions in parallel (with the URL as an argument) on the server (not necessarily all at once).
Users can make a request from the web application to get a list of completed tasks and the results of the ongoing operation.
Multiple users can submit URLs to the web application and initiate the operation separately (each user gets their own list of 40 results).
Currently I am experimenting with the Thread and Queue classes to achieve this. What I want to know is: how can I manage this flow without spawning so many threads? How should I maintain the separation between two operation sessions? Is there any way I can incorporate Django's capabilities for this?
All I ask for is a basic guideline on how things should be organized to achieve this.
It sounds like you could run your functions in Celery, a distributed task queue for Python. Take a look at the docs for integrating it with Django here: http://docs.celeryproject.org/en/latest/django/first-steps-with-django.html
There is also a package named django-celery-beat if you need to schedule tasks.
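To make that concrete, here is a rough sketch of the flow described above, assuming Celery is already wired into the Django project as in the linked docs; the task, view, and check names below are hypothetical:

```python
# tasks.py -- one Celery task per network-bound check (names are hypothetical)
from celery import shared_task

@shared_task
def check_url(url, check_name):
    # ... perform the network-bound work for this particular check ...
    return {"check": check_name, "url": url, "status": "ok"}


# views.py -- start all ~40 checks in parallel and poll their progress
from celery import group
from celery.result import GroupResult
from django.http import JsonResponse

CHECK_NAMES = ["dns", "ssl", "headers"]  # ... in practice, ~40 check names

def start_operation(request):
    url = request.POST["url"]
    job = group(check_url.s(url, name) for name in CHECK_NAMES)
    result = job.apply_async()
    result.save()  # needs a result backend (e.g. Redis) so the group can be restored later
    return JsonResponse({"operation_id": result.id})

def operation_status(request, operation_id):
    result = GroupResult.restore(operation_id)
    finished = [r.result for r in result.results if r.ready()]
    return JsonResponse({"completed": len(finished),
                         "total": len(result.results),
                         "results": finished})
```

Each user's submission gets its own group (and operation_id), which keeps the sessions separate, and the worker pool, rather than your own threads, controls how many checks run at once.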
I'd like to use the microservices architectural pattern for a new system, but I'm having trouble figuring out how to share and merge data between the services when the services are isolated from each other. In particular, I'm thinking of returning consolidated data to populate a web app UI over HTTP.
For context, I'm intending to deploy each service to its own isolated environment (Heroku) where I won't be able to communicate internally between services (e.g. via //localhost:PORT). I plan to use RabbitMQ for inter-service communication and Postgres for the database.
The decoupling of services makes sense for CREATE operations:
Authenticated user with UserId submits 'Join group' webform on the frontend
A new GroupJoinRequest including the UserId is added to the RabbitMQ queue
The Groups service picks up the event and processes it, referencing the user's UserId
However, READ operations are much harder if I want to merge data across tables/schemas. Let's say I want to get details for all the users in a certain group. In a monolithic design, I'd just do a SQL JOIN across the Users and the Groups tables, but that loses the isolation benefits of microservices.
My options seem to be as follows:
Database per service, public API per service
To view all the Users in a Group, a site visitor gets a list of UserIDs associated with the group from the Groups service, then queries the Users service separately to get their names (a sketch of this two-step fetch-and-merge follows the cons below).
Pros:
very clear separation of concerns
each service is entirely responsible for its own data
Cons:
requires multiple HTTP requests
a lot of postprocessing has to be done client-side
multiple SQL queries can't be optimized
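For illustration, a sketch of that two-request read path; the /groups/{id}/members and /users endpoints are hypothetical, and in practice the merging would likely happen in the browser, but the shape is the same:

```python
# Hypothetical client-side read path for option 1: two requests, merged by the caller.
import requests

GROUPS_API = "https://groups.example.com"
USERS_API = "https://users.example.com"

def users_in_group(group_id):
    # 1. Ask the Groups service which UserIDs belong to the group.
    user_ids = requests.get(f"{GROUPS_API}/groups/{group_id}/members", timeout=5).json()

    # 2. Ask the Users service for the details of those users.
    users = requests.get(
        f"{USERS_API}/users", params={"ids": ",".join(user_ids)}, timeout=5
    ).json()

    # 3. Merge on the caller's side: shape the result however the UI needs it.
    by_id = {u["id"]: u for u in users}
    return [by_id[uid] for uid in user_ids if uid in by_id]
```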
Database-per-service, services share data over HTTP, single public API
A public API server handles request endpoints. Application logic in the API server makes requests to each service over an HTTP channel that is only accessible to other services in the system.
Pros:
good separation of concerns
each service is responsible for an API contract but can do whatever it wants with schema and data store, so long as API responses don't change
Cons:
non-performant
HTTP seems a weird transport mechanism to be using for internal comms
ends up exposing multiple services to the public internet (even if they're notionally locked down), so security threats grow from greater attack surface
Database-per-service, services share data through message broker
Given I've already got RabbitMQ running, I could just use it to queue requests for data and then to send the data itself. So for example:
client requests all Users in a Group
the public API service sends a GetUsersInGroup event with a RequestID
the Groups service picks this up, and adds the UserIDs to the queue
the Users service picks this up, and adds the User data onto the queue
the API service listens for events with the RequestID, waits for the responses, merges the data into the correct format, and sends it back to the client (see the sketch after the cons below)
Pros:
Using existing infrastructure
good decoupling
inter-service requests remain internal (no public APIs)
Cons:
Multiple SQL queries
Lots of data processing at the application layer
harder to reason about
Seems strange to pass large quantities of data around via the event system
Latency?
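A compressed sketch of the API-service side of that flow, assuming pika and RabbitMQ's usual request/reply convention (a reply queue plus a correlation ID used as the RequestID); queue names and payloads are made up:

```python
# Hypothetical request/reply over RabbitMQ: the public API service asks for data
# and correlates the answers coming back from the Groups and Users services.
import json
import uuid
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Exclusive, auto-named queue on which this API instance receives replies.
reply_queue = channel.queue_declare(queue="", exclusive=True).method.queue

def get_users_in_group(group_id, expected_replies=2):
    request_id = str(uuid.uuid4())
    channel.basic_publish(
        exchange="",
        routing_key="get_users_in_group",          # consumed by the Groups service
        properties=pika.BasicProperties(correlation_id=request_id, reply_to=reply_queue),
        body=json.dumps({"group_id": group_id}),
    )

    replies = []
    # Wait for the correlated replies (Groups -> UserIDs, Users -> user details).
    for method, properties, body in channel.consume(reply_queue, inactivity_timeout=5):
        if method is None:          # timed out waiting for a reply
            break
        channel.basic_ack(method.delivery_tag)
        if properties.correlation_id == request_id:
            replies.append(json.loads(body))
            if len(replies) >= expected_replies:
                break
    channel.cancel()
    return replies                   # merge/shape these into the API response
```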
Services share a database, separated by schema, other services read from VIEWs
Services are isolated into database schemas. Schemas can only be written to by their respective services. Services expose a SQL VIEW layer on their schemas that can be queried by other services.
The VIEW functions as an API contract; even if the underlying schema or service application logic changes, the VIEW exposes the same data, so consuming services are not broken by internal changes.
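As a small sketch of what that contract could look like (the schema, view, and table names here are invented), the Users service publishes a versioned VIEW and another service queries it, shown with psycopg2:

```python
# Hypothetical: the Users service publishes a stable, versioned VIEW as its read contract.
import psycopg2

conn = psycopg2.connect("dbname=app")
with conn, conn.cursor() as cur:
    # Run by the Users service's migrations: internal columns can change,
    # but the view keeps exposing the same shape.
    cur.execute("""
        CREATE OR REPLACE VIEW users.user_directory_v1 AS
        SELECT id AS user_id, display_name, email
        FROM users.accounts;
    """)

    # Run by another service (which only has a read grant on the view):
    cur.execute("""
        SELECT gm.group_id, ud.user_id, ud.display_name
        FROM groups.group_members gm
        JOIN users.user_directory_v1 ud ON ud.user_id = gm.user_id
        WHERE gm.group_id = %s;
    """, ("some-group-id",))
    rows = cur.fetchall()
```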
Pros:
Presumably much more performant (single SQL query can get all relevant data)
Foreign key management much easier
Less infrastructure to maintain
Easier to run reports that span multiple services
Cons:
tighter coupling between services
breaks the idea of fundamentally atomic services that don't know about each other
adds a monolithic component (database) that may be hard to scale (in contrast to atomic services which can scale databases independently as required)
Locks all services into using the same system of record (Postgres might not be the best database for all services)
I'm leaning towards the last option, but would appreciate any thoughts on other approaches.
To evaluate the pros and cons, I think you should focus on what the microservices architecture aims to achieve. In my opinion, microservices is an architectural style for building loosely coupled applications. It is not designed for building high-performance applications, so sacrificing some performance and accepting data redundancy are things we agree to when we decide to build applications the microservices way.
I don't think your services should share a database; the tighter coupling sacrifices the main objective of the microservices architecture. My suggestion is to create a consolidated data service that picks up the data-change events from all the other services and updates the database behind it. You might want to design the database behind the consolidated data service in a way that is optimised for queries (like a data warehouse), because that is all this service will be used for. You might also want to consider a NoSQL database to support your consolidated data service.
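A rough sketch of such a consolidated data service, assuming the change events flow through the RabbitMQ setup described in the question and a Postgres table denormalised for reads; the queue, event, and column names are invented:

```python
# Hypothetical consolidated-data (read model) service: consumes change events
# from RabbitMQ and upserts a denormalised row that is optimised for queries.
import json
import pika
import psycopg2

db = psycopg2.connect("dbname=read_model")

def handle_event(channel, method, properties, body):
    event = json.loads(body)
    if event["type"] == "UserJoinedGroup":
        with db, db.cursor() as cur:
            cur.execute(
                """
                INSERT INTO group_members_view (group_id, user_id, display_name)
                VALUES (%s, %s, %s)
                ON CONFLICT (group_id, user_id) DO UPDATE
                SET display_name = EXCLUDED.display_name;
                """,
                (event["group_id"], event["user_id"], event["display_name"]),
            )
    channel.basic_ack(method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_consume(queue="data_change_events", on_message_callback=handle_event)
channel.start_consuming()
```

The public API then answers read queries from this one store with a single query, while each owning service keeps its own database for writes.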
I've just started messing around with AWS DynamoDB in my iOS app and I have a few questions.
Currently, I have my app communicating directly with my DynamoDB database. I've been reading around lately, and people are saying this isn't the proper way to go about getting data from my database.
By this I mean I just have a function in my code that queries my DynamoDB database and returns the result.
The way I do it works, but is there a better way I should be going about this?
Amazon DynamoDB itself is a highly scalable service, and standing up another server in front of it means that server also has to be scaled in line with the RCU/WCU configured for your tables, which we can and should avoid.
If your mobile application doesn't need a backend server and you can perform all the business functions from the mobile device, then you should probably think about:
Using the AWS DynamoDB SDK for iOS to write your client application that runs on the mobile device.
Using the AWS Token Vending Machine to authenticate your mobile users and grant them credentials for running operations on the DynamoDB tables.
Controlling access (i.e. what operations are allowed on which tables, etc.) using IAM policies.
HTH.
From what you say, I guess you are asking about a way to distribute data to many clients (iOS apps).
There are a few integration patterns (a very good book on this is Enterprise Integration Patterns), one of which is called Shared Database. It is essentially about multiple clients sharing data through a common database. The main drawback of that pattern (in your case) is that each client makes assumptions about what the database schema looks like, which can give you headaches maintaining the schema in the future if your business logic changes.
The more advanced approach would be to send events on every change in your data instead of writing changes to the database directly from the client apps. This way you can add extra processing to the events before the data they carry is written to the database. For example, you may want to change the event format in a new version of your app but still support legacy users, so you add a translation procedure that transforms both types of events into the format that fits the database schema. It's basically a question of working with diffs vs. snapshots.
You should be aware of the added complexity of working with events; it can be overkill if your app is simple and schema changes are unlikely.
Also consider that you can do data preprocessing using DynamoDB Streams, which gives you some of the advantages of events while keeping the implementation simple.
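For example, a minimal sketch of a Lambda function attached to a DynamoDB Stream; the table attributes and what you do with each record are hypothetical:

```python
# Hypothetical Lambda handler on a DynamoDB Stream: inspect each change record
# and run whatever preprocessing/translation is needed before acting on it.
def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] in ("INSERT", "MODIFY"):
            new_image = record["dynamodb"].get("NewImage", {})
            # Stream images use the DynamoDB attribute-value format,
            # e.g. {"contactId": {"S": "123"}, "version": {"N": "2"}}.
            contact_id = new_image.get("contactId", {}).get("S")
            # ... translate legacy formats, enrich, or fan out to other systems here ...
            print(f"processing change for contact {contact_id}")
```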
Nowadays a lot of web applications provide APIs for other applications to use.
I am new to using APIs, so I want to understand the use cases for them.
Let's take Basecamp as an example.
What are the use cases for using their API in my web application?
Inserting existing data from my web application into a newly created Basecamp account, instead of entering everything manually, which could take days or weeks if the data is huge?
Updating my application's data when the user changes something in Basecamp? If so, how do I know, for example, when a user adds/edits/removes a contact in Basecamp? Do I make a request from the backend and check every minute?
Making a backup of the Basecamp data so I can move it to other applications if necessary?
Are all the above examples good use cases for the usage of API?
Are there more use cases?
I want to have a clear picture of why it's good to use another web service's API and how I can leverage that in my application.
Thanks.
I've found the biggest reason to use and provide web services is to be able to programmatically drive the application with another process. This allows the coupling of different actions in different applications driven by one event/process/trigger.
For example, I could use the web services provided by Basecamp, my bug-tracking database, and the continuous integration server. I could tie all those things together and kick them off from a commit hook script.
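Something along these lines, for instance; all URLs and payloads below are invented for illustration and are not the real Basecamp or CI APIs:

```python
#!/usr/bin/env python3
# Hypothetical post-commit hook: update the bug tracker, post a note to the
# project tool, and kick off a CI build, all driven by one commit.
import subprocess
import requests

commit_msg = subprocess.check_output(["git", "log", "-1", "--pretty=%B"], text=True).strip()

# 1. Comment on the referenced ticket in the bug-tracking database (hypothetical API).
requests.post("https://bugs.example.com/api/tickets/comment",
              json={"commit": commit_msg}, timeout=5)

# 2. Note the change in the project-management tool (hypothetical endpoint).
requests.post("https://projects.example.com/api/messages",
              json={"body": f"Committed: {commit_msg}"}, timeout=5)

# 3. Trigger a build on the continuous integration server (hypothetical endpoint).
requests.post("https://ci.example.com/api/builds", json={"branch": "main"}, timeout=5)
```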
I can have a monitor in production automatically open a ticket in our ticket tracker. This could trigger an autoremediation process from the ticket tracker which logs into the box remotely and restarts the service.
The other major reason I've seen to use and provide web services is to reduce double entry. If you do change management in your production environment, that usually means you create change tickets. The changes that occur may also need to be reflected in the Change Management Database, which is usually a model of how production is supposed to look. Most of these systems don't automatically update your configuration items with the data from the change. Using web services, you can stitch them together to eliminate the double (manual) entry that would normally occur.
APIs are used any time you want to get data to/from an application without using the default interface.
* I'd bet there's a mobile app that uses the Basecamp API.
* You could use the API to pull information from Basecamp into another application (like project-management software or an individual's to-do web page).
* The geekiest of us may prefer to update Basecamp from a script/command line rather than interrupting our workflow to open a web page and click around.