I am new to WSO2, so I just want to understand the difference between the API Manager and the Integration Server.
I did not know about this software and it looks awesome. What I want is to orchestrate automation across systems using an ESB, but also have a store to manage the APIs. Do I install both, or just the API Manager?
To actually 'orchestrate automation' you will need the Enterprise Integrator. The Enterprise Integrator is also a product that you generally don't want to expose to the outside world, i.e. the big bad internet.
The API Manager serves exactly that purpose. It is a gateway you can use to expose resources while managing access, availability, throttling, etc. However, it is not meant for building actual integrations/orchestrations.
In your case it sounds like you would need both.
Can we deploy QlikSense/QlikView on serverless architecture?
We are currently using a monolithic architecture; is there a way to move to serverless?
While I am not familiar with Qlik's products, it is unlikely they would be suitable for a serverless architecture.
Companies generally offer their products either as:
Downloadable products that you run on your own server (which could be a virtual server in the cloud), or
Software-as-a-Service, where you access their website directly and no server is required (eg Salesforce)
"Serverless architecture" is a design decision that can be made when designing a software product. It means the the application is broken-down into small components ('microservices') that can be run on services like AWS Lambda, with no actual server.
However, such architecture would normally only be used for your own applications that you create. If another company has designed their system to be 'serverless', then they would normally run it on a cloud system (eg AWS) and offer it to users as Software-as-a-Service. It would be highly unusual to have a 'download' product that runs on a serverless architecture.
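To make that concrete, a serverless component is typically just a small handler function that the cloud provider invokes on demand. A minimal sketch of one such component (the handler signature follows AWS Lambda's Node.js convention; the function body and response shape are purely illustrative):

    // A minimal sketch of a serverless component: a single function that the
    // platform (here AWS Lambda, Node.js runtime) invokes per request -- there
    // is no server process for you to run or manage.
    export const handler = async (event: { name?: string }) => {
      // A real microservice would do some work here (query a datastore, etc.);
      // this one just echoes its input to illustrate the shape.
      return {
        statusCode: 200,
        body: JSON.stringify({ hello: event.name ?? "world" }),
      };
    };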
I notice that Qlik has product offerings that run on AWS (AWS Marketplace: Qlik), but these run on Amazon EC2 instances rather than serverless infrastructure.
If you look into the Qlik Core product then yes, Qlik can be deployed in an elastic, containerised environment. But then, as I understand it, you don't get the standard objects and visualisations, user management, etc. So you have to code your own stuff that ties into the Qlik Data Analytics Engine via APIs.
From https://core.qlik.com/why-qlik-core/
So, how do you get it? Licensing info here but let’s talk components
Linux-based Associative Engine – provided as a Docker image with built-in support for Amazon Web Services, Microsoft Azure and Google Cloud Platform
Supporting APIs – these ingest your data into the Qlik Associative Engine through connectors
Supporting Open Source Libraries – these various libraries by Qlik expose the engine to help you build solutions faster
It’s all language agnostic but JavaScript lovers will find it easier to work with given the number of our open source tools available in JavaScript. Other top languages and tools used include R, Go, Shell, C#, Python, Java and D3. Qlik Core can also be managed with the orchestration tool of your choice for implementing, scaling and managing containerized applications.
It really depends on what exactly you are building. As The Budac mentioned, you can use Qlik Core.
If you just want to invoke a Qlik API (for example for some automation jobs) then serverless functions make sense.
Qlik Sense (both the Enterprise and Kubernetes versions) exposes a lot more APIs, which can be called from practically anywhere.
QlikView, on the other hand, is more ... conservative. QV is an older product and its APIs/integrations are more limited. For example: to call the Management API you have to be on the same domain as QV. Personally I've only connected to the QV Management API with C#, and I'm pretty sure you can't use JS/Node.
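As an illustration of the serverless-automation idea above, here is a sketch of a function that calls a Qlik HTTP API. The URL, path, and auth scheme are placeholders only; the real values depend entirely on which Qlik product and version you run and how it is configured:

    // Hypothetical scheduled function that calls some Qlik HTTP API.
    // QLIK_API, the path, and the bearer-token auth are all assumptions --
    // substitute whatever your Qlik deployment actually exposes.
    const QLIK_API = process.env.QLIK_API ?? "https://qlik.example.com/api";

    export const handler = async (): Promise<{ statusCode: number; body: string }> => {
      const res = await fetch(`${QLIK_API}/automation/trigger`, {
        method: "POST",
        headers: { Authorization: `Bearer ${process.env.QLIK_TOKEN}` },
      });
      if (!res.ok) throw new Error(`Qlik call failed with HTTP ${res.status}`);
      return { statusCode: 200, body: await res.text() };
    };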
I am familiar with the Firebase platform, but I am a relatively new user of the Google Cloud Platform as a whole.
I am working on a project built using a microservices structure, and I have many questions for which I cannot find an answer, or rather, for which I cannot find any examples.
Unfortunately, all the examples I am able to find are way too simple for me to extrapolate a viable answer to my issues.
I adopted the new Cloud Run offering, and I decided to play with the fully managed version (not Kubernetes). I built a few microservices (each service is built using Express for Node or Flask for Python, depending on what the service does). Each microservice exposes its own endpoint and has its own API to call its methods, and I use a service account to allow the application to perform the internal calls.
I now want to expose the application externally (specifically to my client built using Vue.js), and I was trying to leverage another Google product to create and expose an API: Google Cloud Endpoints.
My question (specifically regarding the Cloud Run setup) is how it is possible, and what I need to do, to create an API endpoint that communicates with the client app and internally calls multiple services, combining their responses into one.
Just to be clear, let's make an example:
Cloud run service 1 -> crud user api
Cloud run service 2 -> crud product api
Cloud Endpoints externally visible API -> get the user from service 1, then get the products from service 2, and return the combined response: all green products for user Jane Doe.
How can I aggregate the responses directly in the endpoints gateway, check for failures, and, if everything goes smoothly, send the aggregated response to the client?
Do I need to build the aggregation endpoint in something else, like a Cloud Function for example? Or can I do it directly in the Google Endpoints gateway?
Note that for Cloud Run, the Google Endpoints gateway is itself another Cloud Run container.
Thanks for any help; I'm pretty much running out of options here.
As per my understanding, an API gateway should just work as a proxy, presenting all microservices as a single endpoint. For this scenario I think you can take one of the following two approaches:
1: Implement a new microservice (or extend an existing one) which will invoke the other services and aggregate their responses.
2: The client (e.g. the UI) can invoke the services and do the aggregation on its side.
I feel it is not a good idea to do it at the API gateway.
In my opinion, from an architectural point of view, the best option for you is to create a new microservice which will take the responses from the other two and aggregate them.
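A minimal sketch of what that aggregation service could look like, using Express since the question already uses it (the service URLs, routes, and response fields are made up for illustration; on Cloud Run you would also attach an ID token from your service account to the internal calls):

    import express from "express";

    const app = express();

    // Hypothetical internal service URLs -- substitute your real Cloud Run URLs.
    const USER_SVC = process.env.USER_SVC ?? "https://user-svc-xyz.a.run.app";
    const PRODUCT_SVC = process.env.PRODUCT_SVC ?? "https://product-svc-xyz.a.run.app";

    app.get("/users/:id/green-products", async (req, res) => {
      try {
        // Step 1: fetch the user from the user service.
        const userRes = await fetch(`${USER_SVC}/users/${req.params.id}`);
        if (!userRes.ok) return res.status(502).json({ error: "user service failed" });
        const user = await userRes.json();

        // Step 2: fetch the matching products from the product service.
        const prodRes = await fetch(`${PRODUCT_SVC}/products?color=green&userId=${req.params.id}`);
        if (!prodRes.ok) return res.status(502).json({ error: "product service failed" });
        const products = await prodRes.json();

        // Step 3: one combined payload back to the client.
        res.json({ user, products });
      } catch {
        res.status(500).json({ error: "aggregation failed" });
      }
    });

    app.listen(Number(process.env.PORT ?? 8080));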
I understand that you want to aggregate the responses in an API gateway and that you are not able to find code examples for it. Here I was able to find a guide on what you are wanting to implement. The full code implementation can be found in this repository.
Keep in mind, though, that this implementation is not considered a best practice.
It is OK only if the two services being combined are independent, meaning there is no functional/business relation between them, and concurrency or inconsistency problems will not occur in the process of aggregating.
This conceptual question has crept into my mind after becoming more familiar with AWS. In general, I’m curious if there is a best-practice and/or convention as to when an API provider should group endpoints into a new, separate API (vs. lumping the endpoints into an existing API).
To illustrate, let’s say a Service creates digital wallet coupons on behalf of Manufacturers, to be redeemed by Consumers at a bunch of Mom & pop stores — some of the activities the Service might engage in include:
Receiving data from the Manufacturers (in order to build the digital coupons)
Providing a mechanism for Consumers to find and download coupons
Providing a way for the Mom & pop stores’ payment terminals to validate the coupons
And, oh by the way, the Service might also be required to ...
Implement a variety of endpoints, based on technologies involved (e.g., PassKit with Apple Wallet)
So?
With AWS, it’s easy to modularize one’s backend (e.g., have an RDS instance for the database, run a few lambda functions for microservices, etc.) and load balance it all. API Gateway adds to this in that each endpoint can point to different things (lambda functions, EC2 instances via HTTP proxy, etc.).
Consequently, one approach might be to define one API in AWS API Gateway and have all the endpoints underneath it:
API: “Master”
    /coupon
        POST = create a new one (for Manufacturers)
        PUT = update an existing one (for Manufacturers)
        GET = retrieve one (for Consumers)
    /coupon/validate
        POST = verify it’s still valid (Mom & Pop store use-case)
    /apple-wallet
        /{version}
            /passes
                ... per documentation
            /devices
                ... per documentation
But would it make more sense for the Service to shave off the /apple-wallet endpoint and create an entirely new, separate API?
Alternatively, if the Service was going to publish documentation for public developers to use, would it make sense to move the Manufacturer-relevant endpoints into a separate API altogether?
Since AWS makes the effort of splitting endpoints so simple via API Gateway, are there any standard practices for when you should (or should not)?
Thank you for any insights / opinions!
My two cents. Think about your end-user for your APIs. You will have different developer end-users for each API set.
Your ideal situation will have each developer end-user only seeing the APIs that are relevant to them, so you should split your APIs into different gateways according to the end-users.
In the theoretical situation you describe:
Create an API for Manufacturers so they can integrate with you to create coupons. If you do the integration internally, it will be the corporate sales and presales people who talk to the manufacturers.
The users for the Service and End User coupons might end up being the same app developers that create an interface for both stores and users. So create a coupon API for them.
Separating the two should also give you security benefits, as you will protect the knowledge of your Manufacturer API from users who might try to hack it.
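In API Gateway terms, that simply means creating a separate REST API per audience instead of one "Master" API. A sketch using the AWS SDK for JavaScript v3 (the API names are illustrative; resources, methods, and usage plans would still be defined per API afterwards):

    import { APIGatewayClient, CreateRestApiCommand } from "@aws-sdk/client-api-gateway";

    const client = new APIGatewayClient({ region: "us-east-1" });

    // One REST API per developer audience, so each group of end-users only
    // ever sees (and gets documentation for) the endpoints relevant to them.
    for (const name of ["manufacturer-api", "consumer-coupon-api"]) {
      const out = await client.send(
        new CreateRestApiCommand({ name, description: `Endpoints for ${name} developers` })
      );
      console.log(`${name} -> ${out.id}`);
    }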
Why would anyone use WSO2 Application Server instead of other application servers?
I have rather encountered only problems with it, mainly due to class-loading issues, so I would appreciate it if someone could point out the advantages, or the use cases where using WSO2-AS really makes a difference.
I can see the benefits of other standalone WSO2 products, but as far as the AS is concerned, I would rather rely on more lightweight servers and just package the libraries I need.
There are a number of advantages to the WSO2 Application Server.
1.) It provides in-built support for multi-tenancy. If you have isolated departments in your organization, there is no real need to run a number of server instances; you can simply create a new tenant.
2.) Automatic lazy-loading support for tenants, web applications and web services. In a production system, a particular tenant/web application/web service can be idle for some time, and it is a waste to allocate hardware resources continuously to such idle applications, especially if you use IaaS. WSO2 Application Server can detect such idle tenants/web applications/web services and release their resources; the tenant/web application/web service is loaded again when a new request is dispatched to it.
3.) A wide range of deployment options: it can be deployed on-premise, on public or private IaaS, or on public or private PaaS such as Apache Stratos. As an example, one can deploy an application into the WSO2 App Cloud (http://wso2.com/cloud/app-cloud/) instantly without downloading anything, and later get the same experience on one of the above platforms.
4.) The deployment synchronization feature. In a clustered environment you may have a very large number of nodes, and rolling out application version upgrades and configuration changes across the cluster can be a headache. With deployment synchronization you modify only one node, labelled the manager node, and the changes are synchronized across the cluster automatically and consistently.
5.) When developing applications on WSO2 Application Server you can leverage Carbon platform-level features such as identity, registry, logging, distributed caching, multi-tenancy, etc. As an example, one can use the identity features provided by the platform to manage users, roles and permissions, and for authentication and authorization, without writing anything oneself.
6.) In-built support for security standards such as SSO among other WSO2 products.
7.) In-built monitoring capabilities for web services and web applications through WSO2 BAM.
8.) An enhanced and rich dashboard for applications and services, which provides basic statistics, application management, security wizards, code generation, Try-It tools, runtime logging configuration, etc.
9.) An enhanced class-loading mechanism (starting from AS 5.1.0): within one Application Server instance you can have a number of virtual server environments at the application level. As an example, one can specify that an application runs in minimal Tomcat mode, or assign it to run in Carbon mode (Tomcat + the Carbon platform).
When it comes to your specific issue: if you can specify your Application Server version and elaborate more on your class-loading issue, I can provide a more specific answer.
Having said the above, I want to mention that I'm from WSO2.
Can anyone tell me what the need for, or advantage of, using a web service with an ASP.NET GUI and LINQ to SQL is? The web service layer seems unnecessary. LINQ to SQL is completely new to me and I am researching it as I set up a new project. Does anyone have any experience with this?
You would expose services for those cases in which other applications may need to access your data (such as a smart client, another application, a WinForms app, etc.). A lot of people develop against web services from the start to avoid having to restructure around them in the future.
In almost any professional/enterprise web application you want to separate the UI tier from the data access layer, so you would not embed LINQ to SQL calls in the UI tier. Instead you would have a service tier in between, whether it's web services, WCF, or just a DLL with business logic that orchestrates your data access layer. Independent tiers are easier to maintain, update, refactor, and learn, so the up-front investment in creating them is worth the effort.
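To make the tiering concrete, here is a tiny sketch (written in TypeScript only for brevity; in the ASP.NET case the service tier would typically be a C# class library, WCF service, or web service, the repository would wrap the LINQ to SQL calls, and all names here are illustrative):

    // Data tier: the only place that knows how data is stored
    // (in the ASP.NET case, this is where LINQ to SQL would live).
    interface UserRepository {
      findById(id: string): Promise<{ id: string; name: string } | null>;
    }

    // Service tier: business logic only -- no UI concerns, no SQL.
    class UserService {
      constructor(private readonly repo: UserRepository) {}

      async displayName(id: string): Promise<string> {
        const user = await this.repo.findById(id);
        if (user === null) throw new Error(`unknown user ${id}`);
        return user.name.toUpperCase(); // a stand-in business rule
      }
    }

    // UI tier: talks to UserService, never to the repository directly,
    // so the storage technology can change without touching the UI.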
It is certainly not necessary, but can be handy in case you want to keep your data access layer on a separate server from your presentation server (ASP.NET). A web service allows you to restrict communication between the two servers to only port 80.
Note that this could apply to plain old ADO.NET or anything else too.
Web services became a separation layer because they were intended as a platform-agnostic way of sending data to other software. They are websites that serve information to other software rather than directly to the user.
A web service is a heavyweight separation layer for a website and cannot completely replace a good separation of data, business logic and UI.
Do it as your logic tells you to, but beware of the performance cost you pay if you do not need to communicate with other software.
Completely agree with Ovidiu Pacurar. Web services are NOT a good choice for modelling layers of concern. You should do this using good old-fashioned OO design. There is no reason for a web application to call web services within itself for data access, unless they are intended for client AJAX calls, or you need to run the business/data layer on another server for extreme security reasons.
Agreed with previous poster. You'd probably want to do this to apply the "Separation of Concerns" idea...