We plan to move from Apigee Edge to Apigee X. Unfortunately, I still have an open security-related question for which I cannot find any suitable information.
Do I need an additional service for intrusion detection, such as Cloud IDS, or is that already built into Apigee X?
There are a number of possible best practices, but in general the service is the same; Apigee X differs from Apigee Edge in multiple ways:
You can deploy up to 50 proxies to an environment.
API proxies are immutable when they are deployed.
You can do your own configuration/setup in Apigee.
All KVMs are encrypted; you can use your own customer-managed encryption key.
Admin authentication and Admin API endpoints are different.
GCP IAM and RBAC govern admin identity and operators.
You can have a look at this document.
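As a small illustration of the point that admin authentication and admin API endpoints differ, here is a hedged Python sketch contrasting the management API base URLs of the two platforms (the organization name my-org is a placeholder):

```python
# Sketch: the management (admin) API base URLs differ between Apigee Edge
# and Apigee X. Edge uses basic/OAuth Apigee accounts, while Apigee X
# authenticates via Google Cloud OAuth2 tokens and IAM.

def admin_api_base(platform: str, org: str) -> str:
    """Return the management API base URL for an organization."""
    if platform == "edge":
        # Apigee Edge public cloud management API
        return f"https://api.enterprise.apigee.com/v1/organizations/{org}"
    if platform == "x":
        # Apigee X management API (GCP-hosted)
        return f"https://apigee.googleapis.com/v1/organizations/{org}"
    raise ValueError(f"unknown platform: {platform}")

print(admin_api_base("edge", "my-org"))
print(admin_api_base("x", "my-org"))
```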
Does Apigee X offer hosting with different cloud providers? Can one run a customer-managed runtime plane, for example on AWS machines?
What are the necessary considerations to keep in mind while migrating from Apigee Edge to Apigee X?
You specifically mention the runtime plane: Apigee Hybrid is what you're looking for. In essence, it allows you to run the Apigee API-proxy runtime engine in your own environment (e.g., AWS, or even on-prem).
Apigee Hybrid uses the same cloud management plane as Apigee X, so as I see it, Apigee Hybrid is the "hybrid flavor" of Apigee X, which is otherwise a SaaS offering.
No, their documentation states that Apigee X is hosted on Google Cloud. Your only options for custom hosting providers are Apigee Edge and Apigee Hybrid.
What is the difference between an Apigee API gateway and an Apigee API proxy? And how do I use a gateway?
How can I use an Apigee API gateway in my microservices project or in a full-stack app?
Is an Apigee API proxy the same as an Apigee API gateway? I could not find docs on an Apigee API gateway; please help me with that.
I am thinking of using an API proxy as a gateway in Apigee, but I am not crystal clear on that.
Also, I am getting confused by Edge Microgateway; is that what the Apigee gateway is here?
The names reflect the concepts.
An API gateway doesn't exist on its own; it is the concept of having an interface between the API consumer and the backend.
A proxy is one way to implement a gateway. The proxy gets the request from the consumer, checks and transforms it, and forwards it to the backend(s). Finally, it aggregates the answer(s) and sends them back to the consumer.
Apigee Edge implements the proxy concept, and so does the newer Apigee X.
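The flow just described (check, transform, fan out, aggregate) can be sketched in a few lines of Python. The backends here are simulated as plain functions, and the API key check is an invented placeholder, not any real Apigee policy:

```python
# Conceptual sketch of what an API proxy does; backends are simulated
# as plain functions so there are no real network calls.

def check(request):
    # 1. Validate the incoming request (e.g., an API key).
    if request.get("api_key") != "secret":
        raise PermissionError("invalid API key")

def transform(request):
    # 2. Reshape the consumer-facing request into the backend's format.
    return {"q": request["query"].lower()}

def backend_a(req):
    return {"source": "a", "result": req["q"] + "-from-a"}

def backend_b(req):
    return {"source": "b", "result": req["q"] + "-from-b"}

def proxy(request):
    check(request)
    backend_req = transform(request)
    # 3. Fan out to the backend(s).
    answers = [b(backend_req) for b in (backend_a, backend_b)]
    # 4. Aggregate the answers and return them to the consumer.
    return {"results": answers}

print(proxy({"api_key": "secret", "query": "Hello"}))
```

A real proxy would add policies such as quota enforcement, caching, and response shaping at the same steps.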
"Gateway" is a general concept, as is "proxy". Both words are used in many forms in the networking industry and when talking about integration. In my experience, "gateway" is generally the broader concept, like a controlled entry point into a network, while a "proxy" usually means a specific implementation of some controls or handling of traffic.
In Google's Apigee product in particular, which is a whole suite of API-management capabilities, including what you might call a gateway capability, an API proxy is a specific entity with a special definition. An Apigee proxy has a precise meaning in the Apigee product model: it is a functional part of a chain of capabilities and infrastructure that defines how the overall deployment and integration work. The Apigee proxy as a specific product concept is explained here: https://docs.apigee.com/api-platform/fundamentals/understanding-apis-and-api-proxies
I doubt you will find docs which talk about "Apigee gateway" because Apigee doesn't define such a component specifically.
Apigee Edge Microgateway is one specific flavor of deployment for an Apigee proxy, meaning it is one of several runtime engines on offer. It is more or less a "hybrid" option (it runs locally, not as SaaS, but it still has a cloud dependency on the Apigee management plane for startup and analytics collection), is based on Node.js, and is rather lean on features. It is capable and extensible, but the main, full-featured Apigee proxy engine found in Apigee Edge (SaaS), Apigee X (SaaS), and Apigee Hybrid is more feature-rich. Docs: https://docs.apigee.com/api-platform/microgateway/edge-microgateway-home
We have a current project: pj-xyz-dev
We would also probably require pj-abc-dev, pj-ghi-prod, and pj-czi-prod.
We need to check how easy it is to attach these to the respective Apigee environments for the logging capabilities.
And how do we transport the configuration from one environment/project to another on GCP?
We want to check the standard out-of-the-box capabilities between Apigee and GCP.
I'm building an app and the idea is to go serverless.
I'm looking mainly at AWS and GCP (Google Cloud Platform), and since AWS costs are a bit obscure (at least to me) and there is no way to ensure I won't be billed, I'm going with GCP.
For the "server" part of the app, I would like to build an API on GCP as I could do with AWS API Gateway, but I couldn't find any matching product for that.
The closest one was Google Cloud Endpoints, but it seems to follow a very different concept from AWS API Gateway. I've watched some videos about it (for example https://www.youtube.com/watch?v=bR9hEyZ9774), but I still can't get the idea behind it or whether it fits my needs.
Could someone please help clarify which GCP product would be suitable for creating an API and how it compares to AWS API Gateway?
Some link with info/example on how to do it would be really appreciated.
Google Product Manager here.
We don't have an exact analog for AWS API Gateway.
You're right about Cloud Endpoints. It's a bit of a different architecture than AWS uses -- it's a sidecar proxy that gets deployed with the backend. That's different than API Gateway, which is a fully managed proxy deployed in front of your backends.
If you are deploying in App Engine Flexible environments: good news! The Endpoints Proxy can be deployed as part of your deployment. It can do things similar to AWS API Gateway (API key validation, JWT validation, rate limiting).
We are working on some plans to allow for the proxy to be used in other places (Cloud Functions and the newer App Engine Standard runtimes).
And, finally: on our older App Engine Java and Python runtimes, we have API frameworks that provide the same functionality. Those frameworks do the same thing as the proxy, but are expressed as code annotations and built into your app. We're moving away from the framework model in favor of the proxy model.
An example Spring Boot project on Google Cloud App Engine can be found here: https://github.com/ashishkeshu/googlecloud-springboot
I have a question on WSO2 API Manager clustering. I have gone through the deployment documentation in detail and understand the distributed deployment concept, wherein one can segregate the publisher, store, key manager, and gateway. But as per my assessment, that makes the deployment architecture pretty complex to maintain, so I would like a simpler deployment.
What I have tested is simply running two different instances of WSO2 API Manager on two different boxes, pointing to the same underlying data sources in MySQL. What I have seen is that the API calls work perfectly, and tokens obtained from one WSO2 instance work for API invocation on the other API Manager instance. The only issue with this model is that we need to deploy the APIs from the individual publisher component of each running WSO2 API Manager instance. I am fine with doing that, since publishing will be done by one small team. We will have a hardware load balancer in front, holding the API endpoint URLs and token endpoint URLs for both API Managers, and the hardware LB will do the load balancing.
So my question is - are there any problems in following this simple approach from the RUNTIME perspective? Does the clustering add any benefit from RUNTIME perspective for WSO2 API Manager?
Thank you.
Your approach has the following drawbacks (there may be more that I am not aware of):
It is not scalable, meaning you can't independently scale (add more instances of) the store, publisher, gateway, or key manager.
Distributed throttling won't work. It will lead to throttling inconsistencies, since throttling state won't be replicated if you don't enable clustering. Let's say you define a 'Gold' tier for an API: no matter how many gateway instances you are using, a user should be restricted to no more than 20 req/min for this API. This has to be implemented with a distributed counter (I'm not sure of the exact implementation details). If you don't enable clustering, one gateway node doesn't know the number of requests served by the other gateway nodes, so each gateway node keeps its own throttle counter, and a user might be able to access your API at more than 20 req/min. That is one throttling inconsistency. Further, let's say one gateway node has throttled out a user but the other has not. If your LB routes the request to the first gateway node, the user will not be able to access the API; if it routes the request to the second, the user will. That is another throttling inconsistency. To overcome all these issues, you just need to replicate the throttling state across all gateway nodes by enabling clustering.
Distributed caching won't work. For example, API key validation information is cached. If you revoke a token on one API Manager node, the cache will be cleared on that node, so a user can't use the revoked token via that node, BUT he can still use the token via the other API Manager node until its cache is invalidated (I believe 15 minutes by default). This is just one instance where things can go wrong if you don't cluster your API Manager instances. To solve these issues, you just need to enable clustering so that the caches stay in sync across the cluster. Read the WSO2 docs for more details on the various caches available in WSO2 API Manager.
You will run into several issues if you don't have the above features. WSO2 highly recommends a distributed deployment in production.
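The throttling inconsistency described above can be demonstrated with a small Python sketch. The node and counter model here is a deliberate simplification, not WSO2's actual implementation:

```python
# Sketch of why per-node throttle counters break a shared quota:
# a 'Gold' tier of 20 req/min enforced independently on two gateway
# nodes can let a user through at up to 40 req/min in total.

LIMIT = 20  # requests per minute allowed by the tier

class GatewayNode:
    def __init__(self, counter=None):
        # Without clustering, each node keeps its own counter;
        # with clustering, all nodes share one replicated counter.
        self.counter = counter if counter is not None else {"n": 0}

    def allow(self) -> bool:
        if self.counter["n"] >= LIMIT:
            return False
        self.counter["n"] += 1
        return True

def send(nodes, requests):
    """Round-robin requests across nodes; count how many get through."""
    return sum(nodes[i % len(nodes)].allow() for i in range(requests))

# Unclustered: two independent counters -> up to 2 * LIMIT served.
unclustered = [GatewayNode(), GatewayNode()]
print(send(unclustered, 50))   # 40 requests pass, not 20

# Clustered: one shared counter -> the quota actually holds.
shared = {"n": 0}
clustered = [GatewayNode(shared), GatewayNode(shared)]
print(send(clustered, 50))     # 20 requests pass
```

The same shape of problem applies to the cache example: per-node state diverges unless it is replicated or shared.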