I’m looking at EKS architecture patterns for a multi-tenant SaaS application and found some great resources in the AWS SaaS Factory space. However, I have a couple of questions that I couldn’t find answers for in those resources:
The proposed application will broadly have the following components:
Landing and Tenant Registration App (React SPA)
Admin UI To Manage tenants (React SPA)
Application UI (the product - React)
Core/Support Micro-services (Lambda)
Tenant App Micro-services (Go/Java)
RDS - Postgres per tenant
I’m currently leaning towards EKS with Fargate, where namespaces are used for tenant isolation.
Questions:
Are namespaces the right way to go for tenant separation (as opposed to a separate cluster/VPC per tenant)?
Regarding tenant data (RDS) isolation, if I go with namespace isolation, what’s the best way to go about it?
Apologies if the question isn’t clear; happy to provide further clarification if needed.
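On the namespace-isolation question: namespaces on their own do not block cross-tenant network traffic, so they are usually paired with a NetworkPolicy per tenant namespace. A minimal sketch (namespace name `tenant-a` is a placeholder; note that NetworkPolicy enforcement requires a CNI that supports it, and support on EKS Fargate specifically should be verified for your setup):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-cross-tenant
  namespace: tenant-a        # one policy per tenant namespace
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # allow ingress only from pods in the same namespace
```

Combined with per-namespace RBAC and a per-tenant IAM role (IRSA) that only grants access to that tenant's RDS instance, this gives a reasonable soft-isolation baseline short of cluster-per-tenant.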
Question 1: Could you please let me know if AWS Global Accelerator has the capability to route requests to different ALBs based on DNS (base URL)?
Answer: Unfortunately, Global Accelerator does not have the ability to do smart routing based on URL.
Question 2: Do we require a separate AWS Global Accelerator for each customer?
Answer: That is correct.
Question 3: Do we have any other solution to access the application in isolation for each customer?
Answer: Not in isolation. It is possible to use one ALB with host-header rules on your listeners, each rule forwarding to its own target group. This way you control traffic by creating a target group for each customer. However, it does not fulfill your isolation requirement.
Question 4: Do you suggest having different pods for each customer, routing requests to different target groups based on path?
Answer: Yes, that is the option I mentioned above; you can use the path-based or host-header option depending on the URL. If the URLs are completely different, host-header rules are best; if the URLs differ only by path, path-based routing would be best.
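The per-customer host-header rule described above can be sketched as follows. This is a hedged example using the `elbv2` API via boto3; the ARNs and domain names are placeholders you would replace with your own:

```python
def host_header_rule(listener_arn, customer_domain, target_group_arn, priority):
    """Build the kwargs for elbv2 create_rule: forward one customer's
    hostname to that customer's dedicated target group."""
    return {
        "ListenerArn": listener_arn,
        "Priority": priority,
        "Conditions": [
            {"Field": "host-header",
             "HostHeaderConfig": {"Values": [customer_domain]}}
        ],
        "Actions": [
            {"Type": "forward", "TargetGroupArn": target_group_arn}
        ],
    }

# Usage (requires boto3 and AWS credentials; ARNs are hypothetical):
# import boto3
# elbv2 = boto3.client("elbv2")
# elbv2.create_rule(**host_header_rule(listener_arn,
#                                      "customer-a.example.com",
#                                      customer_a_tg_arn, priority=10))
```

For path-based routing instead, the condition would use `"Field": "path-pattern"` with a `PathPatternConfig` in the same shape.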
Hope these answers help resolve #sheeni's queries.
I'm building a simple analytics service that needs to work for multiple countries. It's likely that someone from a restricted jurisdiction (e.g. Iran) will hit the endpoint. I am not offering any service that would fall under sanctions-related restrictions, but it seems that Cloud Run endpoints do not allow traffic from places like Iran. I tried various configurations (adding a domain mapping, an external HTTPS LB, calling from Firebase, etc.) and it doesn't work.
Is there a way to let read-only traffic through from these territories? Or is there another Google product that would allow this? It seems like the Google Maps prohibited territory list applies to some services, but not others (e.g. Firebase doesn't have this issue).
You should serve traffic through a Load Balancer with a Cloud Armor policy. Cloud Armor provides a feature for filtering traffic based on location.
In a project we start using GKE to host some services.
Those services should be accessible by all team members, but should not be accessible for anyone else in the world.
Our team works from home, hence we cannot restrict IP addresses or something like that.
What is the best way to make sure only team members can access the service?
I tried to set up IAP. That works, but it is a lot of setup for each service, and I did not find a way to allow a "technical user", e.g. allowing sonarscanner to reach SonarQube.
Maybe another option would be setting up a dedicated nginx-ingress controller that I can secure using BasicAuth or client certificates. - But it feels like my situation is quite common and I am missing something existing. - Any hints?
The current challenge with IAP is that I have services like SonarQube that offer both a web interface and an API. Using a browser to access the web interface works fine, but it's not clear to me how to configure, for example, sonarscanner to access the IAP-protected API.
The second issue with IAP is that it requires each service to configure quite a bit of GKE-specific boilerplate (FrontendConfig/BackendConfig/annotations/etc.).
I would really like to shift that kind of configuration from the services (i.e. developers) to the Cluster/IngressController (i.e. cluster admin).
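For the "technical user" part of the question: IAP's programmatic authentication accepts an OIDC ID token for a service account as a Bearer token. A minimal sketch, assuming the token has already been obtained (e.g. via `gcloud auth print-identity-token` or a service-account key); the helper name and the `app_uses_authorization` flag are my own illustration:

```python
def iap_auth_headers(id_token, app_uses_authorization=False):
    """Build the HTTP headers for a non-browser client (e.g. a scanner)
    calling an IAP-protected backend.

    IAP reads the ID token from the Authorization header by default;
    if the application behind IAP needs the Authorization header for
    its own auth (as SonarQube does for its tokens), IAP also accepts
    the token in Proxy-Authorization instead.
    """
    header = "Proxy-Authorization" if app_uses_authorization else "Authorization"
    return {header: f"Bearer {id_token}"}
```

The service account then just needs the IAP-secured Web App User role on the backend; this avoids Basic Auth entirely while keeping IAP in front of everything.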
Our web is just a web app, so we used CloudFront and S3 to host it. Now I want to use a canary release to redirect 5% of users to a new version for some testing first. But I can't figure out how to achieve that with AWS.
For example, in the screenshot, I need an SSL certificate bound to CloudFront distribution A. But one certificate can only be bound to one CloudFront distribution, which is a limitation of AWS. It means that the certificate can't be bound to CloudFront B.
I have no idea how to resolve the problem. I am not sure if I misunderstand AWS service or my solution is totally wrong.
Any comment will be much appreciated.
P.S. One solution I have thought about is to write a proxy or an API Gateway/Lambda function to accept requests and redirect by percentage.
Although CloudFront doesn't support this natively, you can implement a canary release using AWS Lambda@Edge, which runs at CloudFront edge locations. You might need to code the routing logic to forward a certain percentage of requests to specific buckets.
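A minimal sketch of that routing logic as a Lambda@Edge origin-request handler. The bucket domain names are placeholders, and hashing the client IP is just one way to keep a given client pinned to the same version (a cookie would be more robust, since IPs can change):

```python
import hashlib

CANARY_DOMAIN = "canary-bucket.s3.amazonaws.com"  # hypothetical canary bucket
CANARY_PERCENT = 5

def pick_canary(client_ip, percent):
    """Deterministically map a client into a bucket in [0, 100) by hashing
    its IP, so the same client keeps seeing the same version."""
    bucket = hashlib.sha256(client_ip.encode()).digest()[0] * 100 // 256
    return bucket < percent

def handler(event, context, percent=CANARY_PERCENT):
    # CloudFront origin-request event: rewrite the S3 origin for canary users.
    request = event["Records"][0]["cf"]["request"]
    if pick_canary(request["clientIp"], percent):
        request["origin"]["s3"]["domainName"] = CANARY_DOMAIN
        # The Host header must match the new origin's domain name.
        request["headers"]["host"] = [{"key": "Host", "value": CANARY_DOMAIN}]
    return request
```

Returning the (possibly modified) request object tells CloudFront which origin to fetch from; the viewer's URL never changes.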
The term canary release does not really fit front-end development; it relates to your backing services and is usually done at the REST API service level. In a canary configuration, it isn't that a user always hits the canary release or the normal release; instead, each request has a chance of hitting either release: one request could hit the canary, and the next could hit the old release.
In regards to front-end, you may wish to have users turn on beta features, or have an entirely different hosted site located at www.beta.yoursite.com, whose DNS resolves to your bucket with snapshot releases, while www.yoursite.com resolves to the normal site. Then you can have beta users, chosen at random, receive an email suggesting they try out the new site at its beta location. In your application, you can mark these users as having beta credentials to enforce that only beta users have access to the beta site, if you wish.
Note that even if you could do what you are proposing (I think there is a way with CloudFront), it would be a bad user experience, as a user may use two different devices when accessing your site and then have two different experiences without knowing what is going on.
EDIT: Comment Answer - As I said, I really don't think you want to do that, but anyway, what you would do is resolve your domain to an API Gateway/proxy/load balancer instead of a bucket, which would then route traffic based on the authenticated user to either the beta site or the old site. That way they won't see a different domain. AFAIK there is no way to do DNS resolution based on the logged-in user in Route 53, or in DNS in general - I could be wrong; somebody correct me if so. API Gateway would probably be the simplest, using a Lambda to route the traffic to the correct site.
For our product, we are currently storing customer credentials hashed in the DB (3-tier architecture). We want authentication to be done at the 1st tier itself. Which AWS solution can be used for this - maybe AWS CloudHSM? And what changes need to be made at the app layer to do this?
This is a website:
using CloudFront to route across edge locations
using database replication
we also have active-active multi-region
Any suggestions would be useful.
Thanks.
I agree that some further details on your architecture would help. Is this a web application, a mobile app, or another fat client? How are you achieving the active-active multi-region architecture at the DB? I would like to suggest AWS Cognito, but the multi-region needs become a bit more complex in that scenario.
Today, how do you determine which region your users are routed to? If using AWS Cognito, you'd likely need to create a user pool per region, which means your users would need to be routed to the correct user pool based on their region.
I have had great luck with AWS Cognito identities from web, mobile, and fat-client apps, and have even used many of the Lambda integrations with Cognito for commercial-grade applications. Some good examples -
http://docs.aws.amazon.com/cognito/latest/developerguide/using-amazon-cognito-user-identity-pools-javascript-examples.html
http://docs.aws.amazon.com/cognito/latest/developerguide/walkthrough-using-the-ios-sdk.html
http://docs.aws.amazon.com/cognito/latest/developerguide/setting-up-android-sdk.html
I haven't been able to find the answer to this question in the Amazon DynamoDB documentation, so my apologies for asking such a basic question here:
Can I access DynamoDB from my own web server, or do I need to use an EC2 instance?
Other than the obvious higher latency, are there any security or performance considerations when using my own server?
You can use Amazon DynamoDB without restrictions from just about everywhere. A nice and helpful demonstration is the AWS Toolkits for Eclipse and Visual Studio, for example, which allow you to create tables, insert and edit data, initiate table scans, and more, straight from your local development environment (see the introductory post AWS Toolkits for Eclipse and Visual Studio Now Support DynamoDB).
Other than the obvious higher latency, are there any security or performance considerations when using my own server?
Not really, other than facilitating SSL via the HTTPS endpoint, if your use case requires that level of security.
In case you are not using it already, you should check out AWS Identity and Access Management (IAM) as well; it is highly recommended to securely control access to AWS services and resources for your users (i.e. your web server here), rather than simply using your main AWS account credentials.
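To make the IAM suggestion concrete, a sketch of a least-privilege policy you might attach to the web server's IAM user or role, restricting it to the specific actions and table it needs. The table ARN and action list are placeholders to adapt to your application:

```python
import json

# Hypothetical table ARN - replace region, account ID, and table name.
TABLE_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/MyTable"

# Standard IAM policy document granting only the DynamoDB actions
# the web server actually needs, scoped to one table.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "dynamodb:GetItem",
                "dynamodb:PutItem",
                "dynamodb:UpdateItem",
                "dynamodb:Query",
            ],
            "Resource": TABLE_ARN,
        }
    ],
}

print(json.dumps(policy, indent=2))
```

The server then authenticates with that user's/role's credentials rather than the root account keys, so a compromise of the web server exposes only this narrow slice of your AWS account.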
Depending on your server location, you might want to select an appropriate lower-latency endpoint - the currently available ones are listed in Regions and Endpoints, section Amazon DynamoDB.