Restrict access to Amazon WorkSpaces by IP address?

I have a simple question which I don't think has a simple answer.
I would like to use Amazon WorkSpaces, but a requirement would be that I can restrict the IP addresses that can access any given WorkSpace.
I kind of get the impression this should be possible through rules on the security group on the directory, but I'm not really sure, and I don't know where to start.
I've been unable to find any instructions for this or other examples of people having done this. Surely I'm not the first/only person to want to do this?!
Can anyone offer any pointers??

Based on the comments given by @Mayb2Moro: he obtained information from AWS Support that restricting by security group or VPC wouldn't be possible, as WorkSpaces connectivity goes via another external endpoint (a management interface in the backend).
Yes, you are right: you need to work on the security group configured when the WorkSpace was set up. The process goes like this:
Pick the security group that was used when the WorkSpaces bundle was created.
Go to EC2 -> Security Groups, select that security group, and restrict it to your office's exit IP (a sketch follows below).
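For anyone scripting this, here is a minimal boto3 sketch of that second step; the security group ID and office exit IP below are placeholders to replace with your own values:

```python
import boto3

# Placeholder values: substitute your WorkSpaces security group and office exit IP.
SG_ID = "sg-0123456789abcdef0"
OFFICE_EXIT_IP = "203.0.113.10/32"

ec2 = boto3.client("ec2")

# Allow inbound traffic only from the office exit IP; narrow the protocol/ports as needed.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "-1",  # all protocols and ports
        "IpRanges": [{"CidrIp": OFFICE_EXIT_IP, "Description": "Office exit IP"}],
    }],
)
```

Remember to also remove any broader 0.0.0.0/0 rules from the group, or the restriction has no effect.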

Now you can assign IP Access Control Groups to the directory associated with your WorkSpaces.
In an IP Access Control Group, you specify the IPs that are allowed to access the WorkSpaces.
Refer to IP Access Control Groups for Your WorkSpaces in the official documentation.
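As a rough illustration, creating and associating such a group with boto3 might look like this (the directory ID and CIDR below are placeholders):

```python
import boto3

ws = boto3.client("workspaces")

DIRECTORY_ID = "d-1234567890"     # placeholder directory ID
ALLOWED_CIDR = "203.0.113.0/24"   # placeholder office range

# Create an IP access control group containing the allowed range.
group = ws.create_ip_group(
    GroupName="office-only",
    GroupDesc="Allow WorkSpaces access from the office network only",
    UserRules=[{"ipRule": ALLOWED_CIDR, "ruleDesc": "Office network"}],
)

# Associate the group with the directory backing your WorkSpaces.
ws.associate_ip_groups(DirectoryId=DIRECTORY_ID, GroupIds=[group["GroupId"]])
```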

Related

VPC Access connector failed to get healthy

I am getting the error below while trying to create a VPC Access connector in GCP region us-central1:
An internal error occurred: VPC Access connector failed to get healthy. Please check GCE quotas, logs and org policies and recreate.
I also tried to create the VPC access connector in region us-east1 but got the same issue.
I tried searching for existing bugs on the GCP issue tracker but could not find this issue.
I have tried to follow the image access constraint guidance, but I don't have an organisation, so I am unable to edit the required policy.
I am having the same issue. After reading this thread I checked different regions with exactly the same configuration:
Network: Default
Subnet: Custom IP range
IP range: 10.8.0.0/28
I can confirm that changing the region solves the issue. In my case, I proceeded successfully with australia-southeast2. Basically, when creating a VPC connector in Google Cloud, some regions work and others don't.
It may be a capacity problem in some Google regions.
It can be an internal IP subnet assignment issue. This subnet must be used exclusively by the connector, per the documentation:
Every VPC connector requires its own /28 subnet to place connector instances on; this subnet must not have any other resources on it other than the VPC connector. If you don't use Shared VPC, you can either create a subnet for the connector to use, or specify an unused custom IP range for the connector to create a subnet for its use. If you choose the custom IP range, the subnet that is created is hidden and cannot be used in firewall rules and NAT configurations.
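If you want to be sure the connector gets a dedicated range, here is a hedged sketch using the google-cloud-vpc-access Python client; the project, region, and range are placeholders:

```python
from google.cloud import vpcaccess_v1

PROJECT = "my-project"            # placeholder project ID
REGION = "australia-southeast2"   # a region that worked for the poster above

client = vpcaccess_v1.VpcAccessServiceClient()

# Create the connector on an unused /28 that hosts nothing else.
operation = client.create_connector(
    parent=f"projects/{PROJECT}/locations/{REGION}",
    connector_id="my-connector",
    connector=vpcaccess_v1.Connector(
        network="default",
        ip_cidr_range="10.8.0.0/28",  # must be exclusive to this connector
    ),
)
operation.result()  # block until the long-running operation completes
```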
Or it can also be that you are missing the required image access constraint. In this case, you may follow this step-by-step guide to setting image access constraints:
Go to the Organization policies page.
In the policies list, click Define trusted image projects.
Click Edit to customize your existing trusted image constraints.
On the Edit page, select Customize.
In the Policy values drop-down list, select Custom to set the constraint on specific image projects.
In the Policy type drop-down list, specify a value as follows:
- To restrict the specified image projects, select Deny.
- To remove restrictions for the specified image projects, select Allow.
In the Custom values field, enter the names of image projects in the projects/IMAGE_PROJECT format. Replace IMAGE_PROJECT with the image project you want to set constraints on, in this case "serverless-vpc-access-images". If you are setting project-level constraints, they might conflict with existing constraints set on your organization or folder.
Click New policy value to add multiple image projects.
Click Save to apply the constraint.
Additionally, please make sure that there are no firewall rules on your VPC network with a priority lower than 1000 that deny ingress from your connector's IP range.
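One way to audit that is to list the rules and flag high-priority ingress denies. A rough sketch with the google-cloud-compute Python client (the project ID is a placeholder):

```python
from google.cloud import compute_v1

PROJECT = "my-project"  # placeholder project ID

# Lower priority numbers take precedence; flag ingress deny rules below 1000
# so you can check whether they cover the connector's /28 range.
for fw in compute_v1.FirewallsClient().list(project=PROJECT):
    if fw.direction == "INGRESS" and fw.denied and fw.priority < 1000:
        print(f"Review {fw.name} (priority {fw.priority}): "
              f"sources={list(fw.source_ranges)}")
```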

AWS RDS as Datasource in AWS Managed Grafana

I thought this was going to be easy but unfortunately, I was wrong. I just made an AWS-hosted Grafana workspace. I'd like to query an AWS RDS instance for some data.
I am struggling to find out how I would add the Hosted Grafana instance into a security group so it would be allowed to access the RDS.
I did check the Docs!
Has anyone done this before that could help me out?
Thanks!
Ran into a similar problem. The AWS team told me that if your database is in a non-default VPC and is publicly accessible, then you have to whitelist the IP addresses in your security group based on the region of your Managed Grafana workspace.
Here is the list of IP addresses by region:
• us-east-1: 35.170.12.166, 54.88.16.229, 3.234.162.252, 54.160.119.132, 54.196.72.13, 3.213.190.135, 54.83.225.191, 3.234.173.51, 107.22.41.194
• eu-central-1: 18.185.12.232, 3.69.106.181, 52.29.127.210
• us-west-2: 44.230.70.68, 34.208.176.166, 35.82.14.62
• us-east-2: 18.116.131.87, 18.117.203.54
• eu-west-1: 52.30.158.152, 54.247.159.227, 54.170.69.237, 52.210.87.10, 54.73.6.128, 54.78.34.200, 54.216.218.40, 176.34.91.249, 34.246.52.247
You can refer to the documentation provided by AWS on how to connect to the database:
AMG Postgresql Connection
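As a sketch, opening the PostgreSQL port to one region's Grafana IPs with boto3 could look like this (the security group ID is a placeholder and the IP list is abbreviated):

```python
import boto3

RDS_SG_ID = "sg-0123456789abcdef0"  # placeholder RDS security group
GRAFANA_IPS = ["35.170.12.166", "54.88.16.229", "3.234.162.252"]  # us-east-1, abbreviated

ec2 = boto3.client("ec2", region_name="us-east-1")

# One ingress entry per Grafana egress IP, limited to the PostgreSQL port.
ec2.authorize_security_group_ingress(
    GroupId=RDS_SG_ID,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,
        "ToPort": 5432,
        "IpRanges": [
            {"CidrIp": f"{ip}/32", "Description": "AWS Managed Grafana"}
            for ip in GRAFANA_IPS
        ],
    }],
)
```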
I had to do the same thing, and in the end the only way I could find the IP addresses was to look through the VPC flow logs to see what was hitting the IP address of the RDS instance.
AWS has many IP addresses it can use for this, and unfortunately there is no way to assign a specific IP address or security group to Grafana.
So you need to set up a few things to get it to work, and there is no guarantee that the IP address of your AWS-hosted Grafana won't change on you.
If you don't have it already, set up a VPC for your AWS infrastructure. Steps 1-3 in this article cover what you need.
Set up Flow Logs for your VPC. These capture the traffic in and out of the network interfaces, and you can filter on the IP address of your RDS instance and the Postgres port. This article explains how to set them up.
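A hedged boto3 sketch of that flow-log setup (the VPC ID, log group, and role ARN are placeholders):

```python
import boto3

VPC_ID = "vpc-0123456789abcdef0"                        # placeholder
LOG_GROUP = "vpc-flow-logs"                             # placeholder
ROLE_ARN = "arn:aws:iam::123456789012:role/flow-logs"   # placeholder

ec2 = boto3.client("ec2")

# Capture all VPC traffic to CloudWatch Logs; afterwards, filter the log
# group by the RDS instance's private IP and port 5432 to spot Grafana's IPs.
ec2.create_flow_logs(
    ResourceIds=[VPC_ID],
    ResourceType="VPC",
    TrafficType="ALL",
    LogDestinationType="cloud-watch-logs",
    LogGroupName=LOG_GROUP,
    DeliverLogsPermissionArn=ROLE_ARN,
)
```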
Once you capture the IP address you can add it to the security group for the RDS instance.
One thing I have found is that I get regular timeouts when querying RDS Postgres from AWS-hosted Grafana. It works fine, then it doesn't, then it works again. I've not found a way to increase the timeout or solve the issue yet.

How to expose an API that is running in a Pod and limit access?

I have an API running in a service in my GKE Cluster and it needs to be accessible for some other developers in my team. They are using a VPN so they have a static IP they can provide to me.
My idea was to just expose the service using a static external IP and restrict access with a firewall rule, so that only my colleagues' IP is allowed through.
Unfortunately this only seems to be possible for Compute Engine VMs, because only they can have network tags.
Is there a way how I can simply deny all traffic to my service except for traffic from the specific IP?
I'd appreciate any hints toward relevant features. Thank you!
Well, you don't need tags. You can create your firewall rule to only allow access from the IP your developers provided: when creating the firewall rule, select "All instances in the network" for Targets, and for source IP ranges specify the IP with the /32 suffix (a sketch follows below).
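Here is a rough sketch of such a rule with the google-cloud-compute Python client; the project, source IP, and ports are placeholders:

```python
from google.cloud import compute_v1

PROJECT = "my-project"       # placeholder project ID
DEV_IP = "198.51.100.7/32"   # placeholder: your developers' VPN exit IP

firewall = compute_v1.Firewall(
    name="allow-devs-only",
    network="global/networks/default",
    direction="INGRESS",
    source_ranges=[DEV_IP],
    # No target_tags set, so the rule applies to all instances in the network.
    # Note: the generated field name for the protocol really is I_p_protocol.
    allowed=[compute_v1.Allowed(I_p_protocol="tcp", ports=["80", "443"])],
)

compute_v1.FirewallsClient().insert(
    project=PROJECT, firewall_resource=firewall
).result()  # wait for the operation to finish
```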
You could provide them RBAC access to the pods in the required namespace and allow them to port-forward, assuming you don't want to set up a public endpoint and try to secure it. This does require kubectl to be installed and cluster access, and it will give access to all pods in the namespace.
https://medium.com/@ManagedKube/kubernetes-rbac-port-forward-4c7eb3951e28
It depends what level of security and permanence you need, I guess.
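As an illustration, a namespace-scoped Role limited to port-forwarding might look like this with the official Kubernetes Python client (the namespace is a placeholder); you would still bind it to each developer with a RoleBinding:

```python
from kubernetes import client, config

config.load_kube_config()

NAMESPACE = "my-namespace"  # placeholder namespace

# Role allowing developers to list pods and open port-forwards, nothing else.
role = client.V1Role(
    metadata=client.V1ObjectMeta(name="port-forwarder", namespace=NAMESPACE),
    rules=[
        client.V1PolicyRule(api_groups=[""], resources=["pods"],
                            verbs=["get", "list"]),
        client.V1PolicyRule(api_groups=[""], resources=["pods/portforward"],
                            verbs=["create"]),
    ],
)

client.RbacAuthorizationV1Api().create_namespaced_role(
    namespace=NAMESPACE, body=role
)
```

With that in place, a developer can run kubectl port-forward against pods in that namespace without any public endpoint.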

Set an IP restriction on service account

I'm using a service account with delegated credentials to call the Apps Script API, running a function in Google Apps Script from a Python script via Google's Python client library, and it works fine.
I'd like to add an IP restriction to it, to make sure it can only be invoked from a specific IP.
I tried adding a firewall rule in the VPC which denies all ingress from 0.0.0.0/0, with the target set to the service account. However, running the script after setting the VPC rule behaves no differently than before.
The firewall rule seems to only target VM instances that run as the service account.
Is there any better way to set IP restriction for service account?
You can't restrict access to APIs based on the requester's IP, only through IAM permissions (with service accounts). Therefore you cannot configure the service account so that it can only be used from a specific IP address.
As mentioned here, a service account "is a special type of Google account that belongs to your application or a virtual machine (VM), instead of to an individual end user." I don't know why you are looking to restrict by IP, but please keep in mind that the service account's private key should not be shared between environments/users/apps, should be stored in a safe place, and must be used only on the server(s) running the application.

Amazon AWS Elasticsearch Kibana access from browser

I know this issue has been discussed before, yet I feel my question is a bit different.
I'm trying to figure out how to enable access to Kibana on the self-managed AWS Elasticsearch domain I have in my AWS account.
It could be that what I'm about to say here is inaccurate or complete nonsense; I'm pretty much a novice in the whole AWS VPC area and in the ELK stack.
Architecture:
I have a VPC.
Within the VPC I have several subnets.
Each server sends its data to Elasticsearch using Logstash, which runs on the server itself. For simplicity let's assume I have a single server.
The Elasticsearch HTTPS URL shown in the Amazon console resolves to an internal IP within the subnet I have defined.
Resources:
I have found the following link, which suggests one of two options:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
Solutions:
Option 1: Resource-based policy
Use a resource-based policy on the Elasticsearch domain with a condition that allows only certain IP addresses.
This was discussed in the following thread, but unfortunately it did not work for me.
Proper access policy for Amazon Elastic Search Cluster
When I try to implement it in the Amazon console, Amazon notifies me that because I'm using a security group, I should handle this through the security group instead.
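For reference, an IP-restricting resource policy applied with boto3 might look like the sketch below (the domain, account ID, region, and IP are placeholders). Note that IP conditions only take effect for domains with a public endpoint, which is likely why this approach fails for VPC domains:

```python
import json
import boto3

# Placeholders: your domain name, account ID, region, and allowed IP.
POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "es:*",
        "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/*",
        "Condition": {"IpAddress": {"aws:SourceIp": ["203.0.113.10/32"]}},
    }],
}

boto3.client("es").update_elasticsearch_domain_config(
    DomainName="my-domain",
    AccessPolicies=json.dumps(POLICY),
)
```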
Security group rules:
I tried setting a rule that allows my personal computer's (router's) public IP to access the Amazon Elasticsearch ports, and I even tried opening all ports to my public IP.
But that didn't work either.
I would be happy to get a more detailed explanation of why, but my guess is that the Elasticsearch domain has only an internal IP and no public IP; because it is encapsulated within the VPC, I am unable to access it from outside even if I define a rule allowing a public IP.
Option 2: Using a proxy
I'd rather not use this solution unless I have no other choice.
My guess is that if I set up another server with both a public and an internal IP in the same subnet and VPC as the Elasticsearch domain, and use it as a proxy, I would then be able to access that server from the outside by defining the same rules on its newly created security group, as the article suggests.
Sources:
I found an out-of-the-box solution that someone already built for this issue using a proxy server, in the following link:
It is available as either an executable or a Docker container.
https://github.com/abutaha/aws-es-proxy
Option 3: Other
Can you suggest another solution? Is it possible to use an Amazon load balancer or Amazon API Gateway to accomplish this task?
I just need a proof of concept, not something that goes into a production environment.
Bottom line:
I need to be able to access Kibana from a browser in order to search the Elasticsearch indexes.
Thanks a lot
The best way is with the just-released Cognito authentication.
https://aws.amazon.com/about-aws/whats-new/2018/04/amazon-elasticsearch-service-simplifies-user-authentication-and-access-for-kibana-with-amazon-cognito/
This is a great way to authenticate A SINGLE USER. It is not a good way for the system you're building to access Elasticsearch.
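For completeness, enabling Cognito authentication for Kibana on an existing domain via boto3 might look like this (all IDs and ARNs are placeholders, and the Cognito user pool and identity pool must exist first):

```python
import boto3

# Placeholders: your domain name and pre-created Cognito resources.
boto3.client("es").update_elasticsearch_domain_config(
    DomainName="my-domain",
    CognitoOptions={
        "Enabled": True,
        "UserPoolId": "us-east-1_EXAMPLE",
        "IdentityPoolId": "us-east-1:11111111-2222-3333-4444-555555555555",
        "RoleArn": "arn:aws:iam::123456789012:role/CognitoAccessForAmazonES",
    },
)
```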