Qualys Scanner in AWS - opening outbound FW ports - amazon-web-services

I'm setting up a Qualys scanner in Amazon Web Services in an environment that restricts outbound access to the internet from the VPC. It does so completely.
So I'll need to open a ticket to get the outbound access it needs, and I have to specify each IP that the Qualys server will need to connect to.
I'm seeing this message in the logs:
Starting crond:
Preparing scanner personalization
About to test connectivity to qualysguard.qualys.com
Error: No connectivity to qualysguard.qualys.com - please fix.
About to test connectivity to qualysguard.qualys.com
My question is, do I need to open up access to just that one domain? Or do I have to open up access to more than that one domain? I have to be specific and cannot use wildcards in the request. This environment is extremely locked down for security reasons.

There are several ways you can restrict the access of your environment but also allow certain ports.
AWS does not resolve DNS names, so make sure you get the set of IP addresses that are to be allowed for access.
Use ELB - allow certain ports and permit access for those ports/IP addresses.
Port address translation - look for applications that will allow particular ports from a set of IP addresses.
Move your application to a public subnet and allow the specific port/IP addresses.
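Since security groups only match on addresses, the change ticket has to list every resolved IP as a /32. The script below is an added example (not from the answer above and not Qualys' own tooling): it resolves a list of hostnames, emits the IpPermissions structure that `aws ec2 authorize-security-group-egress --ip-permissions` accepts, and reports which resolved addresses an existing rule set does not yet cover. The resolver is injectable, so the run at the bottom uses a constructed fake resolver and RFC 5737 documentation addresses; the hostname, the `sg-PLACEHOLDER` group id and the port list are assumptions for illustration.

```python
"""Build explicit security-group egress entries for a set of hostnames.

Security groups match on IP addresses, not DNS names, so a locked-down
VPC needs each resolved address listed as a /32. The resolver is injected
so the logic can be exercised offline with constructed data.
"""
import ipaddress
import json
import socket
from typing import Callable, Dict, Iterable, List

Resolver = Callable[[str], List[str]]


def resolve_with_socket(hostname: str) -> List[str]:
    """Default resolver: every IPv4 address the system resolver returns."""
    infos = socket.getaddrinfo(hostname, None, socket.AF_INET, socket.SOCK_STREAM)
    return sorted({info[4][0] for info in infos})


def resolve_hosts(hostnames: Iterable[str], resolver: Resolver = resolve_with_socket) -> Dict[str, List[str]]:
    """Map each hostname to its resolved IPv4 addresses, validated and deduplicated."""
    result: Dict[str, List[str]] = {}
    for host in hostnames:
        addrs = set()
        for addr in resolver(host):
            addrs.add(str(ipaddress.IPv4Address(addr)))  # raises ValueError on garbage
        result[host] = sorted(addrs)
    return result


def egress_permissions(resolved: Dict[str, List[str]], ports: Iterable[int]) -> List[dict]:
    """IpPermissions list for `aws ec2 authorize-security-group-egress`.

    One entry per TCP port; each /32 carries a Description naming the host it
    came from, which is what the change ticket needs to show.
    """
    perms = []
    for port in ports:
        ranges = []
        for host, addrs in sorted(resolved.items()):
            for addr in addrs:
                ranges.append({"CidrIp": f"{addr}/32", "Description": host})
        perms.append({"IpProtocol": "tcp", "FromPort": port, "ToPort": port, "IpRanges": ranges})
    return perms


def uncovered_destinations(resolved: Dict[str, List[str]], existing_cidrs: Iterable[str]) -> Dict[str, List[str]]:
    """Resolved addresses not already covered by any existing egress CIDR."""
    nets = [ipaddress.ip_network(c, strict=False) for c in existing_cidrs]
    missing = {}
    for host, addrs in resolved.items():
        gaps = [a for a in addrs if not any(ipaddress.ip_address(a) in n for n in nets)]
        if gaps:
            missing[host] = gaps
    return missing


def authorize_cli(group_id: str, perms: List[dict]) -> str:
    """Render the AWS CLI call; --ip-permissions takes the JSON structure."""
    return (f"aws ec2 authorize-security-group-egress --group-id {group_id} "
            f"--ip-permissions '{json.dumps(perms)}'")


if __name__ == "__main__":
    # Constructed sample: a fake resolver stands in for DNS (hostname and addresses are made up).
    sample = {"scanner-target.example.com": ["203.0.113.10", "203.0.113.11"]}
    resolved = resolve_hosts(sample, resolver=lambda h: sample[h])
    perms = egress_permissions(resolved, [443])
    print(json.dumps(perms, indent=2))
    print(uncovered_destinations(resolved, ["203.0.113.0/28"]))
    print(authorize_cli("sg-PLACEHOLDER", perms))
```

Re-run the resolution step before each ticket and compare with `uncovered_destinations`: a vendor endpoint that moves behind a new address shows up as a gap rather than as a silent scan failure.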

Related

How do I add an IP range to AWS?

I haven't found anything that details how to add a range through the AWS portal. I have a range of Salesforce IDs that I need to add. When I set the server to only allow from specific IDs I'm unable to reach the server by adding the basic single addresses. I found an address in the first range which allowed traffic, but when I've restricted access to only the listed IPs I'm unable to ping Salesforce from the server. I have all outbound traffic allowed. Also, when I allow all traffic, I am able to ping in both directions. I have very limited network experience, so any help is appreciated. Here is an example of the first ARIN range: 13.108.0.0 - 13.111.255.255
If you have a range like 13.108.0.0 - 13.111.255.255, first convert that to a CIDR range using a web site that can do the conversion: wmtips.
Then add the CIDR address to the inbound rules of the security group attached to your EC2 (make sure the correct protocol is selected). This is to allow access for the remote system. Set the outbound rules on the security group to 0.0.0.0/0.
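The conversion the answer sends to a web site can be done locally with Python's standard library, which also lets you check that a given address really falls inside the result before editing the rule. The block below is an added example, not part of the answer: it turns a first-last range into the minimal CIDR list, verifies coverage, and produces the ingress entry for `aws ec2 authorize-security-group-ingress --ip-permissions`. The port and rule description are assumptions; the range is the one from the question.

```python
"""Convert a first-last IP range into CIDR blocks for a security-group rule.

Does offline what the answer delegates to a web converter, and checks
that an address of interest is covered by the resulting blocks.
"""
import ipaddress
from typing import List


def range_to_cidrs(first: str, last: str) -> List[str]:
    """Smallest set of CIDR blocks that exactly covers first..last (inclusive)."""
    start = ipaddress.ip_address(first)
    end = ipaddress.ip_address(last)
    if start > end:
        raise ValueError("range start is after range end")
    return [str(n) for n in ipaddress.summarize_address_range(start, end)]


def parse_range(text: str) -> List[str]:
    """Accept the 'a.b.c.d - e.f.g.h' form used in ARIN listings."""
    first, last = (part.strip() for part in text.split("-"))
    return range_to_cidrs(first, last)


def covers(cidrs: List[str], address: str) -> bool:
    """True if the address falls inside any of the CIDR blocks."""
    ip = ipaddress.ip_address(address)
    return any(ip in ipaddress.ip_network(c) for c in cidrs)


def ingress_permission(cidrs: List[str], port: int, description: str) -> dict:
    """IpPermissions entry for `aws ec2 authorize-security-group-ingress --ip-permissions`."""
    return {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
            "IpRanges": [{"CidrIp": c, "Description": description} for c in cidrs]}


if __name__ == "__main__":
    cidrs = parse_range("13.108.0.0 - 13.111.255.255")  # the range quoted in the question
    print(cidrs)  # ['13.108.0.0/14']
    print(covers(cidrs, "13.110.4.2"))  # True
    # Port and description are assumed values for illustration.
    print(ingress_permission(cidrs, 443, "Salesforce ARIN range"))
```

A range that is not aligned to a power of two comes back as several blocks (see the `192.0.2.1 - 192.0.2.6` case), and each one has to be added as its own entry; `summarize_address_range` keeps that list minimal.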

How to expose an API that is running in a Pod and limit access?

I have an API running in a service in my GKE Cluster and it needs to be accessible for some other developers in my team. They are using a VPN so they have a static IP they can provide to me.
My idea was to just expose the service using a static external IP and restrict access to this IP using a firewall rule, so that just the IP of my colleagues is allowed.
Unfortunately this just seems to be possible for Compute-VMs because only they can have tags.
Is there a way how I can simply deny all traffic to my service except for traffic from the specific IP?
I appreciate any hints to features, thank you.
Well, you don't need tags. You can create your firewall rule to only allow access from the IP your developers provide you: when you're creating your firewall rule, select all instances in the network for Targets, and for source IP ranges specify the IP with the prefix /32 at the end.
You could provide them RBAC access to the pods in the required namespace and allow them to port forward, assuming you don't want to set up a public endpoint and try to secure it. This does require kubectl to be installed and cluster access, and this will give access to all pods in the namespace.
https://medium.com/#ManagedKube/kubernetes-rbac-port-forward-4c7eb3951e28
Depends what level of security and permanency you need I guess.

How can I set SSH firewall rule on Google VM so that only my office computers can access the VM over SSH?

In the last few days my Google VM has continuously been compromised. I have received a warning and faced suspension of the VM by Google saying "cryptocurrency mining activities was found on VM". I suspect someone has hacked my VM and is doing this activity. So, now I want to create a new VM with a secure SSH firewall such that only limited computers can access the VM.
I have tried setting the IP of my office routers on the firewall SSH allow rule, but even after setting this rule SSH connections to the VM still get established from other IP addresses. I just want to specify two IPs in the firewall rule but it expects IP ranges in CIDR format (with which I am not clear).
I have also found some suggestions that I should change the ssh port of the VM.
Can anybody please explain how I can restrict access to my Google VM to only a specific set of computers when these computers are connected to a router and the external IP is the same for all, i.e. that of the router?
Thanks
I understand you want to create a new VM with secure firewall SSH and want to restrict and allow access from particular IP addresses of your office router.
To do that you can create firewall rules as explained here [1]. To manage the access for a specific instance, I recommend you to use Network Tags for firewall rules [2].
Going back to your concern that SSH connections to the VM still get established from other IP addresses even when you create the firewall rule for the specific IP address: the reason for that might be this:
Every project you create in GCP comes with the default firewall rules.
So there might be one default-allow-ssh rule which you need to block; I guess that might be causing the issue. Note that the default network includes some additional rules that override this one, allowing certain types of incoming traffic. See the attached links [3][4] for more details.
[3]https://cloud.google.com/vpc/docs/firewalls#default_firewall_rules
[4]https://cloud.google.com/vpc/docs/firewalls#more_rules_default_vpc
You can also add a guest-level firewall rule using for example "iptables" to add another security level to your VM instance. However, the GCP project-level firewall rule takes care of inspecting network traffic before it goes to your VM instances. The operating system firewall blocks all internet traffic to port 22.
In order to allow a specific address to be able to connect on your VM instance, you may add a CIDR of /32 on the "IP ranges" value of your "default-allow-ssh" GCP firewall rule. For example, 45.56.122.7/32 and 208.43.25.31/32.

Restrict access to AWS instance from specific IP

I want to set the security groups for the web server running in an AWS instance. My website should be accessible through HTTP/HTTPS. But file modification access should be set to a particular IP address. I am currently connected to a Wi-Fi router, and as I know the IP that my PC is assigned changes every time.
Can someone please guide me on how to get a static IP address that I can allow access to my website. Thanks in advance.
You would need to see if your ISP can sell you a static IP - it's not always possible. I can't get one from my ISP when working from home; your ISP may be different, for example.
However, if it is just you that needs static IP address (i.e. you as the developer/admin as opposed to users in the public), it is only a few clicks of the mouse to update the security rule thru the aws console each time you need elevated access. I do this for several servers running on EC2 which I keep locked down, and when I need to RDP into them, I open up the security groups to just my (dynamic) IP, and remove the rule when I am done - this will work if you only occasionally need access. You could also automate this process using a little scripting and/or lambda function.
Another option that I also use: I have a service that I need to access continually from a static IP. I use another EC2 instance (with a fixed IP) as the whitelisted IP for this, and then I connect to that service by first connecting via RDP to the EC2 instance; the EC2 instance with the fixed IP then accesses the service using its static IP.
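The "automate this with a little scripting" suggestion above comes down to replacing yesterday's /32 with today's in the security group without touching any other rule. The block below is an added example, not code from the answer: `plan_rotation` works purely on the IpPermissions structure that `describe_security_groups` returns, so it can be checked offline, and `apply_rotation` issues the corresponding boto3 calls (`authorize_security_group_ingress` / `revoke_security_group_ingress`, which exist with these argument names). The `ADMIN_TAG` description, port 3389 and the sample addresses are assumptions; rules the script did not create are left alone.

```python
"""Rotate a personal /32 admin rule in an EC2 security group.

plan_rotation() is pure and works on the IpPermissions structure that
describe_security_groups returns; apply_rotation() applies the plan with boto3.
"""
import ipaddress
from typing import List, Tuple

ADMIN_TAG = "admin-dynamic-ip"  # assumed Description marking rules this script owns


def admin_permission(cidr: str, port: int) -> dict:
    """IpPermissions entry for one admin /32 on one TCP port."""
    return {"IpProtocol": "tcp", "FromPort": port, "ToPort": port,
            "IpRanges": [{"CidrIp": cidr, "Description": ADMIN_TAG}]}


def plan_rotation(current_permissions: List[dict], new_ip: str, port: int) -> Tuple[List[dict], List[dict]]:
    """Return (to_revoke, to_authorize) so exactly one admin /32 for `port` remains.

    Only ranges carrying ADMIN_TAG are touched; the public web rules and any
    office allow-list added by hand stay as they are.
    """
    new_cidr = f"{ipaddress.IPv4Address(new_ip)}/32"  # validates the address
    to_revoke = []
    already_present = False
    for perm in current_permissions:
        if perm.get("IpProtocol") != "tcp" or perm.get("FromPort") != port or perm.get("ToPort") != port:
            continue
        for rng in perm.get("IpRanges", []):
            if rng.get("Description") != ADMIN_TAG:
                continue  # not ours: leave it
            if rng["CidrIp"] == new_cidr:
                already_present = True
            else:
                to_revoke.append(admin_permission(rng["CidrIp"], port))
    to_authorize = [] if already_present else [admin_permission(new_cidr, port)]
    return to_revoke, to_authorize


def apply_rotation(group_id: str, new_ip: str, port: int = 3389, region: str = "us-east-1") -> Tuple[int, int]:
    """Apply the plan against a real group; returns (revoked, authorized) counts."""
    import boto3  # imported here so the planner stays usable without AWS credentials
    ec2 = boto3.client("ec2", region_name=region)
    group = ec2.describe_security_groups(GroupIds=[group_id])["SecurityGroups"][0]
    to_revoke, to_authorize = plan_rotation(group["IpPermissions"], new_ip, port)
    if to_revoke:
        ec2.revoke_security_group_ingress(GroupId=group_id, IpPermissions=to_revoke)
    if to_authorize:
        ec2.authorize_security_group_ingress(GroupId=group_id, IpPermissions=to_authorize)
    return len(to_revoke), len(to_authorize)


if __name__ == "__main__":
    # Constructed snapshot of a group's IpPermissions: yesterday's admin /32 plus the public HTTPS rule.
    snapshot = [
        admin_permission("203.0.113.7/32", 3389),
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ]
    revoke, authorize = plan_rotation(snapshot, "203.0.113.42", 3389)
    print("revoke:", revoke)
    print("authorize:", authorize)
```

Running the plan step first and logging both lists gives an audit trail of every /32 that was ever opened, and tagging the rule through its Description is what keeps the script from ever revoking a rule someone else added.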
You first have to know if your external IP changes; if so, you have to ask your ISP to change your IP to a static one.
If it's the internal IP that changes but the external IP is the same, you will have no problem accessing AWS.

EC2 security group concern

EC2 --> RDS:
RDS (DB Engine): I have inbound and outbound open on port 3306 for the web server's security group.
EC2 (Web Server): I have inbound open for 80, 443 and 22 (my IP). Outbound is open for 80, 443 and 3306, and it needs all traffic as well to function properly.
My question is about the outbound rules of my web server. Why do I need all traffic to be open? Does this have any security concern?
Some people lock down outbound to prevent against data loss. It works better for immutable architecture since you've removed the ability to update packages from public sources.
Obviously you can choose your own security profile; generally speaking I consider this the levels of security:
Port 22 open to the world
Port 22 access by white listed IPs
Bastion host with white listed IPs
VPN (from here down, all using VPN)
Private IPs + NAT
Proxy servers for outbound access
That's my EC2 security maturity model. I'm sure I missed some; feel free to comment below.
The security group outbound rules let you specify "destination", not source. Basically you don't need to worry about being attacked by Denial of Service through the outbound rules.
On the other hand, if your web server needs to connect out to the Internet without restriction, then you set 80+443 destination to 0.0.0.0/0.
Otherwise, if your web server only needs to connect to OS repositories for security updates (e.g. Ubuntu, Apache, etc.), then you can explicitly specify the repositories' IP addresses instead of using 0.0.0.0/0.
Other than that, there is little risk. Unless you load something that renders web pages, e.g. load a web browser in the web server that reads random web pages, then it makes you susceptible to browser/Java engine/rendering engine exploits: if the exploit can execute something like an SSH reverse tunnel, then there is a possibility that the attacker may gain access to your web server.
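To see where a group sits in the maturity model above and whether its egress is as loose as the question describes, the two answers' criteria can be written as a small rule audit. The block below is an added example, not code from either answer: it reads rules in the IpPermissions shape that `describe_security_groups` returns, flags SSH reachable from anywhere and all-traffic egress, and classifies the SSH exposure as world-open, allow-listed or closed. The sample rules are constructed from the question's description; the "my IP" address and the `sg-RDS-PLACEHOLDER` group id are stand-ins.

```python
"""Audit security-group rules against the advice in this thread.

Flags port 22 reachable from the whole Internet (the bottom rung of the
maturity model) and outbound rules that allow everything rather than specific
ports and destinations. Rules use the IpPermissions shape of describe_security_groups.
"""
from typing import List

WORLD = ("0.0.0.0/0", "::/0")


def _is_world(cidr: str) -> bool:
    return cidr in WORLD


def _ports(perm: dict) -> str:
    if perm.get("IpProtocol") == "-1":
        return "all"
    lo, hi = perm.get("FromPort"), perm.get("ToPort")
    return str(lo) if lo == hi else f"{lo}-{hi}"


def _covers_port(perm: dict, port: int) -> bool:
    if perm.get("IpProtocol") == "-1":
        return True  # all protocols/ports
    return perm.get("FromPort", 0) <= port <= perm.get("ToPort", -1)


def _cidrs(perm: dict) -> List[str]:
    """CIDR sources/destinations of a rule; security-group references carry no CIDR."""
    return ([r["CidrIp"] for r in perm.get("IpRanges", [])]
            + [r["CidrIpv6"] for r in perm.get("Ipv6Ranges", [])])


def ssh_exposure(ingress: List[dict]) -> str:
    """'world' if port 22 is open to everyone, 'allow-listed' if only to listed ranges, else 'closed'."""
    state = "closed"
    for perm in ingress:
        if not _covers_port(perm, 22):
            continue
        for cidr in _cidrs(perm):
            if _is_world(cidr):
                return "world"
            state = "allow-listed"
    return state


def audit_ingress(ingress: List[dict]) -> List[str]:
    """World-open rules are reported; the ones that include SSH are rated HIGH."""
    findings = []
    for perm in ingress:
        for cidr in _cidrs(perm):
            if not _is_world(cidr):
                continue
            if _covers_port(perm, 22):
                findings.append(f"HIGH ingress: port {_ports(perm)} open to {cidr} includes SSH; "
                                "restrict to allow-listed /32s, a bastion or a VPN")
            else:
                findings.append(f"INFO ingress: port {_ports(perm)} open to {cidr}")
    return findings


def audit_egress(egress: List[dict]) -> List[str]:
    """All-traffic egress is MEDIUM; world-open single ports are LOW; narrow destinations pass."""
    findings = []
    for perm in egress:
        for cidr in _cidrs(perm):
            if perm.get("IpProtocol") == "-1" and _is_world(cidr):
                findings.append(f"MEDIUM egress: all protocols/ports to {cidr}; "
                                "limit to the ports and destinations the server needs")
            elif _is_world(cidr):
                findings.append(f"LOW egress: port {_ports(perm)} to {cidr}; "
                                "consider listing repository/update endpoints instead")
    return findings


if __name__ == "__main__":
    # Constructed from the web server's rules in the question; 203.0.113.9 stands in for "my IP".
    web_ingress = [
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "203.0.113.9/32"}]},
    ]
    web_egress = [
        {"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306, "UserIdGroupPairs": [{"GroupId": "sg-RDS-PLACEHOLDER"}]},
        {"IpProtocol": "-1", "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ]
    for line in audit_ingress(web_ingress) + audit_egress(web_egress):
        print(line)
    print("ssh exposure:", ssh_exposure(web_ingress))
```

On the question's rules this reports the 80/443 world-open ingress as informational, the all-traffic egress as the one finding worth acting on, and SSH as allow-listed (rung two of the model); the 3306 rule that references the RDS group passes because it names a group rather than a CIDR.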