Is there a way to open ICMP on an Azure Pipelines VM? My CI unit tests are expected to send ping requests to Google's DNS server (8.8.8.8).
According to your description, you are trying to reach an external IP from an Azure VM endpoint with ping. This is not allowed.
Please refer to this official blog post written by an Azure VM team engineer: HOW TO ALLOW PING FUNCTIONALITY TO WINDOWS AZURE MACHINES?
The Ping functionality on Windows Azure VM is blocked by default for
security reasons.
The ICMP protocol used by ping can measure the latency of the connection between a local machine and a remote machine; any connection exceeding a default latency is deemed unavailable. The only possible connection to an Azure virtual machine is via the Internet. Any Internet traffic trying to enter the virtual network must pass through the load balancer, and this load balancer filters out ICMP traffic while allowing UDP and TCP traffic.
By default, Azure denies and blocks all public inbound traffic to an
Azure virtual machine, including ICMP traffic. This is a good thing
because it improves security by reducing the attack surface.
Note: This restriction only applies to network traffic that goes through the external IP via configured endpoints. For traffic between the internal IPs of VMs in the same virtual network or the same cloud service, ICMP is allowed.
This restriction is not permanent; you can configure the firewall or an Azure network security group to allow ICMP. Unfortunately, the Azure DevOps Pipelines hosted agents run on DS2_v2 and DS3_v2 VMs, whose firewall and security group settings cannot be modified by external users. If you build/release with a private (self-hosted) agent, ICMP is not limited: you can set up a private agent and run the ping test there.
(Sometimes a VPN or ExpressRoute can be used to bypass the load balancer's filtering, but I don't recommend that approach.)
Since ping is a very convenient and important troubleshooting tool, the Azure VM team is reviewing and considering expanding this capability. There is a suggestion ticket for it in the UserVoice forum: Enable ICMP traffic to Azure VMs over the Internet. You can vote for it to help move it up the development queue.
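If you must stay on a hosted agent, one workaround (a sketch, not an official Azure feature) is to replace the ICMP ping in your unit tests with a TCP reachability check. 8.8.8.8 also answers DNS over TCP on port 53, which hosted agents' outbound rules do not block:

```python
import socket

def tcp_ping(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Instead of ICMP-pinging 8.8.8.8, check TCP port 53 (DNS over TCP):
# reachable = tcp_ping("8.8.8.8", 53)
```

This verifies the same "can I reach the host" property your tests care about, without needing ICMP at all.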
Related
I am making a Slack bot on my AWS EC2 instance, and I need to open port 3000 to the public to listen for POST requests from Slack whenever users take certain actions, because Slack doesn't publish their IP range.
I wonder if there are any security issues for my EC2 instance if I open a port publicly? I also use this EC2 instance to run Airflow.
Open ports can be dangerous when the service listening on the port is misconfigured, unpatched, vulnerable to exploits, or has poor network security rules.
Attackers use open ports to find potential exploits. To run an exploit, the attacker needs to find a vulnerability.
AWS works on the Shared Responsibility Model, meaning AWS is responsible for "security of the cloud" and the customer is responsible for "security in the cloud".
It is suggested to put your EC2 instance in a private subnet and place a load balancer in the public subnet.
Public internet traffic should only talk to the load balancer, not to the instance itself.
Then you can create a WAF and attach it to the load balancer to help mitigate attacks (such as DDoS).
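Since Slack doesn't publish an IP range you could allowlist, a common compensating control is to verify Slack's request signatures in your handler, so that only requests actually signed by Slack are processed. A minimal sketch of Slack's documented v0 signing scheme (the signing secret comes from your Slack app's settings; header names are per Slack's docs):

```python
import hashlib
import hmac
import time

def verify_slack_signature(signing_secret: str, timestamp: str,
                           body: str, signature: str,
                           max_age: int = 60 * 5) -> bool:
    """Check Slack's X-Slack-Signature header (v0 scheme).

    timestamp comes from the X-Slack-Request-Timestamp header; body is
    the raw request payload. Stale requests are rejected to block replays.
    """
    if abs(time.time() - int(timestamp)) > max_age:
        return False
    basestring = f"v0:{timestamp}:{body}"
    expected = "v0=" + hmac.new(signing_secret.encode(),
                                basestring.encode(),
                                hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature)
```

With this in place, an open port 3000 only accepts payloads that Slack itself signed, even though the port is reachable from anywhere.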
How do I confirm that my VM connects to my GCP VPN gateway? The two are already on the same network. I have tried pinging the VPN gateway IP from the VM, but I cannot.
You would have to review and make sure that:
The VPN is active under Cloud VPN
Ensure that your GCP and on-prem firewall are allowing ingress/egress traffic between them
Depending on the type of VPN you chose, make sure that the IP address of the VM is shared with your on-prem side via BGP, a static route, or a policy
If you see an issue with the VPN, you can review the VPN logs via Logging (Log Viewer) and choose GCE Router. https://cloud.google.com/logging/docs/view/overview
If the issue is BGP/route/policy based, you would need to ensure your VPN IP is part of the shared range on both sides (GCP and on-prem). https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview#classic-vpn
If the issue is with the firewall, make sure that nothing is blocking your VM from communicating with your VPN IP range, on the GCP side and on your on-prem side. https://cloud.google.com/network-connectivity/docs/vpn/how-to/configuring-firewall-rules
Here is more troubleshooting documentation you can review: https://cloud.google.com/network-connectivity/docs/vpn/support/troubleshooting
I have a Google Compute Engine Ubuntu VM with the Stackdriver monitoring agent installed.
The VM has a VPC firewall rule that denies all communication except to a proxy server (used for system updates), and it has only an internal IP.
I have configured the Stackdriver agent according to the docs at https://cloud.google.com/monitoring/agent/install-agent.
The monitoring agent is unable to send monitoring data to Stackdriver unless I turn off the firewall rule.
What changes should I make to the VPC firewall rule so that the agent can send data to Stackdriver?
Stackdriver uses HTTPS to communicate with the Google API endpoints.
However, if your VM only has private IP addresses, you must also configure Private Google Access. I cover the requirements in this article:
https://www.jhanley.com/google-compute-stackdriver-logging-installation-setup-debugging/
These endpoints must be reachable for Stackdriver logging and monitoring to function:
oauth2.googleapis.com
monitoring.googleapis.com
stackdriver.googleapis.com
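Once you've adjusted the firewall rule (and Private Google Access, if the VM has no external IP), you can verify from the VM itself that each endpoint above is actually reachable. A small sketch that issues an HTTPS request to each host; any HTTP response at all means the network path works, while a connection failure suggests the firewall or DNS is still in the way:

```python
import urllib.error
import urllib.request

# Endpoints Stackdriver logging/monitoring must reach over HTTPS (443).
ENDPOINTS = [
    "oauth2.googleapis.com",
    "monitoring.googleapis.com",
    "stackdriver.googleapis.com",
]

def https_status(host: str, timeout: float = 5.0):
    """Issue an HTTPS GET to the host root; return the HTTP status code,
    or None if the connection itself fails (blocked/unresolvable)."""
    try:
        with urllib.request.urlopen(f"https://{host}/", timeout=timeout) as r:
            return r.status
    except urllib.error.HTTPError as e:
        return e.code          # got an HTTP response: endpoint is reachable
    except (urllib.error.URLError, OSError):
        return None            # connection failed: likely firewall or DNS

if __name__ == "__main__":
    for host in ENDPOINTS:
        print(host, "reachable" if https_status(host) else "BLOCKED")
```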
I am trying to run ntopng on an AWS instance (CentOS) to monitor my local network.
So my questions are:
How do I connect my local network to ntopng running on the AWS instance?
How do I integrate n2disk, nProbe Cento, and ntopng together?
You have two issues: 1) connecting an Amazon VPC to your local network, and 2) snooping on network traffic.
You can set up a VPN to connect your networks together. Consider using Openswan or a Windows Server VPN on each side of the network.
Network snooping: This is not possible in Amazon VPCs. Network interfaces cannot be put into promiscuous mode, and attempting it is forbidden by Amazon's policies.
Note: You can monitor your own traffic using VPC Flow Logs. These show higher-level packet information, but do not include the data portion.
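Flow Logs records are space-separated lines. As a sketch, here is how you might parse the default (version 2) record format into named fields; the sample record is made up for illustration:

```python
# Default VPC Flow Logs (version 2) field order, per AWS documentation.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(line: str) -> dict:
    """Split one flow-log record into named fields.

    Only header-level metadata is captured; packet payloads are
    never included in Flow Logs.
    """
    return dict(zip(FIELDS, line.split()))

# A made-up sample record (ACCEPTed TCP traffic to port 443):
sample = ("2 123456789012 eni-0a1b2c3d 10.0.0.5 93.184.216.34 "
          "49152 443 6 10 8400 1620000000 1620000060 ACCEPT OK")
record = parse_flow_log(sample)
```

Note that protocol is the IANA protocol number (6 = TCP), and ports/counters arrive as strings you may want to cast.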
I have configured and passed the health check for my AWS ELB (load balancer), but when I try to ping it or send a packet to TCP port 9300, there is no IP address for the ELB.
I have an EC2 instance at the end of the ELB which has Elasticsearch running on it.
The ELB that I configured is an internal ELB so it doesn't have a public IP address for it.
I was wondering if there is a way I can SSH to it, or do something to ping the ELB?
I am pretty new to AWS and read all the trouble shooting from AWS official website, but couldn't find a solution.
The goal that I am trying to achieve is to test whether my internal Amazon EC2 load Balancer is working properly.
I got the internal ELB IP address with the ping command; however, I am not able to ping or curl that IP address.
I want to know what I am doing wrong.
Is the way I am trying to access a private network incorrect?
An Elastic Load Balancer is presented as a single service, but actually consists of several Load Balancing servers spread across the subnets and Availability Zones you nominate.
When connecting to an Elastic Load Balancer, you should always use the DNS Name of the Elastic Load Balancer. This will then resolve into one of the several servers that are providing the load balancing service.
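You can see this for yourself by resolving the ELB's DNS name: it typically returns several addresses, one per load-balancing node, and the set can change over time, which is why you should never hard-code an ELB IP. A small sketch (the ELB hostname in the comment is a placeholder):

```python
import socket

def resolve_all(hostname: str) -> list[str]:
    """Return every distinct IP address the hostname currently resolves to."""
    infos = socket.getaddrinfo(hostname, None)
    return sorted({info[4][0] for info in infos})

# Example with a placeholder internal ELB name:
# resolve_all("internal-my-elb-123456.us-east-1.elb.amazonaws.com")
```

Running this twice some minutes apart against a real ELB name may return different addresses, which is exactly why the DNS name, not an IP, is the stable handle.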
Load Balancers are designed to pass requests and return responses. The next time a user sends a request, it might be sent to a different back-end service. Thus, it is good for web-type traffic but not suitable for situations requiring a permanent connection, such as SSH. You can configure sticky sessions for HTTP connections that will use cookies to send the user to the same back-end server if required.
The classic Elastic Load Balancer also supports TCP protocol, but these requests are distributed in a round-robin fashion to the back-end servers so they are also not suitable for long-lasting sessions.
Bottom line: They are great for request/response traffic that needs to be distributed across multiple back-end servers. They are not suitable for SSH.
Side note: Using ping to test services often isn't a good idea. ICMP is turned off in security groups by default, since it can expose services and isn't good from a security perspective. You should test connectivity by connecting via the expected protocols (eg HTTP requests) rather than using ping. This applies to testing EC2 connectivity, too.