When to use DhcpOptions

I recently started working with AWS CloudFormation and encountered the resource DhcpOptions.
Can anyone tell me when I would need to include this resource in my template?
Thank you.

The use of the resource AWS::EC2::DHCPOptions [1] is optional.
AWS by default creates this resource for you and associates it with your VPC; in a CloudFormation template, that association is expressed with an AWS::EC2::VPCDHCPOptionsAssociation resource [2].
The setting is used to configure the (AWS-provided) DHCP server in your VPC. The attributes you assign to the resource are translated into DHCP options, which are passed to your EC2 instances when they request the information via the DHCP protocol. For example, the CloudFormation property DomainNameServers is translated into the Domain Name Server option (see Section 3.8 of RFC 2132 [3]) and DomainName into the Domain Name option (Section 3.17). For an overview of all existing BOOTP and DHCP options, see the IETF document on the IANA website [4] or the respective Wikipedia article [5].
AWS defines DHCP option sets in their docs [6] as:
The Dynamic Host Configuration Protocol (DHCP) provides a standard for passing configuration information to hosts on a TCP/IP network. The options field of a DHCP message contains the configuration parameters. Some of those parameters are the domain name, domain name server, and the netbios-node-type.
You can configure DHCP options sets for your virtual private clouds (VPC).
When to use the DHCPOptions resource?
As far as I know, the DHCP options of a VPC are often modified in order to change the network's default DNS server. If you have your clients configured to obtain the DNS servers via DHCP (which is standard for Amazon Linux and Amazon Linux 2), you can change their DNS servers by creating a new DHCP options set and setting a custom domain-name-servers option.
That is of course only one aspect of network configuration. By changing other options, such as the ntp-servers option, you can adjust other configuration values that are sent over your VPC network when requested by DHCP clients (such as EC2 instances).
Note: Once you create an AWS::EC2::DHCPOptions resource, do not forget to assign it to a VPC using the AWS::EC2::VPCDHCPOptionsAssociation resource.
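As an illustration of those two steps outside of CloudFormation, here is a minimal boto3 sketch; the VPC ID and the DNS/NTP addresses are placeholders:

import boto3

ec2 = boto3.client("ec2")

# Create a DHCP options set with a custom DNS server and NTP server.
# The keys are the DHCP option names that the CloudFormation properties
# DomainNameServers and NtpServers map to.
options = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "domain-name-servers", "Values": ["10.0.0.53"]},  # placeholder resolver
        {"Key": "ntp-servers", "Values": ["169.254.169.123"]},    # Amazon Time Sync Service
    ]
)

# Associate the new options set with the VPC, which is what
# AWS::EC2::VPCDHCPOptionsAssociation does in a template.
ec2.associate_dhcp_options(
    DhcpOptionsId=options["DhcpOptions"]["DhcpOptionsId"],
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)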
What is the default DHCP option set?
The docs [6] state:
When you create a VPC, we automatically create a set of DHCP options and associate them with the VPC. This set includes two options: domain-name-servers=AmazonProvidedDNS, and domain-name=domain-name-for-your-region.
For the domain-name-servers option, that means:
AmazonProvidedDNS is an Amazon DNS server, and this option enables DNS for instances that need to communicate over the VPC's Internet gateway. The string AmazonProvidedDNS maps to a DNS server running on a reserved IP address at the base of the VPC IPv4 network range, plus two. For example, the DNS Server on a 10.0.0.0/16 network is located at 10.0.0.2.
References
[1] https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-dhcp-options.html
[2] https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-vpc-dhcp-options-assoc.html
[3] https://www.rfc-editor.org/rfc/rfc2132
[4] https://www.iana.org/assignments/bootp-dhcp-parameters/bootp-dhcp-parameters.xhtml
[5] https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol#Options
[6] https://docs.aws.amazon.com/vpc/latest/userguide/VPC_DHCP_Options.html

Related

AWS ECS: How to communicate between services within ECS Cluster without using ALB? Is there a way?

Suppose I have a service, say auth (port 8080), with 3 tasks running, and another service, say config-server (port 8888), with 2 tasks running, from which auth will load its configuration properties, similar to Spring Cloud Config Server.
Launch Type: EC2
auth service running on port 8080
config-server service running on port 8888
Now, in order to access config-server from auth, do I have to use an ALB to call config-server, or can I call it using the service name, like http://config-server:8888?
I tried, but it's not working. Did I misunderstand any concept here?
I would like to get some insight on this.
This is what my Service Discovery configuration looks like.
EDITS:
I created a private namespace test.lo and it's still not working:
curl http://config-server.test.lo
curl: (6) Could not resolve host: config-server.test.lo
Here are some general things to check.
Ensure that the enableDnsHostnames and enableDnsSupport options are enabled for the VPC (see the sketch after this list).
Don't use local as a private namespace. It's a reserved name.
Check the private hosted zone created in Route 53 and verify that its A (and SRV, if used) records are correctly set to the private IP addresses of the service's tasks.
The private hosted zone can be resolved only from inside the same VPC as the ECS service. So to check whether it works, you can create an instance in the VPC and test from there.
Use the dig tool to check whether DNS actually resolves the private DNS name to private IP addresses. It should return multiple addresses, one for each task in the service.
If you are using awsvpc network mode, you can use either A or SRV record types. So if SRV does not work, it could be worth trying an A record.
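For the first item, here is a minimal boto3 sketch (the VPC ID is a placeholder) that checks the two VPC DNS attributes and turns them on if needed:

import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder VPC ID

# Check the two VPC DNS attributes that private DNS namespaces rely on.
support = ec2.describe_vpc_attribute(VpcId=vpc_id, Attribute="enableDnsSupport")
hostnames = ec2.describe_vpc_attribute(VpcId=vpc_id, Attribute="enableDnsHostnames")
print("enableDnsSupport:", support["EnableDnsSupport"]["Value"])
print("enableDnsHostnames:", hostnames["EnableDnsHostnames"]["Value"])

# Enable them if they are off (modify_vpc_attribute takes one attribute per call).
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsSupport={"Value": True})
ec2.modify_vpc_attribute(VpcId=vpc_id, EnableDnsHostnames={"Value": True})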

Cannot connect to AWS Transfer S3 SFTP server - might need to set security group

I'm trying to set up an SFTP server managed by AWS that has a fixed IP address which external clients can whitelist in a firewall. Based on this FAQ, this is what I should do:
You can enable fixed IPs for your server endpoint by selecting the VPC endpoint for your server and choosing the internet-facing option. This will allow you to attach Elastic IPs (including BYO IPs) to your server’s endpoint, which is assigned as the endpoint’s IP address
So I followed the official instructions here under "Creating an Internet-Facing Endpoint for Your SFTP Server" and compared my result against the result screenshot from the docs.
My result is almost the same, except that under the "Endpoint Configuration" table the last column says "Private IPv4 Address" instead of "Public". That's the first red flag. I have no idea why it's a private address. It doesn't look like one; it's the IP address of the Elastic IP that I created, and the endpoint DNS name s-******.server.transfer.eu-west-1.amazonaws.com resolves to that IP address on my local machine.
If I ping the endpoint or the IP address, it doesn't work:
451 packets transmitted, 0 received, 100% packet loss, time 460776ms
If I try connecting with sftp or ssh it hangs for a while before failing:
ssh: connect to host 34.****** port 22: Connection timed out
Connection closed
The other potential problem is security groups:
At this point, your endpoint is assigned with the selected VPC's default security group. To associate additional or change existing security groups, visit the Security Groups section in the Amazon VPC console (https://console.aws.amazon.com/vpc/).
These instructions don't make sense to me because there's nowhere in the Security Groups interface that I can assign a group to another entity such as a transfer server. And there's nowhere in the transfer server configuration that mentions security groups. How do I set a new security group?
I tried changing the security group of the Network Interface of the Elastic IP, but I got a permission error even though I'm an administrator. Apparently I don't actually own ENIs? In any case I don't know if this is the right path.
The solution was to find the endpoint that was created for the server in the "Endpoints" section of the VPC console. The security groups of the endpoint can be edited.
The "Private IPv4 address" seems to be irrelevant.
The default security group controls access to the internet-facing endpoint for the new SFTP server in a VPC. Mess around with the default security group ingress rules for the VPC selected for the SFTP server. Or, whitelist the exact IP address connecting to the SFTP endpoint in the default security group.
If the admin says ho hum, create a second VPC for the SFTP server if isolation is absolutely necessary. Fiddle with the default group in the new, isolated VPC.
Link:
Creating an Internet-Facing Endpoint for Your SFTP Server
Happy transferring!

Client IP address in Istio

So I have a setup like this:
AWS NLB (forwards) --> Istio --> Nginx pod
Now, I'm trying to implement rate limiting at the Istio layer. I followed this link. However, I can still request the API more than what I configured. Looking into it more, I logged the X-Forwarded-For header in nginx, and it's empty.
So, how do I get the client IP in Istio when I'm using an NLB? The NLB forwards the client IP, but how? In a header?
EDITS:
Istio Version: 1.2.5
istio-ingressgateway is configured as type NodePort.
According to the AWS documentation about Network Load Balancers:
A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration.
...
When you create a target group, you specify its target type, which determines whether you register targets by instance ID or IP address. If you register targets by instance ID, the source IP addresses of the clients are preserved and provided to your applications. If you register targets by IP address, the source IP addresses are the private IP addresses of the load balancer nodes.
There are two ways of preserving the client IP address when using an NLB:
1. NLB preserves the client IP address in the source address when you register targets by instance ID.
So the client IP address is only available with that specific NLB configuration. You can read more about target groups in the AWS documentation.
2. Proxy protocol headers.
Proxy protocol can be used to send additional data, such as the source IP address, in a header, even if you register targets by IP address.
You can follow the AWS documentation for a guide and examples of how to configure proxy protocol.
To enable Proxy Protocol using the console
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
On the navigation pane, under LOAD BALANCING, choose Target Groups.
Select the target group.
Choose Description, Edit attributes.
Select Enable proxy protocol v2, and then choose Save.
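The console steps above just set a target group attribute, so the same can be done through the API. A minimal boto3 sketch (the target group ARN is a placeholder):

import boto3

elbv2 = boto3.client("elbv2")

# Enable proxy protocol v2 on the NLB target group so the original client
# address is prepended to each connection in a proxy protocol header.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/istio-ingress/0123456789abcdef",  # placeholder
    Attributes=[{"Key": "proxy_protocol_v2.enabled", "Value": "true"}],
)

Keep in mind that whatever terminates the TCP connection behind the NLB (here the Istio ingress gateway) must also be configured to accept proxy protocol, otherwise the prepended header will break the connection.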

How to parse custom PROXY protocol v2 header for custom routing in HAProxy configuration?

How can one parse the PROXY protocol version 2 header and use the parsed values to select a backend?
Specifically, I am making a connection from one AWS account to another using a VPC PrivateLink endpoint with PROXY v2 enabled. This includes the endpoint ID according to the docs.
The Proxy Protocol header also includes the ID of the endpoint. This information is encoded using a custom Type-Length-Value (TLV) vector as follows.
My goal is to connect from a resource A in account 1 to a resource B in account 2. The plan is resource A -> PrivateLink -> NLB (with PROXY v2 enabled) -> HAProxy -> resource B.
I need to detect the VPC PrivateLink endpoint ID in the HAProxy frontend to select the correct backend. I'm not clear on how to call a custom parser from the HAProxy configuration, or whether this is even possible. If it is, how can this be done?
Reason I can't just use source IP: It is possible for private IP spaces to overlap in my architecture. There will be several accounts acting as account 1 in the example above, so I have to do destination routing based on the endpoint ID rather than the source IP exposed by the PROXY usage.
Examples
Not good
This is our current scenario. In it, two inbound connections from different VPCs having the same private IP address space cannot be distinguished.
frontend salt_4506_acctA_front
    bind 10.0.1.32:4506 accept-proxy
    mode tcp
    default_backend salt_4506_acctA_back

backend salt_4506_acctA_back
    balance roundrobin
    mode tcp
    server salt-master-ecs 192.168.0.88:32768
If we need to route connections for acctB's VPC using the same IP range, there would be no way to distinguish them.
Ideal
An ideal solution would be to modify this to something like the following (though I recognize this won't work; it is just pseudo-configuration).
frontend salt_4506_acctA_front
    bind *:4506 accept-proxy if endpointID == vpce-xxxxxxx1
    mode tcp
    default_backend salt_4506_acctA_back

backend salt_4506_acctA_back
    balance roundrobin
    mode tcp
    server salt-master-ecs 192.168.0.88:32768
Any other options in place of HAProxy for destination routing based on the endpoint ID are also acceptable, but HAProxy seemed like the obvious candidate.
It looks like AWS uses the "2.2.7. Reserved type ranges" mechanism described in https://www.haproxy.org/download/1.8/doc/proxy-protocol.txt, so you will need to parse this part on your own.
This could be possible in Lua, maybe; I'm not an expert in Lua yet ;-)
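For illustration of the parsing itself, here is a minimal Python sketch; the same walk over the TLVs would have to be ported to Lua or run in a small helper in front of HAProxy. It assumes the PROXY protocol v2 layout from the spec linked above and the TLV type 0xEA with subtype 0x01 that AWS documents for the VPC endpoint ID:

import struct

# PROXY protocol v2 constants (from the spec and the AWS PrivateLink docs).
PP2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"
PP2_TYPE_AWS = 0xEA             # reserved "custom" TLV type used by AWS
PP2_SUBTYPE_AWS_VPCE_ID = 0x01  # subtype carrying the VPC endpoint ID

def extract_vpce_id(header):
    """Return the VPC endpoint ID from a PROXY protocol v2 header, or None."""
    if not header.startswith(PP2_SIGNATURE):
        raise ValueError("not a PROXY protocol v2 header")

    ver_cmd, fam_proto, length = struct.unpack("!BBH", header[12:16])
    if ver_cmd >> 4 != 0x2:
        raise ValueError("unsupported PROXY protocol version")
    body = header[16:16 + length]

    # Skip the address block: 12 bytes for IPv4, 36 bytes for IPv6.
    offset = {0x1: 12, 0x2: 36}.get(fam_proto >> 4, 0)

    # Walk the TLVs (1-byte type, 2-byte big-endian length, value).
    while offset + 3 <= len(body):
        tlv_type, tlv_len = struct.unpack("!BH", body[offset:offset + 3])
        value = body[offset + 3:offset + 3 + tlv_len]
        if tlv_type == PP2_TYPE_AWS and value[:1] == bytes([PP2_SUBTYPE_AWS_VPCE_ID]):
            return value[1:].decode("ascii")
        offset += 3 + tlv_len
    return None

Once the endpoint ID is extracted this way, it can be mapped to the desired backend; on the HAProxy side the equivalent Lua code could be registered as a sample fetch and its result used in a use_backend rule.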

Link a domain to Amazon Lightsail, Ubuntu

I'm unable to ping the domain name or the Amazon static IP from the Lightsail instance (attached to that static IP).
Bought a domain name (say, test.com) from Google.
Created an Amazon Lightsail Ubuntu 16.04 instance & attached a static IP.
Enabled the firewall on the Lightsail instance & allowed the SSH/HTTP/HTTPS ports.
Added DNS settings on the Google domain as below:
Using google domain name servers
Registered host: www.test.com -> Amazon static ip
Custom resource records: # -> A -> Amazon static ip
Custom resource records: www -> A -> Amazon static ip
After all the above steps, I am able to access test.com from a web browser.
Now the issue is, I am unable to ping test.com from the Lightsail instance (the same one created in step 2).
To add, I am able to ping google.com from the same instance. I'm wondering whether a route is missing.
Can someone guide me here? Many thanks.
Ping uses the ICMP protocol, and the Lightsail firewall rules do not have a way to allow that protocol so that instances can be pinged from the Internet -- they only allow TCP and UDP. All outbound traffic is allowed, and the firewall is stateful, so you can ping out but not in.
Even if it is late, it might be useful for someone.
In Lightsail, just add a new rule: Allow TCP+UDP, ports 0-65535.
As of April 19, 2021, the protocol options in Lightsail firewall rules now include:
Ping (ICMP)
Custom ICMP
All ICMP
I have successfully got a Lightsail instance to respond to pings by selecting the first option, Ping (ICMP). No port selection is required. I recommend adding an IP restriction for enhanced security, if your use case allows.
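If you prefer to script the new rule, here is a sketch using boto3. It assumes the Lightsail API's icmp protocol option; the instance name and source address are placeholders, and for ICMP the fromPort/toPort fields carry the ICMP type and code rather than TCP/UDP ports:

import boto3

lightsail = boto3.client("lightsail")

# Allow inbound ICMP echo requests (ping) on a Lightsail instance.
lightsail.open_instance_public_ports(
    instanceName="my-instance",  # placeholder instance name
    portInfo={
        "protocol": "icmp",
        "fromPort": 8,   # ICMP type 8 = echo request (assumption based on the API docs)
        "toPort": -1,    # -1 = any ICMP code
        "cidrs": ["203.0.113.10/32"],  # placeholder; omit to allow any source
    },
)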