AWS ECS: Will Windows Containers Ever Support AWSVPC Networking Mode?

I don't know whether the issue is one of time (i.e. it'll get added eventually) or whether the Windows network stack simply isn't able to give a container/task its own IP address. Does anyone have any insight on this?
From my perspective, having the dedicated IP means each task can have a dedicated security group, thereby providing a network-level security layer between tasks (very useful in multi-tenant environments).
Thanks

It's been delayed, but better late than never: AWS has announced support for the awsvpc network mode for Windows tasks on ECS EC2. The steps to get started are available here.
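Since the question was about per-task security groups: once awsvpc mode is on, each task gets its own ENI, and you attach a security group through the service's network configuration. A minimal sketch with the AWS CLI, assuming the EC2 launch type; the family, image, sizes, subnet and security-group IDs are placeholders:

```bash
# Register a Windows task definition that uses awsvpc mode
# (names, image and CPU/memory sizes are placeholders).
aws ecs register-task-definition \
  --family my-windows-task \
  --network-mode awsvpc \
  --requires-compatibilities EC2 \
  --runtime-platform operatingSystemFamily=WINDOWS_SERVER_2019_CORE \
  --cpu 1024 --memory 2048 \
  --container-definitions '[{
    "name": "my-app",
    "image": "mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2019",
    "essential": true,
    "portMappings": [{"containerPort": 80, "protocol": "tcp"}]
  }]'

# Each task now gets its own ENI, so the service can carry a
# dedicated security group (subnet/SG IDs are placeholders):
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-windows-service \
  --task-definition my-windows-task \
  --desired-count 2 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789abcdef0],securityGroups=[sg-0123456789abcdef0]}'
```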

Related

Outgoing network performance (AWS)

There is an external HTTP server (located somewhere in the US) which we must communicate with. We use AWS EC2 instances.
While we can buy a "bigger instance" to improve internal network performance, is there a way to lessen (optimize?) the round-trip time between our EC2 instance and the external server? Are there any tools that could be useful?
You haven't specified what type of EC2 instance you use, which is a big factor in determining network performance.
You also said
from my home network, it is much faster than when running on an AWS EC2 (regardless of where the ec2 is hosted)
I know nothing about your home network or your EC2 instance config, so this is hard to judge, but I'd expect the EC2 instance, on average, to have a faster network than what's available at the end user's site.
It's also not 100% clear what you're measuring. You said "round trip time", so are you only interested in end-to-end latency? Any particular throughput requirements?
That said, here's a useful cheat sheet which you can download to check your instance type: https://cloudonaut.io/ec2-network-performance-cheat-sheet/
Furthermore, you can use iperf (or iperf3) to perform some experiments on both sides of the connection:
https://www.tecmint.com/test-network-throughput-in-linux/
https://aws.amazon.com/premiumsupport/knowledge-center/network-throughput-benchmark-linux-ec2/
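For a quick start, a throughput test with iperf3 plus a basic latency check might look like this (SERVER_IP is a placeholder, and port 5201 must be reachable between the two ends):

```bash
# On the remote end (or a test box near it): run an iperf3 server.
iperf3 -s

# On the EC2 instance: a 10-second TCP throughput test.
iperf3 -c SERVER_IP -t 10

# Round-trip latency is easier to read from ping or mtr:
ping -c 20 SERVER_IP
mtr --report --report-cycles 20 SERVER_IP
```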

AWS: VPC networking mode

I have an ECS architecture in which an application runs as a container with an nginx sidecar container, so each task has two containers (nginx + app). The two containers are linked through bridge network mode. We currently observe an increase in response time and suspect the Docker bridge network, so we are trying to switch to awsvpc network mode. But when we tried to update the task definition to awsvpc, we got the error 'Links are not supported when networkMode=awsvpc.' How can we use awsvpc networking for this kind of sidecar architecture?
Linking is not supported in awsvpc network mode. For communication between tasks you need to use service discovery; containers within the same task share a network namespace under awsvpc, so they can reach each other over localhost, as the sketch after the quoted documentation shows.
links
Type: string array
Required: no
The link parameter allows containers to communicate with each other
without the need for port mappings. Only supported if the network mode
of a task definition is set to bridge.
Note
This parameter is not supported for Windows containers or tasks using
the awsvpc network mode.
task_definition_parameters
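One point worth spelling out for the sidecar case: because both containers in the task share one network namespace under awsvpc, nginx can reach the app on 127.0.0.1 and the link isn't needed at all. A rough sketch of the converted task definition; the family, images, ports and memory values are placeholders:

```bash
# Register the nginx + app task without links; both containers share
# the task's ENI and loopback interface under awsvpc.
aws ecs register-task-definition --cli-input-json '{
  "family": "app-with-nginx",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "my-registry/app:latest",
      "essential": true,
      "memory": 512,
      "portMappings": [{"containerPort": 8080}]
    },
    {
      "name": "nginx",
      "image": "nginx:stable",
      "essential": true,
      "memory": 256,
      "portMappings": [{"containerPort": 80}]
    }
  ]
}'
# In nginx.conf, replace the link-based upstream with loopback:
#   proxy_pass http://127.0.0.1:8080;
```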

How can I configure Google Cloud Platform with Cloudflare only?

I recently started using GCP, but there is one thing I can't solve.
I have 1 VM + 1 DB instance + 1 LB. The DB instance allows connections only from the VM's IP, but the VM allows traffic from all IPs (if I configure the firewall to allow only Cloudflare's and the LB's IPs, the website crashes and refuses connections).
Recently I was under attack. I activated Cloudflare's DDoS mode and restarted everything, but in about 6 hours the attack came back even with Cloudflare active. I saw MySQL connections jump from 20-30 to 254, all coming from the VM's IP, so I think the problem is the public accessibility of the VM, but I don't know how to solve it.
If I activate firewall rules to allow traffic only from the LB and Cloudflare, the web refuses all connections.
Any idea what I can do?
Thanks.
Cloud Support here. Unfortunately, we do not have visibility into what is installed on your instance or what software caused the issue.
Generally speaking you're responsible for investigating the source of the vulnerability and taking steps to mitigate it.
Here are some hints that should help:
Keep your firewall rules sensible; e.g. it is not good practice to have a rule allowing ingress connections on port 22 from all source IPs, for obvious reasons. A sketch of tighter rules follows this list.
Since you've already been rooted, change all your passwords: within the Cloud SQL instance, within the GCE instance, even within the GCP project.
It's also a good idea to check who has access to your service accounts, just in case people that aren't currently working for you or your company still have access to them.
If you're using certificates revoke them, generate new ones and share them in a secure way and with the minimum required number of users.
Securing GCE instances is a shared responsibility; in general, the OWASP hardening guides are really good.
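As a concrete illustration of the firewall point, rules like the following would admit web traffic only from Cloudflare and SSH only from a single admin IP. This is a sketch: the rule names, the `web` target tag and the admin IP are placeholders, and you should list every range Cloudflare publishes at https://www.cloudflare.com/ips/, not just the two shown.

```bash
# Allow HTTP/HTTPS only from Cloudflare's published ranges.
gcloud compute firewall-rules create allow-cloudflare-web \
  --direction=INGRESS --action=ALLOW --rules=tcp:80,tcp:443 \
  --source-ranges=173.245.48.0/20,103.21.244.0/22 \
  --target-tags=web

# Allow SSH only from one known admin address.
gcloud compute firewall-rules create allow-admin-ssh \
  --direction=INGRESS --action=ALLOW --rules=tcp:22 \
  --source-ranges=203.0.113.4/32 \
  --target-tags=web
```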
I'm quoting some info here from another StackOverflow thread that might be useful in your case:
General security advice for Google Cloud Platform instances:
Set user permissions at project level.
Connect securely to your instance.
Ensure the project firewall is not open to everyone on the internet.
Use a strong password and store passwords securely.
Ensure that all software is up to date.
Monitor project usage closely via the monitoring API to identify abnormal project usage.
To diagnose trouble with GCE instances, serial port output from the instance can be useful.
You can check the serial port output by clicking on the instance name and then on "Serial port 1 (console)". Note that these logs are wiped when instances are shut down and rebooted, and the log is not visible when the instance is not running.
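If you prefer the CLI, the same output can be pulled with gcloud (the instance name and zone here are placeholders):

```bash
gcloud compute instances get-serial-port-output my-instance \
  --zone=us-central1-a --port=1
```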
Stackdriver monitoring is also helpful to provide an audit trail for diagnosing problems.
You can use the Stackdriver Monitoring Console to set up alerting policies matching given conditions (under which a service is considered unhealthy) that can be set up to trigger email/SMS notifications.
This quickstart for Google Compute Engine instances can be completed in ~10 minutes and shows the convenience of monitoring instances.
Here are some hints you can check on keeping GCP projects secure.

Kubernetes - Cross AZ traffic

Does anyone have any advice on how to minimize cross-AZ traffic for inter-pod communication when running Kubernetes in AWS? I want to keep my microservices pinned to the same availability zone, so that microservice-a residing in az-a transmits its payload to microservice-b, also in az-a.
I know you can pin pods to a label and keep the traffic in the same AZ, but in addition to minimizing cross-AZ traffic I also want to maintain HA by deploying to multiple AZs.
In case you're willing to use alpha features, you could use inter-pod affinity or node affinity rules to implement such behaviour without losing high availability.
You'll find details in the official documentation.
Without that, you could just have one deployment pinned to one node, a second deployment pinned to another node, and one service which selects pods from both deployments - example code can be found here.
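As a sketch of the first approach: a soft (preferred) inter-pod affinity keeps microservice-b in the same zone as microservice-a where possible, while still letting the scheduler place pods elsewhere for HA. The deployment name, labels and image are placeholders, and the beta zone topology key reflects the Kubernetes versions where affinity was alpha/beta; newer clusters use topology.kubernetes.io/zone.

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice-b
spec:
  replicas: 2
  selector:
    matchLabels:
      app: microservice-b
  template:
    metadata:
      labels:
        app: microservice-b
    spec:
      affinity:
        podAffinity:
          # Prefer (but don't require) zones that already run microservice-a.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: microservice-a
              topologyKey: failure-domain.beta.kubernetes.io/zone
      containers:
      - name: microservice-b
        image: my-registry/microservice-b:latest
EOF
```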

Usefulness of Amazon ELB (Elastic Load Balancing)

We're considering implementing an ELB in our production Amazon environment. It seems it will require that the production server instances be synced by a nightly script. There is also a Solr search engine which will need to be replicated and maintained on each paired server. And there's the issue of debugging: which server is a request going to? If there's a crash, do you have to search both logs? If a production app isn't behaving, how do you isolate which instance it is, or do you just deploy debugging code to both?
We aren't having issues with response time or server load. This seems like added complexity in exchange for a limited upside - overkill, it seems to me. Thoughts?
You're enumerating the problems that arise when you need high availability :)
You need to consider how critical the availability of the service is, and take that into account when deciding what the right solution is and what is just over-engineering :)
Solutions to some caveats:
To avoid nightly syncs: use an EC2 instance as an NFS server and mount the share on both EC2 instances, or use Amazon EFS when it's available. (A sketch follows this list.)
Debugging: you can give the EC2 instances behind the ELB public IPs, limited in the security groups to just the developers' PCs, and point your /etc/hosts (or the Windows equivalent) at one particular server when debugging.
Logs: store the logs in S3 (or on the NFS server mentioned above).
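A minimal sketch of the NFS idea, assuming Amazon Linux / RHEL-style packages; the hostnames, paths and the 10.0.0.0/16 VPC CIDR are placeholders:

```bash
# On the instance acting as the NFS server:
sudo yum install -y nfs-utils
echo '/srv/share 10.0.0.0/16(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
sudo systemctl enable --now nfs-server
sudo exportfs -ra

# On each web server behind the ELB:
sudo yum install -y nfs-utils
sudo mkdir -p /mnt/share
sudo mount -t nfs nfs-server.internal:/srv/share /mnt/share
```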