Is FIPS mode on ECS Fargate Tasks in AWS GovCloud possible? - amazon-web-services

I'm trying to figure out whether enabling FIPS 140-2 mode (crypto.fips_enabled = 1) on an ECS Fargate task in AWS GovCloud is at all possible.
The AWS ECS service shows up as FedRAMP High compliant, so it would be easy to assume that all Fargate host machines run in FIPS mode by default. However, when I ran a Fargate task and checked for FIPS availability, it came back as 0 (disabled).
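The check inside the running task was roughly this (a minimal sketch; it just reads the standard kernel flag exposed at /proc/sys/crypto/fips_enabled):

```python
from pathlib import Path

# Read the kernel FIPS flag from inside the container; "1" means FIPS mode is enabled.
fips_flag = Path("/proc/sys/crypto/fips_enabled")
if fips_flag.exists():
    print("fips_enabled =", fips_flag.read_text().strip())
else:
    print("FIPS flag not exposed by this kernel")
```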
Given that FIPS mode is a kernel feature, is there still a way to turn it on? Or is there a task configuration option that would let me run my container on a FIPS-enabled host?
Please advise.

Judging by this long-languishing enhancement request, it does not appear to be possible at the moment.
[ECS] [request]: FIPS support for containers running in Fargate #659

Related

Deploying to bare EC2 instances in an ASG?

I have a service that needs to run on our own EC2 instances, since it requires some support from the kernel. My previous experience is all with containers on AWS. The application itself is distributed as a single JAR file, and I'm looking for advice on how to automate deployments. The architecture is:
An ALB in front of the ASG.
EC2 instance running a single Java application.
Any open sockets are open for an hour at most, and to avoid causing any trouble we have to drain connections to the EC2 instances before performing an update, so a hard requirement is for the ALB to stop opening new connections for an hour before the software is updated. The application is mission critical and ECS had some issues last year, so I want to minimize the AWS services I depend on. While I could do what I want on my own ECS cluster with custom AMIs, I don't want to, since I will run a single instance of the app per host and don't need the extra layer.
My question: what is the simplest way to achieve this using CodePipeline? My understanding is that I need a CodeDeploy deployment step to push something to the bare EC2 instances. How does draining with an ALB work in this case? We're using CloudFormation for the deployment.
You need to use CodeDeploy. You can find a tutorial in the AWS CodeDeploy documentation.
CodeDeploy provides deployment lifecycle hooks for EC2:
https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html#appspec-hooks-server
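To address the draining part specifically, a lifecycle hook script can deregister the instance from its target group and wait for the drain to finish before the new JAR is installed. A rough sketch with boto3 (the target group ARN is a placeholder, and CodeDeploy would run this from an ApplicationStop or BeforeInstall hook; this is not an official recipe):

```python
import urllib.request

import boto3

# Placeholder; in practice this would come from the environment, tags, or SSM.
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-app/abc123"

# The instance can discover its own ID from the instance metadata service
# (IMDSv1 shown for brevity; IMDSv2 requires a session token).
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

elbv2 = boto3.client("elbv2")

# Tell the ALB to stop opening new connections to this instance and start draining.
elbv2.deregister_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": instance_id}],
)

# Wait until existing connections have drained. The drain time is bounded by the
# target group's deregistration_delay.timeout_seconds attribute, which can be
# raised to 3600 to match the one-hour requirement.
waiter = elbv2.get_waiter("target_deregistered")
waiter.wait(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": instance_id}],
    WaiterConfig={"Delay": 30, "MaxAttempts": 130},  # poll for up to ~65 minutes
)
print("Drained; safe to stop the service and replace the JAR.")
```

After the new version is started by the later hooks, a matching script can call register_targets to put the instance back behind the ALB.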

Jenkins setup on EC2 vs ECS

Currently we have Jenkins running on-premises (VMware) and are planning to move into the cloud (AWS). What would be the best approach to install Jenkins: on EC2 or on ECS?
The best approach would be running on EC2. Make sure you have granular control over your instance security groups and network ACLs. I would recommend using Terraform to build your environment, as you can write the infrastructure as code and version control it. https://www.terraform.io/downloads.html
Have you previously containerized your Jenkins? On VMware itself? If not, and if you do not have experience with containers, go for EC2. It will be as easy as running on any other VM. For reproducing the infrastructure, use Terraform or CloudFormation.
I would recommend dockerizing your on-premises Jenkins first. See how much effort is required to implement and administer/scale it. Then go for ECS.
Otherwise, shift to EC2 and see how much admin overhead and cost you end up with. Then, if required, go for ECS.
Another point you have to consider is how your Jenkins is architected. Are you using master/slave? Are you running builds continuously so that VMs are never idle? Do you want easy scaling, such that a build environment is created and destroyed per build execution?
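If you do want to test-drive a containerized Jenkins before committing to ECS, here is a minimal local sketch using the Docker SDK for Python (the image tag, ports and volume name are just illustrative defaults):

```python
import docker  # pip install docker; assumes a local Docker daemon is running

client = docker.from_env()

# Run the official Jenkins LTS image, persisting JENKINS_HOME in a named volume
# so the configuration survives container restarts.
container = client.containers.run(
    "jenkins/jenkins:lts",
    name="jenkins-poc",
    detach=True,
    ports={"8080/tcp": 8080, "50000/tcp": 50000},  # web UI and agent port
    volumes={"jenkins_home": {"bind": "/var/jenkins_home", "mode": "rw"}},
)
print(container.name, container.status)
```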
If you have no experience with running containers then create it on EC2. Before running on ECS make sure you really understand containers and container orchestration.
I just want to complement the other answers by providing a link to the official AWS whitepaper:
Jenkins on AWS
It might be of special interest as it discusses both options, EC2 and ECS, in detail:
In this section we discuss two approaches to deploying Jenkins on AWS. First, you could use the traditional deployment on top of Amazon Elastic Compute Cloud (Amazon EC2). Second, you could use the containerized deployment that leverages Amazon EC2 Container Service (Amazon ECS). Both approaches are production-ready for an enterprise environment.
There is also an AWS sample solution for Jenkins on AWS using ECS:
https://github.com/aws-samples/jenkins-on-aws:
This project will build and deploy an immutable, fault tolerant, and cost effective Jenkins environment in AWS using ECS. All Jenkins images are managed within the repository (pulled from upstream) and fully configurable as code. Plugin installation is automated, including versioning, as well as configured through the Configuration as Code plugin.

docker-compose on AWS

I would like to run a web application on AWS. Locally, I run it on an Ubuntu VirtualBox VM with docker-compose; it requires 2-4 cores, 8 GB RAM, and 30-40 GB of disk. Do you think it will run on AWS? Should I install docker-compose and the app on an EC2 instance? Elastic Beanstalk, ECS (https://aws.amazon.com/about-aws/whats-new/2018/06/amazon-ecs-cli-supports-docker-compose-version-3/ ), or something else?
I am wary because my attempts to run it on an IT-department-managed KVM failed.
What resources are best to request for either of these solutions?
At the moment this is more of a proof of concept/demo, but eventually I hope to deploy production on a Kubernetes cluster.
I'm looking for, in order of decreasing importance:
Simplicity and the best chance of succeeding with the deployment ASAP
Costs
Stability, QoS
You may want to consider using AWS Fargate. This lets you run container-based applications without having to manage the underlying EC2 instances. You can use Fargate with either ECS or EKS.
The ECS CLI that you link to in your question also helps you create your application and should make it easy to get started.
You can look at ecsworkshop.com for an introduction to using ECS and Fargate.
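As a rough idea of sizing, here is a hedged boto3 sketch of a Fargate task definition matching the 2 vCPU / 8 GB footprint described in the question (the family, image and role ARN are placeholders). Note that Fargate's default ephemeral storage is 20 GiB, so the 30-40 GB of disk would need the ephemeralStorage override (or an EFS volume):

```python
import boto3

ecs = boto3.client("ecs")

# Placeholder names throughout; 2048 CPU units = 2 vCPU, memory is in MiB.
response = ecs.register_task_definition(
    family="my-compose-app",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="2048",
    memory="8192",
    ephemeralStorage={"sizeInGiB": 40},  # raise the default 20 GiB task storage
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "app",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
            "essential": True,
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```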

Can you use Amazon's MSK on EKS?

I'm looking at the possibility of replacing/moving our existing Apache Kafka setup (version 2.1.0) to Amazon's MSK and having it work with EKS.
I've been looking around to see if this is actually possible and if someone has done it or attempted it, but so far I've only seen references to running Apache Kafka on EKS. Does anyone know if it is possible/makes sense to use MSK with EKS?
Many thanks.
Amazon MSK provides fully-managed Kafka clusters, which means that from your side, you do not have to operate the cluster at all. Broker and Zookeeper nodes are packaged, deployed, created, updated and patched for you.
This step-by-step tutorial illustrates the creation of a cluster.
The answer is no: MSK is a fully managed service provided by AWS, and you cannot install a managed service :-) but you can run your own Kafka cluster on top of a Kubernetes cluster in AWS, e.g. on EKS, by installing a Kafka operator:
https://banzaicloud.com/docs/supertubes/kafka-operator/
I haven't done it for MSK before, but I have certainly done it for AWS Aurora Postgres. I am not sure why you couldn't define your external persistence (in this case MSK) as a Service with no selector and then manually register an Endpoints object pointing to the MSK brokers.
https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors
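A rough sketch of that approach with the Kubernetes Python client is below (names, ports and the broker address are placeholders; note that Endpoints addresses must be IPs, so the MSK bootstrap hostnames would have to be resolved first, or an ExternalName Service used instead):

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()
NAMESPACE = "default"

# A selector-less Service: nothing in the cluster backs it, so Kubernetes does not
# create Endpoints automatically and we supply them ourselves below.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "msk-kafka"},
    "spec": {"ports": [{"port": 9092, "targetPort": 9092}]},
}

# Endpoints pointing at the external MSK broker. 10.0.1.23 stands in for a
# resolved broker IP; the name must match the Service name.
endpoints = {
    "apiVersion": "v1",
    "kind": "Endpoints",
    "metadata": {"name": "msk-kafka"},
    "subsets": [
        {
            "addresses": [{"ip": "10.0.1.23"}],
            "ports": [{"port": 9092}],
        }
    ],
}

v1.create_namespaced_service(namespace=NAMESPACE, body=service)
v1.create_namespaced_endpoints(namespace=NAMESPACE, body=endpoints)
```

Clients inside the cluster could then bootstrap against msk-kafka.default.svc.cluster.local:9092.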

IBM Integration Bus on AWS Cloud

Can IBM Integration Bus (and/or WebSphere Message Broker) be implemented on AWS? Can my on-premises ESB be migrated to the AWS Cloud?
Thanks in advance.
AWS EC2 allows importing VMs as AMIs, and you can then start an EC2 instance using that image. If you are new to AWS, you can check the link below:
https://aws.amazon.com/ec2/vm-import/
However, you should be careful about the IIB license and how many machines you are allowed to install it on before registering the AMI in a launch configuration, creating an Auto Scaling group, and setting a scaling policy that could start more instances than what you purchased.
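If you go the VM import route, a hedged boto3 sketch of kicking off the import is below (bucket, key and disk format are placeholders; the exported disk must already be in S3 and the vmimport service role configured):

```python
import boto3

ec2 = boto3.client("ec2")

# Start importing an exported VM disk (e.g. a VMDK from VMware) from S3 as an AMI.
import_task = ec2.import_image(
    Description="IIB server imported from on-premises VMware",
    DiskContainers=[
        {
            "Description": "IIB root disk",
            "Format": "vmdk",
            "UserBucket": {"S3Bucket": "my-vm-exports", "S3Key": "iib-server.vmdk"},
        }
    ],
)

# The import runs asynchronously; poll the task until the AMI is ready.
task_id = import_task["ImportTaskId"]
status = ec2.describe_import_image_tasks(ImportTaskIds=[task_id])
print(status["ImportImageTasks"][0].get("Status"))
```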
That's very much possible. There are several possible approaches.
1. IIB on EC2
Installing and configuring IIB on an EC2 instance is very similar to doing the same on on-premises servers. The only difference is that the physical server is in the AWS Cloud. While this approach gives you maximum flexibility to design your architecture any way you like, it does not take advantage of the basic features of the cloud.
2. Quick Start
IIB is available for deployment under AWS Quick Start. You can read more about this here. This helps you get started quickly by setting up the entire environment in a few clicks. But, if you're planning to migrate your existing architecture to AWS, this may not suit you as the architecture is pre-defined with limited options for customization.
3. IIB on Containers
ACE 11 provides better support for containerization. You can read more about running IIB 10 in containers here and ACE 11 in containers here. After this, the containers can be deployed to a fully managed container service such as Amazon Elastic Container Service (ECS), or to your own container setup such as Docker on EC2.
Yes, of course. AWS provides the IaaS and you just install whatever you want inside. Make sure you open the required ports and use dedicated credentials for the installation (don't use admin), and everything should work.
IBM also provides Docker images of Integration Bus v10 and App Connect Enterprise v11. This is true for all of their integration tools: MQ, API Management, and more.
These are not restricted to AWS.