How to set up Elastic Inference combined with an EC2 instance on AWS

I just found out about the new AWS Elastic Inference offering, which is meant to make access to teraflops of compute cheaper. I found the intro page unbelievably hard to follow, especially the part explaining what needs to be done to get it set up and running. Until now I have just been using p2.xlarge instances to run deep learning training and inference.
I am mostly interested in combining an EC2 c4.xlarge or c5.xlarge with an eia1.large accelerator.
Has anybody gone through the steps already? Is there a full step-by-step tutorial for this setup? Unfortunately, the current "tutorial" just points to other tutorials that are too general.

I was struggling at the beginning too, getting errors when trying to run their example, but eventually I managed. Here's where you should start, and I cannot overemphasize the importance of following the instructions on this page in detail:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/setting-up-ei.html
It is a very important and complete set of instructions. Pay close attention to all the inbound and outbound rules and the different security group settings.
Then simply follow this example and take it from there:
https://docs.aws.amazon.com/dlami/latest/devguide/tutorial-tf-elastic-inference.html
Good luck!
P.S. My instance setup was c5.xlarge + eia1.medium.
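In case it's useful, here is a minimal boto3 sketch of launching that combination programmatically. This is only an illustration, not part of the official tutorial; the AMI, key pair, subnet, and security group IDs are placeholders you would replace with the values produced by the setup guide above (the security group in particular must carry the rules that guide describes).

```python
# Minimal sketch: launch a c5.xlarge with an eia1.medium Elastic Inference
# accelerator attached. All resource IDs below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",            # e.g. a Deep Learning AMI
    InstanceType="c5.xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",                      # placeholder key pair
    SubnetId="subnet-0123456789abcdef0",        # subnet from the setup guide
    SecurityGroupIds=["sg-0123456789abcdef0"],  # SG with the EI rules applied
    ElasticInferenceAccelerators=[{"Type": "eia1.medium"}],
)
print(response["Instances"][0]["InstanceId"])
```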

Related

Understanding where to begin with batch processing on AWS

I have a set of calculations that needs to run as a batch, and the workload is easily parallelized across machines. The work is already packaged in a Docker container. I'm trying to understand the easiest way to run this workload in a highly parallel way on AWS. However, in trying to figure out where to begin, I'm having trouble finding the right entry point. I read about AWS Batch and AWS Fargate, but each time I try to go down one of those paths to learn about them in more detail, more AWS services start popping up (Lambdas, Step Functions, ECS, Auto Scaling groups), with each article using a different combination. Furthermore, I start thinking about the problem as Batch vs. Fargate, and then I find another article that talks about Batch + Fargate, or X + ECS + ....
I'm having trouble finding the appropriate introduction to the choices so I can get started with setting something up and getting some experience. Any pointers on which direction I might go or some resources for me to look at?
AWS container services team member here. Your question pushes all my buttons, because I have been working on a deliverable to address exactly this confusion ("where do I start with xyz?"). I can try to answer your question briefly here, but if you want to read more (perhaps way more than you'd need), feel free to contact me offline (mreferre at amazon dot com will work).
First and foremost, it's not a "vs" but an "and". Think of all the products you mention as being distributed across different layers of the stack (this is a draft visual in the deliverable):
Fargate represents capacity (where your container runs), ECS is a core container orchestrator, and Batch is one of the provisioners on top of the container orchestrator. Lambda is something separate that lives on its own. The options for your specific use case seem to be:
Lambda
ECS/Fargate
Batch/ECS/Fargate
Step Functions/ECS/Fargate (this one is outside of my analysis and you don't see it in my visual; I'm wondering if I should add it).
As others have hinted, you probably want to use Lambda if your model is event-driven (e.g. if you want to fire up a dedicated function for every event, like a new file uploaded to S3).
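For that event-driven pattern, the handler side is tiny. Here's an illustrative sketch of a Lambda function reacting to S3 object-created events; the trigger itself is configured on the bucket/function, and the processing call is a placeholder for your own logic:

```python
# Illustrative Lambda handler for an S3 "object created" trigger.
import urllib.parse

def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Placeholder: call whatever your container does today for this object.
        print(f"Would process s3://{bucket}/{key}")
    return {"status": "ok"}
```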
You probably do not want to use a naked ECS/Fargate solution because it would require more work to deal with the triggering and the scheduling of your batch jobs.
You probably want to use either Batch or Step Functions to schedule jobs on ECS/Fargate. I'd argue SF is good if you have basic workflows to deal with, and Batch if you need to manage complex jobs at scale. Perhaps this 35-minute presentation that I did last year can provide a bit more background on the Batch vs. SF differences.
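To make the Batch/ECS/Fargate option a bit more concrete, here is a rough sketch of fanning a containerized calculation out as Batch jobs from Python. It assumes a job queue and a job definition pointing at your Docker image already exist; the names are made up:

```python
# Rough sketch: submit one AWS Batch job per input shard.
# "my-job-queue" and "my-job-definition" are assumed to already exist
# (e.g. backed by a Fargate compute environment).
import boto3

batch = boto3.client("batch")

for shard in range(10):
    job = batch.submit_job(
        jobName=f"calc-shard-{shard}",
        jobQueue="my-job-queue",
        jobDefinition="my-job-definition",
        containerOverrides={
            "environment": [{"name": "SHARD", "value": str(shard)}],
        },
    )
    print(job["jobId"])
```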
Let me know if you have any additional questions because this discussion is super useful for the positioning I am trying to build.

Is there a way to discover VMs using Terraform?

Infrastructure team members are creating, deleting, and modifying resources in a GCP project using the console. The security team wants to scan the infrastructure and check whether proper security measures are in place.
I am trying to create a Terraform script which will:
1. Take a project ID as input and list all instances in the given project.
2. Loop over all the instances and check whether the security controls are in place.
3. If any security control is missing, the Terraform script will modify the resource (VM).
I have to repeat the same steps for all resources available in the project, like subnets, Cloud Storage buckets, firewalls, etc.
As per my initial investigation, to do such a task we would have to import the resources into Terraform using the "terraform import" command and then think about loops.
Now it looks like using the GCP APIs is the best fit for this task, since Terraform does not seem to be a good choice for this kind of work, and I am not sure whether it is even achievable with Terraform.
Can somebody provide any directions here?
Curious if by "console" you mean the GCP console (i.e. by hand), because if you are not already using Terraform to create the resources (and do not plan to in the future), then Terraform is not the correct tool for what you're describing. I'd actually argue it increases the complexity.
Mostly because:
The import feature is not intended for this kind of use case, and we still find regular issues with it. Maybe once for a few resources, but not for entire environments, and not without Terraform becoming the future source of truth. Projects such as terraforming do their best but still face wild-west issues in complex environments. Not all resources even support importing.
Terraform will not tell you anything about the VMs that you couldn't already learn from the GCP CLI. If you need more information to make an assessment about the controls, then you will need another tool or some complicated provisioners. Provisioners at best would end up being a wrapper around other tooling you could probably use directly.
Honestly, I'm worried your team is trying to avoid the pain of converting older practices to IaC. It's uncomfortable and challenging, but it yields better fruit in the long run than the path you're describing.
I digress. If you do have infra created via Terraform, then I'd invest more time in some other practices that can accomplish the same results. Some options are: 1) enforce best practices via parent modules that security has "blessed", 2) implement some CI on your Terraform, 3) AWS has Config and Systems Manager; I'm not sure if GCP has an equivalent, but I would look around. It's also worth evaluating different technologies for different layers of abstraction. What checks your OS might be different from what checks your security groups, and that's OK. Knowing is half the battle and might make for a saner first version than automatic remediation.
With or without Terraform, there is an ecosystem of both products and open-source projects that can help with compliance or control enforcement. Take a look at tools like InSpec, Sentinel, or SaltStack for inspiration.
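To illustrate the "call the GCP APIs directly" route, here is a rough Python sketch that lists every instance in a project and flags one example control (VMs with an external IP). It assumes the google-cloud-compute client library and application default credentials; the actual checks would be whatever your security team defines:

```python
# Rough sketch: enumerate all instances in a project and flag ones with an
# external IP. The check is only an example; substitute your own controls.
from google.cloud import compute_v1

def audit_project(project_id: str) -> None:
    client = compute_v1.InstancesClient()
    request = compute_v1.AggregatedListInstancesRequest(project=project_id)
    for zone, scoped_list in client.aggregated_list(request=request):
        for instance in scoped_list.instances:
            has_external_ip = any(
                nic.access_configs for nic in instance.network_interfaces
            )
            if has_external_ip:
                print(f"[{zone}] {instance.name}: has an external IP")

audit_project("my-gcp-project")  # placeholder project ID
```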

Deploy Hyperledger on AWS - production setup

My company is currently evaluating Hyperledger (Fabric) and we're using it for our POC. It looks very promising, and we're targeting a production rollout in the next few months.
We're targeting AWS as our production environment.
However, we're struggling to find good tutorials/practices/recommendations about operating a Hyperledger network in such an environment.
I'm aware that Cello aims to solve/ease deploying and monitoring a Hyperledger network, but I also read that it's not production-ready yet. The question is, should we even consider looking at Cello at this point?
If not, what are our alternatives? Docker Swarm, Kubernetes?
I also didn't find information about recommended instance types. I understand this is application- and AWS-specific, but what are the minimal system requirements (memory, CPU, network) for, say, a 'peer' node? (Our application is not network-intensive, and not many transactions will be submitted per hour/day, only a few per day.)
Another question is where to create those instances on AWS from a geographical and decentralization point of view. Does it make sense for all of them to be created in the same region, or must we create instances running in different regions?
Thanks a lot.
Igor.
Yes, look at Cello; if nothing else it will help you see the AWS deployment model.
There is really nothing special to it: design the desired system (peers, orderer, gateways, etc.), then decide how many EC2 instances you need to support that.
As for WHERE (region): it depends on where the connecting application is and what kind of fault tolerance you need for your business model.
One of the businesses I am working with wants a minimum of 99.99999% availability, so multi-region is critical. It's just another EC2 instance with sockets open to hosts in other regions.
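As an illustration of "sockets open from different hosts", cross-region connectivity usually comes down to a security group rule on the peer's port. This is only a sketch; it assumes the conventional Fabric peer port 7051, and the security group ID and remote address are placeholders:

```python
# Illustrative only: allow a peer host in another region to reach this
# region's peer on port 7051 (the conventional Fabric peer port).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # this region's peer SG (placeholder)
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 7051,
            "ToPort": 7051,
            "IpRanges": [
                {
                    "CidrIp": "203.0.113.10/32",  # other region's peer (placeholder)
                    "Description": "fabric peer in eu-west-1",
                }
            ],
        }
    ],
)
```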
AWS doesn't provide much in the way of support for Hyperledger. They have some templates which let you set up the VMs initially, but that's stuff you can do yourself as well.
You are right, the documentation is very light and most of the time confusing. I got to the point where I can start from scratch with a brand-new VM, get everything ready, deploy my own network definition and chaincode, and I have the scripts to do that.
IBM Cloud has much better support for Hyperledger, however. You can design your network visually, download your connection profiles, deploy and instantiate chaincode, create and join channels, handle certificates; pretty much everything you need to run and support such a network. It's light years ahead of AWS. They even have a full CI/CD pipeline that you could replicate for your own project. If you look at their Marbles demo, you'll see what I mean.
Cello is definitely worth looking at, with the caveat that it's in incubation, meaning it's not real yet, not production-ready, and not really useful until it becomes a fully fledged product.

Amazon Inspector

I would like to use the AWS tool named in the title. It looks to me like there are two releases of this tool: one with an AWS agent installed on the EC2 instance, which allows tracking security issues, and a new one with some benchmarking and so on. I'm interested in the new one.
I've read the docs and set up a sample test environment, but it still looks a bit unclear to me. I understand that they use a public database of vulnerabilities, as well as benchmarking, i.e. testing against best practices.
The question is: how can I know that all of that is tested within the lowest 15-minute duration target? Or, in other words, if the time is short, what gets tested less?
Does anyone use this tool and would like to share knowledge and insights?
A report provided at the end of the testing gives you an overview of the scanning results. The results indicate which of your preselected resources have security issues.
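If it helps, the results can also be pulled programmatically. Here is a rough sketch against the classic Inspector API via boto3; the assessment-template ARN is a placeholder, and the run duration (including that 15-minute minimum you mention) is a property of the assessment template rather than of this code:

```python
# Rough sketch: kick off an Inspector assessment run and print its findings.
# The template ARN is a placeholder; the run duration is set on the template.
import boto3

inspector = boto3.client("inspector")

run = inspector.start_assessment_run(
    assessmentTemplateArn="arn:aws:inspector:us-east-1:111122223333:target/0-xxxx/template/0-yyyy",
    assessmentRunName="sample-run",
)
run_arn = run["assessmentRunArn"]

# ...poll describe_assessment_runs until the run completes, then:
finding_arns = inspector.list_findings(assessmentRunArns=[run_arn])["findingArns"]
if finding_arns:
    findings = inspector.describe_findings(findingArns=finding_arns)["findings"]
    for finding in findings:
        print(finding["severity"], "-", finding["title"])
```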

Can you get a cluster of Google Compute Engine instances that are *physically* local?

Google Compute Engine lets you get a group of instances that are semantically local in the sense that only they can talk to each other and all external access has to go through a firewall etc. If I want to run Map-Reduce or other kinds of cluster jobs that are going to induce high network traffic, then I also want machines that are physically local (say, on the same rack). Looking at the APIs and initial documentation, I don't see any way to request that; does anyone know otherwise?
There is no support in GCE right now for specifying rack locality. However, we built the system to work well in the face of large numbers of instances talking to each other in a fully connected way, as long as they are in the same zone.
This is one of the things that allowed MapR to approach the record for a Hadoop TeraSort. You can see it in action in the video of Craig McLuckie's talk from I/O:
https://developers.google.com/events/io/sessions/gooio2012/302/
The best way to find out is to test your application and see how it performs.
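Since the zone is the only locality knob you do control, one practical step is simply to pin every node of the cluster to the same zone when you create it. A rough sketch with the google-cloud-compute Python client (this is only an assumption about how you provision instances; the project, zone, image, and machine type are placeholders):

```python
# Rough sketch: create cluster workers pinned to a single zone so they at
# least share zone-level locality. All names and images are placeholders.
from google.cloud import compute_v1

def create_worker(project: str, zone: str, name: str) -> None:
    client = compute_v1.InstancesClient()
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/n1-standard-4",
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-12",
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )
    client.insert(project=project, zone=zone, instance_resource=instance).result()

for i in range(4):
    create_worker("my-project", "us-central1-a", f"mr-worker-{i}")
```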