Amazon Inspector - amazon-web-services

I would like to use the AWS tool named in the title. It looks to me like there are two releases of this tool: one with an AWS agent installed on the EC2 instance that allows tracking security issues, and a newer one with benchmarking and so on. I'm interested in the newer one.
I've read the docs and set up a sample test environment, but it still looks a bit unclear to me. I understand that it uses a public database of vulnerabilities, as well as benchmarking, or testing against best practices.
The question is: how can I know that all of that is tested within the shortest 15-minute target? Or, in other words, if time is short, what is tested less?
Does anyone use this tool and would like to share knowledge and insights?

A report provided at the end of the testing gives you an overview of the scanning results. The results indicate which of your preselected resources have security issues.
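To see what a scan actually covered (and therefore what may have been skipped when the window is short), you can also query the service directly. Below is a minimal, hedged sketch assuming the newer Amazon Inspector (the inspector2 API in boto3) and configured AWS credentials; the severity filter and result limits are just examples, so double-check the call names against the current boto3 docs.

    # Hedged sketch: check what Amazon Inspector (v2) covered and list findings.
    # Assumes boto3 is installed and AWS credentials/region are configured.
    import boto3

    client = boto3.client("inspector2")

    # Which resources were scanned, and did any scans fail or get skipped?
    coverage = client.list_coverage(maxResults=50)
    for resource in coverage.get("coveredResources", []):
        print(resource["resourceType"], resource["resourceId"],
              resource.get("scanStatus", {}).get("statusCode"))

    # High-severity findings for the scanned resources.
    findings = client.list_findings(
        filterCriteria={"severity": [{"comparison": "EQUALS", "value": "HIGH"}]},
        maxResults=50,
    )
    for finding in findings.get("findings", []):
        print(finding["severity"], finding["title"])

Resources whose scan status is not "ACTIVE" are the ones that were not fully assessed, which is a rough way to answer the "what is less tested" question.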

Related

Usefulness of IaaS Provisioning tools like Terraform?

I have a quick point of confusion regarding the whole idea of "Infrastructure as Code" (IaC) provisioning with tools like Terraform.
I've been working on a team recently that uses Terraform to provision all of its AWS resources, and I've been learning it here and there and admit that it's a pretty nifty tool.
Besides Infrastructure as Code being a "cool" alternative to manually provisioning resources in the AWS console, though, I don't understand why it's actually useful.
Take, for example, a typical deployment of a website with a database. After my initial provisioning of this infrastructure, why would I ever even need to run Terraform again? With everything I need already provisioned in my AWS account, what are the use cases in which I'll need to "reprovision" this infrastructure?
Under this assumption, the process of provisioning everything I need is front-loaded to begin with, so why should I bother learning these tools when I can just click some buttons in the AWS console when I'm first deploying my website?
Honestly I thought this would be a pretty common point of confusion, but I couldn't seem to find clarity elsewhere so I thought I'd ask here. Probably a naive question, but keep in mind I'm new to this whole philosophy.
Thanks in advance!
Manual provisioning, in the long term, is slow, non-reproducible, troublesome, not self-documenting, and difficult to do in teams.
With tools such as Terraform or CloudFormation you get the following benefits:
You can apply the same development principles you use when writing traditional code. You can use comments to document your infrastructure, and you can track all changes and who made them using a version control system (e.g., git).
You can easily share your infrastructure architecture. Your VPC and ALB don't work? Just post your Terraform code to SO or share it with a colleague for review. It's much easier than sharing screenshots of a VPC and ALB configured by hand.
It's easy to plan for disaster recovery and global applications. You just deploy the same infrastructure in different regions automatically; doing the same manually in many regions would be difficult.
Separation of dev, staging, and prod infrastructure. You just re-use the same infrastructure code across different environments, and a change to the dev infrastructure can easily be ported to prod.
You can inspect changes before actually performing them. Manual upgrades to your infrastructure can have disastrous effects due to a domino effect: changing one thing can change or break many other components of your architecture. With infrastructure as code, you can preview the changes and understand their implications before you actually apply them (see the sketch after this list).
Teamwork. You can have many people working on the same infrastructure code, proposing changes, testing, and reviewing.
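To make the environment-reuse and preview points concrete, here is a minimal, hedged sketch of the preview-before-apply workflow with one codebase shared across environments. Workspaces are just one common way to separate environments (separate variable files or directories work too); the sketch assumes the Terraform CLI is installed, the current directory contains the shared configuration, and the "dev"/"prod" workspaces were created once beforehand. Nothing changes until terraform apply is run on a reviewed plan.

    # Hedged sketch: one shared Terraform codebase, previewed per environment.
    # Assumes the Terraform CLI is on PATH and the workspaces already exist
    # (created once with "terraform workspace new dev"). Names are illustrative.
    import subprocess

    def preview(environment: str) -> None:
        """Select the environment's workspace and write a reviewable plan file."""
        subprocess.run(["terraform", "init", "-input=false"], check=True)
        subprocess.run(["terraform", "workspace", "select", environment], check=True)
        # "plan" only previews the changes; nothing is modified until "apply".
        subprocess.run(["terraform", "plan", f"-out={environment}.tfplan"], check=True)

    if __name__ == "__main__":
        preview("dev")   # same code, dev environment
        preview("prod")  # same code, prod environment; apply only after review

A reviewer can then inspect each saved plan (for example with terraform show) before anything is applied.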
I really like @Marcin's answer.
Here are some additional points from my experience:
Beyond the version control benefits, you can not only see the history and authors and perform code review, but also treat infrastructure changes as product features. Let's say, for example, you're adding CDN support to your application, so you have to make changes to your infrastructure (to provision a cloud CDN service), to the application (to actually support and work with the CDN), and to your pipelines (to deliver static assets to the CDN, if you're using that approach). If all the changes related to this new feature live in one single branch, all feature-related changes are transparent to everyone on the team and can easily be tracked down later.
Another thing related to version control is the ability to easily provision and destroy infrastructure for review apps semi-automatically, using triggers and the capabilities of your CI/CD tools, for automated and manual testing. It's even possible to run automated tests against changes in your infrastructure declaration, as in the sketch below.
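For example, here is a minimal, hedged sketch of such an automated check: a CI test that fails if a proposed change would destroy any resource. It assumes the pipeline has already produced a saved plan (terraform plan -out=ci.tfplan) and that the Terraform CLI and pytest are available; the file name and the rule being enforced are illustrative.

    # Hedged sketch: a CI test that fails if a Terraform change would destroy anything.
    # Assumes "terraform plan -out=ci.tfplan" already ran earlier in the pipeline.
    import json
    import subprocess

    def planned_actions(plan_file: str = "ci.tfplan") -> list[tuple[str, list[str]]]:
        """Return (resource address, actions) pairs from the saved plan."""
        raw = subprocess.run(
            ["terraform", "show", "-json", plan_file],
            check=True, capture_output=True, text=True,
        ).stdout
        plan = json.loads(raw)
        return [(c["address"], c["change"]["actions"])
                for c in plan.get("resource_changes", [])]

    def test_no_resources_destroyed():
        destroyed = [addr for addr, actions in planned_actions() if "delete" in actions]
        assert not destroyed, f"plan would destroy: {destroyed}"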
If you're working on multiple similar projects, or if your project requires multiple similar but mutually isolated environments, IaC can save countless hours of provisioning and tracking everything down. It's not always a silver bullet, but in almost all cases it helps save time and avoid most accidental mistakes.
Last but not least, it helps with seeing the bigger picture if you're working with hybrid or multi-cloud environments. It's not as good as infrastructure diagrams, but diagrams might not always be up to date, unlike your code.

How to use CodeStar in a serverless project with a microservice architecture?

I'm totally new to the AWS serverless architecture.
I was trying to lay out the project architecture, and I read about AWS CodeStar and how it can easily create new projects using templates for AWS Lambda with Python (which is my case).
But I don't know whether I should:
1. Generate one project (the main project) with AWS CodeStar and then create separate folders for every microservice I have (UsersService, ContactService, etc.), OR
2. Generate every microservice via AWS CodeStar, so that each service is a separate CodeStar project for my Lambdas?
Maybe it's a very stupid question for some of you; any help or useful links are welcome.
Thanks
This is generally your decision about how you deploy, although I feel the general consensus will be option 2. I'll try to explain why.
Option 1 is what you would call a monolith, meaning everything for your app is in one place. This might initially seem great, but it has a few limitations, which I've detailed below:
All-or-nothing deployments: if you update a tiny part of the app, you need to deploy everything.
It leads to coupling between unrelated components; the design pattern generally leads to overlapping changes that can break other parts of your stack.
It's harder to scale: you generally scale larger chunks (i.e., not the search and booking services independently, but everything together).
You can mitigate these issues, but it can be a bit of a headache.
The second option leads more towards a microservice/decoupled architecture.
Some of the benefits of this method are:
Only deploy the changes you've made: if the search service changes, deploy only that.
Easier to scale infrastructure to meet specific demand.
Easier to implement functional testing of individual components.
Restrict access to users who develop specific components.
Option 2 is your microservice-based repository setup, so I would suggest using this.
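To illustrate the "only deploy what changed" point, here is a minimal, hedged sketch that pushes a new build of a single service's Lambda while leaving every other service untouched. The function name and zip path are purely illustrative; in practice a per-service CodeStar/SAM/CDK pipeline would perform this step for you.

    # Hedged sketch: redeploy one microservice's Lambda without touching the rest.
    # Function names and zip paths are illustrative, not from the question.
    import boto3

    def deploy_service(function_name: str, zip_path: str) -> None:
        """Push a new build of one service; other services stay untouched."""
        with open(zip_path, "rb") as f:
            boto3.client("lambda").update_function_code(
                FunctionName=function_name,
                ZipFile=f.read(),
            )

    if __name__ == "__main__":
        # Only the users service changed, so only it gets redeployed.
        deploy_service("users-service", "build/users_service.zip")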
I don't have enough reputation to comment, so I will post this as an answer.
What you are asking is a software architecture question: whether or not to use a monorepo vs. a polyrepo. You've already made the decision about microservices, so this is not a monolith.
The answer is... it depends. There is no general consensus. Just do a search on monorepo vs. polyrepo (or multirepo) and be prepared to go down the rabbit hole.
Being a serverless application should have no bearing on the type of structure you decide on. However, CodeStar may have some limitations that make it more difficult to use a monorepo. I'm using the CDK for that.
Here are a couple of articles to get you started:
https://medium.com/@mattklein123/monorepos-please-dont-e9a279be011b
https://medium.com/@adamhjk/monorepo-please-do-3657e08a4b70
Here is another that pertains directly to serverless applications:
https://lumigo.io/blog/mono-repo-vs-one-per-service/
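As a concrete illustration of the CDK approach mentioned above, here is a minimal, hedged sketch of a monorepo-style CDK app with one stack per microservice. It assumes aws-cdk-lib v2 for Python; the service names, code directories, handler, and runtime are all illustrative.

    # Hedged sketch: one CDK app, one stack per microservice, in a single repo.
    # Assumes "pip install aws-cdk-lib constructs"; names/paths are illustrative.
    from aws_cdk import App, Stack, aws_lambda as lambda_
    from constructs import Construct

    class ServiceStack(Stack):
        """A small stack wrapping one microservice's Lambda function."""
        def __init__(self, scope: Construct, stack_id: str, *, code_dir: str) -> None:
            super().__init__(scope, stack_id)
            lambda_.Function(
                self, "Handler",
                runtime=lambda_.Runtime.PYTHON_3_11,
                handler="app.handler",
                code=lambda_.Code.from_asset(code_dir),
            )

    app = App()
    ServiceStack(app, "UsersService", code_dir="services/users")
    ServiceStack(app, "ContactService", code_dir="services/contact")
    app.synth()

Each stack can still be deployed independently (cdk deploy UsersService), which keeps the per-service deployment property even in a monorepo.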

Is there a way to discover VMs using Terraform?

Infrastructure team members are creating, deleting, and modifying resources in a GCP project using the console. The security team wants to scan the infra and check whether proper security measures have been taken care of.
I am trying to create a Terraform script which will:
1. Take a project ID as input and list all instances of the given project.
2. Loop over all the instances and check whether the security controls are in place.
3. If any security control is missing, the Terraform script will modify the resource (VM).
I have to repeat the same steps for all resources available in the project, like subnets, Cloud Storage buckets, firewalls, etc.
As per my initial investigation, to do such a task we would have to import the resources into Terraform using the "terraform import" command, and after that we would have to think about loops.
Now it looks like using the GCP APIs is the best fit for this task, as Terraform does not seem to be a good choice for this kind of task, and I am not sure whether it is even achievable with Terraform.
Can somebody provide some direction here?
Curious if by "console" you mean the GCP console (i.e., by hand), because if you are not already using Terraform to create the resources (and do not plan to in the future), then Terraform is not the correct tool for what you're describing. I'd actually argue it increases the complexity.
Mostly because:
The import feature is not intended for this kind of use case, and we still find regular issues with it. Maybe once for a few resources, but not for entire environments, and not without it becoming the future source of truth. Projects such as terraforming do their best but still face wild-west issues in complex environments. Not all resources even support importing.
Terraform will not tell you anything about the VMs that you wouldn't know from the GCP CLI already. If you need more information to make an assessment about the controls, then you will need to use another tool or some complicated provisioners. Provisioners, at best, would end up being a wrapper around other tooling you could probably use directly.
Honestly, I'm worried your team is trying to avoid the pain of converting older practices to IaC. It's uncomfortable and challenging, but it yields better fruit in the long run than the path you're describing.
I digress; if you do have infra created via Terraform, then I'd invest more time in some other practices that can accomplish the same results. Some options are: 1) enforce best practices via parent modules that security has "blessed", 2) implement some CI on your Terraform, 3) AWS has Config and Systems Manager; I'm not sure if GCP has an equivalent, but I would look around. It's also worth evaluating different technologies for different layers of abstraction. What checks your OS might be different from what checks your security groups, and that's OK. Knowing is half the battle and might make for a saner first version than automatic remediation.
With or without Terraform, there is an ecosystem of both products and open-source projects that can help with compliance or control enforcement. Take a look at tools like InSpec, Sentinel, or SaltStack for inspiration.
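If you do go the route the question suggests and query the GCP API directly, steps 1 and 2 can be quite short. Below is a minimal, hedged sketch that lists a project's VMs and flags one example control (instances with external IPs); it assumes the google-cloud-compute Python client and application-default credentials, and the project ID and the chosen control are purely illustrative.

    # Hedged sketch: list a project's VMs and flag one example control
    # (instances with external IPs) directly via the GCP API.
    # Assumes "pip install google-cloud-compute" and default credentials.
    from google.cloud import compute_v1

    def instances_with_external_ips(project_id: str) -> list[str]:
        flagged = []
        client = compute_v1.InstancesClient()
        # aggregated_list walks every zone in the project.
        for zone, scoped in client.aggregated_list(project=project_id):
            for instance in scoped.instances or []:
                for nic in instance.network_interfaces:
                    if nic.access_configs:  # an access config implies an external IP
                        flagged.append(f"{zone}/{instance.name}")
        return flagged

    if __name__ == "__main__":
        print(instances_with_external_ips("my-gcp-project"))

The same pattern (list, check, report or remediate) extends to subnets, buckets, and firewall rules via their respective clients.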

Deploy Hyperledger on AWS - production setup

My company is currently evaluating Hyperledger (Fabric), and we're using it for our POC. It looks very promising, and we're targeting a rollout to production in the next few months.
We're targeting AWS as our production environment.
However, we're struggling to find good tutorials/practices/recommendations about operating a Hyperledger network in such an environment.
I'm aware that Cello aims to solve/ease deploying and monitoring a Hyperledger network, but I also read that it's not production-ready yet. The question is, should we even consider looking at Cello at this point?
If not, what are our alternatives? Docker Swarm, Kubernetes?
I also didn't find information about recommended instance types. I understand this is application- and AWS-specific, but what are the minimal system requirements (memory, CPU, network), for example, for a 'peer' node? (Our application is not network-intensive, and not many transactions will be submitted per hour/day, only a few per day.)
Another question is where to create those instances on AWS from a geographical and decentralization point of view. Does it make sense for all of them to be created in the same region, or must we create instances running in different regions?
Thanks a lot.
Igor.
Yes, look at Cello; if nothing else, it will help you see the AWS deployment model.
It's really nothing special:
Design the desired system: peers, orderer, gateways, etc.
Then decide how many EC2 instances you need to support that.
As for WHERE (region), it depends on where the connecting application is and what kind of fault tolerance you need for your business model.
One of the businesses I am working with wants a minimum of 99.99999% availability, so multi-region is critical. It's just another EC2 instance with sockets open from different hosts.
AWS doesn't provide much in terms of support for Hyperledger. They have some templates which allow you to set up the VMs initially, but that's stuff you can do yourself as well.
You are right, the documentation is very light and most of the time confusing. I got to the point where I can start from scratch with a brand-new VM, get everything ready, deploy my own network definition and chaincode, and have the scripts to do that.
IBM Cloud, however, has much better support for Hyperledger. You can design your network visually, download your connection profiles, deploy and instantiate chaincode, create and join channels, and handle certificates: pretty much everything you need to run and support such a network. It's light years ahead of AWS. They even have a full CI/CD pipeline that you could replicate for your own project. If you look at their Marbles demo, you'll see what I mean.
Cello is definitely worth looking at, with the caveat that it's in incubation, meaning it's not real yet, not production-ready, and not really useful until it becomes a fully fledged product.

Help emulating Heroku, GAE, etc.: Building a web service privately (PaaS)

I'm not the only one with this question, but haven't found a lot of information in my research so far, so help me out.
We are a small IT crowd in an organization. We're looking to build a small, private service that would emulate a Heroku/GAE workflow. The basics of this: deploy an app as a git repository and have it scale in a 'cloud' environment. Basically, a platform as a service (PaaS).
Pretend we are amateur PMs, programmers, and sysadmins tasked with this. What would you recommend? We know generally what is needed: some sort of routing, a database, caching, authentication, etc. What other tools do we need?
We would prefer tools along a Ruby/Python/Haskell/Erlang dimension, on a Linux/BSD stack, with Postgres databases (CouchDB or Cassandra in the future). We are not touching anything in the MS/.NET area and nothing on the JVM (we've looked at SteamCannon, but no; Scala and Clojure tools are not entirely out of the question). We have a basic grasp of bootstrapping a cloud (e.g., Eucalyptus) to build on. We have an understanding of the basics of server admin, and physical infrastructure limitations aren't a factor right now.
We're not looking for why gaerokuyardspace is the best choice, a list of such services, why we should ditch our plans for one of these services, or an argument against this plan. For this situation, the decision has been made that the cost of building privately is more attractive than the cost of deploying elsewhere. We already know the why and how of these services. We're looking to emulate them and build upon them for private needs.
A short list of tools to be expanded:
Beehive
Steamcannon
Gitosis/Gitolite
?
Basically, I'd like to generate a list of tools for building a Heroku/GAE-like service at a small, private, definitely experimental/toy level.
I don't know that it will meet all of your stated needs today, but you should take a look at Cloud Foundry from VMware. You can check the FAQ for the commercial project or look into the open-source version that you can host and manage yourself.
Some combination of Cloud Foundry (above), Gitolite, and Fabric will probably do well for you. Any such solution will take some time to get right.
(Disclaimer: I'm a lead developer on the AppScale project)
AppScale is pretty much right up your alley, especially if you're looking to run Google App Engine apps in your own private cloud. It's open source, so grab it and extend it if there are other types of apps you want to support (and definitely commit it back to us if you do).