How to protect against mistaken removal of my AWS resources? - amazon-web-services

What if I delete one of my AWS APIs, or any other AWS resource that took several weeks/months to build, by mistake? Or what if a malicious developer gets fraudulent access to my AWS account and decides to delete all my hard work?
Is there some kind of backup that AWS makes automatically to protect against these scenarios?

There is no blanket mechanism for backing up all resources in AWS for this scenario. You need to think through these scenarios and deploy your infrastructure accordingly.
Unfortunately this topic is too broad to cover in one answer.
You can prevent these accidental deletions by using IAM policies and SCPs (Service Control Policies); a sketch is shown below.
There are services like AWS Backup which can help you take backups of your persistent data resources.
Refer: https://aws.amazon.com/backup/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc
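To illustrate the SCP idea above, here is a minimal boto3 sketch that creates a deny policy for a few destructive API calls and attaches it to an OU. The actions, policy name, and OU ID are placeholders; real policies should be tailored to your environment.

```python
import json
import boto3

org = boto3.client("organizations")

# Hypothetical SCP that denies a few destructive API calls org-wide.
deny_deletes = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "ec2:TerminateInstances",
                "rds:DeleteDBInstance",
                "s3:DeleteBucket",
            ],
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="deny-destructive-actions",            # placeholder name
    Description="Block common destructive API calls",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(deny_deletes),
)

# Attach the SCP to an OU (the OU ID below is a placeholder).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",
)
```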

The way you have asked this question makes it difficult to answer. If you could provide a specific scenario, we might be able to assist better. Here are my tips, though:
You need to secure your environment with least privilege. Only enable the policies, access, processes, etc. that you are actually using.
Turn on monitoring with CloudWatch so you can properly monitor your servers.
Turn on two-factor authentication for your IAM accounts. Do not do this for root; only use root to fix any two-factor authentication issues.
Use snapshots, AMIs, etc. (see the sketch below).
Use version control for all of your code. Put your Lambda functions, policies, and any other code or scripts you write in Git so you don't lose them.
Enjoy
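For the snapshot tip above, a minimal boto3 sketch; the volume ID, description, and tags are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a point-in-time snapshot of an EBS volume (volume ID is a placeholder).
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Backup before risky changes",
    TagSpecifications=[
        {
            "ResourceType": "snapshot",
            "Tags": [{"Key": "Purpose", "Value": "backup"}],
        }
    ],
)
print("Started snapshot:", snapshot["SnapshotId"])
```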

Related

Separating Dev Prod Environments In AWS

In my scenario, I want to separate out the production environment from our development environments.
We'd like to only have our production systems on one AWS account and all other systems and services on another.
I'd like to split/separate for billing purposes. If I add more monitoring services, many of them charge by the number of running instances; I have considerably more running instances than I need to monitor, so I'd like the separation. I believe this would also make managing permissions easier in the future (e.g. Security Hub scores wouldn't be affected by LMS instances).
I'd like to split out all public-facing assets to a separate AWS account: RDS, all EC2 instances relating to prod-webserver (instances, target group, AMI, scaling, VPC, etc.), the S3 cloudfront.abc.com bucket, Jenkins, OpenVPN, and all Seoul assets.
Perhaps I could achieve the goal with Organizations or Control Tower as well. Could anyone please advise what would be best in my scenario? Is there a better alternative?
The fact that you want to split for billing purposes means you should use separate AWS accounts. While you could split some billing by tags within a single account, it's much easier to use multiple accounts to split the billing.
The typical split is Production / Testing / Development.
You can join the accounts together by using AWS Organizations, which gives some overall security controls.
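If you go multi-account, member accounts can also be created programmatically via Organizations. A minimal boto3 sketch, where the email and account name are placeholders:

```python
import boto3

org = boto3.client("organizations")

# Request a new member account (email and name are placeholders).
response = org.create_account(
    Email="prod-aws@example.com",
    AccountName="production",
)

# Account creation is asynchronous; poll the request status.
status = org.describe_create_account_status(
    CreateAccountRequestId=response["CreateAccountStatus"]["Id"]
)
print(status["CreateAccountStatus"]["State"])  # IN_PROGRESS / SUCCEEDED / FAILED
```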
Separating workloads and environments is considered a best practice in AWS according to the AWS Well-Architected Framework. Nowadays Control Tower (which builds upon AWS Organizations) is the standard for building multi-account setups in AWS.
Regarding multi-account setups, I recommend reading Organizing Your AWS Environment Using Multiple Accounts.
Also have a look at the open-source AWS Quickstart superwerker which sets up a well-architected AWS landing zone using AWS Control Tower, Security Hub, GuardDuty, and more.
AWS provides a lot of information about this topic, e.g. a very detailed whitepaper, Organizing Your AWS Environment, in which they say:
Using multiple AWS accounts to help isolate and manage your business applications and data can help you optimize across most of the AWS Well-Architected Framework pillars, including operational excellence, security, reliability, and cost optimization.
With separate accounts, you logically isolate all resources (unless you explicitly allow otherwise) and therefore ensure independence between, for example, the development environment and the production environment.
You should also take a look at Organizational Units (OUs).
The following benefits of using OUs helped shape the Recommended OUs and accounts and the Patterns for organizing your AWS accounts (see the sketch after this list):
Group similar accounts based on function
Apply common policies
Share common resources
Provision and manage common resources
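As a rough sketch of how OUs are created and accounts are grouped under them with boto3 (the OU name and account ID are placeholders):

```python
import boto3

org = boto3.client("organizations")

# The organization root is the parent for top-level OUs.
root_id = org.list_roots()["Roots"][0]["Id"]

# Create a "workloads" OU (name is a placeholder).
ou = org.create_organizational_unit(ParentId=root_id, Name="workloads")

# Move an existing member account into the new OU (account ID is a placeholder).
org.move_account(
    AccountId="111111111111",
    SourceParentId=root_id,
    DestinationParentId=ou["OrganizationalUnit"]["Id"],
)
```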
Control Tower is a tool which allows you to manage all your AWS accounts in one place. You can apply policies per account or OU, or deny entire regions. You can use the Account Factory to create new accounts based on blueprints.
But you still need to build up a lot of knowledge about these tools and best practices, because they are just that: best practices and recommendations you can use to get started and build a good foundation. They are not something you can rely on blindly, because you may have individual factors.
So understanding those factors and their consequences is very important.

AWS Pen test - vulnerability scanning

I am trying to find out if it is correct to say that in AWS we can only perform vulnerability scanning for EC2 instances.
From my research, it seems like there can be pen tests on other AWS services, but vulnerability scanning seems to be focused on EC2 (https://aws.amazon.com/security/penetration-testing/). If so, would it be safe to assume that vulnerability scans can only focus on EC2 instances, while periodic pen tests can also cover the AWS services listed in the link above?
Any help is appreciated.
You are correct in seeking out pentesting which goes beyond EC2. However, the type of testing (if any) is highly dependent on which specific services you use.
It's very common that pentests do not cover all services only because they are improperly scoped. Not all AWS services will be relevant to a penetration test, but some may be critical. Here are some worthwhile misconfigurations to consider:
S3 - Buckets have their own access controls and a unique API. Without insight into bucket names and some AWS expertise, a pentester cannot determine if they are misconfigured. It is fairly common for buckets to allow access to AllUsers, which is very dangerous.
RDS - You should make sure that databases are not publicly accessible from the internet (for obvious reasons).
Cognito, SNS, SQS - If you are pentesting an application, you will need to take a close look at the permissions and configuration of authentication and messaging services (if they are in use). Misconfigurations here can allow someone to self-enroll in applications they shouldn't.
It would be worthwhile to spend some time evaluating each service and getting an understanding of its attack surface. Here's an AWS pentesting guide for reference.
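As a quick configuration sanity check for the S3 and RDS points above, here is a minimal boto3 sketch; it only inspects ACLs and the PubliclyAccessible flag, so it is no substitute for an actual pentest:

```python
import boto3

s3 = boto3.client("s3")
rds = boto3.client("rds")

# Flag buckets whose ACL grants access to AllUsers (i.e. the public).
for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        uri = grant["Grantee"].get("URI", "")
        if uri.endswith("/global/AllUsers"):
            print("Public ACL on bucket:", bucket["Name"])

# Flag RDS instances that are reachable from the internet.
for db in rds.describe_db_instances()["DBInstances"]:
    if db["PubliclyAccessible"]:
        print("Publicly accessible DB instance:", db["DBInstanceIdentifier"])
```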

Access for developers to AWS

I need to allow developers to access resources on my AWS account.
They will be launching instances and RDS databases, and possibly some other resources.
What is the best way to achieve this?
IAM roles seem complicated with policies.
Should I launch the instances myself and then give them SSH access?
What are your suggestions?
Thank you!
You should create an IAM User for each developer. Put them in an IAM Group and assign permissions to the Group.
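A minimal boto3 sketch of that setup; the group name, user name, and attached managed policy are placeholders, and in practice you would scope the permissions much more tightly:

```python
import boto3

iam = boto3.client("iam")

# Create a group and attach a managed policy to it (the policy choice is a placeholder;
# in practice you would write a least-privilege policy instead).
iam.create_group(GroupName="developers")
iam.attach_group_policy(
    GroupName="developers",
    PolicyArn="arn:aws:iam::aws:policy/PowerUserAccess",
)

# Create a user and add them to the group; permissions come from the group.
iam.create_user(UserName="alice")
iam.add_user_to_group(GroupName="developers", UserName="alice")
```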
However, this assumes that you are willing to trust them in your account, for which you should think twice. If you give them permissions to launch services, they might launch more than necessary, causing extra expense. If you give them permission to delete resources, they might accidentally delete resources being used by other people.
If they are just "playing around" with AWS to get an idea of what can be done, create a sandbox account where they can't do much harm. Keep this separate from your production account, where you'll keep resources that you don't want destroyed.
Or, if you just want them to develop software and not play with AWS directly, then do as you suggested and create the resources yourself, but give them access for software development purposes.
Bottom line: It all depends on what the developers want to do and what you're willing to let them do.
If it is a small environment, you can give SSH access to developers.
But if the infra is pretty big, then I prefer to go with IAM.

Which AWS services for mobile app backend?

I'm trying to figure out what AWS services I need for the mobile application I'm working on with my startup. The application we're working on should go into the App/Play Store later this year, so we need a "best-practice" solution for our case. It must be highly scalable, so if there are thousands of requests to the server it should remain stable and fast. We may also want to deploy a website on it.
Currently we are using Uberspace (link) servers with a Node.js application and MongoDB running on them. Everything works fine, but for the release version we want to go with AWS. What we need is something we can run Node.js / MongoDB (or something similar to MongoDB) on, and something to store images like profile pictures that can be requested by the user.
I have already read some information about AWS on their website, but that didn't help a lot. There are so many services and we don't know which of these fit our needs.
A friend told me to just use AWS EC2 for the Node.js server + MongoDB and S3 to store images, but on some websites I have read that it is better to use this architecture:
We would be glad if there is someone who can share his/her knowledge with us!
To run code: you can use Lambda, but be careful. The benefit is that you don't have to worry about servers; the downside is that Lambda is sometimes unreasonably slow. If you need it really fast then you need EC2 with auto-scaling. If you tune it up properly it works like a charm.
To store data: DynamoDB if you want it really fast (single-digit milliseconds regardless of load and DB size) and you follow best practices. It REQUIRES a proper schema or it will cost you a fortune; otherwise use MongoDB on EC2.
If you need an RDBMS, then RDS (benefits: scalability, availability, no headache with maintenance).
Cache: they have both Redis and Memcached.
S3: to store static assets.
I do not suggest CloudFront; there are other CDNs on the market with better prices/capabilities.
API Gateway: yes, if you have an API.
Depending on your app, you may need SQS.
Cognito is a good service if you want to authenticate your users using Google/Facebook/etc.
CloudWatch: if you're a metrics addict then it's not for you, and perhaps standalone monitoring on EC2 will be better. But for most people CloudWatch is absolutely OK. Create all necessary alarms (CPU overload, etc.).
You should use roles to allow access to your S3/DB from Lambda and other AWS services.
You should not use the root account but create a separate user instead.
Create a billing alarm: you'll know if you're going to break your budget (see the sketch below).
Create Lambda functions to back up your EBS volumes (and whatever else you may need to back up). There's no problem if the backup starts a second later, so Lambda is OK here.
Run Trusted Advisor now and then.
It'd be better for you to set this up using a CloudFormation stack: you'll be able to deploy the same infrastructure with ease in another region if/when needed, and it's relatively easier to manage infrastructure as code than infrastructure built manually.
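For the billing alarm mentioned above, a minimal boto3 sketch; the threshold and SNS topic ARN are placeholders, and billing metrics are only available in us-east-1:

```python
import boto3

# Billing metrics are only published to CloudWatch in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-budget-alarm",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                      # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=100.0,                   # placeholder budget in USD
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111111111111:billing-alerts"],  # placeholder topic
)
```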
If you want a highly scalable application, you may need to use a serverless architecture with AWS Lambda.
There is a framework called Serverless that helps you manage and organize all your Lambda functions and put them behind API Gateway.
For storage you can use AWS EC2 and install MongoDB, or you can go with AWS DynamoDB as your NoSQL storage (see the sketch below).
If you want a frontend, both web and mobile, you may want to look at the React Native approach.
I hope I've been helpful.
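If you do go with DynamoDB, a minimal boto3 sketch of creating a table; the table name and key schema are placeholders and should be designed around your actual access patterns:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Create an on-demand table keyed by user ID (names are placeholders).
dynamodb.create_table(
    TableName="app-users",
    AttributeDefinitions=[{"AttributeName": "user_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "user_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",     # no capacity planning needed
)

# Wait until the table is ready before writing to it.
dynamodb.get_waiter("table_exists").wait(TableName="app-users")
```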

Take backup of AWS configuration across all services

Having spent a couple of days setting up and configuring a new AWS account I would like to grab an export of the account configuration across all services. I've Googled around for existing scripts, etc, but have yet to find anything that would automate this process.
Primarily this would be as a backup in case the account was corrupted in some way (including user error!), but it would also be useful for documenting the system.
From an account administration perspective, there are parts of the AWS console that don't display friendly names for resources; being able to cross-reference against offline documentation would simplify these scenarios. For example, friendly names for VPCs and subnets aren't always displayed when configuring resources to use them.
Lastly I would like to be able to use this to spot suspicious changes to the configuration as part of intrusion detection. For example, looking out for security group changes to protected resources.
To clarify, I am looking to backup the configuration of AWS resources, not the actual resources themselves. Resource backups (e.g. EC2 instances) is already covered.
The closest I've seen to that is CloudFormer.
That would create a CloudFormation template from your account's resources. Bear in mind that this template would only be a starting point, not meant to be reproducible out of the box. For example, it won't log into your instances or anything like that.
As for the intrusion detection part, see CloudTrail.
Check out AWS Config: https://aws.amazon.com/config/
AWS Config records the configuration of AWS resources automatically, allowing you to query and react to configuration changes. As AWS Config stores data on S3, that is probably enough backup, but you can also sync the bucket elsewhere for paranoid redundancy.
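As a sketch of how you could pull recorded configuration back out of AWS Config with boto3 (assumes a Config recorder is already enabled; the resource ID is a placeholder):

```python
import boto3

config = boto3.client("config")

# Fetch the recorded configuration history of one security group (ID is a placeholder).
history = config.get_resource_config_history(
    resourceType="AWS::EC2::SecurityGroup",
    resourceId="sg-0123456789abcdef0",
    limit=10,
)

for item in history["configurationItems"]:
    print(item["configurationItemCaptureTime"], item["configurationItemStatus"])
```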