Virtually isolate the network in the same AWS Cloud account

I am new to AWS Cloud services.
I was assigned a project to prepare a new environment in the cloud, to which my team will later migrate their applications. The stakeholders have come up with some technical and business requirements:
They are concerned about the security of the environment, so they have decided to virtually isolate their network from the rest of the customers and from the other environments in the same AWS Cloud account.
Which AWS Cloud service could I use to implement this requirement?
Please let me know if I need to provide more details.
Thank you in advance.

First of all, I would question why the stakeholders would assign someone with very little AWS experience the task of creating a secure network from scratch, and then reveal they are concerned about how secure it will be. (Nothing personal against you; it just seems like a strange approach.)
Secondly, this is a deep topic, with multiple answers depending upon the specifics of your Technical and Business requirements...
From what I can gather, at a high level you're trying to implement a multi-VPC setup in a single AWS Account.
In short, there are too many scenarios to go into for a StackOverflow answer. The best advice I could give would be to seek advice from an AWS networking/security architect (or consultant) if that is an option for you. They should be able to review your requirements in detail and formulate an appropriate solution.
I'll give you an idea of the sorts of services/resources you should be looking to read up on if you want to implement a secure multi-VPC network in AWS:
VPC peering connections or Transit Gateway to handle routing between VPCs
Network ACLs (NACLs) to control traffic into and out of the subnets in your VPCs (stateless rules, evaluated in order)
Security Groups to control traffic into and out of the instances in your VPCs (stateful rules)
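To make the VPC option concrete, here is a minimal boto3 sketch; the region, CIDR blocks, and names are illustrative placeholders, not from the question. A new VPC exchanges no traffic with other VPCs in the account until you explicitly connect it:

```python
# Minimal sketch: create an isolated VPC with one private subnet.
# Region, CIDR blocks, and the Name tag are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# A new VPC exchanges no traffic with other VPCs or the internet
# until you attach an internet gateway, peering, or Transit Gateway.
vpc_id = ec2.create_vpc(CidrBlock="10.20.0.0/16")["Vpc"]["VpcId"]
ec2.create_tags(
    Resources=[vpc_id],
    Tags=[{"Key": "Name", "Value": "team-isolated-vpc"}],
)

# A subnet with no route to an internet gateway stays private.
ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.20.1.0/24")
```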

Separating Dev Prod Environments In AWS

In my scenario, I want to separate out the production environment from our development environments.
We'd like to only have our production systems on one AWS account and all other systems and services on another.
I'd like to split/separate for billing purposes. If I add more monitoring services, many charge by the number of running instances; I have considerably more running instances than I need to monitor, so I'd like the separation. This would also make managing permissions a lot easier in the future, I believe (e.g., Security Hub scores wouldn't be affected by LMS instances).
I'd like to split out all public-facing assets to a separate AWS account: RDS, all EC2 instances relating to prod-webserver (instances, target group, AMI, scaling, VPC, etc.), the S3 cloudfront.abc.com bucket, Jenkins, OpenVPN, and all Seoul assets.
Perhaps I could achieve the goal with Organizations or Control Tower as well. Could anyone please advise what would be best in my scenario? Is there a better alternative?
The fact that you want to split for billing purposes means you should use separate AWS accounts. While you could split some billing by tags within a single account, it's much easier to use multiple accounts to split the billing.
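For comparison, here is a hedged sketch of what the tag-based approach looks like via the Cost Explorer API; the "Environment" tag key and the dates are made up, and this only works if the tag has been activated as a cost allocation tag and applied consistently:

```python
# Hypothetical sketch: split one account's bill by an "Environment" tag.
# Assumes the tag is activated as a cost allocation tag in Billing.
import boto3

ce = boto3.client("ce")  # Cost Explorer

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "Environment"}],
)

# One group per tag value (prod, dev, ...), plus one for untagged spend.
for group in resp["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```

Untagged resources fall into a catch-all group, which is exactly why separate accounts are the cleaner split.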
The typical split is Production / Testing / Development.
You can join the accounts together by using AWS Organizations, which gives some overall security controls.
Separating workloads and environments is considered a best practice in AWS according to the AWS Well-Architected Framework. Nowadays Control Tower (which builds upon AWS Organizations) is the standard for building multi-account setups in AWS.
Regarding multi-account setups, I recommend reading the whitepaper Organizing Your AWS Environment Using Multiple Accounts.
Also have a look at the open-source AWS Quickstart superwerker which sets up a well-architected AWS landing zone using AWS Control Tower, Security Hub, GuardDuty, and more.
AWS provides a lot of information about this topic, e.g. a very detailed whitepaper, Organizing Your AWS Environment Using Multiple Accounts, in which they say:
"Using multiple AWS accounts to help isolate and manage your business applications and data can help you optimize across most of the AWS Well-Architected Framework pillars, including operational excellence, security, reliability, and cost optimization."
With separate accounts, you logically isolate all resources by default (unless you explicitly share something), and therefore ensure independence between, for example, the development environment and the production environment.
You should also take a look at Organizational Units (OUs). The following benefits of using OUs helped shape the whitepaper's Recommended OUs and accounts and Patterns for organizing your AWS accounts sections:
Group similar accounts based on function
Apply common policies
Share common resources
Provision and manage common resources
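If you go the Organizations route, a hedged boto3 sketch of grouping accounts into OUs and creating a dedicated production account might look like this; the OU names and the email address are placeholders:

```python
# Hypothetical sketch: group accounts by function with OUs and create a
# dedicated production account. Names and the email are placeholders.
import boto3

org = boto3.client("organizations")

# OUs hang off the organization's root.
root_id = org.list_roots()["Roots"][0]["Id"]
workloads = org.create_organizational_unit(ParentId=root_id, Name="Workloads")
prod_ou = org.create_organizational_unit(
    ParentId=workloads["OrganizationalUnit"]["Id"], Name="Prod"
)

# Account creation is asynchronous; poll describe_create_account_status
# with the returned Id to see when it completes.
status = org.create_account(
    Email="aws-prod@example.com", AccountName="production"
)["CreateAccountStatus"]
print(status["State"])
```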
Control Tower is a tool which allows you to manage all your AWS accounts in one place. You can apply policies per account or OU, or deny entire regions, and you can use the Account Factory to create new accounts based on blueprints.
But you still need to build up a lot of knowledge about these tools and best practices, because they're just that: best practices and recommendations you can use to get started and build a good foundation, not something you can rely on blindly, because you may have individual factors.
So understanding these factors and their consequences is very important.

AWS Pen test - vulnerability scanning

I am trying to find out if it is correct to say that in AWS we can only perform vulnerability scanning on EC2 instances.
From my research, it seems like there can be pen tests on other AWS services, but vulnerability scanning seems to be focused on EC2 (https://aws.amazon.com/security/penetration-testing/). If so, would it be safe to assume that vulnerability scans can only focus on EC2 instances, while periodic pen tests can cover the other AWS services listed in the link above?
Any help is appreciated.
You are correct in seeking out pentesting which goes beyond EC2. However, the type of testing (if any) is highly dependent on which specific services you use.
It's very common that pentests do not cover all services simply because they are improperly scoped. Not all AWS services will be relevant to a penetration test, but some may be critical. Here are some worthwhile misconfigurations to consider:
S3 - Buckets have their own access controls and a unique API. Without insight into bucket names and AWS expertise, a pentester cannot determine whether they are misconfigured. It is fairly common for buckets to allow access to AllUsers, which is very dangerous (see the sketch after this list).
RDS - You should make sure that databases are not publicly accessible from the internet (for obvious reasons).
Cognito, SNS, SQS - If you are pentesting an application, you will need to take a close look at the permission and configuration of authentication and messaging services (if they are in use). Misconfigurations here can allow someone to self-enroll in applications they shouldn't.
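As a hypothetical illustration of the kind of checks a properly scoped review would automate for the S3 and RDS items above (assuming read-only credentials), consider:

```python
# Hypothetical sketch: flag public S3 bucket ACLs and publicly
# accessible RDS instances. Assumes read-only credentials.
import boto3

s3 = boto3.client("s3")
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    for grant in acl["Grants"]:
        if grant.get("Grantee", {}).get("URI") == ALL_USERS:
            print(f"{bucket['Name']}: grants {grant['Permission']} to AllUsers")

rds = boto3.client("rds")
for db in rds.describe_db_instances()["DBInstances"]:
    if db.get("PubliclyAccessible"):
        print(f"{db['DBInstanceIdentifier']}: publicly accessible")
```

Note this only inspects bucket ACLs; bucket policies and the account's Block Public Access settings need separate checks.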
It would be worthwhile to spend some time evaluating each service and getting an understanding of its attack surface. Here's an AWS pentesting guide for reference.

Deployment Architecture for cloud & on premise b2b application

I'm working on a SaaS application which at the moment is cloud-only. It's a traditional Java web application which we deploy to AWS. We rely on AWS services like RDS, S3, ELB, and Auto Scaling, and for infrastructure provisioning we use AMIs, CloudFormation, Ansible, and CodeDeploy.
There is now more and more demand for on-premise deployments by potential clients.
Are there any common approaches to package b2b applications for on-premise deployments?
My first thought would be to containerize the app infrastructure (web server, database, etc.) and assume a client would be able to run the images. What are you guys doing, and how do you tackle the HA and DR aspects which come with cloud infrastructure like AWS?
I'm tackling a similar problem at the moment and there really is no one-size-fits-all answer. Designing software for cloud nativity involves a lot of architectural decisions to use the technologies on offer from the platform (as you have with S3, RDS, etc.), which ultimately do not carry over to the majority of on-premise deployments.
Containerising your application estate is great for cross-cloud and some hybrid-cloud portability, but there is no guarantee that a client is running containerised workloads in their on-premise data centre, which leaves the paradigm still a way off the target of supporting both seamlessly.
Another issue I find is that the design principles behind cloud-hosted software are vastly different from those on-premise: static resource requirements, often a lack of ability to scale, and so on (ironically, some of the main reasons you would move a software solution to a cloud environment in the first place). Trying to design for both is a struggle, and I'm guessing we will end up with a sub-optimal solution unless we decide to favour one and treat the other as a secondary concern.
I'm thinking maybe the best cross-breed solution is to concentrate on containerisation for cloud hosts, taking into account the products and services on offer (and on the roadmap), and then to make the same software available to clients who still wish to use on-premise data centres. Perhaps they could be offered VM images with the software solution packaged in, made available on a client portal with instructions on installation and configuration.
... I wish everyone would just use Kubernetes already! :)

Differences between AWS Global and AWS China

[Not sure this is the correct forum for this question, but I'll give it a shot.]
I'm looking at duplicating an existing solution built on AWS into an AWS China account. From what I've read in AWS' getting-started blog post and AWS China's list of services per region, it seems feasible to deploy a solution in Beijing or Ningxia using the AWS services we're used to and depend on. But since you cannot create an AWS China account without having a business license (which seems to be a topic in itself, hmm), it seems impossible to actually try things out and get a feel for whether there are any differences. I also cannot seem to find any blog posts with testimonies or experiences from developers or architects who've done this, which is surprising.
Basically I want to understand whether taking an existing solution built on AWS and setting it up on Chinese infrastructure is straightforward, or whether I should expect some differences in how things work. I know that AWS does not operate these two regions itself, but through Chinese partner companies. But I'm not sure if the service capabilities, APIs, etc. are identical (even including the timing of releases of new versions).
The only real limitations I can find on the AWS blog are that the free tier is not available and that EC2-Classic instances are not supported. But let's say I have a solution using very standard AWS services like CloudFront, S3, DynamoDB, Lambda, ECS, Elastic Beanstalk, Cognito, KMS, etc. Will it be fairly simple to migrate it to an AWS China account, or should I expect a struggle?
Regarding the differences: AWS China and AWS Global are basically two separate clouds and they are not connected to each other, so they have separate Marketplaces, endpoints, and ARNs, different service capabilities, etc. However, those differences are not captured in that level of detail in the official AWS documentation.
For example, most security-related features and landing-zone-related features are not available in AWS China. I have tried to adapt some AWS Global solutions to China and met a lot of issues and challenges, so plug-and-play won't work here. The best way is to have partners or a local presence with similar capabilities to overcome those challenges.
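One concrete, easy-to-miss difference is that China regions live in a separate ARN partition (aws-cn). Here is a small illustrative sketch; the bucket name and regions are made up:

```python
# Illustrative sketch: the same boto3 code targets a different partition
# purely via the region, but ARNs must use "aws-cn" in China regions.
import boto3

s3_global = boto3.client("s3", region_name="eu-west-1")
s3_china = boto3.client("s3", region_name="cn-north-1")  # needs separate China credentials

def bucket_arn(name: str, region: str) -> str:
    # Hard-coded ARNs with "arn:aws:..." break when deployed to China.
    partition = "aws-cn" if region.startswith("cn-") else "aws"
    return f"arn:{partition}:s3:::{name}"

print(bucket_arn("my-assets", "cn-north-1"))  # arn:aws-cn:s3:::my-assets
```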

How can one seamlessly transfer services hosted on AWS to Google Cloud Platform and vice versa?

Last month there was an outage in AWS and some sites had to be taken down because of it. I was wondering: if a company uses both AWS and Google Cloud Platform for hosting, how easy would it be for them to transfer their services from the Amazon platform to the Google platform, or vice versa (in case Google Cloud has an outage)? First of all, is it possible or not? And if it is, what would the cost of performing such an activity be, and how much time would it take to get the services running again?
I also did some digging, and what I came across was that each of the providers (Google and Amazon) has tools of its own for transferring stored data from other platforms to their own:
https://cloud.google.com/storage/docs/migrating?hl=en
https://aws.amazon.com/importexport/
Are these the only options available, or is there anything else as well? I hope some AWS/Google Cloud expert will be able to answer my question.
You would need to run your application in both environments, keep the deployments in sync, keep the databases in sync, etc. That can get complicated and expensive...
Then to automatically fail over from one environment to another you could use a DNS service such as DynDNS Active Failover that monitors the health of your application and starts sending traffic to the other environment if your primary environment becomes unhealthy.
How you manage deployments, how you continually ship data across environments, how much all that will cost, all those questions are extremely specific to the technologies (programming languages, operating systems, database servers) you are currently using. There's no way to give details on how you would accomplish those tasks without having all the details of your system.
Further, if you are using proprietary technologies on a specific platform, such as Amazon Redshift or DynamoDB, you might not find a service on the other platform that provides the same functionality.
I've seen this subject come up a lot since the last AWS outage, but I think maintaining two environments on two different platforms is overkill for all but the most extremely critical applications. Instead, I would look into maintaining a copy of your application in a different AWS region, and use Route53 health checks to fail-over.
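To make the Route 53 fail-over idea concrete, here is a hedged boto3 sketch; the hosted zone ID, domain names, and health-check path are all placeholders:

```python
# Hypothetical sketch: DNS failover with a Route 53 health check.
# Zone ID, domains, and the /health path are placeholders.
import boto3

r53 = boto3.client("route53")

# Health check that probes the primary environment.
hc = r53.create_health_check(
    CallerReference="primary-hc-001",  # must be unique per request
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary.example.com",
        "Port": 443,
        "ResourcePath": "/health",
    },
)

# PRIMARY record: used while the health check passes. A matching
# SECONDARY record (Failover="SECONDARY") receives traffic otherwise.
r53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "CNAME",
            "SetIdentifier": "primary",
            "Failover": "PRIMARY",
            "TTL": 60,
            "HealthCheckId": hc["HealthCheck"]["Id"],
            "ResourceRecords": [{"Value": "primary.example.com"}],
        },
    }]},
)
```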