I have some experience with AWS and an AWS Developer Associate certification. I have been told that I am being moved to a project where I will be using GCP. How easy/hard would it be to learn GCP with AWS experience? Alternatively, how can I facilitate my entry into GCP with an AWS background?
The fundamentals are similar in AWS, GC and Azure, although the terminology is different. There are differences of course (for example, subnets in GC are regional whereas in AWS they're scoped to AZs), but they're not too difficult to understand once you get into it.
There's a course by Google in Coursera, which is designed for people familiar with AWS - https://www.coursera.org/learn/gcp-fundamentals-aws
The GC learning resources should also help - https://cloud.google.com/training?hl=en
I think the main difference between AWS and GCP is how projects are managed. I'm referring to Identity and Access Management (IAM) and Resource Manager. In GCP you manage projects in a hierarchical way, using an approach called Resource Hierarchy.
In GCP you always have an Organization, a Project and resources. You might also have Folders. In GCP, basically everything is a resource (like in a REST API). All GCP resources belong to a project, and an individual GCP account can manage multiple projects.
You can manage each GCP project individually, or you can group related projects into folders and manage them from there, or even manage everything from the top-level GCP Organization.
By managing, I mean applying policies: what a resource can do and which accounts can use it.
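To make the hierarchy concrete, here is a rough sketch of how the levels nest, written as a plain Python structure (all the names are hypothetical):

    # Rough sketch of the GCP resource hierarchy (all names hypothetical).
    # A policy applied at one level is inherited by everything below it.
    hierarchy = {
        "organization": "example.com",
        "folders": [
            {
                "name": "engineering",
                "projects": [
                    {
                        "name": "data-pipeline",
                        "resources": ["bigquery-dataset:events", "gcs-bucket:raw-logs"],
                    }
                ],
            }
        ],
    }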
GCP accounts are sometimes called IAM principals. An IAM principal can be a user account, a Google group (i.e. a bunch of user accounts), or a service account (i.e. an account assigned to a program).
The relationship between one resource (e.g. a GCP project), a role (i.e. a set of privileges) and N IAM principals (e.g. 2 user accounts and 1 service account) is called an IAM binding. An IAM policy is a set of IAM bindings.
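To make that concrete, here is a minimal sketch of such a policy, expressed as a Python dict (the role is chosen arbitrarily, and the principals and project name are hypothetical):

    # A minimal IAM policy: one binding tying a role (a set of privileges)
    # to N principals. All emails and the project name are hypothetical.
    policy = {
        "bindings": [
            {
                "role": "roles/bigquery.dataViewer",
                "members": [
                    "user:alice@example.com",
                    "user:bob@example.com",
                    "serviceAccount:etl-job@my-project.iam.gserviceaccount.com",
                ],
            }
        ]
    }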
As for the services AWS, Azure and GCP offer, there is this nice comparison chart.
So to recap, focus on learning IAM and the resource hierarchy first. You will need them whichever GCP service you end up using.
My organization is using GCP, and we have service accounts created for me and my co-workers. We need to use BigQuery storage transfer service, Cloud Dataflow and other Google Cloud resources.
1) What is the recommended way of creating the scheduled jobs and resources? Should we create them via our own service accounts, or create another service account for the project and use that to schedule jobs and use resources?
2) If it is done via my organization-provided service account, what happens when I leave the organization and my service account is deleted? Do the jobs and pipelines continue to run under that project, or are the resources stopped?
NOTE: Stack Overflow is focused on programming questions, and this is not a programming question but rather a request for architecture guidance.
- Service Accounts are non-user identities supported by Google.
- Service Accounts are intended to be used by software|processes.
- Service Accounts are Google resources that are "owned" by Google Projects (not Organizations nor users).
- Service Accounts are deleted by Project members (users or indeed other Service Accounts that may inherit Project-specific roles from an Organization).
If a user (i.e. you) were to leave the organization, your org admins would likely delete your user account. This would not delete any Service Accounts. However, if your user identity held unique roles in the organization (represented by IAM permissions in the Google Organization and/or Project(s)), resources including Service Accounts could become inaccessible. For this reason, good org hygiene recommends that admin-like roles be assigned to groups rather than to individual users.
I think it's good practice to create Service Accounts for software|processes on a per-function basis: each job should have its own Service Account. This approach results in more Service Accounts, but it enables each Service Account to be exquisitely suited (IAM roles|permissions) to its job.
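For illustration, creating such a per-job Service Account can be scripted. A minimal sketch using the Google API Python client (the project ID, account ID and display name are all hypothetical, and Application Default Credentials with sufficient permissions are assumed):

    from googleapiclient import discovery

    # Build a client for the IAM API; uses Application Default Credentials.
    iam = discovery.build("iam", "v1")

    # Create a Service Account dedicated to a single job (names hypothetical).
    response = iam.projects().serviceAccounts().create(
        name="projects/my-project",
        body={
            "accountId": "nightly-export-job",
            "serviceAccount": {"displayName": "Nightly export job"},
        },
    ).execute()
    print(response["email"])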
Our company has reached a point where too many demo projects have been launched and left rolling on their own without having anyone actually assigned to the projects anymore.
How does one set up something that would detect unused projects/resources and send an email to the owning IAM User?
It might be fair to assume that CloudFormation is being used by almost all the projects (generally through a CodeStar, Elastic Beanstalk, or Lambda setup).
If the contacted IAM user shows no reply or activity, maybe another email could be sent to an administrator notifying them of the situation, too.
I have the same need to monitor all used/unused and non-compliant resources in my VPC.
I would use AWS Config to, among other features, get a view of resource relationships, and AWS Service Catalog to group authorized resources and track a range of information.
Furthermore, AWS Config is deployed as a CloudFormation stack.
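For example, AWS Config's inventory can be queried with boto3 to feed an "unused resources" report. A minimal sketch (the resource type is just an example; Config must already be recording in the region):

    import boto3

    config = boto3.client("config")

    # List the EC2 instances AWS Config has discovered in this region.
    # Other resource types (S3 buckets, security groups, ...) work the same way.
    resp = config.list_discovered_resources(resourceType="AWS::EC2::Instance")
    for resource in resp["resourceIdentifiers"]:
        print(resource["resourceType"], resource["resourceId"])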
If your devs deploy through CloudFormation, you can dig into a new feature called CloudFormation Hooks, a compliance feature that inspects stack resources before provisioning; you can then list all resources in DynamoDB.
You can also check this: AWS Organizations - List resources by AWS account?
Also have a look at aws-nuke, which lists resources for any AWS account (and, with the right options, deletes them).
I know it might sound like a basic question but I haven't figured out what to do.
We're working on having a testing environment for screening candidates for Cloud Engineer and BigData interviews.
We are looking into creating on-demand AWS environments, probably using the CloudFormation service, and testing whether the user is able to perform specific tasks in the environment (like creating S3 buckets, assigning roles, and creating security groups) using boto3.
But once the screening is finished, we want to automatically tear down the entire setup that has been created earlier.
There could be multiple candidates taking the test at the same time. We want to create the environments (which might contain EC2 instances, S3 buckets, etc. that are not visible to other users) and tear them down once the tests are finished.
We thought of dynamically creating IAM users for every candidate using an IAM role, creating a stack automatically, and deleting those users once the test is finished.
However, I think the users will be able to see the resources created by other users which is not what we are expecting.
Is there a better approach we can use for creating and deleting these environments or labs for users? Something like ITversity or Qwiklabs.
The logged-in user should have access to, and be able to view, only the resources created for him.
Please suggest.
Query1:
Let's say I have created 10 IAM roles, and one user from each of those roles. Will the user created from IAM role 1 be able to see the VPCs, EC2 instances, S3 buckets, or any other resources created by another user who was created from IAM role 2?
Will the resources be completely isolated from one IAM role to another?
Or would a service like AWS Organizations be helpful in this case?
The Qwiklabs environment works as follows:
- A pool of AWS accounts is maintained
- When a student starts a lab, one of these accounts is allocated to the lab/student
- A CloudFormation template is launched to provision initial resources
- A student login (either via IAM User or Federated Login) is provisioned and assigned a limited set of permissions
- At the conclusion of the lab, the student login is removed, a "reaper" deletes resources in the account, and the CloudFormation stack is deleted
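If you build something similar yourself, the provisioning steps can be scripted. A minimal boto3 sketch of the template launch and the restricted login (the template URL, policy ARN, names and password are all hypothetical placeholders):

    import boto3

    CANDIDATE = "candidate-42"  # hypothetical lab/candidate identifier

    cfn = boto3.client("cloudformation")
    iam = boto3.client("iam")

    # Launch the lab's initial resources from a pre-built template.
    cfn.create_stack(
        StackName=f"lab-{CANDIDATE}",
        TemplateURL="https://example-bucket.s3.amazonaws.com/lab.yaml",  # hypothetical
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )

    # Create a restricted console login for the candidate.
    iam.create_user(UserName=CANDIDATE)
    iam.attach_user_policy(
        UserName=CANDIDATE,
        PolicyArn="arn:aws:iam::123456789012:policy/LabRestrictedAccess",  # hypothetical
    )
    iam.create_login_profile(
        UserName=CANDIDATE,
        Password="ChangeMe-Temp-42!",  # placeholder; generate one per candidate
        PasswordResetRequired=True,
    )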
The "reaper" is a series of scripts that recursively go through each service in each region and deletes resources that were created during the lab. A similar capability can be obtained with rebuy-de/aws-nuke: Nuke a whole AWS account and delete all its resources.
You could attempt to create such an environment yourself.
I would recommend looking at Scenario 3 in the following AWS document:
Setting Up Multiuser Environments in the AWS Cloud (for Classroom Training and Research)
It references a "students" environment; however, it should suit interview-candidate testing needs.
The “Separate AWS Account for Each User” scenario with optional consolidated billing provides an excellent environment for users who need a completely separate account environment, such as researchers or graduate students. It is similar to the “Limited User Access to AWS Management Console” scenario, except that each IAM user is created in a separate AWS account, eliminating the risk of users affecting each other’s services.

As an example, consider a research lab with 10 graduate students. The administrator creates one paying AWS account, 10 linked student AWS accounts, and 1 restricted IAM user per linked account. The administrator provisions separate AWS accounts for each user and links the accounts to the paying AWS account. Within each account, the administrator creates an IAM user and applies access control policies. Users receive access to an IAM user within their AWS account. They can log into the AWS Management Console to launch and access different AWS services, subject to the access control policy applied to their account. Students don’t see resources provisioned by other students.

One key advantage of this scenario is the ability for a student to continue using the account after the completion of the course. For example, if students use AWS resources as part of a startup course, they can continue to use what they have built on AWS after the semester is over.
https://d1.awsstatic.com/whitepapers/aws-setting-up-multiuser-environments-education.pdf
However, I think the users will be able to see the resources created by other users which is not what we are expecting.
AWS resources are visible to their owners and to those with whom they are shared by the owner.
New IAM users should not see any AWS resources at all.
We have multiple AWS accounts (about 15-20), one AWS account per client that we are managing, each account having VPC having dedicated setup of instances. Due to regulatory requirements all accounts needs to be isolated from each other.
What is the best way to manage account credentials for these AWS accounts? Following is what I am thinking
For any new client:

1) Create a new AWS account
2) Create AWS IAM roles (admin, developer, tester) for the newly created account using CloudFormation
3) Using the master AWS account, assume the roles created in step 2 to access the other accounts
Is this the right approach to managing multiple accounts?
Thanks in advance.
Using IAM Roles is a very common and (I think) the right approach to managing authentication across multiple accounts. Indeed, AWS has recently released updates that greatly help with this; see Cross-Account Access in the AWS Management Console:
Many AWS customers use separate AWS accounts (usually in conjunction with Consolidated Billing) for their development and production resources. This separation allows them to cleanly separate different types of resources and can also provide some security benefits.
Today we are making it easier for you to work productively within a multi-account (or multi-role) AWS environment by making it easy for you to switch roles within the AWS Management Console. You can now sign in to the console as an IAM user or via federated Single Sign-On and then switch the console to manage another account without having to enter (or remember) another user name and password.
Please note that this doesn't just work for the AWS Management Console, but also with the AWS Command Line Interface (AWS CLI), as explored/explained by Mitch Garnaat in Switching Roles in the AWS Management Console and AWSCLI.
Furthermore, Mitch has followed up with a dedicated new tool 'rolemodel' to help with setting things up pretty much like you outlined, which you might want to evaluate accordingly:
Rolemodel is a command line tool that helps you set up and maintain cross-account IAM roles for the purpose of using them in the new switch role capability of the AWS management console. These same cross-account roles can also be used with the AWSCLI as described here.
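The same role switching can also be done programmatically. A minimal boto3 sketch (the account ID and role name are hypothetical) of assuming one of the per-client roles from the master account:

    import boto3

    sts = boto3.client("sts")

    # Assume the developer role in one of the client accounts.
    creds = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/developer",  # hypothetical
        RoleSessionName="client-maintenance",
    )["Credentials"]

    # Any client created from this session operates in the client account.
    client_session = boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    print(client_session.client("ec2").describe_instances())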
I am in the early stages of writing an AWS app for our users that will run our research algorithms using their AWS resources. For example, our code will need to spin up EC2 instances running our 'worker' app, access RDS databases, and create and access SQS queues. The AWS Java SDK examples (we are writing this in Java) use an AwsCredentials.properties file to store the Access Key ID and Secret Access Key, which is fine for examples, but obviously not acceptable for our users, who would in essence be giving us access to all their resources. What is a clean way to go about running our system on their behalf? I discovered AWS Identity and Access Management (IAM), which seems to be for this purpose (I haven't got my head around it yet), especially Cross-account access between AWS accounts. This post makes it sound straightforward:
Use the amazon IAM service to create a set of keys that only has permission to perform the tasks that you require for your script.
http://aws.amazon.com/iam/
However, other posts (e.g., Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?) suggest there are limitations to using IAM with EC2 in particular.
Any advice would be really helpful!
The key limitation with regard to RDS and EC2 is that, while you can restrict access to certain API actions, there are no resource-level constraints. For example, with an IAM S3 policy you can restrict a user to only being able to perform certain actions on certain buckets. You can write a policy for EC2 that says a user is allowed to stop instances, but not one that says they can only stop certain instances.
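To illustrate the S3 case, here is a minimal sketch of such a policy, written as a Python dict (the bucket name is hypothetical):

    import json

    # Restrict a user to reading/writing objects in one specific bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::example-research-bucket/*",  # hypothetical
            }
        ],
    }
    print(json.dumps(policy, indent=2))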
Another option is for them to provide you with temporary credentials via the Security Token Service. A variant on that is to use the new IAM roles service. With this, an instance has a set of policies associated with it. You don't need to provide an AwsCredentials.properties file because the SDK can fetch credentials from the metadata service.
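For illustration (shown in Python/boto3 for brevity, though the AWS Java SDK's default credential chain behaves the same way), code running on an instance with an IAM role needs no credentials file at all:

    import boto3

    # On an EC2 instance with an IAM role attached, the SDK's default
    # credential chain fetches temporary credentials from the instance
    # metadata service automatically; no AwsCredentials.properties needed.
    s3 = boto3.client("s3")
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])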
Finally, one last option might be consolidated billing. If the reason you are using their AWS resources is just the billing, then set up a new account which is billed from their account. The accounts are isolated from each other, so you can't, for example, delete their instances by accident. Equally, you can't access their RDS snapshots and the like (access to an RDS instance via MySQL, as opposed to the AWS API, would depend on the instance's security group). You can of course combine this with the previous options: they could provide you with credentials that only allow you to perform certain actions within that isolated account.