Let's imagine a scenario where multiple teams run apps that use AWS Batch in the same AWS account.
Say TeamA and TeamB.
The teams use the AWS SDKs to build dashboards for managing their jobs.
I need to restrict TeamA from submitting or terminating jobs created by TeamB, and vice versa.
I'm able to restrict actions that rely on job definitions or job queues by having each team use a prefix when creating the resources it owns,
and having a policy with that prefix before a wildcard, as in the sketch below.
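For context, here is a minimal sketch of what such a prefix-before-wildcard policy looks like for the job-definition and job-queue based actions; the account ID and the teama- prefix are placeholders, not values from the setup above:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TeamAOwnBatchResourcesOnly",
      "Effect": "Allow",
      "Action": "batch:SubmitJob",
      "Resource": [
        "arn:aws:batch:*:111122223333:job-definition/teama-*",
        "arn:aws:batch:*:111122223333:job-queue/teama-*"
      ]
    }
  ]
}
```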
But TerminateJob relies on a job ID, which is dynamic. How do I restrict the teams from terminating some other team's jobs?
I'm looking for a resource-level policy or condition keys that can restrict access.
I've looked around a lot and haven't found anything that explains how to do this.
Any advice is much appreciated.
Thanks
I have been working with QLDB for the last 3 months in a single region, using it as a ledger database.
Now, the business wants to move the applications to multi-region support.
I found that many AWS services support multi-region, like DynamoDB and Secrets Manager,
but there are limitations on QLDB for multi-region use.
I saw in some AWS articles that QLDB does not have support for multi-region, as it is not a distributed technology.
Now, to meet the business requirement with minimal changes in code, I have two approaches/workarounds for QLDB to support multi-region:
1. Do I need to create a region-based ledger in each region, with the same functionality? I understand there are major challenges with maintaining the geo-based traffic.
2. I will keep the QLDB ledger in a single region and give cross-region access permissions to the Lambda functions that access it. It's the simplest one, but it adds latency.
Which approach helps in the long term and with scalability? Or please suggest a different approach, if anyone has one, to achieve this.
Do I need to create a region-based ledger in each region, with the same functionality? I understand there are major challenges with maintaining the geo-based traffic.
Yes. At this moment, like you said, there is no multi-region support ("global", in AWS jargon), so you need to create region-based ledgers on your own.
to meet the business requirement with minimal changes in code
You can achieve cross-region replication as mentioned in the docs:
Amazon QLDB does not support cross-region replication as of now. QLDB's export-to-S3 feature enables customers to export the contents of the QLDB journal to an S3 bucket. The S3 buckets can be configured for cross-region replication.
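For reference, a minimal sketch of kicking off such an export with the AWS SDK for Go; the ledger name, bucket, prefix, and role ARN are placeholders, and the bucket is assumed to already have cross-region replication configured:

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/qldb"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	client := qldb.New(sess)

	// Export the last 24 hours of journal blocks; the end time must not be in the future.
	end := time.Now().Add(-5 * time.Minute)
	start := end.Add(-24 * time.Hour)

	// Ledger name, bucket, prefix, and role ARN below are placeholders.
	out, err := client.ExportJournalToS3(&qldb.ExportJournalToS3Input{
		Name:               aws.String("my-ledger"),
		InclusiveStartTime: aws.Time(start),
		ExclusiveEndTime:   aws.Time(end),
		RoleArn:            aws.String("arn:aws:iam::111122223333:role/qldb-journal-export"),
		S3ExportConfiguration: &qldb.S3ExportConfiguration{
			Bucket: aws.String("my-journal-exports"), // bucket with CRR enabled
			Prefix: aws.String("journal/"),
			EncryptionConfiguration: &qldb.S3EncryptionConfiguration{
				ObjectEncryptionType: aws.String(qldb.S3ObjectEncryptionTypeSseS3),
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("export started:", aws.StringValue(out.ExportId))
}
```

You would run such an export on a schedule, and S3 cross-region replication takes care of copying the exported journal blocks to the other region.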
Side note:
I will keep the QLDB ledger in a single region and give cross-region access permissions to the Lambda functions that access it. It's the simplest one, but it adds latency.
If your business wants multi-region support, this option would not satisfy their requirements.
We have applications for multiple tenants on our AWS account and would like to distinguish between them in different IAM roles. In most places this is already possible by limiting resource access based on naming patterns.
For the CloudWatch log groups of SageMaker training jobs, however, I have not seen a working solution yet. The tenants can choose the job name arbitrarily, so the only part of the log group name available for pattern matching would be the prefix before the job name. This prefix, however, seems to be fixed to /aws/sagemaker/TrainingJobs.
Is there a way to change or extend this prefix to make such limiting possible? Say, for example, /aws/sagemaker/TrainingJobs/<product>-<stage>-<component>/<training-job-name>-... so that a resource limitation like /aws/sagemaker/TrainingJobs/<product>-* becomes possible?
I think it is not possible to change the log stream names for any of the SageMaker services.
I need to generate a report of all AWS services that were provisioned after a certain date (say, the last 3 months).
AWS Service Catalog seems relevant here, but can it be used only if the services were provisioned using CloudFormation templates?
We did our provisioning using Terraform. Can AWS Service Catalog still be used to generate an inventory?
If not, is there an alternate way to generate this report?
You can try to use Resource Groups for that: https://eu-central-1.console.aws.amazon.com/resource-groups/home?region=eu-central-1#
There you will find the Tag Editor (https://eu-central-1.console.aws.amazon.com/resource-groups/tag-editor/find-resources?region=eu-central-1), which can list all of your resources.
If you have tagged your resources, you can filter by them. An alternative solution would be to tag all resources with the current date, wait one day, search again, and find the resources without the specific date tag. That way you will find the differences.
To automate this, you can use e.g. the Resource Groups Tagging API: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/resourcegroupstaggingapi.html#client
To get a full solution, you can use the Tag Editor to get all resources and then query the resources themselves with the specific API of each service, e.g. EC2, Lambda, RDS, etc.
This could be time-consuming, so maybe a solution like the one from aquasec could fit your needs. A sketch of the tag-diff approach follows.
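Here is a minimal sketch of that tag-diff idea using the AWS SDK for Go (the boto3 client linked above exposes the same GetResources operation); the inventory-date tag key and the region are placeholders. Note that, as far as I know, the tagging API only returns resources that are tagged or were recently tagged:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/resourcegroupstaggingapi"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("eu-central-1")))
	client := resourcegroupstaggingapi.New(sess)

	// Hypothetical marker tag applied to everything in the first pass.
	const markerKey = "inventory-date"

	err := client.GetResourcesPages(&resourcegroupstaggingapi.GetResourcesInput{},
		func(page *resourcegroupstaggingapi.GetResourcesOutput, lastPage bool) bool {
			for _, mapping := range page.ResourceTagMappingList {
				marked := false
				for _, tag := range mapping.Tags {
					if aws.StringValue(tag.Key) == markerKey {
						marked = true
						break
					}
				}
				// No marker tag: the resource appeared after the first pass.
				if !marked {
					fmt.Println(aws.StringValue(mapping.ResourceARN))
				}
			}
			return true // continue paginating
		})
	if err != nil {
		panic(err)
	}
}
```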
I am planning to spin up an AWS Managed SFTP Server. The AWS documentation says I can create up to 20 users. Can I configure 20 different buckets for the 20 users and assign separate privileges? Is this a possible configuration?
All I am looking for is exposing the same endpoint to different vendors, with each vendor having access to a different S3 bucket to upload their files to.
Appreciate all your thoughts and a response at the earliest.
Thanks
Setting up separate buckets and AWS Transfer instances for each vendor is a best practice for workload separation. I would recommend setting up a custom URL in Route53 for each of your vendors and not attempting to consolidate on a single URL (it isn't natively supported).
https://docs.aws.amazon.com/transfer/latest/userguide/requirements-dns.html
While setting up separate AWS Transfer Family instances will work, it comes at a higher cost: you are charged even while a server is stopped, right up until you delete it, at $0.30 per hour, which is roughly $216 per month per server.
The other way is to create different users (one per vendor), give each a different home directory (one per vendor), and lock down permissions through the IAM role for that user (there is also provision to use a scope-down policy, along with AD). If using service-managed users, see this link: https://docs.aws.amazon.com/transfer/latest/userguide/service-managed-users.html. A sketch of such a scope-down policy follows.
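Here is a sketch of such a scope-down policy, following the pattern in the AWS Transfer docs; it relies on the transfer:HomeBucket, transfer:HomeFolder, and transfer:HomeDirectory policy variables to lock each user into their own home directory:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::${transfer:HomeBucket}",
      "Condition": {
        "StringLike": {
          "s3:prefix": ["${transfer:HomeFolder}/*", "${transfer:HomeFolder}"]
        }
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion"
      ],
      "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
    }
  ]
}
```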
I need to list the number of resources that are part of an AWS account in Go, where a resource is anything that has a price tag on it and can be counted, e.g.
S3 buckets
EC2 instances
RDS instances
ELBs
...
State, region, type, and tags are not relevant for this kind of overview, just the raw numbers.
I could of course use the Go SDK and query each corresponding service to get the instances and sum them up, but this would mean lots of boilerplate code and lots of time to create it.
My question: is there any more generic approach to get the item counts for most services (fine if it doesn't work for all) that can be used with the Go SDK, so I don't have to re-code the same query for each service manually?
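One more generic (if imperfect) approach I know of is the Resource Groups Tagging API, which returns ARNs across services in a single call family; you can then count per service by parsing the ARN. The big caveat is that, as far as I know, it only returns resources that are tagged or were recently tagged, so untagged resources will not show up. A minimal sketch:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/resourcegroupstaggingapi"
)

func main() {
	// The region is a placeholder; run once per region for a full picture.
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("eu-central-1")))
	client := resourcegroupstaggingapi.New(sess)

	counts := map[string]int{}
	err := client.GetResourcesPages(&resourcegroupstaggingapi.GetResourcesInput{},
		func(page *resourcegroupstaggingapi.GetResourcesOutput, lastPage bool) bool {
			for _, mapping := range page.ResourceTagMappingList {
				// ARN format: arn:partition:service:region:account-id:resource,
				// so the third field is the service name (s3, ec2, rds, ...).
				parts := strings.SplitN(aws.StringValue(mapping.ResourceARN), ":", 6)
				if len(parts) >= 3 {
					counts[parts[2]]++
				}
			}
			return true // continue paginating
		})
	if err != nil {
		panic(err)
	}

	total := 0
	for service, n := range counts {
		fmt.Printf("%-20s %d\n", service, n)
		total += n
	}
	fmt.Println("total:", total)
}
```

For anything the tagging API misses you would still need per-service calls, but this avoids re-coding the same query for most of the services listed above.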