So this might be a silly question, but what is the point of using Amazon SQS if it requires a private and public key? If the client has the private and public key, they could probably discover the keys by decompiling the app or some other means...
The only secure way I can think of would be to use a proxy (like PHP) that holds the private and public keys. But then what is the point of using SQS in the first place? The main benefit of SQS (as I can see) is that it scales upwards and you don't have to worry about how many messages you are receiving. But if you are going to use a proxy, then you will have to scale that too... I hope my concerns make sense?
Thanks
Your concerns would be valid if you had to give your secret key out for clients to pull data from the queue. However, the typical workflow involves using your own AWS account for creating and modifying queues, and perhaps for pushing data onto them. You can then set permissions with either the SQS AddPermission action or a more finely controlled access policy. This means you could give read access only to a specific AWS account (or allow anonymous usage) without permitting any other modifications.
So basically you have a couple of options. You could compile into your client application a set of AWS keys that you have set up in advance with restricted permissions. A better approach, in my opinion, is to make the key files a configurable option in your client and tell its users that they are responsible for getting their own AWS account and keys; they can then tell you their AWS account ID, and you can grant as fine-grained control as you want on a per-client basis.
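For illustration, a minimal sketch of the restricted-permissions setup with boto3 (the queue URL and consumer account ID are hypothetical):

```python
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue owned by your account.
queue_url = "https://sqs.us-east-1.amazonaws.com/111111111111/client-queue"

# Grant one specific AWS account permission to receive and delete messages,
# without allowing it to send messages or modify the queue.
sqs.add_permission(
    QueueUrl=queue_url,
    Label="ClientReadOnly",
    AWSAccountIds=["222222222222"],
    Actions=["ReceiveMessage", "DeleteMessage"],
)
```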
These resources would be good for you to look at:
Using the Access Policy Language
Controlling User Access to your AWS account
addPermission action for SQS
Related
In AWS IoT it is possible to attach multiple things to a single certificate, and the Device SDKs support attaching multiple devices/things to the same IoT client.
How can a policy be defined for such a client to allow access only to the topics belonging to the shadows of the attached things?
A policy variable ${iot:Connection.Thing.ThingName} exists, but it only works for the thing whose name matches the client ID; it does not work for the other attached things. The policy variable ${iot:Connection.Thing.IsAttached}, on the other hand, does not seem usable within the Resource section for specific topics.
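For reference, a minimal sketch of a policy built on that variable, assuming boto3 (the account ID, region, and policy name are hypothetical); it illustrates exactly the limitation above, since the variable only resolves for the thing matching the client ID:

```python
import json

import boto3

iot = boto3.client("iot")

# Hypothetical policy: allows shadow topics only for the thing whose
# name matches the connecting MQTT client ID.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["iot:Publish", "iot:Receive"],
            "Resource": [
                "arn:aws:iot:us-east-1:111111111111:topic/"
                "$aws/things/${iot:Connection.Thing.ThingName}/shadow/*"
            ],
        }
    ],
}

iot.create_policy(
    policyName="GatewayShadowAccess",
    policyDocument=json.dumps(policy),
)
```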
In our case not all of our things are connected to AWS IoT directly, so we would like to interact with multiple thing shadows from within the same IoT client/certificate, which acts as an IoT gateway. We do have a hook where the (un)linking happens, but updating the policy there to add things "hard-coded" doesn't feel right.
What would be a good approach to keep this setup secure? Allowing access to all topics for linked things and denying access to others?
We are open to suggestions for a different approach on implementing this single client, multiple things/shadows interactions securely.
I have to create an alert when a human user tries to access a DB instance in AWS using a service account. Programmatic access is fine and does not need to be alerted on.
Could anyone suggest the best possible way to achieve this?
You can't, at least not directly.
There are two ways to access an Aurora database:
Via a TCP connection, using the Postgres or MySQL connection protocol.
Via the RDS Data API.
In both cases, there is a program at the other end of the connection, and the database has no way to determine whether that program is a business application, a user-written program connecting via a client library, a user making API calls from a Jupyter notebook, or a user typing directly into psql.
The best that you can get is an indirect indication.
For example, if you use usernames and passwords to access the database, and store that information in a Secrets Manager secret, then you can use CloudTrail to find all calls to GetSecretValue and alert based on user identity. You can do the same thing for ExecuteStatement if using the Data API, but I don't believe that there's a CloudTrail event if you're using IAM-generated tokens for authorization.
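As a sketch of that Secrets Manager approach, assuming boto3, you could scan CloudTrail for GetSecretValue calls and inspect the caller identity (in a real setup you would run this on a schedule, or build an EventBridge rule instead):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail")

# Look back one hour for GetSecretValue calls.
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "GetSecretValue"},
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(hours=1),
    EndTime=datetime.now(timezone.utc),
)

for event in response["Events"]:
    # "Username" is the IAM identity behind the call; flag anything
    # that is not the service identity you expect.
    print(event.get("Username"), event["EventTime"])
```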
However, even that has limitations. First, you have to wait up to 15 minutes for events to appear in CloudTrail, which makes it a forensic tool rather than a good alerting tool. Second, there are ways to conceal your true identity (although it's not that easy with the Data API).
The real solution to your problem (which you have not described) will be an architecture that makes it difficult to create ad hoc database connections, and a culture that discourages such behavior.
I'm working on a Slack app that will have to store an access token for each customer using the app (e.g., 1,000 teams using it = 1,000 tokens). The token enables the app to access the Slack API for the customer's workspace and will be used frequently every day.
The app will run on AWS, using Lambdas and DynamoDB.
What would be the best practice to store those access tokens securely?
I cannot find any strict recommendation for this scenario. I was initially thinking of putting them in a dedicated DynamoDB table, but I am now wondering whether I should use other AWS services for this use case. I've checked Secrets Manager, but it looks like a rather expensive option and I'm not sure it applies to my scenario.
Appreciate any suggestions.
I would probably use a dedicated DynamoDB table for this purpose. At a minimum, I would configure it to use a KMS CMK to encrypt the data at-rest, and also restrict access to the table through fairly granular IAM permissions in your AWS account. If you also wanted to encrypt each value separately you could look into client-side encryption.
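As a sketch of that minimum setup, assuming boto3 (the table name and KMS key alias are hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical table: one token per Slack team, encrypted at rest
# with a customer-managed KMS key.
dynamodb.create_table(
    TableName="slack-tokens",
    AttributeDefinitions=[{"AttributeName": "team_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "team_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    SSESpecification={
        "Enabled": True,
        "SSEType": "KMS",
        "KMSMasterKeyId": "alias/slack-token-key",
    },
)
```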
Your findings on the Secrets Manager costs are a good point. You could also look at Systems Manager Parameter Store as an alternative that is generally cheaper than Secrets Manager. Secrets Manager does have the added security of letting you set an IAM resource policy on the secret itself.
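A sketch of the Parameter Store alternative, again assuming boto3 (the parameter name and token value are hypothetical placeholders):

```python
import boto3

ssm = boto3.client("ssm")

# Store one token per team as a SecureString (KMS-encrypted).
ssm.put_parameter(
    Name="/slack/tokens/T0123456",
    Value="xoxb-example-token",
    Type="SecureString",
    Overwrite=True,
)

# The Lambda reads it back at runtime with decryption enabled.
token = ssm.get_parameter(
    Name="/slack/tokens/T0123456",
    WithDecryption=True,
)["Parameter"]["Value"]
```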
Ultimately it's up to you to determine how secure your solution needs to be, and how much you are willing to pay for that. You could even spin up AWS CloudHSM to encrypt the values, but that would increase the cost by quite a bit.
I am looking for ways to automate the rotation of access keys (AWS credentials) for a set of users. There is a separate process that creates the access keys; I need to rotate them in an automated way. This link explains a way to do this for a specific user. How would I be able to achieve this for a list of users? Any thoughts or recommendations?
You can use AWS Config to mark old access keys as non-compliant (https://docs.aws.amazon.com/config/latest/developerguide/access-keys-rotated.html) and then use CloudWatch Events (my article on how to do this) to run a Lambda function that deletes the old key, creates a new one, and sends it to the user.
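A minimal sketch of that rotation step inside the Lambda, assuming boto3 (the list of usernames is hypothetical, and delivering the new key to the user is left out):

```python
import boto3

iam = boto3.client("iam")

def rotate_access_key(username: str) -> dict:
    # Delete the user's existing keys first, to stay under the
    # two-keys-per-user limit before creating the replacement.
    for key in iam.list_access_keys(UserName=username)["AccessKeyMetadata"]:
        iam.delete_access_key(UserName=username, AccessKeyId=key["AccessKeyId"])
    # Create the new key; deliver it to the user out of band.
    return iam.create_access_key(UserName=username)["AccessKey"]

# Hypothetical list of users to rotate.
for user in ["alice", "bob"]:
    new_key = rotate_access_key(user)
    print(user, new_key["AccessKeyId"])
```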
Access keys are generally used for programmatic access by applications. If these applications are running in, say, EC2, you should use IAM roles for EC2 instead. This installs temporary credentials on the instance that are rotated automatically for you. The AWS CLI and SDKs know how to retrieve these credentials automatically, so you don't need to embed them in the application either.
Other compute solutions (Lambda, ECS/EKS) also have ways to provision roles for applications.
I need to develop a solution to store both symmetric and asymmetric keys securely in AWS. These keys will be used by applications running on EC2 instances and Lambdas. The applications will need to be set up with policies that allow the application or Lambda to pull the keys out of the key store. The key store should also manage key expiry, notifying various people when keys are about to expire. The initial key exchange is between my company and its partners, meaning that we may hold either the public or the private key of a key pair, depending on the direction of the data transfer.
We have looked at KMS, but from what I have seen it does not support asymmetric keys. I have also seen online that some people use either S3 (protected by KMS) or Parameter Store to hold the keys, but this does not address the issue of key management.
Do you have any thoughts on this, or even SaaS/PaaS suggestions?
My recommendation would be to use AWS Secrets Manager for this. Secrets Manager lets you store any type of credential or key, supports fine-grained cross-account permissions on secrets, encrypts everything at rest (via KMS), and can rotate secrets automatically (by configuring a rotation schedule and an AWS Lambda function owned by you to perform the rotation).
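As a minimal sketch of storing and retrieving a key, assuming boto3 (the secret name and PEM contents are hypothetical placeholders):

```python
import boto3

secrets = boto3.client("secretsmanager")

# Store a partner's public key under a hypothetical secret name.
secrets.create_secret(
    Name="partners/acme/public-key",
    SecretString=(
        "-----BEGIN PUBLIC KEY-----\n"
        "...placeholder PEM body...\n"
        "-----END PUBLIC KEY-----"
    ),
)

# An EC2 or Lambda application retrieves it at runtime, using an IAM
# role that allows secretsmanager:GetSecretValue on this secret only.
key_pem = secrets.get_secret_value(
    SecretId="partners/acme/public-key"
)["SecretString"]
```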
More details on the official docs:
Basic tutorial on how to use AWS Secrets Manager
Encryption at rest on Secrets Manager
Secrets rotation
Managing secrets policies