In AWS IoT it is possible to attach multiple things to a single certificate. Also the Device SDKs support attaching multiple devices/things to the same IoT client.
How can a policy be defined for such a client so that it allows access only to the topics belonging to the shadows of the attached things?
A policy variable ${iot:Connection.Thing.ThingName} exists, but it only works for the thing whose name matches the client ID; it doesn't work for other attached things. The policy variable ${iot:Connection.Thing.IsAttached}, on the other hand, doesn't seem usable within the Resource section for specific topics.
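For illustration, a policy along these lines (a minimal sketch created via boto3; the region, account ID, and policy name are placeholders) only covers the shadow topics of the thing whose name matches the client ID:

    import json
    import boto3  # assumes AWS credentials with iot:CreatePolicy permission

    iot = boto3.client("iot")

    # Sketch: shadow-topic access scoped with the thing-name policy variable.
    # This only matches the thing whose name equals the MQTT client ID.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["iot:Publish", "iot:Receive", "iot:Subscribe"],
                "Resource": [
                    "arn:aws:iot:eu-west-1:123456789012:topic/$aws/things/"
                    "${iot:Connection.Thing.ThingName}/shadow/*",
                    "arn:aws:iot:eu-west-1:123456789012:topicfilter/$aws/things/"
                    "${iot:Connection.Thing.ThingName}/shadow/*",
                ],
            }
        ],
    }

    iot.create_policy(
        policyName="single-thing-shadow-policy",  # hypothetical name
        policyDocument=json.dumps(policy_document),
    )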
In our case not all of our things are connected to AWS IoT directly, so we would like to interact with multiple thing shadows from within the same IoT client/certificate, which acts as an IoT Gateway. We do have a hook where the (un-)linking happens, but updating the policy there to add things "hard-coded" doesn't feel right.
What would be a good approach to keep this setup secure? Allowing access to all topics for linked things and denying access to others?
We are open to suggestions for a different approach on implementing this single client, multiple things/shadows interactions securely.
I am trying to make a connection between AWS IoT and my React JS app.
I followed this tutorial (https://medium.com/serverlessguru/serverless-real-time-reactjs-app-aws-iot-mqtt-17d023954045), and it is not clear to me how to attach the Cognito Identity ID to the AWS IoT Policy.
During all my investigation, I found that this process must be done through the command line.
In the article above, this process is done with the following command:
• Note that the “identity_pool_id” has to be considered in this command.
In the AWS documentation (https://aws-amplify.github.io/docs/js/pubsub), it says to use the “identity_id” in the command:
When I use the “identity_pool_id” in the command and try to publish a message from AWS IoT, I get the following error:
When I use the “identity_id” in the command, the communication between AWS IoT and the frontend works successfully:
The problem is that the “identity_id” is a different code for each user. Considering that I am going to have a lot of users in my application, I don't know how to perform this task.
• Is it the right process to use the “identity_id” instead of the “identity_pool_id”?
• If yes, how could I automatically attach the Cognito identity ID to the AWS IoT policy every time a new user signs in to my application?
• Are there any problems with having thousands of Cognito identities attached to an AWS IoT policy?
The following answer is in chronological order, corresponding to the 3 questions.
You can only attach the identity_id (the user) to an IoT policy. Also, I can see you have used the "attach-principal-policy" API, which is now deprecated; please use the AttachPolicy API instead.
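For reference, a minimal sketch of the AttachPolicy call with boto3 (the policy name and identity ID are placeholders):

    import boto3  # assumes credentials with iot:AttachPolicy permission

    iot = boto3.client("iot")

    # Attach the IoT policy to the authenticated Cognito identity
    # (the per-user identity_id, not the identity_pool_id).
    iot.attach_policy(
        policyName="myReactAppIoTPolicy",  # hypothetical policy name
        target="us-east-1:12345678-1234-1234-1234-123456789012",  # identity_id
    )

The CLI equivalent is aws iot attach-policy --policy-name <name> --target <identity_id>.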
For #2, I'm unsure here; still, I'd recommend evaluating and verifying it on Cognito's post-confirmation trigger.
Absolutely right, you can attach an IoT policy to a myriad of certificates; technically this is known as Simplified Permission Management.
For #3, the relevant snippet from AWS (ref: https://aws.amazon.com/iot-core/faqs/, under "Q: What is Simplified Permission Management?"):
"You can share a single generic policy for multiple devices. A generic policy can be shared among the same category of devices instead of creating a unique policy per device. For example, a policy that references the “serial-number” as a variable, can be attached to all the devices of the same model. When devices of the same serial number connect, policy variables will be automatically substituted by their serial-number."
I would like to use AWS Lambda to perform a computation on behalf of a 3rd party and then prove to them that I did so as intended. A proof would be a cryptographically signed digest of the function body, the request, and the response. Ideally, Amazon would sign the digest with its own private key and publish their public key to allow verification of the signature. The idea is similar to the "secure enclave" that new Intel chips provide through SGX (Software Guard Extensions).
The existing Lambda service has some of the ingredients needed. For example, the GetFunction response includes a CodeSha256 field that uniquely identifies the function implementation. And the Amazon API Gateway allows you to make HTTPS requests to the Lambda service, which might allow a TLSNotary-style proof of the request-response contents. But to do this right I think AWS Lambda needs to provide the signature directly.
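For example, the CodeSha256 mentioned above can be retrieved with a call like this (a sketch; the function name is a placeholder):

    import boto3

    lambda_client = boto3.client("lambda")

    # GetFunction returns, among other things, the SHA-256 digest of the
    # deployed code package, which uniquely identifies the implementation.
    response = lambda_client.get_function(FunctionName="my-audited-function")
    print(response["Configuration"]["CodeSha256"])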
Microsoft Azure is working on trusted software enclaves ("cryptlets") in their Project Bletchley:
https://github.com/Azure/azure-blockchain-projects/blob/master/bletchley/bletchley-whitepaper.md
https://github.com/Azure/azure-blockchain-projects/blob/master/bletchley/CryptletsDeepDive.md
Is something like this possible with the current AWS Lambda?
Let's make some definitions first: Lambda isn't a server but a service that runs your code. It does not provide any signature directly, only what you configure for it on AWS.
The Secure Enclave is one implementation, or type, of TPM (Trusted Platform Module); this can be done in many ways, and the Secure Enclave is one of the best.
The short answer to your question is yes, it can be done, as long as you implement the needed code and add all the required configuration, SSL, etc.
I would advise you to read the following: http://ieeexplore.ieee.org/document/5703613/?reload=true
And in case you want a TPM out of the box, you can use this Microsoft project: https://github.com/Microsoft/TSS.MSR
AWS has a different approach to security. You can set what can use a particular resource, and in which way.
You can certainly do what was described: you can identify the request, the response, and the exact version of the code that was used. The question is whether you want to sign the code while processing the request; the easier way is to have that calculated on deploy.
For the first case, you need a language with access to its own source. With Python, for example, you can get the source, sign it, and return the signature or store it somewhere.
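A minimal sketch of that idea, assuming a single-file Python handler and leaving the actual signing (e.g. with a private key held in KMS) out; all names are hypothetical:

    import hashlib
    import json

    def do_work(event):
        # Placeholder for the actual computation performed for the 3rd party.
        return {"echo": event}

    def handler(event, context):
        result = do_work(event)

        # Digest over the handler's own source plus the request and response.
        # A real setup would sign this digest rather than just return it.
        with open(__file__, "rb") as f:
            source = f.read()
        digest = hashlib.sha256(
            source
            + json.dumps(event, sort_keys=True).encode()
            + json.dumps(result, sort_keys=True).encode()
        ).hexdigest()

        return {"result": result, "digest": digest}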
For the second case, I would use tagging.
There is also another solution to the problem, using IAM. You can provision an IAM role for your customer that has read access to the Lambda source code. By using the public Lambda endpoint (the one that looks like https://api-id.execute-api.region.amazonaws.com/STAGE), you can assure the customer that the request is directly hitting this specific Lambda function.
The IAM role available to your customer has permissions to do the following (a sketch of such a policy follows the list):
• View the Lambda code and other details across all revisions
• Read the API Gateway configuration to validate that the request directly hits the Lambda and doesn't go elsewhere
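A sketch of such a read-only policy (the ARNs and names are placeholders, and the API Gateway statement is broader than strictly necessary):

    import json
    import boto3

    iam = boto3.client("iam")

    # Read-only access to the Lambda's code and configuration across versions,
    # plus read access to the API Gateway configuration.
    audit_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "lambda:GetFunction",
                    "lambda:GetFunctionConfiguration",
                    "lambda:ListVersionsByFunction",
                ],
                "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function*",
            },
            {
                "Effect": "Allow",
                "Action": ["apigateway:GET"],
                "Resource": "arn:aws:apigateway:us-east-1::/restapis/*",
            },
        ],
    }

    iam.create_policy(
        PolicyName="customer-lambda-audit",  # hypothetical name
        PolicyDocument=json.dumps(audit_policy),
    )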
All your customer needs to do then is set up auditing at their end against the Lambda by using the given IAM role. They can set up a periodic cron job that downloads every version of your Lambda as it is updated. If you have a pre-review process, that can easily be wired into their alerting.
Note that this relies on "AWS" running in good faith and the underlying assumptions being:
• AWS Lambda is running the code it is configured against.
• AWS management APIs return correct responses.
• The time-to-alert is reasonable. This is easier, since you can download previous Lambda code versions as well.
All of these are reasonable assumptions.
I have a server that uses AWS SNS intensively. Of course, I have multiple environments (dev, QA, production, custom, etc.).
Knowing that SNS lets you register only one endpoint per token (so, AFAIK, I can't have two different endpoints with the same token, even if they are created from different platform applications), how can I manage separation between my different environments?
EDIT: all our environments are in the same AWS account.
SNS does allow you to subscribe multiple HTTP/HTTPS endpoints to a single Topic but it sounds like you need an SNS topic per environment so that the dev Topic dispatches requests to a dev HTTP endpoint.
The recommended AWS strategy for multiple environments in 2017 is to use multiple accounts -- one per environment. You can use consolidated billing for all of your accounts.
If you separated them then you would wind up creating an SNS topic for each environment and each would publish requests to the appropriate endpoint for that environment.
The single account solution would be to create one Topic per environment and to update your app config or environment variables to use the ARN appropriate to the environment.
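A sketch of that single-account approach, with the topic ARN resolved from configuration (the ARN and variable name are placeholders):

    import os
    import boto3

    # Each environment's config carries its own topic ARN,
    # e.g. arn:aws:sns:eu-west-1:123456789012:orders-dev vs orders-prod.
    topic_arn = os.environ["SNS_TOPIC_ARN"]

    sns = boto3.client("sns")
    sns.publish(TopicArn=topic_arn, Message="hello from this environment")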
If your platformApplication is Android, then you can use the same GCM/FCM server key to create multiple platformApplicationARNs with different names (one per env, I'd assume).
If it is iOS, you would have a dev key and a prod key for one application. I doubt you will be able to create multiple platformApplicationARNs with the same key using different names. Try it; if it works, you're set!
Next, you should be able to register the same deviceToken with each of these different platformApplicationARNs (I have tried this; it worked). This behaviour is similar to one mobile device registering with different applications for notifications.
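A sketch of that flow for Android/FCM with boto3 (the application names, server key, and device token are placeholders):

    import boto3

    sns = boto3.client("sns")

    # One platform application per environment, all created from the same
    # GCM/FCM server key, just with different names.
    app_arns = {}
    for env in ("dev", "qa", "prod"):
        resp = sns.create_platform_application(
            Name=f"myapp-{env}",  # hypothetical names
            Platform="GCM",
            Attributes={"PlatformCredential": "FCM_SERVER_KEY"},  # placeholder
        )
        app_arns[env] = resp["PlatformApplicationArn"]

    # The same device token can then be registered once per platform application.
    device_token = "DEVICE_TOKEN"  # placeholder
    endpoint_arns = {
        env: sns.create_platform_endpoint(
            PlatformApplicationArn=arn, Token=device_token
        )["EndpointArn"]
        for env, arn in app_arns.items()
    }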
Can AWS IAM be used to control access for custom applications? I heavily rely on IAM for controlling access to AWS resources. I have a custom Python app that I would like to extend to work with IAM, but I can't find any references to this being done by anyone.
I've considered the same thing, and I think it's theoretically possible. The main issue is that there's no call available in IAM that determines if a particular call is allowed (SimulateCustomPolicy may work, but that doesn't seem to be its purpose so I'm not sure it would have the throughput to handle high volumes).
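For illustration, SimulateCustomPolicy can be called like this against a standard AWS action (a sketch; the bucket name is a placeholder, and whether it is suitable for evaluating your own application's custom actions is exactly the open question):

    import json
    import boto3

    iam = boto3.client("iam")

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::my-bucket/*",
            }
        ],
    }

    # Ask IAM whether the given policy would allow the action on the resource.
    result = iam.simulate_custom_policy(
        PolicyInputList=[json.dumps(policy)],
        ActionNames=["s3:GetObject"],
        ResourceArns=["arn:aws:s3:::my-bucket/reports/2020.csv"],
    )
    print(result["EvaluationResults"][0]["EvalDecision"])  # "allowed" or "implicitDeny"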
As a result, you'd have to write your own IAM policy evaluator for those custom calls. I don't think that's inherently a bad thing, since it's also something you'd have to build for any other policy-based system. And the IAM policy format seems reasonable enough to be used.
I guess the short answer is, yes, it's possible, with some work. And if you do it, please open source the code so the rest of us can use it.
The only way you can manage users and create roles and groups is if you have admin access; power users can do everything but that.
You can create a group with all the privileges you want to grant, and then create a user that gets its policies from that group. Create the user with programmatic access only, so the app can connect with an access key ID and secret key via the AWS CLI.
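A sketch of that setup with boto3 (the group, user, and policy names are placeholders; the attached policy is just an example):

    import boto3

    iam = boto3.client("iam")

    # Group that carries the privileges you want to grant.
    iam.create_group(GroupName="my-app-group")
    iam.attach_group_policy(
        GroupName="my-app-group",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",  # example
    )

    # User with programmatic access only: no console password, just an access key.
    iam.create_user(UserName="my-app-user")
    iam.add_user_to_group(GroupName="my-app-group", UserName="my-app-user")
    key = iam.create_access_key(UserName="my-app-user")["AccessKey"]
    print(key["AccessKeyId"])  # the secret is in key["SecretAccessKey"]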
Normally, IAM is used to create and manage AWS users and groups, and to set permissions that allow or deny their access to AWS resources.
If your Python app is consuming or interfacing with any AWS resource, such as S3, then you probably want to look into this.
connect-on-premise-python-application-with-aws
The Python application can be uploaded to an S3 bucket. The application runs on a server inside a company's on-premises data center. The focus of that tutorial is on the connection made to AWS.
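For instance, a Python app talking to S3 under such credentials is as simple as this sketch (the bucket and key names are placeholders):

    import boto3

    # Credentials come from the IAM user or role configured for the app
    # (e.g. environment variables or ~/.aws/credentials).
    s3 = boto3.client("s3")
    s3.upload_file("app.zip", "my-bucket", "releases/app.zip")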
Consider placing API Gateway in front of your Python app's routes.
Then you could control access using IAM.
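A sketch of what that looks like: an IAM policy that allows invoking only specific routes of the API in front of the app (the account ID, API ID, stage, and route are placeholders), attached to the users or roles that should have access, with IAM authorization enabled on the method:

    import json
    import boto3

    iam = boto3.client("iam")

    invoke_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "execute-api:Invoke",
                "Resource": "arn:aws:execute-api:us-east-1:123456789012:api-id/prod/GET/reports",
            }
        ],
    }

    iam.create_policy(
        PolicyName="invoke-reports-route",  # hypothetical name
        PolicyDocument=json.dumps(invoke_policy),
    )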
So this might be a silly question, but what is the point of using Amazon SQS if it requires a private and public key? If the client has the private and public key, they could probably discover the keys by decompiling the client or through some other means...
The only secure way I could think of would be to use a proxy (like PHP) that has the private and public keys. But then what is the point of using SQS in the first place? The main benefit of SQS (that I can see) is that it can scale upwards and you don't have to worry about how many messages you are receiving. But if you are going to be using a proxy then you will have to scale that too... I hope my concerns make sense?
Thanks
Your concerns would be valid if you had to give your secret key out for clients to pull data from the queue. However, the typical workflow involves using your AWS account ID for creating and modifying queues and perhaps pushing data onto the queues. Then you can set permissions with either the SQS addPermission action or set up a more finely controlled access policy. This means you would give read access only to a specific AWS account or to anonymous usage, but you would not allow other modifications.
So basically you have a couple of options. You could compile into your client application AWS keys that you have set up in advance with restricted permissions. A better approach, in my opinion, is to make the key files a configurable option on your client and tell the users of the client that they are responsible for getting their own AWS account and keys; they can then tell you what their AWS key is, and you can give them as fine-grained control as you want on a per-client basis.
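For example, granting another AWS account permission to read from a queue with the addPermission action looks roughly like this (the queue URL and account ID are placeholders):

    import boto3

    sqs = boto3.client("sqs")

    # Allow a specific AWS account to receive messages from this queue,
    # without granting it the ability to reconfigure or delete the queue.
    sqs.add_permission(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
        Label="client-read-access",
        AWSAccountIds=["210987654321"],  # the client's account
        Actions=["ReceiveMessage"],
    )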
These resources would be good for you to look at:
• Using the Access Policy Language
• Controlling User Access to your AWS account
• addPermission action for SQS