I am building an application where I need to connect multiple devices to AWS IoT, but I noticed that only the last device to connect stayed connected. It turned out I was using the same certificate for all devices; once I created a certificate for each device the problem was solved. However, there will be many devices, and registering them one by one is not practical. Is there a way to keep multiple devices connected to AWS IoT simultaneously without having to register certificates one at a time?
This mainly comes from: https://iot-device-management.workshop.aws/en/provisioning-options/bulk-provisioning.html.
There are other options (just-in-time provisioning, etc.) at the link above.
Create a bulk thing registration task
To create a bulk registration task, a role is required that grants permission to access the input file. This role has already been created by CloudFormation, and its ARN was copied during the setup of the workshop to the shell variable $ARN_IOT_PROVISIONING_ROLE.
aws iot start-thing-registration-task \
--template-body file://~/templateBody.json \
--input-file-bucket $S3_BUCKET \
--input-file-key bulk.json --role-arn $ARN_IOT_PROVISIONING_ROLE
When successful the command returns a taskId. The output looks similar to:
{
  "taskId": "aaaf0a94-b5a9-4bd6-a1f5-cf188322a111"
}
Provisioning templates
https://docs.aws.amazon.com/iot/latest/developerguide/provision-template.html
A provisioning template is a JSON document that uses parameters to describe the resources your device must use to interact with AWS IoT. A template contains two sections: Parameters and Resources. There are two types of provisioning templates in AWS IoT. One is used for just-in-time provisioning (JITP) and bulk registration and the second is used for fleet provisioning.
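As a rough sketch (not the workshop's exact template), a minimal template for bulk registration could look like the following. The parameter names ThingName and CSR and the policy name MyIoTPolicy are placeholders chosen for illustration:
{
  "Parameters": {
    "ThingName": {"Type": "String"},
    "CSR": {"Type": "String"}
  },
  "Resources": {
    "thing": {
      "Type": "AWS::IoT::Thing",
      "Properties": {"ThingName": {"Ref": "ThingName"}}
    },
    "certificate": {
      "Type": "AWS::IoT::Certificate",
      "Properties": {"CertificateSigningRequest": {"Ref": "CSR"}, "Status": "ACTIVE"}
    },
    "policy": {
      "Type": "AWS::IoT::Policy",
      "Properties": {"PolicyName": "MyIoTPolicy"}
    }
  }
}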
Script to create a provisioning template
https://github.com/aws-samples/aws-iot-device-management-workshop/blob/master/bin/mk-bulk.sh
Create bucket
aws s3api create-bucket \
--bucket bulk-iot-test \
--region ap-northeast-1 \
--create-bucket-configuration LocationConstraint=ap-northeast-1
Upload bulk.json (if using CloudShell, upload it via the UI) and copy it to S3:
aws s3 cp bulk.json s3://bulk-iot-test
aws s3 ls s3://bulk-iot-test
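For reference, bulk.json is newline-delimited JSON: one line per thing, with keys matching the template's Parameters. The thing names and the truncated CSR below are made-up placeholders:
{"ThingName": "device-0001", "CSR": "-----BEGIN CERTIFICATE REQUEST-----\n...\n-----END CERTIFICATE REQUEST-----\n"}
{"ThingName": "device-0002", "CSR": "-----BEGIN CERTIFICATE REQUEST-----\n...\n-----END CERTIFICATE REQUEST-----\n"}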
Create the role to register the things
From the CloudFormation template… This is incomplete and needs further refinement.
"DMWSIoTServiceRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Statement": [ {
"Effect": "Allow",
"Principal": {
"Service": [ "iot.amazonaws.com" ]
},
"Action": [ "sts:AssumeRole" ]
} ]
},
"ManagedPolicyArns": [
"arn:aws:iam::aws:policy/service-role/AWSIoTThingsRegistration",
"arn:aws:iam::aws:policy/service-role/AWSIoTLogging",
"arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
],
"Path": "/"
}
},
Start the thing registration task
aws iot start-thing-registration-task \
--template-body file://~/templateBody.json \
--input-file-bucket bulk-iot-test \
--input-file-key bulk.json --role-arn "arn:aws:iam::ACCOUNTID:role/ROLE"
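Once the task has run, you can pull per-thing results or errors from the task reports (TASK_ID is the taskId returned by start-thing-registration-task):
aws iot list-thing-registration-task-reports --task-id TASK_ID --report-type RESULTS
aws iot list-thing-registration-task-reports --task-id TASK_ID --report-type ERRORS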
Related
I'm learning about AWS; the specific use-case I'm looking at is having the addition of an object to an S3 bucket trigger an SNS notification that a Lambda subscribes to, thereby triggering the Lambda.
Online reading led me to the s3api put-bucket-notification-configuration page which says
The SNS topic must have an IAM policy attached to it that allows
Amazon S3 to publish to it
That led me to the sns add-permission page, whose request signature is:
add-permission
--topic-arn <value>
--label <value>
--aws-account-id <value>
--action-name <value>
Question: is it necessary to explicitly add-permission to a SNS topic even if the publisher is in the same account in which the topic was created?
The wording at the linked documentation implies that it is only necessary when the publisher is from a different account, but I'm not certain if I'm interpreting that correctly.
For example, in my experiments everything I work with is part of the same account:
$ aws sns list-topics --profile=admin --endpoint-url=http://localhost:4575
{
  "Topics": [
    {
      "TopicArn": "arn:aws:sns:us-east-1:000000000000:my-test-topic"
    }
  ]
}
$
$ aws --profile=lambda-admin --endpoint-url=http://localhost:4574 lambda list-functions
{
  "Functions": [
    {
      "FunctionName": "first_lambda",
      "FunctionArn": "arn:aws:lambda:us-east-1:000000000000:function:first_lambda",
      "Runtime": "python3.7",
      "Role": "arn:aws:iam::000000000000:role/lambda-role",
      "Handler": "first_lambda.lambda_handler",
      "CodeSize": 311,
      "Description": "",
      "Timeout": 5,
      "LastModified": "2020-06-16T05:10:16.311+0000",
      "CodeSha256": "jRcHzt34ZSDUCyx+INftvu14njRqGeSozKa0Uxv4J98=",
      "Version": "$LATEST",
      "TracingConfig": {
        "Mode": "PassThrough"
      },
      "RevisionId": "af64db69-0b5a-41ad-86c2-8467a60cf618",
      "State": "Active"
    }
  ]
}
(I haven't found a way to get the ARN of a S3 bucket from the CLI, but mine is associated with the same account).
Question: is it necessary to explicitly add-permission to a SNS topic even if the publisher is in the same account in which the topic was created?
The default policy for SNS contains the following:
"Principal": {
"AWS": "*"
}
This will allow any IAM entity (IAM user or role) from your account (enforced by a Condition, not shown here but present in the policy) to call SNS:Publish (among other things) on your topic. Those entities still have to have their own permission to publish, though. For example, a Lambda function still needs SNS permissions in its execution role, which confuses people.
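As a minimal sketch, a statement like this in the Lambda's execution role would cover publishing (the topic ARN is the example one from above):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:us-east-1:000000000000:my-test-topic"
    }
  ]
}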
The important thing to note is the AWS key. The AWS key in Principal does not include services, such as S3. The reason is that the Principal for the S3 service looks like this:
"Principal": {
"Service": "s3.amazonaws.com"
}
Therefore you have to explicitly allow service S3 to publish to your topic in the topic's policy.
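A sketch of such a statement added to the topic's access policy (the bucket name is a placeholder; the aws:SourceArn condition keeps arbitrary buckets from publishing):
{
  "Sid": "AllowS3ToPublish",
  "Effect": "Allow",
  "Principal": {"Service": "s3.amazonaws.com"},
  "Action": "SNS:Publish",
  "Resource": "arn:aws:sns:us-east-1:000000000000:my-test-topic",
  "Condition": {
    "ArnLike": {"aws:SourceArn": "arn:aws:s3:::your-bucket-name"}
  }
}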
I haven't found a way to get the ARN of a S3 bucket from the CLI, but mine is associated with the same account
The bucket ARN has known format:
arn:aws:s3:::bucket_name
or in China:
arn:aws-cn:s3:::bucket_name
So even if the CLI doesn't explicitly give you the ARN, you can always construct it yourself pretty easily. And if you are not sure whether your bucket is in China or not, you can use get-bucket-location to verify.
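For example (the bucket name is a placeholder):
aws s3api get-bucket-location --bucket YOUR_BUCKET
and then assemble the ARN yourself as arn:aws:s3:::YOUR_BUCKET (or arn:aws-cn:s3:::YOUR_BUCKET for the China partition).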
I have an Elasticsearch domain inside a VPC, running in Account A.
I want to deliver logs from Firehose in Account B to the Elasticsearch domain in Account A.
Is it possible?
When I try to create delivery stream from AWS CLI I am getting below exception,
$: /usr/local/bin/aws firehose create-delivery-stream --cli-input-json file://input.json --profile devops
An error occurred (InvalidArgumentException) when calling the CreateDeliveryStream operation: Verify that the IAM role has access to the ElasticSearch domain.
The same IAM role and the same input.json work when modified to point to the Elasticsearch domain in Account B. I have Transit Gateway connectivity enabled between the AWS accounts, and I can telnet to the Elasticsearch domain in Account A from an EC2 instance in Account B.
Adding my complete Terraform code (I got the same exception in the AWS CLI and also in Terraform):
https://gist.github.com/karthikeayan/a67e93b4937a7958716dfecaa6ff7767
It looks like you haven't granted sufficient permissions to the role that is used when creating the stream (from the CLI example provided I'm guessing it's a role named 'devops'). At minimum you will need firehose:CreateDeliveryStream.
I suggest adding the below permissions to your role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:CreateDeliveryStream",
        "firehose:UpdateDestination"
      ],
      "Resource": "*"
    }
  ]
}
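One way to attach this as an inline policy, assuming the guess above that 'devops' is an IAM role (for an IAM user you'd use put-user-policy instead; the policy and file names below are placeholders):
aws iam put-role-policy \
--role-name devops \
--policy-name firehose-delivery-stream-access \
--policy-document file://firehose-permissions.json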
https://forums.aws.amazon.com/message.jspa?messageID=943731
I have been informed on the AWS forum that this feature is currently not supported.
You can set up Kinesis Data Firehose and its dependencies, such as Amazon Simple Storage Service (Amazon S3) and Amazon CloudWatch, to stream across different accounts. Streaming data delivery works for publicly accessible OpenSearch Service clusters whether or not fine-grained access control (FGAC) is enabled.
https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-firehose-cross-account-streaming/
Within AWS, I am able to successfully pull code from an existing S3 bucket into a new repo within Codecommit upon stack creation of a pipeline in Cloudformation (using a YAML file).
This works perfectly, but I want the S3 bucket itself to be private rather than public, and I want some sort of auth mechanism so that a user has to supply the correct credentials to CloudFormation for it to pull code from my S3 bucket and populate the CodeCommit repo.
Which AWS service is best for me to do this? I was thinking of using an API gateway with Lambda authorizer, but I am interested in other AWS services that might make this easier.
Do AWS IAM roles and policies suit your needs? You can attach a bucket policy to S3 which allows specific users to download the code:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::AWS-account-ID:user/user-name-1",
          "arn:aws:iam::AWS-account-ID:user/user-name-2"
        ]
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR_BUCKET/SOME/PATH"
    }
  ]
}
and/or, if you don't have predefined users, you can allow Lambda to assume such a role, as explained here, and return credentials via API Gateway with a Lambda authorizer.
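A rough sketch of that flow: the Lambda (or any principal listed in the role's trust policy) assumes the role and hands the temporary credentials back through the API Gateway response. The role and session names below are made up:
aws sts assume-role \
--role-arn arn:aws:iam::AWS-account-ID:role/s3-code-read-role \
--role-session-name code-pull-session
# The response contains temporary AccessKeyId, SecretAccessKey and SessionToken
# that can then be used to call s3:GetObject on the bucket.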
I’ve tested a variation with wide policy access and got to the same point – the log group is created, but the log stream isn't.
I followed https://docs.aws.amazon.com/amazon-mq/latest/developer-guide/amazon-mq-configuring-cloudwatch-logs.html and the expected result is to see those messages in CloudWatch, but nothing is coming in.
The goal is to have audit and general MQ logs in CloudWatch.
Has anyone managed to stream MQ logs in CloudWatch? How could I further debug this?
I managed to create the Amazon MQ broker with logging enabled and publishing log messages to CloudWatch, using Terraform AWS provider 1.43.2 -- my project is locked to an older provider version, so if you're using a newer one you should be fine.
https://github.com/terraform-providers/terraform-provider-aws/blob/master/CHANGELOG.md#1430-november-07-2018
This was the policy that I didn't get right the first time, and that MQ needs in order to post to CloudWatch:
data "aws_iam_policy_document" "mq-log-publishing-policy" {
statement {
actions = [
"logs:CreateLogStream",
"logs:PutLogEvents",
]
resources = ["arn:aws:logs:*:*:log-group:/aws/amazonmq/*"]
principals {
identifiers = ["mq.amazonaws.com"]
type = "Service"
}
}
}
resource "aws_cloudwatch_log_resource_policy" "mq-log-publishing-policy" {
policy_document = "${data.aws_iam_policy_document.mq-log-publishing-policy.json}"
policy_name = "mq-log-publishing-policy"
}
Make sure this policy has been correctly applied, otherwise nothing will show up in CloudWatch. I verified it using the AWS CLI:
aws --profile my-testing-profile-name --region my-profile-region logs describe-resource-policies
and you should see the policy in the output.
Or, if you're using the AWS CLI, you can try:
aws --region [your-region] logs put-resource-policy --policy-name AmazonMQ-logs \
--policy-document '{
  "Statement": [
    {
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow",
      "Principal": {
        "Service": "mq.amazonaws.com"
      },
      "Resource": "arn:aws:logs:*:*:log-group:/aws/amazonmq/*"
    }
  ],
  "Version": "2012-10-17"
}'
Install the AWS CLI for Windows and configure your credentials: https://docs.aws.amazon.com/cli/latest/userguide/install-windows.html
Create a JSON file in "C:\Users\YOUR-USER\" containing your policy. For example: C:\Users\YOUR-USER\policy.json. You can simply copy this one here and paste into your .json file:
{"Version": "2012-10-17","Statement": [{"Effect": "Allow","Principal": {"Service": "mq.amazonaws.com"},"Action":["logs:CreateLogStream","logs:PutLogEvents"],"Resource" : "arn:aws:logs:*:*:log-group:/aws/amazonmq/*"}]}
Open your CMD and simply type:
aws --region eu-central-1 logs put-resource-policy --policy-name amazonmq_to_cloudwatch --policy-document file://policy.json
Well done! This will create an AWS resource policy, which sometimes is not possible to create in the IAM console.
If I am running a container in AWS ECS using EC2, then I can access the running container and execute any command in it, i.e.:
docker exec -it <containerid> <command>
How can I run commands in a running container, or access the container, in AWS ECS when using Fargate?
Update(16 March, 2021):
AWS announced a new feature called ECS Exec which provides the ability to exec into a running container on Fargate or even those running on EC2. This feature makes use of AWS Systems Manager (SSM) to establish a secure channel between the client and the target container. This detailed blog post from Amazon describes how to use this feature along with all the prerequisites and the configuration steps.
Original Answer:
With Fargate you don't get access to the underlying infrastructure, so docker exec doesn't seem possible. The documentation doesn't state this explicitly, but it is mentioned in this Deep Dive into AWS Fargate presentation by Amazon, on slide 19:
Some caveats: can’t exec into the container, or access the underlying
host (this is also a good thing)
There's also some discussion about it on this open issue in ECS CLI github project.
You could try to run an SSH server inside a container to get access but I haven't tried it or come across anyone doing this. It also doesn't seem like a good approach so you are limited there.
AWS Fargate is a managed service and it makes sense not to allow access into containers.
If you need to troubleshoot the container you can always increase the log level of the app running in it. Best practices on working with containers say:
"Docker containers are in fact immutable. This means that a running
container never changes because in case you need to update it, the
best practice is to create a new container with the updated version of
your application and delete the old one."
Hope it helps.
You need to provide a "Task role" for the Task Definition (this is different from the "Task execution role"). This can be done by first going to IAM.
IAM role creation
IAM > roles > create role
custom trust policy > copy + paste
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Add permission > Create Policy
JSON > replace YOUR_REGION_HERE & YOUR_ACCOUNT_ID_HERE & CLUSTER_NAME > copy + paste
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:YOUR_REGION_HERE:YOUR_ACCOUNT_ID_HERE:log-group:/aws/ecs/CLUSTER_NAME:*"
    }
  ]
}
Give it a name
go back to Add permissions > search by name > check > Next
Give a role name > create role
ECS new task
go back to ECS > go to task definition and create a new revision
select your new role for "Task role" (different than "Task execution role") > update Task definition
go to your service > update > ensure revision is set to latest > finish updating the service; it will stop the current task and should auto-provision your new task with its new role.
try again
Commands I used to exec in
Enable execute command on the service:
aws ecs update-service --cluster CLUSTER_NAME --service SERVICE_NAME --region REGION --enable-execute-command --force-new-deployment
Add the task ARN to an environment variable for easier CLI use. This assumes only one task is running for the service; otherwise just go to ECS, grab the ARN manually and set it for your CLI:
TASK_ARN=$(aws ecs list-tasks --cluster CLUSTER_NAME --service-name SERVICE_NAME --region REGION --output text --query 'taskArns[0]')
See the task:
aws ecs describe-tasks --cluster CLUSTER_NAME --region REGION --tasks $TASK_ARN
Exec in:
aws ecs execute-command --region REGION --cluster CLUSTER_NAME --task $TASK_ARN --container CONTAINER --command "sh" --interactive
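If execute-command still fails, a useful check (assuming a single container in the task) is whether the SSM ExecuteCommandAgent is actually running in the task:
aws ecs describe-tasks --cluster CLUSTER_NAME --region REGION --tasks $TASK_ARN --query 'tasks[0].containers[0].managedAgents'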
As of 16 March 2021, AWS has introduced ECS Exec, which can be used to run commands in a container running on either EC2 or Fargate.
The announcement is available at:
https://aws.amazon.com/about-aws/whats-new/2021/03/amazon-ecs-now-allows-you-to-execute-commands-in-a-container-running-on-amazon-ec2-or-aws-fargate/