I am new to AWS and IoT. I am trying to connect my Raspberry Pi with AWS IoT and from there push the data to Elasticsearch.
I have managed to connect the Raspberry Pi to AWS IoT and am able to see the data I send from the Raspberry Pi through "Test" in the IoT console.
I have created a topic rule, configured the Elasticsearch endpoint, and added an IAM role.
In the rule destination I've entered the Elasticsearch endpoint and created it, but it shows the following message:
"Awaiting confirmation. Confirmation message sent on 2020-11-26T11:22:55.602Z. The destination responded with HTTP status code - 405."
I am unable to see any message in the CloudWatch logs to get the confirmation token and confirm the destination.
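From what I understand, once the token does show up, the destination could be confirmed with something like the following boto3 sketch (the region and the token value are just placeholders):

import boto3

# Sketch: confirm an IoT topic rule destination once the confirmation token
# is known. The token below is a placeholder; the real one arrives in the
# confirmation message / CloudWatch logs.
iot = boto3.client("iot", region_name="eu-west-1")
iot.confirm_topic_rule_destination(confirmationToken="YOUR_CONFIRMATION_TOKEN")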
My Elasticsearch access policy is set to open access to the domain, and the IoT policy is as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:*",
      "Resource": "*"
    }
  ]
}
I don't know whether I am doing it correctly. Please help me confirm the destination and stream the data to Elasticsearch.
Thanks in advance,
I have an Elasticsearch domain inside a VPC running in account A.
I want to deliver logs from Firehose in account B to the Elasticsearch in account A.
Is this possible?
When I try to create the delivery stream from the AWS CLI I get the exception below:
$: /usr/local/bin/aws firehose create-delivery-stream --cli-input-json file://input.json --profile devops
An error occurred (InvalidArgumentException) when calling the CreateDeliveryStream operation: Verify that the IAM role has access to the ElasticSearch domain.
The same IAM role and the same input.json work when modified to point to the Elasticsearch in account B. I have Transit Gateway connectivity enabled between the AWS accounts, and I can telnet to the Elasticsearch in account A from an EC2 instance in account B.
Adding my complete Terraform code (I got the same exception from the AWS CLI and from Terraform):
https://gist.github.com/karthikeayan/a67e93b4937a7958716dfecaa6ff7767
It looks like you haven't granted sufficient permissions to the role that is used when creating the stream (from the CLI example provided I'm guessing it's a role named 'devops'). At minimum you will need firehose:CreateDeliveryStream.
I suggest adding the below permissions to your role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:CreateDeliveryStream",
        "firehose:UpdateDestination"
      ],
      "Resource": "*"
    }
  ]
}
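If it helps, a rough boto3 sketch of attaching that inline policy to the role (assuming the role really is named "devops", which is only a guess from the CLI profile) would be:

import json
import boto3

# Sketch: attach the inline policy above to the role used for Firehose.
# "devops" is only a guess at the role name; the policy name is arbitrary.
iam = boto3.client("iam")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "VisualEditor0",
        "Effect": "Allow",
        "Action": [
            "firehose:PutRecord",
            "firehose:CreateDeliveryStream",
            "firehose:UpdateDestination"
        ],
        "Resource": "*"
    }]
}
iam.put_role_policy(
    RoleName="devops",
    PolicyName="firehose-delivery",
    PolicyDocument=json.dumps(policy),
)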
https://forums.aws.amazon.com/message.jspa?messageID=943731
I have been informed on the AWS forum that this feature is currently not supported.
You can set up Kinesis Data Firehose and its dependencies, such as Amazon Simple Storage Service (Amazon S3) and Amazon CloudWatch, to stream across different accounts. Streaming data delivery works for publicly accessible OpenSearch Service clusters whether or not fine-grained access control (FGAC) is enabled.
https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-firehose-cross-account-streaming/
I have set up an Amazon Linux AMI with the Kinesis Agent installed and configured to send the logs over to Firehose. The EC2 instance has an IAM role attached with the KinesisFirehoseFullAccess permission. However, I am receiving an inadequate-permissions error while the data is being sent over.
I know that I have provided the highest level of IAM Kinesis permissions but I am facing a blank wall now. I will, of course, trim the permissions down later but I first need to get this proof of concept working.
From the AWS Firehose console, I did a test send to the S3 bucket. This worked OK.
I created logs via the Fake Log Generator and then ran the agent service. The service is up and running.
User: arn:aws:sts::1245678012:assumed-role/FirstTech-EC2-KinesisFireHose/i-0bdf3adc7a4d97afa is not authorized to perform: firehose:PutRecordBatch on resource: arn:aws:firehose:ap-southeast-1:1245678012:deliverystream/firsttech-ingestion-weblogs (Service: AmazonKinesisFirehose; Status Code: 400; Error Code: AccessDeniedException;
localhost (Agent.MetricsEmitter RUNNING) com.amazon.kinesis.streaming.agent. Agent: Progress: 900 records parsed (220430 bytes), and 0 records sent successfully to destinations. Uptime: 840058ms
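To narrow this down, a quick boto3 sketch run on the instance itself should show whether the attached role can call firehose:PutRecordBatch at all (the stream name and region are taken from the error message above):

import boto3

# Sketch: try a single PutRecordBatch against the delivery stream named in
# the error, using whatever credentials the instance role provides.
firehose = boto3.client("firehose", region_name="ap-southeast-1")
resp = firehose.put_record_batch(
    DeliveryStreamName="firsttech-ingestion-weblogs",
    Records=[{"Data": b"test record\n"}],
)
print(resp["FailedPutCount"])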
I got this working for the AWS Kinesis Agent sending data to a Kinesis data stream, just in case anyone else here has issues.
I had the same issue after attaching the correct IAM role and policy permissions to an EC2 instance that needed to send records to a Kinesis data stream.
I just removed the references to AWS Firehose in the config file. You do not need to use keys embedded in the EC2 instance itself; the IAM role is sufficient.
Make sure you create the Kinesis Firehose delivery stream from the console, to ensure that all required IAM access is set up by default.
Next, on your instance, make sure your agent.json is correct:
{
  "cloudwatch.emitMetrics": true,
  "flows": [
    {
      "filePattern": "/var/log/pathtolog",
      "deliveryStream": "kinesisstreamname"
    }
  ]
}
Make sure the EC2 instance has the necessary permissions to send the data to Kinesis:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "cloudwatch:PutMetricData",
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "arn:aws:firehose:us-east-1:accountid:deliverystream/deliverstreamname"
    }
  ]
}
Also make sure your Kinesis Agent can collect data from any directory on the instance; the easiest way to do this is by adding the agent user to the sudo group:
sudo usermod -aG sudo aws-kinesis-agent-user
I want a certain HTTPS service to be called every time a file has been uploaded to an S3 bucket.
I have created the S3 bucket and an SNS topic with a verified subscription that has the HTTPS service as its endpoint.
I can publish a message on the SNS topic via the AWS UI, and see that the HTTPS service is called as expected.
On the S3 bucket I created an Event, which should link the bucket and the topic. On my first attempt I got an error because the bucket was not allowed to write to the topic, so, following the documentation, I changed the topic access policy to:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:eu-central-1:TOPIC_ID:OrderUpdates",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "ACCOUNT_ID"
        },
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:s3:*:*:*"
        }
      }
    }
  ]
}
where TOPIC_ID is the topic owner ID, which can be seen when the topic is shown in the AWS UI, and ACCOUNT_ID is the account ID shown under account settings in the AWS UI.
This change in the topic access policy allowed me to create the event on the bucket.
When I call the API method getBucketNotificationConfiguration I get:
{
  "TopicConfigurations": [
    {
      "Id": "OrderFulfilled",
      "TopicArn": "arn:aws:sns:eu-central-1:TOPIC_ID:OrderUpdates",
      "Events": [
        "s3:ObjectCreated:*"
      ]
    }
  ],
  "QueueConfigurations": [],
  "LambdaFunctionConfigurations": []
}
But the HTTPS service is not called. What am I missing in this setup, that will trigger the HTTPS service to be called by the SNS topic subscription every time a file is uploaded to the S3 bucket?
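For reference, the getBucketNotificationConfiguration check above done with boto3 looks roughly like this (the bucket name is a placeholder):

import boto3

# Sketch: read back the bucket's notification configuration.
s3 = boto3.client("s3")
print(s3.get_bucket_notification_configuration(Bucket="my-bucket"))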
Thanks,
-Louise
Having the same issue: the S3 upload event does not trigger an SNS message even though our SNS access policy is correctly set. It turns out we can NOT use the Enable encryption option, since S3 events are triggered via CloudWatch Alarms, which do not work with encrypted SNS topics as of now.
Switch back to the Disable encryption option, and everything works again.
To reproduce this situation, I did the following:
Created an Amazon SNS topic and subscribed my phone via SMS (a good way to debug subscriptions!)
Created an Amazon S3 bucket with an Event pointing to the Amazon SNS topic
I received this error message:
Unable to validate the following destination configurations. Permissions on the destination topic do not allow S3 to publish notifications from this bucket.
I then added the policy you show above (adjusted for my account and SNS ARN)
This allowed the Event to successfully save
Testing
I then tested the event by uploading a file to the S3 bucket.
I received an SMS very quickly
So, it would appear that your configuration should successfully enable a message to be sent via Amazon SNS. This suggests that the problem lies with the HTTPS subscription, either in sending it from SNS or in receiving it in the application.
I recommend that you add an email or SMS subscription to verify whether Amazon SNS is receiving the notification and forwarding it to subscribers. If this works successfully, then you will need to debug the receipt of the message in the HTTPS application.
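A quick boto3 sketch of adding such a temporary subscription (using the topic ARN from the question; the email address is a placeholder) could be:

import boto3

# Sketch: add a temporary email subscription to the topic, purely to verify
# that S3 -> SNS delivery is working. TOPIC_ID is still a placeholder.
sns = boto3.client("sns", region_name="eu-central-1")
sns.subscribe(
    TopicArn="arn:aws:sns:eu-central-1:TOPIC_ID:OrderUpdates",
    Protocol="email",
    Endpoint="you@example.com",
)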
You must add a TopicConfiguration.
Read more about enabling event notifications.
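A minimal boto3 sketch of adding that TopicConfiguration (the bucket name is a placeholder; the topic ARN is the one from the question) might look like:

import boto3

# Sketch: attach a TopicConfiguration to the bucket so that object-created
# events are published to the SNS topic.
s3 = boto3.client("s3")
s3.put_bucket_notification_configuration(
    Bucket="my-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {
                "Id": "OrderFulfilled",
                "TopicArn": "arn:aws:sns:eu-central-1:TOPIC_ID:OrderUpdates",
                "Events": ["s3:ObjectCreated:*"],
            }
        ]
    },
)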
What is the best approach to using AWS Elasticsearch with Node.js? I am using an AWS ECS EC2 instance to run my Docker containers, and I use the IAM role to access the other AWS resources, like an S3 bucket and DynamoDB, from Node.js.
Can we use the same procedure for accessing the AWS Elasticsearch endpoint too?
I added an inline policy to the existing role with the Elasticsearch endpoint ARN, but the Node.js SDK is not able to connect to ES. When the AWS key and ID are added as environment variables in the task definition it starts working, but I don't want to use that method as it will conflict with the other AWS resources. (It looks like the dev team has configured the program so that it looks for the env variables.)
It is for sure not the best method, but you can also use an IP-based restriction. We currently use this and it works fine. Just set an Elastic IP on your EC2 instance (if you haven't already) and put the IP address in the access policy like this:
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"XXX.XXX.XXX.XXX",
]
}
}
For anybody else stumbling across this, here are a few things I learnt whilst I was stuck on something similar:
The EC2 role's ARN can be added to the access policy for your Elasticsearch domain along with the permissions you want the role to have. For example, for an EC2 instance running with the role "aws-ec2" that needs permission to make HTTP GET requests to ES, you could have the following in your ES domain access policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<ACCOUNT_ID>:role/aws-ec2"
        ]
      },
      "Action": "es:ESHttpGet",
      "Resource": "arn:aws:es:<REGION>:<ACCOUNT_ID>:domain/<DOMAIN_NAME>/*"
    }
  ]
}
Any requests made by an EC2 instance running with the role "aws-ec2" in your account will then have access to Elasticsearch.
Note that if you have trouble getting credentials, try the following:
const AWS = require('aws-sdk');

AWS.config.getCredentials(function (err) {
  if (err) {
    // Credentials could not be loaded
    console.log(err.stack);
  } else {
    // Credentials are loaded and can be accessed via
    // AWS.config.credentials.accessKeyId, AWS.config.credentials.secretAccessKey, etc.
    console.log(AWS.config.credentials.accessKeyId);
  }
});
This will usually pull the credentials in like magic. I have a theory about how it works (tl;dr: I think it pulls them from the EC2 instance metadata by making a request to a fixed IP), but it's unproven, so I won't embarrass myself until I know more. Note that this should work even if you don't have credentials stored in your environment or in the shared credentials file.
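For the curious, that fixed-IP lookup roughly corresponds to querying the EC2 instance metadata service; a rough Python sketch (IMDSv1 style, without the session token that instances enforcing IMDSv2 additionally require) is:

import json
import urllib.request

# Sketch: fetch the temporary credentials for the instance's IAM role from
# the EC2 instance metadata service at its well-known address.
BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

role_name = urllib.request.urlopen(BASE, timeout=2).read().decode()
creds = json.loads(urllib.request.urlopen(BASE + role_name, timeout=2).read())
print(creds["AccessKeyId"], creds["Expiration"])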
I'm trying to publish a message via a Python Lambda function to AWS IoT.
I've subscribed to a topic ('test') in the IoT console and triggered the function, but the messages aren't getting delivered.
Python Code Snippet:
import json
import boto3

iot = boto3.client('iot-data', 'eu-west-1')
res = {
    "message": "Hello!"
}

iot.publish(
    topic='test',  # do we need to pass the thing name here?
    qos=0,
    payload=json.dumps(res))
IoT policy:
{ "Version": "2012-10-17", "Statement": [
{
"Effect": "Allow",
"Action": "iot:*",
"Resource": "*"
} ] }
Also, I have attached the correct IAM privileges to the Lambda function to publish to IoT.
Any help to point me in the right direction is much appreciated.
Figured this one out: my Lambda function is deployed in a VPC subnet without an internet connection. I created a NAT gateway, and now the subnet gets internet access through it.
P.S. Publishing messages to IoT (MQTT) needs an internet connection.