I have an implementation that uses the AWS SDK to connect to AWS IoT. It works well on Linux.
I am trying to port it to a FreeRTOS-based embedded system.
mbedTLS is used in the AWS SDK via its SSL wrapper.
There are small modifications on the mbedTLS side (I provide the time from SNTP to mbedTLS).
With mbedTLS debugging enabled, I can see that everything is fine and the handshake completes. But after the handshake I get a connection-close message from the AWS SDK.
ssl_cli.c : 3303 - client state: MBEDTLS_SSL_FLUSH_BUFFERS (14)
ssl_cli.c : 3303 - client state: MBEDTLS_SSL_HANDSHAKE_WRAPUP (15)
ssl_tls.c : 5024 - <= handshake wrapup
ssl_tls.c : 6346 - <= handshake
ssl_tls.c : 2701 - => write record
ssl_tls.c : 1258 - => encrypt buf
ssl_tls.c : 1400 - before encrypt: msglen = 125, including 0 bytes of padding
ssl_tls.c : 1560 - <= encrypt buf
ssl_tls.c : 2838 - output record: msgtype = 23, version = [3:3], msglen = 141
ssl_tls.c : 2416 - => flush output
ssl_tls.c : 2435 - message length: 146, out_left: 146
ssl_tls.c : 2441 - ssl->f_send() returned 146 (-0xffffff6e)
ssl_tls.c : 2460 - <= flush output
ssl_tls.c : 2850 - <= write record
ssl_tls.c : 6883 - <= write
ssl_tls.c : 6514 - => read
ssl_tls.c : 3728 - => read record
ssl_tls.c : 2208 - => fetch input
ssl_tls.c : 2366 - in_left: 0, nb_want: 5
ssl_tls.c : 2390 - in_left: 0, nb_want: 5
ssl_tls.c : 2391 - ssl->f_recv(_timeout)() returned 5 (-0xfffffffb)
ssl_tls.c : 2403 - <= fetch input
ssl_tls.c : 3488 - input record: msgtype = 21, version = [3:3], msglen = 26
ssl_tls.c : 2208 - => fetch input
ssl_tls.c : 2366 - in_left: 5, nb_want: 31
ssl_tls.c : 2390 - in_left: 5, nb_want: 31
ssl_tls.c : 2391 - ssl->f_recv(_timeout)() returned 26 (-0xffffffe6)
ssl_tls.c : 2403 - <= fetch input
ssl_tls.c : 1576 - => decrypt buf
ssl_tls.c : 2051 - <= decrypt buf
ssl_tls.c : 3961 - **got an alert message, type: [1:0]**
ssl_tls.c : 3976 - **is a close notify message**
As I read, "got an alert message, type: [1:0]" means AWS closes the connection, but why, and what does it mean?
I saw an "Application Data" entry in Wireshark, so I am probably receiving the AWS close alert in the middle of an application-data transaction.
I also saw a comment along the lines of "it means the certificate is not permissive enough for AWS", but I am using the same certificates on both the Linux and the embedded side.
Any ideas? How can I debug this?
I would recommend triple-checking your policy (and your certificates and the links between them and your thing).
I had the same issue, and the solution was to change the policy from:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "iot :*",
"Resource": "*"
}
]
}
to:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "iot:*",
"Resource": "*"
}
]
}
i.e., the stray space character was removed from the "Action" string: "iot :*" became "iot:*".
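A quick way to catch this class of typo is to scan the policy JSON for whitespace inside action strings before uploading it. A minimal sketch (findMalformedActions is a hypothetical helper, not part of any AWS SDK):

```javascript
// Hypothetical helper (not an AWS API): flag policy actions containing
// whitespace. "iot :*" silently matches nothing, so the broker accepts
// the TLS handshake and then closes the connection with close_notify.
function findMalformedActions(policy) {
  const bad = [];
  for (const stmt of policy.Statement) {
    const actions = Array.isArray(stmt.Action) ? stmt.Action : [stmt.Action];
    for (const action of actions) {
      if (/\s/.test(action)) bad.push(action);
    }
  }
  return bad;
}

const broken = {
  Version: "2012-10-17",
  Statement: [{ Effect: "Allow", Action: "iot :*", Resource: "*" }]
};
console.log(findMalformedActions(broken)); // → [ 'iot :*' ]
```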
I'm trying to send a GCM notification from an AWS Lambda function.
I have two questions:
1. What permissions does the execution role require?
2. Is the behaviour described below the best I can expect from AWS notifications, or do I need to look at another service?
I've followed info from another question:
Send a notification by Lambda function with AWS Pinpoint
My lambda is as follows:
const AWS = require('aws-sdk'); // assumed; the require and applicationId are not shown in the original snippet

async function sendMessage() {
try {
const pinpoint = new AWS.Pinpoint();
let users = {};
users["---"] = {};
const params = {
ApplicationId: applicationId,
SendUsersMessageRequest: {
Users: users,
MessageConfiguration: {
'GCMMessage': {
Action: 'OPEN_APP',
Title: "Lambda User Msg",
SilentPush: false,
Body: "Lambda User Send Message Test"
}
}
}
};
console.log("Params:", params);
let rspnData = await pinpoint.sendUsersMessages(params).promise();
console.log("sendUsersMessages:rspnData:", rspnData);
} catch (err) {
console.log("sendMessage error:", err);
}
}
1. Can someone advise on the minimal permissions required by the Lambda's execution role? I've currently allowed everything Pinpoint-related, but I would like to be certain what should actually be allowed.
2. When I test the code from the AWS Lambda console I get the following execution log:
2022-02-11T08:57:26.446Z --- INFO SendMessage
2022-02-11T08:57:26.466Z --- INFO Params: { ApplicationId: '---',
SendUsersMessageRequest: {
Users: { '---': {} },
MessageConfiguration: { GCMMessage: [Object] } } } END RequestId:
There is no output from sendUsersMessages!
ASIDE: I've increased the maximum execution time to 20 seconds, but this feels wrong!
Or I get an error of the form:
2022-02-11T09:38:32.721Z --- INFO sendMessage error: Error: Client network socket disconnected before secure TLS connection was established
at connResetException (internal/errors.js:639:14)
at TLSSocket.onConnectEnd (_tls_wrap.js:1570:19)
at TLSSocket.emit (events.js:412:35)
at TLSSocket.emit (domain.js:475:12)
at endReadableNT (internal/streams/readable.js:1334:12)
at processTicksAndRejections (internal/process/task_queues.js:82:21) {
code: 'TimeoutError',
path: null,
host: 'pinpoint.eu-west-2.amazonaws.com',
port: 443,
localAddress: undefined,
time: 2022-02-11T09:38:32.721Z,
region: 'eu-west-2',
hostname: 'pinpoint.eu-west-2.amazonaws.com',
retryable: true
}
END RequestId: ---
REPORT RequestId: --- Duration: 173.82 ms Billed Duration: 174 ms Memory Size: 128 MB Max Memory Used: 83 MB
Intermittently, the CloudWatch log shows the following output:
2022-02-11T08:54:11.847Z --- INFO sendUsersMessages:rspnData: {
SendUsersMessageResponse: {
ApplicationId: '---',
RequestId: '---',
Result: { '---': [Object] }
}
}
indicating the notification was actually sent successfully!
On some occasions I do get a notification in the app, but usually very delayed!
If I use the same Id's via aws cli:
aws pinpoint send-users-messages --cli-input-json file://pinpoint-send-users-messages.json
I get a prompt notification, as expected!
Is this the best I can hope for from AWS notifications? Do I need to look at another service, or am I doing something wrong?
In my case the Lambda has the following role. I don't know if it will be of any help, but please take a look.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:CreateLogGroup"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": "mobiletargeting:SendUsersMessages",
"Resource": "arn:aws:mobiletargeting:us-west-2:{your aws account id}:apps/{pinpoint project id}/messages"
},
{
"Effect": "Allow",
"Action": [
"mobiletargeting:GetEndpoint",
"mobiletargeting:UpdateEndpoint",
"mobiletargeting:PutEvents"
],
"Resource": "arn:aws:mobiletargeting:us-west-2:{your aws account id}:apps/{pinpoint project id}/endpoints/*"
}
]
}
When I try to push my IIS or MSSQL logs into CloudWatch, the logs from the server do appear, but they arrive as a single line in CloudWatch, whereas on the server they are two different events with different timestamps.
I've tried using "multi_line_start_pattern": "yyyy-MM-dd HH:mm:ss", but this doesn't solve my problem.
CloudWatch Json file:
{
"FullName": "AWS.EC2.Windows.CloudWatch.CustomLog.CustomLogInputComponent,AWS.EC2.Windows.CloudWatch",
"Id": "IISLogs",
"Parameters": {
"CultureName": "en-US",
"Encoding": "UTF-8",
"Filter": "",
"LineCount": "5",
"LogDirectoryPath": "C:\\logfiles",
"TimeZoneKind": "UTC",
"TimestampFormat": "\\%Y-%m-%d %H:%M:%S\\" (I also tried the "yyyy-MM-dd HH:mm:ss" format)
}
},
{
"FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
"Id": "CloudWatchIISLogs",
"Parameters": {
"LogGroup": "/application/iis",
"LogStream": "{instance_id}",
"Region": "eu-west-1",
"multi_line_start_pattern": "yyyy-MM-dd HH:mm:ss"
}
}
under flows:
"(IISLogs),CloudWatchIISLogs",
Logs I see in CW: the agent is not detecting the boundary between events, even though on the IIS server the logs are separated onto new lines. The same is happening for MSSQL.
I would expect the logs to be pushed into CW the same way they appear on the server/instance, unlike below:
Under Time: I have the timestamp.
Under Message: everything arrives as a single message, whereas it actually consists of multiple messages (3 events of user1):
2019-05-31 12:19:42 ::1 GET / - 80 user ::1 Mozilla/5.0+(Windows+NT+10.0;+WOW64;+Trident/7.0;+rv:11.0)+like+Gecko - 200 0 0 2032019-05-31 12:19:43 ::1 GET / - 80 user1 ::1 Mozilla/5.0+(Windows+NT+10.0;+WOW64;+Trident/7.0;+rv:11.0)+like+Gecko - 200 0 0 152019-05-31 12:19:43 ::1 GET /libs/jquery-1.7.1.min.js - 80 user1 ::1 Mozilla/5.0+(Windows+NT+10.0;+WOW64;+Trident/7.0;+rv:11.0)+like+Gecko http://localhost/ 304 0 0 02019-05-31 12:19:43 ::1 GET /libs/canvg/canvg.js - 80 user1 ::1
The status code merges into the next line's date/time, which is why the logs are not shown/split up properly.
Any help would be appreciated.
Thanks
I have got an answer to this: it was due to the agent we were using (SSM). After migrating to the CloudWatch agent, it is resolved.
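For anyone hitting the same thing with the unified CloudWatch agent: there, multi_line_start_pattern is a regular expression matched against the start of each line, not a date-format string. A sketch of the relevant section, assuming the same paths and group names used above:

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "C:\\logfiles\\*.log",
            "log_group_name": "/application/iis",
            "log_stream_name": "{instance_id}",
            "timestamp_format": "%Y-%m-%d %H:%M:%S",
            "multi_line_start_pattern": "^\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2}"
          }
        ]
      }
    }
  }
}
```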
I can't see the log group defined by the CloudWatch agent on my EC2 instance.
Also, the default log group /var/log/messages is not visible.
I can't see these logs on the root account either.
I have other log groups configured and visible.
I have the following setup:
Amazon Linux
AMI managed role attached to instance: CloudWatchAgentServerPolicy
Agent installed via awslogs - https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
Agent started successfully
No errors in /var/log/awslogs.log; it looks like it is working normally. Log below.
Configuration done via /etc/awslogs/config/FlaskAppAccessLogs.conf
Instance has outbound access to internet
Instance security groups allows all outbound traffic
Any ideas what to check or what can be missing?
/etc/awslogs/config/FlaskAppAccessLogs.conf:
cat /etc/awslogs/config/FlaskAppAccessLogs.conf
[/var/log/nginx/access.log]
initial_position = start_of_file
file = /var/log/nginx/access.log
datetime_format = %d/%b/%Y:%H:%M:%S %z
buffer_duration = 5000
log_group_name = FlaskApp-Frontends-access-log
log_stream_name = {instance_id}
/var/log/awslogs.log
2019-01-05 17:50:21,520 - cwlogs.push - INFO - 24838 - MainThread - Loading additional configs from /etc/awslogs/config/FlaskAppAccessLogs.conf
2019-01-05 17:50:21,520 - cwlogs.push - INFO - 24838 - MainThread - Missing or invalid value for use_gzip_http_content_encoding config. Defaulting to use gzip encoding.
2019-01-05 17:50:21,520 - cwlogs.push - INFO - 24838 - MainThread - Missing or invalid value for queue_size config. Defaulting to use 10
2019-01-05 17:50:21,520 - cwlogs.push - INFO - 24838 - MainThread - Using default logging configuration.
2019-01-05 17:50:21,544 - cwlogs.push.stream - INFO - 24838 - Thread-1 - Starting publisher for [c17fae93047ac481a4c95b578dd52f94, /var/log/messages]
2019-01-05 17:50:21,550 - cwlogs.push.stream - INFO - 24838 - Thread-1 - Starting reader for [c17fae93047ac481a4c95b578dd52f94, /var/log/messages]
2019-01-05 17:50:21,551 - cwlogs.push.reader - INFO - 24838 - Thread-4 - Start reading file from 0.
2019-01-05 17:50:21,563 - cwlogs.push.stream - INFO - 24838 - Thread-1 - Starting publisher for [8ff79b6440ef7223cc4a59f18e5f3aef, /var/log/nginx/access.log]
2019-01-05 17:50:21,587 - cwlogs.push.stream - INFO - 24838 - Thread-1 - Starting reader for [8ff79b6440ef7223cc4a59f18e5f3aef, /var/log/nginx/access.log]
2019-01-05 17:50:21,588 - cwlogs.push.reader - INFO - 24838 - Thread-6 - Start reading file from 0.
2019-01-05 17:50:27,838 - cwlogs.push.publisher - WARNING - 24838 - Thread-5 - Caught exception: An error occurred (ResourceNotFoundException) when calling the PutLogEvents operation: The specified log group does not exist.
2019-01-05 17:50:27,839 - cwlogs.push.batch - INFO - 24838 - Thread-5 - Creating log group FlaskApp-Frontends-access-log.
2019-01-05 17:50:27,851 - cwlogs.push.publisher - WARNING - 24838 - Thread-3 - Caught exception: An error occurred (ResourceNotFoundException) when calling the PutLogEvents operation: The specified log group does not exist.
2019-01-05 17:50:27,851 - cwlogs.push.batch - INFO - 24838 - Thread-3 - Creating log group /var/log/messages.
2019-01-05 17:50:27,966 - cwlogs.push.batch - INFO - 24838 - Thread-5 - Creating log stream i-0d7e533f67870ff8d.
2019-01-05 17:50:27,980 - cwlogs.push.batch - INFO - 24838 - Thread-3 - Creating log stream i-0d7e533f67870ff8d.
2019-01-05 17:50:28,077 - cwlogs.push.publisher - INFO - 24838 - Thread-5 - Log group: FlaskApp-Frontends-access-log, log stream: i-0d7e533f67870ff8d, queue size: 0, Publish batch: {'skipped_events_count': 0, 'first_event': {'timestamp': 1546688052000, 'start_position': 0L, 'end_position': 161L}, 'fallback_events_count': 0, 'last_event': {'timestamp': 1546708885000, 'start_position': 4276L, 'end_position': 4468L}, 'source_id': '8ff79b6440ef7223cc4a59f18e5f3aef', 'num_of_events': 24, 'batch_size_in_bytes': 5068}
Status of awslogs
sudo service awslogs status
awslogs (pid 25229) is running...
IAM role policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"cloudwatch:PutMetricData",
"ec2:DescribeTags",
"logs:PutLogEvents",
"logs:DescribeLogStreams",
"logs:DescribeLogGroups",
"logs:CreateLogStream",
"logs:CreateLogGroup"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ssm:GetParameter"
],
"Resource": "arn:aws:ssm:*:*:parameter/AmazonCloudWatch-*"
}
]
}
It seems that posting a question can quickly help you find the answer.
There is an additional configuration file in which I had made a typo:
sudo cat /etc/awslogs/awscli.conf
[plugins]
cwlogs = cwlogs
[default]
region = us-west-1
As configured above, the logs were being delivered to the us-west-1 region.
I was checking us-west-2 :)
I have a problem with an AWS Elastic Beanstalk worker with SQS. I have read many resources and experimented with it, but I still can't connect the worker to SQS successfully.
The worker, which acts as a consumer, is built with Node.js and Hapi. I have tested this script using curl on my local computer and it works well.
var Hapi = require('hapi');
var Good = require('good');
var server = new Hapi.Server();
server.connection({
port: process.env.PORT || 3000
});
server.route({
method: 'POST',
path: '/hello',
handler: function (request, reply) {
console.log('CIHUUY response: ', request.payload);
reply();
}
});
server.register({
register: require('good'),
options: {
reporters: [{
reporter: require('good-console'),
events: { log: '*', response: '*' }
}]
}
}, function (err) {
if (err) {
console.error(err);
}
else {
server.start(function () {
console.info('Server started at ' + server.info.uri);
});
}
});
My IAM Policy for the worker
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "QueueAccess",
"Action": [
"sqs:ChangeMessageVisibility",
"sqs:DeleteMessage",
"sqs:ReceiveMessage",
"sqs:SendMessage"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Sid": "MetricsAccess",
"Action": [
"cloudwatch:PutMetricData"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
For the queue, I set the permissions to allow everybody to access it. I gave it the name testingqueue.
Worker configuration:
I checked the log /var/log/nodejs/nodejs.log:
post /hello {} 400 (2ms)
150701/094208.444, [response], http://ip-10-142-107-58:8081: post /hello {} 400 (1ms)
150701/094208.773, [response], http://ip-10-142-107-58:8081: post /hello {} 400 (2ms)
150701/094208.792, [response], http://ip-10-142-107-58:8081: post /hello {} 400 (1ms)
150701/094208.882, [response], http://ip-10-142-107-58:8081: post /hello {} 400 (1ms)
150701/094208.951, [response], http://ip-10-142-107-58:8081: post
I also checked the aws-sqsd log /var/log/aws-sqsd/default.log:
2015-07-01T09:44:40Z http-err: 75704523-42de-40de-9f9f-8a59eb3fb332 (7324) 400 - 0.004
2015-07-01T09:44:40Z message: sent to %[http://localhost:80]
2015-07-01T09:44:40Z http-err: 59e2a75b-87f7-4833-8cde-11900d48a7c5 (3770) 400 - 0.007
2015-07-01T09:44:40Z message: sent to %[http://localhost:80]
2015-07-01T09:44:40Z http-err: e2acb4e0-1059-4dc7-9101-8d3e4c974108 (7035) 400 - 0.003
2015-07-01T09:44:40Z message: sent to %[http://localhost:80]
2015-07-01T09:44:40Z http-err: 04d2436a-0b1e-4a1f-8826-a2b30710f569 (9957) 400 - 0.005
I keep getting error 400, and I'm curious why it can't connect.
Things I have done:
The queue is created and selected in the worker configuration.
The worker already uses the correct IAM policy.
The HTTP path matches: /hello with the POST method.
Can anyone help me here?
Thank you
There is nothing wrong with your code. I just deployed it to AWS with no problem.
/var/log/nodejs/nodejs.log
Server started at http://myIP:8081
CIHUUY response: { test: 'testvalue' }
151118/110150.862, [response], http://myIP:8081: post /hello {} 200 (38ms)
/var/log/aws-sqsd/default.log
2015-11-18T10:58:38Z init: initializing aws-sqsd 2.0 (2015-02-18)
2015-11-18T10:58:39Z start: polling https://sqs.us-west-2.amazonaws.com/myaccountname/awseb-e-mnpfjxiump-stack-AWSEBWorkerQueue-1FPDK4Z8E3WRX
2015-11-18T11:01:50Z message: sent to %[http://localhost:80]
As you can see, my test message was received. The only difference I can see is that I have an autogenerated queue, but that should not be a problem, because your logs show that the daemon took the message from the queue and forwarded it. I use the same policy as you do for my worker.
I am sending emails through a Node.js app to Mailgun. I keep getting these 421 Syntax error messages. Sometimes the messages do end up going through; here is the history for one message:
Date/Time Summary
2015-05-07 16:14 Delivered: sender → recipient 'You have a new notification'
2015-05-07 15:14 Will retry in 3600 seconds: sender → recipient 'You have a new notification' Server response: 421 421 Syntax error
2015-05-07 14:43 Will retry in 1800 seconds: sender → recipient 'You have a new notification' Server response: 421 421 Syntax error
2015-05-07 14:28 Will retry in 900 seconds: sender → recipient 'You have a new notification' Server response: 421 421 Syntax error
2015-05-07 14:18 Will retry in 600 seconds: sender → recipient 'You have a new notification' Server response: 421 421 Syntax error
2015-05-07 14:18 Accepted: sender → recipient 'You have a new notification'
*Email addresses redacted.
Here is what the log says for the 421 error:
{
"severity": "temporary",
"tags": [],
"delivery-status": {
"retry-seconds": 600,
"message": "421 Syntax error",
"code": 421,
"description": null,
"session-seconds": 0.16810393333435059
},
"envelope": {
"transport": "smtp",
"sender": sender,
"sending-ip": "184.173.153.222",
"targets": recipient
},
"recipient-domain": domain,
"id": "TdCQ8omOSwqj_zYq18CBdQ",
"campaigns": [],
"reason": "generic",
"user-variables": {},
"flags": {
"is-routed": null,
"is-authenticated": true,
"is-system-test": false,
"is-test-mode": false
},
"log-level": "warn",
"timestamp": 1431029901.450764,
"message": {
"headers": {
"to": recipient,
"message-id": "20150507201819.16176.81911#mailgundomain",
"from": sender,
"subject": "You have a new notification"
},
"attachments": [],
"recipients": [
recipient
],
"size": 1036
},
"recipient": recipient,
"event": "failed"
}
I am new to Mailgun and I am building the emails raw (headers and all). A 421 is supposed to indicate a network error, so "Syntax error" doesn't make sense to me.
Some of the messages go through fine, but an awful lot are getting retried.
Any thoughts?
Thanks
I created a ticket with Mailgun support and they quickly helped me find the answer:
The error that you are seeing is due to the recipient's server either:
1) throttling emails sent from your domain, which is also known as ESP throttling;
2) grey-listing the IP, in which case the recipient server first verifies that the sending server is not sending spam before allowing delivery; or
3) a local server issue, such as the server being offline or misconfigured.
The error code "4xx" indicates that this is a soft, temporary bounce. Whenever we attempt to deliver a message and the recipient server returns a soft bounce, we will retry delivery for up to 8 hours at the following intervals: 10 minutes, 10 minutes, 15 minutes, 30 minutes, 1 hour, 2 hours and 4 hours. Unfortunately this cannot be adjusted and is hard-coded in our environment.
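As a sanity check on the quoted schedule, the stated intervals do sum to roughly eight hours:

```javascript
// Mailgun's stated soft-bounce retry intervals, in minutes.
const intervals = [10, 10, 15, 30, 60, 120, 240];
const totalMinutes = intervals.reduce((a, b) => a + b, 0);
console.log(totalMinutes, 'minutes =', (totalMinutes / 60).toFixed(2), 'hours');
// → 485 minutes = 8.08 hours
```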
I checked with our admin and we had some anti-spam turned on for the mail server. We turned it off and no longer get '421 Syntax Errors'.
Thanks