Cannot connect DynamoDB with Lambda - amazon-web-services

Here is my code
var dynamodb = new AWS.DynamoDB();
dynamodb.batchGetItem(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
and I got this error
{
  message: 'Could not load credentials from any providers',
  errno: 'ETIMEDOUT',
  code: 'CredentialsError',
  syscall: 'connect',
  address: 'x.x.x.x',
  port: 80,
  time: 2019-03-13T07:59:34.279Z,
  originalError: {
    errno: 'ETIMEDOUT',
    code: 'ETIMEDOUT',
    syscall: 'connect',
    address: 'x.x.x.x',
    port: 80,
    message: 'connect ETIMEDOUT x.x.x.x:80'
  }
}
I am new to AWS. I thought calling DynamoDB from Lambda does not need an access key and secret key. Is that correct?
I also granted the function a role with full DynamoDB access. What causes this problem?

I've faced the same problem here. The reason is that your Lambda function is inside a VPC while DynamoDB isn't. Try removing the VPC in the Network part of the function's settings and it should be solved. (If the function must stay in the VPC, adding a gateway VPC endpoint for DynamoDB to that VPC is an alternative.)
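For completeness, here is a minimal sketch of what the `params` for `batchGetItem` can look like; the table name `Users` and key attribute `userId` are hypothetical. Inside Lambda, the SDK resolves credentials from the execution role, so no keys appear in code:

```javascript
// Sketch only: 'Users' and 'userId' are hypothetical names.
// BatchGetItem takes a RequestItems map of table name -> keys to fetch,
// with each attribute value wrapped in its DynamoDB type descriptor.
var params = {
  RequestItems: {
    'Users': {
      Keys: [
        { userId: { S: '123' } },
        { userId: { S: '456' } }
      ]
    }
  }
};
// Pass this to dynamodb.batchGetItem(params, callback) as in the question.
```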

Related

Make a cross account call to Redshift Data API

Summary of problem:
We have an AWS Redshift cluster in Account A, this has a database called 'products'
In Account B we have a lambda function which needs to execute a SQL statement against 'products' using the Redshift Data API
We have set up a new secret in AWS Secrets Manager containing the Redshift cluster credentials. This secret has been shared with Account B, and we've confirmed Account B can access this information from AWS Secrets Manager.
When we call the Redshift Data API action 'executeStatement' we get the following error:
ValidationException: Cluster doesn't exist in this region.
at Request.extractError (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\protocol\json.js:52:27)
at Request.callListeners (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\sequential_executor.js:106:20)
at Request.emit (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\sequential_executor.js:78:10)
at Request.emit (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\request.js:688:14)
at Request.transition (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\request.js:22:10)
at AcceptorStateMachine.runTo (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\state_machine.js:14:12)
at C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\state_machine.js:26:10
at Request.<anonymous> (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\request.js:38:9)
at Request.<anonymous> (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\request.js:690:12)
at Request.callListeners (C:\git\repositories\sandbox\redshift\node_modules\aws-sdk\lib\sequential_executor.js:116:18)
The error message suggests it's perhaps not going to the correct account; since the secret contains this information, I would have expected it to know.
Code sample:
var redshiftdata = new aws.RedshiftData({ region: 'eu-west-2' });

const params: aws.RedshiftData.ExecuteStatementInput = {
  ClusterIdentifier: '<clusteridentifier>',
  Database: 'products',
  SecretArn: 'arn:aws:secretsmanager:<region>:<accountNo>:secret:<secretname>',
  Sql: `select * from product_table where id = xxx`,
  StatementName: 'statement-name',
  WithEvent: true
};

redshiftdata.executeStatement(params, async function(err, data) {
  if (err) console.log(err, err.stack);
  else {
    const resultParams: aws.RedshiftData.GetStatementResultRequest = { Id: data.Id! };
    redshiftdata.getStatementResult(resultParams, function(err, data) {
      if (err) console.log(err, err.stack);
      else console.dir(data, { depth: null });
    });
  }
});
Any suggestions or pointers would be really appreciated.
Thanks for the answer Parsifal. Here's a code snippet of the working solution.
import aws from "aws-sdk";

var roleToAssume = {
  RoleArn: 'arn:aws:iam::<accountid>:role/<rolename>',
  RoleSessionName: 'example',
  DurationSeconds: 900,
};

var sts = new aws.STS({ region: '<region>' });
sts.assumeRole(roleToAssume, function(err, data) {
  if (err) {
    console.log(err, err.stack);
  } else {
    aws.config.update({
      accessKeyId: data.Credentials?.AccessKeyId,
      secretAccessKey: data.Credentials?.SecretAccessKey,
      sessionToken: data.Credentials?.SessionToken
    });
    // Redshift code here...
  }
});
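The credential hand-off in the snippet above can be factored into a small pure helper; this is only a sketch (the helper name is hypothetical), mapping the STS response fields onto the keys `aws.config.update()` expects:

```javascript
// Sketch: map an STS assumeRole response onto the credential shape
// expected by aws.config.update(). No AWS calls are made here.
function toSdkCredentials(assumeRoleResponse) {
  var c = assumeRoleResponse.Credentials;
  return {
    accessKeyId: c.AccessKeyId,
    secretAccessKey: c.SecretAccessKey,
    sessionToken: c.SessionToken
  };
}
```

Usage: `aws.config.update(toSdkCredentials(data));` inside the assumeRole callback.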

How to use AWS EventBridge to reboot an instance when the environment is degraded

I'm trying to use EventBridge to reboot my degraded EC2 instance whenever the Beanstalk environment changes to Warn status. At the destination there is the option to call a Lambda function or use the reboot-instance API. My doubt is how to get the id of only the degraded instance (taking into account that my environment has 2 instances).
When Elastic Beanstalk dispatches an event for a health status change, it looks like this:
{
  "version": "0",
  "id": "1234a678-1b23-c123-12fd3f456e78",
  "detail-type": "Health status change",
  "source": "aws.elasticbeanstalk",
  "account": "111122223333",
  "time": "2020-11-03T00:34:48Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:elasticbeanstalk:us-east-1:111122223333:environment/myApplication/myEnvironment"
  ],
  "detail": {
    "Status": "Environment health changed",
    "EventDate": 1604363687870,
    "ApplicationName": "myApplication",
    "Message": "Environment health has transitioned from Pending to Ok. Initialization completed 1 second ago and took 2 minutes.",
    "EnvironmentName": "myEnvironment",
    "Severity": "INFO"
  }
}
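In the Lambda handler, the fields a reboot routine needs can be read straight off `event.detail`; a minimal sketch (the helper name is hypothetical):

```javascript
// Sketch: extract the relevant fields from an Elastic Beanstalk
// health-status-change event shaped like the sample above.
function parseHealthEvent(event) {
  return {
    environmentName: event.detail.EnvironmentName,
    status: event.detail.Status,
    severity: event.detail.Severity
  };
}
```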
In your AWS Lambda function you can use any Elastic Beanstalk command, via the AWS SDK for your language of choice.
Using the AWS JavaScript SDK for Elastic Beanstalk, you can restart your environment with restartAppServer:
var params = {
  EnvironmentName: event.detail.EnvironmentName // based on the sample above
};
elasticbeanstalk.restartAppServer(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
In the example above we would trigger a restart for all instances in the environment.
To target a specific instance, you can use describe-instances-health, renamed to describeInstancesHealth in the AWS JavaScript SDK:
var params = {
  AttributeNames: ["HealthStatus"],
  EnvironmentName: event.detail.EnvironmentName,
};
elasticbeanstalk.describeInstancesHealth(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
Based on its response, you can filter out the instance that isn't OK and trigger a restart by passing its instance id to the EC2 API rebootInstances:
var params = {
  InstanceIds: [
    "i-1234567890abcdef5"
  ]
};
ec2.rebootInstances(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
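Putting the two calls together, the filtering step in between can be a small pure function over the describeInstancesHealth response (a sketch; it assumes the `InstanceHealthList` field of that response):

```javascript
// Sketch: collect the ids of instances whose HealthStatus is not 'Ok'
// from a describeInstancesHealth response, ready for rebootInstances.
function degradedInstanceIds(data) {
  return data.InstanceHealthList
    .filter(function (i) { return i.HealthStatus !== 'Ok'; })
    .map(function (i) { return i.InstanceId; });
}
```

Usage: pass the result as `InstanceIds` to `ec2.rebootInstances`.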

Connection refused when connecting to redis on EC2 instance

I am trying to connect to a local Redis database on an EC2 instance from a Lambda function. However, when I try to execute the code, I get the following error in the logs:
{
  "errorType": "Error",
  "errorMessage": "Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379",
  "code": "ECONNREFUSED",
  "stack": [
    "Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379",
    "    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1106:14)"
  ],
  "errno": "ECONNREFUSED",
  "syscall": "connect",
  "address": "127.0.0.1",
  "port": 6379
}
The security group has the following entries
Type: Custom TCP Rule
Port: 6379
Source: <my security group name>
Type: Custom TCP Rule
Port: 6379
Source: 0.0.0.0/0
My Lambda function has the following code.
'use strict';
const Redis = require('redis');

module.exports.hello = async event => {
  var redis = Redis.createClient({
    port: 6379,
    host: '127.0.0.1',
    password: ''
  });
  redis.on('connect', function() {
    console.log("Redis client connected");
  });
  redis.set('age', 38, function(err, reply) {
    console.log(err);
    console.log(reply);
  });
  return {
    statusCode: 200,
    body: JSON.stringify(
      {
        message: 'The lambda function is called..!!',
        input: event,
        redis: redis.get('age')
      },
      null,
      2
    ),
  };
};
Please let me know where I am going wrong.
First thing: your Lambda is trying to connect to localhost, so this will not work. You have to use the public or private IP of the Redis instance.
But you still need to make sure of these things:
The Lambda should be in the same VPC as your EC2 instance
It should allow outbound traffic in its security group
It should be assigned a subnet
Your instance's security group should allow the Lambda to connect to Redis
const redis = require('redis');

const redis_client = redis.createClient({
  host: 'your_instance_IP',
  port: 6379
});

exports.handler = (event, context, callback) => {
  redis_client.set("foo", "bar");
  redis_client.get("foo", function(err, reply) {
    redis_client.unref();
    callback(null, reply);
  });
};
You can also look into this how-should-i-connect-to-a-redis-instance-from-an-aws-lambda-function
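Separately, note that callback-style client calls don't mix well with an async handler: the handler can return before the callbacks fire (as in the question, where `redis.get('age')` is returned before any reply arrives). One way around this is a small promise wrapper; this is only a sketch, and works with any client exposing callback-style `get`/`set`:

```javascript
// Sketch: wrap a callback-style redis client in promises so an async
// Lambda handler can await results instead of returning early.
function promisifyRedis(client) {
  return {
    get: function (key) {
      return new Promise(function (resolve, reject) {
        client.get(key, function (err, reply) {
          if (err) reject(err); else resolve(reply);
        });
      });
    },
    set: function (key, value) {
      return new Promise(function (resolve, reject) {
        client.set(key, value, function (err, reply) {
          if (err) reject(err); else resolve(reply);
        });
      });
    }
  };
}
```

With this, the handler can `await wrapped.set('age', 38)` and `await wrapped.get('age')` before building the response.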
On Linux Ubuntu Server 20.04 LTS I was seeing a similar error after a reboot of the EC2 server, which for our use case runs an Express app via a cron job: a Node.js app (installed with nvm) using Passport.js with sessions stored in Redis:
Redis error: Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED 127.0.0.1:6379
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1144:16) {
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: '127.0.0.1',
  port: 6379
}
What resolved it for me (my Node.js app runs as the ubuntu user, so I needed to make that path available) was to add the nvm bin directory to the PATH within /etc/crontab:
sudo nano /etc/crontab
Comment out the original PATH in there so you can switch back if required (my original PATH was set to PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin) and append the location of the bin you need to refer to, in the format:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/home/ubuntu/.nvm/versions/node/v12.20.0/bin
And the error disappeared for me.
// redisInit.js
const session = require('express-session');
const redis = require('redis');
const RedisStore = require('connect-redis')(session);

const { redisSecretKey } = process.env;
const redisClient = redis.createClient();

redisClient.on('error', (err) => {
  console.log('Redis error: ', err);
});

const redisSession = session({
  secret: redisSecretKey,
  name: 'some_redis_store_name',
  resave: true,
  saveUninitialized: true,
  cookie: { secure: false },
  store: new RedisStore({
    host: 'localhost', port: 6379, client: redisClient, ttl: 86400
  })
});

module.exports = redisSession;

Cannot get simple AWS web socket publish to work

I wrote this uber-simple client to publish a message to AWS IoT via the websocket protocol, using the JavaScript device SDK: https://github.com/aws/aws-iot-device-sdk-js
var awsIot = require('aws-iot-device-sdk');

var device = awsIot.device({
  region: "us-west-2",
  protocol: "wss",
  clientId: "ARUNAVS SUPER TEST",
  host: "iot.us-west-2.amazonaws.com",
  port: "443"
});

device.on('connect', function() {
  console.log('connect');
  device.publish('abcd', JSON.stringify({ test_data: 1 }));
});

device.on('message', function(topic, payload) {
  console.log('message', topic, payload.toString());
});

device.on('error', function(error) {
  console.log('error', error);
});
I am getting the following error (after importing admin creds as described at https://github.com/aws/aws-iot-device-sdk-js#websockets):
node testCode.js
error { Error: unexpected server response (403)
at ClientRequest._req.on
(/Users/arunavs/mrtests/node_modules/ws/lib/WebSocket.js:653:21)
at emitOne (events.js:96:13)
at ClientRequest.emit (events.js:188:7)
at HTTPParser.parserOnIncomingClient (_http_client.js:472:21)
at HTTPParser.parserOnHeadersComplete (_http_common.js:105:23)
at TLSSocket.socketOnData (_http_client.js:361:20)
at emitOne (events.js:96:13)
at TLSSocket.emit (events.js:188:7)
at readableAddChunk (_stream_readable.js:177:18)
at TLSSocket.Readable.push (_stream_readable.js:135:10)
type: 'error',
target:
WebSocket {
domain: null,
_events: {},
_eventsCount: 0,
_maxListeners: undefined,
readyState: 3,
bytesReceived: 0,
extensions: null,
protocol: '',
_binaryType: 'arraybuffer',
_finalize: [Function: bound finalize],
_closeFrameReceived: false,
_closeFrameSent: false,
_closeMessage: '',
_closeTimer: null,
_finalized: true,
The SDK fails to give any reason why I am getting a 403.
Note: according to https://github.com/aws/aws-iot-device-sdk-js/blob/234d170c865586f4e49e4b0946100d93f367ee8f/device/index.js#L142, the code is even presigning the URL using SigV4, since part of my output also has
url: 'wss://iot.us-west-2.amazonaws.com:443/mqtt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential= .......
Has anyone seen an error like this?
I think you are publishing to a thing which does not allow all users to connect to it.
Can you post the details of the policy of the thing that you are trying to publish a message to?
On the Create a policy page, in the Name field, type a name for the policy (for example, MyIoTButtonPolicy). In the Action field, type iot:Connect. In the Resource ARN field, type *. Select the Allow checkbox. This allows all clients to connect to AWS IoT.
Read more about POLICIES.
PS: This is just a wild guess. Please post policy details in the question so that I can be sure.
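For reference, the policy described above as a JSON policy document (wide open, so suitable only for testing; the publish call in your client would also need iot:Publish alongside iot:Connect):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iot:Connect", "iot:Publish"],
      "Resource": "*"
    }
  ]
}
```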

Change cognito user pool user status

Is it possible to change, with my Android app, a Cognito user pool user's status from FORCE_CHANGE_PASSWORD to CONFIRMED? Or from RESET_REQUIRED to CONFIRMED?
If yes, which API call can I use?
In fact, I imported users into Cognito and I can't find a way or any example of how to turn them to CONFIRMED status using my app.
Thanks
To change the Cognito user pool user status from FORCE_CHANGE_PASSWORD to CONFIRMED:
1. With the AWS CLI:
Get a session token with the temporary password:
aws cognito-idp admin-initiate-auth --user-pool-id us-west-2_xxxxxxx --client-id xxxxxxx --auth-flow ADMIN_NO_SRP_AUTH --auth-parameters USERNAME=xxx,PASSWORD=xxx
Set a new password with the session token:
aws cognito-idp admin-respond-to-auth-challenge --user-pool-id xxxx --client-id xxxx --challenge-name NEW_PASSWORD_REQUIRED --challenge-responses NEW_PASSWORD=xxx,USERNAME=xxx --session session_key_from_previous_token
2. With the AWS SDK:
Get a session token with the temporary password:
cognitoidentityserviceprovider.adminInitiateAuth({
  AuthFlow: 'ADMIN_NO_SRP_AUTH',
  ClientId: 'xxx',
  UserPoolId: 'xxx',
  AuthParameters: { USERNAME: 'xxx', PASSWORD: 'temporary_password' }
}, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
Set a new password with the session token:
var params = {
  ChallengeName: 'NEW_PASSWORD_REQUIRED',
  ClientId: 'xxxx',
  ChallengeResponses: {
    USERNAME: 'xxx',
    NEW_PASSWORD: 'xxx'
  },
  Session: 'session_key_from_previous_token'
};
cognitoidentityserviceprovider.respondToAuthChallenge(params, function(err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});
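The two SDK calls are linked by the Session value returned from adminInitiateAuth; a small sketch of carrying it across (the helper name is hypothetical):

```javascript
// Sketch: build the respondToAuthChallenge params from the
// adminInitiateAuth response, carrying the Session value across.
function challengeParams(authResponse, clientId, username, newPassword) {
  return {
    ChallengeName: 'NEW_PASSWORD_REQUIRED',
    ClientId: clientId,
    ChallengeResponses: {
      USERNAME: username,
      NEW_PASSWORD: newPassword
    },
    Session: authResponse.Session
  };
}
```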
Note: if you get an error like "Unable to verify secret hash for client", create another app client without a secret and use that.
To change the status of the user you just need to go through the respective flow. To change FORCE_CHANGE_PASSWORD to CONFIRMED, you would need to log in with the one-time password and then change your password. For RESET_REQUIRED, you would need to go through the Forgot Password flow, and that will change the status to CONFIRMED.