I developed a sample app whose backend DB is Redshift, and I'm trying to execute a query with the following SDK code.
import { RedshiftDataClient, ExecuteStatementCommand } from '@aws-sdk/client-redshift-data';
export const resolvers: IResolvers<unknown, Context> = {
Query: {
user: (parent, args, context): User => ({ login: context.login }),
region: (): string => getRegion(),
getData: async () => {
const redshift_client = new RedshiftDataClient({});
const request = new ExecuteStatementCommand({
ClusterIdentifier: 'testrs',
Sql: `select * from test`,
SecretArn: 'arn:aws:secretsmanager:us-east-1:12345561:secret:test-HYRSWs',
Database: 'test',
});
try {
const data = await redshift_client.send(request);
console.log('data', data);
return data;
} catch (error) {
console.error(error);
throw new Error('Failed fetching data from Redshift');
} finally {
// execute regardless of error state
}
},
},
};
It returned the following error:
ERROR AccessDeniedException:
User: arn:aws:sts::12345561:assumed-role/WebsiteStack-Beta-US-EAST-GraphQLLambdaServiceRole1BCPB5P3Q4IS9/GraphQLLambda
is not authorized to perform: redshift-data:ExecuteStatement on resource: arn:aws:redshift:us-east-1:12345561:cluster:testrs
because no identity-based policy allows the redshift-data:ExecuteStatement action
Must I use an SDK package like STS?
If someone has an opinion or materials, will you please let me know?
Thanks
I know that when using the AWS SDK for Java V2 for the exact same use case, you can successfully query data by building an ExecuteStatementRequest object and passing it to the Data Client's executeStatement method, like this:
if (num == 5)
    sqlStatement = "SELECT TOP 5 * FROM blog ORDER BY date DESC";
else if (num == 10)
    sqlStatement = "SELECT TOP 10 * FROM blog ORDER BY date DESC";
else
    sqlStatement = "SELECT * FROM blog ORDER BY date DESC";
ExecuteStatementRequest statementRequest = ExecuteStatementRequest.builder()
.clusterIdentifier(clusterId)
.database(database)
.dbUser(dbUser)
.sql(sqlStatement)
.build();
ExecuteStatementResponse response = redshiftDataClient.executeStatement(statementRequest);
As shown here, the required values are clusterId, database, and dbUser.
I would assume the AWS SDK for JavaScript would work the same way. (I have not tried using that SDK however).
The reference docs confirm this...
Temporary credentials - when connecting to a cluster, specify the cluster identifier, the database name, and the database user name. Also, permission to call the redshift:GetClusterCredentials operation is required. When connecting to a serverless endpoint, specify the database name.
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-redshift-data/classes/executestatementcommand.html
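For illustration, here is a minimal sketch of what the equivalent call might look like with the AWS SDK for JavaScript v3 when using temporary credentials instead of a secret; this is untested, and the ClusterIdentifier, Database, and DbUser values are placeholders:

import { RedshiftDataClient, ExecuteStatementCommand } from '@aws-sdk/client-redshift-data';

const client = new RedshiftDataClient({});

// Temporary credentials: identify the cluster, database, and database user
// rather than a Secrets Manager ARN. The calling role still needs
// redshift-data:ExecuteStatement and redshift:GetClusterCredentials.
const command = new ExecuteStatementCommand({
  ClusterIdentifier: 'testrs', // placeholder cluster identifier
  Database: 'test',            // placeholder database name
  DbUser: 'awsuser',           // placeholder database user
  Sql: 'select * from test',
});

const data = await client.send(command);
console.log(data.Id); // statement id; fetch rows with GetStatementResultCommand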
I'm trying to import data into a Cloud SQL instance from a Cloud Storage bucket using a Cloud Function.
How can I delete the schemas before importing the data, using a single Cloud Function?
I am using Node.js in the Cloud Function.
error:
error: exit status 3 stdout(capped at 100k bytes): SET SET SET SET SET set_config ------------ (1 row) SET SET SET SET stderr: ERROR: schema "< >" already exists
https://cloud.google.com/sql/docs/mysql/admin-api/rest/v1beta4/instances/import
In the code below, where do I need to put the logic that deletes all existing schemas apart from the public schema?
Entry point: importDatabase
index.js
const {google} = require('googleapis');
const {auth} = require("google-auth-library");
var sqlAdmin = google.sqladmin('v1beta4');
exports.importDatabase = (_req, res) => {
async function doIt() {
const authRes = await auth.getApplicationDefault();
let authClient = authRes.credential;
var request = {
project: 'my-project', // TODO: Update placeholder value.
instance: 'my-instance', // TODO: Update placeholder value.
resource: {
importContext: {
kind: "sql#importContext",
fileType: "SQL", // CSV
uri: '<bucket path>',
database: '<database-name>'
// Options for importing data as SQL statements.
// sqlimportOptions: {
// /**
},
},
auth: authClient,
};
sqlAdmin.instances.import(request, function(err, result) {
if (err) {
console.log(err);
} else {
console.log(result);
}
res.status(200).send("Command completed");
});
}
doIt();
};
package.json
{
"name": "import-database",
"version": "0.0.1",
"dependencies": {
"googleapis": "^39.2.0",
"google-auth-library": "3.1.2"
}
}
The error looks to be occurring because a previous aborted import managed to transfer the "schema_name" schema, and this subsequent import was then run without first re-initializing the DB. Check the helpful documentation on Cloud SQL imports.
One way to prevent this issue is to change the create statements in the SQL file from:
CREATE SCHEMA schema_name;
to
CREATE SCHEMA IF NOT EXISTS schema_name;
As far as removing the currently created schemas is concerned: by default, only user or service accounts with the Cloud SQL Admin (roles/cloudsql.admin) or Owner (roles/owner) role have permission to delete a Cloud SQL instance. Please check the helpful documentation on cloudsql.instances.delete to understand the next steps. You can also define an IAM custom role for the user or service account that includes the cloudsql.instances.delete permission; this permission is supported in IAM custom roles.
As a best practice for import and export operations, we recommend adopting the principle of least privilege, which in this case means creating a custom role, adding that specific permission, and assigning it to your service account. Alternatively, the service account could be given the "Cloud SQL Admin" role or the "Cloud Composer API Service Agent" role, both of which include this permission and would therefore allow you to execute this command.
NOTE: It is recommended and advised to revalidate any delete actions performed, as they may lead to loss of useful data.
I'm trying to make a skill based on the Cake Time tutorial, but whenever I try to invoke my skill I get an error that I don't understand.
This is my launch request handler.
const LaunchRequestHandler = {
canHandle(handlerInput) {
console.log(`Can Handle Launch Request ${(Alexa.getRequestType(handlerInput.requestEnvelope) === "LaunchRequest")}`);
return (
Alexa.getRequestType(handlerInput.requestEnvelope) === "LaunchRequest"
);
},
handle(handlerInput) {
const speakOutput =
"Bem vindo, que série vai assistir hoje?";
console.log("handling launch request");
console.log(speakOutput);
return handlerInput.responseBuilder
.speak(speakOutput)
.reprompt(speakOutput)
.getResponse();
},
};
It should only prompt a message in Portuguese, "Bem vindo, que série vai assistir hoje?" ("Welcome, which series are you going to watch today?"), but instead it tries to access an Amazon S3 bucket for some reason and prints this error to the console.
~~~~ Error handled: AskSdk.S3PersistenceAdapter Error: Could not read item (amzn1.ask.account.NUMBEROFACCOUNT) from bucket (undefined): Missing required key 'Bucket' in params
at Object.createAskSdkError (path\MarcaEpisodio\lambda\node_modules\ask-sdk-s3-persistence-adapter\dist\utils\AskSdkUtils.js:22:17)
at S3PersistenceAdapter.<anonymous> (path\MarcaEpisodio\lambda\node_modules\ask-sdk-s3-persistence-adapter\dist\attributes\persistence\S3PersistenceAdapter.js:90:45)
at step (path\MarcaEpisodio\lambda\node_modules\ask-sdk-s3-persistence-adapter\dist\attributes\persistence\S3PersistenceAdapter.js:44:23)
at Object.throw (path\MarcaEpisodio\lambda\node_modules\ask-sdk-s3-persistence-adapter\dist\attributes\persistence\S3PersistenceAdapter.js:25:53)
at rejected (path\MarcaEpisodio\lambda\node_modules\ask-sdk-s3-persistence-adapter\dist\attributes\persistence\S3PersistenceAdapter.js:17:65)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
Skill response
{
"type": "SkillResponseSuccessMessage",
"originalRequestId": "wsds-transport-requestId.v1.IDREQUESTED",
"version": "1.0",
"responsePayload": "{\"version\":\"1.0\",\"response\":{\"outputSpeech\":{\"type\":\"SSML\",\"ssml\":\"<speak>Desculpe, não consegui fazer o que pediu.</speak>\"},\"reprompt\":{\"outputSpeech\":{\"type\":\"SSML\",\"ssml\":\"<speak>Desculpe, não consegui fazer o que pediu.</speak>\"}},\"shouldEndSession\":false},\"userAgent\":\"ask-node/2.10.2 Node/v14.16.0\",\"sessionAttributes\":{}}"
}
----------------------
I've removed some ID information from the stack trace, but I think it's not relevant for the purpose.
The only thing I can think of that calls S3 is where I add the S3 persistence adapter in the Alexa skill builder.
exports.handler = Alexa.SkillBuilders.custom()
.withApiClient(new Alexa.DefaultApiClient())
.withPersistenceAdapter(
new persistenceAdapter.S3PersistenceAdapter({
bucketName: process.env.S3_PERSISTENCE_BUCKET,
})
)
.addRequestHandlers(
LaunchRequestHandler,
MarcaEpisodioIntentHandler,
HelpIntentHandler,
CancelAndStopIntentHandler,
SessionEndedRequestHandler,
IntentReflectorHandler // make sure IntentReflectorHandler is last so it doesn't override your custom intent handlers
)
.addRequestInterceptors(MarcaEpisodioInterceptor)
.addErrorHandlers(ErrorHandler)
.lambda();
These are the intents that I've created:
Intents
And this is the function that should handle them.
const Alexa = require("ask-sdk-core");
const persistenceAdapter = require("ask-sdk-s3-persistence-adapter");
const intentName = "MarcaEpisodioIntent";
const MarcaEpisodioIntentHandler = {
canHandle(handlerInput) {
console.log("Trying to handle wiht marca episodio intent");
return (
Alexa.getRequestType(handlerInput.requestEnvelope) !== "LaunchRequest" &&
Alexa.getRequestType(handlerInput.requestEnvelope) === "IntentRequest" &&
Alexa.getIntentName(handlerInput.requestEnvelope) === intentName
);
},
async handle(handlerInput) {
const serie = handlerInput.requestEnvelope.request.intent.slots.serie.value;
const episodio =
handlerInput.requestEnvelope.request.intent.slots.episodio.value;
const temporada =
handlerInput.requestEnvelope.request.intent.slots.temporada.value;
const attributesManager = handlerInput.attributesManager;
const serieMark = {
serie: serie,
episodio: episodio,
temporada: temporada,
};
attributesManager.setPersistentAttributes(serieMark);
await attributesManager.savePersistentAttributes();
const speakOutput = `${serie} marcada no episódio ${episodio} da temporada ${temporada}`;
return handlerInput.responseBuilder.speak(speakOutput).getResponse();
},
};
module.exports = MarcaEpisodioIntentHandler;
Any help will be appreciated.
"instead it tries to access amazon S3 bucket for some reason"
First, your persistence adapter is loaded and configured on each use, before your intent handlers' canHandle functions are polled. The persistence adapter is then used in the request interceptors, again before any of the handlers are polled.
If there's a problem there, it'll break before you ever get to your LaunchRequestHandler, which is what is happening.
Second, are you building an Alexa-hosted skill in the Alexa developer console or are you hosting your own Lambda via AWS?
Alexa-hosted creates a number of resources for you, including an Amazon S3 bucket and an Amazon DynamoDb table, then ensures the AWS Lambda it creates for you has the necessary role settings and the right information in its environment variables.
If you're hosting your own via AWS, your Lambda will need a role with read/write permissions on your S3 resources and you'll need to set the bucket where you're storing persistent values as an environment variable for your Lambda (or replace the process.env.S3_PERSISTENCE_BUCKET with a string containing the bucket name).
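For instance, a minimal sketch of the self-hosted setup (the bucket name below is a placeholder, and the handler list is trimmed to the handlers from the question):

const Alexa = require('ask-sdk-core');
const persistenceAdapter = require('ask-sdk-s3-persistence-adapter');

exports.handler = Alexa.SkillBuilders.custom()
  .withApiClient(new Alexa.DefaultApiClient())
  .withPersistenceAdapter(
    new persistenceAdapter.S3PersistenceAdapter({
      // Fall back to an explicit bucket name (placeholder) if the
      // S3_PERSISTENCE_BUCKET environment variable is not set on the Lambda.
      bucketName: process.env.S3_PERSISTENCE_BUCKET || 'my-skill-persistence-bucket',
    })
  )
  .addRequestHandlers(LaunchRequestHandler, MarcaEpisodioIntentHandler)
  .lambda();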
I have written Node.js code on Cloud Functions to create an on-demand backup and delete backups after the retention period. However, calling sqlAdmin.backupRuns.list() returns only the default of 20 backups.
How do I set the maxResults attribute? The following returns an error stating the method does not exist.
sqlAdmin.backupRuns.list.setMaxResults(100);
This is how I'm retrieving all the backups:
sqlAdmin.backupRuns.list(request, function(err, response) {
response.data.items.forEach(element => {
//do useful
});
});
You need to pass maxResults as a parameter based on the documentation. For example:
const params = {
"project": "PROJECT_NAME",
"instance": "INSTANCE_ID",
"maxResults": 100
};
const res = await sqlAdmin.backupRuns.list(params);
console.log(res.data)
INSTANCE_ID is the Cloud SQL instance ID; it does not include the project ID.
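If there are more backups than maxResults, you would also need to follow page tokens to retrieve all of them. A rough sketch, assuming the standard Cloud SQL Admin API pagination fields (pageToken in the request, nextPageToken in the response):

async function listAllBackupRuns(sqlAdmin, project, instance) {
  let backups = [];
  let pageToken;
  do {
    // Each call returns at most maxResults items plus an optional nextPageToken.
    const res = await sqlAdmin.backupRuns.list({ project, instance, maxResults: 100, pageToken });
    backups = backups.concat(res.data.items || []);
    pageToken = res.data.nextPageToken;
  } while (pageToken);
  return backups;
}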
I'm trying to use RDSDataService to query an Aurora Serverless database. When I try to query, my lambda just times out (I've set the timeout to 5 minutes just to make sure that isn't the problem). I have 1 record in my database, and when I try to query it I get no results, and neither the error nor the data callback is called. I've verified executeSql is called by removing the dbClusterOrInstanceArn from my params; it then throws the exception for not having it.
I have also run SHOW FULL PROCESSLIST in the query editor to see if the queries were still running, and they are not. I've given the lambda both the AmazonRDSFullAccess and AmazonRDSDataFullAccess policies, without any luck either. As you can see from the code below, I've already tried what was recommended in issue #2376.
Not that this should matter, but this lambda is triggered by a Kinesis event trigger.
const AWS = require('aws-sdk');
exports.handler = (event, context, callback) => {
const RDS = new AWS.RDSDataService({apiVersion: '2018-08-01', region: 'us-east-1'})
for (const record of event.Records) {
const payload = JSON.parse(Buffer.from(record.kinesis.data, 'base64').toString('utf-8'));
const data = compileItem(payload);
const params = {
awsSecretStoreArn: 'arn:aws:secretsmanager:us-east-1:149070771508:secret:xxxxxxxxx',
dbClusterOrInstanceArn: 'arn:aws:rds:us-east-1:149070771508:cluster:xxxxxxxxx',
sqlStatements: `select * from MY_DATABASE.MY_TABLE`
// database: 'MY_DATABASE'
}
console.log('calling executeSql');
RDS.executeSql(params, (error, data) => {
if (error) {
console.log('error', error)
callback(error, null);
} else {
console.log('data', data);
callback(null, { success: true })
}
});
}
}
EDIT: We've run the command through the aws cli and it returns results.
EDIT 2: I'm able to connect to it using the mysql2 package, connecting through the URI, so it's definitely an issue with either the aws-sdk or how I'm using it.
Node.js execution is not waiting for the result; that's why the process exits before completing the request.
Use the serverless-mysql library: https://www.npmjs.com/package/serverless-mysql
OR
Use context.callbackWaitsForEmptyEventLoop = false
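For example, a rough sketch of the second option, keeping the question's parameter names (the ARNs remain placeholders) and using .promise() so the handler awaits the query before returning:

const AWS = require('aws-sdk');
const RDS = new AWS.RDSDataService({ apiVersion: '2018-08-01', region: 'us-east-1' });

exports.handler = async (event, context) => {
  // Don't keep the Lambda alive waiting for the event loop to drain.
  context.callbackWaitsForEmptyEventLoop = false;

  for (const record of event.Records) {
    const params = {
      awsSecretStoreArn: 'arn:aws:secretsmanager:us-east-1:149070771508:secret:xxxxxxxxx', // placeholder
      dbClusterOrInstanceArn: 'arn:aws:rds:us-east-1:149070771508:cluster:xxxxxxxxx',      // placeholder
      sqlStatements: 'select * from MY_DATABASE.MY_TABLE',
    };
    // Await the SDK call so the handler does not return before the query completes.
    const data = await RDS.executeSql(params).promise();
    console.log('data', data);
  }
  return { success: true };
};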
The problem was that the RDS cluster had been created in a VPC that the Lambdas were not in.
I am working on my AWS cert and I'm trying to figure out how the following bit of js code works:
var AWS = require('aws-sdk');
var uuid = require('node-uuid');
// Create an S3 client
var s3 = new AWS.S3();
// Create a bucket and upload something into it
var bucketName = 'node-sdk-sample-' + uuid.v4();
var keyName = 'hello_world.txt';
s3.createBucket({Bucket: bucketName}, function() {
var params = {Bucket: bucketName, Key: keyName, Body: 'Hello'};
s3.putObject(params, function(err, data) {
if (err)
console.log(err)
else
console.log("Successfully uploaded data to " + bucketName + "/" + keyName);
});
});
This code successfully uploads a txt file containing the word "Hello". I do not understand how this ^ can identify MY AWS account. It does! But how? It is somehow able to determine that I want a new bucket inside MY account, but this code was taken directly from the AWS docs. I don't know how it could figure that out.
As per Class: AWS.CredentialProviderChain, the AWS SDK for JavaScript looks for credentials in the following locations:
AWS.CredentialProviderChain.defaultProviders = [
function () { return new AWS.EnvironmentCredentials('AWS'); },
function () { return new AWS.EnvironmentCredentials('AMAZON'); },
function () { return new AWS.SharedIniFileCredentials(); },
function () {
// if AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set
return new AWS.ECSCredentials();
// else
return new AWS.EC2MetadataCredentials();
}
]
Environment Variables (useful for testing, or when running code on a local computer)
Local credentials file (useful for running code on a local computer)
ECS credentials (useful when running code in Elastic Container Service)
Amazon EC2 Metadata (useful when running code on an Amazon EC2 instance)
It is highly recommended to never store credentials within an application. If the code is running on an Amazon EC2 instance and a role has been assigned to the instance, the SDK will automatically retrieve credentials from the instance metadata.
The next best method is to store credentials in the ~/.aws/credentials file.
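For reference, the shared credentials file is a small INI-style file; a typical ~/.aws/credentials looks like this (the values below are AWS's documented example placeholders, not real keys):

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY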