After refreshing a sandbox account from the production account, some scheduled saved searches that are meant to run only on the production account also run on the sandbox, which is not what I want.
Is there a way to prevent this?
Saved searches are not an accessible record type, so I don't know how to do this with a script or a workflow. Maybe a general preference, rule, or setting applied during the refresh exists, but I didn't find one.
The only solution I found, apart from deleting all the scheduled searches, is to add an unreachable filter so that each search returns 0 results.
If someone has a better solution, please let me know.
/**
 * @NApiVersion 2.x
 * @NScriptType ScheduledScript
 */
define(['N/search'], function (search) {
    function execute(scriptContext) {
        // Find every saved search flagged to send scheduled emails
        var searchSavedSearch = search.create({
            type: search.Type.SAVED_SEARCH,
            filters: [{
                name: 'sendscheduledemails',
                operator: search.Operator.IS,
                values: ['T']
            }]
        })
        var searchResult = searchSavedSearch.run().getRange({ start: 0, end: 1000 })
        var resultNumber = searchResult.length
        for (var x = 0; x < resultNumber; x++) {
            var searchRecord = search.load({ type: search.Type.SAVED_SEARCH, id: searchResult[x].id })
            // Add a filter that can never match, so the search returns 0 results
            var filtersArr = searchRecord.filters
            filtersArr.push(search.createFilter({
                name: 'internalidnumber',
                operator: search.Operator.EQUALTO,
                values: [-1]
            }))
            searchRecord.filters = filtersArr
            searchRecord.save()
        }
    }
    return { execute: execute }
})
Have you set email delivery preferences for your Sandbox account? If not, go to Setup > Company > Email > Email Preferences (Administrator). Choose your preference on the Sandbox and Release Preview subtab, under Sandbox Options. Reference SuiteAnswers ID: 20152.
I developed a sample app whose backend DB is Redshift, and I try to execute a query with the following SDK code.
import { RedshiftDataClient, ExecuteStatementCommand } from '@aws-sdk/client-redshift-data';
export const resolvers: IResolvers<unknown, Context> = {
  Query: {
    user: (parent, args, context): User => ({ login: context.login }),
    region: (): string => getRegion(),
    getData: async () => {
      const redshift_client = new RedshiftDataClient({});
      const request = new ExecuteStatementCommand({
        ClusterIdentifier: 'testrs',
        Sql: `select * from test`,
        SecretArn: 'arn:aws:secretsmanager:us-east-1:12345561:secret:test-HYRSWs',
        Database: 'test',
      });
      try {
        const data = await redshift_client.send(request);
        console.log('data', data);
        return data;
      } catch (error) {
        console.error(error);
        throw new Error('Failed fetching data to Redshift');
      } finally {
        // execute regardless of error state
      }
    },
  },
};
It returned the following error:
ERROR AccessDeniedException:
User: arn:aws:sts::12345561:assumed-role/WebsiteStack-Beta-US-EAST-GraphQLLambdaServiceRole1BCPB5P3Q4IS9/GraphQLLambda
is not authorized to perform: redshift-data:ExecuteStatement on resource: arn:aws:redshift:us-east-1:12345561:cluster:testrs
because no identity-based policy allows the redshift-data:ExecuteStatement action
Must I use an SDK package like STS?
If someone has an opinion or materials, will you please let me know?
Thanks
I know that when using the AWS SDK for Java V2 for the exact same use case, you can successfully query data by building an ExecuteStatementRequest object and passing it to the Data Client's executeStatement method, like this:
if (num == 5)
    sqlStatement = "SELECT TOP 5 * FROM blog ORDER BY date DESC";
else if (num == 10)
    sqlStatement = "SELECT TOP 10 * FROM blog ORDER BY date DESC";
else
    sqlStatement = "SELECT * FROM blog ORDER BY date DESC";

ExecuteStatementRequest statementRequest = ExecuteStatementRequest.builder()
    .clusterIdentifier(clusterId)
    .database(database)
    .dbUser(dbUser)
    .sql(sqlStatement)
    .build();

ExecuteStatementResponse response = redshiftDataClient.executeStatement(statementRequest);
As shown here -- the required values are clusterId, database, and dbUser.
I would assume the AWS SDK for JavaScript would work the same way. (I have not tried using that SDK however).
The reference docs confirm this...
Temporary credentials - when connecting to a cluster, specify the cluster identifier, the database name, and the database user name. Also, permission to call the redshift:GetClusterCredentials operation is required. When connecting to a serverless endpoint, specify the database name.
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-redshift-data/classes/executestatementcommand.html
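To sketch what that would look like with the AWS SDK for JavaScript v3 (untested here; the identifiers are just the ones from the question, with a hypothetical DbUser in place of the SecretArn):

import { RedshiftDataClient, ExecuteStatementCommand } from '@aws-sdk/client-redshift-data';

const redshift_client = new RedshiftDataClient({});

// Same cluster/database as the question; DbUser (hypothetical) replaces SecretArn
const request = new ExecuteStatementCommand({
  ClusterIdentifier: 'testrs',
  Database: 'test',
  DbUser: 'awsuser', // hypothetical database user
  Sql: 'select * from test',
});

const data = await redshift_client.send(request); // top-level await (ES modules)
console.log('statement id:', data.Id); // rows are fetched separately, e.g. with GetStatementResultCommand

Note that either way, the AccessDeniedException in the question means the Lambda's role still needs an identity-based policy allowing redshift-data:ExecuteStatement (plus redshift:GetClusterCredentials when using temporary credentials).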
Is it possible to get reports by filtering using the Power BI REST API? I want to embed Power BI reports filtered by records. I can't see any option in the Power BI REST API, so how can I get all reports by filter and embed the reports in my application?
Since I am using powerbi.js as the JavaScript client, below is my sample code:
https://github.com/Microsoft/PowerBI-JavaScript
var tokenType = 'embed';
// Get models. models contains enums that can be used.
var models = window['powerbi-client'].models;
// We give All permissions to demonstrate switching between View and
// Edit mode and saving the report.
var permissions = models.Permissions.All;
var config = {
    type: 'report',
    tokenType: tokenType == '0' ? models.TokenType.Aad : models.TokenType.Embed,
    accessToken: txtAccessToken,
    embedUrl: txtEmbedUrl,
    id: txtEmbedReportId,
    permissions: permissions,
    settings: {
        filterPaneEnabled: true,
        navContentPaneEnabled: true
    }
};
// Get a reference to the embedded report HTML element
var embedContainer = $('#embedContainer')[0];
// Embed the report and display it within the div container.
var report = window.powerbi.embed(embedContainer, config);
When you are embedding a report, you can use the Embed Configuration to apply filters when the report is loaded. You can also change the filters dynamically later.
Here is a quote from the filters wiki:
Filters are JavaScript objects that have a special set of properties. Currently, there are five types of filters: Basic, Advanced, Relative Date, Top N and Include/Exclude, which match the types of filters you can create through the filter pane. There are corresponding interfaces IBasicFilter, IAdvancedFilter, IRelativeDateFilter, ITopNFilter and IIncludeExcludeFilter, which describe their required properties.
For example, your filter can be constructed like this:
const basicFilter: pbi.models.IBasicFilter = {
  $schema: "http://powerbi.com/product/schema#basic",
  target: {
    table: "Sales",
    column: "AccountId"
  },
  operator: "In",
  values: [1, 2, 3],
  filterType: pbi.models.FilterType.BasicFilter
}
You should pass this filter in the report configuration's filters property.
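For instance, here is a minimal sketch reusing the names from the snippets above (models and the txtAccessToken/txtEmbedUrl/txtEmbedReportId variables from the question, basicFilter from the example):

var config = {
    type: 'report',
    tokenType: models.TokenType.Embed,
    accessToken: txtAccessToken,
    embedUrl: txtEmbedUrl,
    id: txtEmbedReportId,
    filters: [basicFilter], // applied when the report loads
    settings: {
        filterPaneEnabled: true,
        navContentPaneEnabled: true
    }
};
var report = window.powerbi.embed(embedContainer, config);

To change filters dynamically after load, the library also exposes methods on the embedded report, such as report.setFilters([basicFilter]) (replaced by report.updateFilters in newer powerbi-client releases).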
I'm trying to use the history method provided by PubNub to get the chat history of a channel, running my Node.js code on AWS Lambda. However, my function is not getting called. I'm not sure if I'm doing it correctly, but here's the code snippet:
var publishKey = "pub-c-cfe10ea4-redacted";
var subscribeKey = "sub-c-fedec8ba-redacted";
var channelId = "ChatRoomDemo";
var uuid;
var pubnub = {};
function readMessages(intent, session, callback) {
    pubnub = require("pubnub")({
        publish_key: publishKey,
        subscribe_key: subscribeKey
    });
    pubnub.history({
        channel: 'ChatRoomDemo',
        callback: function (m) {
            console.log(JSON.stringify(m));
        },
        count: 100,
        reverse: false
    });
}
I expect the message history in JSON format to be displayed on the console.
I had the same problem and finally got it working. What you will need to do is allow the CIDR address for pubnub.com. This was a foreign idea to me until I figured it out! Here's how to do that to publish to a channel:
Copy the CIDR address for pubnub.com, which is 54.246.196.128/26 (Source) [WARNING: do not do this - see comment below]
Log into https://console.aws.amazon.com
Under "Services" go to "VPC"
On the left, under "Security," click "Network ACLs"
Click "Create Network ACL" give it a name tag like "pubnub.com"
Select the VPC for your Lambda skill (if you're not sure, click around your Lambda function, you'll see it. You probably only have one listed like me)
Click "Yes, Create"
Under the "Outbound Rules" tab, click "Edit"
For "Rule #" I just used "1"
For "Type" I used "HTTP (80)"
For "Destination" I pasted in the CIDR from step 1
"Save"
Note, if you're subscribing to a channel, you'll also need to add an "Inbound Rule" too.
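For reference, the same outbound rule could also be created programmatically. Here is a minimal sketch with the AWS SDK for JavaScript v3, assuming an existing network ACL (the ACL id below is hypothetical, and hard-coding PubNub's CIDR carries the same caveat flagged in the warning above):

import { EC2Client, CreateNetworkAclEntryCommand } from '@aws-sdk/client-ec2';

const ec2 = new EC2Client({});

// Outbound HTTP rule mirroring the console steps above
await ec2.send(new CreateNetworkAclEntryCommand({
  NetworkAclId: 'acl-0123456789abcdef0', // hypothetical ACL id
  RuleNumber: 1,
  Protocol: '6', // TCP
  RuleAction: 'allow',
  Egress: true, // outbound rule; add a matching inbound rule when subscribing
  CidrBlock: '54.246.196.128/26',
  PortRange: { From: 80, To: 80 },
}));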
I have a simple single-page app that is deployed to an S3 bucket using gulp-awspublish. We use inquirer.js (via gulp-prompt) to ask the developer which bucket to deploy to.
Sometimes the app may be deployed to several S3 buckets. Currently, we only allow one bucket to be selected, so the developer has to gulp deploy for each bucket in turn. This is dull and prone to error.
I'd like to be able to select multiple buckets and deploy the same content to each. It's simple to select multiple buckets with inquirer.js/gulp-prompt, but not simple to generate arbitrary multiple S3 destinations from a single stream.
Our deploy task is based upon generator-webapp's S3 recipe. The recipe suggests gulp-rename to rewrite the path to write to a specific bucket. Currently our task looks like this:
gulp.task('deploy', ['build'], () => {
    // get AWS creds
    if (typeof(config.awsCreds) !== 'object') {
        return console.error('No config.awsCreds settings found. See README');
    }

    var dirname;

    const publisher = $.awspublish.create({
        key: config.awsCreds.key,
        secret: config.awsCreds.secret,
        bucket: config.awsCreds.bucket
    });

    return gulp.src('dist/**/*.*')
        .pipe($.prompt.prompt({
            type: 'list',
            name: 'dirname',
            message: 'Using the ‘' + config.awsCreds.bucket + '’ bucket. Which hostname would you like to deploy to?',
            choices: config.awsCreds.dirnames,
            default: config.awsCreds.dirnames.indexOf(config.awsCreds.dirname)
        }, function (res) {
            dirname = res.dirname;
        }))
        .pipe($.rename(function (path) {
            path.dirname = dirname + '/dist/' + path.dirname;
        }))
        .pipe(publisher.publish())
        .pipe(publisher.cache())
        .pipe($.awspublish.reporter());
});
It's hopefully obvious, but config.awsCreds might look something like:
awsCreds: {
    dirname: 'default-bucket',
    dirnames: ['default-bucket', 'other-bucket', 'another-bucket']
}
Gulp-rename rewrites the destination path to use the correct bucket.
We can select multiple buckets by using "checkbox" instead of "list" for the gulp-prompt options, but I'm not sure how to then deliver it to multiple buckets.
In a nutshell, if $.prompt returns an array of strings instead of a string, how can I write the source to multiple destinations (buckets) instead of a single bucket?
Please keep in mind that gulp.dest() is not used -- only gulp.awspublish() -- and we don't know how many buckets might be selected.
Never used S3, but if I understand your question correctly, a file js/foo.js should be renamed to default-bucket/dist/js/foo.js and other-bucket/dist/js/foo.js when the checkboxes default-bucket and other-bucket are selected?
Then this should do the trick:
// additionally required modules
var path = require('path');
var through = require('through2').obj;

gulp.task('deploy', ['build'], () => {
    if (typeof(config.awsCreds) !== 'object') {
        return console.error('No config.awsCreds settings found. See README');
    }

    var dirnames = []; // array for selected buckets

    const publisher = $.awspublish.create({
        key: config.awsCreds.key,
        secret: config.awsCreds.secret,
        bucket: config.awsCreds.bucket
    });

    return gulp.src('dist/**/*.*')
        .pipe($.prompt.prompt({
            type: 'checkbox', // use checkbox instead of list
            name: 'dirnames', // use different result name
            message: 'Using the ‘' + config.awsCreds.bucket +
                '’ bucket. Which hostname would you like to deploy to?',
            choices: config.awsCreds.dirnames,
            default: config.awsCreds.dirnames.indexOf(config.awsCreds.dirname)
        }, function (res) {
            dirnames = res.dirnames; // store array of selected buckets
        }))
        // use through2 instead of gulp-rename
        .pipe(through(function (file, enc, done) {
            dirnames.forEach((dirname) => {
                var f = file.clone();
                f.path = path.join(f.base, dirname, 'dist',
                    path.relative(f.base, f.path));
                this.push(f);
            });
            done();
        }))
        .pipe(publisher.publish())
        .pipe(publisher.cache())
        .pipe($.awspublish.reporter());
});
Notice the comments where I made changes from the code you posted.
What this does is use through2 to clone each file passing through the stream. Each file is cloned as many times as there were bucket checkboxes selected and each clone is renamed to end up in a different bucket.
I need to retrieve the current balance of a bank account in NetSuite using SuiteTalk (the NetSuite web services API). In the SuiteTalk API there is no field/parameter that refers to the balance of an account, but there is a UI field, Balance, which shows the account's current balance. Any help/suggestions on this are appreciated.
If there is no field/parameter in the API referencing that UI field, it is most likely that the field is not supported by the API.
It's definitely not intuitive, but it is possible to pull this data using the API. Here's an example using the netsuite ruby bindings.
def balance_for_account(ns_account)
  search = NetSuite::Records::Account.search(
    criteria: {
      basic: [
        {
          field: 'internalIdNumber',
          operator: 'equalTo',
          value: ns_account.internal_id
        }
      ]
    },
    columns: {
      'listAcct:basic' => {
        'platformCommon:internalId/' => {},
        'platformCommon:balance' => {}
      }
    }
  )

  search.results.first.attributes[:balance].to_f
end