Cross-program invocation with unauthorized signer on Anchor (Solana blockchain)

I'm trying to sign a transaction manually with the Anchor framework.
The following is the normal, currently working way to call the program:
await program.value.rpc.sendTweet(topic, content, {
  accounts: {
    author: wallet.value.publicKey,
    tweet: tweet.publicKey,
    systemProgram: web3.SystemProgram.programId,
  },
  signers: [tweet]
})
And this is the accounts struct on the Rust side:
#[derive(Accounts)]
pub struct SendTweet<'info> {
    #[account(init, payer = author, space = Tweet::LEN)]
    pub tweet: Account<'info, Tweet>,
    #[account(mut)]
    pub author: Signer<'info>,
    #[account(address = system_program::ID)]
    pub system_program: AccountInfo<'info>,
}
Instead, I want to be able to manually compile, sign and send the transaction, so I've written the following:
const transactionInstruction = new TransactionInstruction({
  keys: [
    {pubkey: tweet.publicKey, isSigner: false, isWritable: true},
    {pubkey: wallet.value.publicKey, isSigner: true, isWritable: true}
  ],
  programId: programId,
  data: transactionInstructionAnchor.data
});
{...}
await web3.sendAndConfirmRawTransaction(connection, rawTransaction)
I suspect the problem might be in the keys of the TransactionInstruction, because I'm passing only 2 keys while the SendTweet accounts take 3 values:
accounts: {
  author: wallet.value.publicKey,
  tweet: tweet.publicKey,
  systemProgram: programId, // <--- THIS ONE ?
},
I've tried adding this to the keys:
{
  pubkey: web3.SystemProgram.programId,
  isWritable: false,
  isSigner: false,
},
But it gives me the error: failed to send transaction: Transaction simulation failed: Error processing Instruction 0: Cross-program invocation with unauthorized signer or writable account
Does anyone know what I'm doing wrong?
Thank you
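For reference, here is a minimal sketch of how the keys array could mirror the Anchor struct (an educated guess based on Anchor's conventions, not a verified fix): the account metas must be listed in the same order as the fields of SendTweet, and because tweet is created with init, the tweet keypair has to sign as well, so it needs isSigner: true here and a signature on the transaction.
const transactionInstruction = new TransactionInstruction({
  keys: [
    // Same order as the SendTweet struct: tweet, author, system_program
    { pubkey: tweet.publicKey, isSigner: true, isWritable: true }, // init => must sign
    { pubkey: wallet.value.publicKey, isSigner: true, isWritable: true },
    { pubkey: web3.SystemProgram.programId, isSigner: false, isWritable: false },
  ],
  programId: programId,
  data: transactionInstructionAnchor.data
});
// In the elided compile/sign step, the tweet keypair also has to sign the
// transaction before serializing it, e.g. transaction.partialSign(tweet),
// alongside the wallet's signature.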

Related

Querying MongoDB with a regular expression in Rust

I am trying to implement the bucket pattern as a solution to a previous question.
In the example, an update is issued with a selector that uses a regular expression:
db.history.updateOne(
  { "_id": /^7000000_/, "count": { $lt: 1000 } },
  {
    "$push": {
      "history": {
        "type": "buy",
        "ticker": "MDB",
        "qty": 25,
        "date": ISODate("2018-11-02T11:43:10")
      }
    },
    "$inc": { "count": 1 },
    "$setOnInsert": { "_id": "7000000_1541184190" }
  },
  { upsert: true })
I'm trying to do the same in Rust, but the query treats my regex as a string literal instead of evaluating it.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RepetitionBucketUpdate {
    #[serde(with = "serde_regex")]
    id: Regex,
    device_id: Uuid,
    session_id: Uuid,
    set_id: Uuid,
    exercise: String,
    level: String,
    count: mongodb::bson::Bson,
}
impl From<JsonApiRepetition> for RepetitionBucketUpdate {
    fn from(value: JsonApiRepetition) -> Self {
        let id = format!("^{}_", value.device_id.to_string().replace("-", ""));
        let re = Regex::new(&id).unwrap();
        RepetitionBucketUpdate {
            id: re,
            device_id: value.device_id,
            session_id: value.session_id,
            set_id: value.set_id,
            exercise: value.exercise,
            level: value.level,
            count: mongodb::bson::bson!({ "$lt": BUCKET_RECORD_LIMIT }),
        }
    }
}
let update = bson::doc! {
    "$push": {
        "repetitions": mongodb::bson::to_bson(&repetition_update).unwrap(),
    },
    "$inc": { "count": 1 },
    "$setOnInsert": { "id": oid }
};
let options = mongodb::options::UpdateOptions::builder()
    .upsert(true)
    .build();
collection.update_one(query, update, options).await.map_err(CollectorError::DbError)?;
If I println! the update parameters I see:
query: Document({"id": String("^6fcd683c20d5415da1341e7d2f780749_"), "device_id": String("6fcd683c-20d5-415d-a134-1e7d2f780749"), "session_id": String("8388e24d-e680-46f4-9205-b9e43e39a17a"), "set_id": String("53d5a3ec-5962-402d-8e8a-41e9c5e3e01f"), "exercise": String("Bench Press"), "level": String("WheelsWithinWheels"), "count": Document(Document({"$lt": Int32(1000)}))})
update: Document({"$push": Document(Document({"repetitions": Document(Document({"number": Int32(88), "rom": Double(69.42), "duration": Double(666.0), "time": Int64(10870198172412)}))})), "$inc": Document(Document({"count": Int32(1)})), "$setOnInsert": Document(Document({"id": String("6fcd683c20d5415da1341e7d2f780749_1600107371599537000")}))})
options: UpdateOptions {
    array_filters: None,
    bypass_document_validation: None,
    upsert: Some(
        true,
    ),
    collation: None,
    hint: None,
    write_concern: None,
}
It's not matching, and it's inserting every update as a new document rather than bucketing subsequent updates.
I can successfully issue a regex-based query from the mongodb shell. How do I query with a regex using Rust and mongodb, as in the example?
I figured this out, so for anyone who has this problem in the future:
The mongodb driver does not use the regex crate; instead, the bson crate defines its own Regex struct.
My usage changed from the above (see question) to:
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RepetitionBucketUpdate {
    pub id: mongodb::bson::Bson,
    pub device_id: Uuid,
    session_id: Uuid,
    set_id: Uuid,
    exercise: String,
    level: String,
    count: mongodb::bson::Bson,
}
let id = format!("^{}_", value.device_id.to_string().replace("-", ""));
let re = mongodb::bson::Regex {
    pattern: id,
    options: String::new(),
};
// The regex then goes into the struct's id field wrapped in the Bson enum, i.e.
// id: mongodb::bson::Bson::RegularExpression(re)
And voilà, it works!

How to create intents with contexts from the Dialogflow API

I'm trying to batch create a bunch of intents, and I want to assign an input context.
As far as I can see, this should not need a session, as it's just a string context name.
Like this in the GUI:
The API call I make creates the intent and doesn't throw an error, but I can't get the contexts to show up.
Is there some weird format for the contexts parameter?
I've exported the zip and looked at the JSON files; the contexts are just an array of strings.
I've seen other code that seems to require a user conversation sessionId to create contexts. But the intents are global, not for a single conversation, and I assume those session contexts are just for tracking context within a single conversation session (or Google astronaut engineering).
The data I'm POSTing looks like the below.
There's a Google example here that doesn't touch contexts:
https://cloud.google.com/dialogflow/es/docs/how/manage-intents#create_intent
I've tried various formats for contexts without success:
// this is the part that doesn't work
// const contexts = [{
//   // name: `${sessionPath}/contexts/${name}`,
//   // name: 'test context name'
// }]
const contexts = [
  'theater-critics'
]
createIntentRequest {
  "parent": "projects/XXXXXXXX-XXXXXXXX/agent",
  "intent": {
    "displayName": "test 4",
    "trainingPhrases": [
      {
        "type": "EXAMPLE",
        "parts": [
          { "text": "this is a test phrase" }
        ]
      },
      {
        "type": "EXAMPLE",
        "parts": [
          { "text": "this is a another test phrase" }
        ]
      }
    ],
    "messages": [
      {
        "text": {
          "text": [
            "this is a test response"
          ]
        }
      }
    ],
    "contexts": [
      "theater-critics"
    ]
  }
}
Intent projects/asylum-287516/agent/intents/XXXXXXXX-e852-4c09-bda6-e524b8329db8 created
Full JS (TS) code below for anyone else
import { DfConfig } from './DfConfig'
const dialogflow = require('@google-cloud/dialogflow');

const testData = {
  displayName: 'test 4',
  trainingPhrasesParts: [
    "this is a test phrase",
    "this is a another test phrase"
  ],
  messageTexts: [
    'this is a test response'
  ]
}
// const messageTexts = "Message texts for the agent's response when the intent is detected, e.g. 'Your reservation has been confirmed'";
const intentsClient = new dialogflow.IntentsClient();

export const DfCreateIntent = async () => {
  const agentPath = intentsClient.agentPath(DfConfig.projectId);
  const trainingPhrases = [];
  testData.trainingPhrasesParts.forEach(trainingPhrasesPart => {
    const part = {
      text: trainingPhrasesPart,
    };
    // Here we create a new training phrase for each provided part.
    const trainingPhrase = {
      type: 'EXAMPLE',
      parts: [part],
    };
    // @ts-ignore
    trainingPhrases.push(trainingPhrase);
  });
  const messageText = {
    text: testData.messageTexts,
  };
  const message = {
    text: messageText,
  };
  // this is the part that doesn't work
  // const contexts = [{
  //   // name: `${sessionPath}/contexts/${name}`,
  //   // name: 'test context name'
  // }]
  const contexts = [
    'theater-critics'
  ]
  const intent = {
    displayName: testData.displayName,
    trainingPhrases: trainingPhrases,
    messages: [message],
    contexts
  };
  const createIntentRequest = {
    parent: agentPath,
    intent: intent,
  };
  console.log('createIntentRequest', JSON.stringify(createIntentRequest, null, 2))
  // Create the intent
  const [response] = await intentsClient.createIntent(createIntentRequest);
  console.log(`Intent ${response.name} created`);
}
// createIntent();
Update: figured it out based on this:
https://cloud.google.com/dialogflow/es/docs/reference/rest/v2/projects.agent.intents#Intent
const contextId = 'runner'
const contextName = `projects/${DfConfig.projectId}/agent/sessions/-/contexts/${contextId}`
const inputContextNames = [
  contextName
]
const intent = {
  displayName: testData.displayName,
  trainingPhrases: trainingPhrases,
  messages: [message],
  inputContextNames
};
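Putting it together, the create call would presumably look like this (a sketch based on the update above; 'runner' is just an example context ID, and the '-' session segment is the wildcard session, since the intent isn't tied to any one conversation):
const createIntentRequest = {
  parent: agentPath,
  intent: {
    displayName: testData.displayName,
    trainingPhrases: trainingPhrases,
    messages: [message],
    // Input contexts are full resource names, not bare strings
    inputContextNames: [
      `projects/${DfConfig.projectId}/agent/sessions/-/contexts/runner`
    ],
  },
};
const [response] = await intentsClient.createIntent(createIntentRequest);
console.log(`Intent ${response.name} created`);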

How can I update variable values in the JSON body of a request (incrementing time)?

In Postman, how can I update variable values in the JSON body of a request with increasing time? I need to call the endpoint 2048 times, and each call should have an end_time 5 minutes later than the previous one. I'm unable to convert the value to a normal time format.
I wrote this:
var moment = require("moment");
var t = pm.variables.get("t");
pm.environment.set('t', moment().add(1000, 'seconds').valueOf(t));
console.log("t", t);
I see an error:
{
  "ErrorCode": "1100",
  "Message": "request.end_time: Error converting value \"1581351445025\" to type 'System.TimeSpan'. Path 'end_time', line 10, position 29."
}
Sample request (in the body):
{
  "monday": true,
  "tuesday": true,
  "wednesday": true,
  "thursday": true,
  "friday": true,
  "saturday": false,
  "sunday": false,
  "start_time": "7:30:00",
  "end_time": "{{t}}",
  "start_date": "2020-01-23",
  "end_date": "2020-05-23"
}
In the pre-request script view:
var moment = require("moment");
var t = pm.variables.get("t");
console.log("t: " + t);
// valueOf() takes no argument; this is "now + 1000 seconds" as epoch milliseconds
var newT = moment().add(1000, 'seconds').valueOf();
console.log("newT: " + newT);
postman.setEnvironmentVariable("newT", newT);
Then your request body should just change to use the new variable {{newT}}.
Not sure why, but pm.environment.set wasn't setting the environment variable at all, while postman.setEnvironmentVariable seems to work.
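One caveat: the original error says the server could not convert "1581351445025" to System.TimeSpan, which suggests end_time must be a time-of-day string like start_time's "7:30:00" rather than epoch milliseconds. A sketch of a pre-request script that advances the value by 5 minutes per call under that assumption (the "7:30:00" fallback is a placeholder):
var moment = require("moment");
// Fall back to a starting time on the first run
var t = pm.variables.get("t") || "7:30:00";
// Parse as a time of day, add 5 minutes, and keep the H:mm:ss shape
var newT = moment(t, "H:mm:ss").add(5, "minutes").format("H:mm:ss");
postman.setEnvironmentVariable("t", newT);
console.log("t: " + newT);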

AWS RDS Data API executeStatement does not return column names

I'm playing with the new Data API for Amazon Aurora Serverless.
Is it possible to get the table column names in the response?
For example, if I run the following query on a user table with the columns id, first_name, last_name, email, phone:
const sqlStatement = `
  SELECT *
  FROM user
  WHERE id = :id
`;
const params = {
  secretArn: <mySecretArn>,
  resourceArn: <myResourceArn>,
  database: <myDatabase>,
  sql: sqlStatement,
  parameters: [
    {
      name: "id",
      value: {
        "stringValue": 1
      }
    }
  ]
};
let res = await this.RDS.executeStatement(params)
console.log(res);
I'm getting a response like the one below, so I have to guess which column corresponds to each value:
{
  "numberOfRecordsUpdated": 0,
  "records": [
    [
      { "longValue": 1 },
      { "stringValue": "Nicolas" },
      { "stringValue": "Perez" },
      { "stringValue": "example@example.com" },
      { "isNull": true }
    ]
  ]
}
I would like to have a response like this one:
{
  id: 1,
  first_name: "Nicolas",
  last_name: "Perez",
  email: "example@example.com",
  phone: null
}
Update 1:
I have found an npm module that wraps the Aurora Serverless Data API and simplifies development.
We decided to take the current approach because we were trying to cut down on the response size, and including column information with each record was redundant.
You can explicitly choose to include column metadata in the result; see the "includeResultMetadata" parameter:
https://docs.aws.amazon.com/rdsdataservice/latest/APIReference/API_ExecuteStatement.html#API_ExecuteStatement_RequestSyntax
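For example, the request from the question only needs the extra flag (a sketch reusing the question's placeholders):
const params = {
  secretArn: <mySecretArn>,
  resourceArn: <myResourceArn>,
  database: <myDatabase>,
  sql: sqlStatement,
  includeResultMetadata: true, // adds columnMetadata to the response
  parameters: [
    { name: "id", value: { "stringValue": 1 } }
  ]
};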
I agree with the consensus here that there should be an out-of-the-box way to do this from the Data Service API. Because there is not, here's a JavaScript function that will parse the response:
// Requires the request to have been made with includeResultMetadata: true,
// so that res.columnMetadata is present.
const parseDataServiceResponse = res => {
  let columns = res.columnMetadata.map(c => c.name);
  let data = res.records.map(r => {
    let obj = {};
    r.forEach((v, i) => {
      obj[columns[i]] = Object.values(v)[0];
    });
    return obj;
  });
  return data;
}
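Usage would then look something like this (assuming the same executeStatement call as in the question, with the metadata flag added):
let res = await this.RDS.executeStatement({ ...params, includeResultMetadata: true });
console.log(parseDataServiceResponse(res));
// e.g. [ { id: 1, first_name: "Nicolas", last_name: "Perez", email: "example@example.com", phone: null } ]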
I understand the pain, but this looks reasonable given that a SELECT statement can join multiple tables, so duplicate column names may exist.
Similar to the answer above from @C. Slack, but I used a combination of map and reduce to parse the response from Aurora Postgres.
// declarative column names in array
const columns = ['a.id', 'u.id', 'u.username', 'g.id', 'g.name'];

// execute sql statement
const params = {
  database: AWS_PROVIDER_STAGE,
  resourceArn: AWS_DATABASE_CLUSTER,
  secretArn: AWS_SECRET_STORE_ARN,
  // includeResultMetadata: true,
  sql: `
    SELECT ${columns.join()} FROM accounts a
    FULL OUTER JOIN users u ON u.id = a.user_id
    FULL OUTER JOIN groups g ON g.id = a.group_id
    WHERE u.username=:username;
  `,
  parameters: [
    {
      name: 'username',
      value: {
        stringValue: 'rick.cha',
      },
    },
  ],
};
const rds = new AWS.RDSDataService();
const response = await rds.executeStatement(params).promise();

// parse response into json array
const data = response.records.map((record) => {
  return record.reduce((prev, val, index) => {
    return { ...prev, [columns[index]]: Object.values(val)[0] };
  }, {});
});
Hope this code snippet helps someone.
And here is the response:
[
  {
    'a.id': '8bfc547c-3c42-4203-aa2a-d0ee35996e60',
    'u.id': '01129aaf-736a-4e86-93a9-0ab3e08b3d11',
    'u.username': 'rick.cha',
    'g.id': 'ff6ebd78-a1cf-452c-91e0-ed5d0aaaa624',
    'g.name': 'valentree',
  },
  {
    'a.id': '983f2919-1b52-4544-9f58-c3de61925647',
    'u.id': '01129aaf-736a-4e86-93a9-0ab3e08b3d11',
    'u.username': 'rick.cha',
    'g.id': '2f1858b4-1468-447f-ba94-330de76de5d1',
    'g.name': 'ensightful',
  },
]
Similar to the other answers, but if you are using Python/Boto3:
def parse_data_service_response(res):
    columns = [column['name'] for column in res['columnMetadata']]
    parsed_records = []
    for record in res['records']:
        parsed_record = {}
        for i, cell in enumerate(record):
            key = columns[i]
            value = list(cell.values())[0]
            parsed_record[key] = value
        parsed_records.append(parsed_record)
    return parsed_records
I've added to the great answer already provided by C. Slack to deal with how AWS handles empty nullable character fields, returning { "isNull": true } in the JSON.
Here's my function, which handles this by returning an empty string value instead - that's what I would expect anyway.
const parseRDSdata = (input) => {
  let columns = input.columnMetadata.map(c => { return { name: c.name, typeName: c.typeName }; });
  let parsedData = input.records.map(row => {
    let response = {};
    row.forEach((v, i) => {
      // Test the typeName in the column metadata, and also the key name in the values -
      // we need to cater for a return value of { "isNull": true } - pflangan
      if ((columns[i].typeName == 'VARCHAR' || columns[i].typeName == 'CHAR') && Object.keys(v)[0] == 'isNull' && Object.values(v)[0] == true)
        response[columns[i].name] = '';
      else
        response[columns[i].name] = Object.values(v)[0];
    });
    return response;
  });
  return parsedData;
}

How to resolve a parent-to-child relationship with AppSync

I have a schema looking like the below:
type Post {
  id: ID!
  creator: String!
  createdAt: String!
  like: Int!
  dislike: Int!
  frozen: Boolean!
  revisions: [PostRevision!]
}

type PostRevision {
  id: ID!
  post: Post!
  content: String!
  author: String!
  createdAt: String!
}

type Mutation {
  createPost(postInput: CreatePostInput!): Post
}
I would like to be able to batch insert a Post and a PostRevision at the same time when I run the createPost mutation; however, VTL is giving me a very hard time.
I have tried the below:
## Variable Declarations
#set($postId = $util.autoId())
#set($postList = [])
#set($postRevisionList = [])
#set($post = {})
#set($revision = {})
## Initialize Post object
$util.qr($post.put("creator", $ctx.args.postInput.author))
$util.qr($post.put("id", $postId))
$util.qr($post.put("createdAt", $util.time.nowEpochMilliSeconds()))
$util.qr($post.put("like", 0))
$util.qr($post.put("dislike", 0))
$util.qr($post.put("frozen", false))
## Initialize PostRevision object
$util.qr($revision.put("id", $util.autoId()))
$util.qr($revision.put("author", $ctx.args.postInput.author))
$util.qr($revision.put("post", $postId))
$util.qr($revision.put("content", $ctx.args.postInput.content))
$util.qr($revision.put("createdAt", $util.time.nowEpochMilliSeconds()))
## Listify objects
$postList.add($post)
$postRevisionList.add($revision)
{
  "version" : "2018-05-29",
  "operation" : "BatchPutItem",
  "tables" : {
    "WHISPR_DEV_PostTable": $util.toJson($postList),
    "WHISPR_DEV_PostRevisionTable": $util.toJson($postRevisionList)
  }
}
So basically I am reconstructing the document in the createPost resolver so that I can add the Post and also add the post's ID to the PostRevision. However, when I run the below code:
mutation insertPost {
  createPost(postInput: {
    creator: "name"
    content: "value"
  }) {
    id
  }
}
I get the following error:
{
  "data": {
    "createPost": null
  },
  "errors": [
    {
      "path": [
        "createPost"
      ],
      "data": null,
      "errorType": "MappingTemplate",
      "errorInfo": null,
      "locations": [
        {
          "line": 2,
          "column": 3,
          "sourceName": null
        }
      ],
      "message": "Expected JSON object but got BOOLEAN instead."
    }
  ]
}
What am I doing wrong?
I know it would be easier to resolve this with a Lambda function, but I don't want to double up the cost for no reason. Any help would be greatly appreciated. Thanks!
If anyone still needs the answer for this (this question is still the #1 Google hit for the mentioned error message):
The problem is the return value of the add() method, which is a boolean.
To fix this, just wrap the add() calls in $util.qr, as you are already doing for the put() calls:
$util.qr($postList.add($post))
$util.qr($postRevisionList.add($revision))
It looks like you are missing a call to $util.dynamodb.toDynamoDBJson, which is causing AppSync to try to put plain JSON objects into DynamoDB. DynamoDB requires its own input structure, where each attribute, instead of being a plain string like "hello world!", is an object like { "S": "hello world!" }. The $util.dynamodb.toDynamoDBJson helper handles this for you for convenience. Can you please try adding toDynamoDBJson() to these lines:
## Listify objects
$postList.add($util.dynamodb.toDynamoDBJson($post))
$postRevisionList.add($util.dynamodb.toDynamoDBJson($revision))
Hope this helps :)
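As a side note, the two answers combine naturally: wrap the add() calls in $util.qr so their boolean return value doesn't leak into the template, and convert each object before adding it. For what it's worth, the AWS AppSync DynamoDB batch-resolver tutorial uses $util.dynamodb.toMapValues for exactly this BatchPutItem pattern, e.g. $util.qr($postList.add($util.dynamodb.toMapValues($post))), with the list then serialized via $util.toJson as in the question.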