I have to sign a raw transaction on testnet whose hex is given below. When I hard-code everything in the transaction-generation part, it broadcasts properly, but I have to implement the transaction and signature parts separately.
I am using the bitcore-lib-cash package.
const bitcore = require('bitcore-lib-cash')
const txhex = '010000000139a7e6578a862a10151bdbe0ed4a833cd615273b0cd0ecda1616ee8407d7d8040000000000ffffffff0241010000000000001976a914f0ac6825bd05b406d5224eab0be73852a487e06c88ac94dc0100000000001976a914185ec62d62510d40795109e6484e0487c28a3caf88ac00000000'
const private_key = 'private key here'
let transaction = new bitcore.Transaction(txhex).sign(private_key)
console.log(private_key)
{
"errorMessage": "Invalid state: Not all utxo information is available to sign the transaction.",
"errorType": "bitcore.ErrorInvalidState",
"stackTrace": [
"Error",
"new NodeError (/var/task/node_modules/bitcore-lib-cash/lib/errors/index.js:20:41)",
"Object.checkState (/var/task/node_modules/bitcore-lib-cash/lib/util/preconditions.js:9:13)",
"Transaction.sign (/var/task/node_modules/bitcore-lib-cash/lib/transaction/transaction.js:1077:5)",
"/var/task/src/custody/utils/biputils.js:155:12",
"sign_transaction (/var/task/src/custody/utils/biputils.js:167:6)",
"Object.generate_signature (/var/task/src/custody/assets/bitcoincash.js:220:23)",
"<anonymous>",
"process._tickDomainCallback (internal/process/next_tick.js:228:7)"
]
}
UPDATE
I just checked the source code of bitcore-lib-cash: when calling the toString or toBuffer function, it does not encode every field of an input.
Here is the part of the process that encodes an input:
Input.prototype.toBufferWriter = function(writer) {
  if (!writer) {
    writer = new BufferWriter();
  }
  writer.writeReverse(this.prevTxId);
  writer.writeUInt32LE(this.outputIndex);
  var script = this._scriptBuffer;
  writer.writeVarintNum(script.length);
  writer.write(script);
  writer.writeUInt32LE(this.sequenceNumber);
  return writer;
};
So I think you have to either rewrite this toBufferWriter function or parse every field of the txhex and rebuild the transaction.
Your txhex is invalid.
After decoding the txhex, each input object should contain an output field with the UTXO information; that is where the error comes from.
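As a minimal sketch of the rebuild approach, assuming you can look up the UTXO that your input spends (every value below is a placeholder; fill in the real txid, script, amounts, and addresses from your wallet or a block explorer):

const bitcore = require('bitcore-lib-cash')

// Placeholder UTXO data: these fields must describe the output being spent,
// otherwise sign() fails with the same "Not all utxo information" error.
const utxo = {
  txId: '<txid of the output being spent>',
  outputIndex: 0,
  script: '<locking script (scriptPubKey) of that output, as hex>',
  satoshis: 150000
}

const transaction = new bitcore.Transaction()
  .from(utxo)                         // attaches the missing output info
  .to('<destination address>', 321)   // 321 satoshis, as in your raw hex
  .change('<change address>')
  .sign(private_key)

console.log(transaction.serialize())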
I am trying to make the primary key of my DynamoDB table something like user_uuid. The user is created in AWS Cognito, and I can't seem to find a uuid-like field in the CognitoUser class. I am trying to avoid using the username as the PK.
Can someone guide me to the right solution? I can't find anything on the internet about a user_uuid field, and for some reason I can't even find documentation for the CognitoUser class that is imported from "amazon-cognito-identity-js".
It depends on whether you plan to use email or phone as a 'username'. In that case, I would use the sub, because it never changes. But the sub is not k-sortable, so that requires an extra DB item and an index/join to make users sortable by date added. If you plan to generate your own GUID/KSUID and only use email/phone as an alias, then I would use the 'username' as a common ID between your DB and user pool.
Good luck with your project!
FWIW, the KSUID generators found in the wild are massively overbuilt: 3000+ lines of code and 80+ dependencies. I made my own k-sortable, prefixed, pseudo-random ID generator for Cognito users. Here's the code.
export function idGen(prefix: any) {
  const validPrefix = [
    'prefix1',
    'prefix2'
  ];
  // check if prefix argument is supplied
  if (!prefix) {
    return 'error! must supply prefix';
  }
  // check if value is a valid type
  else if (validPrefix.indexOf(prefix) == -1) {
    return 'error! prefix value supplied must be: ' + validPrefix;
  } else {
    // generate epoch time in seconds
    const epoch = Math.round(Date.now() / 1000);
    // convert epoch time to 6 character base36 string
    const time = epoch.toString(36);
    // generate 20 character base36 pseudo random string
    const random =
      Math.random().toString(36).substring(2, 12) +
      Math.random().toString(36).substring(2, 12);
    // combine prefix, strings, insert : divider and return id
    return prefix + ':' + time + random;
  }
}
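Usage looks like this (the output shown is illustrative: the time part varies and the random part differs on every call):

const id = idGen('prefix1');
// e.g. 'prefix1:qx3k9wf8z...' -- 6 base36 time chars followed by 20 random chars
console.log(id);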
Cognito user unique identifiers can be saved to a database using a combination of the "sub" value and the username; please refer to this question for a lengthier discussion.
In the description of amazon-cognito-identity-js (found here, use case 5), they show how to get the userAttributes of a CognitoUser. One of the attributes is the sub value, which you can get at, for example, like this:
user.getUserAttributes(function(err, attributes) {
  if (err) {
    // Handle error
  } else {
    // Do something with attributes
    const sub = attributes.find(obj => obj.Name === 'sub').Value;
  }
});
I couldn't find any documentation on the available user attributes either, so I recommend using the debugger to look at the attributes returned from the function.
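Once you have the sub, using it as the partition key is a one-liner with the AWS SDK. A minimal sketch, assuming a hypothetical Users table whose partition key is user_uuid:

var AWS = require('aws-sdk');
var docClient = new AWS.DynamoDB.DocumentClient();

// Table and attribute names here are examples, not part of the question.
docClient.put({
  TableName: 'Users',
  Item: {
    user_uuid: sub,          // the Cognito sub retrieved above
    createdAt: Date.now()
  }
}, function(err) {
  if (err) console.error(err);
});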
I am trying to read data from BigQuery using the BigQuery Java library.
My dataset is not in the US location, so when I give my dataset name to the library, it throws an error saying the dataset was not found in the US location, because it searches the US location by default.
I have also tried setting the location using setLocation("asia-southeast1"), but it still searches the US location.
This is my code snippet:
val bigquery: BigQuery = BigQueryOptions.newBuilder().setLocation("asia-southeast1").build().getService
val query = "SELECT TO_JSON_STRING(t, true) AS json_row FROM " + dbName + "." + tableName + " AS t"
logger.info("Query is " + query)
val queryResult: QueryJobConfiguration = QueryJobConfiguration.newBuilder(query).build
val result: TableResult = bigquery.query(queryResult)
I am writing the code in Scala. Since it uses the same libraries as Java, and Java is more popular, I am asking this for Java.
Please help me understand how I can change the location from US to asia-southeast1.
Can I change something inside QueryJobConfiguration? I have searched a lot but am unable to find anything.
My only requirement is that I get the final result as a TableResult.
This is the exception being thrown:
com.google.cloud.bigquery.BigQueryException: Not found: Dataset XXXXXXXX was not found in location US
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.translate(HttpBigQueryRpc.java:106)
at com.google.cloud.bigquery.spi.v2.HttpBigQueryRpc.getQueryResults(HttpBigQueryRpc.java:584)
at com.google.cloud.bigquery.BigQueryImpl$34.call(BigQueryImpl.java:1203)
at com.google.cloud.bigquery.BigQueryImpl$34.call(BigQueryImpl.java:1198)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
at com.google.cloud.bigquery.BigQueryImpl.getQueryResults(BigQueryImpl.java:1197)
at com.google.cloud.bigquery.BigQueryImpl.getQueryResults(BigQueryImpl.java:1181)
at com.google.cloud.bigquery.Job$1.call(Job.java:329)
at com.google.cloud.bigquery.Job$1.call(Job.java:326)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.poll(RetryHelper.java:64)
at com.google.cloud.bigquery.Job.waitForQueryResults(Job.java:325)
at com.google.cloud.bigquery.Job.getQueryResults(Job.java:291)
at com.google.cloud.bigquery.BigQueryImpl.query(BigQueryImpl.java:1168)
...
Thanks in advance.
You shouldn't actually need to specify the location because BigQuery will infer it from the dataset being referenced in your query. See here.
When loading data, querying data, or exporting data, BigQuery determines the location to run the job based on the datasets referenced in the request. For example, if a query references a table in a dataset stored in the asia-northeast1 region, the query job will run in that region.
I just tested using the Java SDK on a dataset/table I created in asia-southeast1, and it worked without needing to explicitly specify the location.
If it's still not working for you by default (check that the table you're referencing actually exists), then you can specify the location by setting it in the JobId and passing that to the overloaded query method:
String query = "SELECT * FROM `grey-sort-challenge.asia_southeast1.a_table`;";
QueryJobConfiguration queryConfig = QueryJobConfiguration.newBuilder(query)
        .setUseLegacySql(Boolean.FALSE)
        .build();
JobId id = JobId.newBuilder().setLocation("asia-southeast1")
        .setRandomJob()
        .build();
try {
    for (FieldValueList row : BIGQUERY.query(queryConfig, id).iterateAll()) {
        for (FieldValue val : row) {
            System.out.printf("%s,", val.toString());
        }
        System.out.printf("\n");
    }
} catch (InterruptedException e) {
    e.printStackTrace();
}
I'm using a website that requires that their API key AND query data be submitted using the WebMethod.Post method. I'm able to get this to work in Python and C#, and I'm even able to construct and execute a cURL command which returns a usable JSON file that Excel can parse. I am also using Postman to validate my parameters, and everything looks good using all these methods. However, my goal is to build a query form that I can use within Excel, but I can't get past this query syntax in Power Query.
For now I am doing a simple query. That query looks like this:
let
url_1 = "https://api.[SomeWebSite].com/api/v1.0/search/keyword?apiKey=blah-blah-blah",
Body_1 = {
"SearchByKeywordRequest:
{
""keyword"": ""Hex Nuts"",
""records"": 0,
""startingRecord"": 0,
""searchOptions"": Null.Type,
""searchWithYourSignUpLanguage"": Null.Type
}"
},
Source = WebMethod.Post(url_1,Body_1)
in
Source
[Screenshot: the query in the Advanced Editor, which reports the syntax as valid]
It generates the following error:
Expression.Error: We cannot convert the value "POST" to type Function.
Details:
Value=POST
Type=[Type]
[Screenshot: the error as it appears in the Power Query Advanced Editor]
I've spent the better part of the last two days trying to find either an example using this method or documentation. The Microsoft documentation simply states the following:
WebMethod.Post: Specifies the POST method for HTTP.
https://learn.microsoft.com/en-us/powerquery-m/webmethod-post
This is of no help, and the only posts I have found so far criticize the poster for using POST instead of GET. I would switch, but GET is NOT supported by the website I'm using. If someone could please either point me to a document that explains what I am doing wrong or suggest a solution, I would be grateful.
WebMethod.Post is not a function; it is the constant text value "POST". You can send a POST request with either the Web.Contents or the WebAction.Request function.
A simple example that posts JSON and receives JSON:
let
url = "https://example.com/api/v1.0/some-resource-path",
headers = [#"Content-Type" = "application/json"],
body = Json.FromValue([Foo = 123]),
source = Json.Document(Web.Contents(url, [Headers = headers, Content = body])),
...
Added Nov 14, 19
The request body needs to be a binary value, included as the Content field of the second parameter of the Web.Contents function.
You can construct a binary JSON value using the Json.FromValue function. Conversely, you can convert a binary JSON value to a corresponding M type using the Json.Document function.
Note that {} is the list type in the M language, which is similar to a JSON array, while [] is the record type, which is similar to a JSON object.
With that said, your query should be something like this:
let
url_1 = "https://api.[SomeWebSite].com/api/v1.0/search/keyword?apiKey=blah-blah-blah",
Body_1 = Json.FromValue([
SearchByKeywordRequest = [
keyword = "Hex Nuts",
records = 0,
startingRecord = 0,
searchOptions = null,
searchWithYourSignUpLanguage = null
]
]),
headers = [#"Content-Type" = "application/json"],
source = Json.Document(Web.Contents(url_1, [Headers = headers, Content = Body_1])),
...
References:
Web.Contents (https://learn.microsoft.com/en-us/powerquery-m/web-contents)
Json.FromValue (https://learn.microsoft.com/en-us/powerquery-m/json-fromvalue)
Json.Document (https://learn.microsoft.com/en-us/powerquery-m/json-document)
I'm trying to build a chatbot using Amazon's boto3 library. Right now, I am trying to create an intent using the put_intent function. My code is as follows:
intent = lexClient.put_intent(name='test',
                              sampleUtterances=["Who is messi?"])
When I try running this, I get the following exception:
botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the PutIntent operation: RelativeId does not match Lex ARN format: intent:test2:$LATEST
Can anyone tell me what I'm doing wrong?
I got the same error when trying to use a digit in the intent name field. I realized that was not allowed when trying to do the same from the AWS console.
The error handling could really be more specific.
Try taking the question mark out of the utterance; that has caused me issues in the past!
You need to run GetSlotType. That will return the current checksum for that slot type. Put that checksum in your PutSlotType checksum. Big bang boom.
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/LexModelBuildingService.html#getSlotType-property
var params = {
  name: "AppointmentTypeValue",
  checksum: '54c6ab5f-fe30-483a-a364-b76e32f6f05d',
  description: "Type of dentist appointment to schedule",
  enumerationValues: [
    { value: "cleaning" },
    { value: "whitening" },
    { value: "root canal" },
    { value: "punch my face" }
  ]
};
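A sketch of the full round trip with the AWS JavaScript SDK, fetching the current checksum instead of hard-coding it (the region is an example value, and params is the object above):

var AWS = require('aws-sdk');
var lex = new AWS.LexModelBuildingService({ region: 'us-east-1' });

// Get the current checksum for the slot type, then submit the new version.
lex.getSlotType({ name: 'AppointmentTypeValue', version: '$LATEST' }, function(err, data) {
  if (err) return console.error(err);
  params.checksum = data.checksum;   // replace the hard-coded checksum
  lex.putSlotType(params, function(putErr, putData) {
    if (putErr) return console.error(putErr);
    console.log(putData);
  });
});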
For the put_intent function I faced similar issues.
At least the following three are worth mentioning.
Sample Utterances
There are requirements for the sample utterances:
An utterance can consist only of Unicode characters, spaces, and valid punctuation marks. Valid punctuation marks are: periods for abbreviations, underscores, apostrophes, and hyphens. If there is a slot placeholder in your utterance, ensure that it's in the {slotName} format and has spaces at both ends.
It seems like no error is raised when calling the put_intent function with the following code:
intent = lexClient.put_intent(name='test',
                              sampleUtterances=["Who is messi?"])
However, if you try to add the intent to your bot and start building the bot, it will fail.
To fix it, remove the question mark at the end of your sample utterance:
intent = lexClient.put_intent(name='test',
                              sampleUtterances=["Who is messi"])
Prior intent version
If your intent already exists you need to add the checksum to your function call. To get the checksum of your intent you can use the get_intent function.
For example (see the docs):
response = client.get_intent(
    name='test',
    version='$LATEST'
)
found_checksum = response.get('checksum')
After that you can put a new version of the intent:
intent = lexClient.put_intent(name='test',
                              sampleUtterances=["Who is messi"],
                              checksum=found_checksum)
Intent Name (correct in your case, just adding this for reference)
It seems like the name can only contain letters and underscores, and must be at most 100 characters long. I haven't found anything in the docs; this is just trial and error.
Calling put_intent with the following:
intent = lexClient.put_intent(name='test_1',
                              sampleUtterances=["Who is messi"])
Results in the following error:
BadRequestException: An error occurred (BadRequestException) when calling the PutIntent operation: RelativeId does not match Lex ARN format: intent:test_1:$LATEST
To fix the name, you can change it to:
intent = lexClient.put_intent(name='test',
                              sampleUtterances=["Who is messi"])
I want to add a scripted field in Kibana 5 to extract the stored procedure name from the message, so that I can visualize the number of errors per SP.
I have a field "message" where I can see the error text:
"[2017-02-03 05:04:51,087] # MyApp.Common.Server.Logging.ExceptionLogger [ERROR]: XmlWebServices Exception
User:
Name: XXXXXXXXXXXXXXXXXXXXXXX
Email: 926715#test.com
User ID: 926715 (PAID)
Web Server: PERFTESTSRV
Exception:
Type: MyApp.Common.Server.DatabaseException
Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Source: MyApp.Common.Server
Database: MyDB
Cmd Type: StoredProcedure
Cmd Text: spGetData
Trans: YES
Trans Lvl: Unspecified"
Guide: https://www.elastic.co/blog/using-painless-kibana-scripted-fields
My plan is to add something like this as a Painless script:
def m = /(?:Cmd\sText:\s*)[a-zA-Z]{1,}/.matcher(doc['message'].value);
if ( m.matches() ) {
    return m.group(1)
} else {
    return "no match"
}
I have also tried:
def tst = doc['message'].value;
if (tst != null) {
    def m = /(?:User\sID:\s*)[0-9]{1,}/.matcher(tst);
    if ( m.matches() ) {
        return m.group(1)
    }
} else {
    return "no match"
}
How can I address doc['message'].value?
When I try to do so, I get the error "Courier Fetch: 5 of 5 shards failed."
When I try doc['message.keyword'].value, I do not have the full message inside. I do not understand where I can learn the structure of what the message field contains and how to refer to it.
I assume the problem is the length of the message: it is too long to be of type "keyword", so it has to be of type "text", which is not supported by Painless scripted fields.
https://www.elastic.co/blog/using-painless-kibana-scripted-fields
Both Painless and Lucene expressions operate on fields stored in doc_values. So for string data, you will need to have the string stored in the keyword data type. Scripted fields based on Painless also cannot operate directly on _source.
https://www.elastic.co/guide/en/elasticsearch/reference/master/keyword.html
A field to index structured content such as email addresses, hostnames, status codes, zip codes or tags.
If you need to index full text content such as email bodies or product descriptions, it is likely that you should rather use a text field.