Error while creating DID using Didkit-wasm library - blockchain

I am trying to generate a credential using the didkit-wasm library with the following code, but the prepareIssueCredential call fails with the error: key expansion failed.
Any idea what I could be doing wrong?
const did = `did:pkh:tz:` + userData.account.address;
const credential = {
  '@context': [
    'https://www.w3.org/2018/credentials/v1',
    {
      alias: 'https://schema.org/name',
      description: 'https://schema.org/description',
      website: 'https://schema.org/url',
      logo: 'https://schema.org/logo',
      BasicProfile: 'https://tzprofiles.com/BasicProfile',
    },
  ],
  id: 'urn:uuid:' + uuid(),
  issuer: 'did:pkh:tz:tz1ZDSnw...',
  issuanceDate: new Date().toISOString(),
  type: ['VerifiableCredential', 'Company Credential'],
  credentialSubject: {
    id: did,
    name: company.name,
    description: company.description,
    url: company.url,
  },
};
let credentialString = JSON.stringify(credential);
const proofOptions = {
  verificationMethod: did + '#TezosMethod2021', // subject's DID
  proofPurpose: 'assertionMethod',
};
const publicKeyJwkString = await JWKFromTezos(
  'edpkuGHxcJDq9....' // issuer's public key
);
let prepStr = await prepareIssueCredential(
  credentialString,
  JSON.stringify(proofOptions),
  publicKeyJwkString
);

Related

Firestore exception occurred in retry method that was not classified as transient

I'm using GCP with Node 16 and Firestore. I'm fetching documents from a subcollection and updating them by id. console.log(subCollect.id) and console.log(subCollect.data()) log the correct information, but the update command fails with the following error:
var subCollecPromises = [];
var parentPromises = [];
var querySnapshot = await db.collectionGroup("mysubcollection")
  .where('name', '==', "subname")
  .where('sequence', '==', 2).get();
querySnapshot.forEach(doc => {
  var data = doc.data();
  if (data.enable) {
    var childRef = doc.ref.parent;
    var parentRef = childRef.parent;
    subCollecPromises.push(doc.ref.get());
    parentPromises.push(parentRef.get());
  }
});
const arrsubCollecSnap = await Promise.all(subCollecPromises);
const arrsubParentSnap = await Promise.all(parentPromises);
for (let index = 0; index < arrsubParentSnap.length; index++) {
  const item = arrsubParentSnap[index];
  var parentData = item.data();
  var subCollect = arrsubCollecSnap[index];
  console.log(subCollect.id);
  console.log(subCollect.data());
  await db.collection("mysubcollection").doc(subCollect.id)
    .update({ sequence: 3, datetime: new Date() });
  await sendMail(parentData.mail);
}
Error in update:
Error: 5 NOT_FOUND: no entity to update: app: "dev~myapp-backend"
path <
  Element {
    type: "mysubcollection"
    name: "Vuxx2Hy9xprtm7tZFyne"
  }
>
  code: 5,
  details: 'no entity to update: app: "dev~myapp-backend"\n' +
    'path <\n' +
    '  Element {\n' +
    '    type: "mysubcollection"\n' +
    '    name: "Vuxx2Hy9xprtm7tZFyne"\n' +
    '  }\n' +
    '>\n',
  metadata: Metadata {
    internalRepr: Map(1) { 'content-type' => [Array] },
    options: {}
  },
  note: 'Exception occurred in retry method that was not classified as transient'
The correct way to update a subcollection document is to start from the parent collection:
await db.collection("parentCollection").doc(parentId).collection("mysubcollection").doc(subCollect.id).update({sequence: 3, datetime: new Date()});

AWS SDK ResourceConflictException: The operation cannot be performed at this time. The function is currently in the following state: Pending

Getting ResourceConflictException while invoking a lambda function immediately after creating it. This used to work about a year or so ago, but now it works ONLY if I add a delay between create function and invoke (at least 1 second). Any idea why invoke fails with this message:
ResourceConflictException: The operation cannot be performed at this time. The function is currently in the following state: Pending
Here is the code snippet (tried Node client versions 14/17/18):
const AWS = require('aws-sdk')
const AdmZip = require('adm-zip')
const uuid = require("uuid")

AWS.config = new AWS.Config();
AWS.config.accessKeyId = "...";
AWS.config.secretAccessKey = "....";
AWS.config.region = "us-west-2";
const lambda = new AWS.Lambda({ apiVersion: '2015-03-31', region: 'us-west-2' })

// create function
const fragment = `const response = {
  statusCode: 200,
  body: JSON.stringify('Hello!'),
};
return response;`
const code = "exports.handler = async (event) => { " + fragment + "};"
const functionName = uuid.v4()
console.log("Creating function: " + functionName)
const zip = new AdmZip()
zip.addFile('index.js', Buffer.from(code))
const zippedBuffer = zip.toBuffer()
const params = {
  Code: { ZipFile: zippedBuffer },
  FunctionName: functionName,
  Handler: 'index.handler',
  Role: "...",
  Runtime: 'nodejs14.x',
  Description: 'user lambda function'
}
lambda.createFunction(params, function(err, data) {
  if (err) {
    console.error("Error from createFunction: " + err)
  } else {
    console.log("Lambda create function success!")
    setTimeout(() => {
      // invoke
      const paramsInvoke = {
        FunctionName: functionName,
        InvocationType: "RequestResponse",
        Payload: JSON.stringify({ key: "value" })
      }
      lambda.invoke(paramsInvoke).promise().then((response) => {
        console.log("GOT RESPONSE: " + JSON.stringify(response))
      })
    }, 700) // exception if below 700ms
  }
})

Cdk watch expects an environment variable to be a string while it is already a string

This error occasionally occurs on "cdk watch" and disappears when I destroy and redeploy the stack. All the global and per-lambda variables are definitely strings. The table name is not declared explicitly but generated from the id (maybe this is the cause of the issue?).
export class MyStack extends Stack {
  constructor(app: App, id: string, props: MyStackProps) {
    super(app, id);
    const isProd = props.deploymentEnv;
    const stackName = Stack.of(this).stackName;
    const PRIMARY_KEY = 'reportId';
    const dynamoTable = new Table(this, `MyTable-${stackName}`, {
      partitionKey: {
        name: PRIMARY_KEY,
        type: AttributeType.STRING,
      },
      stream: StreamViewType.NEW_IMAGE,
      removalPolicy: isProd ? RemovalPolicy.RETAIN : RemovalPolicy.DESTROY,
    });
    // Default props for lambda functions
    const nodeJsFunctionProps: NodejsFunctionProps = {
      bundling: {
        externalModules: [
          'aws-sdk', // Use the 'aws-sdk' available in the Lambda runtime
          '@sparticuz/chrome-aws-lambda',
        ],
      },
      depsLockFilePath: join(__dirname, '../package-lock.json'),
      environment: {
        PRIMARY_KEY: PRIMARY_KEY,
        TABLE_NAME: dynamoTable.tableName,
      },
      runtime: Runtime.NODEJS_16_X,
    };
In the lambda file, I'm getting the variables this way:
const TABLE_NAME = process.env.TABLE_NAME ?? '';
The error:
failed: InvalidParameterType: Expected params.Environment.Variables['TABLE_NAME'] to
be a string

Pulumi GCP MemoryStore Redis Cache Internal Server Error 13

I have a weird scenario here.
The following line in my Pulumi typescript code always fails the first time:
const redisCache = new gcp.redis.Instance("my-redis-cache", {
  name: "my-metadata-cache",
  tier: "BASIC",
  memorySizeGb: 1,
  authorizedNetwork: pulumi.interpolate`projects/someprojectid/global/networks/default`,
  connectMode: "PRIVATE_SERVICE_ACCESS",
  redisVersion: "REDIS_6_X",
  displayName: "My Metadata Cache",
  project: someprojectid,
}, defaultResourceOptions);
error: 1 error occurred:
* Error waiting to create Instance: Error waiting for Creating Instance: Error code 13, message: an internal error has occurred
Strangely, when I run pulumi up again, it succeeds. Has anyone else faced this before? Any clues?
OK, this turned out to be a case of working in a beast of a codebase. Once I started isolating the issue, things became clearer. For those who stumble across this one, here is the full working code.
import * as pulumi from "@pulumi/pulumi";
import * as gcp from "@pulumi/gcp";

export interface CacheComponentResourceArgs {
  projectId: pulumi.Input<string>;
  projectNumber: pulumi.Input<string>;
}

export class CacheComponentResource extends pulumi.ComponentResource {
  constructor(name: string, resourceArgs: CacheComponentResourceArgs, opts?: pulumi.ResourceOptions) {
    const inputs: pulumi.Inputs = {
      options: opts,
    };
    super("ekahaa:abstracta:Cache", name, inputs, opts);
    const serviceNetworkingAccessService = new gcp.projects.Service("service-nw-" + name, {
      disableDependentServices: true,
      project: resourceArgs.projectId,
      service: "servicenetworking.googleapis.com",
    }, {
      parent: this
    });
    const redisService = new gcp.projects.Service("redis-service-" + name, {
      disableDependentServices: true,
      project: resourceArgs.projectId,
      service: "redis.googleapis.com",
    }, {
      parent: this
    });
    const defaultGlobalAddress = new gcp.compute.GlobalAddress("default-ip-range-" + name, {
      name: "default-ip-range",
      purpose: "VPC_PEERING",
      prefixLength: 16,
      project: resourceArgs.projectId,
      addressType: "INTERNAL",
      network: pulumi.interpolate`projects/${resourceArgs.projectId}/global/networks/default`
    }, {
      parent: this,
      dependsOn: [redisService]
    });
    const privateServiceConnection = new gcp.servicenetworking.Connection("servicenetworking-" + name, {
      service: "servicenetworking.googleapis.com",
      network: pulumi.interpolate`projects/${resourceArgs.projectId}/global/networks/default`,
      reservedPeeringRanges: [defaultGlobalAddress.name],
    }, {
      parent: this,
      dependsOn: [defaultGlobalAddress]
    });
    const iamBindingRedis2 = new gcp.projects.IAMBinding("iamredis2-" + name, {
      members: [
        pulumi.interpolate`serviceAccount:service-${resourceArgs.projectNumber}@service-networking.iam.gserviceaccount.com`
      ],
      role: "roles/servicenetworking.serviceAgent",
      project: resourceArgs.projectId
    }, {
      parent: this,
      dependsOn: [privateServiceConnection]
    });
    const redisCache = new gcp.redis.Instance(name, {
      name: name,
      tier: "BASIC",
      memorySizeGb: 1,
      authorizedNetwork: pulumi.interpolate`projects/${resourceArgs.projectId}/global/networks/default`,
      connectMode: "PRIVATE_SERVICE_ACCESS",
      redisVersion: "REDIS_6_X",
      displayName: "Abstracta Metadata Cache",
      project: resourceArgs.projectId,
    }, {
      parent: this,
      dependsOn: [redisService, serviceNetworkingAccessService, iamBindingRedis2]
    });
    this.registerOutputs({
      redisCache: redisCache
    });
  }
}

let suffix = "20211018-002";
let org_name = `org-redis-demo-${suffix}`;
let projectId = `redis-demo-${suffix}`;

const myGcpProject = new gcp.organizations.Project('ab-' + org_name, {
  orgId: gcpOrgId,
  projectId: projectId,
  billingAccount: billingAccountId,
  name: 'ab-' + org_name,
});
const myGcpProjectIAM = new gcp.projects.IAMBinding("iam-001", {
  members: [
    "user:vikram.vasudevan@ekahaa.com",
  ],
  role: "roles/owner",
  project: myGcpProject.projectId
});
const cacheComponentResource = new CacheComponentResource("my-cache", {
  projectId: myGcpProject.projectId,
  projectNumber: myGcpProject.number
}, {
  dependsOn: [myGcpProjectIAM]
});

Update asset files using Tokens with aws-cdk

I have created this stack:
export class InfrastructureStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    const bucket = new s3.Bucket(this, "My Hello Website", {
      websiteIndexDocument: 'index.html',
      websiteErrorDocument: 'error.html',
      publicReadAccess: true,
      removalPolicy: cdk.RemovalPolicy.DESTROY
    });
    const api = new apigateway.RestApi(this, "My Endpoint", {
      restApiName: "My rest API name",
      description: "Some cool description"
    });
    const myLambda = new lambda.Function(this, 'My Backend', {
      runtime: lambda.Runtime.NODEJS_8_10,
      handler: 'index.handler',
      code: lambda.Code.fromAsset(path.join(__dirname, 'code'))
    });
    const apiToLambda = new apigateway.LambdaIntegration(myLambda)
    api.root.addMethod('GET', apiToLambda);
    updateWebsiteUrl.newUrl(api.url);
  }
}
The last line of code calls my function to update the asset that will be deployed to S3 as a website, injecting the API URL that is created during deployment. This is just a plain Node.js script that replaces the file's PLACEHOLDER with api.url.
Of course, at compile (synth) time the CDK does not know the final address of the REST endpoint, because that is only resolved at deploy time, so it updates my URL with something like:
'https://${Token[TOKEN.26]}.execute-api.${Token[AWS::Region.4]}.${Token[AWS::URLSuffix.1]}/${Token[TOKEN.32]}/;'
Is there any way I can update this after integrating the lambda with the API endpoint, after deploying those?
I would like to use the @aws-cdk/aws-s3-deployment module to deploy the code to the newly created bucket, all in the same stack, so a single cdk deploy updates everything I need.
To avoid confusion, my updateWebsiteUrl is:
export function newUrl(newUrl: string): void {
  const scriptPath = path.join(__dirname, '/../../front/');
  const scriptName = 'script.js';
  fs.readFile(scriptPath + scriptName, (err, buf) => {
    let scriptContent: string = buf.toString();
    let newScript = scriptContent.replace('URL_PLACEHOLDER', newUrl);
    fs.writeFile(scriptPath + 'newScript.js', newScript, () => {
      console.log('done writing');
    });
  });
}
And my script is quite simple:
const url = URL_PLACEHOLDER;

function foo() {
  let req = new XMLHttpRequest();
  req.open('GET', url, false);
  req.send(null);
  if (req.status == 200) {
    replaceContent(req.response);
  }
}

function replaceContent(content) {
  document.getElementById('content').innerHTML = content;
}
I ran into the same issue today and managed to find a solution for it.
The C# code I am using in my CDK program is the following:
// This will at runtime be just a token which refers to the actual JSON in the format {'api':{'baseUrl':'https://your-url'}}
var configJson = stack.ToJsonString(new Dictionary<string, object>
{
    ["api"] = new Dictionary<string, object>
    {
        ["baseUrl"] = api.Url
    }
});

var configFile = new AwsCustomResource(this, "config-file", new AwsCustomResourceProps
{
    OnUpdate = new AwsSdkCall
    {
        Service = "S3",
        Action = "putObject",
        Parameters = new Dictionary<string, string>
        {
            ["Bucket"] = bucket.BucketName,
            ["Key"] = "config.json",
            ["Body"] = configJson,
            ["ContentType"] = "application/json",
            ["CacheControl"] = "max-age=0, no-cache, no-store, must-revalidate"
        },
        PhysicalResourceId = PhysicalResourceId.Of("config"),
    },
    Policy = AwsCustomResourcePolicy.FromStatements(
        new[]
        {
            new PolicyStatement(new PolicyStatementProps
            {
                Actions = new[] { "s3:PutObject" },
                Resources = new[] { bucket.ArnForObjects("config.json") }
            })
        })
});
You will need to install the custom-resources package to have the types available: https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html
It is basically part of the solution you can find in an answer to the question AWS CDK passing API Gateway URL to static site in same Stack, or at this GitHub repository: https://github.com/jogold/cloudstructs/blob/master/src/static-website/index.ts#L134