How to set up Hyperledger Fabric Explorer | Amazon Managed Blockchain - amazon-web-services

I set up a Hyperledger Fabric network using Amazon Managed Blockchain by following this guide. Everything works properly in the Hyperledger network. Now I want to set up Hyperledger Explorer, but I cannot find any official Amazon documentation for it, so I am following this article. As the author suggests, I cloned this repo and did everything the article describes. Now I need to edit the first-network.json file, which I edited as follows:
{
    "name": "first-network",
    "version": "1.0.0",
    "license": "Apache-2.0",
    "client": {
        "tlsEnable": true,
        "adminUser": "admin",
        "adminPassword": "adminpw",
        "enableAuthentication": false,
        "organization": "m-QMD*********6HK",
        "connection": {
            "timeout": {
                "peer": {
                    "endorser": "300"
                },
                "orderer": "300"
            }
        }
    },
    "channels": {
        "mychannel": {
            "peers": {
                "nd-JEFEX**************N4": {}
            },
            "connection": {
                "timeout": {
                    "peer": {
                        "endorser": "6000",
                        "eventHub": "6000",
                        "eventReg": "6000"
                    }
                }
            }
        }
    },
    "organizations": {
        "Org1MSP": {
            "mspid": "m-QMD*********6HK",
            "fullpath": true,
            "adminPrivateKey": {
                "path": "/fabric-path/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore/1bebc656f198efb4b5bed08ef42cf3b2d89ac86f0a6b928e7a172fd823df0a48_sk"
            },
            "signedCert": {
                "path": "/fabric-path/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts/Admin@org1.example.com-cert.pem"
            }
        }
    },
    "peers": {
        "nd-JEFEX**************N4": {
            "tlsCACerts": {
                "path": "/fabric-path/fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt"
            },
            "url": "grpcs://nd-JEFEX**************N4.m-QMD*********6HK.n-rf*********q.managedblockchain.us-east-1.amazonaws.com:30003",
            "eventUrl": "grpcs://nd-JEFEX**************N4.m-QMD*********6HK.n-rf*********q.managedblockchain.us-east-1.amazonaws.com:30003",
            "grpcOptions": {
                "ssl-target-name-override": "nd-JEFEX**************N4"
            }
        }
    }
}
My question is: what should I put in place of the adminPrivateKey path, the signedCert path, and the tlsCACerts path?
Here is the list of available files generated while setting up Hyperledger Fabric in Amazon Managed Blockchain:
/home/ec2-user/admin-msp$ ls * -r
user:
signcerts:
cert.pem
keystore:
fd84a**********************1f03ff_sk
cacerts:
ca-m-*****-n-*****-managedblockchain-us-east-1-amazonaws-com-30002.pem
admincerts:
cert.pem
Please help me set up Hyperledger Fabric Explorer for my Hyperledger Fabric network.

You should configure your connection profile as below:
"organizations": {
"Org1MSP": {
"mspid": "m-QMD*********6HK",
"fullpath": true,
"adminPrivateKey": {
"path": "/home/ec2-user/admin-msp/keystore/fd84a**********************1f03ff_sk"
},
"signedCert": {
"path": "/home/ec2-user/admin-msp/signcerts/cert.pem"
}
}
},
"peers": {
"nd-JEFEX**************N4": {
"tlsCACerts": {
"path": "/home/ec2-user/admin-msp/cacerts/ca-m-*****-n-*****-managedblockchain-us-east-1-amazonaws-com-30002.pem"
},
"url": "grpcs://nd-JEFEX**************N4.m-QMD*********6HK.n-rf*********q.managedblockchain.us-east-1.amazonaws.com:30003",
"grpcOptions": {
"ssl-target-name-override": "nd-JEFEX**************N4"
}
}
}
I also recommend using the latest Explorer, because a commit adding support for the AWS Managed Blockchain service, along with many other bug fixes, landed recently (Making Hyperledger Explorer compatible to Amazon Managed Blockchain N… · hyperledger/blockchain-explorer@7b30821).
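One more note on the tlsCACerts path: if the cacerts file above does not work for TLS, the Managed Blockchain getting-started guide also has you download the service's TLS chain certificate from S3, which can be used as the peer's TLS CA cert instead. A sketch, assuming us-east-1 as in your endpoints:
aws s3 cp s3://us-east-1.managedblockchain/etc/managedblockchain-tls-chain.pem /home/ec2-user/managedblockchain-tls-chain.pem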

async function in dialogflow fulfillment inline editor error

Has anyone encountered the same issue as mine?
When I use an async function in my script, I encounter the error 'async functions' is only available in ES8 (use 'esversion: 8').
I already tried putting
/*esversion: 8 */
and also
/* jshint esversion: 8 */
on the first line of my script.
What do I need to check in my script so that I can use async? I tried both /*esversion: 8 */ and /* jshint esversion: 8 */, but the error is still not resolved.
'use strict';
function main() {
    const {BigQuery} = require('@google-cloud/bigquery');
    async function query() { // << 'async functions' is only available in ES8 (use 'esversion:8')'
        const bigqueryClient = new BigQuery();
        const sqlQuery = 'SELECT * FROM `sample.dataset` LIMIT 1000';
        const options = {
            query: sqlQuery,
            location: 'US',
            params: {serialnumber: 'test', min_word_count: 250},
            useQueryCache: false,
        };
    }
}
Package.json
{
    "name": "dialogflowFirebaseFulfillment",
    "description": "This is the default fulfillment for a Dialogflow agents using Cloud Functions for Firebase",
    "version": "0.0.1",
    "private": true,
    "license": "Apache Version 2.0",
    "author": "Google Inc.",
    "engines": {
        "node": "10",
        "jshintConfig": {"esversion": 8, "strict": "implied", "devel": true, "node": true, "globals": {}}
    },
    "scripts": {
        "start": "firebase serve --only functions:dialogflowFirebaseFulfillment",
        "deploy": "firebase deploy --only functions:dialogflowFirebaseFulfillment"
    },
    "dependencies": {
        "actions-on-google": "^2.2.0",
        "firebase-admin": "^5.13.1",
        "firebase-functions": "^2.0.2",
        "dialogflow": "^0.6.0",
        "dialogflow-fulfillment": "^0.5.0",
        "@google-cloud/bigquery": "^0.12.0"
    }
}
There is no clear answer here, but several things you can try to make it work.
In package.json, make sure you are calling compatible packages and setting the Node.js engine explicitly. Here's my current configuration. Update packages carefully, as doing so could break some working code.
{
    "name": "dialogflowFirebaseFulfillment",
    "description": "This is the default fulfillment for a Dialogflow agents using Cloud Functions for Firebase",
    "version": "0.0.1",
    "private": true,
    "license": "Apache Version 2.0",
    "author": "Google Inc.",
    "engines": {
        "node": "12"
    },
    "scripts": {
        "start": "firebase serve --only functions:dialogflowFirebaseFulfillment",
        "deploy": "firebase deploy --only functions:dialogflowFirebaseFulfillment"
    },
    "dependencies": {
        "actions-on-google": "^2.13.0",
        "firebase-admin": "^9.5.0",
        "firebase-functions": "^3.13.2",
        "dialogflow": "^1.2.0",
        "dialogflow-fulfillment": "^0.6.1",
        "@google-cloud/bigquery": "^5.5.0",
        "axios": "0.21.1"
    }
}
In my code I have it set up like this:
'use strict';
/* jshint esversion: 8 */
Some will tell you the jshint comment comes before 'use strict'; mine works after. I don't know if it matters, but you can try changing it around.
Enable the BigQuery API and billing on your account in the Google Cloud Console, if not yet enabled. You have access to a free tier, but storage costs will accrue depending on the size of your data.
In the inline editor, return the results of your async function or call it. You can add the line below in main() after the query function.
return query();
OR
query();
Write a new function above main and before intentMap (if using dialogflow-fulfillment) that uses async or promises.
function getSnum(agent) {
    return main().then(results => { console.log(results.length); });
}
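For reference, here is a minimal self-contained sketch of the query function with the pieces above put together; the dataset and table names are placeholders, and it assumes the @google-cloud/bigquery package from the package.json above:
'use strict';
/* jshint esversion: 8 */
const {BigQuery} = require('@google-cloud/bigquery');

async function query() {
    const bigqueryClient = new BigQuery();
    const options = {
        // placeholder dataset/table; replace with your own
        query: 'SELECT * FROM `my_project.my_dataset.my_table` LIMIT 1000',
        location: 'US',
    };
    // run the query and wait for the rows
    const [rows] = await bigqueryClient.query(options);
    return rows;
}

// call it and log the row count
query().then(rows => console.log(rows.length));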

AWS EKS logging to CloudWatch - how to send logs only, without metrics?

I would like to forward the logs of select services running on my EKS cluster to CloudWatch for cluster-independent storage and better observability.
Following the quickstart outlined at https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-EKS-quickstart.html I've managed to get the logs forwarded via the Fluent Bit service, but that has also generated 170 Container Insights metrics channels. Not only are those metrics not required, but they also appear to cost a fair bit.
How can I disable the collection of cluster metrics such as cpu / memory / network / etc, and only keep forwarding container logs to CloudWatch? I'm having a very hard time finding any documentation on this.
I think I figured it out: the cloudwatch-agent DaemonSet from the quickstart guide is what sends the metrics, and it is not required for log forwarding. None of the objects with cloudwatch-agent in their names in the quickstart YAML file are needed for log forwarding, as sketched below.
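A minimal sketch of removing them (resource names are assumed from the quickstart manifest; verify with kubectl get before deleting anything):
# see what the quickstart installed in its namespace
kubectl get all -n amazon-cloudwatch
# remove the metrics-collecting agent; the Fluent Bit DaemonSet keeps forwarding logs
kubectl delete daemonset cloudwatch-agent -n amazon-cloudwatch
kubectl delete configmap cwagentconfig -n amazon-cloudwatch
kubectl delete serviceaccount cloudwatch-agent -n amazon-cloudwatch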
As suggested by Toms Mikoss, you need to delete the metrics object in your configuration file. This file is the one that you pass to the agent when starting it.
This applies to "on-premises" Linux installations. I haven't tested this on Windows or EC2, but I imagine it will be similar. The AWS documentation here says that you can also distribute the configuration via SSM, but again, I imagine the answer is still applicable.
Example of file with metrics:
{
    "agent": {
        "metrics_collection_interval": 60,
        "run_as_user": "root"
    },
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/nginx.log",
                        "log_group_name": "nginx",
                        "log_stream_name": "{hostname}"
                    }
                ]
            }
        }
    },
    "metrics": {
        "metrics_collected": {
            "cpu": {
                "measurement": [
                    "cpu_usage_idle",
                    "cpu_usage_iowait"
                ],
                "metrics_collection_interval": 60,
                "totalcpu": true
            }
        }
    }
}
Example of file without metrics:
{
    "agent": {
        "metrics_collection_interval": 60,
        "run_as_user": "root"
    },
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/var/log/nginx.log",
                        "log_group_name": "nginx",
                        "log_stream_name": "{hostname}"
                    }
                ]
            }
        }
    }
}
For reference, the command to start the agent on Linux on-premises servers:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config \
-m onPremise -s -c file:configuration-file-path
More details in the AWS Documentation here

aws quicksight create-analysis cli command

We have two different accounts:
one for development
another, the client's prod account
We have CloudFormation templates to deploy resources; while developing new features we first test on dev and then deploy to prod. But with QuickSight it is not so easy: there are no CloudFormation templates for QuickSight, so we would need to recreate all reports in the prod account, which is very hard to do manually. I found the QuickSight API and the create-analysis command, but I don't understand how to create an analysis via this command.
Does anyone have examples, or know how to create an analysis with the CLI?
Slavik
It's not possible to create an entirely new analysis or dashboard via the API; however, it is possible to promote these across environments via the API. I found the following AWS blog post to be of some use:
AWS QuickSight Blog
Rich
First, create an analysis template using:
aws quicksight create-template --aws-account-id 123456789123 --cli-input-json file://./create-template.json
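The contents of create-template.json are not shown in the original answer; below is a minimal sketch based on the CreateTemplate API input shape, where the template ID, name, source analysis ARN, and dataset references are placeholders you would replace with your own:
{
    "AwsAccountId": "123456789123",
    "TemplateId": "report-template",
    "Name": "report-template",
    "SourceEntity": {
        "SourceAnalysis": {
            "Arn": "arn:aws:quicksight:ap-southeast-2:123456789123:analysis/source-analysis-id",
            "DataSetReferences": [
                {
                    "DataSetPlaceholder": "Template-SRM-Payments Dataset",
                    "DataSetArn": "arn:aws:quicksight:ap-southeast-2:123456789123:dataset/abc"
                }
            ]
        }
    },
    "VersionDescription": "1"
}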
For create-analysis, you can use the following JSON (create-analysis-cli-input.json):
{
    "AwsAccountId": "123456789123",
    "AnalysisId": "TestAnalysis",
    "Name": "TestAnalysis-Report",
    "Parameters": {
        "StringParameters": [
            {
                "Name": "Parameters1",
                "Values": ["All"]
            },
            {
                "Name": "Parameters2",
                "Values": ["All"]
            }
        ],
        "IntegerParameters": [
            {
                "Name": "IntParameter1",
                "Values": [0]
            },
            {
                "Name": "IntParameter2",
                "Values": [1000]
            }
        ],
        "DateTimeParameters": [
            {
                "Name": "Date1",
                "Values": [20160327]
            },
            {
                "Name": "Date2",
                "Values": [20160723]
            }
        ]
    },
    "Permissions": [
        {
            "Principal": "arn:aws:quicksight:ap-southeast-2:123456789123:user/default/user-qs",
            "Actions": [
                "quicksight:UpdateDataSourcePermissions",
                "quicksight:DescribeDataSource",
                "quicksight:DescribeDataSourcePermissions",
                "quicksight:PassDataSource",
                "quicksight:UpdateDataSource",
                "quicksight:DeleteDataSource"
            ]
        }
    ],
    "SourceEntity": {
        "SourceTemplate": {
            "DataSetReferences": [
                {
                    "DataSetPlaceholder": "Template-SRM-Payments Dataset",
                    "DataSetArn": "arn:aws:quicksight:ap-southeast-2:123456789123:dataset/abc"
                },
                {
                    "DataSetPlaceholder": "Template-SRM-DailyPayments Dataset",
                    "DataSetArn": "arn:aws:quicksight:ap-southeast-2:123456789123:dataset/def"
                },
                {
                    "DataSetPlaceholder": "Template-SRM-DateTable Dataset",
                    "DataSetArn": "arn:aws:quicksight:ap-southeast-2:123456789123:dataset/ghi"
                }
            ],
            "Arn": "arn:aws:quicksight:ap-southeast-2:123456789123:template/report-template"
        }
    },
    "ThemeArn": "arn:aws:quicksight::aws:theme/SEASIDE",
    "Tags": [
        {
            "Key": "Name",
            "Value": "TestReport"
        }
    ]
}
The CLI command to run is:
aws quicksight create-analysis --aws-account-id 123456789123 --cli-input-json file://./create-analysis-cli-input.json
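To promote an analysis into the prod account, the template created in the dev account has to be shared with the prod account first. A sketch using update-template-permissions (the account IDs are placeholders, with 111111111111 as dev and 222222222222 as prod):
aws quicksight update-template-permissions \
    --aws-account-id 111111111111 \
    --template-id report-template \
    --grant-permissions Principal=arn:aws:iam::222222222222:root,Actions=quicksight:DescribeTemplate
Then run create-analysis in the prod account, referencing the dev template's ARN in SourceEntity.SourceTemplate.Arn.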

How can I get the TaskId of a Fargate ECS container

Similar to this question (How to get Task ID from within ECS container?), but I want to get the TaskId for my Fargate task. How can you do this? Like others, I want this for logging information.
I'm running a Spring app with an ELK stack for logging, and I would like to include the TaskId in the logs if possible.
Edit
I actually never got this to work, by the way; here is my code:
private String getTaskIdInternal() {
    // check the env variable before building the URL; otherwise url becomes "null/task" and the null check can never fire
    String metadataUri = System.getenv("ECS_CONTAINER_METADATA_URI_V4");
    if (metadataUri == null) {
        throw new RuntimeException("ECS_CONTAINER_METADATA_URI_V4 env variable not defined");
    }
    String url = metadataUri + "/task";
    logger.info("Getting ecsMetaDataURL={}", url);
    RestTemplate restTemplate = new RestTemplate();
    ResponseEntity<JsonNode> response = restTemplate.getForEntity(url, JsonNode.class);
    logger.info("ecsMetaData={}", response);
    JsonNode map = response.getBody();
    String taskArn = map.get("TaskARN").asText();
    String[] splitTaskArn = taskArn.split("/");
    String taskId = splitTaskArn[splitTaskArn.length - 1];
    logger.info("ecsTaskId={}", taskId);
    return taskId;
}
But I always get this stack trace:
Could not get the taskId from ECS. exception=org.springframework.web.client.HttpClientErrorException: 403 Forbidden
    at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:118)
    at org.springframework.web.client.DefaultResponseErrorHandler.handleError(DefaultResponseErrorHandler.java:103)
    at org.springframework.web.client.ResponseErrorHandler.handleError(ResponseErrorHandler.java:63)
    at org.springframework.web.client.RestTemplate.handleResponse(RestTemplate.java:732)
    at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:690)
    at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:646)
    at org.springframework.web.client.RestTemplate.getForEntity(RestTemplate.java:325)
If you're trying to get the task ID in Fargate for ECS, you can make use of the task metadata endpoints.
Assuming you're using platform version 1.4.0 of Fargate, you can get this via an HTTP request to ${ECS_CONTAINER_METADATA_URI_V4}/task.
An example response from this endpoint is below:
{
    "Cluster": "arn:aws:ecs:us-west-2:&ExampleAWSAccountNo1;:cluster/default",
    "TaskARN": "arn:aws:ecs:us-west-2:&ExampleAWSAccountNo1;:task/default/febee046097849aba589d4435207c04a",
    "Family": "query-metadata",
    "Revision": "7",
    "DesiredStatus": "RUNNING",
    "KnownStatus": "RUNNING",
    "Limits": {
        "CPU": 0.25,
        "Memory": 512
    },
    "PullStartedAt": "2020-03-26T22:25:40.420726088Z",
    "PullStoppedAt": "2020-03-26T22:26:22.235177616Z",
    "AvailabilityZone": "us-west-2c",
    "Containers": [
        {
            "DockerId": "febee046097849aba589d4435207c04aquery-metadata",
            "Name": "query-metadata",
            "DockerName": "query-metadata",
            "Image": "mreferre/eksutils",
            "ImageID": "sha256:1b146e73f801617610dcb00441c6423e7c85a7583dd4a65ed1be03cb0e123311",
            "Labels": {
                "com.amazonaws.ecs.cluster": "arn:aws:ecs:us-west-2:&ExampleAWSAccountNo1;:cluster/default",
                "com.amazonaws.ecs.container-name": "query-metadata",
                "com.amazonaws.ecs.task-arn": "arn:aws:ecs:us-west-2:&ExampleAWSAccountNo1;:task/default/febee046097849aba589d4435207c04a",
                "com.amazonaws.ecs.task-definition-family": "query-metadata",
                "com.amazonaws.ecs.task-definition-version": "7"
            },
            "DesiredStatus": "RUNNING",
            "KnownStatus": "RUNNING",
            "Limits": {
                "CPU": 2
            },
            "CreatedAt": "2020-03-26T22:26:24.534553758Z",
            "StartedAt": "2020-03-26T22:26:24.534553758Z",
            "Type": "NORMAL",
            "Networks": [
                {
                    "NetworkMode": "awsvpc",
                    "IPv4Addresses": [
                        "10.0.0.108"
                    ],
                    "AttachmentIndex": 0,
                    "IPv4SubnetCIDRBlock": "10.0.0.0/24",
                    "MACAddress": "0a:62:17:7a:36:68",
                    "DomainNameServers": [
                        "10.0.0.2"
                    ],
                    "DomainNameSearchList": [
                        "us-west-2.compute.internal"
                    ],
                    "PrivateDNSName": "ip-10-0-0-108.us-west-2.compute.internal",
                    "SubnetGatewayIpv4Address": ""
                }
            ]
        }
    ]
}
As you can see, you would need to parse the TaskARN to get the TaskID (it is the last part of the ARN if you split it by "/"); see the sketch after the quote below.
Amazon does specify the following in the documentation, which should be noted:
For tasks using the Fargate launch type and platform versions prior to 1.4.0, the task metadata version 3 and 2 endpoint are supported. For more information, see Task Metadata Endpoint version 3 or Task Metadata Endpoint version 2.
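For a quick check from inside the container, a one-liner sketch (assuming curl and jq are available in the image):
curl -s ${ECS_CONTAINER_METADATA_URI_V4}/task | jq -r '.TaskARN' | awk -F/ '{print $NF}'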
The link in the accepted answer is for the EC2 launch type. The direct doc link for Fargate is: https://docs.aws.amazon.com/AmazonECS/latest/userguide/task-metadata-endpoint-v4-fargate.html. The JSON content seems to be pretty much the same, though.

configuring synonyms.txt in AWS hosted Elasticsearch

I am trying to upload synonyms.txt to AWS-hosted Elasticsearch, but I couldn't find any feasible way to do it. Here is what I have tried.
I am not supposed to use inline synonyms, since I have a huge list of them. So I tried the settings below to upload synonyms.txt to AWS-hosted Elasticsearch:
"settings": {
"analysis": {
"filter": {
"synonyms_filter" : {
"type" : "synonym",
"synonyms_path" : "https://test-bucket.s3.amazonaws.com/synonyms.txt"
}
},
"analyzer": {
"synonyms_analyzer" : {
"tokenizer" : "whitespace",
"type": "custom",
"filter" : ["lowercase","synonyms_filter"]
}
}
}
When I use the above settings to create an index from Kibana (VPC access), I get the exception below.
{"error":{"root_cause":[{"type":"remote_transport_exception","reason":"[0jc0TeJ][x.x.x.x:9300][indices:admin/create]"}],"type":"illegal_argument_exception","reason":"IOException while reading synonyms_path_path: (No such file or directory)"}},"status":400}
Since my Elasticsearch is hosted by AWS, I can't access the nodes or the etc folder to upload my file.
Any suggestions on the approach, or on how to upload the file to AWS ES?
The AWS ES service has many limitations, one of which is that you cannot use file-based synonyms (since you don't have access to the filesystem).
You need to list all your synonyms inside the index settings.
"settings": {
"analysis": {
"filter": {
"synonyms_filter" : {
"type" : "synonym",
"synonyms" : [ <--- like this
"i-pod, i pod => ipod",
"universe, cosmos"
]
}
},
"analyzer": {
"synonyms_analyzer" : {
"tokenizer" : "whitespace",
"type": "custom",
"filter" : ["lowercase","synonyms_filter"]
}
}
}
UPDATE:
You can now use file-based synonyms in AWS ES by adding custom packages, as sketched below.
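The rough flow, as a sketch: upload synonyms.txt to S3, import it as a custom package and associate it with your domain (via the console, or the es create-package / associate-package CLI commands), then reference the package by the ID the service assigns. The F111111111 below is a placeholder for that package ID:
"synonyms_filter": {
    "type": "synonym",
    "synonyms_path": "analyzers/F111111111",
    "updateable": true
}
Note that an updateable synonym filter can only be used in search-time analyzers, not at index time.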