Calling Move (entry) functions using the Sui CLI

I have published a module called certificates at 0x426ff70c987a00b9384b102f10a4f8bb8945141f
\identities>sui client object --id 0x426ff70c987a00b9384b102f10a4f8bb8945141f
----- Move Package (0x426ff70c987a00b9384b102f10a4f8bb8945141f[1]) -----
Owner: Immutable
Version: 1
Storage Rebate: 0
Previous Transaction: K01/b4ZdtujIIAiFODDRATUUMs3mw41OHNoB2kfMghY=
----- Data -----
Modules: ["certificates"]
I am trying to call a function named issue_certificate with the following signature:
public entry fun issue_certificate(
    _: &CertCreatorCap,
    name: vector<u8>,
    year: u8,
    recipient: address,
    ctx: &mut TxContext
)
&CertCreatorCap has the following ID: 0x8e724e1266e1f4f1a8d6cfa904b2e0749ed41953
\identities>sui client object --id 0x8e724e1266e1f4f1a8d6cfa904b2e0749ed41953
----- Move Object (0x8e724e1266e1f4f1a8d6cfa904b2e0749ed41953[1]) -----
Owner: Account Address ( 0xb7a9c2bc3a65ad0b02851e426e6b34dcf069b6c7 )
Version: 1
Storage Rebate: 14
Previous Transaction: K01/b4ZdtujIIAiFODDRATUUMs3mw41OHNoB2kfMghY=
----- Data -----
type: 0x426ff70c987a00b9384b102f10a4f8bb8945141f::certificates::CertCreatorCap
id: 0x8e724e1266e1f4f1a8d6cfa904b2e0749ed41953
The command using the Sui Client CLI (with name = "JIM") is:
\identities>sui client call --function issue_certificate --module certificates --package 0x426ff70c987a00b9384b102f10a4f8bb8945141f --args 0x8e724e1266e1f4f1a8d6cfa904b2e0749ed41953 "JIM" 2022 0xb7a9c2bc3a65ad0b02851e426e6b34dcf069b6c7 --gas-budget 100000
Could not serialize argument of type U8 at 2 into u8. Got error: out of range integral type conversion attempted
I have tried passing b"JIM" and <74,105,109>; they give the following errors.
\identities>sui client call --function issue_certificate --module certificates --package 0x426ff70c987a00b9384b102f10a4f8bb8945141f --args 0x8e724e1266e1f4f1a8d6cfa904b2e0749ed41953 <74,105,109> 2022 0xb7a9c2bc3a65ad0b02851e426e6b34dcf069b6c7 --gas-budget 100000
The system cannot find the file specified.
\identities>sui client call --function issue_certificate --module certificates --package 0x426ff70c987a00b9384b102f10a4f8bb8945141f --args 0x8e724e1266e1f4f1a8d6cfa904b2e0749ed41953 b"JIM" 2022 0xb7a9c2bc3a65ad0b02851e426e6b34dcf069b6c7 --gas-budget 100000
Could not serialize argument of type U8 at 2 into u8. Got error: out of range integral type conversion attempted
I think the fundamental question is: how do I pass vector/string arguments via the CLI to call a Move function on Sui?

\identities>sui client call --function issue_certificate --module certificates --package 0x426ff70c987a00b9384b102f10a4f8bb8945141f --args 0x8e724e1266e1f4f1a8d6cfa904b2e0749ed41953 "JIM" 22 0xb7a9c2bc3a65ad0b02851e426e6b34dcf069b6c7 --gas-budget 100000
The above command works, lol. It was never the string: 2022 simply doesn't fit in a u8 (max 255), and the "at 2" in the error is the zero-based index of the year argument. (The "file specified" error from the <74,105,109> attempt is just the Windows shell treating < and > as redirection.) What a careless mistake.
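For the vector<u8> half of the question: as the working command shows, the CLI serializes a quoted string like "JIM" into a vector<u8> for you. If you ever want to double-check which byte values a string turns into, a quick sanity check from any POSIX shell (nothing Sui-specific, just od):

# Print the decimal byte values of "JIM" (74 73 77)
printf 'JIM' | od -An -tu1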

Related

Connecting cassandra-stress to AWS Keyspaces

I've provisioned a keyspace on AWS, and to make sure it can achieve our desired performance I'm trying to run the cassandra-stress tool on it and compare it to other architectures we're experimenting with.
I managed to connect to it using the following cqlshrc:
[connection]
port = 9142
factory = cqlshlib.ssl.ssl_transport_factory
[ssl]
validate = true
certfile = /root/.cassandra/AmazonRootCA1.pem
And the following command (hoping that there will be Python 3 support soon enough; according to their Jira ticket, development was completed this February):
cqlsh cassandra.eu-central-1.amazonaws.com 9142 -u "myuser-at-722222222222" -p "12/12ZmHmtD1klsDk9cgqt/XXXXXXXXxUz6Sy687z/U=" --ssl --cqlversion="3.4.4"
Surprisingly or not, when using the official AWS guides things tend to work.
So I went on and tried connecting the cassandra-stress tool (I have it inside a Docker container; I'd rather keep my OS Java-free) to the same keyspace.
First I converted the AWS AmazonRootCA1.pem into cassandra_truststore.jks using the following commands (explained here):
openssl x509 -outform der -in AmazonRootCA1.pem -out temp_file.der
keytool -import -alias cassandra -keystore cassandra_truststore.jks -file temp_file.der
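(To sanity-check the conversion, keytool can list the store contents; a quick check, assuming the same keystore path and the password chosen during the import:)

# List the trusted certificates in the freshly created truststore
keytool -list -keystore cassandra_truststore.jks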
Now when I'm trying to run the actual tool like this:
./cassandra-stress write -node cassandra.eu-central-1.amazonaws.com -port native=9142 thrift=9142 jmx=9142 -transport truststore=/root/.cassandra/cassandra_truststore.jks truststore-password=mypassword -mode native cql3 user="myuser-at-722222222222" password="12/12ZmHmtD1klsDk9cgqt/XXXXXXXXxUz6Sy687z/U="
I'm getting the following error:
******************** Stress Settings ********************
Command:
Type: write
Count: -1
No Warmup: false
Consistency Level: LOCAL_ONE
Target Uncertainty: 0.020
Minimum Uncertainty Measurements: 30
Maximum Uncertainty Measurements: 200
Key Size (bytes): 10
Counter Increment Distibution: add=fixed(1)
Rate:
Auto: true
Min Threads: 4
Max Threads: 1000
Population:
Sequence: 1..1000000
Order: ARBITRARY
Wrap: true
Insert:
Revisits: Uniform: min=1,max=1000000
Visits: Fixed: key=1
Row Population Ratio: Ratio: divisor=1.000000;delegate=Fixed: key=1
Batch Type: not batching
Columns:
Max Columns Per Key: 5
Column Names: [C0, C1, C2, C3, C4]
Comparator: AsciiType
Timestamp: null
Variable Column Count: false
Slice: false
Size Distribution: Fixed: key=34
Count Distribution: Fixed: key=5
Errors:
Ignore: false
Tries: 10
Log:
No Summary: false
No Settings: false
File: null
Interval Millis: 1000
Level: NORMAL
Mode:
API: JAVA_DRIVER_NATIVE
Connection Style: CQL_PREPARED
CQL Version: CQL3
Protocol Version: V4
Username: myuser-at-722222222222
Password: *suppressed*
Auth Provide Class: null
Max Pending Per Connection: 128
Connections Per Host: 8
Compression: NONE
Node:
Nodes: [cassandra.eu-central-1.amazonaws.com]
Is White List: false
Datacenter: null
Schema:
Keyspace: keyspace1
Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
Replication Strategy Pptions: {replication_factor=1}
Table Compression: null
Table Compaction Strategy: null
Table Compaction Strategy Options: {}
Transport:
factory=org.apache.cassandra.thrift.TFramedTransportFactory; truststore=/root/.cassandra/cassandra_truststore.jks; truststore-password=mypassword; keystore=null; keystore-password=null; ssl-protocol=TLS; ssl-alg=SunX509; store-type=JKS; ssl-ciphers=TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA;
Port:
Native Port: 9142
Thrift Port: 9142
JMX Port: 9142
Send To Daemon:
*not set*
Graph:
File: null
Revision: unknown
Title: null
Operation: WRITE
TokenRange:
Wrap: false
Split Factor: 1
java.lang.RuntimeException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: cassandra.eu-central-1.amazonaws.com/3.127.48.183:9142 (com.datastax.driver.core.exceptions.TransportException: [cassandra.eu-central-1.amazonaws.com/3.127.48.183] Channel has been closed))
at org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:220)
at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesNative(SettingsSchema.java:79)
at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:69)
at org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:228)
at org.apache.cassandra.stress.StressAction.run(StressAction.java:57)
at org.apache.cassandra.stress.Stress.run(Stress.java:143)
at org.apache.cassandra.stress.Stress.main(Stress.java:62)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: cassandra.eu-central-1.amazonaws.com/3.127.48.183:9142 (com.datastax.driver.core.exceptions.TransportException: [cassandra.eu-central-1.amazonaws.com/3.127.48.183] Channel has been closed))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:233)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:79)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1424)
at com.datastax.driver.core.Cluster.getMetadata(Cluster.java:403)
at org.apache.cassandra.stress.util.JavaDriverClient.connect(JavaDriverClient.java:160)
at org.apache.cassandra.stress.settings.StressSettings.getJavaDriverClient(StressSettings.java:211)
... 6 more
I've tried changing some parameters, such as the JKS password (just in case I had it wrong), but I got a different error message, so that's probably not the cause.
Did I miss something?
Try using TLP Stress instead.
tlp-stress run RandomPartitionAccess -d 10m --host cassandra.us-east-1.amazonaws.com --port 9142 --username alice --password fLyWYFlTCD5J2gzGAZ --ssl --max-requests 4000 --dc us-east-2 --threads 10
https://thelastpickle.com/tlp-stress/
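Independent of the tool, it can also help to rule out basic TLS reachability first. A quick probe with standard openssl (nothing Keyspaces-specific; run it from the same container as the stress tool):

# Attempt a TLS handshake against the Keyspaces endpoint using the Amazon root CA
openssl s_client -connect cassandra.eu-central-1.amazonaws.com:9142 -CAfile AmazonRootCA1.pem < /dev/null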

How to parse an HTTP JSON response and pass or fail the job based on it?

I have a GitLab CI YAML file with 2 jobs. My .gitlab-ci.yaml file is:
variables:
  MSBUILD_PATH: 'C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe'
  SOLUTION_PATH: 'Source/NewProject.sln'

stages:
  - build
  - trigger_IT_service

build_job:
  stage: build
  script:
    - '& "$env:MSBUILD_PATH" "$env:SOLUTION_PATH" /nologo /t:Rebuild /p:Configuration=Debug'

trigger_IT_service_job:
  stage: trigger_IT_service
  script:
    - 'curl http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer'
And this is my trigger_IT_service job report:
Running on DIGITALIZATION...
00:00
Fetching changes with git depth set to 50...
00:05
Reinitialized existing Git repository in D:/GitLab-Runner/builds/c11pExsu/0/personalname/newproject/.git/
Checking out 24be087a as master...
Removing Output/
git-lfs/2.5.2 (GitHub; windows amd64; go 1.10.3; git 8e3c5c93)
Skipping Git submodules setup
$ curl http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer
00:02
StatusCode : 200
StatusDescription : 200
Content : {"status":200,"message":"SAP transfer started. Please
check in db","errorCode":0,"timestamp":"2020-03-25T13:53:05
.722+0300","responseObject":null}
RawContent : HTTP/1.1 200 200
Keep-Alive: timeout=10
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json;charset=UTF-8
Date: Wed, 25 Mar 2020 10:53:05 GMT
Server: Apache
I need to check the "Content" part of this report in the GitLab CI YAML.
If "message" is "SAP transfer started. Please check in db", the pipeline should pass; otherwise it must fail.
Actually, my question is:
How do I parse the HTTP JSON response and pass or fail the job based on it?
Thank you for all your help.
The best way would be to install some tool to parse JSON and use it; there are different examples here.
Given the JSON example from the comment:
{
  "status": 200,
  "message": "SAP transfer started. Please check in db",
  "errorCode": 0,
  "timestamp": "2020-03-25T17:06:43.430+0300",
  "responseObject": null
}
If you can install Python 3 on your runner, you could achieve it all with a script:
import requests  # note: this might require an additional `pip install requests`

message = requests.get('http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer').json()['message']
if message != 'SAP transfer started. Please check in db':
    print('Invalid message: ' + message)
    exit(1)
else:
    print('Message ok')
So the trigger_IT_service stage in your YAML would be:
trigger_IT_service_job:
  stage: trigger_IT_service
  script: >
    python -c "import requests; message = requests.get('http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer').json()['message']; (print('Invalid message: ' + message), exit(1)) if message != 'SAP transfer started. Please check in db' else (print('Message ok'), exit(0))"
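If installing Python on the runner is not an option, the same check can be done with curl and jq. A sketch, assuming a runner with a POSIX shell, a real curl, and jq installed (note that in the job log above curl is actually the PowerShell Invoke-WebRequest alias, so this variant does not apply to a Windows PowerShell executor):

# Fetch the response, extract .message with jq, and fail the job on a mismatch
MESSAGE=$(curl -s http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer | jq -r '.message')
if [ "$MESSAGE" != "SAP transfer started. Please check in db" ]; then
  echo "Invalid message: $MESSAGE"
  exit 1
fi
echo "Message ok"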

Can't execute AWS Lambda function built with Micronaut and Graal: Error decoding JSON stream

I built a native Java AWS Lambda function using GraalVM and Micronaut, as explained here.
After deploying it to AWS Lambda (custom runtime), I can't successfully execute it.
The error that AWS shows is:
{
  "errorType": "Runtime.ExitError",
  "errorMessage": "RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Error: Runtime exited with error: exit status 1"
}
The AWS log output is:
START RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Version: $LATEST
01:13:08.015 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [ec2, cloud, function]
Error executing function (Use -x for more information): Error decoding JSON stream for type [request]: No content to map due to end-of-input
at [Source: (BufferedInputStream); line: 1, column: 0]
END RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2
REPORT RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Duration: 698.31 ms Billed Duration: 700 ms Memory Size: 512 MB Max Memory Used: 54 MB
RequestId: 9a231ad9-becc-49f7-832a-f9088f821fb2 Error: Runtime exited with error: exit status 1
Runtime.ExitError
But when I test it locally using
echo '{"value":"testing"}' | ./server
I get
01:35:56.675 [main] INFO i.m.context.env.DefaultEnvironment - Established active environments: [function]
{"value":"New value: testing"}
The function code is:
@FunctionBean("user-data-function")
public class UserDataFunction implements Function<UserDataRequest, UserData> {

    private static final Logger LOG = LoggerFactory.getLogger(UserDataFunction.class);

    private final UserDataService userDataService;

    public UserDataFunction(UserDataService userDataService) {
        this.userDataService = userDataService;
    }

    @Override
    public UserData apply(UserDataRequest request) {
        if (LOG.isDebugEnabled()) {
            LOG.debug("Request: {}", request.getValue());
        }
        return userDataService.get(request.getValue());
    }
}
And the UserDataService is:
@Singleton
public class UserDataService {

    public UserData get(String value) {
        UserData userData = new UserData();
        userData.setValue("New value: " + value);
        return userData;
    }
}
To test it on AWS console, I configured the following test event:
{ "value": "aws lambda test" }
PS: I uploaded to AWS Lambda a zip file that contains the "server" binary and the "bootstrap" file required by the custom runtime, as explained before.
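(For reference, a minimal packaging sketch, assuming both files sit in the current directory; bootstrap must be executable:)

chmod +x bootstrap server
zip function.zip bootstrap server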
What am I doing wrong?
Thanks in advance.
Tiago Peixoto.
EDIT: added the lambda test event used on AWS console.
Ok, I figured it out. The original bootstrap simply executed ./server, which reads the event from stdin (that's why the local echo test works); on Lambda nothing arrives on stdin, so the process saw end-of-input right away. A custom runtime has to poll the Lambda Runtime API instead. I just changed the bootstrap file from this
#!/bin/sh
set -euo pipefail
./server
to this
#!/bin/sh
set -euo pipefail

# Processing
while true
do
  HEADERS="$(mktemp)"
  # Get an event
  EVENT_DATA=$(curl -sS -LD "$HEADERS" -X GET "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  REQUEST_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)
  # Execute the handler function from the script
  RESPONSE=$(echo "$EVENT_DATA" | ./server)
  # Send the response
  curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$REQUEST_ID/response" -d "$RESPONSE"
done
as explained here

AWS metadata endpoint not available in CodeBuild

I have a Django app that has a function to determine whether it is running on an EC2 instance or not:
from urllib.request import urlopen

def am_i_ec2():
    result = False
    meta = 'http://169.254.169.254/latest/meta-data/public-ipv4'
    try:
        result = urlopen(meta).status == 200
    except Exception:
        return result
    return True
This obviously works fine in my local machine. It also works on the EC2s where the pipeline will eventually make the deployment:
Python 3.6.8 (default, Mar 18 2019, 18:57:19)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-28)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from urllib.request import urlopen
>>> def am_i_ec2():
...     result = False
...     meta = 'http://169.254.169.254/latest/meta-data/public-ipv4'
...     try:
...         result = urlopen(meta).status == 200
...     except Exception:
...         return result
...     return True
...
>>> am_i_ec2()
True
However, on the CodeBuild stage, I manually added a curl line to the buildspec and I'm getting this:
[Container] 2019/07/28 21:36:11 Running command curl http://169.254.169.254/latest/meta-data/public-ipv4
curl: (7) Couldn't connect to server
[Container] 2019/07/28 21:36:11 Command did not exit successfully curl http://169.254.169.254/latest/meta-data/public-ipv4 exit status 7
I'm assuming that all the networking pieces are fine: the build was working before this change, and it runs some pip installations prior to this step, so it does have internet access.
What am I missing here?
As already pointed out by Subin, another SO post [1] and others [2][3], you have to change the metadata endpoint for this to work. Your build is usually not running on a plain EC2 instance but in a dockerized environment or some other type of container implemented by AWS.
Use: http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
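A quick way to verify this from inside the build (a sketch; AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is an environment variable AWS sets inside the CodeBuild container):

# Query the container credentials endpoint instead of the EC2 metadata endpoint
curl -s "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"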
References
[1] https://stackoverflow.com/a/47028691/10473469
[2] https://blog.jwr.io/aws/codebuild/container/iam/role/2019/05/30/iam-role-inside-container-inside-aws-codebuild.html
[3] https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer/ ("Create a Build Specification")

apimcli list apis: 400 Bad Request

I have wso2am-2.5.0 and apimcli-1.1.0
all downloaded from here: https://wso2.com/api-management/install/
I am trying to configure and use apimcli with wso2am running locally,
so I've added the environment named local:
apimcli add-env -n local \
    --apim https://localhost:9443 \
    --registration https://localhost:9443/identity/connect/register \
    --import-export https://localhost:9443/api-import-export-2.2.0-v2 \
    --api_list https://localhost:9443/api/am/publisher/v0.12/apis \
    --token https://localhost:9443/oauth2/token
(Note: the documentation defines this parameter as --list, but apimcli add-env --help displays --api_list instead.)
Finally, I try to get the list of APIs:
apimcli list apis -e local -u admin -p admin --insecure --verbose
but it gives me the following output:
Executed ImportExportCLI (apimcli) on Wed, 26 Sep 2018 15:59:48 EEST
[INFO]: Insecure: true
[INFO]: apis called
[INFO]: Environment: 'local'
[INFO]: Reg Endpoint read: https://localhost:9443/identity/connect/register
Getting ClientID, ClientSecret: Status - 403 Forbidden
Error: <nil>
Body:
<html>
<head>
<title>Error 403</title>
</head>
<body>
<h1>Error 403 - Forbidden</h1>
</body>
</html>
Error: Request didn't respond 200 OK: 403 Forbidden
[INFO]: EnvKeysAll: &{map[]}
[ERROR]: connecting to https://localhost:9443/oauth2/token
apimcli: Unable to connect. Reason: Status: 400 Bad Request
[ERROR]: Unable to connect.: Status: 400 Bad Request
Exit status 1
It seems the publisher API version is wrong.
--api_list https://localhost:9443/api/am/publisher/v0.12/apis
Make it v0.13 and try again.
Edit: It seems the DCR endpoint is also wrong. Change it like this.
--registration https://localhost:9443/client-registration/v0.13/register
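Putting both corrections together, the environment definition would look like this (a sketch; the remaining endpoints stay as you had them):

apimcli add-env -n local \
    --apim https://localhost:9443 \
    --registration https://localhost:9443/client-registration/v0.13/register \
    --import-export https://localhost:9443/api-import-export-2.2.0-v2 \
    --api_list https://localhost:9443/api/am/publisher/v0.13/apis \
    --token https://localhost:9443/oauth2/token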
It seems the README file shipped with the CLI is not correct. :-/
Please use the following doc instead.
https://docs.wso2.com/display/AM250/Migrating+the+APIs+and+Applications+to+a+Different+Environment