I would like to use WebAssembly directly from my embedded V8, without the detour via JavaScript. I used the provided hello-world example and the WasmModuleObjectBuilderStreaming class from v8.h. However, I am stuck at how to extract the add function. Help would be appreciated.
#include <include/v8.h>
#include <include/libplatform/libplatform.h>
#include <stdlib.h>
#include <unistd.h>
#include <memory>
#include <vector>
using v8::HandleScope;
using v8::Isolate;
using v8::Local;
using v8::Promise;
using v8::WasmModuleObjectBuilderStreaming;
int main(int argc, char* argv[]) {
  v8::V8::InitializeICUDefaultLocation(argv[0]);
  v8::V8::InitializeExternalStartupData(argv[0]);
  std::unique_ptr<v8::Platform> platform = v8::platform::NewDefaultPlatform();
  v8::V8::InitializePlatform(platform.get());
  v8::V8::Initialize();
  Isolate::CreateParams create_params;
  create_params.array_buffer_allocator = v8::ArrayBuffer::Allocator::NewDefaultAllocator();
  Isolate* isolate = Isolate::New(create_params);
  Isolate::Scope isolate_scope(isolate);
  HandleScope scope(isolate);
  WasmModuleObjectBuilderStreaming stream(isolate);
  // Use the v8 API to generate a WebAssembly module.
  //
  // |bytes| contains the binary format for the following module:
  //
  //   (func (export "add") (param i32 i32) (result i32)
  //     get_local 0
  //     get_local 1
  //     i32.add)
  //
  // taken from: https://github.com/v8/v8/blob/master/samples/hello-world.cc#L66
  std::vector<uint8_t> wasmbin {
    0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, 0x01, 0x07, 0x01,
    0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, 0x03, 0x02, 0x01, 0x00, 0x07,
    0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, 0x0a, 0x09, 0x01,
    0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b
  };
  // write bytes and finish
  stream.OnBytesReceived(wasmbin.data(), wasmbin.size());
  stream.Finish();
  Local<Promise> promise = stream.GetPromise();
  // TODO: Get exports, extract `add` & call `add`
}
Build setup:
Follow the instructions in "Run the example" from the official Getting started with embedding V8 guide. Save the code to samples/wasm.cc and execute the following commands:
$ g++ -I. -O2 -Iinclude samples/wasm.cc -o wasm -lv8_monolith -Lout.gn/x64.release.sample/obj/ -pthread -std=c++17
$ ./wasm
Solution:
Thanks @liliscent, I adapted my example accordingly. Because we all like working code:
#include <include/v8.h>
#include <include/libplatform/libplatform.h>
#include <cstdio>
#include <memory>
#include <vector>
using v8::HandleScope;
using v8::Isolate;
using v8::Local;
using v8::Promise;
using v8::WasmModuleObjectBuilderStreaming;
using v8::WasmCompiledModule;
using v8::Context;
using v8::Value;
using v8::String;
using v8::Object;
using v8::Function;
using v8::Int32;
using args_type = Local<Value>[];
int main(int argc, char* argv[]) {
  v8::V8::InitializeICUDefaultLocation(argv[0]);
  v8::V8::InitializeExternalStartupData(argv[0]);
  std::unique_ptr<v8::Platform> platform = v8::platform::NewDefaultPlatform();
  v8::V8::InitializePlatform(platform.get());
  v8::V8::Initialize();
  Isolate::CreateParams create_params;
  create_params.array_buffer_allocator = v8::ArrayBuffer::Allocator::NewDefaultAllocator();
  Isolate* isolate = Isolate::New(create_params);
  Isolate::Scope isolate_scope(isolate);
  HandleScope scope(isolate);
  Local<Context> context = Context::New(isolate);
  Context::Scope context_scope(context);
  WasmModuleObjectBuilderStreaming stream(isolate);
  // Use the v8 API to generate a WebAssembly module.
  //
  // |bytes| contains the binary format for the following module:
  //
  //   (func (export "add") (param i32 i32) (result i32)
  //     get_local 0
  //     get_local 1
  //     i32.add)
  //
  // taken from: https://github.com/v8/v8/blob/master/samples/hello-world.cc#L66
  std::vector<uint8_t> wasmbin {
    0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, 0x01, 0x07, 0x01,
    0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, 0x03, 0x02, 0x01, 0x00, 0x07,
    0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, 0x0a, 0x09, 0x01,
    0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b
  };
  // same as calling:
  //   let module = new WebAssembly.Module(bytes);
  Local<WasmCompiledModule> module = WasmCompiledModule::DeserializeOrCompile(isolate,
      WasmCompiledModule::BufferReference(0, 0),
      WasmCompiledModule::BufferReference(wasmbin.data(), wasmbin.size())
  ).ToLocalChecked();
  // same as calling:
  //   let module_instance_exports = new WebAssembly.Instance(module).exports;
  args_type instance_args{module};
  Local<Object> module_instance_exports = context->Global()
      ->Get(context, String::NewFromUtf8(isolate, "WebAssembly"))
      .ToLocalChecked().As<Object>()
      ->Get(context, String::NewFromUtf8(isolate, "Instance"))
      .ToLocalChecked().As<Object>()
      ->CallAsConstructor(context, 1, instance_args)
      .ToLocalChecked().As<Object>()
      ->Get(context, String::NewFromUtf8(isolate, "exports"))
      .ToLocalChecked().As<Object>();
  // same as calling:
  //   module_instance_exports.add(77, 88)
  args_type add_args{Int32::New(isolate, 77), Int32::New(isolate, 88)};
  Local<Int32> adder_res = module_instance_exports
      ->Get(context, String::NewFromUtf8(isolate, "add"))
      .ToLocalChecked().As<Function>()
      ->Call(context, context->Global(), 2, add_args)
      .ToLocalChecked().As<Int32>();
  printf("77 + 88 = %d\n", adder_res->Value());
  return 0;
}
You can construct a WebAssembly module directly from C++ via the v8::WasmCompiledModule class (it will be renamed to v8::WasmModuleObject in the next version):
Local<WasmCompiledModule> module = WasmCompiledModule::DeserializeOrCompile(isolate,
    WasmCompiledModule::BufferReference(0, 0),
    WasmCompiledModule::BufferReference(wasmbin.data(), wasmbin.size())
).ToLocalChecked();
But AFAIK, V8 doesn't expose its WebAssembly API directly, so you have to get it from the JS global context. The following code creates a module instance and gets the exports of the instance:
using args_type = Local<Value>[];
Local<Object> module_instance_exports = context->Global()
    ->Get(context, String::NewFromUtf8(isolate, "WebAssembly"))
    .ToLocalChecked().As<Object>()
    ->Get(context, String::NewFromUtf8(isolate, "Instance"))
    .ToLocalChecked().As<Object>()
    ->CallAsConstructor(context, 1, args_type{module})
    .ToLocalChecked().As<Object>()
    ->Get(context, String::NewFromUtf8(isolate, "exports"))
    .ToLocalChecked().As<Object>();
Then you can get the add function from exports object and call it:
Local<Int32> adder_res = module_instance_exports
    ->Get(context, String::NewFromUtf8(isolate, "add"))
    .ToLocalChecked().As<Function>()
    ->Call(context, context->Global(), 2, args_type{Int32::New(isolate, 77), Int32::New(isolate, 88)})
    .ToLocalChecked().As<Int32>();
std::cout << "77 + 88 = " << adder_res->Value() << "\n";
You might be interested in the Wasm C/C++ API proposal, which allows using a Wasm engine directly from C/C++. The design of this API is independent of any particular engine, but the proposal contains a more or less complete prototype implementation on top of V8.
Sample snippet (see e.g. hello.cc):
// ...
auto engine = wasm::Engine::make();
auto store = wasm::Store::make(engine.get());
auto module = wasm::Module::make(store.get(), binary);
auto instance = wasm::Instance::make(store.get(), module.get(), imports);
auto exports = instance->exports();
exports[0]->func()->call();
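To tie that back to the question's add module, here is a rough, untested sketch of what a full program might look like. The Engine/Store/Module/Instance calls mirror the snippet above; the byte-vector construction (vec<byte_t>::make_uninitialized), the nullptr for "no imports", and the call-with-arguments form (Val::i32, call(args, results)) are my assumptions about wasm.hh and may need adjusting against the actual header:
#include <algorithm>
#include <iostream>
#include <vector>
#include "wasm.hh"

int main() {
  // Same add-module bytes as in the question above.
  std::vector<uint8_t> wasmbin {
    0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00, 0x01, 0x07, 0x01,
    0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, 0x03, 0x02, 0x01, 0x00, 0x07,
    0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, 0x0a, 0x09, 0x01,
    0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b
  };

  auto engine = wasm::Engine::make();
  auto store = wasm::Store::make(engine.get());

  // Copy the module bytes into the API's own vector type (assumed helper).
  auto binary = wasm::vec<byte_t>::make_uninitialized(wasmbin.size());
  std::copy(wasmbin.begin(), wasmbin.end(), binary.get());

  auto module = wasm::Module::make(store.get(), binary);
  // The add module imports nothing; passing nullptr for imports is an assumption.
  auto instance = wasm::Instance::make(store.get(), module.get(), nullptr);
  auto exports = instance->exports();

  // exports[0] is the single exported function "add".
  wasm::Val args[] = { wasm::Val::i32(77), wasm::Val::i32(88) };
  wasm::Val results[1];
  exports[0]->func()->call(args, results);
  std::cout << "77 + 88 = " << results[0].i32() << "\n";
}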
I have recently been setting up various testing environments, and in this case I need to read and decode a gzip response from an HTTP server. I know what I have so far works, as I have tested it with Wireshark and hardcoded data as outlined below. My question is: what is wrong with how I am handling the gzipped data from an HTTP server?
Here is what I'm using:
From this thread http://www.qtcentre.org/threads/30031-qUncompress-data-from-gzip I am using the gzipDecompress function with the data provided and seeing that it works.
QByteArray gzipDecompress( QByteArray compressData )
{
    //Hardcode sample data
    const char dat[40] = {
        0x1F, 0x8B, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0xAA, 0x2E, 0x2E, 0x49, 0x2C, 0x29,
        0x2D, 0xB6, 0x4A, 0x4B, 0xCC, 0x29, 0x4E, 0xAD, 0x05, 0x00, 0x00, 0x00, 0xFF, 0xFF, 0x03, 0x00,
        0x2A, 0x63, 0x18, 0xC5, 0x0E, 0x00, 0x00, 0x00};
    compressData = QByteArray::fromRawData( dat, 40);

    //decompress GZIP data
    //strip header and trailer
    compressData.remove(0, 10);
    compressData.chop(12);

    const int buffersize = 16384;
    quint8 buffer[buffersize];

    z_stream cmpr_stream;
    cmpr_stream.next_in = (unsigned char *)compressData.data();
    cmpr_stream.avail_in = compressData.size();
    cmpr_stream.total_in = 0;
    cmpr_stream.next_out = buffer;
    cmpr_stream.avail_out = buffersize;
    cmpr_stream.total_out = 0;
    cmpr_stream.zalloc = Z_NULL;
    cmpr_stream.zfree = Z_NULL;
    cmpr_stream.opaque = Z_NULL;

    if( inflateInit2(&cmpr_stream, -8 ) != Z_OK) {
        qDebug() << "cmpr_stream error!";
    }

    QByteArray uncompressed;
    do {
        int status = inflate( &cmpr_stream, Z_SYNC_FLUSH );
        if(status == Z_OK || status == Z_STREAM_END) {
            uncompressed.append(QByteArray::fromRawData((char *)buffer, buffersize - cmpr_stream.avail_out));
            cmpr_stream.next_out = buffer;
            cmpr_stream.avail_out = buffersize;
        } else {
            inflateEnd(&cmpr_stream);
        }
        if(status == Z_STREAM_END) {
            inflateEnd(&cmpr_stream);
            break;
        }
    } while(cmpr_stream.avail_out == 0);

    return uncompressed;
}
When the data is hardcoded as in that example, the string is decompressed. However, when I read the response from an HTTP server and store it in a QByteArray, it cannot be uncompressed. I am reading the response as follows, and I can see it works when comparing the results in Wireshark:
//Read that length of encoded data
char EncodedData[ LengthToRead ];
memset( EncodedData, 0, LengthToRead );
recv( socketDesc, EncodedData, LengthToRead, 0 );
EndOfData = true;
//EncodedDataBytes = QByteArray((char*)EncodedData);
EncodedDataBytes = QByteArray::fromRawData(EncodedData, LengthToRead );
I assume I am missing some header or byte-order issue when reading the response, but at the moment I have no idea what. Any help very welcome!
EDIT: So I have been looking at this a little more over the weekend, and at the moment I'm trying to test the encode and decode of the given hex string, which is "{status:false}" in plain text. I have tried to use online gzip encoders such as http://www.txtwizard.net/compression but it returns some ASCII text that does not match the hex string in the above code. When I use PHP's gzcompress( "{status:false}", 1) function it gives me non-ASCII values that I cannot copy/paste to test, since they are not ASCII. So I am wondering if there is any standard reference for gzip encode/decode? It is definitely not in some special encoding, since both Firefox and Wireshark can decode the packets, but my software cannot.
So the issue was with my gzip function; the correct function I found on this link: uncompress error when using zlib
As mentioned above by Cornstalks, the inflateInit2 function needs to take MAX_WBITS+16 as its windowBits argument so that zlib handles the gzip header and trailer itself; I think that was the issue. If anybody knows any libraries or plugins to handle this, please post them here! I am surprised that this had to be coded manually when it is so commonly used by HTTP clients/servers.
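For anyone landing here later, a minimal sketch of that fix (my own adaptation, not the exact code from the linked answer). With windowBits = MAX_WBITS + 16, zlib parses the gzip header and CRC trailer itself, so there is no need to strip them manually:
#include <QByteArray>
#include <zlib.h>

// Decompress a complete gzip stream held in `compressed`.
// Returns an empty QByteArray on error.
QByteArray gunzip(const QByteArray &compressed)
{
    z_stream strm;
    strm.zalloc = Z_NULL;
    strm.zfree = Z_NULL;
    strm.opaque = Z_NULL;
    strm.next_in = reinterpret_cast<Bytef *>(const_cast<char *>(compressed.data()));
    strm.avail_in = static_cast<uInt>(compressed.size());

    // MAX_WBITS + 16 tells zlib to expect a gzip wrapper (header + trailer),
    // so the raw data does not need remove(0, 10) / chop(12) beforehand.
    if (inflateInit2(&strm, MAX_WBITS + 16) != Z_OK)
        return QByteArray();

    QByteArray out;
    char buffer[16384];
    int status = Z_OK;
    do {
        strm.next_out = reinterpret_cast<Bytef *>(buffer);
        strm.avail_out = sizeof(buffer);
        status = inflate(&strm, Z_NO_FLUSH);
        if (status != Z_OK && status != Z_STREAM_END) {
            inflateEnd(&strm);
            return QByteArray();
        }
        out.append(buffer, sizeof(buffer) - strm.avail_out);
    } while (status != Z_STREAM_END);

    inflateEnd(&strm);
    return out;
}
With this in place, the HTTP body can be passed in exactly as received from the server.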
I stumbled upon a neat trick that I've started using to write binary files into (flash) memory on arduino/esp8266 using a library someone posted to one of the esp8266 forums. I've been trying a number of ways to expand upon it. Most recently I've been minifying and compressing my web content files and compiling them in with sketches on my ESP.
The script he posted first uses the output of the unix command xxd -i to write the binary file into an array of hex. The second part uses a struct to combine the file details with a pointer to the array that you can reference from the code whenever the server gets a uri request that matches an entry in the array.
What I would like to do is create a second array of these things with 'default' tools already pre-compressed so I don't have to go through it every time and/or modify my script that builds the header file any time I create a new server sketch. Basically compress and xxd stuff like jquery.js, bootstrap.css and bootstrap.js (or more often their smaller counterparts like backbone or barekit)
Currently once a file is dumped to hex, for example:
FLASH_ARRAY(uint8_t, __js__simple_js,
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x4b, 0x2b,
0xcd, 0x4b, 0x2e, 0xc9, 0xcc, 0xcf, 0x53, 0xc8, 0xad, 0xf4, 0xcf, 0xf3,
0xc9, 0x4f, 0x4c, 0xd1, 0xd0, 0xac, 0x4e, 0xcc, 0x49, 0x2d, 0x2a, 0xd1,
0x50, 0x0a, 0xc9, 0xc8, 0x2c, 0x56, 0x00, 0xa2, 0xc4, 0x3c, 0x85, 0xfc,
0xbc, 0x1c, 0xa0, 0x94, 0x42, 0x6e, 0x6a, 0x71, 0x71, 0x62, 0x7a, 0xaa,
0x92, 0xa6, 0x75, 0x51, 0x6a, 0x49, 0x69, 0x51, 0x9e, 0x42, 0x49, 0x51,
0x69, 0x6a, 0x2d, 0x00, 0x16, 0xa6, 0x25, 0xe5, 0x43, 0x00, 0x00, 0x00);
The existing code added them all at once along with the struct definition:
struct t_websitefiles {
  const char* path;
  const char* mime;
  const unsigned int len;
  const char* enc;
  const _FLASH_ARRAY<uint8_t>* content;
} files[] = {
  {
    .path = "/js/simple.js",
    .mime = "application/javascript",
    .len = 84,
    .enc = "gzip",
    .content = &__js__simple_js,
  },
  {
    /* details for file2 ... */
  },
  {
    /* details for file3 ... */
  }
};
Building an array of the structs representing the various files.
My questions amount to noob questions regarding the language syntax. Can I assume that I can use an identical populated struct in the place of what is inside the curly brackets? For example, if I had a second header file with my regularly used libraries, and jquery was compressed in an array called 'default_files' at position 3, could I use something like &default_files[3] in the place of { /* definitions stuffs */ }. Such as:
struct t_websitefiles {
  const char* path;
  const char* mime;
  const unsigned int len;
  const char* enc;
  const _FLASH_ARRAY<uint8_t>* content;
} files[] = {
  {
    .path = "/js/simple.js",
    .mime = "application/javascript",
    .len = 84,
    .enc = "gzip",
    .content = &__js__simple_js,
  },
  &default_files[1],
  &default_files[3],
  {
    .path = "/text/readme.txt",
    .mime = "text/text",
    .len = 112,
    .enc = "",
    .content = &__text__readme_txt,
  }
};
(I'm guessing based on what I've learned thus far it needs the & in front of it?)
I also assume that rather than re-writing the struct definition twice, I could do it as a typedef and then just do:
t_websitefiles files[] = { {/*definitions*/},{ /*stuffs*/ } };
Is that correct? Any help is appreciated. It's hard sometimes to find details on the syntax for specific use cases in documentation covering the basics. (I would just try it, but I'm not conveniently in front of a compiler at the moment, nor do I have direct access to my codebase; I want to work on it later when I might not have direct access to the net.)
From what I understand, you want to create an array of structs that contains both compound literals and items from another array, all defined in header information.
I don't think this is possible - or at least not in the exact way you suggest. I'll try and provide an alternative though.
Can I assume that I can use an identical populated struct in the place of what is inside the curly brackets?
No - you're mixing your types. 'files' is defined as an array of 'struct t_websitefiles'.
The code
struct t_websitefiles files[] = {
...
&default_files[1],
...
}
won't compile, as you are mixing your types. files is defined as an array of struct t_websitefiles, but &default_files[1] is a pointer. C makes a distinction between pointers and non-pointers. They are separate types.
The obvious option that I can see to do what you want is to use pointers. This will allow you to define everything in header information.
struct t_websitefiles default_files[] = {
    ....
};

struct t_websitefiles files[] = {
    ....
};

// An array of pointers
struct t_websitefiles *files_combined[] = {
    &files[0],
    &files[1],
    &default_files[0],
    // Or whatever values you want here
    ...
};

// Example main, just iterates through the combined list
// of files
int main(int argc, char* argv[]) {
    int i;
    int files_combined_len = sizeof(files_combined) / sizeof(files_combined[0]);
    for (i = 0; i < files_combined_len; i++) {
        printf("File %s\r\n", files_combined[i]->path);
    }
    return 0;
}
Hope this helps.
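As for the typedef part of the question: yes, that works; declaring the struct type once and then defining as many arrays of it as you like is ordinary C/C++. A minimal sketch reusing the entries shown above (still relying on the _FLASH_ARRAY arrays from the generated header):
// Declare the type once...
typedef struct t_websitefiles {
    const char* path;
    const char* mime;
    const unsigned int len;
    const char* enc;
    const _FLASH_ARRAY<uint8_t>* content;
} t_websitefiles;

// ...then define as many arrays of it as needed.
t_websitefiles default_files[] = {
    { .path = "/js/simple.js", .mime = "application/javascript",
      .len = 84, .enc = "gzip", .content = &__js__simple_js },
};

t_websitefiles files[] = {
    { .path = "/text/readme.txt", .mime = "text/text",
      .len = 112, .enc = "", .content = &__text__readme_txt },
};

// The pointer-array trick from above works the same with the typedef'd name.
t_websitefiles *files_combined[] = { &files[0], &default_files[0] };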
I am working on porting an application running on Arduino Mega to LPC824. The following piece of code is working differently for both the platforms.
/**
 * Calculation of CMAC
 */
void cmac(const uint8_t* data, uint8_t dataLength) {
    uint8_t trailer[1] = {0x80};
    uint8_t bytes[_lenRnd];
    uint8_t temp[_lenRnd];

    memcpy(temp, data, dataLength);
    concatArray(temp, dataLength, trailer, 1);
    dataLength++;
    addPadding(temp, dataLength);
    memcpy(bytes, _sk2, _lenRnd);
    xorBytes(bytes, temp, _lenRnd);

    aes128_ctx_t ctx;
    aes128_init(_sessionkey, &ctx);
    uint8_t* chain = aes128_enc_sendMode(bytes, _lenRnd, &ctx, _ivect);
    Board_UARTPutSTR("chain\n\r");
    printBytes(chain, 16, true);

    memcpy(_ivect, chain, _lenRnd);
    //memcpy(_ivect, aes128_enc_sendMode(bytes,_lenRnd,&ctx,_ivect), _lenRnd);
    memcpy(_cmac, _ivect, _lenRnd);
    Board_UARTPutSTR("Initialization vector\n\r");
    printBytes(_ivect, 16, true);
}
I am expecting a value like {0x5d, 0xa8, 0x0f, 0x1f, 0x1c, 0x03, 0x7f, 0x16, 0x7e, 0xe5, 0xfd, 0xf3, 0x45, 0xb7, 0x73, 0xa2} for the chain variable, but the following function is working differently. The print inside the function has the correct value, which is what I want ({0x5d, 0xa8, 0x0f, 0x1f, 0x1c, 0x03, 0x7f, 0x16, 0x7e, 0xe5, 0xfd, 0xf3, 0x45, 0xb7, 0x73, 0xa2}).
But when the function returns, chain has a different value from what I am expecting; I get the following value for chain: {0x00, 0x20, 0x00, 0x10, 0x03, 0x01, 0x00, 0x00, 0xd5, 0x00, 0x00, 0x00, 0xd7, 0x00, 0x00, 0x00}
Inside the function, the result is correct. But it returns a wrong value to the function which called it. Why is this happening?
uint8_t* aes128_enc_sendMode(unsigned char* data, unsigned short len, aes128_ctx_t* key,
                             const unsigned char* iv) {
    unsigned char tmp[16];
    uint8_t chain[16];
    unsigned char c;
    unsigned char i;
    memcpy(chain, iv, 16);
    while (len >= 16) {
        memcpy(tmp, data, 16);
        //xorBytes(tmp,chain,16);
        for (i = 0; i < 16; i++) {
            tmp[i] = tmp[i] ^ chain[i];
        }
        aes128_enc(tmp, key);
        for (i = 0; i < 16; i++) {
            //c = data[i];
            data[i] = tmp[i];
            chain[i] = tmp[i];
        }
        len -= 16;
        data += 16;
    }
    Board_UARTPutSTR("Chain!!!:");
    printBytes(chain, 16, true);
    return chain;
}
A good start with an issue like this is to delete as much as you can while still reproducing the error; with a minimal code example the answer is typically clear. I have done that for you here.
uint8_t* aes128_enc_sendMode(void) {
uint8_t chain[16];
return chain;
}
The chain variable is local to the function; it ceases to exist once the function exits. Accessing a pointer to that variable causes undefined behaviour, so don't do it.
In practice the returned pointer still points at the block of memory where the array used to live, but that block is no longer reserved and can be overwritten at any time.
I suspect it works on the AVR because it is a simple 8-bit chip and that piece of memory was sitting unmolested by the time you used it. The ARM will have applied heavier optimisations, possibly keeping the whole array in registers, so the data doesn't survive the transition.
tl;dr: You need to malloc() any arrays that you want to live past the function's exit. Be careful: malloc and embedded systems go together like diesel and styrofoam; it gets messy real quick.
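To make that concrete, here is a minimal sketch of the two usual fixes. The function names (make_chain_heap, make_chain_into) and the stripped-down bodies are made up for illustration; they are not the original aes128_enc_sendMode:
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

// Option 1: heap-allocate the result (caller must free() it).
uint8_t* make_chain_heap(const uint8_t* iv) {
    uint8_t* chain = (uint8_t*)malloc(16);
    if (chain != NULL) {
        memcpy(chain, iv, 16);  // ...the CBC loop would update chain here...
    }
    return chain;  // still valid after return; free() when done
}

// Option 2 (often better on embedded targets): let the caller own the buffer.
void make_chain_into(const uint8_t* iv, uint8_t chain_out[16]) {
    memcpy(chain_out, iv, 16);  // ...the CBC loop would update chain_out here...
}
The second form avoids the heap entirely: the caller keeps the 16-byte buffer on its own stack (or in a static) and passes it in, which sidesteps fragmentation concerns on small targets.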
I am attempting to initialise a GUID variable, but I am not sure this is how you are meant to do it. What I am especially confused about is how to store the last 12 hexadecimal digits in the char array (do I include the "-" character?).
How do I define/initialise a GUID variable?
bool TVManager::isMonitorDevice(GUID id)
{
    // Class GUID for a Monitor is: {4d36e96e-e325-11ce-bfc1-08002be10318}
    GUID monitorClassGuid;
    char* a = "bfc1-08002be10318"; // do I store the "-" character?
    monitorClassGuid.Data1 = 0x4d36e96e;
    monitorClassGuid.Data2 = 0xe325;
    monitorClassGuid.Data3 = 0x11ce;
    monitorClassGuid.Data4 = a;
    return (bool(id == monitorClassGuid));
}
The Data4 member is not a pointer, it's an 8-byte array, so you can't assign a string (or a brace list) to it after the fact, and you don't store the "-" characters at all, just the raw bytes. To make your example work you'd copy the bytes in, for example:
    const unsigned char data4[8] = { 0xbf, 0xc1, 0x08, 0x00, 0x2b, 0xe1, 0x03, 0x18 };
    memcpy(monitorClassGuid.Data4, data4, sizeof(data4));
You might find it easier to do all of the initialization along with the definition of your monitorClassGuid variable:
GUID monitorClassGuid = { 0x4d36e96e, 0xe325, 0x11ce, { 0xbf, 0xc1, 0x08, 0x00, 0x2b, 0xe1, 0x03, 0x18 } };
This question was asked a long time ago, but maybe it helps somebody else.
You can use this code to initialize a GUID:
#include <combaseapi.h>
GUID guid;
CLSIDFromString(L"{4d36e96e-e325-11ce-bfc1-08002be10318}", &guid);
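A small usage sketch building on that (my own, with a made-up helper name monitorGuidFromString): CLSIDFromString expects the surrounding braces and returns an HRESULT, so it is worth checking:
#include <windows.h>
#include <combaseapi.h>  // CLSIDFromString; link against Ole32.lib

// Hypothetical helper: parse the monitor class GUID, return false on failure.
bool monitorGuidFromString(GUID* out)
{
    // The string must include the surrounding braces for CLSIDFromString.
    HRESULT hr = CLSIDFromString(L"{4d36e96e-e325-11ce-bfc1-08002be10318}", out);
    return SUCCEEDED(hr);
}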