Why does ByteData.view(buffer) throw NoSuchMethodError: Class 'String' has no instance method 'asByteData'?

I am trying to read UDP data and convert it to a float.
I know I need to:
1. Receive the Datagram from the RawDatagramSocket
2. Get the ByteBuffer from the Datagram
3. Convert the ByteBuffer to ByteData
4. Use the getFloat32 method on the ByteData
Here is the broken code:
import 'dart:io';
import 'dart:typed_data';
import 'dart:mirrors';

getTypeName(dynamic obj) {
  return reflect(obj).type.reflectedType.toString();
}

void main(List<String> args) {
  RawDatagramSocket.bind("192.168.1.1", 4444).then((RawDatagramSocket socket) {
    print('Datagram socket ready to receive');
    print('${socket.address.address}:${socket.port}');
    socket.listen((RawSocketEvent e) {
      Datagram d = socket.receive();
      if (d == null) return;
      ByteBuffer buffer = getTypeName(d.data.buffer);
      ByteData bdata = new ByteData.view(buffer);
    });
  });
}
Here is the debugging code:
import 'dart:io';
import 'dart:typed_data';
import 'dart:mirrors';

getTypeName(dynamic obj) {
  return reflect(obj).type.reflectedType.toString();
}

void main(List<String> args) {
  RawDatagramSocket.bind("192.168.1.1", 4444).then((RawDatagramSocket socket) {
    print('Datagram socket ready to receive');
    print('${socket.address.address}:${socket.port}');
    socket.listen((RawSocketEvent e) {
      Datagram d = socket.receive();
      if (d == null) return;
      print(d.data);
    });
  });
}
If I start netcat with the command nc -u 192.168.1.1 4444 and send the data '1111', the debug code prints the following:
[49, 49, 49, 49, 10]
I understand those values are the ASCII characters of '1111' followed by a line feed.
The problem is, when I run the 'broken code', I end up getting the following message:
Unhandled exception:
NoSuchMethodError: Class 'String' has no instance method 'asByteData'.
Receiver: "_ByteBuffer"
Tried calling: asByteData(0, null)
#0 Object._noSuchMethod (dart:core-patch/object_patch.dart:43)
#1 Object.noSuchMethod (dart:core-patch/object_patch.dart:47)
#2 new ByteData.view (dart:typed_data:449)
#3 main.<anonymous closure>.<anonymous closure> (file:///Users/iProgram/test.dart:17:25)
#4 _RootZone.runUnaryGuarded (dart:async/zone.dart:1307)
#5 _BufferingStreamSubscription._sendData (dart:async/stream_impl.dart:330)
#6 _BufferingStreamSubscription._add (dart:async/stream_impl.dart:257)
#7 _StreamController&&_SyncStreamControllerDispatch._sendData (dart:async/stream_controller.dart:796)
#8 _StreamController._add (dart:async/stream_controller.dart:667)
#9 _StreamController.add (dart:async/stream_controller.dart:613)
#10 new _RawDatagramSocket.<anonymous closure> (dart:io-patch/socket_patch.dart:1699)
#11 _NativeSocket.issueReadEvent.issue (dart:io-patch/socket_patch.dart:760)
#12 _microtaskLoop (dart:async/schedule_microtask.dart:41)
#13 _startMicrotaskLoop (dart:async/schedule_microtask.dart:50)
#14 _runPendingImmediateCallback (dart:isolate-patch/isolate_patch.dart:99)
#15 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:152)
The question is: why am I getting the error Class 'String' has no instance method 'asByteData' when I am not using a String or calling asByteData anywhere?
I have been using the following documentation: Dart - ByteData
Thanks

why am I getting the error Class 'String' has no instance method 'asByteData' when I am not using a string or the asByteData command?
The call stack shows what's happening here:
Tried calling: asByteData(0, null)
#0 Object._noSuchMethod (dart:core-patch/object_patch.dart:43)
#1 Object.noSuchMethod (dart:core-patch/object_patch.dart:47)
#2 new ByteData.view (dart:typed_data:449)
You're calling new ByteData.view, which internally calls asByteData on the object you pass it.
The ByteData.view constructor takes a ByteBuffer, but you're passing it a String:
ByteBuffer buffer = getTypeName(d.data.buffer);
Although the variable is declared as ByteBuffer, your getTypeName() function actually returns a String — the static type annotation doesn't change what the expression evaluates to at runtime:
return reflect(obj).type.reflectedType.toString();
Pass the buffer itself, not the name of its type, and the error goes away.
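A minimal sketch of the corrected listener, dropping the mirrors-based getTypeName() and viewing the datagram's buffer directly (note that getFloat32 only makes sense if the sender actually transmits a 4-byte big-endian float; ASCII text such as '1111' will still decode, just not to a meaningful value):

```dart
socket.listen((RawSocketEvent e) {
  Datagram d = socket.receive();
  if (d == null) return;
  // View the datagram's underlying buffer directly; no mirrors needed.
  ByteData bdata = new ByteData.view(d.data.buffer);
  if (bdata.lengthInBytes >= 4) {
    print(bdata.getFloat32(0)); // big-endian unless an endianness is specified
  }
});
```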

Related

When using ZMQ to receive messages, ZMQ intermittently aborts

When using ZMQ to receive messages, ZMQ intermittently aborts.
My development environment:
zmq version: 4.1.4
server: CentOS 7.0, 8 CPUs @ 2.0 GHz
The code is as follows:
{
    zmq::context_t context(1);
    zmq::socket_t socket(context, ZMQ_PULL);
    socket.bind(m_config.m_strIPAndPort);
    zmq::pollitem_t pollItems[] = {{socket, 0, ZMQ_POLLIN, 0}};
    int MAX_POLLITEM_NUM = 1;
    while (!m_bStopThread)
    {
        try
        {
            zmq::poll(pollItems, MAX_POLLITEM_NUM, -1);
            if (pollItems[0].revents & ZMQ_POLLIN)
            {
                zmq::message_t zmqMsg;
                bool bRes = socket.recv(&zmqMsg);
                if (bRes)
                {
                    pushMsg((const char *)zmqMsg.data(), zmqMsg.size());
                }
            }
        }
        catch (exception &e)
        {
            const char *pszError = e.what();
            string strErrMsg = "Error : receive cmd from client exception : " + string(pszError);
            qInfo(strErrMsg.c_str());
        }
    }
}
Stack trace from GDB:
#0 0x00007fe3366c7277 in raise () from /lib64/libc.so.6
#1 0x00007fe3366c8968 in abort () from /lib64/libc.so.6
#2 0x00007fe337ffabb9 in zmq::zmq_abort (
errmsg_=errmsg_#entry=0x7fe33803b71f "check ()") at src/err.cpp:87
#3 0x00007fe3380030eb in zmq::msg_t::size (this=0x7fe320002d00)
at src/msg.cpp:361
#4 0x00007fe3380308f2 in zmq::v2_encoder_t::message_ready (
this=0x7fe2e8008ce0) at src/v2_encoder.cpp:54
#5 0x00007fe338021cc1 in zmq::stream_engine_t::out_event
(this=0x7fe320002ce0)
at src/stream_engine.cpp:386
#6 0x00007fe338015a76 in zmq::session_base_t::read_activated (
this=0x7fe3200031b0, pipe_=0x7fe2e8008ac0) at src/session_base.cpp:288
#7 0x00007fe337ffbb14 in zmq::io_thread_t::in_event (this=0x1d17100)
at src/io_thread.cpp:85
#8 0x00007fe337ffa36e in zmq::epoll_t::loop (this=0x1d176b0)
at src/epoll.cpp:188
#9 0x00007fe33802b96d in thread_routine (arg_=0x1d17730) at
src/thread.cpp:109
#10 0x00007fe337284e25 in start_thread () from /lib64/libpthread.so.0
#11 0x00007fe33678fbad in clone () from /lib64/libc.so.6
When running it on another server with a higher-frequency CPU (e.g. 3.2 GHz), it works properly.
What should I do?
Thanks for any answers.

Dart unittest should fail on exception

I have this test:
  Future f = neo4d.nodes.delete(1);
  f.then((_) {
  }).catchError((e) {
    expect(e.statusCode, equals(409));
  });
  return f;
});
that currently blows up, since the e.statusCode is 404 instead of 409. I want the test to just fail but instead the whole test suite is stopped because of an uncaught exception.
How do I catch the exception (and fail the test) and stop it from blowing up all other tests?
This is the output I get when I run the code above:
[2014-03-06 14:44:32.020] DEBUG http: R10: Received data in 9ms with status 404: [{
"message" : "Cannot find node with id [1] in database.",
"exception" : "NodeNotFoundException",
"fullname" : "org.neo4j.server.rest.web.NodeNotFoundException",
"stacktrace" : [ "org.neo4j.server.rest.web.DatabaseActions.node(DatabaseActions.java:183)", "org.neo4j.server.rest.web.DatabaseActions.deleteNode(DatabaseActions.java:233)", "org.neo4j.server.rest.web.RestfulGraphDatabase.deleteNode(RestfulGraphDatabase.java:279)", "java.lang.reflect.Method.invoke(Method.java:601)", "org.neo4j.server.rest.transactional.TransactionalRequestDispatcher.dispatch(TransactionalRequestDispatcher.java:139)", "org.neo4j.server.rest.security.SecurityFilter.doFilter(SecurityFilter.java:112)", "java.lang.Thread.run(Thread.java:722)" ]
}]
Uncaught Error: Expected: 409
Actual: 404
Stack Trace:
#0 SimpleConfiguration.onExpectFailure (package:unittest/src/simple_configuration.dart:141:7)
#1 _ExpectFailureHandler.fail (package:unittest/src/simple_configuration.dart:15:28)
#2 DefaultFailureHandler.failMatch (package:unittest/src/expect.dart:117:9)
#3 expect (package:unittest/src/expect.dart:75:29)
#4 nodes... (file:///Users/oskbor/Projects/neo4dart/test/alltests.dart:202:15)
#5 _invokeErrorHandler (dart:async/async_error.dart:12)
#6 _Future._propagateToListeners. (dart:async/future_impl.dart:469)
#7 _rootRun (dart:async/zone.dart:683)
#8 _RootZone.run (dart:async/zone.dart:823)
#9 _Future._propagateToListeners (dart:async/future_impl.dart:445)
#10 _Future._propagateMultipleListeners (dart:async/future_impl.dart:384)
#11 _Future._propagateToListeners (dart:async/future_impl.dart:411)
#12 _Future._completeError (dart:async/future_impl.dart:315)
#13 _Future._asyncCompleteError. (dart:async/future_impl.dart:367)
#14 _asyncRunCallback (dart:async/schedule_microtask.dart:18)
#15 _createTimer. (dart:async-patch/timer_patch.dart:11)
#16 _Timer._createTimerHandler._handleTimeout (timer_impl.dart:151)
#17 _Timer._createTimerHandler. (timer_impl.dart:166)
#18 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:93)
Unhandled exception:
Expected: 409
Actual: 404
#0 _rootHandleUncaughtError.. (dart:async/zone.dart:677)
#1 _asyncRunCallback (dart:async/schedule_microtask.dart:18)
#2 _asyncRunCallback (dart:async/schedule_microtask.dart:21)
#3 _createTimer. (dart:async-patch/timer_patch.dart:11)
#4 _Timer._createTimerHandler._handleTimeout (timer_impl.dart:151)
#5 _Timer._createTimerHandler._handleTimeout (timer_impl.dart:159)
#6 _Timer._createTimerHandler._handleTimeout (timer_impl.dart:159)
#7 _Timer._createTimerHandler. (timer_impl.dart:166)
#8 _RawReceivePortImpl._handleMessage (dart:isolate-patch/isolate_patch.dart:93)
regards Oskar
An asynchronous function, i.e. a function returning a Future, can "throw" in two different ways:
It can throw synchronously and not even return a Future in the first place:
Future<int> doSomethingAsync() {
  throw new Exception("No! I don't want to even get started!");
}
Or, it can throw asynchronously - i.e. return a Future which then asynchronously throws an error instead of completing:
Future<int> doSomethingAsync() {
  return new Future.error(new Exception("Here's your future, but it'll fail."));
}
Your async call
neo4d.nodes.delete(1)
must be of the former type: it throws right away without even returning a Future. That's why the exception doesn't get caught by .catchError and blows up your entire test suite instead.
You want to either update neo4d.nodes.delete so that it throws asynchronously, or, if that can't be done, wrap your test in a good old synchronous try-catch:
try {
  neo4d.nodes.delete(1).then((_) { expect(0, 1); });
}
catch (e) {
  expect(e.statusCode, equals(409));
}

How do I free an SSL Connection from OpenSSL?

Does the following code properly free all the allocated memory?
void foo() {
    // set up connection
    SSL *ssl = NULL;
    SSL_METHOD *method = (SSL_METHOD *)SSLv23_client_method();
    SSL_CTX *ctx = SSL_CTX_new(method);
    BIO *bio = BIO_new_ssl_connect(ctx);
    BIO_get_ssl(bio, &ssl);
    BIO_set_conn_hostname(bio, "facebook.com:443");
    doConnect(ssl, ctx, bio);
    ...
    doFree(ssl, ctx, bio);
}
void doConnect(SSL *ssl, SSL_CTX *ctx, BIO *bio) {
    BIO_reset(bio); // this is here in case we are trying to reconnect
    if (BIO_do_connect(connection->bio) <= 0) {
        while (BIO_should_retry(connection->bio)) {
            if (BIO_do_connect(connection->bio) > 0) {
                break;
            }
        }
        // error handling in case BIO_should_retry returns false omitted
    }
    if (SSL_get_verify_result(connection->ssl) != X509_V_OK) {
        // Handle the failed verification
    }
    int socket = BIO_get_fd(bio, NULL);
}
void doFree(SSL *ssl, SSL_CTX *ctx, BIO *bio) {
    BIO_free_all(bio); // is this right?
}
The reason I'm wondering whether this is the correct way of freeing the memory is that I'm currently getting the following stack trace, and I don't know if I'm freeing the memory improperly or if it is some other sort of error (valgrind doesn't report any memory error; the program simply halts here).
(gdb) bt
#0 0x040010c2 in ?? () from /lib/ld-linux.so.2
#1 0x06a13a0b in write () at ../sysdeps/unix/syscall-template.S:82
#2 0x04153ae9 in ?? () from /lib/i386-linux-gnu/libcrypto.so.1.0.0
#3 0x041508e4 in BIO_write () from /lib/i386-linux-gnu/libcrypto.so.1.0.0
#4 0x040771f1 in ?? () from /lib/i386-linux-gnu/libssl.so.1.0.0
#5 0x040775ff in ?? () from /lib/i386-linux-gnu/libssl.so.1.0.0
#6 0x04078d2f in ?? () from /lib/i386-linux-gnu/libssl.so.1.0.0
#7 0x04077a64 in ?? () from /lib/i386-linux-gnu/libssl.so.1.0.0
#8 0x04074bde in ?? () from /lib/i386-linux-gnu/libssl.so.1.0.0
#9 0x0408eed1 in SSL_shutdown () from /lib/i386-linux-gnu/libssl.so.1.0.0
#10 0x0409b175 in ?? () from /lib/i386-linux-gnu/libssl.so.1.0.0
#11 0x04150638 in BIO_free () from /lib/i386-linux-gnu/libcrypto.so.1.0.0
#12 0x041511c4 in BIO_free_all () from /lib/i386-linux-gnu/libcrypto.so.1.0.0
It could be because your code doesn't call SSL_library_init(). Adding that, the includes, a main, and removing the references to connection made it all work for me.
Without SSL_library_init(), it crashed in BIO_should_retry because bio was NULL.
I was freeing the memory properly. It turned out that the issue was a result of me using OpenSSL with threads but not initializing OpenSSL's threading system.
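For the record, the threading setup referred to here is the legacy locking-callback API of OpenSSL 1.0.x (OpenSSL 1.1.0 and later handle locking internally and no longer need it). A minimal sketch, with illustrative function names, might look like this:

```c
#include <openssl/crypto.h>
#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t *locks;

/* OpenSSL calls this whenever it wants to take or release lock number n. */
static void locking_cb(int mode, int n, const char *file, int line) {
    if (mode & CRYPTO_LOCK)
        pthread_mutex_lock(&locks[n]);
    else
        pthread_mutex_unlock(&locks[n]);
}

/* Call once, before any other thread touches OpenSSL. */
void thread_setup(void) {
    int i;
    locks = malloc(CRYPTO_num_locks() * sizeof(pthread_mutex_t));
    for (i = 0; i < CRYPTO_num_locks(); i++)
        pthread_mutex_init(&locks[i], NULL);
    CRYPTO_set_locking_callback(locking_cb);
}
```

A thread-ID callback (CRYPTO_set_id_callback) is also needed on some platforms; see the threads(3) man page for the full set.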

C++ program crashing on a string concatenation

This is the line that is causing the crash :
if (size <= 0)
    return;
if (data)
{
    std::string sData = std::string((char*)data, size);
    buffer += sData; // <-- this is the line causing the crash
    processBuffer();
}
else
    return;
Here is the stack trace:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread -1282016352 (LWP 27952)]
0x002b48ec in memcpy () from /lib/tls/libc.so.6
(gdb) bt
#0 0x002b48ec in memcpy () from /lib/tls/libc.so.6
#1 0x001fea31 in std::string::_Rep::_M_clone () from /usr/lib/libstdc++.so.6
#2 0x001fef2e in std::string::reserve () from /usr/lib/libstdc++.so.6
#3 0x001ff83d in std::string::append () from /usr/lib/libstdc++.so.6
#4 0x001ff9e2 in std::string::operator+= () from /usr/lib/libstdc++.so.6
#5 0x003fc6c8 in StreamDecoder::StreamDecoderEncoder::addData
at src/StreamDecoder.cpp:171
I have verified that data is not empty and buffer is a string declared as a private member variable of that class.
I do not know why there is a segfault in memcpy. What could have gone wrong here?
I had this problem working on a school project a few months back... if a string gets massive, it can cause a segfault. Try using something like an ostringstream instead.

Debugging Python ctypes segmentation fault

I am trying to port some Python ctypes code from a Windows-specific program to link with a Linux port of my library. The shortest Python code sample that describes my problem is shown below. When I try to execute it, I receive a segmentation fault in examine_arguments() in Python. I placed a printf statement in my library at the crashing function call, but it is never executed, which leads me to think the problem is in the ctypes code.
import ctypes

avidll = ctypes.CDLL("libavxsynth.so")

class AVS_Value(ctypes.Structure, object):
    def __init__(self, val=None):
        self.type = ctypes.c_short(105)  # 'i'
        self.array_size = 5
        self.d.i = 99

class U(ctypes.Union):
    _fields_ = [("c", ctypes.c_void_p),
                ("b", ctypes.c_long),
                ("i", ctypes.c_int),
                ("f", ctypes.c_float),
                ("s", ctypes.c_char_p),
                ("a", ctypes.POINTER(AVS_Value))]

AVS_Value._fields_ = [("type", ctypes.c_short),
                      ("array_size", ctypes.c_short),
                      ("d", U)]

avs_create_script_environment = avidll.avs_create_script_environment
avs_create_script_environment.restype = ctypes.c_void_p
avs_create_script_environment.argtypes = [ctypes.c_int]

avs_set_var = avidll.avs_set_var
avs_set_var.restype = ctypes.c_int
avs_set_var.argtypes = [ctypes.c_void_p, ctypes.c_char_p, AVS_Value]

env = avs_create_script_environment(2)
val = AVS_Value()
res = avs_set_var(env, b'test', val)
My library has the following in its headers, and a plain-C program doing what I describe above (calling create_script_environment followed by set_var) runs fine. Looking at logging information my library is putting onto the console, the crash happens when I try to enter avs_set_var.
typedef struct AVS_ScriptEnvironment AVS_ScriptEnvironment;
typedef struct AVS_Value AVS_Value;

struct AVS_Value {
    short type;  // 'a'rray, 'c'lip, 'b'ool, 'i'nt, 'f'loat, 's'tring, 'v'oid,
                 // or 'l'ong; for some functions, e'rror
    short array_size;
    union {
        void * clip;              // do not use directly, use avs_take_clip
        char boolean;
        int integer;
        float floating_pt;
        const char * string;
        const AVS_Value * array;
    } d;
};

AVS_ScriptEnvironment * avs_create_script_environment(int version);
int avs_set_var(AVS_ScriptEnvironment *, const char* name, AVS_Value val);
I tried backtracing the call in GDB, but I don't understand how to interpret the results, nor much about using GDB in general.
#0 0x00007ffff61d6490 in examine_argument () from /usr/lib/python2.7/lib-dynload/_ctypes.so
#1 0x00007ffff61d65ba in ffi_prep_cif_machdep () from /usr/lib/python2.7/lib-dynload/_ctypes.so
#2 0x00007ffff61d3447 in ffi_prep_cif () from /usr/lib/python2.7/lib-dynload/_ctypes.so
#3 0x00007ffff61c7275 in _ctypes_callproc () from /usr/lib/python2.7/lib-dynload/_ctypes.so
#4 0x00007ffff61c7aa2 in PyCFuncPtr_call.2798 () from /usr/lib/python2.7/lib-dynload/_ctypes.so
#5 0x00000000004c7c76 in PyObject_Call ()
#6 0x000000000042aa4a in PyEval_EvalFrameEx ()
#7 0x00000000004317f2 in PyEval_EvalCodeEx ()
#8 0x000000000054b171 in PyRun_FileExFlags ()
#9 0x000000000054b7d8 in PyRun_SimpleFileExFlags ()
#10 0x000000000054c5d6 in Py_Main ()
#11 0x00007ffff68e576d in __libc_start_main () from /lib/x86_64-linux-gnu/libc.so.6
#12 0x000000000041b931 in _start ()
I'm at a loss as to how to approach this problem. I've looked at the details of the calling types, but I don't see anything obviously incorrect there. Am I falling into any platform-specific usages of types?
Edit It seems there's a problem with 32-bit vs 64-bit architectures in the ctypes module. When I tested this again with a 32-bit build of my library and 32-bit Python, it ran successfully. On 64-bit, it segfaults at the same place.
Try using c_void_p for the opaque AVS_ScriptEnvironment*:
avs_create_script_environment.restype = ctypes.c_void_p
and:
avs_set_var.argtypes = [ctypes.c_void_p, ctypes.c_char_p, AVS_Value]
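The reason restype matters on 64-bit: ctypes assumes a C int return type unless told otherwise, so a 64-bit pointer returned by the function gets truncated to 32 bits, and passing that mangled "environment pointer" on to the next call crashes. A quick self-contained illustration using libc's malloc (nothing here is specific to avxsynth):

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Without an explicit restype, ctypes would treat malloc's return value as a
# 32-bit C int, truncating the 64-bit pointer it actually returns.
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

p = libc.malloc(16)  # a full, untruncated pointer value
libc.free(p)
```

The same pattern applies to avs_create_script_environment: declare its restype as c_void_p and pass the opaque handle back through a c_void_p argtype.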