not enough values to unpack (expected 2, got 0) in scapy - python-2.7

It gives me this error when I try to execute the code.


BigQuery - Where can I find the error stream?

I have uploaded a CSV file with 300K rows from GCS to BigQuery, and received the following error:
Where can I find the error stream?
I've changed the create table configuration to allow 4000 errors and it worked, so the problem must lie with the 3894 rows mentioned in the message, but the error message does not tell me much about which rows failed or why.
Thanks
I finally managed to see the error stream by running the following command in the terminal:
bq --format=prettyjson show -j <JobID>
It returns a JSON with more details.
In my case it was:
"message": "Error while reading data, error message: Could not parse '16.66666666666667' as int for field Course_Percentage (position 46) starting at location 1717164"
You should be able to click on Job History in the BigQuery UI, then click the failed load job. I tried loading an invalid CSV file just now, and the errors that I see are:
Errors:
Error while reading data, error message: CSV table encountered too many errors, giving up. Rows: 1; errors: 1. Please look into the error stream for more details. (error code: invalid)
Error while reading data, error message: CSV table references column position 1, but line starting at position:0 contains only 1 columns. (error code: invalid)
The first one is just a generic message indicating the failure, but the second error (from the "error stream") is the one that provides more context for the failure, namely CSV table references column position 1, but line starting at position:0 contains only 1 columns.
Edit: given a job ID, you can also use the BigQuery CLI to see complete information about the failure. You would use:
bq --format=prettyjson show -j <job ID>
Using the Python client, it's:
from google.api_core.exceptions import BadRequest

job = client.load_table_from_file(*args, **kwargs)
try:
    result = job.result()
except BadRequest as ex:
    for err in ex.errors:
        print(err)
    raise
# or alternatively
# job.errors
You could also just do:
from google.api_core.exceptions import ClientError

try:
    load_job.result()  # Waits for the job to complete.
except ClientError as e:
    print(load_job.errors)
    raise e
This will print the errors to the screen, or you could log them, etc.
As the other answers mention, you can also see this information in the GCP logging (Stackdriver) tool.
However, this might not fully answer your question: there seem to be detailed errors (such as the one Elliot found) and more imprecise ones that give you no description at all, regardless of which UI you use to explore them.
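If all you have is the job ID from the UI or the CLI, here is a minimal sketch of pulling the same details with the Python client (the job ID and location below are placeholders):

from google.cloud import bigquery

client = bigquery.Client()
# Placeholder job ID and location; use the values from your failed load job.
job = client.get_job("bquxjob_123abc", location="US")

print(job.error_result)       # generic top-level error
for err in job.errors or []:  # detailed errors (the "error stream")
    print(err)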

fasttext assertion "counts.size() == osz_" failed

I am trying to use fasttext for text classification and I am training on a corpus of 850MB of texts on Windows, but I keep getting the following error:
assertion "counts.size() == osz_" failed: file "src/model.cc", line 206, function: void fasttext::Model::setTargetCounts(const std::vector<long int>&) Aborted (core dumped)
I checked the values of counts.size() and osz_ and found that counts.size() = 2515626 and osz_ = 300. When I call in.good() on the input stream in FastText::loadModel I get 0, in.fail()=1 and in.eof()=1.
I am using the following commands to train and test my model:
./fasttext supervised -input fasttextinput -output fasttextmodel -dim 300 -epoch 5 -minCount 5 -wordNgrams 2
./fasttext test fasttextmodel.bin fasttextinput
My input data is properly formatted according to the fastText GitHub page, so I am wondering whether this is a mistake on my part or a bug.
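(For reference, the supervised format expects one example per line with the label prefixed by __label__; the lines below are a made-up sample:)
__label__positive I really enjoyed this product
__label__negative The build quality was disappointing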
Thanks for any support on this!
To close this thread:
As @Sixhobbits pointed out, the error was related to https://github.com/facebookresearch/fastText/issues/73 (running out of disk space when saving the fastText supervised model).
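A quick way to rule this out before training is to check how much free space is left where the model files will be written; a minimal sketch in Python (assuming Python 3 for shutil.disk_usage; the path is illustrative):

import shutil

# Directory where fasttextmodel.bin / fasttextmodel.vec will be written.
total, used, free = shutil.disk_usage(".")
print("Free disk space: %.1f GB" % (free / 1e9))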

vision.internal.disparityParser in MATLAB

I am working with the Computer Vision toolbox in MATLAB 2014b.
There is a function for Semi-Global Matching (SGM).
I am trying to generate a disparity map from a pair of stereo images. However, the disparity range needs to be quite large for some experiments.
Here is the function call:
Dmap = disparity(I1, I2, 'BlockSize', 15, 'DisparityRange', [-2466, 2466]);
The problem is that DisparityRange is limited to the range [-2464, 2464]. Thus, I am getting an error message like the one below.
Error using disparity
The value of 'DisparityRange' is invalid. Expected DisparityRange to be an array with all of the values >
-2466.
Error in vision.internal.disparityParser (line 38)
parser.parse(varargin{:});
Error in disparity>parseOptionalInputs (line 264)
r = vision.internal.disparityParser(imageSize, getDefaultParameters(),...
Error in disparity>parseInputs (line 244)
r = parseOptionalInputs(imageSize, varargin{:});
Error in disparity (line 137)
r = parseInputs(I1, I2, varargin{:});
My questions:
1. I could not find the function (vision.internal.disparityParser). Where should it be located?
2. I would like to modify the code to work for ranges beyond the specified limit. Is that possible?
3. For anyone who has worked with the C++ version of the SGM function (OpenCV), does the same problem exist (i.e., the disparity range limit)?
Thank you!
:)
I could only answer the first question. The function vision.internal.disparityParser is located at $MATLAB/toolbox/vision/vision/+vision/+internal/disparityParser.m.
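If you want to confirm the location on your own installation, you can ask MATLAB directly (a small sketch; the exact path will differ by release):

which('vision.internal.disparityParser')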

Fatal error in ../deps/v8/src/handles.h, CHECK(location_ != NULL) failed / FATAL ERROR: CALL_AND_RETRY_0 Allocation failed - process out of memory

I installed Node on an Ubuntu server and am trying to run a basic Node program from the server. In it, I try to read in 13 JSON files, ranging from 120 MB to 500+ MB. Previously, I was running it locally, which was OK for the smaller files, but I ran into the following error when trying to read the larger ones:
FATAL ERROR: CALL_AND_RETRY_0 Allocation failed - process out of memory
I need to be able to parse the JSON object after reading the file, so I can't read it line by line, and I need to build a cumulative object from the results of parsing all the objects. Like I said, the code works just fine and produces the result I expect when handling 3+ smaller files (< 20 MB), but crashes with the larger ones.
When I try to run it on the server, I have the same issue: it works just fine with smaller files, but crashes with the following error on larger ones:
# Fatal error in ../deps/v8/src/handles.h, line 48
# CHECK(location_ != NULL) failed
#
==== C stack trace ===============================
1: V8_Fatal
2: v8::String::NewFromUtf8(v8::Isolate*, char const*, v8::String::NewStringType, int)
3: node::StringBytes::Encode(v8::Isolate*, char const*, unsigned long, node::encoding)
4: node::Buffer::Utf8Slice(v8::FunctionCallbackInfo<v8::Value> const&)
5: v8::internal::FunctionCallbackArguments::Call(void (*)(v8::FunctionCallbackInfo<v8::Value> const&))
6: ??
7: ??
[1] 12381
illegal hardware instruction (core dumped) node main2.js
This is the code chunk it's failing in:
for (var i = 0; i < jsonFileArray.length; i++) {
  if (jsonFileArray[i].match(/\.json$/)) {
    jsonObject = JSON.parse(fs.readFileSync(dirPath + jsonFileArray[i]));
    categoryListObject = jsonManipulator.getFieldValues("Categories", jsonObject, categoryListObject);
  }
}
I tried increasing my --max-old-space-size, but that didn't help. Also, I should clarify, I'm pretty new to coding and have never written anything in C, so despite googling this, I'm not really sure where to go next. So really, any help/guidance/insight/step in the right direction would be super appreciated! Thanks!
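For reference, the flag is passed when launching Node and takes a size in megabytes, for example:

node --max-old-space-size=4096 main2.js

It raises the V8 heap limit, but it only helps if the machine actually has enough memory to hold the fully parsed objects.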
You're parsing an entire huge JSON file into memory, so this is to be expected. Try using something like the JSONStream module (https://github.com/dominictarr/JSONStream) and read the files with a read stream (http://nodejs.org/api/fs.html#fs_fs_createreadstream_path_options).
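A rough sketch of that approach, assuming the JSONStream module is installed (npm install JSONStream) and that the values you need live under a top-level "Categories" key; the parse pattern is a placeholder to adjust to your actual JSON structure:

var fs = require('fs');
var JSONStream = require('JSONStream');

// Stream one large file instead of readFileSync + JSON.parse.
var parser = JSONStream.parse('Categories.*');  // placeholder path pattern

fs.createReadStream(dirPath + jsonFileArray[i], { encoding: 'utf8' }).pipe(parser);

parser.on('data', function (value) {
  // Each matched value arrives one at a time, so the whole file is never
  // held in memory; accumulate into categoryListObject incrementally here.
});
parser.on('end', function () {
  // Move on to the next file here.
});

This keeps memory usage bounded by the size of the individual matched values rather than by the size of the whole file.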