I have generated the genesis block and related hashes, and the daemon runs fine.
I'm trying to mine the first block (block 1) using 'setgenerate true 1'.
I've changed the related params in chainparams.cpp, but every time I run the command I get a segmentation fault.
The debug log shows:
2018-06-25 19:30:54 keypool reserve 2
2018-06-25 19:30:54 CreateNewBlock(): total size 1000
Using latest master branch.
The first thing you need to do is check the debug.log in the .pivx folder.
Second, what settings did you put in pivx.conf? For mine I use the following:
rpcuser=user
rpcpassword=password
rpcallowip=127.0.0.1
listen=1
server=1
daemon=1
logtimestamps=1
maxconnections=256
staking=1
txindex=1
As for your segmentation fault: it is caused by miner.cpp. In src/miner.cpp there is this line:
uint256 hashBlockLastAccumulated = chainActive[nHeight - (nHeight % 10) - 10]->GetBlockHash();
Here nHeight is the height of the last block in the chain (0 on an empty chain) plus 1, i.e. 1, so the index works out to 1 - (1 % 10) - 10 = -10; accessing a negative index of the array causes the segmentation fault.
So you need to edit this code before the mining process can run.
I was running the demo codes in AUTO-07p, which is mainly based on Fortran but also follows Python syntax, and I was getting the same error with most of the demo codes. Since they were not written by me but by experts, and have been released for a long time, there must NOT be an error in them; maybe I don't have to modify the codes.
Whenever I run code containing either the bifurcation-analysis part or plotting, I keep getting the error message RuntimeError: maximum recursion depth exceeded while calling a Python object.
So I tried changing the recursion limit to 10^7, but then I got the error message "Segmentation fault (core dumped)".
I'm using Ubuntu 16.04 LTS.
demo('bvp')
bvp = run('bvp')
branchpoints = bvp("BP")
for solution in branchpoints:
    bp = load(solution, ISW=-1, NTST=50)
    # Compute forwards
    print "Solution label", bp["LAB"], "forwards"
    fw = run(bp)
    # Compute backwards
    print "Solution label", bp["LAB"], "backwards"
    bw = run(bp, DS='-')
    both = fw + bw
    merged = merge(both)
    bvp = bvp + merged
bvp = relabel(bvp)
save(bvp, 'bvp')
plot(bvp)
wait()
The error message says (excluding seemingly unnecessary lines)
/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.pyc in flatnonzero(a)
924
925
-->926 return np.nonzero(np.ravel(a))[0]
927
928
... last 1 frames repeated, from the frame below ...
/usr/local/lib/python2.7/dist-packages/numpy/core/numeric.pyc in flatnonzero(a)
924
925
-->926 return np.nonzero(np.ravel(a))[0]
927
928
RuntimeError: maximum recursion depth exceeded while calling a Python object
So it seems the code is stuck in infinite recursion, but that flatnonzero function simply indexes the nonzero entries... I'm not sure what I should do.
I am studying blockchain and I am trying to mine the genesis block of a crypto source.
The source I have is a PoS + masternode source. Of course there is PoW in it to mine the first blocks.
So I generated the genesis hash and merkle root. The daemon boots up and everything works. But the moment I use the "setgenerate true" or "getblocktemplate" commands, nothing happens. The genesis block can't be mined.
The "getblocktemplate" command returns "Out of memory (code -7)".
Debug.log shows:
2019-01-21 16:23:42 ERROR: CheckTransaction() : txout.nValue negative
2019-01-21 16:23:42 ERROR: CheckBlock() : CheckTransaction failed
2019-01-21 16:23:42 CreateNewBlock() : TestBlockValidity failed
2019-01-21 16:23:42 CreateNewBlock: Failed to detect masternode to pay
2019-01-21 16:23:42 CreateNewBlock(): total size 1000
I disabled the masternode enforcement sporks.
Is there anyone who experienced something like this or can help me with it?
The genesis block doesn't actually require mining. You can create it as whatever you want as long as it follows the serialisation of your protocol. Genesis blocks tend to follow slightly different rules to normal blocks and so often do not pass validation under normal circumstances.
Here is how we handle the genesis block in our code-base. It has slightly different rules to how we handle other blocks.
All a block needs is a previous block to point backwards to. So as long as you have some previous hash, new blocks should be able to be formed on top of your genesis block.
I suggest you try the Bitshares or Steem code and see how the mining goes. You can use the TEST mode in either one to start creating / mining blocks from the genesis block.
My ETM trace is captured separately and loaded with the TRACE32 command LA.IMPORT (TRACE32 is not connected directly to a device).
How can I filter the records for each core (run 0, 1, 2, ...) from the ETB dump into separate windows with the LA method?
Is there a method which provides trace data the same as when capturing from a device?
I tried using Trace.Find ,core 0 but it is not working. It prints the record number, but when I then try print trace.record.data(recno) (where recno is the output of Trace.Find ,core 0), I don't get any record data.
Can you please try the commands below to check the trace-data records for core n after importing the ETB dump? Please comment on whether it worked.
la.list /core n
or
trace.list /core n
I could not understand the second question.
The ETB dump is as good as the trace obtained through live capturing from the device. The only difference is that the ETB data is stored in DDR or another location, whereas in live capturing it is saved in the T32 device memory, with timestamps if cycle-accurate tracing is enabled. If there are no FIFO overflows, both will be identical. Please correct me if my understanding is wrong.
I am new to working with Promela and in particular SPIN. I have a model which I am trying to verify, and I can't understand SPIN's output well enough to resolve the problem.
Here is what I did:
spin -a untitled.pml
gcc -o pan pan.c
./pan
The output was as follows:
pan:1: VECTORSZ is too small, edit pan.h (at depth 0)
pan: wrote untitled.pml.trail
(Spin Version 6.4.5 -- 1 January 2016)
Warning: Search not completed
+ Partial Order Reduction
Full statespace search for:
never claim - (none specified)
assertion violations +
acceptance cycles - (not selected)
invalid end states +
State-vector 8172 byte, depth reached 0, errors: 1
0 states, stored
0 states, matched
0 transitions (= stored+matched)
0 atomic steps
hash conflicts: 0 (resolved)
I then ran SPIN again to try to determine the cause of the problem by examining the trail file. I used this command:
spin -t -v -p untitled.pml
This was the result:
using statement merging
spin: trail ends after -4 steps
#processes: 1
( global variable dump omitted )
-4: proc 0 (:init::1) untitled.pml:173 (state 1)
1 process created
According to this output (as I understand it), the verification is failing during the "init" procedure. The relevant code from within untitled.pml is this:
init {
int count = 0;
int ordinal = N;
do // This is line 173
:: (count < 2 * N + 1) ->
At this point I have no idea what is causing the problem, since to me the "do" statement should execute just fine.
Can anyone please help me understand SPIN's output so I can remove this error during the verification process? For reference, the model does produce the correct output.
You can simply ignore the trail file in this case, it is not relevant at all.
The error message
pan:1: VECTORSZ is too small, edit pan.h (at depth 0)
tells you that the value of the VECTORSZ directive is too small to successfully verify your model.
By default, VECTORSZ has size 1024.
To fix this issue, try compiling your verifier with a larger VECTORSZ size:
spin -a untitled.pml
gcc -DVECTORSZ=2048 -o run pan.c
./run
If 2048 doesn't work either, try increasingly larger values.
I am writing desktop app for windows on C++ using Qt for GUI and GStreamer for audio processing.
In my app I need to monitor several internet aac audio streams if they are online, and listen to available stream that has the most priority. For this task I use GstDiscoverer object from GStreamer, but I have some problems with it.
I check the audio streams every 1-2 seconds, so GstDiscoverer is called very often.
Every time my app runs, it eventually crashes with a segmentation fault during a GstDiscoverer check.
I tried both sync and async methods of calling GstDiscoverer ( gst_discoverer_discover_uri(), gst_discoverer_discover_uri_async() ) , both work the same way.
The crash happens in aac_type_find() function from gsttypefindfunctions.c on line 1122 (second line of code below).
len = ((c.data[offset + 3] & 0x03) << 11) |
(c.data[offset + 4] << 3) | ((c.data[offset + 5] & 0xe0) >> 5);
Local variables received from the debugger during one of the crashes:
As we can see, the offset variable is greater than c.size, so c.data[offset] is out of range; I think that's why the segmentation fault happens.
This does not happen regularly: the program can run for several hours or for ten minutes.
But it seems to me that it happens more often when the time interval between GstDiscoverer calls is small. So there is some probability of a crash whenever aac_type_find() is called.
I tried GStreamer versions 1.6.1 and the latest 1.6.2; the bug exists in both.
Can somebody help me with this problem? Is this Gstreamer bug or maybe I do something wrong?
It was reported to the GStreamer project here and a patch for the crash was merged and will be in the next releases: https://bugzilla.gnome.org/show_bug.cgi?id=759910