How to make bitcoin hard fork - blockchain

I have been exploring the Bitcoin source code for some time and have successfully created a local Bitcoin network with a new genesis block.
Now I am trying to understand the process of hard forks (if I am using the wrong terms here, I am referring to the kind where the blockchain is split instead of mining a new genesis).
I am trying to find this approach in the Bitcoin Cash source code, but I haven't gotten anywhere so far except for the checkpoints.
// UAHF fork block.
{478558, uint256S("0000000000000000011865af4122fe3b144e2cbeea86"
                  "142e8ff2fb4107352d43")},
So far I understand that the above checkpoint corresponds to the chain split. But I am unable to find the location in the source code where this rule is enforced, i.e. the code specifying that the chain has a different block than Bitcoin after block number 478558.
Can anyone point me in the right direction here?

There is no specific rule you put in the source code that says "this is where the fork starts". The checkpoints are just for bootstrapping a new node; they are checked to make sure the right chain is being downloaded and verified.
A hard fork, by definition, is just a change in consensus rules. If you introduce new consensus-breaking rules, any nodes running the original Bitcoin rules will reject the incompatible blocks, and as soon as one such block is rejected (and mined on the other chain), you have two different chains.
As a side note, you should probably change the default P2P ports and P2P message headers in chainparams.cpp so it doesn't try to connect with other Bitcoin nodes.

Related

What are the files that require backup (ledger) in sawtooth validator

What is the main set of files required to orchestrate a new network from the data of an old Sawtooth network (I don't want to extend the old network)?
I want to back up the essential files that are crucial for the network to resume operation from the last block in the ledger.
Here is the list of files that were generated by the Sawtooth validator with PoET consensus:
block-00.lmdb
poet-key-state-0371cbed.lmdb
block-00.lmdb-lock
poet_consensus_state-020a4912.lmdb
block-chain-id
poet_consensus_state-020a4912.lmdb-lock
merkle-00.lmdb
poet_consensus_state-0371cbed.lmdb
merkle-00.lmdb-lock
txn_receipts-00.lmdb
poet-key-state-020a4912.lmdb
txn_receipts-00.lmdb-lock
poet-key-state-020a4912.lmdb-lock
What is the significance of each file, and what are the consequences if it is not included when restarting the network or creating a new network with the old ledger data?
The answer to this question could get long; I will cover most of it here for the benefit of folks who have this question, especially those who want to deploy the network through Kubernetes. Similar questions are frequently asked in the official RocketChat channel.
The essential files for the Validator and PoET are stored in the /etc/sawtooth (keys and config) and /var/lib/sawtooth (data) directories by default, unless changed. Create a mounted volume for these directories so they can be reused when a new instance is orchestrated.
Here's the file through which the default validator paths can be changed: https://github.com/hyperledger/sawtooth-core/blob/master/validator/packaging/path.toml.example
Note that you've missed the keys in your list of essential files, and they play an important role in the network. In the case of PoET, each enclave's registration information is stored in the Validator Registry against the Validator's public key. In the case of Raft/PBFT, the consensus engine uses the keys (member-list info) to send peer-to-peer messages.
In the case of Raft, the data directory is /var/lib/sawtooth-raft-engine.
The significance of each file you listed may not matter to most people, but here is an explanation of the important ones:
The *-lock files are system-generated; if you see one, a process has opened the corresponding file for writing.
block-00.lmdb is the block store/blockchain: key-value pairs of block ID to block information. It is also possible to index blocks by other keys; the Hyperledger Sawtooth documentation is the right place for the complete details.
merkle-00.lmdb stores the global state (state root hash), a Merkle tree representation in key-value pairs.
txn_receipts-00.lmdb is where transaction execution status is stored upon success. It also holds information about any events associated with those transactions.
Here is a list of files from the Sawtooth FAQ:
https://sawtooth.hyperledger.org/faq/validator/#what-files-does-sawtooth-use

Transport Rule Logical And for Exchange 2010

Good Afternoon,
I have exhausted my googling and best-guess ideas, so I hope someone here has an idea of whether this is possible or not.
I am using Exchange Server 2010 (vanilla) in a test environment and trying to create a Hub Transport Rule using the Exchange Management Console. The requirements of the rules filtering are similar to the following scenario:
1.) If a recipient's address matches (ends with) "#testdomain.com" AND (begins with) "john"
2.) If the sender's address matches (ends with) "#testdomain.com"
3.) Copy the message to the "SupervisorOfJohns#testdomain.com" mailbox
I have no problems doing items 2 and 3, but I cannot figure out how to get item 1 into the same condition. I have come across some threads that simply concluded that MS goofed on this, but I am hesitant to fault them for something which seems like it should be really straightforward. I must be missing something. Expressions I have tried so far:
1.) (^john)(#testdomain.com$)
2.) ^(john)(#testdomain.com)$
3.) (^john)#testdomain.com
4.) ^john #testdomain.com$
5.) ^(john)#testdomain.com
If you use the interface and +Add them as two separate entries, it treats them as an OR clause (if a recipient address begins with "john", OR it ends with "#testdomain.com"). As you can see from my simplistic attempts, I have barely any clue what can/should work in this case. Any suggestions or ideas would be appreciated.
Respectfully,
B. Whitman
Here's what I ended up using:
john\w*#testdomain.com
The reasoning behind the question is that I'm trying to make a service to catch certain e-mails and do some processing with them. I also wanted to restrict the senders/recipients to certain domains (though some checking will also be done with the processing service). Thanks to hjpotter92 for his solutions!

prioritizing torrent download sequences using libtorrent

Suppose I have 2+ clients (developed by me) ALL using libtorrent ( http://www.rasterbar.com/products/libtorrent/manual.html#queuing )
Can I prioritize download of a file from other clients effectively so that they download the file's pieces/chunks (whatever is torrent terminology here) from beginning of the file towards its end and not quite in random order?
(of course I'm allowing some "multiplexing" / "intertwining" pieces for reasons of availability and performance, but the goal here is to download as linearly and quickly from the start of the file towards the end as possible)
The goal I'm thinking about here is obviously previewing the file quickly. How to do this most effectively using libtorrent / possibly other C++ torrent library?
(I'm not quite interested in torrent implementations using non-binary languages, like Java or Python - I need machine code for reasons of performance and security, so, C, C++ or possibly D would all fit the bill)
You can certainly prioritize pieces and files with torrent_handle::prioritize_pieces() and torrent_handle::prioritize_files(). See the documentation.
This won't be enough to download in order, though. To do that, you can enable sequential download with torrent_handle::set_sequential_download(). This will issue new piece requests in order. Keep in mind that the time a request takes to be satisfied varies a lot depending on which peer you talk to; making the requests in order does not necessarily mean receiving the pieces in order.
There is another mechanism that attempts to do that: torrent_handle::set_piece_deadline() sets a target completion time for a piece. Such pieces are considered time-critical; they are ordered by their deadline, and the fastest peers are used to request blocks from them, attempting to download them in deadline order.
Now, I also get the impression that you want two separate clients (presumably running on different machines) to coordinate which pieces they download. Is that right? It's not entirely clear what you're asking about, but there's no simple way of asking libtorrent to do that.
You could write a plugin for libtorrent that implements a new extension message for these clients to chat and coordinate, which could de-select certain pieces the other client is downloading by setting their priority to 0.

Process queue as folder with files. Possible problems?

I have an executable that needs to process records in the database when a command arrives telling it to do so. Right now, I am issuing commands via a TCP exchange, but I don't really like it because
a) the queue is not persistent between sessions
b) the TCP port might get locked
The idea I have is to create a folder and place files in it whose names match the commands I want to issue.
Like:
1.23045.-1.1
2.999.-1.1
Then, after a command has been processed, its file will be deleted or moved to an Errors folder.
Is this viable or are there some unavoidable problems with this approach?
P.S. The process will be used on Linux system, so Antivirus problems are out of the question.
Yes, a few.
First, there are all the problems associated with using a filesystem. Antivirus programs are one (though I cannot see why that wouldn't apply to Linux; no delete locks?). Disk space and file/directory count maximums are others. Then there are open-file limits and permissions...
Second, race conditions. If there are multiple consumers, more than one of them might see and start processing the command before the first one has [re]moved it.
There are also the issues of converting commands to filenames and vice versa, and coming up with different names for a single command that needs to be issued multiple times. (Though these are programming issues, not design ones; they'll merely annoy.)
None of these may apply or be of great concern to you, in which case I say: Go ahead and do it. See what we've missed that Real Life will come up with.
I probably would use an MQ server for anything approaching "serious", though.

How do I extract the network protocol from the source code of the server?

I'm trying to write a chat client for a popular network. The original client is proprietary, and is about 15 GB larger than I would like. (To be fair, others call it a game.)
There is absolutely no documentation available for the protocol on the internet, and most search results only come back with the client's scripting interface. I can understand that, since used in the wrong way, it could lead to ruining other people's experience.
I've downloaded the source code of a couple of alternative servers, including the one I want to connect to, but those
contain no documentation other than install instructions
are poorly commented (I did a superficial browsing)
are HUGE (the src folder of the target server contains 12 MB worth of .cpp and .h files), and grep didn't find anything related
I've also tried searching their forums and contacting the maintainers of the server, but so far, no luck.
Packet sniffing isn't likely to help, as the protocol relies heavily on encryption.
At this point, all my hope is my ability to chew through an ungodly amount of code. How do I start?
Edit: A related question.
If the original code does its encryption with a well-known library like OpenSSL or Crypto++, it might be useful to write your own wrapper for the main entry points of these libraries, delegating the calls to the actual library. If you make that substitution and build the project successfully, you will be able to trace everything that goes out, in plain text.
If the project does not use third-party encryption libs, hopefully it is still possible to substitute the encryption routines with wrappers that trace their input and then delegate encryption to the actual code.
Your advantage is that encryption is usually implemented in a separate, relatively small set of source files, so it should be easier to track input/output there.
Good luck!
I'd say
find the call that is used to send data through the socket (it depends on the network library)
find references to that call and unwind from there; if you can modify and recompile the server code, it will help
Along the way, you will be able to log decrypted (or, more likely, not-yet-encrypted) network activity.
IMO, the best answer is to read the source code of the alternative server. Try using a good C++ IDE to help you. It will make a lot of difference.
It is likely that the protocol related material you need to understand will be limited to a subset of the files. These will contain references to network sockets and things. Start from there and work outwards as far as you need to.
A viable approach is to tackle this as a crypto challenge. That makes it easy, because you control so much.
For instance, you can use a current client to send a known message to the server, and then check server memory for that string. Once you've found out in which object the string ends up, it also becomes possible to trace its ancestry through the code. Set a breakpoint on any non-const method of the object and collect the stack traces. This gives you a live view of how messages arrive at the server, and a list of core functions essential to message processing. You can then find related functions (callers/callees of the functions on your list).