I'm trying to implement curve25519 signature verification with CryptoPP. I tried the libsignal library first, which shows the correct result. Then I tried the same data with CryptoPP, but it shows the wrong result.
Here is the code using libsignal to verify a signature:
string pub = str2hex("0504f05568cc7a16fa9b4bc1c9d9294a80b7727e349365f855031a180bf0f80910");
ec_public_key* pub_key;
curve_decode_point(&pub_key, (uint8_t*)pub.data(), pub.size(), 0);
string message = str2hex("05f1fd491d63f1860bdaf3f9b0eb46c2494b7f184a32d9e6c859a421ad284f4307");
string signature = str2hex("5e525df3360ea62281efe8fb9e183521105bb3d9ba8ad43be9fac9d87dd216a6ea9e64099f6f05fbcd6e5a39ab239aad8c1e03d27a1378e4bcbf8937eac4300a");
int ret = curve_verify_signature(
pub_key,
(uint8_t*)message.data(), message.size(),
(uint8_t*)signature.data(), signature.size()
);
cout << "ret: " << ret << endl; // shows 1 (correct)
The result is 1, which means the signature is correct. Please note that libsignal requires the public key to begin with the key-type byte 0x05; CryptoPP does not, so that byte is dropped below.
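If you keep libsignal's 33-byte key around, you can also drop the key-type byte programmatically instead of editing the hex string (a minimal sketch; pub is the decoded 33-byte string from above):

string rawPub = pub.substr(1); // strip the leading 0x05 key-type byte, leaving the 32 raw key bytes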
The code using CryptoPP:
string pub = str2hex("04f05568cc7a16fa9b4bc1c9d9294a80b7727e349365f855031a180bf0f80910");
string message = str2hex("05f1fd491d63f1860bdaf3f9b0eb46c2494b7f184a32d9e6c859a421ad284f4307");
string signature = str2hex("5e525df3360ea62281efe8fb9e183521105bb3d9ba8ad43be9fac9d87dd216a6ea9e64099f6f05fbcd6e5a39ab239aad8c1e03d27a1378e4bcbf8937eac4300a");
ed25519::Verifier verifier((uint8_t*)pub.data());
bool ret = verifier.VerifyMessage(
(uint8_t*)message.data(), message.size(),
(uint8_t*)signature.data(), signature.size()
);
cout << "ret: " << ret << endl; // shows 0 (wrong)
It shows 0. What's wrong with the code?
libsignal has a customized implementation of curve25519 signature verification:
the curve25519 public key is converted to an ed25519 public key (see here)
the verification is done using the ed25519 signature verification algorithm (see here)
If you try to verify the signature with CryptoPP using the converted ed25519 public key, the verification still fails. That's because the sign and ed25519_verify operations are also slightly adjusted in libsignal.
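For reference, the conversion in the first step is the standard birational map between the Montgomery and Edwards forms of the curve (the formula below follows from the curve25519/ed25519 relationship; it is not copied from libsignal's source):

y = (u - 1) / (u + 1) mod p, with p = 2^255 - 19

On top of that, as far as I can tell from libsignal's curve_sigs.c, the sign bit of the Edwards x-coordinate is taken from the high bit of the last signature byte, and that bit is cleared in a copy of the signature before the standard ed25519 verification runs. This is why a stock ed25519 verifier such as CryptoPP's fails even on the converted key.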
I would like to create a certificate with ECC. I am using ecdsa_with_SHA3-512 as the signature algorithm.
I can successfully sign the certificate as below.
auto message_digest = EVP_MD_fetch(nullptr, "SHA3-512", nullptr);
if (!message_digest) {
...
}
if (auto ssize = X509_sign(cert, pkey, message_digest)) {
...
}
But I can't verify the signature, as below:
auto result = X509_verify(cert, pkey);
if (result <= 0) {
printf("[verify result : %d]\n", result);
}
auto errCode = ERR_peek_last_error();
char errBuf[256]; // fixed-size buffer; the original new[] was never freed
ERR_error_string(errCode, errBuf);
std::cout << errBuf << "\n";
I get [verify result : -1] with the error message error:068000C7:asn1 encoding routines::unknown signature algorithm.
I am checking the tbs signature and the certificate signature algorithm objects; they are equal.
if(X509_ALGOR_cmp(signatureAlg, tbsSignature)) {
...
}
Below are the tbs signature object fields.
tbs signature ln : ecdsa_with_SHA3-512
tbs signature sn : id-ecdsa-with-sha3-512
tbs signature nid : 1115
As I understand it, X509_verify() looks up the signature algorithm NID in the nid_triple sigoid_srt[] array, cannot find the NID_ecdsa_with_SHA3_512 algorithm NID there, and because of this it gives the unknown-algorithm error.
I am new to cryptography and OpenSSL. What am I missing?
Edit: This hash/signature algorithm combination is not supported by itself in any of the current releases.
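One way to confirm this from code (a minimal sketch, separate from my certificate code) is to ask OpenSSL's object database whether the signature NID can be split into a digest/key-type pair, which X509_verify() needs to do internally:

#include <openssl/objects.h>
#include <cstdio>

int main() {
    int dig_nid = NID_undef, pkey_nid = NID_undef;
    // OBJ_find_sigid_algs() returns 1 if the signature algorithm is registered
    // together with a digest and key-type pair, 0 otherwise.
    if (OBJ_find_sigid_algs(NID_ecdsa_with_SHA3_512, &dig_nid, &pkey_nid) != 1) {
        std::printf("ecdsa_with_SHA3-512 is not registered for verification\n");
        return 1;
    }
    std::printf("digest: %s, key type: %s\n", OBJ_nid2ln(dig_nid), OBJ_nid2ln(pkey_nid));
    return 0;
}

Whether the lookup succeeds depends on the OpenSSL version you build against.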
I have been following this tutorial to set up multiple organizations. According to step 13, both organizations must sign off before a transaction is committed. How can I test that both organizations are endorsing the transaction?
Given the proposal responses you receive from the endorsing peers, you can iterate over them to check the validity of the signatures. Here, for example, is code from the Java SDK which handles this:
/*
 * Verifies that a proposal response is properly signed. The payload is the
 * concatenation of the response payload byte string and the endorsement. The
 * certificate (public key) is taken from the Endorsement.Endorser.IdBytes
 * field.
 *
 * @param crypto the CryptoPrimitives instance to be used for signing and
 * verification
 *
 * @return true/false depending on result of signature verification
 */
public boolean verify(CryptoSuite crypto) {
if (isVerified()) { // check if this proposalResponse was already verified by client code
return isVerified();
}
if (isInvalid()) {
this.isVerified = false;
}
FabricProposalResponse.Endorsement endorsement = this.proposalResponse.getEndorsement();
ByteString sig = endorsement.getSignature();
try {
Identities.SerializedIdentity endorser = Identities.SerializedIdentity
.parseFrom(endorsement.getEndorser());
ByteString plainText = proposalResponse.getPayload().concat(endorsement.getEndorser());
if (config.extraLogLevel(10)) {
if (null != diagnosticFileDumper) {
StringBuilder sb = new StringBuilder(10000);
sb.append("payload TransactionBuilderbytes in hex: " + DatatypeConverter.printHexBinary(proposalResponse.getPayload().toByteArray()));
sb.append("\n");
sb.append("endorser bytes in hex: "
+ DatatypeConverter.printHexBinary(endorsement.getEndorser().toByteArray()));
sb.append("\n");
sb.append("plainText bytes in hex: " + DatatypeConverter.printHexBinary(plainText.toByteArray()));
logger.trace("payload TransactionBuilderbytes: " +
diagnosticFileDumper.createDiagnosticFile(sb.toString()));
}
}
this.isVerified = crypto.verify(endorser.getIdBytes().toByteArray(), config.getSignatureAlgorithm(),
sig.toByteArray(), plainText.toByteArray()
);
} catch (InvalidProtocolBufferException | CryptoException e) {
logger.error("verify: Cannot retrieve peer identity from ProposalResponse. Error is: " + e.getMessage(), e);
this.isVerified = false;
}
return this.isVerified;
} // verify
Of course, you can achieve the same result with the other SDKs in a pretty similar way.
I am developing a quick DICOM viewer using the DCMTK library, and I am following the example provided in this link.
The buffer from the findAndGetString() API always returns null for any tag ID, e.g. DCM_PatientName.
The findAndGetOFString() API works, but returns only the first character of the tag's value as its ASCII code. Is this how this API should work?
Can someone let me know why the buffer is empty with the former API?
The DicomImage API has the same issue.
Snippet 1:
DcmFileFormat fileformat;
OFCondition status = fileformat.loadFile(test_data_file_path.toStdString().c_str());
if (status.good())
{
OFString patientName;
char* name;
if (fileformat.getDataset()->findAndGetOFString(DCM_PatientName, patientName).good())
{
name = new char[patientName.length() + 1]; // +1 for the terminating null byte
strcpy(name, patientName.c_str());
}
else
{
qDebug() << "Error: cannot access Patient's Name!";
}
}
else
{
qDebug() << "Error: cannot read DICOM file (" << status.text() << ")";
}
In the above snippet, name has the ASCII value "50", while the actual name is "PATIENT".
Snippet 2:
DcmFileFormat file_format;
OFCondition status = file_format.loadFile(test_data_file_path.toStdString().c_str());
std::shared_ptr<DcmDataset> dataset(file_format.getDataset());
qDebug() << "\nInformation extracted from DICOM file: \n";
const char* buffer = nullptr;
DcmTagKey key = DCM_PatientName;
dataset->findAndGetString(key,buffer);
std::string tag_value = buffer;
qDebug() << "Patient name: " << tag_value.c_str();
In the above snippet, the buffer is null; it doesn't read the name.
NOTE: This is only a sample. I am just playing around with the APIs for learning purposes.
The following sample method reads the patient name from a DcmDataset object:
std::string getPatientName(DcmDataset& dataset)
{
// Get the tag's value in ofstring
OFString ofstring;
OFCondition condition = dataset.findAndGetOFString(DCM_PatientName, ofstring);
if(condition.good())
{
// Tag found. Put it in a std::string and return it
return std::string(ofstring.c_str());
}
// Tag not found
return ""; // or throw if you need the tag
}
I have tried your code with your datasets. I just replaced the Qt console output classes with std::cout. It works for me, i.e. it prints the correct patient name (e.g. "PATIENT2" for scan2.dcm). Everything seems correct, except for the fact that you apparently want to transfer the ownership of the dataset to a smart pointer.
To obtain ownership of the DcmDataset from the DcmFileFormat, you must call getAndRemoveDataset() instead of getDataset(). However, I do not think that your issue is related to that. You may want to try my modified snippet:
DcmFileFormat file_format;
OFCondition status = file_format.loadFile("d:\\temp\\StackOverflow\\scan2.dcm");
std::shared_ptr<DcmDataset> dataset(file_format.getAndRemoveDataset());
std::cout << "\nInformation extracted from DICOM file: \n";
const char* buffer = nullptr;
DcmTagKey key = DCM_PatientName;
dataset->findAndGetString(key, buffer);
std::string tag_value = buffer;
std::cout << "Patient name: " << tag_value.c_str();
It probably helps you to know that your code and the DCMTK methods you use are correct, but that does not solve your problem. Another thing I would recommend is to verify the result returned by file_format.loadFile(); maybe there is a surprise in there.
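For example, a defensively written version of the read (a sketch only; the file name is a placeholder) checks every OFCondition before using the result:

DcmFileFormat file_format;
OFCondition status = file_format.loadFile("scan2.dcm");
if (status.good())
{
    const char* buffer = nullptr;
    // findAndGetString() returns a condition telling you why a tag could not be read
    OFCondition cond = file_format.getDataset()->findAndGetString(DCM_PatientName, buffer);
    if (cond.good() && buffer != nullptr)
        std::cout << "Patient name: " << buffer << "\n";
    else
        std::cout << "Cannot read PatientName: " << cond.text() << "\n";
}
else
{
    std::cout << "loadFile failed: " << status.text() << "\n";
}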
Not sure if I can help you more, but my next step would be to verify your build environment, e.g. the options that you use for building DCMTK. Are you using CMake to build DCMTK?
Is it possible to obtain the string equivalent of protobuf enums in C++?
e.g.:
The following is the message description:
package MyPackage;
message MyMessage
{
enum RequestType
{
Login = 0;
Logout = 1;
}
optional RequestType requestType = 1;
}
In my code I wish to do something like this:
MyMessage::RequestType requestType = MyMessage::RequestType::Login;
// requestTypeString will be "Login"
std::string requestTypeString = ProtobufEnumToString(requestType);
The EnumDescriptor and EnumValueDescriptor classes can be used for this kind of manipulation, and the generated .pb.h and .pb.cc names are easy enough to read, so you can look through them to get details on the functions they offer.
In this particular case, the following should work (untested):
std::string requestTypeString = MyMessage_RequestType_Name(requestType);
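For completeness, a slightly fuller sketch (untested; the header name is hypothetical, and the exact generated names are worth confirming in your own .pb.h):

#include <iostream>
#include "my_message.pb.h" // hypothetical name of the generated header

int main() {
    MyPackage::MyMessage::RequestType requestType = MyPackage::MyMessage::Login;
    // enum value -> string via the generated <Message>_<Enum>_Name() helper
    std::cout << MyPackage::MyMessage_RequestType_Name(requestType) << "\n"; // prints "Login"
    // string -> enum value via the generated <Message>_<Enum>_Parse() helper
    MyPackage::MyMessage_RequestType parsed;
    if (MyPackage::MyMessage_RequestType_Parse("Logout", &parsed)) {
        std::cout << parsed << "\n"; // prints 1
    }
    return 0;
}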
See Josh Kelley's answer; use the EnumDescriptor and EnumValueDescriptor.
The EnumDescriptor documentation says:
To get an EnumDescriptor
To get the EnumDescriptor for a generated enum type, call
TypeName_descriptor(). Use DescriptorPool to construct your own
descriptors.
To get the string value, use FindValueByNumber(int number)
const EnumValueDescriptor * EnumDescriptor::FindValueByNumber(int number) const
Looks up a value by number.
Returns NULL if no such value exists. If multiple values have this number, the first one defined is returned.
Example, given the protobuf enum:
enum UserStatus {
AWAY = 0;
ONLINE = 1;
OFFLINE = 2;
}
The code to read the string name from a value and the value from a string name:
const google::protobuf::EnumDescriptor *descriptor = UserStatus_descriptor();
std::string name = descriptor->FindValueByNumber(UserStatus::ONLINE)->name();
int number = descriptor->FindValueByName("ONLINE")->number();
std::cout << "Enum name: " << name << std::endl;
std::cout << "Enum number: " << number << std::endl;
I'm using the net-snmp library (version 5.7.1) in a C++ program under Linux. I have a web frontend where a user can choose an SNMP version and configure it. SNMPv1 and SNMPv2 are working just fine, but I have some issues with SNMPv3.
Here is a picture of the frontend: Screenshot of Webinterface
When I start the C++ backend and enter all needed SNMPv3 credentials correctly, everything works fine and the device is reachable. If I then change, for example, the auth protocol from MD5 to SHA but leave the rest of the credentials the same, I would expect the device to no longer be reachable. In reality it stays reachable. After restarting the backend, the device is (as expected) no longer reachable with the same settings.
After discovering this issue, I ran some tests. For the tests I used different users and different settings. They were run against three devices from different vendors, and I got the same result every time, so it cannot be a device-related issue. The results can be seen here: Test results
My conclusion after testing was that net-snmp seems to cache the selected auth and priv protocol for a given user name. This can be seen very well in Test 2: the first time I use a user name with a specific protocol, I get the expected result. After changing the protocol, a different result is expected, but I still get the same result as before.
Finally, some information on how the SNMP calls are made:
There is a class called SNMPWrapper, which handles the whole SNMP communication.
Inside the constructor I call init_snmp() to initialize net-snmp.
From the outside, only get(), set() and walk() can be called. Every time one of these methods is called, a new SNMP session is created (first I create a new session with snmp_sess_init(), then I set up the things needed, and finally I open the session with snmp_sess_open()).
After I have made the request and received my answer, I close the session with snmp_sess_close().
Question: Do I have to do any other cleanup before changing a protocol in order to get it to work correctly?
Edit: I added some code that shows the described behaviour:
int main(int argc, char** argv) {
struct snmp_session session, session1, *ss, *ss1;
struct snmp_pdu *pdu, *pdu1;
struct snmp_pdu *response, *response1;
oid anOID[MAX_OID_LEN];
size_t anOID_len = MAX_OID_LEN;
struct variable_list *vars;
int status, status1;
init_snmp("snmpapp");
const char* user = "md5";
string authpw = "123123123";
string privpw = "";
string ipString = "192.168.15.32";
char ip[16];
memset(&ip, 0, sizeof (ip));
ipString.copy(ip, sizeof (ip) - 1, 0);
/*
* First request: AuthProto is MD5, no PrivProto is used. The snmp-get
* request is successful
*/
snmp_sess_init(&session); /* set up defaults */
session.peername = ip;
session.version = SNMP_VERSION_3;
/* set the SNMPv3 user name */
session.securityName = strdup(user);
session.securityNameLen = strlen(session.securityName);
// set the authentication method to MD5
session.securityLevel = SNMP_SEC_LEVEL_AUTHNOPRIV;
session.securityAuthProto = usmHMACMD5AuthProtocol;
session.securityAuthProtoLen = USM_AUTH_PROTO_MD5_LEN;
session.securityAuthKeyLen = USM_AUTH_KU_LEN;
if (generate_Ku(session.securityAuthProto,
session.securityAuthProtoLen,
(u_char *) authpw.c_str(), strlen(authpw.c_str()),
session.securityAuthKey,
&session.securityAuthKeyLen) != SNMPERR_SUCCESS) {
//if code reaches here, the creation of the security key was not successful
}
cout << "SecurityAuthProto - session: " << session.securityAuthProto[9] << " / SecurityAuthKey - session: " << session.securityAuthKey << endl;
ss = snmp_open(&session); /* establish the session */
if (!ss) {
cout << "Couldn't open session1 correctly";
exit(2);
}
cout << "SecurityAuthProto - ss: " << ss->securityAuthProto[9] << " / SecurityAuthKey - ss: " << ss->securityAuthKey << endl;
//send message
pdu = snmp_pdu_create(SNMP_MSG_GET);
read_objid(".1.3.6.1.2.1.1.1.0", anOID, &anOID_len);
snmp_add_null_var(pdu, anOID, anOID_len);
status = snmp_synch_response(ss, pdu, &response);
/*
* Process the response.
*/
if (status == STAT_SUCCESS && response->errstat == SNMP_ERR_NOERROR) {
cout << "SNMP-read success" << endl;
} else {
cout << "SNMP-read fail" << endl;
}
if (response)
snmp_free_pdu(response);
if (!snmp_close(ss))
cout << "Snmp closing failed" << endl;
/*
* Second request: Only the authProto is changed from MD5 to SHA1. I expect,
* that the snmp-get fails, but it still succeeds.
*/
snmp_sess_init(&session1);
session1.peername = ip;
session1.version = SNMP_VERSION_3;
/* set the SNMPv3 user name */
session1.securityName = strdup(user);
session1.securityNameLen = strlen(session1.securityName);
// set the authentication method to SHA1
session1.securityLevel = SNMP_SEC_LEVEL_AUTHNOPRIV;
session1.securityAuthProto = usmHMACSHA1AuthProtocol;
session1.securityAuthProtoLen = USM_AUTH_PROTO_SHA_LEN;
session1.securityAuthKeyLen = USM_AUTH_KU_LEN;
if (generate_Ku(session1.securityAuthProto,
session1.securityAuthProtoLen,
(u_char *) authpw.c_str(), strlen(authpw.c_str()),
session1.securityAuthKey,
&session1.securityAuthKeyLen) != SNMPERR_SUCCESS) {
//if code reaches here, the creation of the security key was not successful
}
cout << "SecurityAuthProto - session1: " << session1.securityAuthProto[9] << " / SecurityAuthKey - session1: " << session1.securityAuthKey << endl;
ss1 = snmp_open(&session1); /* establish the session */
if (!ss1) {
cout << "Couldn't open session1 correctly";
exit(2);
}
cout << "SecurityAuthProto - ss1: " << ss1->securityAuthProto[9] << " / SecurityAuthKey - ss1: " << ss1->securityAuthKey << endl;
//send message
pdu1 = snmp_pdu_create(SNMP_MSG_GET);
anOID_len = MAX_OID_LEN; /* reset the length before reusing the OID buffer */
read_objid(".1.3.6.1.2.1.1.1.0", anOID, &anOID_len);
snmp_add_null_var(pdu1, anOID, anOID_len);
status1 = snmp_synch_response(ss1, pdu1, &response1);
/*
* Process the response.
*/
if (status1 == STAT_SUCCESS && response1->errstat == SNMP_ERR_NOERROR) {
cout << "SNMP-read success" << endl;
} else {
cout << "SNMP-read fail" << endl;
}
if (response1)
snmp_free_pdu(response1);
snmp_close(ss1);
return 0;
}
I found the solution myself:
net-snmp caches the users for every engineID (device). If there is already a cached user for an engineID and you try to open a new session with that user, net-snmp will use the cached one. So the solution was to clear the list of cached users.
With this code snippet I could resolve my problem:
usmUser* actUser = usm_get_userList();
while (actUser != NULL) {
usmUser* dummy = actUser;
usm_remove_user(actUser); // unlink the cached user from the USM user list
actUser = dummy->next; // continue with the next cached entry
}
I hope this helps somebody else.
You can also update the password for an existing user:
for (usmUser* actUser = usm_get_userList(); actUser != NULL; actUser = actUser->next) {
if (strcmp(actUser->secName, user) == 0) {
//this method calls generate_Ku with previous security data but with specified password
usm_set_user_password(actUser, "userSetAuthPass", authpw.c_str());
break;
}
}