I have a Delphi 7 CGI web service application (on Windows XP, though Windows 7 is not off the table) in which I need to access a digital certificate to sign an XML document.
I have imported CAPICOM_TLB and managed to instantiate the certificate, but with some problems...
The Apache server that runs my app runs it under a different Windows user from the one I installed the certificate under, which leaves the Certificate Store empty when I query with the CAPICOM_CURRENT_USER_STORE flag. I worked around it by installing the A1 certificate (PFX with private key) in the local machine store (via the MMC console, Add/Remove Snap-in) and accessing the Certificate Store with the CAPICOM_LOCAL_MACHINE_STORE flag. I get the certificate (I can read its serial number, friendly name and valid-to date), but when I try to sign a document I get a "Key pair does not exist" error.
The same code works (successfully signs the XML) in a normal application (non-CGI) with the same PFX.
Code I use to get the certificate:
Store := CoStore.Create;
Store.Open(CAPICOM_LOCAL_MACHINE_STORE, CAPICOM_STORE_NAME, CAPICOM_STORE_OPEN_MAXIMUM_ALLOWED);
Certs := Store.Certificates as ICertificates2;
for i := 1 to Certs.Count do
begin
  Cert := IInterface(Certs.Item[i]) as ICertificate2;
  if Cert.SerialNumber = FNumeroSerie then
  begin
    if DFeUtil.EstaVazio(NumCertCarregado) then
      NumCertCarregado := Cert.SerialNumber;
    if CertStoreMem = nil then
    begin
      CertStoreMem := CoStore.Create;
      CertStoreMem.Open(CAPICOM_MEMORY_STORE, 'Memoria', CAPICOM_STORE_OPEN_MAXIMUM_ALLOWED);
      CertStoreMem.Add(Cert);
    end;
  end;
end;
Then I use the CertStoreMem to sign, using the following:
OleCheck(IDispatch(Certificado.PrivateKey).QueryInterface(IPrivateKey,PrivateKey));
xmldsig.store := CertStoreMem;
dsigKey := xmldsig.createKeyFromCSP(PrivateKey.ProviderType, PrivateKey.ProviderName, PrivateKey.ContainerName, 0);
The "Key pair does not exist" error is raised by the last line of code.
There are two possible approaches: make the CGI application read the certificate under the same user that installed it (the code that works in the non-CGI app), OR make this workaround with the machine-installed certificate work without the key error.
If anyone could help, it would be much appreciated.
If it is an issue with user authentication, and you have (total) control over the server, I would consider wrapping the signing bit in an ActiveX library and adding it to a COM+ package; using Component Services you can control activation and authentication so it runs with impersonation in a separate dllhost.exe.
I want to set up my local server to communicate with my client. They build a TLS connection using OpenSSL. I am trying to implement mutual authentication, i.e. the server verifies the client and the client also verifies the server.
When I use certificates generated by myself, everything works fine. The code is as follows; it is C++ code on the client. I set up the client cert, private key and intermediate cert. On the server side I saved a CA cert.
The relationship is: CA signs intermediate cert, intermediate cert signs client cert.
As we know, the reason we need to provide the client's private key is that the client signs a "challenge" and sends it to the server. The server gets the client's public key from the certificate chain and uses it to verify the signature on the "challenge". You can see this link for the detailed process:
https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake
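As an aside, that verification step can be pictured with plain OpenSSL calls. The snippet below is only an illustrative sketch (the function and variable names are mine, not from the question): it checks a signature made with the client's private key against the public key taken from the client certificate, which is conceptually what happens for the CertificateVerify message.

#include <openssl/evp.h>
#include <openssl/x509.h>

/* Illustrative only: verify a signature using the public key from a certificate. */
int verify_with_cert_pubkey(X509 *client_cert,
                            const unsigned char *msg, size_t msg_len,
                            const unsigned char *sig, size_t sig_len)
{
    EVP_PKEY *pub = X509_get_pubkey(client_cert);  /* public key from the certificate */
    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    int ok = 0;

    if (pub != NULL && ctx != NULL &&
        EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, pub) == 1 &&
        EVP_DigestVerify(ctx, sig, sig_len, msg, msg_len) == 1)
        ok = 1;  /* signature matches: the peer really holds the private key */

    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(pub);
    return ok;
}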
However, in my scenario I have no permission to access the private key. I only have an API to call, which takes the digest (or whatever we want to sign) as input and returns a string signed with the client's private key.
Therefore I'm not able to pass any "ClientPrivateKeyFileTest" to TLS.
I searched the OpenSSL source code, but the whole handshake is done inside SSL_do_handshake(), and I'm not allowed to modify that function.
// load client-side cert and key
SSL_CTX_use_certificate_file(m_ctx, ClientCertificateFileTest, SSL_FILETYPE_PEM);
SSL_CTX_use_PrivateKey_file(m_ctx, ClientPrivateKeyFileTest, SSL_FILETYPE_PEM);

// load intermediate cert
X509* chaincert = X509_new();
BIO* bio_cert = BIO_new_file(SignerCertificateFileTest, "rb");
PEM_read_bio_X509(bio_cert, &chaincert, NULL, NULL);
SSL_CTX_add1_chain_cert(m_ctx, chaincert);

m_ssl = SSL_new(m_ctx);

// get_socket is my own API
m_sock = get_socket();
SSL_set_fd(m_ssl, m_sock);

// do the handshake and build the connection
auto r = SSL_connect(m_ssl);
I think the whole handshake is done once I call SSL_connect(). So I wonder: is there any other way to complete the client authentication?
For example, could I skip the step of adding the private key, but set up a callback function somewhere that handles every case where SSL needs the private key to compute something?
PS: The API is a black box on the client machine.
One more thing: I have since found that an OpenSSL engine may help with this problem. But does anybody know what kind of engine is useful here? EC sign, verification, or something else?
Final update: I implemented an OpenSSL engine that overrides EC_KEY_METHOD so that I'm able to use my own sign function.
Thanks a lot!
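For anyone landing here later, a minimal sketch of that approach is below. It assumes OpenSSL 1.1.x; make_external_engine() and my_external_sign() are names I made up, and my_external_sign() stands in for the black-box signing API. The idea is to copy the built-in EC_KEY_METHOD, replace only its sign callback, and expose the result through an engine.

#include <openssl/engine.h>
#include <openssl/ec.h>

/* Hypothetical wrapper around the black-box client API: signs a digest. */
extern int my_external_sign(const unsigned char *dgst, int dlen,
                            unsigned char *sig, unsigned int *siglen);

/* Custom sign callback: delegate the digest to the external signer. */
static int ex_sign(int type, const unsigned char *dgst, int dlen,
                   unsigned char *sig, unsigned int *siglen,
                   const BIGNUM *kinv, const BIGNUM *r, EC_KEY *eckey)
{
    (void)type; (void)kinv; (void)r; (void)eckey;
    return my_external_sign(dgst, dlen, sig, siglen);
}

static EC_KEY_METHOD *ex_method;

ENGINE *make_external_engine(void)
{
    ENGINE *e = ENGINE_new();
    if (e == NULL)
        return NULL;

    /* Copy the default method so verification keeps its normal behaviour,
       then swap in the external sign function. sign_setup/sign_sig are left
       NULL because only the raw-buffer sign entry point is used by the
       TLS handshake path. */
    ex_method = EC_KEY_METHOD_new(EC_KEY_OpenSSL());
    EC_KEY_METHOD_set_sign(ex_method, ex_sign, NULL, NULL);

    ENGINE_set_id(e, "extsign");
    ENGINE_set_name(e, "external black-box EC signer");
    ENGINE_set_EC(e, ex_method);
    return e;
}

When wiring this into the client, the EVP_PKEY passed to SSL_CTX_use_PrivateKey() would typically wrap an EC key that carries only the public components matching the certificate, with its method switched to the one above (e.g. via EC_KEY_set_method()), so that every signing operation ends up in the external API.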
When using the local development server, the port for my default service always defaults to 8080. However, when I use aetest, the port number always changes. The command used to start the local server during unit testing specifies --port=0; since that lives in the appengine package, I do not want to modify it. I also can't specify the port number manually, since the tests are run with go test rather than dev_appserver.py.
What I need
The code I am testing requires a particular response from a different microservice in order to continue execution successfully. To keep the tests confined to this microservice, I am trying to set up a fake endpoint that provides the response I need.
Sample of code being tested
func Sample(c *gin.Context) {
...
url := os.Getenv("some_service") + "/some-endpoint"
req, err := http.NewRequest("POST", url, r.Body)
if err != nil {
// error handling
}
...
}
I need the host of the current service being tested so that I can set dummy environment variables for the other microservice it interacts with. The host URL should be the URL of the service currently under test. This way I can use factory responses, since I am not testing the other services in this set of tests.
What I've Tried
Using appengine.DefaultVersionHostname(); however, this returns an empty string
Using the HTTP request I create and reading the URL from it; however, it does not include the host as needed (just the path)
Question
Is there any way to obtain the host for the local development server when running unit tests in GAE? Or, is there a way to specify the port number for app engine tests?
I was able to achieve my goal by first getting the list of modules. (The service name you are looking for is likely default, but I chose to use the first name in the list anyway.) Using the module name, I got the default version of the module. Using both of these (and an empty instance name), I was able to get the module host name, which provided the value I was looking for.
A code sample is shown below:
// list the app's modules (services)
ml, _ := module.List(ctx)
// default version of the first module
mv, _ := module.DefaultVersion(ctx, ml[0])
// host name for that module/version (empty instance name)
hn, _ := appengine.ModuleHostname(ctx, ml[0], mv, "")
Thanks to Dan for pointing me in the right direction.
I am using the example that restricts all but one port for a specific Windows service. I took the example from MSDN and tried it with the OpenVPN Windows service. Basically, I just edited these two lines:
BSTR bstrServiceName = SysAllocString(L"OpenVPNServiceInteractive");
BSTR bstrAppName = SysAllocString(L"C:\\Program Files\\OpenVPN\\bin\\openvpnserv.exe");
As it needs the short name and not the display name, I ran sc query in my console and found OpenVPNServiceInteractive, but when I run the example it doesn't find the service short name (the handle call fails and it says: RestrictService failed: Make sure you specified a valid service shortname).
So it basically can't find the service short name I specified. Does sc query print the real short name of a service? Why doesn't the example find it?
It failed to restrict the service because I lacked administrator privileges. I ran it as administrator and it worked.
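For reference, a quick way to confirm this condition from code is to check whether the process token is elevated before calling into the restriction sample. The helper below is only a hedged sketch of that check (it is not part of the MSDN example):

#include <windows.h>

/* Returns TRUE when the current process is running elevated (as administrator). */
BOOL IsProcessElevated(void)
{
    HANDLE hToken = NULL;
    TOKEN_ELEVATION elevation = {0};
    DWORD cbReturned = 0;
    BOOL elevated = FALSE;

    if (OpenProcessToken(GetCurrentProcess(), TOKEN_QUERY, &hToken)) {
        if (GetTokenInformation(hToken, TokenElevation, &elevation,
                                sizeof(elevation), &cbReturned))
            elevated = (elevation.TokenIsElevated != 0);
        CloseHandle(hToken);
    }
    return elevated;
}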
I'm wondering if anyone has had any success writing a C/C++ application that uses the Oracle OCI API and authenticates using an Oracle wallet.
I have successfully created the wallet using mkstore and have stored the credentials in it. My tnsnames.ora and sqlnet.ora files have the correct contents, and my ORACLE_HOME and ORACLE_SID environment variables are set correctly, as I can use sqlplus /@XE to authenticate a sqlplus session successfully with it.
Within the same terminal I have created a simple C program that allocates the OCIEnv, OCIServer, OCIError and OCISvcCtx handles and calls OCIEnvCreate(). That all works fine.
I then try calling any one of the "connect" functions, such as OCILogon (I also tried OCILogon2 and OCISessionPoolCreate), and I always get "invalid username/password". I am trying to call it the way I invoke sqlplus, i.e. a null username and password with 0 length, and a dbname of "XE" with the appropriate length. (I've also tried dbnames of "@XE" and "/@XE" for completeness.)
I see there is a security API for opening wallets and interrogating their contents, but I assumed this was for applications that want to interact directly with the contents of the wallet (i.e. add/remove credentials etc.). Maybe this is an incorrect assumption on my part...
There is precious little info out there on how to do this programmatically, so if anyone has any pointers, or a small working example that can simply connect to the database in this way, I would be very grateful.
Many thanks
Ben
That's what I found too: there is precious little info out there on how to do this programmatically. I finally figured it out by experimenting. It seems you have your sqlnet.ora and tnsnames.ora files set up correctly, so all you need to do is modify your code for attaching to the server and starting the session.
When attaching to the server, your dblink text string should be the connect string in tnsnames.ora for your Oracle wallet entry; in your case, "XE".
OCIServerAttach (OCIServer *srvhp,
OCIError *errhp,
CONST text *dblink,
sb4 dblink_len,
ub4 mode )
When beginning your session, credt should be set to OCI_CRED_EXT. This validates the credentials externally, and since SQLNET.WALLET_OVERRIDE = TRUE is in sqlnet.ora, the Oracle wallet is used to validate the connect string. Also, with credt set to OCI_CRED_EXT the username and password session attributes are ignored.
OCISessionBegin (OCISvcCtx *svchp,
OCIError *errhp,
OCISession *usrhp,
ub4 credt,
ub4 mode );
That's it. I didn't use OCILogon or OCISessionPoolCreate in my code.
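To put the two calls together, a stripped-down outline of the attach/begin sequence could look like the following. Treat it as a sketch of the steps described above (error checking omitted), not a verified drop-in implementation:

#include <string.h>
#include <oci.h>

int connect_with_wallet(void)
{
    OCIEnv     *envhp = NULL;
    OCIError   *errhp = NULL;
    OCIServer  *srvhp = NULL;
    OCISvcCtx  *svchp = NULL;
    OCISession *usrhp = NULL;

    OCIEnvCreate(&envhp, OCI_DEFAULT, NULL, NULL, NULL, NULL, 0, NULL);
    OCIHandleAlloc(envhp, (void **)&errhp, OCI_HTYPE_ERROR,   0, NULL);
    OCIHandleAlloc(envhp, (void **)&srvhp, OCI_HTYPE_SERVER,  0, NULL);
    OCIHandleAlloc(envhp, (void **)&svchp, OCI_HTYPE_SVCCTX,  0, NULL);
    OCIHandleAlloc(envhp, (void **)&usrhp, OCI_HTYPE_SESSION, 0, NULL);

    /* dblink is the tnsnames.ora alias the wallet credentials were stored for */
    OCIServerAttach(srvhp, errhp, (const text *)"XE", (sb4)strlen("XE"), OCI_DEFAULT);
    OCIAttrSet(svchp, OCI_HTYPE_SVCCTX, srvhp, 0, OCI_ATTR_SERVER, errhp);

    /* OCI_CRED_EXT: external (wallet) authentication, no username/password attributes */
    OCISessionBegin(svchp, errhp, usrhp, OCI_CRED_EXT, OCI_DEFAULT);
    OCIAttrSet(svchp, OCI_HTYPE_SVCCTX, usrhp, 0, OCI_ATTR_SESSION, errhp);

    return 0;
}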
Good luck,
David M.
I'm trying to use the SSLSniff tool, and I have some technical issues. I've been looking for similar problems, but the only results are from Twitter feeds, with no useful public answer. So, here it is:
(My version of SSLSniff is 0.8.) I'm launching sslsniff with these arguments:
sslsniff -a -c cert_and_key.pem -s 12345 -w out.log
where cert_and_key.pem is my authority's certificate concatenated with my unencrypted private key (in PEM format, of course), and 12345 is the port to which I redirect traffic with my iptables rule.
So sslsniff is running correctly:
INFO sslsniff : Certificate ready: [...]
[And any time I connect with a client, the following two lines appear:]
DEBUG sslsniff : SSL Accept Failed!
DEBUG sslsniff : Got exception: Error with SSL connection.
On the client side, I've registered my CA as a trusted authority (in Firefox). Then when I connect over SSL I get the error:
Secure Connection Failed.
Error code: ssl_error_bad_cert_domain
What is really strange (beyond the fact that the certificate is not automatically accepted, even though it should be signed by my trusted CA) is that I cannot accept the forged certificate by clicking "Add exception...": I am always returned to the error page asking me to add an(other) exception...
Moreover, when I try to connect to, for example, https://www.google.com, a new line is added to SSLSniff's log:
DEBUG sslsniff : Encoded Length: 7064 too big for session cache, skipping...
Does anyone know what I'm doing wrong?
-- Edit to sum up the different answers --
The problem is that SSLSniff does not handle alternative names when it forges certificates. Apparently, Firefox refuses any certificate as soon as the Common Name doesn't exactly match the domain name.
For example, for google.com: CN = www.google.com and there is no alternative name. So when you connect to https://www.google.com, it works fine.
But for google.fr: CN = *.google.fr, with these alternative names: *.google.fr and google.fr. So when you connect to https://www.google.fr, FF looks for alternative names in the forged certificate and, since it obviously doesn't find any, refuses the malformed certificate.
... So a solution would be to patch it and submit the change... I don't know whether Moxie Marlinspike intentionally left this functionality out because it was too complicated, or was simply not aware of the issue. Anyway, I'll try to have a look at the code.
About the "Encoded Length ... too big for session cache" message: when caching the SSL session fails, SSL session resumption on subsequent connections will fail too, resulting in degraded performance, because a full SSL handshake has to be done on every request. However, despite the heavier CPU use, sslsniff will still work fine. The caching fails because the serialized representation of the OpenSSL session object (SSL_SESSION) was larger than the maximum size supported by sslsniff's session cache.
As for your real problem, note that sslsniff does not support X.509v3 subjectAltNames, so if you are connecting to a site whose hostname does not match the subject common name of the certificate, but instead matches only a subjectAltName, then sslsniff will generate a forged certificate without subjectAltNames, which will cause a hostname verification mismatch on the connecting client.
If your problem happens only for some specific sites, let us know the site so we can examine the server certificate using e.g. openssl s_client -connect host:port -showcerts and openssl x509 -in servercert.pem -text. If it happens for all sites, then the above is not the explanation.
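Alongside the openssl command-line inspection, the subjectAltName entries of the real server certificate can also be listed programmatically. The helper below is just an illustrative sketch using plain OpenSSL (the function name is mine):

#include <stdio.h>
#include <openssl/x509v3.h>

/* Print the DNS subjectAltName entries of a certificate, if it has any. */
void print_dns_sans(X509 *cert)
{
    GENERAL_NAMES *sans = X509_get_ext_d2i(cert, NID_subject_alt_name, NULL, NULL);
    if (sans == NULL)
        return;  /* no subjectAltName extension at all */

    for (int i = 0; i < sk_GENERAL_NAME_num(sans); i++) {
        GENERAL_NAME *gn = sk_GENERAL_NAME_value(sans, i);
        if (gn->type == GEN_DNS)
            printf("DNS: %s\n", ASN1_STRING_get0_data(gn->d.dNSName));
    }
    GENERAL_NAMES_free(sans);
}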
Try a straight MITM with a cert you fully control, and make sure you don't have some OCSP/Perspectives/Convergence stuff meddling with things. Other than that, maybe add the cert to the OS trusted roots; I think FF on Windows uses the Windows cert store (Start -> Run -> certmgr.msc). It may also be worth trying something like Burp to see whether the error is specific to SSLSniff or affects all MITM attempts.