ECDSA key and CSR with phpseclib

Does anyone have any information on this, or know if it's supported by phpseclib? Or if it's in the pipeline?
I am able to generate a key, CSR and public key with RSA, but would love to be able to do so with ECDSA, as that is pretty much the direction encryption is headed. More and more Certificate Authorities are starting to adopt it, and it is supported by both Windows and Apache servers.
Thanks!

Unfortunately, this is not currently possible. phpseclib only supports RSA at the moment, although it sounds like ECDSA is in the (long) pipeline:
https://github.com/phpseclib/phpseclib/issues/37#issuecomment-11647092

phpseclib has a branch that supports ECDSA keys; I needed to download this specific branch with Composer.
https://github.com/phpseclib/phpseclib/issues/1519
composer require phpseclib/phpseclib:dev-master

include_once 'vendor/autoload.php';

use phpseclib3\File\X509;
use phpseclib3\Crypt\PublicKeyLoader;

// Load the private key; pass the password as the second argument if the key is encrypted.
$privKey = PublicKeyLoader::load(file_get_contents('/path/to/key.pem'), $password = false);

// Build a CSR, set the subject DN and sign it with the key.
$x509 = new X509();
$x509->setPrivateKey($privKey);
$x509->setDNProp('id-at-organizationName', 'phpseclib demo cert');

$csr = $x509->signCSR();
echo $x509->saveCSR($csr);
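To generate the ECDSA key itself (rather than loading an existing one), the same branch exposes an EC class; a minimal sketch, assuming the EC::createKey() API and the secp256r1 curve name from that branch:

use phpseclib3\Crypt\EC;

// Generate an ECDSA key on the P-256 curve and save it in PKCS#8 PEM form.
$ecKey = EC::createKey('secp256r1');
file_put_contents('/path/to/key.pem', $ecKey->toString('PKCS8'));
echo $ecKey->getPublicKey()->toString('PKCS8');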

Related

OpenSSL 3.0.2 PKCS12_parse Failure

We're migrating to OpenSSL 3.0.2 and are currently experiencing connection issues between a 3.0.2 server and a 1.1.1g client.
According to the logs collected, we seem to be having an issue with the loading of the legacy providers.
We are loading both the default and legacy providers programmatically, as per the steps outlined in the OpenSSL 3.0 wiki (6.2 Providers), without issue.
We are seeing the following error:
error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto\evp\evp_fetch.c:346:Global default library context, Algorithm (RC2-40-CBC : 0), Properties ()
PKCS12_parse() failed = 183. (The 183 error code is obtained using GetLastError from errhandlingapi.h.)
Worth mentioning that we only see this issue when the server is a Windows 2012 server.
Both default and legacy providers are loaded without issue at start.
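For reference, "loading both the default and legacy providers programmatically" normally means something like the sketch below, using the OpenSSL 3.0 OSSL_PROVIDER API; this is an illustration of those wiki steps, not the poster's actual code:

#include <openssl/provider.h>
#include <openssl/err.h>

/* Load both providers into the default library context. With only the
   default provider loaded, legacy algorithms such as RC2-40-CBC cannot be
   fetched and PKCS12_parse() fails with the error shown above. */
int load_providers(void)
{
    OSSL_PROVIDER *legacy = OSSL_PROVIDER_load(NULL, "legacy");
    OSSL_PROVIDER *deflt = OSSL_PROVIDER_load(NULL, "default");

    if (legacy == NULL || deflt == NULL) {
        /* For the legacy load to succeed, OPENSSL_MODULES must point at the
           directory that contains legacy.dll (or legacy.so). */
        ERR_print_errors_fp(stderr);
        return 0;
    }
    return 1;
}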
As pointed out by Liam, OpenSSL 3.0 does not support the legacy algorithm RC2-40-CBC by default.
Fortunately, the legacy library is included in the bin folder in my distribution (https://slproweb.com/products/Win32OpenSSL.html ; full list https://wiki.openssl.org/index.php/Binaries).
So my steps to resolve this were:
1. Set the OPENSSL_MODULES variable
2. Add the -legacy option
Setting the OPENSSL_MODULES variable:
SET OPENSSL_MODULES=C:\Program Files\OpenSSL-Win64\bin
Without -legacy option:
D:\sources\en.Resilience_Temy\config\certificates>openssl pkcs12 -in server.p12 -out saxserver.crt
Enter Import Password:
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
Error outputting keys and certificates
58630000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto\evp\evp_fetch.c:349:Global default library context, Algorithm (RC2-40-CBC : 0), Properties ()
With -legacy option:
D:\sources\en.Resilience_Temy\config\certificates>openssl pkcs12 -in server.p12 -out server.crt -legacy
Enter Import Password:
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
And file is generated successfully.
This is likely due to RC2-40-CBC (more specifically, RC2) being disabled, either by AD policy or by the OpenSSL library, because it is deemed too weak a cipher.
A very simple way to test this:
Open WordPad and create a file helloworld.txt containing "hello world"
Run openssl enc -e -rc2-40-cbc -k 1234567890 -in helloworld.txt -out helloworld.enc
If this generates an "unsupported" error, that is stronger evidence that RC2-40 is disabled. From there you can either:
Determine if AD or the OpenSSL library (compile flags) is blocking its use and enable it
Use stronger ciphers in your P12 generation (suggested route; see the example re-export command below)
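For the "stronger ciphers" route: OpenSSL 1.1.1's pkcs12 -export still defaults to RC2-40-CBC for the certificate bags, so re-exporting the P12 with modern algorithms avoids the legacy provider entirely. A sketch with placeholder file names:
openssl pkcs12 -export -in server.crt -inkey server.key -out server-new.p12 -certpbe AES-256-CBC -keypbe AES-256-CBC -macalg SHA256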

How can I programmatically make an SSL client trust its own keystore (.p12) file using SSL context?

1. I am trying to achieve in the gRPC C++ client the same functionality as in the Java client: trusting its own keystore (P12 file).
2. I didn't find any good documentation related to it. I am new to using SSLContext. Any pointers will be really helpful.
3. It can easily be achieved in Java using the trust manager as in the code below:
KeyStore keyStore = KeyStore.getInstance("JKS");
try (InputStream keyStream = new FileInputStream(keystorePATH)) {
    keyStore.load(keyStream, password.toCharArray());
}
KeyManagerFactory kmf =
    KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
kmf.init(keyStore, password.toCharArray());
javax.net.ssl.TrustManagerFactory tmf =
    javax.net.ssl.TrustManagerFactory.getInstance(
        javax.net.ssl.TrustManagerFactory.getDefaultAlgorithm());
tmf.init(keyStore);
final SslContextBuilder ctxBuilder =
    SslContextBuilder.forClient().keyManager(kmf).trustManager(tmf);
return GrpcSslContexts.configure(ctxBuilder).build();
Here we are using the trust manager with our own JKS keystore instance. That way we are not explicitly passing the certificate that needs to be trusted. Can something similar be achieved programmatically for a gRPC C++ client?
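gRPC C++ has no direct equivalent of loading a P12/JKS keystore; a common approach (an assumption here, not something confirmed in this thread) is to export the keystore to PEM files with openssl pkcs12 and pass the PEM contents to grpc::SslCredentialsOptions. A minimal sketch with placeholder file names:

#include <fstream>
#include <memory>
#include <sstream>
#include <string>
#include <grpcpp/grpcpp.h>

// Read a whole PEM file into a string.
static std::string ReadFile(const std::string& path) {
    std::ifstream in(path);
    std::stringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

std::shared_ptr<grpc::Channel> MakeTlsChannel() {
    grpc::SslCredentialsOptions opts;
    opts.pem_root_certs = ReadFile("ca.pem");           // roots to trust (the "trust store")
    opts.pem_private_key = ReadFile("client-key.pem");  // client identity, only needed for mutual TLS
    opts.pem_cert_chain = ReadFile("client-cert.pem");
    return grpc::CreateChannel("server.example.com:443",
                               grpc::SslCredentials(opts));
}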

How to resolve Unable to find valid certification path to requested target for AWS Java Application?

My network is behind a ZScaler proxy. I have installed the AWS CLI. I have added all the Amazon root CA certificates along with the ZScaler root CA certificate in a .pem file. I have set up AWS_CA_BUNDLE, and my AWS CLI command for fetching a secret from Secrets Manager worked.
But when, on the same machine, I try to fetch Secrets Manager secrets using the AWS SDK, it throws the exception: Unable to find valid certification path to requested target.
Can someone guide me what needs to be done?
Below is the source code
public class AwsSecretManager {

    public static AWSSecretManagerPojo getRedshiftCredentialsFromSecretManager(String secretName) throws JsonUtilityException, AwsSecretException {
        String secret = getSecret(secretName);
        // Gaurav added this.
        System.out.println("secret \n" + secret);
        if (!StringUtility.isNullOrEmpty(secret)) {
            AWSSecretManagerPojo AWSSecretManagerPojo = GsonUtility.getInstance().fromJson(secret, AWSSecretManagerPojo.class,
                    EdelweissConstant.GSON_TAG);
            return AWSSecretManagerPojo;
        } else {
            throw new AwsSecretException("unable to get redshift credentials from aws secret manager");
        }
    }

    private static String getSecret(String secretName) {
        // Gaurav commented below and manually supplied the secret as SSL issue is there.
        String region = EdelweissConstant.AWS_SECRET_MANAGER_REGION;
        AWSSecretsManager client = AWSSecretsManagerClientBuilder.standard()
                .withRegion(region)
                .build();

        String secret = null;
        GetSecretValueRequest getSecretValueRequest = new GetSecretValueRequest()
                .withSecretId(secretName);
        GetSecretValueResult getSecretValueResult = client.getSecretValue(getSecretValueRequest);
        if (getSecretValueResult.getSecretString() != null) {
            secret = getSecretValueResult.getSecretString();
        }
        return secret;
    }
}
This question is a bit old, but the other answers to this topic were all programmatic and on a per-application basis, which I didn't like since I wanted this to work for all my applications. To anyone stumbling upon this, these are the steps I took to fix this issue.
Download the AWS Root CAs from here. CA1 was good enough for me, but you may need others as well. Java requires the .der format one. You can also get the .pem file, but you'll need to convert it to .der. Note that the certificate from Amazon had a .cer extension, but it's still compatible.
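If only the .pem is available, the conversion mentioned above can be done with openssl, for example:
openssl x509 -in AmazonRootCA1.pem -inform pem -outform der -out AmazonRootCA1.der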
Verify that the certificate is readable by the Java keytool:
$ keytool -v -printcert -file AmazonRootCA1.cer
Certificate fingerprints:
SHA1: <...>
SHA256: <...>
Optionally, verify the public key hash if you'd like (outside the scope of this answer, have a look here if you want).
Import the certificate file to your Java keystore. You can do this for your entire JRE by using the keytool command as follows and answering yes when prompted:
keytool -importcert -alias <some_name> -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit -file AmazonRootCA1.cer
If you've changed the default Java keystore pass (changeit), you'll obviously need to use that one instead.
After this your application should be able to connect to AWS without any certificate issues.
I removed all the Amazon certificates and just added the ZScaler root certificate, and that resolved the issue.

AWS root pinning error CURLE_SSL_PINNEDPUBKEYNOTMATCH

I am using libcurl and shifting certificate pinning to the AWS roots as per this document: https://www.amazontrust.com/repository/
I used the SHA-256 hash of the Subject Public Key Information data from that website and formed a string:
static string PUBLIC_KEY = "sha256//fbe3018031f9586bcbf41727e417b7d1c45c2f47f93be372a17b96b50757d5a2;sha256//7f4296fc5b6a4e3b35d3c369623e364ab1af381d8fa7121533c9d6c633ea2461;sha256//36abc32656acfc645c61b71613c4bf21c787f5cabbee48348d58597803d7abc9;sha256//f7ecded5c66047d28ed6466b543c40e0743abe81d109254dcf845d4c2c7853c5;sha256//2b071c59a0a0ae76b0eadb2bad23bad4580b69c3601b630c2eaf0613afa83f92";
and set the string to curl
curl_easy_setopt(handle, CURLOPT_PINNEDPUBLICKEY, PUBLIC_KEY.c_str());
The curl error I get is CURLE_SSL_PINNEDPUBKEYNOTMATCH
Googling did not turn up any insight into why, as far as I searched. If anyone has any input on how to fix this and still pin to the root, it would be super useful. Thanks.
Found the reason. Root pinning is not supported yet.
13.11 Support intermediate & root pinning for PINNEDPUBLICKEY
CURLOPT_PINNEDPUBLICKEY does not consider the hashes of intermediate & root certificates when comparing the pinned keys. Therefore it is not compatible with "HTTP Public Key Pinning" as there also intermediate and root certificates can be pinned. This is very useful as it prevents webadmins from "locking themself out of their servers".
Adding this feature would make curls pinning 100% compatible to HPKP and allow more flexible pinning.
https://curl.haxx.se/docs/todo.html
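Until that TODO item is implemented, one possible workaround (not part of the quoted answer) is to pin the server's leaf public key, which CURLOPT_PINNEDPUBLICKEY does check. The sha256// value must be the base64 of the binary SHA-256 of the DER-encoded Subject Public Key Info; on a unix-like shell it can be computed roughly like this (example.com is a placeholder):
openssl s_client -connect example.com:443 < /dev/null 2>/dev/null | openssl x509 -pubkey -noout | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | openssl enc -base64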

TLS Upgrade

public bool CheckValidationResult(ServicePoint sp,
    System.Security.Cryptography.X509Certificates.X509Certificate cert, WebRequest req, int problem)
{
    return true;
}
Here, I am in a position where the web service can only be accessed using TLS 1.2.
After adding the lines below before the return true in the method, I am able to access the service and get the response.
Is this the correct way to handle this, or is there another way?
Please help, anyone who has experience with this.
Thanks in advance.
// Protocol versions combined with the OR operator: SSL3, TLS 1.0, TLS 1.1 and TLS 1.2 are all accepted
sp.Expect100Continue = true;
ServicePointManager.Expect100Continue = true;
ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3 | SecurityProtocolType.Tls11 | SecurityProtocolType.Tls | SecurityProtocolType.Tls12;
No, that is not the correct way! I am aware that hundreds of developers do this (return true for certificate validation), but their communication is no longer secure and is subject to man-in-the-middle and replay attacks, which are the most common SSL attacks known.
The truth is you need to provide a "comprehensive implementation" of certificate validation, which is based on several RFCs (search for: X.509 certificate validation). At a minimum you need to check the certificate's validity period (dates), validate the subject's signature hash using the CA's public key, and check the CA's CRL revocations (ensure the server certificate has not been revoked). However ...
To keep things simple without degrading TLS's integrity, you could simply check the "problem" value (int problem), then return true or false based on that value. The OS should set the problem value based on internal OS security policy. Personally, I would still prefer the "comprehensive implementation", because OS security policy can be weak, outdated or corrupt (hacked). Let me know if you need more details on the "comprehensive implementation".
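To make that suggestion concrete, here is a minimal sketch that only accepts certificates the framework validates cleanly, using the modern ServerCertificateValidationCallback rather than the legacy ICertificatePolicy method shown above (a sketch, not the answerer's code):

using System.Net;
using System.Net.Security;
using System.Security.Cryptography.X509Certificates;

// Require TLS 1.2 only, and accept the server certificate only when the
// framework reports no policy errors (valid chain, matching host name, etc.).
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
ServicePointManager.ServerCertificateValidationCallback =
    (object sender, X509Certificate cert, X509Chain chain, SslPolicyErrors errors) =>
        errors == SslPolicyErrors.None;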