How to rename container name in cryptoki - c++

I have written some code that generates a public/private key pair on a token. From the key pair I create a PKCS#10 request and later generate a certificate file from it, which is then written to the token. Everything runs successfully, but somehow the certificate cannot be read by CAPI or Internet Explorer. If I import a p12 file instead, it works without a fuss. I suspect that CKA_LABEL and CKA_ID are the culprits here: with the p12, everything follows the same naming convention, from the container to the public key, private key, and certificate, whereas with my method the container name looks auto-generated. How can I make it match CKA_ID? Below is the code I use to generate the key pair that ends up in the container.
rv = g_pFunctionList->C_GenerateKeyPair(hSession,
&ck_gen_ecc,
tPubKey, sizeof(tPubKey) / sizeof(CK_ATTRIBUTE),
tPrvKey, sizeof(tPrvKey) / sizeof(CK_ATTRIBUTE),
&pkcs11_hPubKey, &pkcs11_hPrvKey);
The key pair ends up under a container name like
cont_4440xxxxxxxx
How can I change the container name so that it exactly matches CKA_ID? Can anyone help?

If your cryptoki library allows it, you can rename the objects by setting new attribute values on them with the C_SetAttributeValue function.
In your case it could look like this:
CK_ATTRIBUTE atAttr[2];
atAttr[0].type = CKA_LABEL;
atAttr[0].pValue = pLabelValue; // <-- pass here new Label value pointer
atAttr[0].ulValueLen = ulLabelLen; // <-- pass here new Label length
atAttr[1].type = CKA_ID;
atAttr[1].pValue = pIDValue; // <-- pass here new ID value pointer
atAttr[1].ulValueLen = ulIDLen; // <-- pass here new ID length
rv = g_pFunctionList->C_SetAttributeValue(hSession, pkcs11_hPubKey, atAttr, 2);
rv = g_pFunctionList->C_SetAttributeValue(hSession, pkcs11_hPrvKey, atAttr, 2);
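Alternatively, if the key objects are created with the desired CKA_ID and CKA_LABEL in the first place, no renaming is needed afterwards. A minimal sketch of the generation templates, assuming the ID bytes and label string below are replaced with your own values and the remaining attributes are whatever your existing tPubKey/tPrvKey templates already contain:
CK_BYTE abKeyId[] = { 0x01, 0x02, 0x03, 0x04 };   // example CKA_ID, use your own bytes
CK_UTF8CHAR szLabel[] = "MyKeyPair";              // example CKA_LABEL, use your own label

CK_ATTRIBUTE tPubKey[] = {
    { CKA_ID,    abKeyId, sizeof(abKeyId) },
    { CKA_LABEL, szLabel, sizeof(szLabel) - 1 },
    // ... the rest of your existing public key attributes (CKA_TOKEN, CKA_VERIFY, EC params, ...)
};
CK_ATTRIBUTE tPrvKey[] = {
    { CKA_ID,    abKeyId, sizeof(abKeyId) },
    { CKA_LABEL, szLabel, sizeof(szLabel) - 1 },
    // ... the rest of your existing private key attributes (CKA_TOKEN, CKA_SIGN, CKA_SENSITIVE, ...)
};
Also make sure the certificate object you later write to the token carries the same CKA_ID (and ideally the same CKA_LABEL) as the key pair, so that CAPI/Internet Explorer can match the certificate to its private key; how the middleware derives the visible container name from these attributes depends on the particular CSP/minidriver.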

Related

Value is not a member of Object when trying to pass dynamic variable

I've got the following code
val region = envArgs("region")
// create a client object of class AWSSimpleSystemsManagementClient
val client: AWSSimpleSystemsManagementClient = new AWSSimpleSystemsManagementClient().withRegion(Regions.region)
region here is a dynamic variable that's passed in the arguments of the Glue job. However, with this code I get the following error:
value region is not a member of object com.amazonaws.regions.Regions
val client: AWSSimpleSystemsManagementClient = new AWSSimpleSystemsManagementClient().withRegion(Regions.region)
Obviously it's just looking for a member literally named "region" on Regions; how can I force it to use the value of the variable instead?
A previous poster deleted their answer, but I was able to get this working with the following
val client: AWSSimpleSystemsManagementClient = new AWSSimpleSystemsManagementClient().withRegion(Regions.getCurrentRegion)
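If you need the client to use the region string that is actually passed to the job, rather than the region the code happens to be running in, the SDK can also resolve the enum from its name. A minimal sketch, assuming region holds a valid region name such as "us-east-1":
// Resolve the Regions enum value from the string passed in the job arguments
val client: AWSSimpleSystemsManagementClient =
  new AWSSimpleSystemsManagementClient().withRegion(Regions.fromName(region))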

Terraform: putting variables in a tfvars file does not work

I defined a map variable my_role in Terraform and set its value in the abc.tfvars file as follows. If I assign the account id as a literal value, it works; if I build it from a variable, it does not. Does that mean a tfvars file only allows literal values, not variables or interpolations? By the way, I use Terraform workspaces, so my_role differs depending on the workspace I select.
The following works:
my_role = {
  dev  = "arn:aws:iam::123456789012:role/myRole"
  test = ...
  prod = ...
}
The following does not work:
my_role = {
  dev  = "arn:aws:iam::${lookup(var.aws_account_id, terraform.workspace)}:role/myRole"
  test = ...
  prod = ...
}
The following does not work either:
lambdarole = {
  dev  = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/myRole"
  test = ...
  prod = ...
}
Have you tried following the example on Input Variables?
You can declare the variable in a .tf file (rather than in abc.tfvars) with:
variable "my_role" {
  type = "map"
  default = {
    "dev"  = "arn:aws:iam::123456789012:role/myRole"
    "test" = "..."
    "prod" = "..."
  }
}
And access it with "${lookup(var.my_role, terraform.workspace)}".
Also, according to the documentation, variables defined in a .tfvars file are loaded automatically only if the file is named terraform.tfvars; otherwise you have to pass it explicitly with -var-file=...
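For example, with the file name from this question:
terraform apply -var-file="abc.tfvars"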
I cannot test it right now, but it is probably something along these lines.
I am replying at a time when Terraform 0.12 is the latest version. The solution is simple: create a file, say vars.tf, and declare the variables in it.
Example: variable "xyz" {}
Now create terraform.tfvars and initialize the variable in it.
Example: xyz = "abcd"
There is no need to pass any runtime arguments; it will be picked up automatically.
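Put together, a minimal sketch of the two files described above (using the same example names):
# vars.tf
variable "xyz" {}

# terraform.tfvars
xyz = "abcd"
Because the file is named terraform.tfvars it is loaded automatically, so no -var-file argument is needed.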
Terraform has the aws_caller_identity data source. You do not need to mention or hard-code the account id anywhere; it can be fetched using this data source.
In any of your .tf files, just include this data source and you can then reference the relevant attribute value.
This is how you can do it. Define this in any *.tf file:
data "aws_caller_identity" "current" {}
Now, wherever you want the value of the ARN or account id, it can be fetched using:
For the account id (for Terraform 0.12):
data.aws_caller_identity.current.account_id
In your case, it would be like this:
dev = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/myRole"
But in order for this to work, you need to define the data source as shown above in some *.tf file.
For more help, refer to the following:
URL: https://www.terraform.io/docs/providers/aws/d/caller_identity.html
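Putting this together with the workspace-based map from the question: interpolation is not allowed in a .tfvars file, but it is allowed in a locals block, so one way to avoid hard-coding the account id entirely is something like the following minimal sketch (the role names are just examples):
data "aws_caller_identity" "current" {}

locals {
  my_role = {
    dev  = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/myRole"
    test = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/myTestRole"
    prod = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/myProdRole"
  }
}

# and then reference it elsewhere as:
# local.my_role[terraform.workspace]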

WMI GetStringValue Returns Null for added registry values

I'm going a little crazy here. I am trying to read a registry string value I created manually, and every time it returns null. If I read an existing registry string value, the value comes back just fine. Here is the code I'm using:
ManagementBaseObject rinParams = registry.GetMethodParameters("GetStringValue");
rinParams["hDefKey"] = HKEY_LOCAL_MACHINE;
rinParams["sSubKeyName"] = @"SOFTWARE\Microsoft\.NetFramework";
// rinParams["sValueName"] = "InstallRoot";
rinParams["sValueName"] = "test";
ManagementBaseObject routParams = registry.InvokeMethod("GetStringValue", rinParams, null);
if (routParams.Properties["sValue"].Value != null)
{
    routput = routParams.Properties["sValue"].Value.ToString();
}
What I did was use the code above to read the value of "InstallRoot" under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NetFramework, which is a REG_SZ value. The read succeeds and the directory path is returned just fine. I then created another REG_SZ value named "test" in the same .NetFramework key, commented out the "sValueName" = "InstallRoot" line, and replaced it with a line for "sValueName" = "test". When I run this code under the debugger, I get a return value of null.
I've also tried adding keys under SOFTWARE and values in those keys, like SOFTWARE\Testkey, and I get the same result every time: it will not let me read a key I create (it always returns null), but it seems I can read any key that already existed. I don't understand why I'm getting this result.
FYI, I am doing a remote registry read from a Windows 7 PC. My ultimate goal is to add registry keys and values; when I try that I always get a "Not Found" exception, so I'm trying to get down to the simplest thing possible, which is reading a value, and even that is not working. Any ideas about what might be going on here?
This is how I'm making the remote connection to the Windows 2008r2 server to read the registry:
ConnectionOptions connectoptions = new ConnectionOptions();
connectoptions.Impersonation = ImpersonationLevel.Impersonate;
connectoptions.Authentication = AuthenticationLevel.PacketPrivacy;
connectoptions.Username = "administrator";
connectoptions.Password = **password redacted**
ManagementScope scope = new ManagementScope(@"\\" + systemIP + @"\root\default");
scope.Options = connectoptions;
ManagementClass registry = new ManagementClass(scope, new ManagementPath("StdRegProv"), null);

Why does WebSharingAppDemo-CEProviderEndToEnd sample still need a client db connection after scope creation to perform sync

I'm researching a way to build an n-tiered sync solution. From the WebSharingAppDemo-CEProviderEndToEnd sample it seems almost feasible; however, for some reason the app will only sync if the client has a live SQL database connection. Can someone explain what I'm missing and how to sync without exposing SQL to the internet?
The problem I'm experiencing is that when I provide a relational sync provider that has an open SQL connection from the client, it works fine, but when I provide a relational sync provider that has a closed but configured connection string, as in the example, I get an error from the WCF service stating that the server did not receive the batch file. So what am I doing wrong?
SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder();
builder.DataSource = hostName;
builder.IntegratedSecurity = true;
builder.InitialCatalog = "mydbname";
builder.ConnectTimeout = 1;
provider.Connection = new SqlConnection(builder.ToString());
// provider.Connection.Open();   // un-commenting this causes the code to work
//create a new scope description and add the appropriate tables to this scope
DbSyncScopeDescription scopeDesc = new DbSyncScopeDescription(SyncUtils.ScopeName);
//class to be used to provision the scope defined above
SqlSyncScopeProvisioning serverConfig = new SqlSyncScopeProvisioning();
....
The error I get occurs in this part of the WCF code:
public SyncSessionStatistics ApplyChanges(ConflictResolutionPolicy resolutionPolicy, ChangeBatch sourceChanges, object changeData)
{
    Log("ProcessChangeBatch: {0}", this.peerProvider.Connection.ConnectionString);
    DbSyncContext dataRetriever = changeData as DbSyncContext;
    if (dataRetriever != null && dataRetriever.IsDataBatched)
    {
        string remotePeerId = dataRetriever.MadeWithKnowledge.ReplicaId.ToString();
        //Data is batched. The client should have uploaded this file to us prior to calling ApplyChanges.
        //So look for it.
        //The Id would be the DbSyncContext.BatchFileName which is just the batch file name without the complete path
        string localBatchFileName = null;
        if (!this.batchIdToFileMapper.TryGetValue(dataRetriever.BatchFileName, out localBatchFileName))
        {
            //Service has not received this file. Throw exception
            throw new FaultException<WebSyncFaultException>(new WebSyncFaultException("No batch file uploaded for id " + dataRetriever.BatchFileName, null));
        }
        dataRetriever.BatchFileName = localBatchFileName;
    }
Any ideas?
For the "batch file not available" issue, remove the IsOneWay=true setting from IRelationalSyncContract.UploadBatchFile. When the batch file is big, ApplyChanges can otherwise be called before the previous UploadBatchFile has fully completed.
// Replace
[OperationContract(IsOneWay = true)]
// with
[OperationContract]
void UploadBatchFile(string batchFileid, byte[] batchFile, string remotePeerId);
I suppose it's simply a simplistic example: it demonstrates "some" of the technique but assumes you will arrange it in the proper order yourself.
http://msdn.microsoft.com/en-us/library/cc807255.aspx

Add permission levels to sharepoint list using Web Services

I need to add permission levels to a list, such as: Full Control, Contribute, Manage Hierarchy, View Only, etc.
I see here, in "programatically add user permission to a list in sharepoint", that this can be done using the Object Model. How would I do the same using Web Services?
I tried using the Permissions.asmx web service; it works for some of them when I supply the correct mask, like 1011028991 for Approve and 138612833 for Read, but it doesn't work for others like Manage Hierarchy, Restricted Read, or any other user-created role (permission level). Instead of the correct name I get: Auto-generated Permission Level edda2384-2672-4e24-8f31-071d61a8c303
Any help will be appreciated.
OK, here is a code example. To obtain the mask I based this code on the one from this forum.
string sPermissionName = "Manage Hierarchy"; // if I use Read, Approve, Contribute, Full Control, it works!
string sListName = "testList";
string sGroupName = string.Format("{0}_ManageHierarchy", sListName);
// Create an aux group just to obtain the mask number later
using (SPUserGroup.UserGroup ug = new SPUserGroup.UserGroup())
{
    ug.Credentials = new NetworkCredential("user", "password");
    ug.Url = "http://testSite/_vti_bin/UserGroup.asmx";
    ug.AddGroup(sGroupName, "testDomain\\user", "user", "testDomain\\user", "Manage Hierarchy test");
    ug.AddGroupToRole(sPermissionName, sGroupName);
}
using (SPPermissions.Permissions per = new SPPermissions.Permissions())
{
    per.Credentials = new NetworkCredential("user", "password");
    per.Url = "http://testSite/_vti_bin/Permissions.asmx";
    XmlNode perms = per.GetPermissionCollection(sListName, "list");
    XmlNode n = perms.SelectSingleNode(string.Format("/*[local-name()='Permissions']/*[local-name()='Permission' " +
        "and @MemberIsUser='False' and @GroupName='{0}']", sGroupName));
    // Here we get the Mask for the role
    int iMask = int.Parse(n.Attributes["Mask"].Value);
    Console.WriteLine("The mask is:{0}", iMask); // Just to see the mask; I get 2129075183 for Manage Hierarchy
    // Here I want to add some user to the list with the specified permission level
    // But I get for example: Auto-generated Permission Level edda2384-2672-4e24-8f31-071d61a8c303
    // Also, if I later execute GetPermissionCollection, I see that the mask they got is 2129075199 and not what I passed, which was 2129075183
    per.AddPermission(sListName, "list", "testDomain\\user01", "user", iMask);
    per.AddPermission(sListName, "list", "testDomain\\user02", "user", iMask);
}