How to get a shared access signature for an Azure container in C++

I want to use the C++ Azure API to generate a Shared Access Signature for a container on Azure and get the access string, but I cannot find any good example. Almost all examples are in C#. I only found this: https://learn.microsoft.com/en-us/azure/storage/files/storage-c-plus-plus-how-to-use-files
Here is what I did:
// Retrieve a reference to a previously created container.
azure::storage::cloud_blob_container container = blob_client.get_container_reference(s2ws(eventID));
// Create the container if it doesn't already exist.
container.create_if_not_exists();
// Get the current permissions for the event.
auto blobPermissions = container.download_permissions();
// Create and assign a policy
utility::string_t policy_name = s2ws("Signature" + eventID);
azure::storage::blob_shared_access_policy policy = azure::storage::blob_shared_access_policy();
// set expire date
policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_days(10));
// give read permission
policy.set_permissions(azure::storage::blob_shared_access_policy::permissions::read);
azure::storage::shared_access_policies<azure::storage::blob_shared_access_policy> policies;
//add the new shared policy
policies.insert(std::make_pair(policy_name, policy));
blobPermissions.set_policies(policies);
blobPermissions.set_public_access(azure::storage::blob_container_public_access_type::off);
container.upload_permissions(blobPermissions);
auto token = container.get_shared_access_signature(policy, policy_name);
After running this, I can see the policy is successfully set on the container, but the token returned by the last line is not right, and there is always an exception when exiting this function; the breakpoint is located in _Deallocate().
Could someone tell me what's wrong with my code, or point me to some examples? Thank you very much.
Edited
The token I got looks like this:
"sv=2016-05-31&si=Signature11111122222222&sig=JDW33j1Gzv00REFfr8Xjz5kavH18wme8E7vZ%2FFqUj3Y%3D&spr=https%2Chttp&se=2027-09-09T05%3A54%3A29Z&sp=r&sr=c"
With this token, I couldn't access my blobs. The correct token created by "Microsoft Azure Storage Explorer" using this policy looks like this:
?sv=2016-05-31&si=Signature11111122222222&sr=c&sig=9tS91DUK7nkIlIFZDmdAdlNEfN2HYYbvhc10iimP1sk%3D
About the exception: I put all this code in a function. Without the last line, everything is okay, but with the last line added, an exception is thrown while exiting the function, saying a breakpoint was triggered. It stops at the last line of _Deallocate() in "C:\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.10.25017\include\xmemory0":
::operator delete(_Ptr);
I have no idea why this exception is being thrown or how to debug it, because it seems it cannot be caught by my code.
Edited
After changing the last line to
auto token = container.get_shared_access_signature(azure::storage::blob_shared_access_policy(), policy_name);
the returned token is correct and I can access my blobs with it. But the annoying exception is still there :-(
Edited
I just found that the exception only happens when building in Debug; in Release, everything is OK. So maybe it's related to the build environment.

When creating a Shared Access Signature (SAS), there are a few things you can set: SAS start/expiry, permissions, IP ACLs, protocol restrictions, etc. You can define these in an access policy on the blob container, create an ad-hoc SAS (i.e. without an access policy) with them, or combine the two to create a SAS token.
One key thing to keep in mind is that if something is defined in an access policy, you can't redefine it when creating a SAS. For example, if you create an access policy with just Read permission and nothing else, you can't provide any permissions when creating a SAS token that uses this access policy. You can certainly define the things which are not in the access policy (for example, a SAS expiry, if it is not defined in the access policy).
If you look at your code (before the edit), you are creating an access policy with some permissions and then creating a SAS token using both the same permissions and the access policy. That's why it did not work. When you created a SAS token from Microsoft's Storage Explorer, however, you will notice that it only included the access policy (si=Signature11111122222222) and none of the other parameters, and that's why it worked.
In your code after the edit, you did not include any permissions but only used the access policy (in a way, you did what Storage Explorer does), and that's why things worked after the edit.
I hope this explains the mystery behind not working/working SAS tokens.
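The same rule applies in any SDK. As a minimal cross-language sketch (Python, azure-storage-blob v12; the account name, container name and key below are placeholders, not values from the question), generating a SAS from a stored access policy means passing only the policy id and nothing the policy already defines:

from azure.storage.blob import generate_container_sas

# Permissions and expiry already live in the stored access policy, so we
# pass only the policy id and do not repeat those values ad hoc.
sas_token = generate_container_sas(
    account_name="myaccount",            # placeholder
    container_name="mycontainer",        # placeholder
    account_key="<account-key>",         # placeholder
    policy_id="Signature11111122222222",
)
print(sas_token)  # e.g. "si=Signature11111122222222&sr=c&sig=..."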

Related

GCP: Is it possible to have access to a resource without having project access?

This is my first experience with Google Cloud Platform and I'm confused.
I've been granted access to a resource:
xxx#gmail.com has granted you the following roles for resource resource_name (projects/project_name/datasets/ClientsExport/tables/resource_name): BigQuery Data Editor
But if I open the BigQuery console, I don't see project_name or resource_name. Searching for resource_name also returns no results.
This is the only access I have in the project (I didn't get any other access grants or emails).
Could you please help me with this? Do I maybe need some additional access before resource_name becomes available? Is there another way to find the resource?
Thank you in advance!
According to the message, you have access to BigQuery data inside a table. You can query it from your project; you are authorised to access it (and also to write to it, because you are an editor).
However, this table isn't in your project, it's in another project; that's why you don't see it directly in the BigQuery console. In addition, you don't have the right to read the metadata (roles/bigquery.metadataViewer) on the dataset of the other project, so you can't view the table schema in the console either, but the bq CLI allows you to view it.
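If the bq CLI is not an option, the same check can be scripted. A minimal sketch (Python, google-cloud-bigquery; the project, dataset and table names are the placeholders from the question):

from google.cloud import bigquery

# Run the client against YOUR OWN project -- that is where the query job
# runs and is billed; the table itself lives in the other project.
client = bigquery.Client(project="your_project")

# Table-level access is enough to fetch the schema programmatically,
# even though the console does not list the dataset.
table = client.get_table("project_name.ClientsExport.resource_name")
print([(field.name, field.field_type) for field in table.schema])

# Querying works with the fully qualified table name.
rows = client.query(
    "SELECT * FROM `project_name.ClientsExport.resource_name` LIMIT 10"
).result()
for row in rows:
    print(dict(row))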
I had some discussions with the Google BigQuery team about this (because I hit the same issue in my company), and updates should happen by the end of the year (or early in 2022) to fix this "view" issue in the console.
It looks like you have IAM permission to access a specific resource in BigQuery but cannot access it from the GUI. Some reasons you may not see it in the GUI:
You have permission to interact with BigQuery but don't have access to any of the data.
You aren't a member of the organization which provided the resources, and they have higher-level permissions (at the org level) which prevent sharing of resources outside the org.
Your access is restricted to the command line/app level. (If your account is a service account then this is likely the case.)

Unable to create AWS key pair using console

I tried to create a new AWS key pair, but the option to create one has disappeared.
Does anyone know why?
It would be worth checking the IAM permissions associated with the user who is trying to create the key pair. Contact the administrator (presumably you?) and investigate. I would suggest creating a group with the necessary permissions and adding the user to it.
I performed an experiment and added a Deny policy to my IAM user that prevented me from being able to create a key pair.
I then tried to launch an instance, and the option to create a key pair (in the dialog box you show above) was still available. So the display does not vary according to permissions.
Therefore, something else is causing your situation. I would recommend trying it in a different browser. Also, check the underlying HTML to see whether the option is coded into the web page. Something is causing it to disappear.
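One way to separate an IAM problem from a console problem is to try the same API call outside the browser. A minimal sketch (Python, boto3; the key pair name and region are hypothetical):

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
try:
    # The same operation the console performs; fails fast if IAM denies it.
    ec2.create_key_pair(KeyName="permission-probe")
    print("IAM allows ec2:CreateKeyPair -- the missing option is a UI issue.")
    ec2.delete_key_pair(KeyName="permission-probe")  # clean up the probe key
except ClientError as e:
    print("API refused:", e.response["Error"]["Code"])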

access credentials error in Copy Command in S3

I am facing an access credentials error when I run the COPY command from S3.
My COPY command is:
copy part from 's3://lntanbusamplebucket/load/part-csv.tbl'
credentials 'aws_access_key_id=D93vB$;yYq'
csv;
The error message is:
error: Invalid credentials. Must be of the format: credentials 'aws_iam_role=...' or 'aws_access_key_id=...;aws_secret_access_key=...[;token=...]'
Could anyone please explain what aws_access_key_id and aws_secret_access_key are, and where we can find them?
Thanks in advance.
Mani
The access key you're using looks more like a secret key; access key IDs usually look something like "AKIAXXXXXXXXXXX".
Also, don't post credentials openly in Stack Overflow questions. If someone gets hold of a set of access keys, they can access your AWS environment.
The Access Key & Secret Key are the most basic form of credentials/authentication used in AWS. One is useless without the other, so if you've lost one of the two, you'll need to regenerate a set of keys.
To do this, go into the AWS console, go to the IAM service (Identity and Access Management) and go into Users. Here, select the user that you're currently using (probably yourself) and go to the Security Credentials tab.
Here, under Access keys, you can see which sets of keys are currently active for this user. You can only have 2 sets active at one time, so if there are already 2 sets present, delete one and create a new pair. You can download the new pair as a file called "credentials.csv"; it will contain your user name, access key and secret key.
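The same rotation can also be scripted. A minimal sketch (Python, boto3; the user name is a placeholder, and the caller needs the iam:CreateAccessKey permission):

import boto3

iam = boto3.client("iam")

# Creates a brand-new key pair for the user. The secret is returned only
# once, at creation time, so store it somewhere safe immediately.
resp = iam.create_access_key(UserName="your-user-name")
print("aws_access_key_id =", resp["AccessKey"]["AccessKeyId"])
print("aws_secret_access_key =", resp["AccessKey"]["SecretAccessKey"])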

How to restrict createObject() on certain java classes or packages?

I want to create a secure ColdFusion environment, for which I am using a multiple-sandbox configuration. The following tasks are easily achievable using the friendly administrator interface:
Restricting CF tags like cfexecute, cfregistry and cfhttp.
Disabling access to internal ColdFusion Java components.
Access by third-party resources only to certain servers and port ranges.
And the rest by configuring the web server accordingly.
The Problem:
So I was satisfied with the setup, only to discover later that, regardless of the restriction applied to the cfexecute tag, one can use java.lang.Runtime to execute system files or scripts easily:
String[] cmd = {"cmd.exe", "/c", "net stop \"ColdFusion 10 Application Server\""};
Process p = Runtime.getRuntime().exec(cmd);
or using the java.lang.ProcessBuilder:
ProcessBuilder pb = new ProcessBuilder("cmd.exe", "/c", "net stop \"ColdFusion 10 Application Server\"");
....
Process myProcess = pb.start();
The problem is that I cannot find any solution which would allow me to block these two classes, java.lang.Runtime and java.lang.ProcessBuilder, from createObject().
For the record: I have tried the file restrictions in the sandbox and OS permissions as well, but unfortunately they seem to apply to I/O file operations only, and I cannot mess with the security policies of the system libraries as they might be used internally by ColdFusion.
Following the useful suggestions from #Leigh and #Miguel-F, I tried my hand at implementing a Security Manager and Policy. Here's the outcome:
1. Specifying an additional policy file at runtime instead of making changes to the default java.policy file. To enable this, we add the following parameters to the JVM arguments using the CFAdmin interface, or alternatively append them to the jvm.args line in the jvm.config file:
-Djava.security.manager -Djava.security.policy="c:/policies/myRuntime.policy"
There is a nice GUI utility inside jre\bin\ called policytool.exe which allows you to manage policy entries easily and efficiently.
2. We have enforced the Security manager and provided our custom security policy file which contains:
grant codeBase "file:///D:/proj/secTestProj/main/-" {
    permission java.io.FilePermission "<<ALL FILES>>", "read, write, delete";
};
Here we are setting a FilePermission for all files to read, write and delete, excluding execute from the list, as we do not want any type of file to be executed via the Java runtime.
Note: The codeBase can be set to an empty string if we want the policy to be applied to all applications irrespective of the source.
I really wished for a deny rule in the policy file to make things easier, similar to the grant rule we're using, but unfortunately there isn't one. If you need to put a set of complex security policies in place, you can use the Prograde library, which implements a policy file with a deny rule (stack ref.).
You could of course replace <<ALL FILES>> with individual files and set permissions accordingly, or, for finer control, use a combination of <<ALL FILES>> and individual file permissions.
References: Default Policy Implementation and Policy File Syntax, Permissions in JDK and Controlling Applications
This approach solves our core issue: denying execution of files via the Java runtime by specifying the permissions allowed on a file. Alternatively, we can install the Security Manager directly in our application and define the policy file from there, instead of defining it in the JVM args.
// Set the policy file as the system security policy
System.setProperty("java.security.policy", "file:/C:/java.policy");
// Create a security manager
SecurityManager sm = new SecurityManager();
// Alternatively, get the current security manager using System.getSecurityManager()
// Set the system security manager
System.setSecurityManager(sm);
To be able to set it, we need these permissions inside our policy file:
permission java.lang.RuntimePermission "setSecurityManager";
permission java.lang.RuntimePermission "createSecurityManager";
permission java.lang.RuntimePermission "usePolicy";
Using the Security Manager object inside an application has its own advantages, as it exposes many useful methods. For instance, checkExec(String cmd) checks whether the calling thread is allowed to create a subprocess:
// Perform the check
try {
    sm.checkExec("notepad.exe");
}
catch (SecurityException e) {
    // Do something... show a warning.
}

trouble with AWS SWF using IAM roles

I've noticed on AWS that if I get IAM role credentials (key, secret, token) and set them as the appropriate environment variables in a Python script, I can create and use SWF Layer1 objects just fine. However, it looks like the Layer2 objects do not work. For example, with boto and os imported, if I do:
test = boto.swf.layer2.ActivityWorker()
test.domain = 'someDomain'
test.task_list = 'someTaskList'
test.poll()
I get an exception that the security token is not valid, and indeed, if I dig through the object, the security token is not set. This happens even with:
test = boto.swf.layer2.ActivityWorker(session_token=os.environ.get('AWS_SECURITY_TOKEN'))
I can fix this by doing:
test._swf.provider.security_token = os.environ.get('AWS_SECURITY_TOKEN')
test.poll()
but this seems pretty hacky and annoying, because I have to do it every time I make a new Layer2 object. Has anyone else noticed this? Is this behavior intended for some reason, or am I missing something here?
Manual management of temporary security credentials is not only "pretty hacky", but also less secure. A better alternative is to assign an IAM role to the instances, so that they automatically have all the permissions of that role without requiring explicit credentials.
See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
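With a role attached to the instance, the original snippet needs no credential plumbing at all. A sketch under that assumption (domain and task list names are the ones from the question):

import boto.swf.layer2 as swf

# No keys and no environment variables: boto falls back to the EC2
# instance metadata service and picks up the role's temporary
# credentials (and their rotation) automatically.
worker = swf.ActivityWorker()
worker.domain = 'someDomain'
worker.task_list = 'someTaskList'
task = worker.poll()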