Is there a way to switch between or reset role-based credentials using the `okta-aws` command-line utility? - amazon-web-services

Going through Okta's GitHub repo, it doesn't seem like there is an easy way to switch between AWS roles using their command-line utility.
Normally I get temporary credentials using okta-aws default sts get-caller-identity. If I'm switching between projects and need to switch my temporary credentials to a different role as well, the best method I've found thus far is to delete two files, ~/.okta/profile & ~/.okta/.current-session, and then re-run the above command.
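Here is roughly what that manual reset looks like scripted, as a minimal Python sketch (it assumes the default ~/.okta file locations above and just shells out to the same command):
import pathlib
import subprocess

# remove the cached Okta profile and session so the next run prompts for a role again
for name in ("profile", ".current-session"):
    (pathlib.Path.home() / ".okta" / name).unlink(missing_ok=True)  # missing_ok needs Python 3.8+

# re-run the usual command to get temporary credentials for the newly chosen role
subprocess.run(["okta-aws", "default", "sts", "get-caller-identity"], check=True)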
I've also found that when OKTA_STS_DURATION in .okta/config.properties is set to a duration longer than the one configured for that AWS role, you either need to wait until the local duration setting expires or reset the credentials using the same method as above.
Is there a way to switch between or reset role-based credentials using the okta-aws command-line utility?
Note: I'm using "role" interchangeably with "profile" -- please correct me if this is inaccurate.

At this point the best solution is to switch over to gimme-aws-creds. After you run through the setup it ends up being easier to use, and it appears to be better maintained (it's a Nike open-source project).
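As a rough sketch of the typical workflow (commands as documented in the project's README; verify against your installed version):
pip install --upgrade gimme-aws-creds
gimme-aws-creds --configure    # one-time interactive setup (Okta org, app, AWS role, etc.)
gimme-aws-creds                # fetch temporary credentials and write them to your AWS credentials file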

Related

Is there a way to specify AWS_SESSION_TOKEN when using SQLWorkbench and Athena JDBC driver?

I am using SQLWorkbench to connect to AWS Athena and the SQLWorkbench Variables section to specify AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY. This works. However, when I have to connect to accounts that require AWS_SESSION_TOKEN, the connection fails. I can connect by modifying the credentials file, but that's inconvenient. Is there a better way?
I received an answer from AWS support, and at this point, according to them, it appears that the driver does not support the AWS_SESSION_TOKEN parameter.
To answer the question that appeared on the thread: if you have to use a session token, it appears that the only way is to modify your AWS credentials file. This can be done either by adding a new profile section or by modifying the default one. Here is an example of a connection string for the former, where simba_session is a profile in the credentials file:
jdbc:awsathena://AwsRegion=us-west-2;AwsCredentialsProviderClass=com.simba.athena.amazonaws.auth.profile.ProfileCredentialsProvider;AwsCredentialsProviderArguments=simba_session;
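The corresponding profile in the AWS credentials file (typically ~/.aws/credentials) would look something like this, with placeholder values:
[simba_session]
aws_access_key_id = <access key id>
aws_secret_access_key = <secret access key>
aws_session_token = <session token>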
If you don't need to use a session token, you can specify AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY by pressing the Variables button and adding the keys/values. In this case, the connection string can look like this:
jdbc:awsathena://AwsRegion=us-west-2;AwsCredentialsProviderClass=com.simba.athena.amazonaws.auth.DefaultAWSCredentialsProviderChain;
Also note that you can add an S3OutputLocation (if needed) and a Workgroup (if needed) by pressing the Extended Properties button and adding keys/values, rather than doing it in the connection string.

How to specify the GCP credentials location in the application.properties file (for using Pub/Sub in GCP)?

This seems straightforward: pass the Service Account key file (generated from the GCP console) by specifying the file location in the application.properties file. However, I tried all the following options:
1. spring.cloud.gcp.credentials.location=file:/home/my_user_id/mp6key.json
2. spring.cloud.gcp.credentials.location=file:src/main/resources/mp6key.json
3. spring.cloud.gcp.credentials.location=file:./main/resources/mp6key.json
4. spring.cloud.gcp.credentials.location=file:/src/main/resources/mp6key.json
They all ended up with the same error:
java.io.FileNotFoundException: /home/my_user_id/mp6key.json (No such file or directory)
Could anyone advise where I should put the key file and how I should specify the path to it properly?
The same programs run successfully in Eclipse, with messages published and subscribed using Pub/Sub on GCP (using the Project ID/Service Account key generated in GCP), but I am now stuck with the above issue after deploying them to run on GCP.
As mentioned in the official documentation, the credentials file can be obtained from a number of different locations such as the file system, classpath, URL, etc.
For example, if the service account key file is stored on the classpath as src/main/resources/key.json, pass the following property:
spring.cloud.gcp.credentials.location=classpath:key.json
If the key file is stored somewhere else in your local file system, use the file prefix in the property value:
spring.cloud.gcp.credentials.location=file:<path to key file>
My line looks like this:
spring.cloud.gcp.credentials.location=file:src/main/resources/[my_json_file]
And this works.
The following also works if I put it in the root of the project directory:
spring.cloud.gcp.credentials.location=file:./[my_json_file]
Have you tried to follow this quickstart? Please try to follow it carefully and explain if you get any errors while finishing the quickstart.
Anyway, before running your Java application, try running the following in the console (please modify it with the exact path where you store your key):
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/mp6key.json"
How are you authenticating your credentials in your Java application?
My answer is easy: if you run your code on GCP, you don't have to use a service account key file. Problem eliminated, problem solved!
More seriously, have a look at service identity. I don't know what your current service is (Compute Engine? Cloud Functions? Cloud Run?). Anyway, you can attach a service account to any GCP component. Then, in your code, simply use the default credentials; the component's identity is loaded automatically. No key to manage, no key to store securely, no key to rotate!
If you provide more detail on your target platform, I could give you some guidance on how to achieve this.
Keep in mind that service account key files are designed to be used by automated apps (with no user account involved) hosted outside GCP (on-prem, another cloud provider, a CI/CD system, Apigee, ...).
UPDATE
When you use your personal account, you can also use the default credentials:
Install gcloud SDK on your computer
Use the command gcloud auth application-default login
Follow the instructions
Enjoy!
If it doesn't work, get the <path> displayed after the login command and set this value in the environment variable named GOOGLE_APPLICATION_CREDENTIALS.
If you definitely want to use a service account key file (which is a security concern for the reasons above, but...), you can use it locally:
Either set the JSON key file path in the GOOGLE_APPLICATION_CREDENTIALS environment variable
Or run this command: gcloud auth activate-service-account --key-file=<path to your json key file>
Provided your file is in the resources folder, try:
file://mp6key.json
Using file:// instead of file:/ works for me, at least.

InvalidSignatureException when using boto3 for DynamoDB on AWS

I'm facing some sort of credentials issue when trying to connect to my DynamoDB on AWS. Locally it all works fine and I can connect using env variables for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION and then:
dynamoConnection = boto3.resource('dynamodb', endpoint_url='http://localhost:8000')
When I change to live creds in the env variables and set the endpoint_url to the DynamoDB endpoint on AWS, this fails with:
"botocore.exceptions.ClientError: An error occurred (InvalidSignatureException) when calling the Query operation: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details."
The creds are valid, as they are used in a different app that talks to the same DynamoDB. I've also tried passing them directly to the method instead of using env variables, but the error persisted. Furthermore, to avoid any issues with trailing spaces, I've even used the credentials directly in the code. I'm using Python v3.4.4.
Is there maybe a header that should also be set that I'm not aware of? Any hints would be appreciated.
EDIT
I've now also created new credentials (to make sure they contain only alphanumeric characters), but still no dice.
You shouldn't use the endpoint_url when you are connecting to the real DynamoDB service. That's really only for connecting to local services or non-standard endpoints. Instead, just specify the region you want:
dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
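For completeness, a short sketch of what a query might look like once the resource is created this way (the table and key names below are placeholders, not from the original post):
import boto3
from boto3.dynamodb.conditions import Key

dynamoConnection = boto3.resource('dynamodb', region_name='us-west-2')
table = dynamoConnection.Table('my-table')  # placeholder table name
resp = table.query(KeyConditionExpression=Key('pk').eq('some-id'))  # placeholder key/value
print(resp['Items'])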
The InvalidSignatureException is a sign that your system time may be off (request signatures are time-sensitive). Maybe you can check your:
1. Time zone
2. Time settings
If your clock has drifted, fix your time settings; "sudo hwclock --hctosys" should do the trick.
Just wanted to point out that, when accessing DynamoDB from a C# environment (using the AWS .NET SDK), I ran into this error, and the way I solved it was to create a new pair of AWS access/secret keys.
It worked immediately after I changed those keys in the code.

How to restrict createObject() on certain Java classes or packages?

I want to create a secure ColdFusion environment, for which I am using a multiple-sandbox configuration. The following tasks are easily achievable using the friendly administrator interface:
Restricting CF tags like cfexecute, cfregistry, and cfhttp.
Disabling access to internal ColdFusion Java components.
Allowing third-party resources access only to certain servers and port ranges.
The rest is handled by configuring the web server accordingly.
The Problem:
So I was satisfied with the setup, only to discover later that, regardless of the restriction applied to the cfexecute tag, one can use java.lang.Runtime to execute system files or scripts easily:
String[] cmd = {"cmd.exe", "/c", "net stop \"ColdFusion 10 Application Server\""};
Process p = Runtime.getRuntime().exec(cmd);
or using java.lang.ProcessBuilder:
ProcessBuilder pb = new ProcessBuilder("cmd.exe", "/c", "net stop \"ColdFusion 10 Application Server\"");
....
Process myProcess = pb.start();
The problem is that I cannot find any solution which allows me to disable these two classes, java.lang.Runtime and java.lang.ProcessBuilder, for createObject().
For the record: I have tried the file restrictions in the sandbox and OS permissions as well, but unfortunately they seem to work on I/O file operations only, and I cannot mess with the security policies of the system libraries, as they might be used internally by ColdFusion.
Following the useful suggestions from @Leigh and @Miguel-F, I tried my hand at implementing the Security Manager and policy. Here's the outcome:
1. Specifying an additional policy file at runtime instead of making changes to the default java.policy file. To enable this, we add the following parameters to the JVM arguments using the CFAdmin interface, or alternatively append them to the jvm.args line in the jvm.config file:
-Djava.security.manager -Djava.security.policy="c:/policies/myRuntime.policy"
There is a nice GUI utility inside jre\bin\ called policytool.exe which allows you to manage policy entries easily and efficiently.
2. We have enforced the Security Manager and provided our custom security policy file, which contains:
grant codeBase "file:///D:/proj/secTestProj/main/-" {
    permission java.io.FilePermission "<<ALL FILES>>", "read, write, delete";
};
Here we are setting a FilePermission for all files to read, write, and delete, excluding execute from the list, as we do not want any type of file to be executed using the Java runtime.
Note: The codeBase can be set to an empty string if we want the policy to be applied to all applications irrespective of their source.
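For instance, a grant entry with no codeBase clause applies to code from any source:
grant {
    permission java.io.FilePermission "<<ALL FILES>>", "read, write, delete";
};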
I really wished for a deny rule in the policy file to make things easier, similar to the grant rule we're using, but unfortunately there isn't one. If you need to put in place a set of complex security policies, you can use the Prograde library, which implements a policy file with a deny rule (stack ref.).
You could surely replace <<ALL FILES>> with individual files and set permissions accordingly, or for better control use a combination of <<ALL FILES>> and individual file permissions.
References: Default Policy Implementation and Policy File Syntax, Permissions in JDK and Controlling Applications
This approach solves our core issue: denying execution of files using the Java runtime by specifying the permissions allowed on a file. In the other approach, we can implement the Security Manager directly in our application and define the policy file from there, instead of defining it in our JVM args.
// set the policy file as the system security policy
System.setProperty("java.security.policy", "file:/C:/java.policy");
// create a security manager
SecurityManager sm = new SecurityManager();
// alternatively, get the current security manager using System.getSecurityManager()
// set the system security manager
System.setSecurityManager(sm);
To be able to set it, we need these permissions inside our policy file:
permission java.lang.RuntimePermission "setSecurityManager";
permission java.lang.RuntimePermission "createSecurityManager";
permission java.lang.RuntimePermission "usePolicy";
Using the SecurityManager object inside an application has its own advantages, as it exposes many useful methods. For instance, checkExec(String cmd) checks whether the calling thread is allowed to create a subprocess or not.
// perform the check
try {
    sm.checkExec("notepad.exe");
} catch (SecurityException e) {
    // do something... show a warning.
}

Trouble with AWS SWF using IAM roles

I've noticed on AWS that if I get IAM role credentials (key, secret, token) and set them as the appropriate environment variables in a Python script, I am able to create and use SWF Layer1 objects just fine. However, it looks like the Layer2 objects do not work. For example, if I have boto and os imported and do:
test = boto.swf.layer2.ActivityWorker()
test.domain = 'someDomain'
test.task_list = 'someTaskList'
test.poll()
I get an exception that the security token is not valid, and indeed, if I dig through the object, the security token is not set. This even happens with:
test = boto.swf.layer2.ActivityWorker(session_token=os.environ.get('AWS_SECURITY_TOKEN'))
I can fix this by doing:
test._swf.provider.security_token = os.environ.get('AWS_SECURITY_TOKEN')
test.poll()
but this seems pretty hacky and annoying because I have to do it every time I make a new Layer2 object. Has anyone else noticed this? Is this behavior intended for some reason, or am I missing something here?
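One way to make that less repetitive is to wrap the workaround in a small helper (just a sketch of the same hack from above, not a proper fix):
import os
import boto.swf.layer2

def make_activity_worker(domain, task_list):
    # same workaround as above, so it isn't repeated for every Layer2 object
    worker = boto.swf.layer2.ActivityWorker()
    worker.domain = domain
    worker.task_list = task_list
    worker._swf.provider.security_token = os.environ.get('AWS_SECURITY_TOKEN')
    return worker

test = make_activity_worker('someDomain', 'someTaskList')
test.poll()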
Manual management of temporary security credentials is not only "pretty hacky", but also less secure. A better alternative would be to assign an IAM role to the instances, so they will automatically have all the permissions of that role without requiring explicit credentials.
See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
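For example, with a role attached to the instance, the SDK picks up credentials automatically from the instance metadata, so no keys or tokens appear in the code (this sketch uses boto3 rather than the older boto library, and the region is a placeholder):
import boto3

# no explicit keys or session token: the attached IAM role's credentials
# are resolved automatically from the EC2 instance metadata
swf = boto3.client('swf', region_name='us-east-1')
print(swf.list_domains(registrationStatus='REGISTERED'))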