trouble with AWS SWF using IAM roles - amazon-web-services

I've noticed on AWS that if I get IAM role credentials (key, secret, token) and set them as the appropriate environment variables in a Python script, I am able to create and use SWF Layer1 objects just fine. However, it looks like the Layer2 objects do not work. For example, if I have boto and os imported and do:
test = boto.swf.layer2.ActivityWorker()
test.domain = 'someDomain'
test.task_list = 'someTaskList'
test.poll()
I get an exception that the security token is not valid, and indeed, if I dig through the object, the security token is not set. This even happens with:
test = boto.swf.layer2.ActivityWorker(session_token=os.environ.get('AWS_SECURITY_TOKEN'))
I can fix this by doing:
test._swf.provider.security_token = os.environ.get('AWS_SECURITY_TOKEN')
test.poll()
but this seems pretty hacky and annoying, because I have to do it every time I make a new Layer2 object. Has anyone else noticed this? Is this behavior intended for some reason, or am I missing something here?
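For now I've wrapped that workaround in a small helper (just a sketch that repeats the token assignment above), so I don't have to redo it for every new Layer2 object:
import os
import boto.swf.layer2

def make_activity_worker(domain, task_list):
    # Sketch only: build a Layer2 ActivityWorker and copy the temporary
    # security token from the environment onto its underlying connection.
    worker = boto.swf.layer2.ActivityWorker()
    worker.domain = domain
    worker.task_list = task_list
    worker._swf.provider.security_token = os.environ.get('AWS_SECURITY_TOKEN')
    return worker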

Manual management of temporary security credentials is not only "pretty hacky", it is also less secure. A better alternative is to assign an IAM role to the instances, so they automatically have all the permissions of that role without requiring explicit credentials.
See: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
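For example (a minimal sketch, assuming the script runs on an EC2 instance with an IAM role that allows SWF access), no keys, secrets, or environment variables need to be set at all; boto resolves the role credentials from the instance metadata service on its own:
import boto.swf.layer2

# No explicit credentials: with an IAM role attached to the instance,
# boto fetches and refreshes the temporary credentials (including the
# security token) from the instance metadata service automatically.
worker = boto.swf.layer2.ActivityWorker()
worker.domain = 'someDomain'
worker.task_list = 'someTaskList'
worker.poll()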

Related

boto3/aws: resource vs session

I can use a resource like this:
s3_resource = boto3.resource('s3')
s3_bucket = s3_resource.Bucket(bucket)
I can also use a session like this:
session = boto3.session.Session()
s3_session = session.resource("s3", endpoint_url=self.endpoint_url)
s3_obj = s3_session.Object(self.bucket, key)
Internally, does session.resource("s3") use boto3.resource('s3')?
Normally, people ask about boto3 client vs resource.
Calls using client are direct API calls to AWS, while resource is a higher-level Pythonic way of accessing the same information.
In your examples, you are using session, which is merely a way of caching credentials. The session can then be used for either client or resource.
For example, when calling the AWS STS assume_role() command, a set of temporary credentials is returned. These can be stored in a session and API calls can be made using these credentials.
There is effectively no difference between your code samples, since no specific information has been stored in the session object. If you have nothing to specifically configure in the session, then you can skip it entirely.
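As a concrete illustration (a sketch; the role ARN below is a placeholder), storing assume_role() credentials in a session looks roughly like this, and both resource calls end up giving you the same kind of S3 resource object:
import boto3

# Assume a role and keep the returned temporary credentials in a session.
sts = boto3.client('sts')
assumed = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/example-role',
    RoleSessionName='example-session',
)
creds = assumed['Credentials']

session = boto3.session.Session(
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)

# The first uses the default session (default credential chain);
# the second uses the assumed-role credentials stored above.
s3_default = boto3.resource('s3')
s3_assumed = session.resource('s3')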

Is there a way to switch between or reset role based credentials using `okta-aws` command line utility?

Going through Okta's GitHub repo, it doesn't seem like there is an easy way to switch between AWS roles using their command-line utility.
Normally I get temporary credentials using okta-aws default sts get-caller-identity. If I'm switching between projects and need to switch my temporary credentials to a different role as well, the best method I've found so far is to delete two files, ~/.okta/profile & ~/.okta/.current-session, then re-run the above command.
I've also found that when OKTA_STS_DURATION in .okta/config.properties is set for longer than is configured for that AWS role, you either need to wait until the local duration setting expires or reset the credentials process using the same method as above.
Is there a way to switch between or reset role based credentials using okta-aws command line utility?
Note: I'm using role as interchangeable with profile -- please correct me if this is inaccurate.
At this point the best solution is to switch over to gimme-aws-creds. Once you run through the setup, it ends up being easier to use, and it appears to be better maintained (by Nike).

How can I ensure that my retrieval of secrets is secure?

Currently I am using Terraform and AWS Secrets Manager to store and retrieve secrets, and I would like some insight into whether my implementation is secure, and if not, how I can make it more secure. Let me illustrate what I have tried.
In secrets.tf I create a secret like (this needs to be implemented with targeting):
resource "aws_secretsmanager_secret" "secrets_of_life" {
name = "top-secret"
}
I then go to the console and manually set the secret in AWS Secrets manager.
I then retrieve the secrets in secrets.tf like:
data "aws_secretsmanager_secret_version" "secrets_of_life_version" {
secret_id = aws_secretsmanager_secret.secrets_of_life.id
}
locals {
creds = jsondecode(data.aws_secretsmanager_secret_version.secrets_of_life.secret_string)
}
I then use the secret (exporting it as a Kubernetes secret, for example) like:
resource "kubernetes_secret" "secret_credentials" {
metadata {
name = "kubernetes_secret"
namespace = kubernetes_namespace.some_namespace.id
}
data = {
top_secret = local.creds["SECRET_OF_LIFE"]
}
type = "kubernetes.io/generic"
}
It's worth mentioning that I store tf state remotely. Is my implementation secure? If not, how can I make it more secure?
Yes, I can confirm it is secure, since you accomplished the following:
Plain-text secrets are kept out of your code.
Your secrets are stored in a dedicated secret store that enforces encryption and strict access control.
Everything is defined in the code itself. There are no extra manual steps or wrapper scripts required.
Secrets Manager supports rotating secrets, which is useful in case a secret gets compromised.
The only thing I would add is to use a Terraform backend that supports encryption, like S3, and avoid committing the state file to your source control.
Looks good; as #asri suggests, it is a good, secure implementation.
The risk of exposure will be in the remote state. It is possible that the secret will be stored there in plain text. Assuming you are using S3, make sure that the bucket is encrypted. If you share tf state access with other developers, they may have access to those values in the remote state file.
From https://blog.gruntwork.io/a-comprehensive-guide-to-managing-secrets-in-your-terraform-code-1d586955ace1
These secrets will still end up in terraform.tfstate in plain text! This has been an open issue for more than 6 years now, with no clear plans for a first-class solution. There are some workarounds out there that can scrub secrets from your state files, but these are brittle and likely to break with each new Terraform release, so I don’t recommend them.
Hi, I'm working on similar things; here are some thoughts:
When running Terraform for the second time, the secret will be in plain text in the state files, which are stored in S3. Is S3 safe enough to store those sensitive strings?
My work uses a similar approach: run Terraform to create an empty secret / dummy strings as a placeholder -> manually update to the real credentials -> run Terraform again to tell the resource to use the updated credentials. The thing is that when we deploy to production we want the process to be as automated as possible, so this approach is not ideal, but I haven't figured out a better way.
If anyone has better ideas please feel free to leave a comment below.
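One option (a sketch, assuming boto3 and the top-secret name from the question) is to skip the Terraform data source and have the application read the secret at runtime, so the plain-text value never passes through terraform.tfstate:
import json
import boto3

# Fetch the secret directly at runtime instead of via a Terraform data
# source, so it never ends up in the state file.
client = boto3.client('secretsmanager')
response = client.get_secret_value(SecretId='top-secret')
creds = json.loads(response['SecretString'])
secret_of_life = creds['SECRET_OF_LIFE']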

How to get a shared access signature for an Azure container in C++

I want to use the C++ Azure API to generate a Shared Access Signature for a container on Azure and get the access string, but I cannot find any good examples. Almost all examples are in C#. I only found this: https://learn.microsoft.com/en-us/azure/storage/files/storage-c-plus-plus-how-to-use-files
Here is what I did:
// Retrieve a reference to a previously created container.
azure::storage::cloud_blob_container container = blob_client.get_container_reference(s2ws(eventID));
// Create the container if it doesn't already exist.
container.create_if_not_exists();
// Get the current permissions for the event.
auto blobPermissions = container.download_permissions();
// Create and assign a policy
utility::string_t policy_name = s2ws("Signature" + eventID);
azure::storage::blob_shared_access_policy policy = azure::storage::blob_shared_access_policy();
// Set the expiry date
policy.set_expiry(utility::datetime::utc_now() + utility::datetime::from_days(10));
// Give read permission
policy.set_permissions(azure::storage::blob_shared_access_policy::permissions::read);
azure::storage::shared_access_policies<azure::storage::blob_shared_access_policy> policies;
//add the new shared policy
policies.insert(std::make_pair(policy_name, policy));
blobPermissions.set_policies(policies);
blobPermissions.set_public_access(azure::storage::blob_container_public_access_type::off);
container.upload_permissions(blobPermissions);
auto token = container.get_shared_access_signature(policy, policy_name);
After running this, I can see the policy is successfully set on the container, but the token returned by the last line is not right. And there is always an exception when exiting this function; the breakpoint is located in _Deallocate().
Could someone tell me what's wrong with my code, or point me to some examples? Thank you very much.
Edited
The token I got looks like:
"sv=2016-05-31&si=Signature11111122222222&sig=JDW33j1Gzv00REFfr8Xjz5kavH18wme8E7vZ%2FFqUj3Y%3D&spr=https%2Chttp&se=2027-09-09T05%3A54%3A29Z&sp=r&sr=c"
With this token, I couldn't access my blobs. The correct token, created by "Microsoft Azure Storage Explorer" using this policy, looks like:
?sv=2016-05-31&si=Signature11111122222222&sr=c&sig=9tS91DUK7nkIlIFZDmdAdlNEfN2HYYbvhc10iimP1sk%3D
About the exception: I put all this code in a function. Without the last line, everything is okay. But with the last line added, an exception is thrown while exiting the function, saying a breakpoint was triggered. It stops at the last line of _Deallocate() in "C:\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.10.25017\include\xmemory0",
::operator delete(_Ptr);
I have no idea why this exception is being thrown or how to debug it, because it seems it cannot be caught by my code.
Edited
After changing the last line to:
auto token = container.get_shared_access_signature(azure::storage::blob_shared_access_policy(), policy_name);
The returned token is right; I can access my blobs using it. But the annoying exception is still there :-(
Edited
I just found that the exception only happens when building in Debug. In Release, everything is OK. So maybe it's related to the build environment.
When creating a Shared Access Signature (SAS), there are a few things you set: SAS start/expiry, permissions, IP ACLs, protocol restrictions, etc. What you can do is create an access policy on the blob container with these things, create an ad-hoc SAS (i.e. without an access policy) with these things, or combine the two to create a SAS token.
One key thing to keep in mind is that if something is defined in an access policy, you can't redefine them when creating a SAS. So for example, let's say you create an access policy with just Read permission and nothing else, then you can't provide any permissions when creating a SAS token while using this access policy. You can certainly define the things which are not there in the access policy (for example, you can define a SAS expiry if it is not defined in access policy).
If you look at your code (before the edit), you are creating an access policy with some permissions and then creating a SAS token using both the same permissions and the access policy. That's why it did not work. However, when you created a SAS token with Microsoft's Storage Explorer, you will notice that it only included the access policy (si=Signature11111122222222) and none of the other parameters, and that's why it worked.
In your code after the edit, you did not include any permissions and only used the access policy (in a way, you did what Storage Explorer does), and that's why things worked after the edit.
I hope this explains the mystery behind not working/working SAS tokens.
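For comparison only (a sketch using the Python SDK, azure-storage-blob, rather than the C++ SDK from the question; the account name, key, and container are placeholders), generating a SAS against a stored access policy looks like this, with no permission or expiry arguments because those come from the policy:
from azure.storage.blob import generate_container_sas

# Only the stored access policy is referenced; permissions and expiry
# are taken from the policy itself, so they must not be passed here.
sas_token = generate_container_sas(
    account_name='myaccount',
    container_name='mycontainer',
    account_key='<account-key>',
    policy_id='Signature11111122222222',
)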

access credentials error in COPY command from S3

I am facing an access credentials error when I run the COPY command from S3.
My COPY command is:
copy part from 's3://lntanbusamplebucket/load/part-csv.tbl'
credentials 'aws_access_key_id=D93vB$;yYq'
csv;
The error message is:
error: Invalid credentials. Must be of the format: credentials 'aws_iam_role=...' or 'aws_access_key_id=...;aws_secret_access_key=...[;token=...]'
'aws_access_key_id=?;
aws_secret_access_key=?''
Could anyone please explain what aws_access_key_id and aws_secret_access_key are?
Where can we see them?
Thanks in advance.
Mani
The access key you're using looks more like a secret key; access keys usually look something like "AKIAXXXXXXXXXXX".
Also, don't post them openly in Stack Overflow questions. If someone gets hold of a set of access keys, they can access your AWS environment.
Access Key & Secret Key are the most basic form of credentials / authentication used in AWS. One is useless without the other, so if you've lost one of the two, you'll need to regenerate a set of keys.
To do this, go into the AWS console, go to the IAM service (Identity and Access Management), and go to Users. Select the user that you're currently using (probably yourself) and open the Security credentials tab.
There, under Access keys, you can see which sets of keys are currently active for this user. You can only have 2 sets active at one time, so if there are already 2 sets present, delete one and create a new pair. You can download the new pair as a file called "credentials.csv", which will contain your user name, access key, and secret key.
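If you'd rather do this programmatically than through the console (a sketch, assuming boto3 and sufficient IAM permissions; the user name is a placeholder), the same key pair can be created via the IAM API:
import boto3

# Create a new access key pair for an IAM user. The secret access key is
# only returned once, at creation time, so store it securely right away.
iam = boto3.client('iam')
response = iam.create_access_key(UserName='my-redshift-user')
access_key_id = response['AccessKey']['AccessKeyId']
secret_access_key = response['AccessKey']['SecretAccessKey']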