Digging through the code (consider this, for instance), I found that I can read the attribute using:
instance.block_device_mapping['/dev/sdz'].delete_on_termination
...and toggle it using:
instance.modify_attribute('blockdevicemapping', ['/dev/sdz=1']) # toggle on
instance.modify_attribute('blockdevicemapping', ['/dev/sdz']) # toggle off
But it's asymmetrical, and I feel like I'm missing some higher-level functionality.
Shouldn't it be more like:
block_device_type = instance.block_device_mapping['/dev/sdz']
block_device_type.delete_on_termination = True
block_device_type.save() # I made this API up
?
You turn this setting on and off by passing a list containing a string formatted as '%s=%d' (device name, then 0 or 1).
Switch to on
>>> inst.modify_attribute('blockDeviceMapping', ['/dev/sda1=1'])
Switch to off
>>> inst.modify_attribute('blockDeviceMapping', ['/dev/sda1=0'])
I verified changes outside of python after each attempt to change the setting using:
$ aws ec2 describe-instance-attribute --instance-id i-7890abcd --attribute blockDeviceMapping
Calling inst.modify_attribute('blockDeviceMapping', ['/dev/sda1']) (the string lacks =0) did not produce any change.
Assigning to inst.block_device_mapping['/dev/sda1'].delete_on_termination also did not produce any change.
After calling modify_attribute, the value of delete_on_termination on local block device objects is unchanged.
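For convenience, the call can be wrapped in a small helper. This is only a sketch assuming boto 2.x, with the region, instance id and device name as placeholders:

import boto.ec2

def set_delete_on_termination(conn, instance_id, device, enabled):
    # Toggle delete_on_termination for one device by passing 'device=flag'.
    flag = 1 if enabled else 0
    inst = conn.get_all_instances(instance_ids=[instance_id])[0].instances[0]
    inst.modify_attribute('blockDeviceMapping', ['%s=%d' % (device, flag)])
    # The local object is not refreshed, so re-query to read the new value.
    inst = conn.get_all_instances(instance_ids=[instance_id])[0].instances[0]
    return inst.block_device_mapping[device].delete_on_termination

conn = boto.ec2.connect_to_region('us-west-2')  # placeholder region
print(set_delete_on_termination(conn, 'i-7890abcd', '/dev/sda1', False))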
I walk through the whole process at:
http://f06mote.com/post/77239804736/amazon-ec2-instance-safety-tweak-turn-off-delete-on
I'm doing what I think is a very simple thing to check that alpakka is working:
val awsCreds = AwsBasicCredentials.create("xxx", "xxx")
val credentialsProvider = StaticCredentialsProvider.create(awsCreds)
implicit val staticCreds = S3Attributes.settings(
  S3Ext(context.system).settings
    .withCredentialsProvider(credentialsProvider)
    .withS3RegionProvider(new AwsRegionProvider { val getRegion: Region = Region.US_EAST_2 })
)
val d = S3.checkIfBucketExists(action.bucket)
d foreach { msg => log.info("mgs: " + msg.toString)}
When I run this I get
msgs: NotExists
But the bucket referred to by action.bucket does exist, and I can access it using these credentials. What's more, when I modify the credentials (by changing the secret key), I get the same message. What I should get, according to the documentation, is AccessDenied.
I got to this point because I didn't think the environment was picking up on the right credentials - hence all the hard-coded values. But now I don't really know what could be causing this behavior.
Thanks
Update: The action object is just a case class consisting of a bucket and a path. I've checked in debug that action.bucket and action.path point to the things they should be - in this case an S3 bucket. I've also tried the above code with just the string bucket name in place of action.bucket.
Just my carelessness . . .
An errant copy-paste added an extra implicit actor system to the mix. Some changes were made to implicit materializers in Akka 2.6, and I think those, along with the extra implicit actor system, made for a weird mix.
I want to set or modify an environment variable in my Lambda script.
I need to save a value for the next call of my script.
For example, I create an environment variable with the AWS Lambda console and don't set a value. After that I try this:
import boto3
import os
if os.environ['ENV_VAR']:
    print(os.environ['ENV_VAR'])
os.environ['ENV_VAR'] = "new value"
In this case my value never prints.
I tried with:
os.putenv()
but I get the same result.
Do you know why this environment variable is not set?
Thank you !
Consider using the boto3 Lambda API call update_function_configuration to update the environment variable.
import boto3

client = boto3.client('lambda')

response = client.update_function_configuration(
    FunctionName='test-env-var',
    Environment={
        'Variables': {
            'env_var': 'hello'
        }
    }
)
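Note that the Environment block replaces the function's existing variables, so if you want to keep the others you would read the current configuration first and merge. A minimal sketch, reusing the hypothetical 'test-env-var' function name:

import boto3

client = boto3.client('lambda')

# Fetch the current configuration so existing variables are preserved.
current = client.get_function_configuration(FunctionName='test-env-var')
variables = current.get('Environment', {}).get('Variables', {})
variables['env_var'] = 'hello'  # add or overwrite just this one key

client.update_function_configuration(
    FunctionName='test-env-var',
    Environment={'Variables': variables}
)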
I need to save a value for the next call of my script.
That's not how environment variables work, nor is it how lambda works. Environment variables cannot be set in a child process for the parent - a process can only set environment variables in its own and child process environments.
This may be confusing to you if you set environment variables at the shell, but in that case, the shell is the long running process setting and getting your environment variables, not the programs it calls.
Consider this example:
from os import environ

print(environ['A'])
environ['A'] = "Set from python"
print(environ['A'])
This will only set env A for itself. If you run it several times, the initial value of A is always the shell's value, never the value python sets.
$ export A="set from bash"
$ python t.py
set from bash
Set from python
$ python t.py
set from bash
Set from python
Further, even if that wasn't the case, it wouldn't work reliably with aws lambda. Lambda runs your code on whatever compute resources are available at the time; it will typically cache runtimes for frequently executed functions, so in these cases data could be written to the filesystem to preserve it. But if the next invocation wasn't run in that runtime, your data would be lost.
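For illustration, a tiny sketch (the handler and file name are made up) that counts invocations via a file in /tmp only keeps its count while Lambda happens to reuse the same cached runtime:

import os

COUNTER_FILE = '/tmp/invocation_count.txt'  # /tmp only survives within a cached runtime

def lambda_handler(event, context):
    count = 0
    if os.path.exists(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            count = int(f.read() or 0)
    count += 1
    with open(COUNTER_FILE, 'w') as f:
        f.write(str(count))
    # On a warm start the count keeps increasing; on a cold start it resets to 1.
    return {'count': count}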
For your needs, you want to preserve your data outside the Lambda. Some obvious options are: write to S3, write to DynamoDB, or write to SQS. The next invocation would then read from that location, achieving the desired result.
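For example, a minimal boto3 sketch that persists a value in S3 between invocations (the bucket and key names are placeholders you would replace):

import boto3

s3 = boto3.client('s3')
BUCKET = 'my-state-bucket'    # placeholder bucket
KEY = 'state/last_value.txt'  # placeholder key

def load_last_value(default=''):
    # Read the value stored by a previous invocation, if any.
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=KEY)
        return obj['Body'].read().decode('utf-8')
    except s3.exceptions.NoSuchKey:
        return default

def save_value(value):
    # Store the value for the next invocation to pick up.
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=value.encode('utf-8'))

def lambda_handler(event, context):
    previous = load_last_value()
    save_value('new value')
    return {'previous': previous}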
AWS Lambda just executes the piece of code with a given set of inputs. Once executed, it returns the output and that's all. If you want to preserve the output for your next call, then you probably need to store it in a DB or a queue, as Dan said. I personally use SQS in conjunction with SNS, which sends me notifications about the current state. You can even store the end result, like success or failure, in SQS, which you can use for the next trigger. Just throwing the options out here; the rest depends on your requirements.
I am trying to enable versioning and lifecycle policies on my Amazon S3 buckets. I understand that it is possible to enable Versioning first and then apply LifeCycle policy on that bucket. If you see the image below, that will confirm this idea.
I have then uploaded a file several times, which created several versions of the same file. I then deleted the file and am still able to see several versions. However, if I try to restore a file, I see that the Initiate Restore option is greyed out.
I would like to ask anyone who has had a similar issue to let me know what I am doing wrong.
Thanks,
Bucket Versioning on Amazon S3 keeps all versions of objects, even when they are deleted or when a new object is uploaded under the same key (filename).
As per your screenshot, all previous versions of the object are still available. They can be downloaded/opened in the S3 Management Console by selecting the desired version and choosing Open from the Actions menu.
If Versions: Hide is selected, then each object only appears once. Its contents are equal to the latest uploaded version of the object.
Deleting an object in a versioned bucket merely creates a Delete Marker as the most recent version. This makes the object appear as though it has been deleted, but the prior versions are still visible if you click the Versions: Show button at the top of the console. Deleting the Delete Marker will make the object reappear and the contents will be the latest version uploaded (before the deletion).
If you want a specific version of the object to be the "current" version, either:
Delete all versions since that version (making the desired version the latest version), or
Copy the desired version back to the same object (using the same key, which is the filename). This will add a new version, but the contents will be equal to the version you copied. The copy can be performed in the S3 Management Console -- just choose Copy and then Paste from the Actions Menu.
Initiate Restore is used with Amazon Glacier, which is an archival storage system. This option is not relevant unless you have created a Lifecycle Policy to move objects to Glacier.
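If an object really has been transitioned to Glacier by a lifecycle rule, the restore can also be initiated programmatically; a minimal boto3 sketch (the bucket, key and retention period are placeholders):

import boto3

s3 = boto3.client('s3')

# Ask S3 to make a temporary copy of the archived object available for 7 days.
s3.restore_object(
    Bucket='examplebucket',          # placeholder
    Key='path/to/archived-object',   # placeholder
    RestoreRequest={'Days': 7}
)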
With the new console, you can do it as follows.
Click on the Deleted Objects button
You will see your deleted object below; select it
Click on More -> Undo delete
If you have a lot of deleted files to restore, you might want to use a script to do the job for you.
The script should
Get the versions of objects in your bucket using the Get object versions API
Inspect the version data to find the Delete Markers (i.e. deleted objects), their key names and version ids
Delete the markers found, using those key names and version ids, with the Delete object API
Python example with boto:
This example script deletes the delete markers it finds, one at a time.
#!/usr/bin/env python
import boto

BUCKET_NAME = "examplebucket"
DELETE_DATE = "2015-06-08"

bucket = boto.connect_s3().get_bucket(BUCKET_NAME)
for v in bucket.list_versions():
    if (isinstance(v, boto.s3.deletemarker.DeleteMarker) and
            v.is_latest and
            DELETE_DATE in v.last_modified):
        bucket.delete_key(v.name, version_id=v.version_id)
Python example with boto3:
However, if you have thousands of objects, this could be a slow process. AWS does provide a way to batch delete objects, with a maximum batch size of 1000.
The following example script searches your objects with a prefix, tests whether they are deleted (i.e. the current version is a delete marker), and then batch deletes them. It is set to search 500 objects in your bucket in each batch, and to delete multiple objects in batches of no more than 1000 objects.
import boto3

client = boto3.client('s3')


def get_object_versions(bucket, prefix, max_key, key_marker):
    kwargs = dict(
        Bucket=bucket,
        EncodingType='url',
        MaxKeys=max_key,
        Prefix=prefix
    )
    if key_marker:
        kwargs['KeyMarker'] = key_marker
    response = client.list_object_versions(**kwargs)
    return response


def get_delete_markers_info(bucket, prefix, key_marker):
    markers = []
    max_markers = 500
    version_batch_size = 500
    while True:
        response = get_object_versions(bucket, prefix, version_batch_size, key_marker)
        key_marker = response.get('NextKeyMarker')
        delete_markers = response.get('DeleteMarkers', [])
        markers = markers + [dict(Key=x.get('Key'), VersionId=x.get('VersionId'))
                             for x in delete_markers if x.get('IsLatest')]
        print('{0} -- {1} delete markers ...'.format(key_marker, len(markers)))
        if len(markers) >= max_markers or key_marker is None:
            break
    return {"delete_markers": markers, "key_marker": key_marker}


def delete_delete_markers(bucket, prefix):
    key_marker = None
    while True:
        info = get_delete_markers_info(bucket, prefix, key_marker)
        key_marker = info.get('key_marker')
        delete_markers = info.get('delete_markers', [])
        if len(delete_markers) > 0:
            response = client.delete_objects(
                Bucket=bucket,
                Delete={
                    'Objects': delete_markers,
                    'Quiet': True
                }
            )
            print('Deleting {0} delete markers ... '.format(len(delete_markers)))
            print('Done with status {0}'.format(response.get('ResponseMetadata', {}).get('HTTPStatusCode')))
        else:
            print('No more delete markers found\n')
            break


delete_delete_markers(bucket='data-global', prefix='2017/02/18')
I have realised that I can perform an Initiate Restore operation once the object is stored in Glacier, as shown by the Storage Class of the object. To restore a previous copy on S3, the Delete Marker on the current object has to be removed.
I would like to increase the deploy timeout in a stack layer that hosts many apps (AWS OpsWorks).
Currently I get the following error:
Error
[2014-05-05T22:27:51+00:00] ERROR: Running exception handlers
[2014-05-05T22:27:51+00:00] ERROR: Exception handlers complete
[2014-05-05T22:27:51+00:00] FATAL: Stacktrace dumped to /var/lib/aws/opsworks/cache/chef-stacktrace.out
[2014-05-05T22:27:51+00:00] ERROR: deploy[/srv/www/lakers_test] (opsworks_delayed_job::deploy line 65) had an error: Mixlib::ShellOut::CommandTimeout: Command timed out after 600s:
Thanks in advance.
First of all, as mentioned in this ticket reporting a similar issue, the Opsworks guys recommend trying to speed up the call first (there's always room for optimization).
If that doesn't work, we can go down the rabbit hole: this gets called, which in turn calls Mixlib::ShellOut.new, which happens to have a timeout option that you can pass in the initializer!
Now you can use an OpsWorks custom cookbook to override the original method and pass the corresponding timeout option. OpsWorks merges the contents of its base cookbooks with the contents of your custom cookbook, so you only need to add and edit one single file in your custom cookbook: opsworks_commons/libraries/shellout.rb:
module OpsWorks
  module ShellOut
    extend self

    # This would be your new default timeout.
    DEFAULT_OPTIONS = { timeout: 900 }

    def shellout(command, options = {})
      cmd = Mixlib::ShellOut.new(command, DEFAULT_OPTIONS.merge(options))
      cmd.run_command
      cmd.error!
      [cmd.stderr, cmd.stdout].join("\n")
    end
  end
end
Notice how the only additions are just DEFAULT_OPTIONS and merging these options in the Mixlib::ShellOut.new call.
An improvement to this method would be changing the timeout option via a Chef attribute, which you could in turn update via your custom JSON in the OpsWorks interface. This means passing the timeout attribute in the initial OpsWorks::ShellOut.shellout call, not in the method definition. But this depends on how the shellout method actually gets called...
From within a running Java application running on beanstalk, how can I get the beanstalk version label that is currently running?
[Multiple Edits later...]
After a few back-and-forth comments with Sony (see below), I wrote the following code, which works for me now. If you put meaningful comments in your version label when you deploy, then this will tell you what you're running. We have a continuous build environment, so we can get our build system to supply a label that leads to the check-in comments for the related code. Put this all together and your server can tell you exactly what code it's running relative to your source code check-ins. Really useful for us. OK, now I'm actually answering my own question here, but with invaluable help from Sony. It seems a shame you can't remove the hard-coded values and query for those at runtime.
String getMyVersionLabel() throws IOException {
    Region region = Region.getRegion(Regions.fromName("us-west-2")); // Need to hard-code this
    AWSCredentialsProvider credentialsProvider = new ClasspathPropertiesFileCredentialsProvider();
    AWSElasticBeanstalkClient beanstalk = region.createClient(AWSElasticBeanstalkClient.class, credentialsProvider, null);
    String environmentName = System.getProperty("PARAM2", "DefaultEnvironmentName"); // Need to hard-code this too

    DescribeEnvironmentsResult environments = beanstalk.describeEnvironments();
    for (EnvironmentDescription ed : environments.getEnvironments()) {
        if (ed.getEnvironmentName().equals(environmentName)) {
            return "Running version " + ed.getVersionLabel() + " created on " + ed.getDateCreated();
        }
    }
    return null;
}
You can use the AWS Java SDK and call this directly.
See the details of the describeApplicationVersions API for how to get all the versions in an application. Ensure you specify your region as well (otherwise you will get the versions from the default AWS region).
Now, if you need to know the version currently deployed, you additionally need to call DescribeEnvironments. The response contains the versionLabel, which tells you the version currently deployed.
Here again, if you need to know the environment name in the code, you need to pass it as a parameter to the Beanstalk configuration in the AWS console and access it as a PARAM.
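For reference, a minimal sketch of the same lookup in Python with boto3 (the region and environment name are placeholders); the Java SDK flow shown in the question above is analogous:

import boto3

# Placeholder region and environment name; adjust to your deployment.
client = boto3.client('elasticbeanstalk', region_name='us-west-2')

response = client.describe_environments(EnvironmentNames=['my-env-name'])
for env in response['Environments']:
    # VersionLabel is the application version currently deployed to this environment.
    print('{0} is running version {1}'.format(env['EnvironmentName'], env['VersionLabel']))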