I want to create a directory on EFS using boto3. Is it possible?
Are there any possible workarounds for this using Python scripts?
Thanks for the help in advance.
Boto3 is not able to do this; the EFS API manages file systems, not their contents. Just mount the file system and use mkdir instead.
I used a Lambda function with EFS mounted on it, together with a boto3 script on my local system that invokes the Lambda; the Lambda function then creates the directory.
A solution for future readers of this post: here is the Lambda code.
You can change the path to whatever path you mounted on the Lambda.
import json
import os

def lambda_handler(event, context):
    user_name = event["user_name"]
    path = f"/mnt/home/{user_name}"
    try:
        os.mkdir(path)
    except FileExistsError:
        # The directory is already there; nothing to do.
        print(f"directory with user_name {user_name} already exists")
    print(os.listdir("/mnt/home/"))
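For completeness, here is a sketch of the local boto3 script that invokes the Lambda, as described above. The function name and payload value are placeholders, not part of the original answer; this assumes the Lambda is deployed in us-east-1.

import json
import boto3

# Hypothetical function name; replace with the name of your deployed Lambda.
client = boto3.client("lambda", region_name="us-east-1")

response = client.invoke(
    FunctionName="create-efs-directory",
    Payload=json.dumps({"user_name": "alice"}),  # becomes event["user_name"]
)
print(response["Payload"].read())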
I'm unsure why boto3 can't find my EC2 instance.
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')
instance = conn.get_all_instances(["i-0d8d1c65cba1e9066"])[0].instances[0]
This gives me the error:
The instance ID 'i-0d8d1c65cba1e9066' does not exist
even though the instance shows up in the AWS console.
I think you are using the old boto library; you need to use boto3 instead.
py -m pip install boto3  # to install boto3 on your machine
Here is the snippet:
import boto3
ec2_client = boto3.client("ec2", region_name="us-east-1")
response = ec2_client.describe_instances(InstanceIds=["i-0d8d1c65cba1e9066"])
print(response)
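If the call succeeds, the instance details are nested under Reservations in the response; a short sketch of pulling out a couple of fields (the key names follow the documented describe_instances response shape):

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])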
Hope this helps. Mark it as answered after testing.
I have a Python (3.8) Lambda function connected to EFS, mounted at /mnt/my-mount.
I want to run a bash script from the function, so I created another file, script.sh.
This is the Python function:
import json
import os

def lambda_handler(event, context):
    os.system("sh script.sh")
and the bash script, script.sh:
#!/bin/bash
touch hello.txt
and I get the following error:
cannot touch script.sh: Read-only file system
Notes:
I can create a file from the Python function itself (with f.write).
If I run os.system("chmod 777 a.sh"), I again get Read-only file system.
If I use rc = subprocess.call("bash a.sh"), I get No such file or directory: 'bash a.sh'.
The EFS has an access point for user 1000:1000 with 777 permissions.
This could be due to the Lambda execution role or the EFS access point lacking write permissions; you should attach the proper policy (for example, one allowing the elasticfilesystem:ClientMount and elasticfilesystem:ClientWrite actions) or define your own. Note also that touch hello.txt writes to the script's current working directory, which in Lambda is the read-only /var/task; the script should write to an absolute path under the mount instead.
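As a minimal sketch of the filesystem side, assuming the script is deployed with the function code in /var/task and the access point is mounted at /mnt/my-mount (both taken from the question), the handler could look like this:

import subprocess

def lambda_handler(event, context):
    # Pass the command as an argument list: a single string such as
    # "bash a.sh" is treated as one executable name, which is what
    # produced "No such file or directory: 'bash a.sh'".
    rc = subprocess.call(["bash", "/var/task/script.sh"])
    return {"returncode": rc}

with script.sh writing to an absolute path on the mount, e.g. touch /mnt/my-mount/hello.txt.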
I have Python code placed in S3.
That Python code reads an Excel file, also placed in S3, as its source and does some transformations.
I have created a Lambda function that is triggered by a PUT event on S3 (whenever the source file is placed in the S3 folder).
The requirement is to run that Python code from the same Lambda function, or to have the Python code configured within the same Lambda function.
Thanks in advance.
You can download the Python code to the /tmp/ temporary storage in Lambda, then import the file inside your code using an import statement. Make sure the import statement is executed only after you have downloaded the file to /tmp/.
You can also have a look here to see other methods of running a new script from within a script.
EDIT:
Here's how you can download to /tmp/
import boto3
s3 = boto3.client('s3')
s3.download_file('bucket_name','filename.py','/tmp/filename.py')
Make sure your Lambda role has permission to access S3.
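Putting the pieces together, a minimal sketch of the whole handler; the bucket name, the transform.py module name, and its run() entry point are placeholders, not part of the original question:

import sys
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Download the script first, then make /tmp/ importable.
    s3.download_file('bucket_name', 'transform.py', '/tmp/transform.py')
    sys.path.insert(0, '/tmp/')
    import transform  # the import must come after the download
    transform.run()   # hypothetical entry point in the downloaded code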
I am creating a DeepLens project to recognise people when one of a select group of people is scanned by the camera.
The project uses a Lambda function, which processes the images and calls the AWS Rekognition API.
When I trigger the API from my local machine, I get a good response.
When I trigger the API from the AWS console, I get a failed response.
Problem
After much digging, I found that boto3 (the AWS Python library) is at version:
1.9.62 on my local machine
1.8.9 on the AWS console
Question
Can I upgrade the boto3 library version in the AWS Lambda console? If so, how?
If you don't want to package a more recent boto3 version with your function, you can download boto3 with each invocation of the Lambda. Remember that /tmp/ is the only directory that Lambda will let you write to, so you can use it to temporarily download boto3:
import sys
from pip._internal import main

# Install boto3 into /tmp/ at invocation time, then put /tmp/ first on the
# path so the freshly installed version shadows the built-in one.
main(['install', '-I', '-q', 'boto3', '--target', '/tmp/', '--no-cache-dir', '--disable-pip-version-check'])
sys.path.insert(0, '/tmp/')

import boto3
from botocore.exceptions import ClientError

def handler(event, context):
    print(boto3.__version__)
You can achieve the same with either a Python function with dependencies or with a virtual environment.
These are the available options; beyond that, you can also try contacting the Amazon team to ask whether they can help you with the upgrade.
I know you're asking for a solution through the console, but as far as I know this is not possible.
To solve this you need to provide the boto3 version you require to your Lambda (either with the solution from user1998671 or with what Shivang Agarwal proposed). A third option is to provide the required boto3 version as a layer for the Lambda. The big advantage of a layer is that you can re-use it across all your Lambdas.
This can be achieved by following the guide from AWS (the steps below are mainly copied from that guide):
IMPORTANT: Make sure to replace boto3-mylayer with a name of your choice.
Create a lib folder by running the following command:
LIB_DIR=boto3-mylayer/python
mkdir -p $LIB_DIR
Install the library to LIB_DIR by running the following command:
pip3 install boto3 -t $LIB_DIR
Zip all the dependencies to /tmp/boto3-mylayer.zip by running the following command:
cd boto3-mylayer
zip -r /tmp/boto3-mylayer.zip .
Publish the layer by running the following command:
aws lambda publish-layer-version --layer-name boto3-mylayer --zip-file fileb:///tmp/boto3-mylayer.zip
The command returns the new layer's Amazon Resource Name (ARN), similar to the following one:
arn:aws:lambda:region:$ACC_ID:layer:boto3-mylayer:1
To attach this layer to your Lambda, execute the following:
aws lambda update-function-configuration --function-name <name-of-your-lambda> --layers <layer ARN>
To verify the boto3 version in your Lambda you can simply add the following two print statements (they require import boto3 and import botocore at the top):
print(boto3.__version__)
print(botocore.__version__)
I have master/worker EC2 instances that I'm using for Grinder tests. I need to try out a load test that fetches files directly from an S3 bucket, but I'm not sure how that would look in Jython for the Grinder test script.
Any ideas or tips? I've looked into it a little and saw that Python has the boto package for working with AWS. Would that work in Jython as well?
(Edit - adding code and import errors for clarification.)
Python approach:
Did "pip install boto3"
Test script:
from net.grinder.script.Grinder import grinder
from net.grinder.script import Test
import boto3
# boto3 for Python

test1 = Test(1, "S3 request")
resource = boto3.resource('s3')

def accessS3():
    obj = resource.Object(<bucket>, <key>)

test1.record(accessS3)

class TestRunner:
    def __call__(self):
        accessS3()
The error for this is:
net.grinder.scriptengine.jython.JythonScriptExecutionException: : No module named boto3
Java approach:
Added aws-java-sdk-1.11.221 jar from .m2\repository\com\amazonaws\aws-java-sdk\1.11.221\ to CLASSPATH
from net.grinder.script.Grinder import grinder
from net.grinder.script import Test
import com.amazonaws.services.s3 as s3
# aws s3 for Java

test1 = Test(1, "S3 request")
s3Client = s3.AmazonS3ClientBuilder.defaultClient()
test1.record(s3Client)

class TestRunner:
    def __call__(self):
        result = s3Client.getObject(s3.model.getObjectRequest(<bucket>, <key>))
The error for this is:
net.grinder.scriptengine.jython.JythonScriptExecutionException: : No module named amazonaws
I'm also running things on a Windows computer, but I'm using Git Bash.
Given that you are using Jython, I'm not sure whether you want to execute the S3 request with Java or Python syntax.
However, I would suggest following along with the Python guide at the link below.
http://docs.ceph.com/docs/jewel/radosgw/s3/python/
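That guide uses the older boto library rather than boto3. As a rough sketch of what the Grinder script could look like in that style (the credentials, bucket, and key are placeholders, and boto must be installed somewhere on Jython's module path):

from net.grinder.script.Grinder import grinder
from net.grinder.script import Test
import boto

test1 = Test(1, "S3 request")

# Placeholder credentials; in practice load them from configuration.
conn = boto.connect_s3(
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

def access_s3():
    bucket = conn.get_bucket("your-bucket")
    key = bucket.get_key("your-key")
    data = key.get_contents_as_string()

test1.record(access_s3)

class TestRunner:
    def __call__(self):
        access_s3()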