I'm unsure why boto3 can't find my EC2 instance.
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')
instance = conn.get_all_instances(["i-0d8d1c65cba1e9066"])[0].instances[0]
This gives me the error:
The instance ID 'i-0d8d1c65cba1e9066' does not exist
In AWS (console screenshot of the instance):
I think you are using the old boto library. You need to use boto3 instead.
py -m pip install boto3 # to install boto3 on your machine
Here is the snippet.
import boto3
ec2_client = boto3.client("ec2", region_name="us-east-1")
response = ec2_client.describe_instances(InstanceIds=["i-0d8d1c65cba1e9066"])
print(response)
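If you only need specific fields, you can pull them out of the response dictionary returned by describe_instances; for example, assuming the call above succeeds:

# Instance details live under Reservations -> Instances in the response
instance = response["Reservations"][0]["Instances"][0]
print(instance["InstanceId"], instance["State"]["Name"])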
Hope this helps. Mark it as answered after testing.
Related
I don't know if this will be possible in the first place.
The requirement is to launch an EC2 instance using the AWS SDK (I know this is possible) based on some application logic.
Then I want to install some application on the newly launched instance, let's say Docker.
Is this possible using the SDK? Or is my idea itself wrong and is there a better solution for this scenario?
Can I run a command on a running instance using the SDK?
Yes, you can install anything on EC2 when it is launched by providing a script/commands in the user data section. This is also possible from the AWS SDK: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_UserData.html
You can pass a command like yum install docker in the user data, as sketched below:
UserData='yum install docker'
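A minimal sketch of launching an instance with user data via boto3; the AMI ID, instance type, and the user data script below are placeholders, not values from the question:

import boto3

ec2_client = boto3.client("ec2", region_name="us-east-1")

# Hypothetical AMI ID; replace with one that exists in your account and region
response = ec2_client.run_instances(
    ImageId="ami-xxxxxxxx",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    # User data runs once at first boot, as root
    UserData="#!/bin/bash\nyum install -y docker\nservice docker start",
)
print(response["Instances"][0]["InstanceId"])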
Installing applications by running commands on an already running instance is possible using boto3 via SSM (AWS Systems Manager).
import boto3

ssm_client = boto3.client('ssm')

# Send a shell command to the target instance via SSM
response = ssm_client.send_command(
    InstanceIds=['i-xxxxxxxx'],
    DocumentName="AWS-RunShellScript",
    Parameters={'commands': ['echo "abc" > 1.txt']},
)
command_id = response['Command']['CommandId']

# Fetch the result of the command invocation
output = ssm_client.get_command_invocation(
    CommandId=command_id,
    InstanceId='i-xxxxxxxx',
)
print(output)
The SSM client runs commands on one or more managed instances (the instances need the SSM agent and an instance profile that allows Systems Manager).
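Note that get_command_invocation may be called before the command has finished; a minimal polling sketch, with an arbitrary sleep interval:

import time

# Poll until the invocation reaches a terminal state before reading its output
while True:
    output = ssm_client.get_command_invocation(
        CommandId=command_id,
        InstanceId='i-xxxxxxxx',
    )
    if output['Status'] not in ('Pending', 'InProgress', 'Delayed'):
        break
    time.sleep(2)

print(output['Status'])
print(output['StandardOutputContent'])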
You can also wrap this in a function:

def execute_commands_on_linux_instances(ssm_client, commands, instance_ids):
    # AWS-RunShellScript is a preconfigured SSM document
    response = ssm_client.send_command(
        DocumentName="AWS-RunShellScript",
        Parameters={'commands': commands},
        InstanceIds=instance_ids,
    )
    return response

ssm_client = boto3.client('ssm')
commands = ['ifconfig']
instance_ids = ['i-xxxxxxxx']
execute_commands_on_linux_instances(ssm_client, commands, instance_ids)
I want to create a directory in EFS using boto3. Is it possible?
Are there any possible workarounds for this using python scripts?
Thanks for your help in advance.
Boto3 is not able to do this. Just mount the file system and use mkdir instead.
I mounted EFS on a Lambda function and used a boto3 script from my local system that invokes the Lambda; the Lambda function then creates the directory (the invoking script is sketched after the Lambda code below).
A solution for future readers of this post: here is the Lambda code.
You can change the path to whatever path you mounted on the Lambda.
import json
import os

def lambda_handler(event, context):
    # The EFS file system is mounted under /mnt/home on this function
    user_name = event["user_name"]
    path = f"/mnt/home/{user_name}"
    try:
        os.mkdir(path)
    except FileExistsError:
        print(f"directory with user_name {user_name} already exists")
    print(os.listdir("/mnt/home/"))
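For completeness, a sketch of the local boto3 script that invokes the Lambda; the function name and payload value are assumptions based on the handler above:

import json
import boto3

lambda_client = boto3.client("lambda")

# "create-efs-directory" is a hypothetical function name; use your own
response = lambda_client.invoke(
    FunctionName="create-efs-directory",
    Payload=json.dumps({"user_name": "alice"}),
)
print(response["Payload"].read())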
I am trying to create a model quality monitoring job using the ModelQualityMonitor class from SageMaker model_monitor, and I think I have all the import statements defined, yet I get a "cannot import name" error.
from sagemaker import get_execution_role, session, Session
from sagemaker.model_monitor import ModelQualityMonitor

role = get_execution_role()
session = Session()

model_quality_monitor = ModelQualityMonitor(
    role=role,
    instance_count=1,
    instance_type='ml.m5.xlarge',
    volume_size_in_gb=20,
    max_runtime_in_seconds=1800,
    sagemaker_session=session
)
Any pointers are appreciated
Are you using an Amazon SageMaker Notebook? When I run your code above in a new conda_python3 Amazon SageMaker notebook, I don't get any errors at all.
(Example screenshot output showing no errors.)
If you're getting something like NameError: name 'ModelQualityMonitor' is not defined then I suspect you are running in a Python environment that doesn't have the Amazon SageMaker SDK installed in it. Perhaps try running pip install sagemaker and then see if this resolves your error.
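One quick way to check, assuming you can run Python in the same environment as the failing code:

# If this import fails, install the SDK with: pip install sagemaker
import sagemaker
print(sagemaker.__version__)

from sagemaker.model_monitor import ModelQualityMonitor
print(ModelQualityMonitor)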
I've written a function that uses PyHive to read from Hive. Running it locally works fine. However, when trying to use it in a Lambda function I got the error:
"Could not start SASL: b'Error in sasl_client_start (-4) SASL(-4): no mechanism available: No worthy mechs found'"
I've tried to use the guidelines in this link:
https://github.com/cloudera/impyla/issues/201
However, I wasn't able to run this command:
yum install cyrus-sasl-lib cyrus-sasl-gssapi cyrus-sasl-md5
since the system I was using to build on is Ubuntu, which doesn't have yum.
I tried to install those packages (using apt-get):
sasl2-bin libsasl2-2 libsasl2-dev libsasl2-modules libsasl2-modules-gssapi-mit
as described in:
python cannot connect hiveserver2
But still no luck. Any ideas?
Thanks,
Nir.
You can follow this GitHub issue. I am able to connect to HiveServer2 with LDAP authentication using the pyhive library in AWS Lambda with Python 2.7. What I did to make it work:
Take one EC2 instance, or launch a container, with the AMI used by Lambda.
Run the following commands to install the required dependencies:
yum upgrade
yum install gcc
yum install gcc-c++
sudo yum install cyrus-sasl cyrus-sasl-devel cyrus-sasl-ldap # include the cyrus-sasl dependency for the authentication mechanism you use to connect to Hive
pip install six==1.12.0
Bundle up /usr/lib64/sasl2/ into the Lambda package and set os.environ['SASL_PATH'] = os.path.join(os.getcwd(), 'path/to/sasl2'). Verify that the .so files are present at the os.environ['SASL_PATH'] path.
My Lambda code looks like:
from pyhive import hive
import logging
import os

os.environ['SASL_PATH'] = os.path.join(os.getcwd(), 'lib/sasl2')

log = logging.getLogger()
log.setLevel(logging.INFO)
log.info('Path: %s', os.environ['SASL_PATH'])

def lambda_handler(event, context):
    cursor = hive.connect(host='hiveServer2Ip', port=10000, username='userName',
                          auth='LDAP', password='password').cursor()
    SHOW_TABLE_QUERY = "show tables"
    cursor.execute(SHOW_TABLE_QUERY)
    tables = cursor.fetchall()
    log.info('tables: %s', tables)
    log.info('done')
I have master/worker EC2 instances that I'm using for Grinder tests. I need to try out a load test that directly gets files from an S3 bucket, but I'm not sure how that would look in Jython for the Grinder test script.
Any ideas or tips? I've looked into it a little and saw that Python has the boto package for working with AWS - would that work in Jython as well?
(Edit - adding code and import errors for clarification.)
Python approach:
Did "pip install boto3"
Test script:
from net.grinder.script.Grinder import grinder
from net.grinder.script import Test
import boto3
# boto3 for Python

test1 = Test(1, "S3 request")
resource = boto3.resource('s3')

def accessS3():
    obj = resource.Object(<bucket>, <key>)

test1.record(accessS3)

class TestRunner:
    def __call__(self):
        accessS3()
The error for this is:
net.grinder.scriptengine.jython.JythonScriptExecutionException: : No module named boto3
Java approach:
Added aws-java-sdk-1.11.221 jar from .m2\repository\com\amazonaws\aws-java-sdk\1.11.221\ to CLASSPATH
from net.grinder.script.Grinder import grinder
from net.grinder.script import Test
import com.amazonaws.services.s3 as s3
# aws s3 for Java

test1 = Test(1, "S3 request")
s3Client = s3.AmazonS3ClientBuilder.defaultClient()
test1.record(s3Client)

class TestRunner:
    def __call__(self):
        result = s3Client.getObject(s3.model.GetObjectRequest(<bucket>, <key>))
The error for this is:
net.grinder.scriptengine.jython.JythonScriptExecutionException: : No module named amazonaws
I'm also running things on a Windows computer, but I'm using Git Bash.
Given that you are using Jython, I'm not sure whether you want to execute the S3 request with Java or Python syntax.
However, I would suggest following along with the Python guide at the link below.
http://docs.ceph.com/docs/jewel/radosgw/s3/python/
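Along the lines of that guide, a minimal sketch using the classic boto library to fetch an object from S3; the credentials, bucket, and key are placeholders, and I haven't verified that boto imports cleanly under Jython:

import boto

# Placeholder credentials; prefer environment variables or an IAM role in practice
conn = boto.connect_s3(
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_KEY',
)
bucket = conn.get_bucket('your-bucket-name')
key = bucket.get_key('your/object/key')
print(key.get_contents_as_string())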