OpenSearch authentication with opensearch-py on AWS Lambda

I am trying to connect to an AWS OpenSearch domain from AWS Lambda using the opensearch-py Python client (development purposes, non-production).
I was trying the following:
from opensearchpy import OpenSearch
import boto3
from requests_aws4auth import AWS4Auth
import os
import config

my_region = os.environ['AWS_REGION']
service = 'es'  # still es???
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, my_region, service, session_token=credentials.token)
openSearch_endpoint = config.openSearch_endpoint
# something wrong here:
openSearch_client = OpenSearch(hosts=[openSearch_endpoint], auth=awsauth)
as per the following resources:
https://aws.amazon.com/blogs/database/indexing-metadata-in-amazon-elasticsearch-service-using-aws-lambda-and-python/
https://docs.aws.amazon.com/opensearch-service/latest/developerguide/search-example.html
but it does not work: authentication fails with "errorMessage": "AuthorizationException(403, '')". However, if I don't use the Python client but simply go through requests instead:
import requests

host = config.openSearch_endpoint
url = host + '/' + '_cat/indices?v'
# this one works:
r = requests.get(url, auth=awsauth)

then my Lambda function does communicate with the OpenSearch domain.
I consulted the OpenSearch() documentation, but it is not clear to me how its parameters map to boto3 session credentials and/or to AWS4Auth. So what should this line
openSearch_client = OpenSearch(hosts=[openSearch_endpoint], auth=awsauth)
be?

Actually, I managed to find the solution a couple of hours later:
from opensearchpy import OpenSearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth
import boto3
import os
import config

my_region = os.environ['AWS_REGION']
service = 'es'  # yes, the signing service name is still 'es' for OpenSearch Service
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, my_region, service, session_token=credentials.token)
host = config.openSearch_endpoint

openSearch_client = OpenSearch(
    hosts=[host],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    ssl_assert_hostname=False,
    ssl_show_warn=False,
    connection_class=RequestsHttpConnection
)
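To verify the client is authenticating correctly, a quick smoke test (a sketch reusing the openSearch_client built above) is to call the same _cat/indices endpoint through the client:

# should now return the same data as the signed requests.get() call above
print(openSearch_client.info())
print(openSearch_client.cat.indices(format='json'))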

OpenSearch Service accepts incoming requests on port 443, so you need to add a new inbound rule to the security group attached to your OpenSearch Service domain.
Try the rule:
Type: HTTPS
Protocol: TCP
Port range: 443
Source: 0.0.0.0/0 (Anywhere-IPv4)
Additionally, the domain's resource-based access policy must allow your Lambda function's execution role to perform requests against your OpenSearch Service domain.
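A minimal sketch of such a domain access policy (the account ID, role name, region, and domain name below are placeholders, not values from the question):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/my-lambda-execution-role"
      },
      "Action": "es:ESHttp*",
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/*"
    }
  ]
}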

Related

AWS SES from Lambda (works with Python, not with .NET)

I have a .NET Lambda that sends emails via SES. I have a VPC endpoint (VPCe) for SES, coupled with all of the required port, subnet & ACL config.
I created a Python Lambda to do exactly the same as my .NET Lambda. It's joined to the same subnets, uses the same security group & the same SMTP user and password.
The Python code works fine, connects to SES and sends the emails without fail, but my .NET Lambda doesn't.
The last thing in my logs is:
ConnectAsync is going to be called
When calling the URI/path it says "Service Unavailable, 500."
I know the code works, as when it's in debug on my dev machine pointing at the public SES endpoint, it works fine.
I've also tried NAT gateways as an alternative (same issue), but I've settled on the VPCe as I know my Python Lambda works fine with it.
.NET:
try
{
    using var smtp = new SmtpClient();
    LambdaLogger.Log($"smtp ConnectAsync is going to be called for host {Configuration["SmtpHost"]} and port {Configuration["SmtpPort"].To<int>()} ");
    //await smtp.ConnectAsync(Configuration["SmtpHost"], Configuration["SmtpPort"].To<int>(), SecureSocketOptions.StartTlsWhenAvailable);
    await smtp.ConnectAsync("vpce-0af6f6a54b8c6d652-ltbm2mcx.email-smtp.eu-west-2.vpce.amazonaws.com", 587, SecureSocketOptions.StartTlsWhenAvailable);
    LambdaLogger.Log($"SMTP connection created on {Configuration["SmtpHost"]} and port {Configuration["SmtpPort"].To<int>()}.");
    await smtp.AuthenticateAsync(Configuration["SESUserName"], Configuration["SESPassword"]);
    LambdaLogger.Log($"User auth completed for user .");
    await smtp.SendAsync(email);
    LambdaLogger.Log("email sent.");
    emailMessage.isDeleted = true;
    _applicationContext.EmailMessages.Update(emailMessage);
    smtp.Disconnect(true);
}
catch (Exception ex)
{
    emailMessage.Retries++; // note: "Retries = Retries++" would leave the value unchanged
    _applicationContext.EmailMessages.Update(emailMessage);
    LambdaLogger.Log($"Email Sending Error: {ex.ToFullMessage()}");
}
await _applicationContext.SaveChangesAsync();
Python:
import json
import smtplib
from email.mime.text import MIMEText

def lambda_handler(event, context):
    sender = '<removed>'
    receivers = ['<removed>']
    port = 587
    msg = MIMEText('This is test mail')
    msg['Subject'] = 'Test mail'
    msg['From'] = '<removed>'
    msg['To'] = '<removed>'
    with smtplib.SMTP('vpce-0af6f6a54b8c6d652-ltbm2mcx.email-smtp.eu-west-2.vpce.amazonaws.com', port) as server:
        server.starttls()
        server.login('<removed>', '<removed>')
        server.sendmail(sender, receivers, msg.as_string())
        print("Successfully sent email")

Connect to RabbitMQ instance remotely (created on AWS)

I'm having trouble connecting to a RabbitMQ instance (it's my first time doing so). I've spun one up on AWS and been given an admin panel, which I'm able to access.
I'm trying to connect to the RabbitMQ server in python/pika with the following code:
import pika
import logging

logging.basicConfig(level=logging.DEBUG)

credentials = pika.PlainCredentials('*******', '**********')
parameters = pika.ConnectionParameters(
    host='a-25c34e4d-a3eb-32de-abfg-l95d931afc72f.mq.us-west-1.amazonaws.com',
    port=5671,
    virtual_host='/',
    credentials=credentials,
)
connection = pika.BlockingConnection(parameters)
I get pika.exceptions.IncompatibleProtocolError: StreamLostError: ("Stream connection lost: ConnectionResetError(54, 'Connection reset by peer')",) when I run the above.
You're trying to connect over plain AMQP, but AWS only accepts AMQPS (AMQP over TLS, which is what port 5671 serves). You should add ssl_options to your connection parameters, like this:
import ssl
import pika
import logging

logging.basicConfig(level=logging.DEBUG)

credentials = pika.PlainCredentials('*******', '**********')
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
parameters = pika.ConnectionParameters(
    host='a-25c34e4d-a3eb-32de-abfg-l95d931afc72f.mq.us-west-1.amazonaws.com',
    port=5671,
    virtual_host='/',
    credentials=credentials,
    ssl_options=pika.SSLOptions(context)
)
connection = pika.BlockingConnection(parameters)
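Once the connection succeeds, a minimal publish round trip looks like this (the queue name is just an example):

channel = connection.channel()
channel.queue_declare(queue='test-queue', durable=True)  # example queue name
channel.basic_publish(exchange='', routing_key='test-queue', body='hello from pika')
connection.close()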

Input data to AWS Elasticsearch using boto3 or es library

I have a lot of data that I want to send to AWS Elasticsearch. Looking at the AWS documentation at https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-gsg-upload-data.html, it uses curl -XPUT. However, I want to use Python to do this, so I've looked into the boto3 documentation but cannot find a way to input data.
In https://boto3.amazonaws.com/v1/documentation/api/1.9.42/reference/services/es.html I cannot see any method that inserts data.
This seems like a very basic job. Any help?
You can send the data to Elasticsearch using its HTTP interface; the boto3 es client only manages the domain itself and has no methods for indexing documents. Here is the code, sourced from
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-request-signing.html
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth
import boto3

host = ''  # For example, my-test-domain.us-east-1.es.amazonaws.com
region = ''  # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)

es = Elasticsearch(
    hosts=[{'host': host, 'port': 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection
)

document = {
    "title": "Moneyball",
    "director": "Bennett Miller",
    "year": "2011"
}

es.index(index="movies", doc_type="_doc", id="5", body=document)
print(es.get(index="movies", doc_type="_doc", id="5"))
EDIT
To confirm whether the data was pushed to your Elasticsearch domain under the given index, you can do an HTTP GET, replacing the domain and index name accordingly:
search-my-domain.us-west-1.es.amazonaws.com/movies/_search
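Since the question mentions a lot of data, note that the same client also works with the bulk helper, which is much faster than calling es.index() per document. A sketch, assuming documents is an iterable of dicts shaped like the document above:

from elasticsearch.helpers import bulk

# one action per document; the "movies" index name is just an example
actions = (
    {"_index": "movies", "_type": "_doc", "_id": str(i), "_source": doc}
    for i, doc in enumerate(documents)
)
success, errors = bulk(es, actions)
print("indexed %d documents" % success)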

Copy AMI to another region using boto3

I'm trying to automate "Copy AMI" functionality I have on my AWS EC2 console, can anyone point me to some Python code that does this through boto3?
From EC2 — Boto 3 documentation:
response = client.copy_image(
    ClientToken='string',
    Description='string',
    Encrypted=True|False,
    KmsKeyId='string',
    Name='string',
    SourceImageId='string',
    SourceRegion='string',
    DryRun=True|False
)
Make sure you send the request to the destination region, passing in a reference to the SourceRegion.
To be more precise:
Let's say the AMI you want to copy is in us-east-1 (the source region), and your requirement is to copy it into us-west-2 (the destination region).
Create the boto3 EC2 client in the us-west-2 region and then pass us-east-1 as the SourceRegion.
import boto3

session1 = boto3.client('ec2', region_name='us-west-2')
response = session1.copy_image(
    Name='DevEnv_Linux',
    Description='Copied this AMI from region us-east-1',
    SourceImageId='ami-02a6ufwod1f27e11',
    SourceRegion='us-east-1'
)
I use high-level resources like EC2.ServiceResource most of the time, so the following is the code I use, combining the EC2 resource and the low-level client:
import boto3

source_image_id = '....'
profile = '...'

source_region = 'us-west-1'
source_session = boto3.Session(profile_name=profile, region_name=source_region)
ec2 = source_session.resource('ec2')
ami = ec2.Image(source_image_id)

target_region = 'us-east-1'
target_session = boto3.Session(profile_name=profile, region_name=target_region)
target_ec2 = target_session.resource('ec2')
target_client = target_session.client('ec2')

response = target_client.copy_image(
    Name=ami.name,
    Description=ami.description,
    SourceImageId=ami.id,
    SourceRegion=source_region
)
target_ami = target_ec2.Image(response['ImageId'])
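Note that copy_image returns as soon as the copy starts, not when it finishes; if subsequent steps need the new AMI, one option (a sketch using the standard image_available waiter) is to block until the copy completes:

# wait until the copied AMI reaches the "available" state in the target region
waiter = target_client.get_waiter('image_available')
waiter.wait(ImageIds=[response['ImageId']])
print('AMI %s is available in %s' % (response['ImageId'], target_region))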

Fetch VPC peering connection ID in boto

I wish to retrieve the VPC peering connection ID in boto, something that would be done by aws ec2 describe-vpc-peering-connections. I couldn't locate its boto equivalent. Is it possible to retrieve it in boto?
boto3 is different from the previous boto. Here is the solution in boto3:
import boto3
prevar = boto3.client('ec2')
var1 = prevar.describe_vpc_peering_connections()
print(var1)
In boto you would use get_all_vpc_peering_connections() on a VPC connection. The snippet below first creates a peering connection between two VPCs:
import boto.vpc

c = boto.vpc.connect_to_region('us-east-1')
vpcs = c.get_all_vpcs()
vpc_peering_connection = c.create_vpc_peering_connection(vpcs[0].id, vpcs[1].id)
Get all VPC peering IDs:
import boto.vpc

conn = boto.vpc.connect_to_region('us-east-1')
vpcpeering = conn.get_all_vpc_peering_connections()
for peering in vpcpeering:
    print(peering.id)
If you know the accepter VPC ID and the requester VPC ID, you can get the VPC peering ID this way:
import boto.vpc

conn = boto.vpc.connect_to_region('us-east-1')
peering = conn.get_all_vpc_peering_connections(
    filters={'accepter-vpc-info.vpc-id': 'vpc-12345abc',
             'requester-vpc-info.vpc-id': 'vpc-cba54321'})[0]
print(peering.id)
If that's the only VPC peering connection in your environment, an easier way:
import boto.vpc

conn = boto.vpc.connect_to_region('us-east-1')
peering = conn.get_all_vpc_peering_connections()[0]
print(peering.id)
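For completeness, the boto3 equivalent of the filtered lookup (reusing the same example VPC IDs) would be along these lines:

import boto3

ec2 = boto3.client('ec2')
response = ec2.describe_vpc_peering_connections(
    Filters=[
        {'Name': 'accepter-vpc-info.vpc-id', 'Values': ['vpc-12345abc']},
        {'Name': 'requester-vpc-info.vpc-id', 'Values': ['vpc-cba54321']},
    ]
)
for pcx in response['VpcPeeringConnections']:
    print(pcx['VpcPeeringConnectionId'])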