I have set up a new Elasticsearch cluster on AWS that only allows access to a specific IAM user. However, I'm trying to connect to it from Ruby, and I looked at using the AWS SDK, but that has no methods for actually making HTTP operations against your ES cluster, only for accessing the configuration APIs.
As usual, this requires all the AWS request signing they require for API access, but I can't find anything that explains how to do it from Ruby.
Essentially, what I'm after is being able to make GET and PUT requests to this cluster using the IAM user's credentials. IP restriction isn't an option for me.
You can make signed, secure requests to Amazon Elasticsearch from Ruby. I did the following with an app on Heroku.
Ensure you have the elasticsearch gem >= v1.0.15, as support for this was only added there on Dec 4th, 2015.
You also need this gem:
gem 'faraday_middleware-aws-signers-v4'
Example from the elasticsearch-ruby/elasticsearch-transport documentation:
You can use any standard Faraday middleware and plugins in the configuration block, for example, to sign the requests for the AWS Elasticsearch service:
With the following code:
require 'faraday_middleware/aws_signers_v4'

client = Elasticsearch::Client.new(url: ENV['AWS_ENDPOINT_URL']) do |f|
  f.request :aws_signers_v4,
            credentials: Aws::Credentials.new(ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY']),
            service_name: 'es',
            region: 'us-east-1'

  f.adapter Faraday.default_adapter
end
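With the signed client in place, the GET and PUT operations from the question are plain client calls. A quick sketch (the index, type, and document values here are made up for illustration):

# PUT a document into a hypothetical "articles" index:
client.index index: 'articles', type: 'article', id: 1, body: { title: 'Test' }

# GET it back:
response = client.get index: 'articles', type: 'article', id: 1
puts response['_source']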
This also works with the searchkick gem in Rails. Set Searchkick.client using the above example in an initializer:

# config/initializers/elasticsearch.rb
require 'faraday_middleware/aws_signers_v4'

Searchkick.client = Elasticsearch::Client.new(url: ENV['AWS_ENDPOINT_URL']) do |f|
  f.request :aws_signers_v4,
            credentials: Aws::Credentials.new(ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY']),
            service_name: 'es',
            region: 'us-east-1'

  f.adapter Faraday.default_adapter
end
Related
I'm currently trying to connect to my enterprise S3 URL (which is not Amazon Web Services) using boto3, and I get the following error:
EndpointConnectionError: Could not connect to the endpoint URL: "https://s3.fr-par.amazonaws.com/my_buket....", which is absolutely not the endpoint given in the code.
import boto3

s3 = boto3.resource(service_name='s3',
                    aws_access_key_id='XXXXXX',
                    aws_secret_access_key='YYYYYYY',
                    endpoint_url='https://my_buket.s3.my_region.my_company_enpoint_url')

my_bucket = s3.Bucket(s3_bucket_name)
bucket_list = []
for file in my_bucket.objects.filter(Prefix='boston.csv'):
    bucket_list.append(file.key)
As can be seen in the error, boto3 tries to connect to an amazonaws URL, which is not that of my enterprise. Finally, I want to point out that I am able to connect to my enterprise S3 using MinIO (https://docs.min.io/), which indicates there are no errors in the aws_access_key_id, aws_secret_access_key, and endpoint_url I use with boto3.
I have executed the code in a Python 3.9 environment (Boto3 version 1.22.1), an Anaconda 3.9 environment (Boto3 version 1.22.0), and a Jupyter notebook, always with the same error. The OS is Ubuntu 20.04.4 LTS, virtualized on Oracle VM VirtualBox.
https://my_buket.s3.my_region.my_company_enpoint_url is not the endpoint. The list of valid S3 endpoints is in the AWS documentation. But normally you don't have to specify it explicitly; Boto3 will "know" which endpoint to use for each region.
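For a bucket hosted on AWS itself, the region alone is therefore enough; a minimal sketch (the region name is chosen arbitrarily):

import boto3

# No endpoint_url: boto3 derives the regional endpoint
# (here https://s3.eu-west-3.amazonaws.com) from the region name.
s3 = boto3.resource('s3', region_name='eu-west-3')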
Since some people seem to have the same problem, I'm posting the solution I found.
For some reason the code in the question still doesn't work for me. Instead, I point to my enterprise's S3 by first creating a session and then creating the resource and client from it. Note that no bucket is indicated in endpoint_url.
Since there is no bucket in endpoint_url, you have access to all buckets associated with the credentials passed, and therefore you need to specify the bucket in the method calls on the resource and client instances.
import boto3

session = boto3.Session(region_name=my_region)

resource = session.resource('s3',
                            endpoint_url='https://s3.my_region.my_company_enpoint_url',
                            aws_access_key_id='XXXXXX',
                            aws_secret_access_key='YYYYYY')

client = session.client('s3',
                        endpoint_url='https://s3.my_region.my_company_enpoint_url',
                        aws_access_key_id='XXXXXX',
                        aws_secret_access_key='YYYYYY')

client.upload_file(path_to_local_file, bucket_name, upload_path,
                   Callback=call,
                   ExtraArgs=ExtraArgs)
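Reads work the same way: the bucket is named in the call, not in the endpoint. A short sketch reusing the resource from above (the bucket name and prefix are placeholders):

# List matching objects from one specific bucket on the custom endpoint.
my_bucket = resource.Bucket(bucket_name)
for obj in my_bucket.objects.filter(Prefix='boston.csv'):
    print(obj.key)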
Problem: Netlify serverless functions run on AWS Lambda, so AWS_ is a reserved prefix in Netlify, meaning I can't use e.g. AWS_SECRET_ACCESS_KEY for my own environment var that I set in the Netlify admin panel.
But the only way I have been able to authenticate Nodemailer with AWS SES (the email service) is with the @aws-sdk packages and their defaultProvider function, which requires process.env.AWS_SECRET_ACCESS_KEY and process.env.AWS_ACCESS_KEY_ID, spelled exactly like that:
import 'dotenv/config'
import nodemailer from 'nodemailer'
import aws from '@aws-sdk/client-ses'
import { defaultProvider } from '@aws-sdk/credential-provider-node'

const ses = new aws.SES({
  apiVersion: '2019-09-29',
  region: 'eu-west-1',
  defaultProvider,
  rateLimit: 1
})

const sesTransporter = nodemailer.createTransport({ SES: { ses, aws } })
When building the function locally with the Netlify CLI, emails are sent. In the live Netlify environment, it fails with a 403 and InvalidClientTokenId: The security token included in the request is invalid.
Netlify doesn't have a solution afaik, but mentions in a forum post that custom env variables in AWS are a thing. I haven't been able to find anything in searches (they didn't provide any links). The AWS docs are pretty unhelpful, as always :/
So the question is, how can this be done?
I thought I was clever when I tried the following, but setting the vars just before creating the SES transport apparently doesn't help:

// Trick Netlify's reserved env vars:
process.env.AWS_ACCESS_KEY_ID = process.env.ACCESS_KEY_ID
process.env.AWS_SECRET_ACCESS_KEY = process.env.SECRET_KEY

console.log('AWS access key id ', process.env.AWS_ACCESS_KEY_ID) // Logs the correct key!
console.log('AWS sec key ', process.env.AWS_SECRET_ACCESS_KEY)   // Logs the correct key!
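One workaround worth noting: the v3 SES client also accepts credentials directly in its constructor, so the reserved prefix can be sidestepped by passing the renamed vars explicitly instead of relying on the default provider chain. A sketch, untested on Netlify; the MY_AWS_* names are hypothetical vars set in the Netlify admin panel:

import nodemailer from 'nodemailer'
import aws from '@aws-sdk/client-ses'

// Hand the renamed env vars straight to the client instead of
// letting the default provider chain look for AWS_* names.
const ses = new aws.SES({
  region: 'eu-west-1',
  credentials: {
    accessKeyId: process.env.MY_AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.MY_AWS_SECRET_ACCESS_KEY,
  },
})

const sesTransporter = nodemailer.createTransport({ SES: { ses, aws } })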
I am trying to work with the new Instance Metadata Service Version 2 (IMDSv2) API.
It works as expected when I try to query the metadata manually as described on Retrieve instance metadata - Amazon Elastic Compute Cloud.
However, if I try to query the instance tags, it fails with the error message:
Couldn't find AWS credentials in environment, credentials file, or IAM role
The tags query is done by the Rusoto SDK that I am using; it works when I set --http-tokens to optional, as described on Configure the instance metadata options - Amazon Elastic Compute Cloud.
I don't fully understand why setting the machine to work with IMDSv2 would affect the DescribeTags request, as I believe it's not using the same API, so I'm guessing that's a side effect.
If I try to do a manual query using curl (instead of using the SDK):
https://ec2.amazonaws.com/?Action=DescribeTags&Filter.1.Name=resource-id&Filter.1.Value.1=ami-1a2b3c4d
I get:
The action DescribeTags is not valid for this web service
Thanks :)
The library that I was using (Rusoto SDK 0.47.0) doesn't support fetching the credentials needed when the host is set to work with IMDSv2.
The workaround was to manually query for the IAM role credentials.
First, get a token (note that this is a PUT, not a GET):
PUT /latest/api/token
Next, use the token header "X-aws-ec2-metadata-token" with the value from the previous step, and query:
GET /latest/meta-data/iam/security-credentials
Then, use the role name returned by the previous query (and don't forget to set the token header), and query:
GET /latest/meta-data/iam/security-credentials/<query 2 result>
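Put together with curl, the whole exchange looks roughly like this (a sketch against the standard metadata address; the TTL value is arbitrary):

# 1. Get a session token (IMDSv2 only hands it out over PUT).
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

# 2. Find the name of the attached IAM role.
ROLE=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/")

# 3. Fetch the temporary credentials for that role.
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE"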
This will provide the following data:
use serde::Deserialize;

#[derive(Deserialize)]
struct SecurityCredentials {
    #[serde(rename = "AccessKeyId")]
    access_key_id: String,
    #[serde(rename = "SecretAccessKey")]
    secret_access_key: String,
    #[serde(rename = "Token")]
    token: String,
}
Then what I needed to do was to build a custom credentials provider using that data (but this part is already lib specific).
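For Rusoto specifically, the rusoto_credential crate's StaticProvider is one way to wrap these values; a minimal sketch, assuming creds is a SecurityCredentials deserialized from the last query:

use rusoto_credential::StaticProvider;

// Wrap the manually fetched values; the session token must be
// included or signed requests will be rejected.
let provider = StaticProvider::new(
    creds.access_key_id,
    creds.secret_access_key,
    Some(creds.token),
    None, // no explicit validity window
);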
I am trying to use the AWS Elasticsearch service. On localhost, I installed Elasticsearch locally, and it is working perfectly. But now I want to use AWS Elasticsearch, and for that I need credentials like these:

'aws' => env('AWS_ELASTICSEARCH_ENABLED', false),
'aws_region' => env('AWS_REGION', ''),
'aws_key' => env('AWS_ACCESS_KEY_ID', ''),
'aws_secret' => env('AWS_SECRET_ACCESS_KEY', ''),
'aws_credentials' => $credentials,

I don't know where to get that information from. Please help me find it.
Bilal, on creating an index in AES (Amazon Elasticsearch Service), you get an endpoint. Simply use that endpoint in your application and send cURL/CLI requests to your index.
Note: in your AES access policy, allow your IP to access the AES endpoint.
How to get your AWS credentials is explained in the AWS documentation page Understanding and getting your AWS credentials.
Further, you can also set up a local AWS profile on your computer, as explained in the AWS CLI configuration guide.
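A local profile boils down to two small files; a minimal sketch with placeholder values:

# ~/.aws/credentials
[default]
aws_access_key_id = XXXXXX
aws_secret_access_key = YYYYYY

# ~/.aws/config
[default]
region = us-east-1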
I enjoy using the AWS SDK without having to specify where to find the credentials; it makes things easier to configure across multiple environments where different types of credentials are available.
The AWS SDK for Ruby searches for credentials [...]
Is there some way I can retrieve the code that does this in order to configure Faraday with AWS? To configure Faraday I need something like:
faraday.request(:aws_sigv4,
                service: 'es',
                credentials: credentials,
                region: ENV['AWS_REGION'])
Now I would love for these credentials to be picked up "automatically", as in the AWS SDK v3. How can I do that?
(i.e. where is the code in the AWS SDK v3 that does something like

credentials = Aws::Credentials.new(ENV['AWS_ACCESS_KEY_ID'], ENV['AWS_SECRET_ACCESS_KEY'])
unless credentials.set?
  credentials = Aws::InstanceProfileCredentials.new
end
...
The class Aws::CredentialProviderChain is the one in charge of resolving the credentials, but it is tagged as @api private, so there's currently no guarantee it will still be there after updates (I've opened a discussion to make it public).
If you're okay with using it, you can resolve credentials like this. I'm going to test it in CI (ENV creds), development (AWS config creds), and staging/production environments (instance profile creds):
Aws::CredentialProviderChain.new.resolve
You can use it in a middleware like this (for example, when configuring Elasticsearch/Faraday):

Elasticsearch::Client.new(url: ENV['AWS_ENDPOINT_URL']) do |faraday|
  faraday.request(:aws_sigv4,
                  service: 'es',
                  credentials: Aws::CredentialProviderChain.new.resolve,
                  region: ENV['AWS_REGION'])
end