I'm trying to create users and roles via the Elasticsearch Python client documented here: https://elasticsearch-py.readthedocs.io/en/v7.14.1/. If I use plain HTTP requests and ignore the certificates, I can reach the application and make requests with the payloads suggested in https://opendistro.github.io/for-elasticsearch-docs/docs/security/access-control/api/. However, I'm trying to use a secure connection to reach Elasticsearch in AWS. According to their documentation at https://docs.aws.amazon.com/opensearch-service/latest/developerguide/request-signing.html#request-signing-python, I should be using the Elasticsearch client like this:
import boto3
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

region = 'my-region-1'
service = 'es'  # the SigV4 service name for Amazon OpenSearch Service domains
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service,
                   session_token=credentials.token)
elasticsearch = Elasticsearch(
    hosts=[{'host': self._host, 'port': 443}],  # self._host holds the domain endpoint
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection
)
I'm using boto3 to create the session and AWS4Auth to get the secure connection. However, I can't find anywhere how to actually send a plain payload to the Elasticsearch security endpoints. For example, for this endpoint:
curl -X PUT http://localhost:443/_opendistro/_security/api/roles/jesperancinha-role -d "{}" (...)
It seems that the client expects an index, and that's not what I'm looking for. I just want to create a role with a payload like this one:
{
  "cluster_permissions" : [
    "indices_monitor"
  ],
  "index_permissions" : [
    {
      "index_patterns" : [
        "*"
      ],
      "dls" : "",
      "fls" : [ ],
      "masked_fields" : [ ],
      "allowed_actions" : [
        "read",
        "indices:monitor/stats"
      ]
    }
  ],
  "tenant_permissions" : [
    {
      "tenant_patterns" : [
        "human_resources"
      ],
      "allowed_actions" : [
        "kibana_all_read"
      ]
    }
  ]
}
It would be great if this could be done via the elasticsearch-py client, but if you have any other idea, please let me know. Thanks!
I hope I didn't get people too confused with my question. I finally found out what I wanted. The Elasticsearch client does work, but only for searches and indexing. For administrator tasks, I found out that I need to make plain HTTP requests as described in the Open Distro for Elasticsearch docs, except that they also need to be signed with Signature Version 4. The whole thing is pretty complicated but very nicely laid out on the AWS website: https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html.
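For anyone else landing here, this is roughly what that boils down to: the same AWS4Auth object can sign a plain requests call against the security API, so no hand-written SigV4 code is needed. A minimal sketch, reusing the placeholder endpoint, region, and role name from the question (none of them verified values):

import boto3
import requests
from requests_aws4auth import AWS4Auth

host = 'https://my-elasticsearch-domain.us-west-2.es.amazonaws.com'  # placeholder endpoint
region = 'my-region-1'  # placeholder region

credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, 'es',
                   session_token=credentials.token)

role_payload = {
    "cluster_permissions": ["indices_monitor"]
    # ... rest of the role definition from above
}

# requests signs the call through the auth= hook before sending it.
response = requests.put(
    host + '/_opendistro/_security/api/roles/jesperancinha-role',
    auth=awsauth,
    json=role_payload,
)
print(response.status_code, response.text)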
I am working on a project where users can upload files into an S3 bucket. These uploaded files are mapped to a GraphQL key (which was generated by the Amplify CLI), and an aws-lambda function is triggered. All of this is working, but the next step I want is for this aws-lambda function to create a second file with the same ownership attributes and POST the location of the saved second file to the GraphQL API.
I figured this shouldn't be too difficult, but I am having a lot of difficulty and can't work out where the problem lies.
BACKGROUND/DETAILS
I want the owner of the data (the uploader) to be the only user who is able to access the data, with the aws-lambda function operating in an admin role, able to POST/GET to the API for any owner.
The GraphQL schema looks like this:
type FileUpload @model
  @auth(rules: [
    { allow: owner }]) {
  id: ID!
  foo: String
  bar: String
}
I also found this seemingly promising AWS guide, which I thought would give an IAM role admin access (https://docs.amplify.aws/cli/graphql/authorization-rules/#configure-custom-identity-and-group-claims). I followed it by creating the file amplify/backend/api/<your-api-name>/custom-roles.json and saving it with
{
  "adminRoleNames": ["<YOUR_IAM_ROLE_NAME>"]
}
I replaced "<YOUR_IAM_ROLE_NAME>" with an IAM role to which I have given broad access, including this AppSync access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "appsync:*"
      ],
      "Resource": "*"
    }
  ]
}
This is the role given to my aws-lambda function.
When I attempt to run a simple API query in my aws-lambda function with the above settings, I get this error:
response string:
{
  "data": {
    "getFileUpload": null
  },
  "errors": [
    {
      "path": [
        "getFileUpload"
      ],
      "data": null,
      "errorType": "Unauthorized",
      "errorInfo": null,
      "locations": [
        {
          "line": 3,
          "column": 11,
          "sourceName": null
        }
      ],
      "message": "Not Authorized to access getFileUpload on type Query"
    }
  ]
}
My actual Python lambda script is:
import http.client
import json

API_URL = '<MY_API_URL>'
API_KEY = '<MY_API_KEY>'
HOST = API_URL.replace('https://', '').replace('/graphql', '')

def queryAPI():
    conn = http.client.HTTPSConnection(HOST, 443)
    headers = {
        'Content-type': 'application/graphql',
        'x-api-key': API_KEY,
        'host': HOST
    }
    print('conn: ', conn)
    query = '''
    {
        getFileUpload(id: "<ID_HERE>") {
            description
            createdAt
            baseFilePath
        }
    }
    '''
    graphql_query = {
        'query': query
    }
    query_data = json.dumps(graphql_query)
    print('query data: ', query_data)
    conn.request('POST', '/graphql', query_data, headers)
    response = conn.getresponse()
    response_string = response.read().decode('utf-8')
    print('response string: ', response_string)
I pass in the API key and API URL above, in addition to giving aws-lambda the IAM role. I understand that only one is probably needed, but I am trying to get the process to work and then pare it back.
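For reference: the adminRoleNames allow-listing only applies to requests that are actually authenticated as that IAM role, which means signing them with SigV4 (and having AWS_IAM enabled as an auth mode on the API); a request that carries only an x-api-key header is evaluated under API-key auth. A minimal sketch of the signed variant, assuming requests and requests_aws4auth are bundled with the function and reusing the placeholder names from above:

import boto3
import requests
from requests_aws4auth import AWS4Auth

API_URL = '<MY_API_URL>'  # placeholder, as above

def query_api_with_iam():
    # Inside Lambda, the execution role's temporary credentials come from the session.
    credentials = boto3.Session().get_credentials()
    awsauth = AWS4Auth(credentials.access_key, credentials.secret_key,
                       '<MY_REGION>', 'appsync',  # placeholder region
                       session_token=credentials.token)
    query = '{ getFileUpload(id: "<ID_HERE>") { description createdAt baseFilePath } }'
    response = requests.post(API_URL, auth=awsauth, json={'query': query})
    print(response.json())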
QUESTION(s)
As far as I understand, I am:
1. providing the appropriate @auth rules to my GraphQL schema based on my goals, and
2. giving my aws-lambda function sufficient IAM authorization (via both the IAM role and the API key) to override any potentially restrictive @auth rules in my GraphQL schema.
But clearly something is not working. Can anyone point me towards a problem that I am overlooking?
I had a similar problem just yesterday.
It was not 1:1 what you're trying to do, but maybe it's still helpful.
I was trying to give Lambda functions permission to access data based on my GraphQL schema. The schema had various @auth directives, which caused the Lambda functions to lose access to the data, even though I had granted them permissions via the CLI and IAM roles. Although the documentation says this should work, it didn't:
if you grant a Lambda function in your Amplify project access to the GraphQL API via amplify update function, then the Lambda function's IAM execution role is allow-listed to honor the permissions granted on the Query, Mutation, and Subscription types.
Therefore, these functions have special access privileges that are scoped based on their IAM policy instead of any particular @auth rule.
So I ended up adding @auth(rules: [{ allow: custom }]) to all parts of my schema that I want to access via Lambda functions.
When doing this, make sure to add "lambda" as an auth mode to your API via amplify update api.
In the authorization Lambda function, you can then check whether the user who is invoking the function has access to the requested query/S3 data.
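As an illustration only (the handler name and the token check below are hypothetical, not from the Amplify docs), such an authorization Lambda boils down to returning an isAuthorized flag for each request:

def validate_token_somehow(token):
    # Placeholder check: accept any non-empty token. Replace with real
    # token validation and/or an ownership lookup against your data.
    return bool(token)

def handler(event, context):
    # AppSync passes the caller's token and request details to the authorizer.
    token = event.get('authorizationToken')

    return {
        'isAuthorized': validate_token_somehow(token),
        # Optional: values exposed to resolvers via $ctx.identity.resolverContext
        'resolverContext': {},
        'deniedFields': [],
        'ttlOverride': 300,
    }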
I have a website with a React frontend hosted on Firebase and a Django backend hosted on Google Cloud Run. I have a Firebase rewrite rule which points all my API calls to the Cloud Run instance. However, I am unable to use the Django admin panel from my custom domain, which points to Firebase.
I have tried two different versions of rewrite rules:
"rewrites": [
{
"source": "/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
--- AND ---
"rewrites": [
{
"source": "/api/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "/admin/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
I am able to see the login page when I go to url.com/admin/; however, I am unable to get any further. It just refreshes the page with empty email/password fields and no error message. Just as an FYI, it is not to do with my username and password, as I have tested the admin panel and it works fine when accessed directly via the Cloud Run URL.
Any help will be much appreciated.
I didn't actually find an answer to why the admin login page was just refreshing when I tried to log in via the Firebase rewrite rule; however, I thought of an alternative way to access the admin panel using my custom domain.
I added a custom domain to the Cloud Run instance so that it uses a subdomain of my site's domain, and I can now access the admin panel at admin.customUrl.com rather than customUrl.com/admin/.
I have made a few API Gateways in AWS.
At this point, I would like to curl the endpoint of each API Gateway in order to start testing. But I cannot find the endpoint URL:
For example, via CLI, I ran:
aws --profile dev apigateway get-rest-apis --output json
I get: (no endpoint URL)
"apiKeySource": "HEADER",
"description": "API one",
"endpointConfiguration": {
"types": [
"REGIONAL"
]
},
"createdDate": 1570655986,
"policy": "{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\\"*\\\",\\\"Action\\\":\\\"execute-api:Invoke\\\",\\\"Resource\\\":\\\"arn:aws:FOOBAR\\/*\\/*\\/*\\\",\\\"Condition\\\":{\\\"IpAddress\\\":{\\\"aws:SourceIp\\\":[\\\"0.0.0.0\\/0\\\""]}}}]}",
"id": "foobar",
"name": "foobar"
}
You have to deploy the APIs first; you will then find the HTTPS address under the deployed stage.
Or you can use a custom domain name.
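For example, with boto3 (the API id foobar, the stage name, and the region below are placeholders):

import boto3

client = boto3.client('apigateway', region_name='us-east-1')  # placeholder region

# Deploying the REST API to a stage is what makes it invokable.
client.create_deployment(restApiId='foobar', stageName='dev')

# The invoke URL then follows this pattern:
invoke_url = 'https://{}.execute-api.{}.amazonaws.com/{}'.format(
    'foobar', 'us-east-1', 'dev')
print(invoke_url)  # this is the endpoint you can curl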
Amazon has released the ability to create HTTP APIs via API Gateway. On their website they describe how to create an HTTP API via the AWS CLI: https://docs.aws.amazon.com/apigateway/latest/developerguide/http-api-examples.html#http-api-examples.cli.quick-create.
FOR EXAMPLE:
aws apigatewayv2 create-api --name my-api --protocol-type HTTP --target arn:aws:lambda:us-east-2:123456789012:function:function-name
For REST APIs I know it is possible to update the CORS policy via the AWS CLI. I was wondering whether it is also possible to change/create the CORS policy for HTTP APIs via the AWS CLI?
I want to use HTTP APIs because they save a lot of money!
Thanks in advance!
This worked for me
$ aws2 apigatewayv2 update-api --api-id $API_ID --cors-configuration AllowHeaders="*",AllowMethods=GET,POST,AllowOrigins="*",MaxAge=3600
{
    "ApiEndpoint": "https://$API_ID.execute-api.$AWS_REGION.amazonaws.com",
    "ApiId": $API_ID,
    "ApiKeySelectionExpression": "$request.header.x-api-key",
    "CorsConfiguration": {
        "AllowHeaders": [
            "*"
        ],
        "AllowMethods": [
            "GET",
            "POST"
        ],
        "AllowOrigins": [
            "*"
        ],
        "MaxAge": 3600
    },
    "CreatedDate": "2020-01-28T17:41:35+00:00",
    "Name": "http-api",
    "ProtocolType": "HTTP",
    "RouteSelectionExpression": "$request.method $request.path",
    "Tags": {}
}
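If you'd rather do the same from Python, here is a minimal boto3 equivalent of the CLI call above (the API id is a placeholder):

import boto3

client = boto3.client('apigatewayv2')

# Same CORS update as the CLI call, expressed as a CorsConfiguration dict.
response = client.update_api(
    ApiId='<API_ID>',  # placeholder
    CorsConfiguration={
        'AllowHeaders': ['*'],
        'AllowMethods': ['GET', 'POST'],
        'AllowOrigins': ['*'],
        'MaxAge': 3600,
    },
)
print(response['CorsConfiguration'])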
I've set up my AWS Elasticsearch instance so that anyone can do anything (create, delete, search, etc.) to it.
These are my permissions (replace $myARN with my Elasticsearch ARN):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "$myARN"
    }
  ]
}
When I PUT a new index:
PUT http://my-elasticsearch-domain.us-west-2.es.amazonaws.com/index-name
Or I DELETE an index:
DELETE http://my-elasticsearch-domain.us-west-2.es.amazonaws.com/index-name
I get this:
{
  "acknowledged": true
}
This means I can create and delete indexes, but when I try to POST a _reindex I get:
{
  "Message": "Your request: '/_reindex' is not allowed."
}
Do I have to sign this request? Why should I have to sign this request but not the ones that create or delete indexes?
The reason is simply that the Amazon Elasticsearch Service is a kind of restricted environment where you don't have access to the full range of services and endpoints provided by a barebones install of Elasticsearch.
You can check the list of endpoints that you're allowed to use on the Amazon Elasticsearch Service, and _reindex is not part of that list.
UPDATE
There's another way to achieve what you want, though. By leveraging Logstash, you can source the data from ES, apply any transformation you wish and sink it back to ES.
input {
  elasticsearch {
    hosts => ["my-elasticsearch-domain.us-west-2.es.amazonaws.com:80"]
    index => "index-name"
    docinfo => true
  }
}
filter {
  mutate {
    remove_field => [ "@version", "@timestamp" ]
  }
  # add other transformations here
}
output {
  elasticsearch {
    hosts => ["my-elasticsearch-domain.us-west-2.es.amazonaws.com:80"]
    manage_template => false
    index => "%{[@metadata][_index]}"
    document_type => "%{[@metadata][_type]}"
    document_id => "%{[@metadata][_id]}"
  }
}
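Assuming the pipeline above is saved as reindex.conf, you would run it with something like bin/logstash -f reindex.conf and let it stream the documents back into the target index.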
The reindex feature is not available in the older versions 1.5 and 2.3. So if you are currently on 1.5 or 2.3, it would be good to move to the latest ES version; you will get better indexing performance and other features that are not supported in the older versions.
Also have a look at the link below to see which APIs are supported in the different versions of AWS Elasticsearch. If you look at the 5.1 section, you can see that _reindex is listed there.
http://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-supported-es-operations.html#es_version_5_1
I was able to do this using the following tool:
taskrabbit/elasticsearch-dump
After installing it, you can run this on the command line:
elasticdump \
  --input=http://es.com:9200/api/search \
  --input-index=my_index \
  --output=http://es.com:9200/api/search \
  --output-index=my_index \
  --type=mapping
NOTE: I did have to use the --awsChain option so the tool could find my credentials.