Error Updating AWS Elasticsearch Settings Via Command Line - amazon-web-services

I'm attempting to update the settings of an AWS Elasticsearch instance. My command is:
curl -XPUT "https://<index-endpoint>.es.amazonaws.com/_settings" -d @/path/to/settings.json
And I receive the following response:
{
"Message":"Your request: '/_settings' is not allowed."
}
I've read that not all ES commands are accepted by an AWS instance of ES, but I can't find an alternative for what I'm doing.
Note:
My settings are as follows:
{
"index" : {
"number_of_shards" : "5",
"number_of_replicas" : "1",
"analysis": {
"analyzer": {
"urls-links-emails": {
"type": "custom",
"tokenizer": "uax_url_email"
}
}
}
}
}

You need to apply those settings to a specific index, so your endpoint needs to be something like https://<index-endpoint>.es.amazonaws.com/myindex/_settings
More concretely, your command needs to look like this:
curl -XPUT https://<index-endpoint>.es.amazonaws.com/myindex/_settings -H 'Content-Type: application/json' --data-binary @/path/to/settings.json
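One caveat worth adding, as a hedged sketch rather than a definitive recipe: analysis settings are static, so on a self-managed cluster the index has to be closed before such an update and reopened afterwards, and number_of_shards cannot be changed on an existing index at all. I believe the AWS-managed service has historically not allowed the _close API either, in which case the usual workaround is to create a new index with the desired analysis settings and reindex into it. The index name "myindex" and the paths below are placeholders:

```shell
# Write the desired settings (number_of_shards removed, since it is fixed
# at index creation time and cannot be updated on an existing index).
cat > settings.json <<'EOF'
{
  "index": {
    "number_of_replicas": "1",
    "analysis": {
      "analyzer": {
        "urls-links-emails": {
          "type": "custom",
          "tokenizer": "uax_url_email"
        }
      }
    }
  }
}
EOF

# On a cluster that allows closing indices (placeholders, not run here):
# ES="https://<index-endpoint>.es.amazonaws.com"
# curl -XPOST "$ES/myindex/_close"
# curl -XPUT  "$ES/myindex/_settings" \
#      -H 'Content-Type: application/json' --data-binary @settings.json
# curl -XPOST "$ES/myindex/_open"

# Sanity-check the payload locally before sending it.
python3 -m json.tool settings.json > /dev/null && echo "settings.json is valid JSON"
```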

Related

Hunspell Dictionary Config for AWS Elasticsearch

I am trying to install Hunspell Stemming Dictionaries for AWS ElasticSearch v7.10
I have done this previously for a classic unix install of ElasticSearch, which involved unzipping the latest .oxt dictionary file
https://extensions.libreoffice.org/en/extensions/show/english-dictionaries
https://extensions.libreoffice.org/assets/downloads/41/1669872021/dict-en-20221201_lo.oxt
Copying these files to the expected filesystem path:
./config/hunspell/{lang}/{lang}.aff + {lang}.dic
The difference is that AWS ElasticSearch doesn't have a backend filesystem. I have assumed we are supposed to use S3 instead. I have created a bucket with this file layout, and I think I have successfully given it public read-only permissions.
s3://hunspell/
http://hunspell.s3-website.eu-west-2.amazonaws.com/
My ElasticSearch schema contains the following analyser
{
"settings": {
"analysis": {
"analyzer": {
//***** Stemmers *****//
// DOCS: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-hunspell-tokenfilter.html
"hunspell_stemmer_en_GB": {
"type": "hunspell",
"locale": "en_GB",
"dedup": true,
"ignore_case": true,
"dictionary": [
"s3://hunspell/en_GB/en_GB.aff",
"s3://hunspell/en_GB/en_GB.dic",
]
}
}
}
}
}
But the mapping PUT command is still returning the following exception:
"type": "illegal_state_exception",
"reason": "failed to load hunspell dictionary for locale: en_GB",
"caused_by": {
"type": "exception",
"reason": "Could not find hunspell dictionary [en_GB]"
}
How do I configure Hunspell for AWS ElasticSearch?

Configuring synonyms.txt in AWS hosted Elasticsearch

I am trying to upload synonyms.txt to an AWS hosted Elasticsearch, but I couldn't find any feasible way to do that. Here is what I have tried.
I am not supposed to use inline synonyms, since I have a huge list of them. So I tried to use the settings below to load synonyms.txt into the AWS hosted Elasticsearch:
"settings": {
"analysis": {
"filter": {
"synonyms_filter" : {
"type" : "synonym",
"synonyms_path" : "https://test-bucket.s3.amazonaws.com/synonyms.txt"
}
},
"analyzer": {
"synonyms_analyzer" : {
"tokenizer" : "whitespace",
"type": "custom",
"filter" : ["lowercase","synonyms_filter"]
}
}
}
}
When I use the above settings to create an index from Kibana (VPC access), I get the exception below.
{"error":{"root_cause":[{"type":"remote_transport_exception","reason":"[0jc0TeJ][x.x.x.x:9300][indices:admin/create]"}],"type":"illegal_argument_exception","reason":"IOException while reading synonyms_path_path: (No such file or directory)"}},"status":400}
Since my Elasticsearch is hosted by AWS, I can't access the nodes or their config folders to upload my file.
Any suggestion on the approach, or on how to upload the file to AWS ES?
The AWS ES service has many limitations, one of which is that you cannot use file-based synonyms (since you don't have access to the filesystem).
You need to list all your synonyms inside the index settings.
"settings": {
"analysis": {
"filter": {
"synonyms_filter" : {
"type" : "synonym",
"synonyms" : [ <--- like this
"i-pod, i pod => ipod",
"universe, cosmos"
]
}
},
"analyzer": {
"synonyms_analyzer" : {
"tokenizer" : "whitespace",
"type": "custom",
"filter" : ["lowercase","synonyms_filter"]
}
}
}
}
UPDATE:
You can now use file-based synonyms in AWS ES by adding custom packages.
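Since the synonyms have to be inlined, a small script can generate the settings body from an existing synonyms.txt instead of pasting the list by hand. This is only a sketch: the file names are assumptions, the filter and analyzer names mirror the question, and the two sample rules stand in for the real list.

```shell
# Hypothetical synonyms file, one rule per line (stand-in for the real list).
cat > synonyms.txt <<'EOF'
i-pod, i pod => ipod
universe, cosmos
EOF

python3 - <<'EOF'
import json

# Read one synonym rule per line, skipping blank lines and comments.
with open("synonyms.txt") as f:
    rules = [line.strip() for line in f
             if line.strip() and not line.startswith("#")]

# Build the same settings as in the answer, with the rules inlined.
settings = {
    "settings": {
        "analysis": {
            "filter": {
                "synonyms_filter": {"type": "synonym", "synonyms": rules}
            },
            "analyzer": {
                "synonyms_analyzer": {
                    "type": "custom",
                    "tokenizer": "whitespace",
                    "filter": ["lowercase", "synonyms_filter"],
                }
            },
        }
    }
}

with open("index-settings.json", "w") as f:
    json.dump(settings, f, indent=2)
print("rules inlined:", len(rules))
EOF

# The generated file can then be sent on index creation, e.g.:
# curl -XPUT "https://<es-endpoint>/myindex" \
#      -H 'Content-Type: application/json' --data-binary @index-settings.json
```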

How to fix aws-cli cloudfront update-distribution command?

I have been trying to execute the command below, but it resulted in an error:
aws cloudfront update-distribution --id E29BDBENPXM1VE \
--Origins '{ "Items": [{
"OriginPath": "",
"CustomOriginConfig": {
"OriginSslProtocols": {
"Items": [
"TLSv1",
"TLSv1.1",
"TLSv1.2"
],
"Quantity": 3
}
}
}
]
}'
ERROR::: Unknown options: { "Items": [{
"OriginPath": "",
"CustomOriginConfig": {
"OriginSslProtocols": {
"Items": [
"TLSv1",
"TLSv1.1",
"TLSv1.2"
],
"Quantity": 3
}
}
}
]
}, --Origins
I have to remove SSLv3 from the CloudFront OriginSslProtocols:
1) How can I fix the command above? If that's not possible, is there any command other than the one below to disable/remove SSLv3 from OriginSslProtocols?
aws cloudfront update-distribution --id E29BDBENPXM1VE --distribution-config file://secure-ssl.json --if-match E35YV3CGILXQDJ
You are using the right command and it should be possible to do what you want.
However, it is slightly more complicated.
The corresponding reference page for the cli command aws cloudfront update-distribution says:
When you update a distribution, there are more required fields than when you create a distribution.
That is why you must follow the steps given in the CLI reference [1]:
1. Submit a GetDistributionConfig request to get the current configuration and an ETag header for the distribution.
2. Update the XML document that was returned in the response to your GetDistributionConfig request to include your changes.
3. Submit an UpdateDistribution request to update the configuration for your distribution:
   - In the request body, include the XML document that you updated in Step 2. The request body must include an XML document with a DistributionConfig element.
   - Set the value of the HTTP If-Match header to the value of the ETag header that CloudFront returned when you submitted the GetDistributionConfig request in Step 1.
4. Review the response to the UpdateDistribution request to confirm that the configuration was successfully updated.
5. Optional: Submit a GetDistribution request to confirm that your changes have propagated. When propagation is complete, the value of Status is Deployed.
More info about the correct XML format is given in the CloudFront API Reference [2].
References
[1] https://docs.aws.amazon.com/cli/latest/reference/cloudfront/update-distribution.html
[2] https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_UpdateDistribution.html
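The edit in step 2 can be scripted. Below is a sketch of that step only, applied to a truncated sample config (a real DistributionConfig has many more required fields). The distribution ID and ETag are the ones from the question, the origin Id is made up, and the aws calls are shown as comments because they need real credentials:

```shell
# Step 1 (not run here): fetch the current config and note the ETag.
#   aws cloudfront get-distribution-config --id E29BDBENPXM1VE > dist.json

# Truncated sample standing in for the get-distribution-config output.
cat > dist.json <<'EOF'
{
  "ETag": "E35YV3CGILXQDJ",
  "DistributionConfig": {
    "Origins": {
      "Quantity": 1,
      "Items": [
        {
          "Id": "origin1",
          "CustomOriginConfig": {
            "OriginSslProtocols": {
              "Quantity": 4,
              "Items": ["SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2"]
            }
          }
        }
      ]
    }
  }
}
EOF

# Step 2: drop SSLv3 from every origin and keep Quantity consistent.
python3 - <<'EOF'
import json

dist = json.load(open("dist.json"))
config = dist["DistributionConfig"]  # update-distribution wants only this element

for origin in config["Origins"]["Items"]:
    protos = origin.get("CustomOriginConfig", {}).get("OriginSslProtocols")
    if protos:
        protos["Items"] = [p for p in protos["Items"] if p != "SSLv3"]
        protos["Quantity"] = len(protos["Items"])

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
EOF

# Step 3 (not run here): send the edited config back with the ETag from step 1.
#   aws cloudfront update-distribution --id E29BDBENPXM1VE \
#       --distribution-config file://config.json --if-match E35YV3CGILXQDJ
```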

Sentry cannot find my source code

I cannot display my original code in the Sentry dashboard.
I get the following errors:
Discarded invalid parameter 'type'
Source code was not found for app:///crna-entry.delta?platform=ios&dev=true&minify=false
I've configured the app.json as indicated in the docs.
"hooks": {
"postPublish": [
{
"file": "sentry-expo/upload-sourcemaps",
"config": {
"organization": "xxxxx",
"project": "xxxxxxx",
"authToken": "xxxxxxxxxx"
}
}
]
}
I answered this question here
First way
If you are using Expo, you should use the sentry-expo package, which you can find here: sentry-expo
Put this hook into your Expo config (app.json) file:
{
"expo": {
"hooks": {
"postPublish": [
{
"file": "sentry-expo/upload-sourcemaps",
"config": {
"organization": "<your organization name>",
"project": "<your project name>",
"authToken": "<your auth token here>"
}
}
]
}
}
}
organization: you can find it at https://sentry.io/settings/, where it is named "Organization Name"
project: enter your project name, which you can find at https://sentry.io/organizations/ORGANIZATION_NAME/projects/
authToken: create an auth token at https://sentry.io/api/
Then run expo publish; it uploads the source maps automatically.
Testing Locally
Make sure that you have enabled Expo development.
Add these lines:
Sentry.enableInExpoDevelopment = true;
Sentry.config(publicDsn, options).install();
As a result
On Sentry, for iOS only, you will be able to see the source code where the error occurred.
BUT: you will be unable to see the source code for Android:
https://github.com/getsentry/react-native-sentry/issues/372
Second way (manual upload)
Using the API: https://docs.sentry.io/platforms/javascript/sourcemaps/
curl -X POST \
https://sentry.io/api/0/organizations/ORG_NAME/releases/VERSION/files/ \
-H 'Authorization: Bearer AUTH_TOKEN' \
-H 'content-type: multipart/form-data' \
-F file=@script.min.js.map \
-F 'name=~/scripts/script.min.js.map'

Storing Elasticsearch snapshots in an Amazon S3 repository. How does it work?

I have Elasticsearch 2.3 installed on my local Linux machine.
I have Amazon S3 storage: I know the region, bucket name, access key and secret key.
I want to make a snapshot of my Elasticsearch indices in S3. There is documentation about it here, but it doesn't explain anything to me (I am totally new to this).
So, for example, I am trying to execute this command:
curl -XPUT 'localhost:9200/_snapshot/my_s3_repository?pretty' -H 'Content-Type: application/json' -d '{"type": "s3",
"settings": {"bucket": "ilyabackuptest1", "region": "us-east-1" }}'
And I get a response:
{
"error" : {
"root_cause" : [ {
"type" : "repository_exception",
"reason" : "[my_s3_repository] failed to create repository"
} ],
"type" : "repository_exception",
"reason" : "[my_s3_repository] failed to create repository",
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "Unknown [repository] type [s3]"
}
},
"status" : 500
}
So how does it work?
UPDATE:
After installing repository-s3, I run the same command and get the following. How should it work?
{
"error" : {
"root_cause" : [ {
"type" : "process_cluster_event_timeout_exception",
"reason" : "failed to process cluster event (put_repository [my_s3_repository]) within 30s"
} ],
"type" : "process_cluster_event_timeout_exception",
"reason" : "failed to process cluster event (put_repository [my_s3_repository]) within 30s"
},
"status" : 503
}
You simply need to install the S3 repository plugin first:
bin/plugin install repository-s3
Then restart the node — the put_repository timeout in your update usually means the plugin was installed but the node was not restarted. After the restart, you can run your command again to create the S3 repo.
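Putting the answer together with the update in the question, here is a hedged sketch of the whole flow. The bucket and region are the ones from the question; the plugin and curl calls are shown as comments since they need a running cluster:

```shell
# Repository definition, saved to a file so it can be reused and inspected.
cat > repo.json <<'EOF'
{
  "type": "s3",
  "settings": {
    "bucket": "ilyabackuptest1",
    "region": "us-east-1"
  }
}
EOF

# 1) Install the plugin on every node, then RESTART each node:
#      bin/plugin install repository-s3
# 2) Register the repository:
#      curl -XPUT 'localhost:9200/_snapshot/my_s3_repository?pretty' \
#           -H 'Content-Type: application/json' --data-binary @repo.json
# 3) Take a snapshot and wait for it to finish:
#      curl -XPUT 'localhost:9200/_snapshot/my_s3_repository/snapshot_1?wait_for_completion=true&pretty'
# 4) List the snapshots in the repository:
#      curl -XGET 'localhost:9200/_snapshot/my_s3_repository/_all?pretty'

# Sanity-check the payload locally before sending it.
python3 -m json.tool repo.json > /dev/null && echo "repo.json is valid JSON"
```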