I would like to connect to a JDBC database, e.g. Postgres, via the Calcite driver using the sqlline shell script wrapper included in the Calcite git repo. The problem I'm facing is how to specify the target JDBC Postgres driver. Initially I tried this:
CLASSPATH=/Users/davidkubecka/git/calcite/build/libs/postgresql-42.2.18.jar ./sqlline -u jdbc:calcite:model=model.json
The model.json is this:
{
  "version": "1.0",
  "defaultSchema": "tpch",
  "schemas": [
    {
      "name": "tpch",
      "type": "jdbc",
      "jdbcUrl": "jdbc:postgresql://localhost/*",
      "jdbcSchema": "tpch",
      "jdbcUser": "*",
      "jdbcPassword": "*"
    }
  ]
}
But two problems arose.
First, I was asked for a username and password even though they are already specified in the model.
Second, after filling in the credentials I still get an error:
java.lang.RuntimeException: java.sql.SQLException: Cannot create JDBC driver of class '' for connect URL 'jdbc:postgresql://localhost/*'
So my question is whether this scenario (using a JDBC driver inside the Calcite driver via sqlline) is supported, and if so, how can I make the connection?
Try including your JDBC driver within the schema definition, and make sure it is on your classpath. Furthermore, add your database name to the JDBC URL. Your model.json could look like:
{
  "version": "1.0",
  "defaultSchema": "tpch",
  "schemas": [
    {
      "name": "tpch",
      "type": "jdbc",
      "jdbcUrl": "jdbc:postgresql://localhost/my_database",
      "jdbcSchema": "tpch",
      "jdbcUser": "*",
      "jdbcPassword": "*",
      "jdbcDriver": "org.postgresql.Driver"
    }
  ]
}
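With the driver declared in the model and the jar on your classpath, you can then launch sqlline the same way as in the question; a sketch (the jar path is illustrative, and the -n/-p options just pre-fill sqlline's credential prompt with dummy values, since the real credentials come from the model):
CLASSPATH=/path/to/postgresql-42.2.18.jar ./sqlline -u jdbc:calcite:model=model.json -n dummy -p dummy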
Related
I am trying to install Hunspell Stemming Dictionaries for AWS ElasticSearch v7.10
I have done this previously for a classic Unix install of ElasticSearch, which involved unzipping the latest .oxt dictionary file:
https://extensions.libreoffice.org/en/extensions/show/english-dictionaries
https://extensions.libreoffice.org/assets/downloads/41/1669872021/dict-en-20221201_lo.oxt
Copying these files to the expected filesystem path:
./config/hunspell/{lang}/{lang}.aff + {lang}.dic
The difference is that AWS ElasticSearch doesn't expose a backend filesystem. I have assumed we are supposed to use S3 instead. I have created a bucket with this file layout, and I think I have successfully given it public read-only permissions.
s3://hunspell/
http://hunspell.s3-website.eu-west-2.amazonaws.com/
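(The files inside the bucket are laid out to match the paths referenced in the analyzer below, i.e. s3://hunspell/en_GB/en_GB.aff and s3://hunspell/en_GB/en_GB.dic.)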
My ElasticSearch schema contains the following analyzer:
{
  "settings": {
    "analysis": {
      "analyzer": {
        //***** Stemmers *****//
        // DOCS: https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-hunspell-tokenfilter.html
        "hunspell_stemmer_en_GB": {
          "type": "hunspell",
          "locale": "en_GB",
          "dedup": true,
          "ignore_case": true,
          "dictionary": [
            "s3://hunspell/en_GB/en_GB.aff",
            "s3://hunspell/en_GB/en_GB.dic"
          ]
        }
      }
    }
  }
}
But the mapping PUT command is still returning the following exception:
"type": "illegal_state_exception",
"reason": "failed to load hunspell dictionary for locale: en_GB",
"caused_by": {
"type": "exception",
"reason": "Could not find hunspell dictionary [en_GB]"
}
How do I configure Hunspell for AWS ElasticSearch?
I'm trying to create users and roles via the Elasticsearch Python client documented here: https://elasticsearch-py.readthedocs.io/en/v7.14.1/. If I use plain HTTP requests and ignore the certificates, I can reach the application and make requests with the payloads suggested in https://opendistro.github.io/for-elasticsearch-docs/docs/security/access-control/api/. However, I'm trying to use a secure connection to get to Elasticsearch in AWS. According to their documentation at https://docs.aws.amazon.com/opensearch-service/latest/developerguide/request-signing.html#request-signing-python, I should be using the Elasticsearch client like this:
import boto3
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

region = 'my-region-1'
service = 'opensearchservice'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service,
                   session_token=credentials.token)
elasticsearch = Elasticsearch(
    hosts=[{'host': self._host, 'port': 443}],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection
)
I'm using boto3 to create the session and AWS4Auth to try to get the secure connection. However, I can't find anywhere how to actually send a plain payload to the Elasticsearch endpoints. For example, for this endpoint:
curl -X PUT http://localhost:443/_opendistro/_security/api/roles/jesperancinha-role -d "{}" (...)
It seems like we need to send an index, and that's not what I'm looking for. I just want to create a user with a payload like this one:
{
  "cluster_permissions" : [
    "indices_monitor"
  ],
  "index_permissions" : [
    {
      "index_patterns" : [
        "*"
      ],
      "dls" : "",
      "fls" : [ ],
      "masked_fields" : [ ],
      "allowed_actions" : [
        "read",
        "indices:monitor/stats"
      ]
    }
  ],
  "tenant_permissions" : [
    {
      "tenant_patterns" : [
        "human_resources"
      ],
      "allowed_actions" : [
        "kibana_all_read"
      ]
    }
  ]
}
It would be great if this could be done via the elasticsearch-py client, but if you have any other idea, please let me know. Thanks!
I hope I didn't get people too confused with my question. I finally found out what I wanted. The Elasticsearch client does work, but only for searches and indexing. For administrative tasks, I found out that I need to make plain requests as described in the Open Distro for Elasticsearch docs, except that they also need to be signed with Signature Version 4. The whole thing is pretty complicated but very nicely laid out on the AWS website: https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html.
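For illustration, here is a minimal sketch of such a signed admin request, assuming the requests and requests-aws4auth packages; the domain endpoint, signing service name, and role payload below are placeholders rather than the exact values from my setup:

import boto3
import requests
from requests_aws4auth import AWS4Auth

region = 'my-region-1'
service = 'es'  # assumed signing service name for the domain
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service,
                   session_token=credentials.token)

# Hypothetical domain endpoint.
host = 'https://my-domain.my-region-1.es.amazonaws.com'
role = {
    "cluster_permissions": ["indices_monitor"],
    "index_permissions": [{
        "index_patterns": ["*"],
        "allowed_actions": ["read", "indices:monitor/stats"]
    }]
}

# PUT the role payload to the security API; AWS4Auth signs the
# request with Signature Version 4.
response = requests.put(
    host + '/_opendistro/_security/api/roles/jesperancinha-role',
    auth=awsauth,
    json=role,
)
print(response.status_code, response.text)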
I have a website which has a React frontend hosted on Firebase and a Django backend which is hosted on Google Cloud Run. I have a Firebase rewrite rule which points all my API calls to the Cloud Run instance. However, I am unable to use the Django admin panel from my custom domain which points to Firebase.
I have tried two different versions of rewrite rules:
"rewrites": [
{
"source": "/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
--- AND ---
"rewrites": [
{
"source": "/api/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "/admin/**",
"run": {
"serviceId": "serviceId",
"region": "europe-west1"
}
},
{
"source": "**",
"destination": "/index.html"
}
]
I am able to see the login page when I go to url.com/admin/; however, I am unable to go any further. It just refreshes the page with empty email/password fields and no error message. Just as an FYI, it is not to do with my username and password, as I have tested the admin panel and it works fine when accessed directly via the Cloud Run URL.
Any help will be much appreciated.
I didn't actually find an answer to why the admin login page was just refreshing when I tried to log in via the Firebase rewrite rule, but I thought of an alternative way to access the admin panel using my custom domain.
I have added a custom domain to the Cloud Run instance so that it uses a subdomain of my site domain, and I can access the admin panel via admin.customUrl.com rather than customUrl.com/admin/.
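For reference, that kind of mapping can be created with the gcloud CLI; a sketch, assuming the beta domain-mappings command, with the service name, domain, and region as placeholders:
gcloud beta run domain-mappings create --service serviceId --domain admin.customUrl.com --region europe-west1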
My goal is to have an AWS Systems Manager document download a script from S3 and then run that script on the selected EC2 instance. In this case, it will be a Linux OS.
According to the AWS documentation for aws:downloadContent, the sourceInfo input is of type StringMap.
The example code looks like this:
{
  "schemaVersion": "2.2",
  "description": "aws:downloadContent",
  "parameters": {
    "sourceType": {
      "description": "(Required) The download source.",
      "type": "String"
    },
    "sourceInfo": {
      "description": "(Required) The information required to retrieve the content from the required source.",
      "type": "StringMap"
    }
  },
  "mainSteps": [
    {
      "action": "aws:downloadContent",
      "name": "downloadContent",
      "inputs": {
        "sourceType": "{{ sourceType }}",
        "sourceInfo": "{{ sourceInfo }}"
      }
    }
  ]
}
This code assumes you will run the document by hand (console or CLI) and then enter the sourceInfo in the parameter. When running the document by hand, anything entered in the parameter (an S3 URL) isn't accepted. However, I'm not trying to run this by hand, but rather programmatically, and I want to hard-code the S3 URL into sourceInfo in mainSteps.
AWS does give an example of syntax that looks like this:
{
  "path": "https://s3.amazonaws.com/aws-executecommand-test/powershell/helloPowershell.ps1"
}
I've coded the document action in mainSteps like this:
{
  "action": "aws:downloadContent",
  "name": "downloadContent",
  "inputs": {
    "sourceType": "S3",
    "sourceInfo": {
      "path": "https://s3.amazonaws.com/bucketname/folder1/folder2/script.sh"
    },
    "destinationPath": "/tmp"
  }
},
However, it doesn't seem to work and I receive this error:
invalid format in plugin properties map[sourceInfo:map[path:https://s3.amazonaws.com/bucketname/folder1/folder2/script.sh] sourceType:S3];
error json: cannot unmarshal object into Go struct field DownloadContentPlugin.sourceInfo of type string
Note: I have seen this post that references how to format it for Windows. I did try it; it didn't work and doesn't seem relevant to my Linux needs.
So my questions are:
Do you need a parameter for sourceInfo of type StringMap - something that won't be used within the aws:downloadContent {{ sourceInfo }} mainSteps?
How do you properly format the aws:downloadContent action sourceInfo StringMap in mainSteps?
Thank you for your effort in advance.
I had a similar issue, as I did not want anyone to have to type the values when running the document. So I added a default to the downloadContent parameter:
"sourceInfo": {
"description": "(Required) Blah.",
"type": "StringMap",
"displayType": "textarea",
"default": {
"path": "https://mybucket-public.s3-us-west-2.amazonaws.com/automation.sh"
}
}
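With that default in place, mainSteps can keep referencing the parameters exactly as in the original document, so nothing needs to be typed at run time. A sketch (destination path illustrative):
"mainSteps": [
  {
    "action": "aws:downloadContent",
    "name": "downloadContent",
    "inputs": {
      "sourceType": "{{ sourceType }}",
      "sourceInfo": "{{ sourceInfo }}",
      "destinationPath": "/tmp"
    }
  }
]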
System.getenv() is returning JSON with VCAP_SERVICES : "******". My Cloud Foundry Java Spring Boot app is bound to three services. If I run cf env app_name in the CLI, it returns all bound services correctly. VCAP_APPLICATION and the other fields in the returned JSON are also fine; only this one is masked.
A little background:
I need to get the service name, label, and plan for all the services bound to my app. I'm new to Cloud Foundry and Spring Boot, so I don't know how to use Spring Cloud Connectors in my code.
The value in the VCAP_SERVICES environment variable will be a JSON string that you need to parse, and it will give you an object describing all the bound services, including data like name, label, and plan. If you Google "vcap services" or "cloud foundry environment variables" the first result is this doc, and it has a section on VCAP_SERVICES. Here's the example they provide of what this JSON object looks like (after parsing):
{
  "elephantsql": [
    {
      "name": "elephantsql-c6c60",
      "label": "elephantsql",
      "tags": [
        "postgres",
        "postgresql",
        "relational"
      ],
      "plan": "turtle",
      "credentials": {
        "uri": "postgres://seilbmbd:ABcdEF#babar.elephantsql.com:5432/seilbmbd"
      }
    }
  ],
  "sendgrid": [
    {
      "name": "mysendgrid",
      "label": "sendgrid",
      "tags": [
        "smtp"
      ],
      "plan": "free",
      "credentials": {
        "hostname": "smtp.sendgrid.net",
        "username": "QvsXMbJ3rK",
        "password": "HCHMOYluTv"
      }
    }
  ]
}
Since you mention wanting to access this info in your code, you should also consider the Cloud Foundry Java client; there's a good intro here, and it's really easy to get up and running. I've found that the API is somewhat limited, but it's worth looking at: http://docs.cloudfoundry.org/buildpacks/java/java-client.html