Not able to connect Superset to Druid - apache-superset

I have Druid and Superset running locally, but I am not able to connect them together. I have the sample wikiticker data in Druid. I already installed pydruid with pip3: pip3 install pydruid (I am not sure if I need to install it in any particular location). I have also installed Superset locally using docker-compose, following This Link. However, I am not able to connect Druid with Superset. I went to Data -> Databases -> add database. Under Connection, I gave the database name as Druid, but I am not sure what to put in SQLALCHEMY URI*. I tried these:
druid//admin:admin@localhost:8082/wikiticker
pydruid//admin:admin@localhost:8082/wikiticker
druid://admin:admin@localhost:8082/druid/v2/sql
but nothing is working.

As far as I know, Druid has no built-in authentication. The SQLALCHEMY_URI string should be druid+https://localhost:8082/druid/v2/sql/ (or druid+http://localhost:8082/druid/v2/sql/ if you're using HTTP).
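To rule Superset out, you can first check that the Druid SQL endpoint answers at all. A minimal sketch, assuming the broker listens on localhost:8082 over plain HTTP:

curl -X POST -H 'Content-Type: application/json' \
    -d '{"query": "SELECT 1"}' \
    http://localhost:8082/druid/v2/sql/

If this returns a JSON result, the endpoint is reachable and the remaining problem is the URI syntax or Docker networking (see below).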

As per the documentation, the connection string should look like this (the third variant in the question):
druid://<User>:<password>@<Host>:<Port-default-9088>/druid/v2/sql
The reason you cannot connect might be your Docker setup. From inside your Superset Docker container, localhost refers to that container itself, not to your host machine. For example, the database and the Redis cache are referred to as db and redis in the connection setup within the docker-compose.yml and the environment variables set in .env.
So you could extend the docker-compose.yml to include the Druid container, named druid as well (see the sketch below), and then connect to it like this:
druid://admin:admin@druid:PORTTHATYOUEXPOSED/druid/v2/sql
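A minimal docker-compose.yml sketch of that idea; the image tag and exposed port are assumptions and must match the Druid setup you actually run:

services:
  superset:
    # ... existing Superset service from the Superset docker-compose setup ...
  druid:
    image: apache/druid:0.22.1   # assumed image/tag
    ports:
      - "8082:8082"              # broker port that Superset will connect to

Since both services then share the compose network, the hostname druid in the SQLALCHEMY URI resolves to the Druid container.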

There is a good chance that you didn't add the Root Certificate. You can either do that or disable SSL verification. See the documentation here: https://superset.apache.org/docs/databases/druid
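If you choose to disable verification instead, the database's extra/engine parameters accept connect arguments; a hedged sketch of that JSON (the exact field placement varies between Superset versions, so double-check against the linked docs):

{
  "connect_args": {
    "scheme": "https",
    "ssl_verify_cert": false
  }
}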

Related

Appwrite environment variables ignored

I'm using Appwrite on AWS (started with the pre-canned Appwrite marketplace image and upgraded to 0.14.2.305).
In order to allow certificate generation, I need to update _APP_DOMAIN and _APP_DOMAIN_TARGET. However, no matter which value I put there, it is not "ingested" by the app (restarting the container and rebooting the server did not make any difference).
I also tried to read the values from the Docker instance itself, but again, no value was read.
Ideas?
You need to restart the Docker containers :)
Just run docker compose up -d in your appwrite directory.
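If the values still aren't picked up, it helps to check what the container actually sees. A small sketch, assuming your Compose service is named appwrite:

cd /path/to/appwrite          # the directory with docker-compose.yml and .env
docker compose up -d          # recreates containers whose configuration changed
docker compose exec appwrite env | grep _APP_DOMAIN

Note that docker compose restart does not re-read .env; only up -d recreates the containers with the new environment.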

MQ Custom Docker Image - MQM Group Not Found

Description: I am getting the following error when running a docker build. I thought the mqm group would be created by default; the site linked below doesn't mention otherwise. Can someone else try this?
System notes: Docker build from VS Code, on a Windows machine.
Error:
useradd: group 'mqm' does not exist
Reference site for instructions:
IBM MQ Custom Docker Image Instructions
Docker File:
FROM ibmcom/mq
USER root
# this useradd is the line that fails on 9.1.5+, where the image no longer has an OS-level mqm group
RUN useradd alice -G mqm && \
    echo alice:passw0rd | chpasswd
USER mqm
COPY 20-config.mqsc /etc/mqm/
Duplicate of ibmcom/mq docker image backward compatibility issue
From 9.1.5, the container does not use OS-based users or groups, in order to conform to cloud best practices. Instead, a file-based system is used, so that when you roll the container out into production in a cloud you can switch to an LDAP-based system.
The 9.1.5 container uses htpasswd, with the relevant file in /etc/mqm/
For development, if you are not going to create new users, then you can use the 9.1.5 container. If you want to create new users, then you can use 9.1.4 or earlier, or use htpasswd with bcrypt to create the users, as sketched below.
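A minimal sketch of that htpasswd route; the file name mq.htpasswd is an assumption, so check the mq-container documentation for the exact file expected under /etc/mqm/:

# htpasswd comes with apache2-utils; -B selects bcrypt, -b reads the password from the arguments
htpasswd -b -B -c mq.htpasswd alice passw0rd
# then ship it into the image instead of calling useradd:
# COPY mq.htpasswd /etc/mqm/mq.htpasswd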
Apparently I was using a deprecated page that is still linked from the Docker repo; I guess it's a problem with Docker Hub and they can't remove it. Please follow the instructions here instead. I had no issue:
https://github.com/ibm-messaging/mq-container

Giving Docker access to db file outside container

I'm trying to test a Django app managed by Docker. Since it's a development project only used by me, I'm using a sqlite3 database backend. However, because I'll be populating this test database with a lot of generated data, and because I don't fully trust Docker, I want to store this sqlite3 db file outside of the container in my home directory, to ensure it doesn't get deleted or lost.
However, by design, Docker makes it difficult for programs inside containers to access files outside of those containers. How do I update my Docker configuration to allow access to this one specific db file in my home directory?
You can mount a host directory into your Docker container using the -v flag.
For details see this answer: https://stackoverflow.com/a/23455537/7695859.
docker run -v /host/directory:/container/directory -other -options image_name command_to_run
For a more detailed understanding, see these official docs:
Use volumes
Manage data in Docker
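Applied to this question, a minimal sketch; the host and container paths are assumptions and must match what your Django settings expect:

# mount a directory from your home dir into the container so the
# sqlite3 file lives on the host and survives container removal
docker run -v $HOME/djangodata:/app/data my_django_image \
    python manage.py runserver 0.0.0.0:8000

# in settings.py, point DATABASES at the mounted path, e.g.
#   'NAME': '/app/data/db.sqlite3'

Mounting the directory rather than the bare file is deliberate: a bind-mounted single file keeps pointing at the old inode if the application recreates the file.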

Migrating from Heroku to Azure - getting the database migration right

I have a Django app live on Heroku. I'm migrating it to Azure, taking advantage of the $120K/yr credit they recently offered me. Here's what I've done so far:
i) I created an Azure VM with Ubuntu (Standard_D1).
ii) I installed postgresql on it (my db of choice)
iii) I pulled my Heroku app's files from my github onto the Azure VM.
iv) I created a postgres DB on the Azure VM, and then ran syncdb to create the required tables.
v) I tweaked postgresql.conf and pg_hba.conf to cater to some tuning requirements and such.
vi) I took a backup from my Heroku app's dashboard and downloaded it. This backup file's name is a random UUID, without a file extension (e.g. f0af6457-1a24-47d0-881c-434f9bef7c92).
vii) I'm now gearing up to use pg_restore to load the backup into the newly created+synced app on the Azure VM.
Does all this sound about right so far? I have 3 questions:
1) Will pg_restore work with the backup I got off Heroku? This backup doesn't have a file extension at all, whereas I'm under the impression it has to be a .tar archive to be compatible with pg_restore.
2) My database is called mydbname. The data backup is saved at /datadrive/backup/filename. Thus, in my case is the correct pg_restore command something like: pg_restore -d mydbname /datadrive/backup/filename?
3) Once I successfully load the correct data into my Azure app, the final step, in my opinion, is to route the traffic currently going to the Heroku app to the Azure app instead. For that, I'll tweak DNS entries. Am I missing anything else here, in your opinion?
Essentially the extension shouldn't matter and your restore should work, though frankly I haven't tested it myself with a Heroku backup.
However, what I would suggest is to make it a valid .dump file:
curl -o latest.dump "$(heroku pg:backups public-url --app <yourappname>)"
This should give you a valid .dump file, though it's not any different from the backup you already have.
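For the pg_restore step itself (question 2), a hedged sketch along the lines of the Heroku docs; the user name and path are assumptions:

pg_restore --verbose --clean --no-acl --no-owner \
    -h localhost -U myuser -d mydbname /datadrive/backup/latest.dump

--no-acl and --no-owner matter here because the roles that exist on Heroku's Postgres won't exist on your Azure VM.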

InfluxDB Cannot see databases from localhost:8083 + Cannot access Command Line Interface

Please feel free to redirect me to any other place if this isn't the right one for this question.
Problem: When I log in to the administration panel at localhost:8083 with "root"/"root", I cannot see the existing databases or the data in them. Also, I have no way to access InfluxDB from the command line.
Also, the line sudo /etc/init.d/influxdb start does not work for my setup. I have to go into /etc/init.d/ and run sudo ./influxdb start -config=config.toml in order to get the server running.
I've installed influxDB v0.8 from https://influxdb.com/docs/v0.8/introduction/installation.html for Ubuntu 14.04.
I've been developing a Clojure program using the Capacitor API just to get started and interact with InfluxDB. It runs well; I can create, delete, insert into, and query a database without problems.
netstat -anp | grep LISTEN confirms me that ports 8083 8086 8090 and 8099 are listening.
I've been Googling all around but cannot manage to get a solution.
Thanks for the support and enjoy building things!
Problem solved: the databases weren't visible in Firefox, but everything is visible in Chromium!
Why couldn't I access the CLI? I was expecting the v0.8 to behave exactly like the v0.9.
Your help was appreciated anyway!
For InfluxDB 0.9 the CLI could be started with:
/opt/influxdb/influx
then you can display available databases:
Connected to http://localhost:8086 version 0.9.1
InfluxDB shell 0.9.1
> show databases
name: databases
---------------
name
collectd
graphite
> use collectd
Using database collectd
> show series limit 5
You can try creating new database from CLI:
> CREATE DATABASE mydb
or with curl command:
curl -G 'http://localhost:8086/query' --data-urlencode "q=CREATE DATABASE mydb"
The web UI should be available at http://localhost:8083.
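You can also query without the CLI or web UI, mirroring the CREATE DATABASE call above; for example (assuming a database named mydb):

curl -G 'http://localhost:8086/query' --data-urlencode "db=mydb" --data-urlencode "q=SHOW SERIES"

This is a handy cross-check when the web UI misbehaves, as with the Firefox issue described in the question.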