How do I delete tables in the QuestDB console?

I imported a lot of CSV files into my database for testing and I'd like to clear out a few of the tables I don't need. How can I get rid of multiple tables at once? Is there an easy way, like selecting several tables at once in the console's table list view?

The easiest way I found was to use the REST API /exec endpoint: https://questdb.io/docs/develop/insert-data/#exec-endpoint
I generated a bash script from the output of the "select name from tables()" meta function.
Example lines:
curl -G --data-urlencode "query=DROP TABLE 'delete_me.csv'" http://localhost:9000/exec
curl -G --data-urlencode "query=DROP TABLE 'delete_me_also.csv'" http://localhost:9000/exec
If you use the web console (or even /exec to query), select name from tables() can be filtered with a regex just like a regular query.
Converting that to a bash script is manual, though. I recommend dumping the table names to CSV, then using bash to add the appropriate quotes, etc.
I did it with awk:
awk '{ print "curl -G --data-urlencode \"query=DROP TABLE '\''" $0 "'\''\" http://localhost:9000/exec" }' ~/Downloads/quest_db_drop_tables.sql > ~/Downloads/quest_db_drop_tables.sh
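If the awk quoting gets fiddly, a plain while-read loop does the same job. A minimal sketch, assuming the table names were exported one per line (the tables.txt sample and its contents below are made up; adjust the host/port for your instance):

```shell
# Sample input standing in for the exported list of table names.
printf 'delete_me.csv\ndelete_me_also.csv\n' > tables.txt

# Generate one DROP TABLE curl per name; assumes QuestDB at localhost:9000.
while IFS= read -r table; do
  printf 'curl -G --data-urlencode "query=DROP TABLE '\''%s'\''" http://localhost:9000/exec\n' "$table"
done < tables.txt > drop_tables.sh

cat drop_tables.sh
```

Running sh drop_tables.sh then issues the drops one by one.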

Related

Return just data in BQ CLI with query no "Welcome to BigQuery!" etc output

I am using the gcloud CLI to query BigQuery.
Example:
bq query --use_legacy_sql=false --format json 'select * from `ga4-extract.analytics_123456789.events_20220722` limit 1;' > data.json
After running this, if I cat data.json I see my data, but above the data in the file is the following text:
root#9e4947a68356:/# cat data.json
Welcome to BigQuery! This script will walk you through the
process of initializing your .bigqueryrc configuration file.
First, we need to set up your credentials if they do not
already exist.
Setting project_id data-extract as the default.
BigQuery configuration complete! Type "bq" to get started.
Then my data appears underneath in the desired JSON format. How can I get rid of this text so it does not show? I tried the following flags after reading the documentation; in each case there was no difference, and the above output was still added to my data.json file:
--batch=true
--quiet=true
--headless=true
How can I save my output to data.json without the text above at the top of the json file?
Just run another command first, which will produce this output the first time...
For example, in your case:
bq show ga4-extract.analytics_123456789
will show some basic information about your dataset and will also create the .bigqueryrc file... Your next command will not include this text in its output.
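If you can't run a warm-up command (e.g. a fresh container on every invocation), another option is to strip the banner after the fact. A sketch, assuming the JSON payload starts at the first line beginning with [ or { (the data.json contents below are fabricated for illustration):

```shell
# Fabricated data.json with the init banner above the JSON payload.
cat > data.json <<'EOF'
Welcome to BigQuery! This script will walk you through the
process of initializing your .bigqueryrc configuration file.
BigQuery configuration complete! Type "bq" to get started.
[{"event_name":"page_view"}]
EOF

# Keep everything from the first line that starts with [ or { onwards.
awk '/^[\[{]/{found=1} found' data.json > clean.json
cat clean.json
```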

Does QuestDB have EXPLAIN

Postgres has syntax to show the query plan for a SQL statement:
https://www.postgresql.org/docs/9.4/using-explain.html
Does QuestDB have something equivalent so that I can see the query plan that would result from my SQL before actually executing it?
The QuestDB docs show an explain option for the /exec HTTP endpoint. However, I can't get it to work on my instance running v6.2.
It should work like this:
curl -G \
--data-urlencode "query=select * from t limit 2;" \
--data-urlencode "explain=true" \
http://localhost:9000/exec

What modifications do I need to make to my gcloud command to get the list of enabled services in all GCP projects in the below tabular format?

I have the following code block to enumerate the enabled services in all GCP projects:
for project in $(gcloud projects list --format="value(projectId)"); \
do gcloud services list --project $project --format="table[box]($project,config.name,config.title)"; \
done;
It gives me the output in this format:
But I would like the output to be in this format:
How do I accomplish that? Please advise. Thanks.
You can't do this with gcloud alone, because you need to assemble values from multiple commands: Projects.List and Services.List. gcloud --format applies only to the output of a single command.
You're already grabbing all the data you need (project, name and title). I recommend using bash to reassemble the values output by gcloud and synthesize the table output yourself.
Update
Here's a script that'll get you CSV output that matches your rows:
PROJECTS=$(\
gcloud projects list \
--format="value(projectId)")
for PROJECT in ${PROJECTS}
do
CONFIGS=$(gcloud services list \
--project=${PROJECT} \
--format="csv[no-heading](config.name,config.title.encode(base64))")
for CONFIG in ${CONFIGS}
do
IFS=, read NAME TITLE <<< ${CONFIG}
TITLE=$(echo ${TITLE} | base64 --decode)
echo "${PROJECT},${NAME},\"${TITLE}\""
done
done
NOTE: encode(base64) prevents the unquoted title values from being split; base64 --decode reverses this on output. It should be possible to do the quoting with format, but I don't understand the syntax. --format='csv[no-heading](config.name,format("\"{0}\"",config.title))' didn't work.
Then, in the best *nix tradition, you can pipe that output into awk or column or some other script that formats the CSV as a pretty-printed table. This is the more difficult part because you'll want to determine the longest value in each field to lay the table out correctly. I'll leave that to you.
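As a starting point for that last step, here's a sketch of a two-pass awk pretty-printer over made-up CSV rows: the first pass measures the widest value per column, the second pads each field to that width. It's deliberately naive (it doesn't strip the CSV quoting or handle embedded commas):

```shell
# Made-up CSV rows standing in for the script's project,name,title output.
cat > services.csv <<'EOF'
my-project,bigquery.googleapis.com,"BigQuery API"
my-project,compute.googleapis.com,"Compute Engine API"
EOF

# Pass 1 records the widest value per column; pass 2 pads each field.
awk -F, '
NR==FNR { for (i = 1; i <= NF; i++) if (length($i) > w[i]) w[i] = length($i); next }
{ for (i = 1; i <= NF; i++) printf "%-" w[i] "s  ", $i; print "" }
' services.csv services.csv > services.txt
cat services.txt
```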

How can I programmatically download data from QuestDB?

Is there a way to download query results from the database, such as tables or other datasets? The UI supports CSV file download, but at the moment this is manual work browsing and downloading files. Is there a way I can automate this? Thanks
You can use the /exp export REST API endpoint; this is what the UI uses under the hood. To export a table via this endpoint:
curl -G --data-urlencode "query=select * from my_table" http://localhost:9000/exp
query= may be any SQL query, so if you have a more granular report that needs to be regularly generated, it may be passed into the request. If you don't need anything complicated, you can redirect the curl output to a file:
curl -G --data-urlencode "query=select * from my_table" \
http://localhost:9000/exp > myfile.csv
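To automate this on a schedule (cron etc.), a date-stamped output filename keeps runs from overwriting each other. A sketch that only previews the command (localhost:9000 and my_table are assumptions; remove the echo to actually hit your instance):

```shell
# Date-stamped output file so repeated runs don't clobber each other.
OUTFILE="my_table_$(date +%Y-%m-%d).csv"

# Preview only: the echo writes the command out instead of executing it.
echo "curl -G --data-urlencode \"query=select * from my_table\" http://localhost:9000/exp -o $OUTFILE" > export_cmd.sh
cat export_cmd.sh
```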

Delete all data in aws neptune

I have an AWS Neptune cluster into which I inserted a lot of N-Triples and N-Quads data using the SPARQL HTTP API:
curl -X POST --data-binary 'update=INSERT DATA { <http://test.com/s> <http://test.com/p> <http://test.com/o> . }' http://your-neptune-endpoint:8182/sparql
I would like to clear out all the data I inserted (not the instance).
How can I do that?
You can run a SPARQL DROP ALL update to delete all your data.
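Following the same curl pattern as the insert in the question, the DROP ALL update would look like the command below. This is a preview-only sketch (your-neptune-endpoint is a placeholder; remove the echo to actually run it):

```shell
# DROP ALL is a SPARQL 1.1 Update that removes every graph and triple.
# Preview only: echo writes the command out instead of executing it.
echo "curl -X POST --data-binary 'update=DROP ALL' https://your-neptune-endpoint:8182/sparql" > drop_all.sh
cat drop_all.sh
```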
If you want a truly empty database (no data, no CloudWatch metrics, no audit history, etc.), then I would highly recommend spinning up a fresh cluster. It takes only a few minutes.
If you want to remove only the data you inserted, one strategy is to use named graphs: when you insert the data, insert it into a named graph; when you delete, delete the graph.
To insert, one way is to use a call similar to the insert you gave, except you insert into a named graph:
curl -X POST --data-binary 'update=INSERT DATA { GRAPH <http://www.example.com/named/graph> { <http://test.com/s> <http://test.com/p> <http://test.com/o> . } }' \
https://endpoint:8182/sparql
An alternative is to insert using the Graph Store Protocol:
curl --request POST -H "Content-Type: text/turtle" \
--data-raw "<http://test.com/s> <http://test.com/p> <http://test.com/o> ." \
'https://endpoint:8182/sparql/gsp/?graph=http%3A//www.example.com/named/graph'
Another is to use the bulk loader, which has namedGraphUri option (https://docs.aws.amazon.com/neptune/latest/userguide/load-api-reference-load.html).
Here is a delete that removes the named graph:
curl --request DELETE 'https://endpoint:8182/sparql/gsp/?graph=http%3A//www.example.com/named/graph'
See https://docs.aws.amazon.com/neptune/latest/userguide/sparql-graph-store-protocol.html for details on the Graph Store Protocol in Neptune.