Is it possible to export a Lucidchart diagram to json and then import that into draw.io? - draw.io

I am trying to convert a large Lucidchart diagram, which took quite a while to create, over to Draw.io. Draw.io recommends Ctrl+A, Ctrl+C, Ctrl+V, but that doesn't seem to be working. Draw.io also cryptically mentions, however:
draw.io supports importing the Lucidchart JSON file format. Lucidchart makes it difficult to obtain that data, so the easiest way to import is to copy and paste from editor to editor.
Has anyone ever figured out how to get this JSON from Lucidchart?

Essentially, you're asking about Lucidchart's JSON exportability. Lucidchart supports JSON export from their Cloud Insights product, and there are steps on how to export it here.
Note: this is not going to work for the standard UML or chart-style diagrams, and JSON isn't one of the current export options for standard diagrams.
One thing you could try to hack together would be to connect your Lucidchart account to one of Zapier's two "search" actions, and then use a trigger to send and structure that data into an application like Firebase / Cloud Firestore. Once it's in the database, you could export the JSON file. (I haven't tried this particular use case before, but I have used Zapier to successfully create a JSON tree structure from data coming from multiple applications.) Hope this is helpful.

Related

Automatically migrate JSON data to newest version of JSON schema

I have a service running on my Linux machine that reads data stored in a .json file when the machine is booting. The service then validates the incoming JSON data and modifies specific system configurations according to the data. The service is written in C++, and for the validation I'm using https://github.com/pboettch/json-schema-validator.
In development it was easy to modify the JSON schema and just adapt the data manually. I've started to use semantic versioning for my JSON schema and included it in the following way:
JSON schema:
{
    "$id": "https://my-company.org/schemas/config/0.1.0/config.schema.json",
    "$schema": "http://json-schema.org/draft-07/schema#",
    // Start of Schema definition
}
JSON data:
{
    "$schema": "https://my-company.org/schemas/config/0.1.0/config.schema.json",
    // Rest of JSON data
}
With the addition of the version, I am able to check if a version mismatch exists before validating.
What I am looking for is a way to automatically migrate the JSON data to match the newer schema version, if a version mismatch is identified. Is there any way to automatically achieve this, or is the only way to manually edit the JSON data to match the schema?
Since I plan on releasing this as open source, I would really like to include some form of automatic migration, so that if a version mismatch is identified I can just ask the user whether they want to migrate to conform to the newest schema version instead of throwing an error.
What you're asking for is something that will need to make assumptions in order to work.
This is an age-old problem, and it is similar for databases. You can have schema migrations generated for many simple changes, but this is not viable if you also wish to translate existing data automatically.
Let's look at a basic example. You rename a field.
How would a tool know you've renamed a field vs. removed an old one and added a new one? It essentially cannot.
So, you need to write your migrations by hand.
You could use JSON transformation tools like jq or fx to create migration scripts without writing them in code, which may or may not be preferable. (jq has a steeper learning curve, but it's also very powerful.)
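As an illustration, here is a minimal sketch in Python of what a hand-written migration step plus a small registry of migrations could look like; the field names, versions, and default value are made up for the example:

import json

# Hypothetical migration: schema 0.1.0 -> 0.2.0 renamed "hostname" to "host"
# and added a "timeout" field with a default value.
def migrate_0_1_0_to_0_2_0(data: dict) -> dict:
    if "hostname" in data:
        data["host"] = data.pop("hostname")
    data.setdefault("timeout", 30)
    data["$schema"] = "https://my-company.org/schemas/config/0.2.0/config.schema.json"
    return data

# Migrations keyed by the version they upgrade *from*; applied in order
# until the data reaches a version with no registered migration (the latest).
MIGRATIONS = {
    "0.1.0": migrate_0_1_0_to_0_2_0,
}

def extract_version(data: dict) -> str:
    # ".../schemas/config/<version>/config.schema.json" -> "<version>"
    return data["$schema"].split("/config/")[1].split("/")[0]

def migrate(data: dict) -> dict:
    while extract_version(data) in MIGRATIONS:
        data = MIGRATIONS[extract_version(data)](data)
    return data

if __name__ == "__main__":
    with open("config.json") as f:
        config = json.load(f)
    print(json.dumps(migrate(config), indent=2))

The same structure translates to C++ or to a set of jq scripts; the point is that each step encodes knowledge (rename vs. remove-and-add) that no tool can infer for you.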

Pub/Sub csv data to Dataflow to BigQuery

My pipeline is IoT Core -> Pub/Sub -> Dataflow -> BigQuery. Initially the data I was getting was in JSON format and the pipeline was working properly. Now I need to shift to CSV, and the issue is that the Google-defined Dataflow template I was using expects JSON input instead of CSV. Is there an easy way of transferring CSV data from Pub/Sub to BigQuery through Dataflow? The template could probably be changed, but it is implemented in Java, which I have never used, so it would take a long time. I also considered implementing an entirely custom template in Python, but that would take too long.
Here is a link to the template provided by google:
https://github.com/GoogleCloudPlatform/DataflowTemplates/blob/master/src/main/java/com/google/cloud/teleport/templates/PubSubToBigQuery.java
Sample: currently my Pub/Sub messages are JSON, and these work correctly:
"{"Id":"123","Temperature":"50","Charge":"90"}"
But I need to change this to comma-separated values:
"123,50,90"
Very easy: do nothing! If you have a look at this line, you can see that the type of the messages used is the Pub/Sub message JSON, not your JSON content.
So, to prevent any issues (when querying and when inserting), write to another table and it should work nicely!
Can you please share your existing Python code where you are parsing the JSON-format data, along with samples of the new and old data, so that it can be customized accordingly?
Moreover, you can refer to the Python code here; it performs a word-count transformation over a PCollection, and hopefully it can give you some reference for customizing your code accordingly.
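If you do end up customizing the pipeline in Python rather than Java, a minimal sketch of the Pub/Sub-to-BigQuery flow with CSV parsing could look like this (the project, topic, table, and schema names are assumptions based on your sample message):

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_csv(message: bytes) -> dict:
    # "123,50,90" -> {"Id": "123", "Temperature": "50", "Charge": "90"}
    id_, temperature, charge = message.decode("utf-8").strip().split(",")
    return {"Id": id_, "Temperature": temperature, "Charge": charge}

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (p
     | "ReadFromPubSub" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/my-topic")
     | "ParseCsv" >> beam.Map(parse_csv)
     | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
           "my-project:my_dataset.my_table",
           schema="Id:STRING,Temperature:STRING,Charge:STRING",
           write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))

The only real change compared to the JSON version is the parse step; reading from Pub/Sub and writing to BigQuery stay the same.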

Bulk Tag Bigquery columns with python & Google Cloud Datacatalog

Is there a way to bulk tag BigQuery tables with Python and google.cloud.datacatalog?
If you want to take a look at sample code that uses the Python google.cloud.datacatalog client library, I've put together an open-source utility script that creates bulk tags using a CSV file as the source. If you want to use a different source, you may use this script as a reference; I hope it helps.
create bulk tags from csv
For this purpose you may consider using the DataCatalogClient class, which is included in the google.cloud.datacatalog_v1 module as part of the google-cloud-datacatalog PyPI package, leveraging the Google Cloud Data Catalog API service.
First, you have to enable the Data Catalog and BigQuery APIs in your project.
Then install the Python Cloud Client Library for the Data Catalog API:
pip install --upgrade google-cloud-datacatalog
Set up authentication by exporting the GOOGLE_APPLICATION_CREDENTIALS environment variable, pointing it at the JSON file that contains your service account key:
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/[FILE_NAME].json"
Finally, refer to this example from the official documentation, which clearly shows how to create a Data Catalog tag template and attach the appropriate tag fields to the target BigQuery table using the create_tag_template() function; a sketch along those lines follows below.
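For a rough idea of what that looks like end to end, here is a minimal sketch based on the official samples; the project ID, location, dataset, table, and field names are placeholders you would replace with your own:

from google.cloud import datacatalog_v1

client = datacatalog_v1.DataCatalogClient()

project_id = "my-project"      # placeholder
location = "us-central1"       # region where the tag template will live

# 1. Create a tag template with a single boolean field.
template = datacatalog_v1.TagTemplate()
template.display_name = "Governance template"
template.fields["has_pii"] = datacatalog_v1.TagTemplateField()
template.fields["has_pii"].display_name = "Has PII"
template.fields["has_pii"].type_.primitive_type = (
    datacatalog_v1.FieldType.PrimitiveType.BOOL
)
template = client.create_tag_template(
    parent=f"projects/{project_id}/locations/{location}",
    tag_template_id="governance_template",
    tag_template=template,
)

# 2. Look up the Data Catalog entry for an existing BigQuery table.
resource = (
    f"//bigquery.googleapis.com/projects/{project_id}"
    "/datasets/my_dataset/tables/my_table"
)
entry = client.lookup_entry(request={"linked_resource": resource})

# 3. Attach a tag to one column of that table (omit `column` to tag the whole table).
tag = datacatalog_v1.Tag()
tag.template = template.name
tag.column = "email"
tag.fields["has_pii"] = datacatalog_v1.TagField()
tag.fields["has_pii"].bool_value = True
client.create_tag(parent=entry.name, tag=tag)

Looping over a list of tables and columns around steps 2 and 3 gives you the bulk behaviour the question asks about.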
If you have any doubts, feel free to extend your initial question or add a comment below this answer, so we can address your particular use case according to your needs.

First time protege user, trying to export a simple ontology to AWS dynamodb

I am currently using Protege 5.0 and have created a very simple ontology (the pizza example). I was wondering how I would export this ontology to DynamoDB on AWS. I was hoping someone could post a link to a good tutorial on Protege 5.0 or walk me through this. Thanks!
If you are using DynamoDB just to store the content of a file and to be able to access the file at a specific URL, then the process required is just the same as for any other file type you would store on DynamoDB. The default way for Protege and most other OWL-related tools to access an ontology is a simple HTTP GET from a provided IRI.
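If all you need is to persist the serialized ontology, a minimal boto3 sketch along those lines might look like the following (the table name, key, and file name are assumptions, and note that a single DynamoDB item is limited to 400 KB):

import boto3

# Assumes an existing DynamoDB table named "ontologies" with string partition key "iri".
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("ontologies")

# Read the OWL file exported from Protege and store its content as a string attribute.
with open("pizza.owl", "r", encoding="utf-8") as f:
    ontology_text = f.read()

table.put_item(Item={
    "iri": "http://www.co-ode.org/ontologies/pizza/pizza.owl",
    "content": ontology_text,
})

# Reading it back later:
item = table.get_item(Key={"iri": "http://www.co-ode.org/ontologies/pizza/pizza.owl"})["Item"]
print(item["content"][:200])

Protege itself won't read from DynamoDB directly, so to open the ontology by IRI you would still need to serve it over plain HTTP.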

AWS S3 error with PFFiles after importing the exported Parse data

It looks like Parse.com stores PFFile objects on AWS S3 and, for the PFFile object types, only stores a reference in Parse to the actual files on S3.
So my problem here is that I only get an AWS S3 link for my PFFile if I export the data using the out-of-the-box Parse.com export functionality. After I import the same data into my Parse application, for some reason the security settings on those PFFiles on S3 are changed in such a way that none of the PFFiles are accessible to me after the import, due to a security error.
My question is, does anyone know how the security is being set on the PFFiles? Here's a link to the PFFile documentation, https://parse.com/docs/osx/api/Classes/PFFile.html, but I guess this is rather an advanced topic and isn't covered on that page.
I'm also looking for a solution to this; all I found is this from their forum:
In this case, the PFFiles are stored in a different app. You might need to download these files and upload them again to the new app and update the pointers. I know this is not a great answer but we're working on making this process more straightforward.
https://www.parse.com/questions/import-pffile-object-not-working-in-iphone-application
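In case it helps, a rough Python sketch of that download / re-upload / update-pointer approach against the Parse REST API could look like this (the app keys, class name, column name, object ID, and URLs are all placeholders):

import requests

OLD_FILE_URL = "https://files.example.com/old-app/photo.png"  # S3 URL from the old app's data export
NEW_APP_ID = "NEW_APP_ID"
NEW_REST_KEY = "NEW_REST_API_KEY"

# 1. Download the file from the old app's S3 URL while it is still accessible.
data = requests.get(OLD_FILE_URL).content

# 2. Upload it to the new app through the REST API.
uploaded = requests.post(
    "https://api.parse.com/1/files/photo.png",
    headers={
        "X-Parse-Application-Id": NEW_APP_ID,
        "X-Parse-REST-API-Key": NEW_REST_KEY,
        "Content-Type": "image/png",
    },
    data=data,
).json()

# 3. Point the imported object's file column at the newly uploaded file.
requests.put(
    "https://api.parse.com/1/classes/Photo/OBJECT_ID",
    headers={
        "X-Parse-Application-Id": NEW_APP_ID,
        "X-Parse-REST-API-Key": NEW_REST_KEY,
        "Content-Type": "application/json",
    },
    json={"image": {"__type": "File", "name": uploaded["name"]}},
)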