GRAKN: Can I load a schema or data from a GRAQL file via the Java API?

Is it possible to load/import a schema or data from a GRAQL file via the Java API, e.g. in an in-memory graph?
import ai.grakn.Grakn
import ai.grakn.GraknTxType

fun main(args: Array<String>) {
    val session = Grakn.session(Grakn.IN_MEMORY, "db")
    val tx = session.open(GraknTxType.WRITE)
    // load a schema / import data from a .gql file
    tx.close()
    session.close()
}
There are examples on the website for creating a schema via tx.putEntityType etc., or for parsing queries, but is it also possible to simply import a .gql file?

It is possible: you can read the whole file into a string and then do the following:
String readInFile = new String(Files.readAllBytes(Paths.get("path/to/schema.gql"))); // java.nio.file.Files / Paths
tx.graql().parse(readInFile).execute();

Related

Postman send multiple requests

I've got a PATCH request that looks like this:
{{host}}/api/invoice/12345678/withdraw
host is a variable determining the environment.
For this request I need to add a unique authorization token.
The problem is I need to send dozens of such requests. Two things change for each request:
the invoice id (in this case '12345678')
the auth token (herebetoken1)
How can I automate it?
You can use the Postman Collection Runner for this. In Runner, you can send specified requests for a specified number of iterations, with an optional delay, driven by a data file (JSON or CSV).
For more info, I suggest you take a look at the links below.
Importing Data Files in Postman
Using CSV and JSON Data Files
(The original answer includes screenshots of the request, the Runner setup, the data files (you can choose either data.json or data.csv), the data preview in Runner, and the result.)
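For illustration, a data file for this scenario might pair each invoice id with its auth token. A hypothetical data.json (the column names must match the variables referenced in the request, e.g. {{id}} and {{token}}):

[
    { "id": "12345678", "token": "herebetoken1" },
    { "id": "22222222", "token": "herebetoken2" }
]

Each row drives one iteration of the run, so Runner substitutes a fresh id/token pair into the request each time.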
Alternatively, use the pre-request script below, replacing the invoice id in the URL and the auth token in the Authorization header with the {{id}} and {{token}} variables, then use the Collection Runner to execute it.
Replace the hashmap contents with what you require:
// Map each invoice id to its auth token; replace with your own values.
const hashmap = {
    "1234": "authtoken1",
    "2222": "authtoken2"
}

// Initialise the iteration counter on the first run.
if (pm.variables.get("count") === undefined) {
    pm.variables.set("count", 0)
}

let keyval = Object.entries(hashmap)
let count = pm.variables.get("count")

if (count < keyval.length) {
    // Expose the current id/token pair as {{id}} and {{token}}.
    pm.variables.set("id", keyval[count][0])
    pm.variables.set("token", keyval[count][1])
    pm.variables.set("count", ++count)
    // Re-run this request until every pair has been sent.
    if (count < keyval.length) {
        postman.setNextRequest(pm.info.requestName)
    }
}
Example collection:
https://www.getpostman.com/collections/43deac65a6de60ac46b3 (click Import and import by link)

Error Pulling Facebook Ad Campaign

I am trying to automate a task for my company. They want me to pull the insights from their ad campaigns and put them in a CSV file. From there I will create an Excel sheet that grabs this data and automates the plots that we send to our clients.
I have referenced the example code from the library, and I believe my confusion lies in who I define 'me' to be in line 14.
token = 'temporary token from facebook API'
VOCO_id = 'AppID'
AppSecret = 'AppSecret'
me = 'facebookuserID'
AppTokensDoNotExpire = 'AppToken'
from facebook_business import FacebookSession
from facebook_business import FacebookAdsApi
from facebook_business.adobjects.campaign import Campaign as AdCampaign
from facebook_business.adobjects.adaccountuser import AdAccountUser as AdUser
session = FacebookSession(VOCO_id,AppSecret,AppTokensDoNotExpire)
api = FacebookAdsApi(session)
FacebookAdsApi.set_default_api(api)
me = AdUser(fbid=VOCO_id)
####my_account = me.get_ad_account()
When I run the code with the hashtag still on my_account, I get a response stating that the status is "live" for these, but that the value of my permissions is not compatible.
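For comparison, a minimal sketch of the pattern the SDK examples use, where the Graph API alias 'me' is passed as the fbid so it resolves to whoever owns the access token (the credential values below are placeholders, and get_ad_accounts is the plural accessor on AdUser):

from facebook_business.api import FacebookAdsApi
from facebook_business.adobjects.adaccountuser import AdAccountUser as AdUser

app_id = 'APP_ID'              # placeholder
app_secret = 'APP_SECRET'      # placeholder
access_token = 'ACCESS_TOKEN'  # placeholder

FacebookAdsApi.init(app_id, app_secret, access_token)

# 'me' is the Graph API alias for the user who owns the access token,
# so no numeric user id is needed here.
me = AdUser(fbid='me')
my_accounts = list(me.get_ad_accounts())
print(my_accounts)

With a valid user token, this lists the ad accounts the user can access, which is the usual starting point before pulling campaign insights.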

How to store python json.dumps into firebase as one payload, without firebase auto-creating indexes in front of each object

First time poster, and I'm not really a developer, so perspective is always appreciated :)
Objective:
I am attempting to put (or patch) a json.dumps(mergedFile) into firebase as one payload, without firebase auto-creating indexes (0, 1, etc.) in front of each object.
Problem statement:
I am submitting the following json object into the /testObject path:
[{"test1":"226.69"},{"test2":"7.48"}]
In firebase the response is stored as:
[
    {
        "testObject": {
            0: {
                "test1": "226.69"
            },
            1: {
                "test2": "7.48"
            }
        }
    }
]
Background:
- The total number of items in the payload I need to store is just over 5000.
- If I write each object via a for loop, the data is stored as expected; however, this initiates a new request for each iteration of the loop and has a large overhead impact compared to just dumping one large object in one request.
Here is my Code:
import json
import requests
import xml.etree.ElementTree as ET

def get_data():
    try:
        print 'hamsters are running...'
        # OFFLINE TESTING
        sourceFile = 'response.xml'
        tree = ET.parse(sourceFile)
        root = tree.getroot()
        for symbol in root.iter('symbol'):
            company = symbol.attrib['company']
            location = symbol.attrib['location']
            destinationData = {company: location}
            mergedFile.append(destinationData)
        print('downloading the info was a success! :)')
    except:
        print 'Attempt to download information did not complete successfully :('

def patch_data():
    try:
        print 'attempting to upload info to database...'
        data = json.dumps(mergedFile)
        print data
        try:
            req = requests.put(url, data=data, headers=headers)
            req.raise_for_status()
        except requests.exceptions.HTTPError as e:
            print e
            print req.json()
        print 'upload to database complete!'
    except:
        print 'Attempt to upload information did not complete successfully :('

if __name__ == "__main__":
    mergedFile = []
    auth = "*****"
    databaseURL = 'https://*****.firebaseio.com'
    headers = {"auth": auth, "print": "pretty"}
    # headers = {"auth": auth, "print": "pretty", "Accept": "text/event-stream"}
    requestPath = '/testObject.json?auth=' + auth
    url = databaseURL + requestPath
    get_data()
    patch_data()
I feel like it's storing an array, but I'm using data = json.dumps(mergedFile) before the put request. Do I have a misunderstanding of how json.dumps works? Based on the output printed before the request, it looks good to me. I'm also using the requests python module... is this converting the data to an array?
Any insight anyone could provide would be greatly appreciated!
Regards,
James.
The Firebase Database stores arrays as regular key-value pairs, with the keys being numbers. So what you see is the expected behavior.
There are many reasons why Firebase recommends against storing arrays in the database. A few can be found in these links:
Best Practices: Arrays in Firebase
Proper way to store values array-like in Firebase
Firebase documentation on structuring data
Other questions about arrays in Firebase
this answer on arrays vs sets
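If you want named keys instead of numeric indexes, one workaround is to build a dict keyed by a meaningful value (here, the company name) rather than a list of single-entry objects; json.dumps then produces a JSON object, which Firebase stores with your keys intact. A minimal sketch adapted from the question's code (not a drop-in replacement):

import json

# Build one object keyed by company instead of appending
# {company: location} dicts to a list.
mergedFile = {}
for company, location in [("test1", "226.69"), ("test2", "7.48")]:
    mergedFile[company] = location

data = json.dumps(mergedFile)
print(data)  # {"test1": "226.69", "test2": "7.48"}

# PUT this to /testObject.json in a single request; because the top-level
# value is an object rather than an array, Firebase keeps the keys as-is.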

PowerBI Embedded - Unauthorized when importing pbix

I am getting an Unauthorized response when trying to import a pbix into Power BI Embedded. This was working a few days ago, as far as I can remember. Here is the code I am using; it is basically the same as the GitHub example. Has something recently changed? Thanks.
// Create a dev token for import
var devToken = PowerBIToken.CreateDevToken(workspaceCollectionName, workspaceId);
using (var client = CreateClient(devToken))
{
    // Import PBIX file from the file stream
    var import = await client.Imports.PostImportWithFileAsync(workspaceCollectionName, workspaceId, fileStream, datasetName);

    // Example of polling the import to check when the import has succeeded.
    while (import.ImportState != "Succeeded" && import.ImportState != "Failed")
    {
        import = await client.Imports.GetImportByIdAsync(workspaceCollectionName, workspaceId, import.Id);
        Console.WriteLine("Checking import state... {0}", import.ImportState);
        Thread.Sleep(1000);
    }
}
Figured out the issue; it was on my side, introduced during some refactoring. I was passing in an incorrect workspace id. I'm not sure why I receive an Unauthorized response back when I pass in an incorrect workspace id, though.

Decoding Django session data in Node js

I am trying to decode Django session data stored in a redis DB into a JSON object. I have the session data from redis in a variable djangoSessionData; when I console.log this data, it looks like this:
���} �(�_auth_user_backend��)django.contrib.auth.backends.ModelBackend��_auth_us er_hash_auth_user_id��2��user_unique_key��abc�u.65��
When I query redis directly and look at the session data value, it is like this:
"\x80\x04\x95\xaf\x00\x00\x00\x00\x00\x00\x00}\x94(\x8c\x12_auth_user_backend\x94\x8c)django.contrib.auth.backends.ModelBackend\x94\x8c\x0f_auth_user_hash\x94\x8c(6d34e7154c4d217233c7346177325969d1832565\x94\x8c\r_auth_user_id\x94\x8c\x012\x94\x8c\x0fuser_unique_key\x94\x8c\x03abc\x94u."
I am trying to decode it into a JSON string using:
var sessionData = new Buffer(djangoSessionData, 'base64').toString();
But when I console.log sessionData, it looks like this:
������z��i���cjx(r���&ں�[i���
�ץ�$zwj�a����Z��߇�מ�ݵ�m�s���^��n}��u�}���ں��ǫ�'v�ǫ����翑��m˿
So it looks like it is not decoding properly. How can it be decoded into a JSON object in Node.js?
Edit:
I am using Django 1.9 and saving sessions to the redis DB using:
SESSION_ENGINE = 'django.contrib.sessions.backends.cached_db'