I tried to use https://github.com/aws/graph-notebook to connect to a Blazegraph database. I have verified that Blazegraph is running from the following output:
serviceURL: http://192.168.1.240:9999
Welcome to the Blazegraph(tm) Database.
Go to http://192.168.1.240:9999/bigdata/ to get started.
I did the following in the Jupyter notebook:
%%graph_notebook_config
{
  "host": "localhost",
  "port": 9999,
  "auth_mode": "DEFAULT",
  "iam_credentials_provider_type": "ENV",
  "load_from_s3_arn": "",
  "aws_region": "us-west-2",
  "ssl": false,
  "sparql": {
    "path": "blazegraph/namespace/foo/sparql"
  }
}
Then I ran:
%status
which gives the error {'error': JSONDecodeError('Expecting value: line 1 column 1 (char 0)',)}
I tried replacing the host with 192.168.1.240 and I still have the same problem.
Looks like you found a bug!
This is now fixed in https://github.com/aws/graph-notebook/pull/137.
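In the meantime, one way to rule out plain connectivity problems is to query the SPARQL endpoint directly from a notebook cell. This is only a rough sketch; the host, the namespace foo, and the blazegraph/namespace/.../sparql path are taken from the config above and may need adjusting (e.g. to bigdata/... as in the welcome message):

import requests

# Hit the Blazegraph SPARQL endpoint directly, bypassing the notebook magics
url = "http://192.168.1.240:9999/blazegraph/namespace/foo/sparql"
resp = requests.post(
    url,
    data={"query": "SELECT * WHERE { ?s ?p ?o } LIMIT 5"},
    headers={"Accept": "application/sparql-results+json"},
)
print(resp.status_code)
print(resp.json())

If this returns valid JSON, the endpoint itself is reachable and the problem is on the notebook side.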
I've created a simple VM in VirtualBox and installed Ubuntu; however, I am unable to import it into AWS and generate an AMI from it.
Operating system: Ubuntu 20.04.4 LTS
Kernel: Linux 5.4.0-104-generic
I've followed the steps provided in the docs and set up role-policy.json & trust-policy.json:
https://docs.aws.amazon.com/vm-import/latest/userguide/vmie_prereqs.html#vmimport-role
https://docs.aws.amazon.com/vm-import/latest/userguide/vmimport-image-import.html
I keep running into the error:
{
  "ImportImageTasks": [
    {
      "Description": "My server VM",
      "ImportTaskId": "import-ami-xxx",
      "SnapshotDetails": [
        {
          "DeviceName": "/dev/sde",
          "DiskImageSize": 2362320896.0,
          "Format": "VMDK",
          "Status": "completed",
          "Url": "s3://xxxx/simple-vm.ova",
          "UserBucket": {
            "S3Bucket": "xxx",
            "S3Key": "simple-vm.ova"
          }
        }
      ],
      "Status": "deleted",
      "StatusMessage": "ClientError: We were unable to read your import's initramfs/initrd to determine what drivers your import requires to run in EC2.",
      "Tags": []
    }
  ]
}
I've tried converting the disk to and from .vdi and .vmdk.
I've tried disabling the floppy drive and updating the initramfs.
I ran into this error and was able to get around it by using import-snapshot instead of import-image. I could then create an image from the snapshot through the ordinary means and deploy it.
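For reference, a rough sketch of that import-snapshot workflow with boto3; the bucket, key, and device names are placeholders, and as far as I know import-snapshot expects a raw disk image (e.g. the VMDK extracted from the OVA) rather than the OVA itself:

import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Import the uploaded disk image from S3 as an EBS snapshot (instead of an AMI)
task = ec2.import_snapshot(
    Description="simple-vm",
    DiskContainer={
        "Format": "VMDK",
        "UserBucket": {"S3Bucket": "xxx", "S3Key": "simple-vm.vmdk"},
    },
)

# Poll the task until it reports a snapshot id
snapshot_id = None
while snapshot_id is None:
    detail = ec2.describe_import_snapshot_tasks(
        ImportTaskIds=[task["ImportTaskId"]]
    )["ImportSnapshotTasks"][0]["SnapshotTaskDetail"]
    if detail.get("Status") == "completed":
        snapshot_id = detail["SnapshotId"]
    else:
        time.sleep(30)

# Register an AMI with that snapshot as the root volume
ec2.register_image(
    Name="simple-vm",
    Architecture="x86_64",
    VirtualizationType="hvm",
    RootDeviceName="/dev/sda1",
    BlockDeviceMappings=[{"DeviceName": "/dev/sda1", "Ebs": {"SnapshotId": snapshot_id}}],
    EnaSupport=True,
)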
I'm trying to use Django with multiple databases: a default PostgreSQL database and a second MySQL database. I followed the documentation on how to configure both:
https://docs.djangoproject.com/en/4.0/topics/db/multi-db/
My settings.py looks like this:
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "default",
        "USER": "----------",
        "PASSWORD": "-----------",
        "HOST": "localhost",
        "PORT": "5432",
    },
    "data": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "data",
        "USER": "---------",
        "PASSWORD": "---------",
        "HOST": "localhost",
        "PORT": "3306",
    }
}
Things work fine when querying the default database, but it seems that Django uses PostgreSQL SQL syntax for queries against the MySQL database:
# Model "Test" lives in the mysql-db.
Test.objects.using('data').all()
# gives me the following exception
psycopg2.errors.SyntaxError: ERROR: SyntaxError at ».«
LINE 1: EXPLAIN SELECT `test`.`id`, `test`.`title`, ...
It even uses the PostgreSQL client psycopg2 to generate and execute the query.
Using a raw query works fine, though, which means the MySQL database is correctly used for that specific query.
Test.objects.using('data').raw('SELECT * FROM test')
# works
How can I configure Django to use the correct SQL syntax when using multiple databases with different engines?
Thanks!
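For reference, a quick diagnostic sketch (not a fix) that checks which connection and backend Django will actually use for that queryset, relying only on the documented QuerySet.db attribute and the connections API; Test is the model from the question:

from django.db import connections

qs = Test.objects.using('data').all()

# The alias the queryset is routed to, and the backend vendor behind that alias
print(qs.db)                      # expected: 'data'
print(connections[qs.db].vendor)  # expected: 'mysql'

# The SQL Django generates for this queryset
print(str(qs.query))

If the vendor reported here is not 'mysql', the problem is in the DATABASES configuration or routing rather than in the query itself.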
I need to create a new Nexus server on GCP and have decided to use an NFS mount point for data storage. Everything must be done with Ansible (the instance is already created with Terraform).
I need to get the dynamic IP assigned by GCP and create the mount point.
It works fine with the gcloud command, but how do I get only the IP?
Code:
- name: get info
  shell: gcloud filestore instances describe nfsnexus --project=xxxxx --zone=xxxxx --format='get(networks.ipAddresses)'
  register: ip

- name: Print all available facts
  ansible.builtin.debug:
    msg: "{{ip}}"
result:
ok: [nexus-ppd.preprod.d-aim.com] => {
    "changed": false,
    "msg": {
        "ansible_facts": {
            "discovered_interpreter_python": "/usr/bin/python3"
        },
        "changed": true,
        "cmd": "gcloud filestore instances describe nfsnexus --project=xxxxx --zone=xxxxx --format='get(networks.ipAddresses)'",
        "delta": "0:00:00.763235",
        "end": "2021-03-14 00:33:43.727857",
        "failed": false,
        "rc": 0,
        "start": "2021-03-14 00:33:42.964622",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "['1x.x.x.1xx']",
        "stdout_lines": [
            "['1x.x.x.1xx']"
        ]
    }
}
Thanks
Just use the proper format string, e.g., to get the first IP:
--format='get(networks.ipAddresses[0])'
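Putting it together, a sketch of how the registered value could then drive the NFS mount; the share name nexus, the mount path /opt/nexus, and the ansible.posix.mount module are assumptions, not from the question:

- name: Get Filestore IP
  shell: gcloud filestore instances describe nfsnexus --project=xxxxx --zone=xxxxx --format='get(networks.ipAddresses[0])'
  register: ip

- name: Mount the Filestore share on the Nexus instance
  ansible.posix.mount:
    src: "{{ ip.stdout | trim }}:/nexus"
    path: /opt/nexus
    fstype: nfs
    opts: rw
    state: mounted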
Found the solution, just add this:
- name:
  debug:
    msg: "{{ip.stdout_lines}}"
I'm feeling so stupid :(, I need to stop working after 2 AM :)
Thx
I am trying to deploy an app named abcd whose artifact is abcd.war. I want to configure it to use an external datasource. Below is my abcd.war/META-INF/context.xml file:
<Context>
    <ResourceLink global="jdbc/abcdDataSource1" name="jdbc/abcdDataSource1" type="javax.sql.DataSource"/>
    <ResourceLink global="jdbc/abcdDataSource2" name="jdbc/abcdDataSource2" type="javax.sql.DataSource"/>
</Context>
I configured the custom JSON below during a deployment:
{
  "datasources": {
    "fa": "jdbc/abcdDataSource1",
    "fa": "jdbc/abcdDataSource2"
  },
  "deploy": {
    "fa": {
      "database": {
        "username": "un",
        "password": "pass",
        "database": "ds1",
        "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
        "adapter": "mysql"
      },
      "database": {
        "username": "un",
        "password": "pass",
        "database": "ds2",
        "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
        "adapter": "mysql"
      }
    }
  }
}
I also added the recipe opsworks_java::context during the configure phase, but it doesn't seem to be working and I always get the message below:
[2014-01-11T16:12:48+00:00] INFO: Processing template[context file for abcd] action create (opsworks_java::context line 16)
[2014-01-11T16:12:48+00:00] DEBUG: Skipping template[context file for abcd] due to only_if ruby block
Can anyone please help with what I am missing in the OpsWorks configuration?
You can only configure one datasource using the built-in database.yml. If you want to pass additional information to your environment, please see Passing Data to Applications.
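For example, a sketch of what that could look like; the top-level key second_datasource is an arbitrary name, and a custom recipe would read it from the node attributes (e.g. node['second_datasource']['host']), since OpsWorks merges custom JSON into the node object:

{
  "second_datasource": {
    "name": "jdbc/abcdDataSource2",
    "username": "un",
    "password": "pass",
    "database": "ds2",
    "host": "reserved-alpha-db.abcd.us-east-1.rds.amazonaws.com",
    "adapter": "mysql"
  }
}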
I am trying to set up an AWS AMI build with Packer: http://www.packer.io/docs/builders/amazon-ebs.html
I am using the standard .json config:
{
  "type": "amazon-instance",
  "access_key": "YOUR KEY HERE",
  "secret_key": "YOUR SECRET KEY HERE",
  "region": "us-east-1",
  "source_ami": "ami-d9d6a6b0",
  "instance_type": "m1.small",
  "ssh_username": "ubuntu",
  "account_id": "0123-4567-0890",
  "s3_bucket": "packer-images",
  "x509_cert_path": "x509.cert",
  "x509_key_path": "x509.key",
  "x509_upload_path": "/tmp",
  "ami_name": "packer-quick-start {{timestamp}}"
}
It connects fine, and I see it create the instance in my AWS account. However, I keep getting a "Timeout waiting for SSH" error. What could be causing this problem, and how can I resolve it?
As I mentioned in my comment above, this is just because it sometimes takes more than a minute for an instance to launch and become SSH-ready.
If you want, you can set a longer timeout; the default timeout in Packer is 1 minute.
So you could set it to 5 minutes by adding the following to your JSON config:
"ssh_timeout": "5m"