I have a problem writing to a Firebird database using IBPP. I can run queries against tables without problems using SELECT statements, but whenever I try to write values using IBPP::Set or IBPP::Execute, I get an error.
This is how I connect to my database:
db = IBPP::DatabaseFactory(settings.ServerName, settings.DbName, settings.UserName, settings.Password,
"WIN1252", "PAGE_SIZE 8192 DEFAULT CHARACTER SET WIN1252");
db->Connect();
IBPP::Transaction tr = IBPP::TransactionFactory(db);
IBPP::Statement st = IBPP::StatementFactory(db, tr);
Then I wanted to set a specific value:
st->Execute("UPDATE T_GEOMODELL SET Distance= 42.0 WHERE (OBJEKT_ID = 1756056);");
I also tried
st->Prepare("SELECT * FROM T_GEOMODELL WHERE OBJEKT_ID = 1756056");
st->Set(6, "41");
st->Execute();
Here I get the error that "this->mOutRow" is "nullptr".
With the Firebird ISQL tool, however, the same command
UPDATE T_GEOMODELL SET Distance= 42.0 WHERE (OBJEKT_ID = 1756056);
works without problems.
I am using Visual C++ 2015 under x64.
Thanks in advance for any help!
Okay, I found the error: I had to start and commit the transaction using tr->Start() and tr->Commit():
tr->Start();
st->Execute("UPDATE T_GEOMODELL SET Distance= 42.0 WHERE (OBJEKT_ID = 1756056);");
tr->Commit();
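A side note on the failed Prepare/Set attempt above: in IBPP, Set() fills the input parameters (the '?' placeholders) of a prepared statement; it does not write into the columns of a SELECT result, which is presumably why the statement's internal row buffer was null. Below is a minimal sketch of the parameterized route under the same connection setup; the parameter types (DISTANCE as a double, OBJEKT_ID as an integer) are assumptions on my part, not taken from the table definition.
// Sketch: parameterized UPDATE with IBPP, reusing db, tr and st from above.
tr->Start();
st->Prepare("UPDATE T_GEOMODELL SET DISTANCE = ? WHERE OBJEKT_ID = ?");
st->Set(1, 42.0);    // first '?' -- Set() addresses input parameters by position
st->Set(2, 1756056); // second '?' -- assumed integer type
st->Execute();
tr->Commit();
This also avoids formatting values into the SQL string by hand.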
Related
I implemented the simple net-snmp example application in my project. When I tried it with the example OID given on the site, .1.3.6.1.2.1.1.1.0, it worked just fine.
Now I need to get data from these OIDs: https://www.sysadmin.md/snmp-most-useful-linux-oids.html. When I tried to use these OIDs in my code, I got the error: Process finished with exit code 139 (interrupted by signal 11: SIGSEGV).
The simple application code:
add_mibdir(".");
pdu = snmp_pdu_create(SNMP_MSG_GET);//creating a get pdu
if(read_objid(".1.3.6.1.4.1.2021.11.9", id_oid, &id_len)==1)//specifying the oid we want to receive
cout<<"oid read"<<endl;
else
cout<<"couldnt read oid"<<endl;
snmp_add_null_var(pdu, id_oid, id_len);//making room for the response
status = snmp_synch_response(session_handle, pdu, &response);//getting the response
cout<<"status: "<<status<<endl;
string ReturnBuffer;
cout<<"vars: "<<vars->val.string<<endl;
for(vars = response->variables; vars; vars = vars->next_variable)//printing the values we got back
{
cout<<vars->val.string<<endl;
ReturnBuffer.append(reinterpret_cast<char*>(vars->val.string));
}
ReturnBuffer.append("\0"); //test
return ReturnBuffer;
According to the output, the OID itself is read and the status of the response is 0, yet I am not getting any of the data and I get the error above. I'd appreciate any help; thanks in advance.
Language: C++
OS: Linux (Ubuntu)
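This is not an authoritative diagnosis, but two things stand out in the snippet: vars is dereferenced in the "vars:" cout line before the loop ever assigns it from response->variables, which by itself can cause the SIGSEGV, and vars->val.string is not NUL-terminated, so appending it directly can read past the buffer. A minimal sketch of a safer response loop, assuming the same session_handle and pdu setup as above:
// Sketch of a safer response loop; session_handle and pdu as in the question.
netsnmp_pdu *response = NULL;
int status = snmp_synch_response(session_handle, pdu, &response);

string ReturnBuffer;
if (status == STAT_SUCCESS && response != NULL && response->errstat == SNMP_ERR_NOERROR)
{
    for (netsnmp_variable_list *vars = response->variables; vars; vars = vars->next_variable)
    {
        char buf[1024];
        // snprint_value() renders any variable type as NUL-terminated text,
        // which also covers OIDs whose values are not ASN_OCTET_STR strings.
        if (snprint_value(buf, sizeof(buf), vars->name, vars->name_length, vars) > 0)
            ReturnBuffer.append(buf);
    }
}
else
{
    cerr << "request failed, status: " << status << endl;
}
if (response)
    snmp_free_pdu(response);
return ReturnBuffer;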
I'm trying to follow a tutorial for using Spark from RStudio on DSX, but I'm running into the following error:
> library(sparklyr)
> sc <- spark_connect(master = "CS-DSX")
Error in spark_version_from_home(spark_home, default = spark_version) :
Failed to detect version from SPARK_HOME or SPARK_HOME_VERSION. Try passing the spark version explicitly.
I took the above code snippet from the connect-to-Spark dialog in RStudio.
So I took a look at SPARK_HOME:
> Sys.getenv("SPARK_HOME")
[1] "/opt/spark"
OK, let's check that the directory exists:
> dir("/opt")
[1] "ibm"
I'm guessing this is the cause of the problem?
NOTE: there are a few similar questions on Stack Overflow, but none of them are about IBM's Data Science Experience (DSX).
Update 1:
I tried the following:
> sc <- spark_connect(config = "CS-DSX")
Error in config$spark.master : $ operator is invalid for atomic vectors
Update 2:
An extract from my config.yml. Note that I have many more Spark services in my config; I've just pasted the first one:
default:
  method: "shell"

CS-DSX:
  method: "bluemix"
  spark.master: "spark.bluemix.net"
  spark.instance.id: "7a4089bf-3594-4fdf-8dd1-7e9fd7607be5"
  tenant.id: "sdd1-7e9fd7607be53e-39ca506ba762"
  tenant.secret: "xxxxxx"
  hsui.url: "https://cdsx.ng.bluemix.net"
Note that my config.yml was generated for me.
Update 3:
My .Rprofile looks like this:
# load sparklyr library
library(sparklyr)
# setup SPARK_HOME
if (nchar(Sys.getenv("SPARK_HOME")) < 1) {
  Sys.setenv(SPARK_HOME = "/opt/spark")
}
# setup SparkaaS instances
options(rstudio.spark.connections = c("CS-DSX","newspark","cleantest","4jan2017","Apache Spark-4l","Apache Spark-3a","ML SPAAS","Apache Spark-y9","Apache Spark-a8"))
Note that my .Rprofile was generated for me.
Update 4:
I uninstalled sparklyr and restarted the session twice. Next I tried to run:
library(sparklyr)
library(dplyr)
sc <- spark_connect(config = "CS-DSX")
However, the above command hung. I stopped it and checked the version of sparklyr, which seems to be OK:
> ip <- installed.packages()
> ip[ rownames(ip) == "sparklyr", c(0,1,3) ]
Package Version
"sparklyr" "0.4.36"
You cannot use the master parameter to connect to the Bluemix Spark service, if that is the intent; since your kernels are defined in the config.yml file, you should connect using the config parameter instead.
config.yml is loaded with your available kernel information (Spark instances):
Apache Spark-ic:
  method: "bluemix"
  spark.master: "spark.bluemix.net"
  spark.instance.id: "41a2e5e9xxxxxx47ef-97b4-b98406426c07"
  tenant.id: "s7b4-b9xxxxxxxx7e8-2c631c8ff999"
  tenant.secret: "XXXXXXXXXX"
  hsui.url: "https://cdsx.ng.bluemix.net"
Please use config:
sc <- spark_connect(config = "Apache Spark-ic")
as suggested in the tutorial:
http://datascience.ibm.com/blog/access-ibm-analytics-for-apache-spark-from-rstudio/
FYI: by default you are connected to Spark 2.0.2, as the check below shows; I am working on finding out how to change the version with the config parameter.
> version <- invoke(spark_context(sc), "version")
> print(version)
[1] "2.0.2"
Thanks,
Charles.
I had the same issue and fixed it as follows:
Go to C:\Users\USER_NAME\AppData\Local\spark\ and delete everything you find in the directory.
Then, in the R console run:
if (!require(shiny)) install.packages("shiny");
library(shiny)
if (!require(sparklyr)) install.packages("sparklyr");
library(sparklyr)
spark_install()
code=1000 [Unavailable exception] message="Cannot achieve consistency level ONE" info={'required_replicas': 1, 'alive_replicas': 0, 'consistency': 'ONE'}
code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}
I am inserting into Cassandra 2.0.13 (a single node, for testing) with the Python cassandra-driver, version 2.6.
The following are my keyspace and table definitions:
CREATE KEYSPACE test_keyspace WITH replication = { 'class': 'SimpleStrategy', 'replication_factor': '1' };
CREATE TABLE test_table (
    key text PRIMARY KEY,
    column1 text,
    ...,
    column17 text
) WITH COMPACT STORAGE AND
    bloom_filter_fp_chance=0.010000 AND
    caching='KEYS_ONLY' AND
    comment='' AND
    dclocal_read_repair_chance=0.000000 AND
    gc_grace_seconds=864000 AND
    read_repair_chance=0.100000 AND
    replicate_on_write='true' AND
    populate_io_cache_on_flush='false' AND
    compaction={'class': 'SizeTieredCompactionStrategy'} AND
    compression={'sstable_compression': 'SnappyCompressor'};
What I tried:
1) Multiprocessing (protocol version set to 1)
Each process has its own cluster and session (default_timeout set to 30.0):
def get_cassandra_session():
    """creates cluster and gets the session based on key space"""
    # be aware that session cannot be shared between threads/processes
    # or it will raise OperationTimedOut exceptions
    if CLUSTER_HOST2:
        cluster = cassandra.cluster.Cluster([CLUSTER_HOST1, CLUSTER_HOST2])
    else:
        # if only one address is available, we have to use the older protocol version
        cluster = cassandra.cluster.Cluster([CLUSTER_HOST1], protocol_version=1)
    session = cluster.connect(KEY_SPACE)
    session.default_timeout = 30.0
    return session
2) Batch insert (protocol version set to 2, because BatchStatement is enabled on Cassandra 2.x):
def batch_insert(session, batch_queue, batch):
    try:
        insert_user = session.prepare("INSERT INTO " + db.TABLE + " (" + db.COLUMN1 + "," + db.COLUMN2 + "," +
                                      db.COLUMN3 + "," + db.COLUMN4 + ") VALUES (?,?,?,?)")
        while batch_queue.qsize() > 0:
            # batch queue size is 1000
            row_tuple = batch_queue.get()
            batch.add(insert_user, row_tuple)
        session.execute(batch)
    except Exception as e:
        logger.error("batch insert fail.... %s", e)
The above function is invoked by:
batch = BatchStatement(consistency_level=ConsistencyLevel.ONE)
batch_insert(session, batch_queue, batch)
Tuples are stored in batch_queue.
3) Synchronous execution
Several days ago I posted another question, Cassandra update fails; Cassandra was complaining about a timeout issue. I was using synchronous execution for the updates.
Can anyone help? Is this an issue with my code, with the Python cassandra-driver, or with Cassandra itself?
Thanks a million!
If your question is about those errors at the top, those are server-side error responses.
The first says that the coordinator you contacted cannot satisfy the request at CL.ONE, with the nodes it believes are alive. This can happen if all replicas are down (more likely with a low replication factor).
The other two errors are timeouts, where the coordinator didn't get responses from 'live' nodes within the time configured in cassandra.yaml.
All of these indicate that the cluster you're connected to is not healthy. This could be because it is overwhelmed (high GC pauses), or experiencing network issues. Check the server logs for clues.
I got the following error, which looks very similar:
cassandra.Unavailable: Error from server: code=1000 [Unavailable exception] message="Cannot achieve consistency level LOCAL_ONE" info={'consistency': 'LOCAL_ONE', 'alive_replicas': 0, 'required_replicas': 1}
When I added a sleep(0.5) in the code, it worked fine. I was trying to write too much too fast...
How can I run SQL queries from my fab file, as below?
def allow_webservers_for_db():
    for ip in env.web_servers:
        run('echo "GRANT ALL ON %s.* TO \'%s\'#\'%s\' IDENTIFIED BY \'%s\'; | mysql --user=%s --password=%s"' % (env.db_schema, env.db_web_user, ip, env.db_password, env.db_user, env.db_password), pty=True)
        run('echo "UPDATE db SET host=\'%s\' where db=\'%s\'; | mysql --user=%s --password=%s --database=mysql"' % (ip, env.db_schema, env.db_web_user, env.db_password), pty=True)
        run('echo "UPDATE user SET host=\'%s\' where user=\'%s\';| mysql --user=%s --password=%s --database=mysql"' % (ip, env.db_web_user, env.db_user, env.db_password), pty=True)
The code runs with no errors but doesn't do what it should. If I copy and paste the command produced by echo into the mysql terminal (mysql>), the query runs properly.
What am I missing here? Is there a better way to run MySQL queries? I don't want to load them from a text file either.
You are just echoing the whole string, but you want to echo only the first part into the pipe to mysql. Remove the last " and place it between the ; and the |.
Example for the first line:
run('echo "GRANT ALL ON %s.* TO \'%s\'#\'%s\' IDENTIFIED BY \'%s\';" | mysql --user=%s --password=%s' ....
I am trying to configure the Magento Test Automation Framework on my system.
When I run phpunit on the command line, I get the following error. I get the same error when running the tests in NetBeans.
Strict Standards: Declaration of Mage_Selenium_Driver::doCommand() should
be compatible with that of PHPUnit_Extensions_SeleniumTestCase_Driver::doCommand()
in C:\MTAF\taf\lib\Mage\Selenium\Driver.php on line 38
..
Fatal error: Call to undefined method PHPUnit_Framework_TestSuite::isPublicTestMethod() in C:\MTAF\taf\lib\Mage\Selenium\TestCase.php on line 2502
Can someone please suggest a solution?
I changed the code; this is the change I made:
--- a/framework/Mage/Selenium/TestCase.php
+++ b/framework/Mage/Selenium/TestCase.php
@@ -409,7 +409,7 @@ class Mage_Selenium_TestCase extends PHPUnit_Extensions_SeleniumTestCase
         $testMethods = array();
         $class = new ReflectionClass(self::$_testClass);
         foreach ($class->getMethods() as $method) {
-            if (PHPUnit_Framework_TestSuite::isPublicTestMethod($method)) {
+            if ($method->isPublic()) {
                 $testMethods[] = $method->getName();
             }
         }
I was getting this error when running phpunit >= 4.0. Downgrading to 3.7.x solved it for me.