Is it possible to
db2 connect somedb user myuser using mypwd
db2 precompile myapp.sqx OUTPUT myapp.cxx
when I only have read permission to the remote DB2 database? I'm only trying to SELECT, not write to the database, yet the precompile command is complaining that I don't have permission to "create in" ... What can I do differently so that I can query the database using C++? (I already have a ton of inherited code that uses embedded SQL precompiling, but the person who wrote it has write permission to the table and I don't, so I'm hoping to adapt the existing code somehow.)
You need to use the BINDFILE option for the PRECOMPILE command if you do not have the ability to create packages in the database:
db2 "precompile myapp.sqx BINDFILE USING myapp.bnd OUTPUT myapp.cxx"
This will generate a file, myapp.bnd, that you can use (or provide to your DBA) to create the package at a later date (along with the myapp.cxx file).
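Later, someone who does have the necessary privileges can create the package from the bind file using the CLP, for example (the user name and password here are placeholders):
db2 connect to somedb user dbauser using dbapwd
db2 bind myapp.bnd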
Please make sure that you track your bind files carefully with your precompiled code and binaries. The bind files and generated source code are paired, so if you supply the wrong bind file with your binary you'll end up with version mismatch errors.
Related
I need to copy my SQL queries to my own drive. I created these queries using the editor in Toad for Oracle 12.6. Can someone please tell me which folder Toad saves my queries in?
The problem is that my Toad is not working anymore and I need to reinstall it, so before doing that I want to get my saved SQL files onto my own drive.
The queries are saved in a file called:
SAVEDSQL.xml
This file is (in my case) under:
C:\Program Files\Quest Software\Toad for Oracle\User Files
Your saved SQL is stored in SavedSQL.dat. Its location is %APPDATA%\Dell\Toad for Oracle\12.6\SavedSQL.dat
If you back that file up, you can restore your current installation to a "new install" state without actually reinstalling. Toad does nothing with its installation folder, and the likelihood that something is corrupt with the installation is slim. Your user configuration files may have gone squirrely, though. To restore your config files, choose "Copy User Settings..." from the Utilities menu and select the option to reset to a clean set. Toad will restart. Close Toad and copy your backed-up SavedSQL.dat back to %APPDATA%\Dell\Toad for Oracle\12.6\SavedSQL.dat.
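For example, from a command prompt (the backup location is just an illustration):
copy "%APPDATA%\Dell\Toad for Oracle\12.6\SavedSQL.dat" "%USERPROFILE%\SavedSQL.dat.bak"
rem ...reset to clean settings in Toad, then restore the file:
copy "%USERPROFILE%\SavedSQL.dat.bak" "%APPDATA%\Dell\Toad for Oracle\12.6\SavedSQL.dat"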
If you provide info on what it means for Toad to have stopped working, there may be another issue we can resolve to save you from trashing your config files.
SavedSQL.xml contains lots of cryptic-looking info about connections, but no scripts.
SavedSQL.dat does not exist.
I should mention that I am using Toad for Oracle Xpert, version 12.10.
We’re currently upgrading our archaic build system from a bunch of batch scripts to a makefile-based system using NMAKE. It’s challenging because we use a custom intermediate language that gets translated to C++, and some of our translators can generate tens of files that share common parts in their file names. The other challenge is that we use a bunch of CSV files to configure our interfaces, and these files get passed to our configuration tools, which generate more source code files. Right now I am focusing on creating the simple rules for our configuration files, but I can’t figure out a way to associate a dependency with a rule only if the dependency exists. I tried to use $(wildcard xxx.csv) but found out that this function doesn’t exist in NMAKE like it does in GNU Make.
So how can I create my rule so that it executes and runs my commands when I have two dependency CSV files that will always exist and a third CSV file that will exist only when my project calls for it?
[..] will exist only when my project calls for it?
This is a bit unclear. Assuming there is a command that, depending on some external circumstances, might generate that third CSV file, you could use a "stamp file" (I believe it's called a "pseudo target" in NMAKE):
# touching "stamp" updates its timestamp (or creates it)
stamp:
    command_that_might_generate_csv3
    touch stamp

target: csv1 csv2 stamp
    command_using_all_of csv1 csv2 csv3
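Alternatively, if the third file's presence is already decided by the time NMAKE parses the makefile, the preprocessor's !IF EXIST directive can choose the dependency list directly. A minimal sketch, with csv1.csv, csv2.csv, and csv3.csv standing in for your real file names:

!IF EXIST(csv3.csv)
DEPS = csv1.csv csv2.csv csv3.csv
!ELSE
DEPS = csv1.csv csv2.csv
!ENDIF

target: $(DEPS)
    command_using $(DEPS)

Note that EXIST is evaluated once, when the makefile is read, so this won't help if csv3.csv is generated during the same NMAKE run; the stamp-file approach above covers that case.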
I can't open my SAS EG project; I get the error message:
"Unable to open file ...abc.egp as valid project file"
This happened because my hard disk was full while I was trying to save the project, so EG couldn't finish writing the project changes.
I've tried to clear the history but no luck.
I suspect your SAS EG file might be irreversibly corrupt, so the focus is then on recovery of the file or its content.
If your disk drive is NTFS based, you might be able to recover the file. Check for previous versions in the file properties.
Also, what was the structure of your file inside? If it was a code-driven project, you can make a copy of the file, change the extension to "zip", and then unzip it or look inside for its contents. SAS EG projects are just ZIP archives with XML maps and related SAS code.
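For example, in PowerShell (this assumes PowerShell 5+ for Expand-Archive; the file names are placeholders):
Copy-Item abc.egp abc.zip
Expand-Archive abc.zip -DestinationPath abc_recovered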
The last option is to see if you have logging enabled in SAS EG. If you do, all the code you ran on that date will be available in your logs, so you can recover it from there.
I have been looking at the ODB ORM for some time now and have had some practice with it. My problem is switching between different DBMSes without recompiling the code. From my Java background, I can simply change a config file and the ORM works, e.g. Hibernate. So far I can compile the 'hello' example under 'odb-examples-2.2.0.tar.gz' and connect to MySQL and PostgreSQL successfully.
Please share your ways of resolving this; code samples would also be very helpful. I would like to switch databases by simply changing a config file. So far, referring to the manual has not helped. My system needs to be cross-platform.
If dynamic support is sufficient for you, then the following example will do the trick.
The following command line is needed before compiling the other files:
odb --std c++11 --multi-database dynamic -d common -d mysql -d sqlite \
--generate-query --generate-schema person.hxx
In my example I'm using the command line as shown in the manual (2.10). From what I've read, as long as you use odb::query and odb::transaction, you don't need to do anything else special to work with multiple databases.
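A minimal sketch of picking the backend at run time (open_database, the connection parameters, and the person constructor are placeholders; person.hxx is assumed to define a persistent person class as in the manual's example):

#include <memory>
#include <string>

#include <odb/database.hxx>
#include <odb/transaction.hxx>

#include <odb/mysql/database.hxx>
#include <odb/sqlite/database.hxx>

#include "person.hxx"
#include "person-odb.hxx"

// Choose the backend at run time, e.g., from a config file entry.
std::unique_ptr<odb::database>
open_database (const std::string& backend)
{
  if (backend == "mysql")
    return std::unique_ptr<odb::database> (
      new odb::mysql::database ("user", "passwd", "dbname"));
  else
    return std::unique_ptr<odb::database> (
      new odb::sqlite::database ("dbname.db"));
}

int
main ()
{
  std::unique_ptr<odb::database> db (open_database ("sqlite"));

  odb::transaction t (db->begin ());
  person p ("John", "Doe"); // assumes a matching constructor
  db->persist (p);
  t.commit ();
}

The database-specific generated code (person-odb-mysql.cxx, person-odb-sqlite.cxx) still has to be compiled and linked in, but the application code above only depends on the common odb::database interface, so the choice of backend can come from a config file.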
I've downloaded the geocommons/geocoder source and have one small sample TIGER/Line zip file from the Census site saved to /opt/tiger/tl_2010_01_state10.zip
I've tried to run the tiger_import tool on this file with the command:
build/tiger_import /opt/tiger/geocoder.db /opt/tiger
with all of the prerequisite gems installed, specifically the Text, fastercsv, and sqlite3-ruby gems, as well as having run make and make install.
However, when I execute tiger_import, I get the error:
ls: /opt/tiger/*/*/tl_*_edges.zip: No such file or directory
although there seems to be a geocoder.db file created in /opt/tiger.
Does anyone have better information on the steps necessary to build the TIGER/Line data with the geocoder?
The script expects a directory structure more like the 2009 data:
ftp://ftp2.census.gov/geo/tiger/TIGER2009/
Under your tiger directory you'll need one directory per state (01_ALABAMA, for instance) containing one directory per county (01001_Autauga_County), and inside that you'll need the _addr.zip, _edges.zip, and _featnames.zip files.
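Concretely, the layout the script expects looks something like this (file names follow the 2009 naming convention; Autauga County is just an illustration):
/opt/tiger/01_ALABAMA/01001_Autauga_County/tl_2009_01001_addr.zip
/opt/tiger/01_ALABAMA/01001_Autauga_County/tl_2009_01001_edges.zip
/opt/tiger/01_ALABAMA/01001_Autauga_County/tl_2009_01001_featnames.zip
This matches the glob in the error message you saw (/opt/tiger/*/*/tl_*_edges.zip).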
It's true: the 2010 data isn't laid out this way (there's a giant directory for each shape type, with a file for each county in each directory), but the import script as currently written isn't set up to use the 2010 layout. Given that, you might have less heartache using the 2009 data until the import scripts get updated for 2010, or until all of the 2010 data gets published.