How do I connect to an Oracle cloud database within a C++ application?

I have a cloud database with Oracle and I'm trying to learn how to execute SQL commands within my C++ application on Windows.
Here's what I've tried.
1) Using Instant Client (OCCI)
I have tried to follow these docs posted by Oracle
I downloaded instant client, unzipped, and put it under the directory called Oracle
I downloaded the SDK for instant client and put it under the same directory
I downloaded the wallet files and put them under network/admin directory
I added the Oracle/instant_client directory to the PATH variable and created a user variable called ORACLE_HOME set to the same directory
Created a TNS_ADMIN user variable and set it to network/admin
Created an empty Visual Studio project, added the SDK to the Additional Include Directories, and added the SDK to the Additional Library Directories under the project properties
Added the .lib files to the Additional Dependencies under the project properties (Linker -> Input)
Then I wrote this code
Environment* env;
Connection* conn;
env = Environment::createEnvironment(Environment::DEFAULT);
conn = env->createConnection ("username", "password", "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp) (HOST=myserver111) (PORT=5521))(CONNECT_DATA = (SERVICE_NAME = bjava21)))");
env->terminateConnection(conn);
Environment::terminateEnvironment(env);
It compiles and runs, and it creates the environment successfully, but an error occurs when it tries to create the connection.
2) Using ODBC
Downloaded the ODBC Oracle Driver
Added a new data source name to the system DSN list
Tested connection successfully
Abandoned this approach as I couldn't find simple, helpful docs for using it in my project
3) Using Oracle Developer Tools for Visual Studio
I downloaded the developer tools and added them to Visual Studio
Connected to my Oracle database using Server Explorer
Was able to see my tables and data and modify them using the Server Explorer
Was unable to find docs or work out what code would let me execute SQL queries
Update
I have removed ORACLE_HOME and TNS_ADMIN as environment variables
I've tried to connect to my remote database using the Instant Client SDK, but no luck
I have tried the following and nothing has worked
createConnection("username", "password", "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp) (HOST=myserver111) (PORT=5521))(CONNECT_DATA = (SERVICE_NAME = bjava21)))")
createConnection("username", "password", "//host:[port][/service name]")
createConnection("username", "password", "xxx_low")
createConnection("username", "password", "protocol://host:port/service_name?wallet_location=/my/dir&retry_count=N&retry_delay=N")
createConnection("username", "password", "username/password#xxx_low")
I'm able to connect with SQL*Plus in a variety of ways, but not in my C++ application.
Error While Debugging:
Unhandled exception in exe: Microsoft C++ exception: oracle::occi::SQLException at memory location
Full Code
#include <occi.h>
using namespace oracle::occi;

int main() {
    Environment* env;
    Connection* conn;
    env = Environment::createEnvironment(Environment::DEFAULT);
    conn = env->createConnection("username", "password", "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp) (HOST=myserver111) (PORT=5521))(CONNECT_DATA = (SERVICE_NAME = bjava21)))");
    env->terminateConnection(conn);
    Environment::terminateEnvironment(env);
    return 0;
}

With Instant Client in general:
Don't set ORACLE_HOME. This can have side effects.
You don't need to set TNS_ADMIN, since you put the unzipped network files in the default directory.
For cloud:
In the app, use one of the network aliases from the tnsnames.ora file (e.g. xxxx_low). You can see the descriptors have extra info that your hard-coded descriptor is missing.
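For example, a minimal sketch with OCCI (the alias and credentials are placeholders for your own), wrapping the call in a try/catch so the underlying ORA- error is printed instead of the unhandled SQLException you're seeing:

#include <occi.h>
#include <iostream>
using namespace oracle::occi;

int main() {
    Environment* env = Environment::createEnvironment(Environment::DEFAULT);
    try {
        // "xxxx_low" stands in for an alias from your tnsnames.ora
        Connection* conn = env->createConnection("username", "password", "xxxx_low");
        std::cout << "Connected" << std::endl;
        env->terminateConnection(conn);
    } catch (SQLException& e) {
        // Prints the ORA- error code and message from the failed call
        std::cout << "Error " << e.getErrorCode() << ": " << e.getMessage() << std::endl;
    }
    Environment::terminateEnvironment(env);
    return 0;
}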
ODBC will be exactly the same. Once you have the wallet files extracted to the default network/admin subdirectory, you just need to connect with the DB credentials and use a network alias from the tnsnames.ora file.
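If you come back to the ODBC route, a minimal sketch looks like this (the DSN name and credentials are placeholders; the system DSN you created is what points at the tnsnames.ora alias):

#include <windows.h>
#include <sql.h>
#include <sqlext.h>
#include <iostream>

int main() {
    SQLHENV env = SQL_NULL_HENV;
    SQLHDBC dbc = SQL_NULL_HDBC;
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    // "MyOracleDSN" stands in for the system DSN you created earlier
    SQLCHAR connStr[] = "DSN=MyOracleDSN;UID=username;PWD=password;";
    SQLRETURN rc = SQLDriverConnect(dbc, NULL, connStr, SQL_NTS,
                                    NULL, 0, NULL, SQL_DRIVER_NOPROMPT);
    if (SQL_SUCCEEDED(rc)) {
        std::cout << "Connected" << std::endl;
        SQLDisconnect(dbc);
    } else {
        std::cout << "Connection failed" << std::endl;
    }
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}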
More info is in my blog post How to connect to Oracle Autonomous Cloud Databases.
The official doc is Connect to Autonomous Database Using Oracle Database Tools.

Related

WebSphere 8.5.5 Error: PLGC0049E: The propagation of the plug-in configuration file failed for the Web server

I just installed a new WebSphere 8.5.5 ESB on Linux CentOS 7.
I did the entire installation as the root user.
Then I did the following steps to create a web service:
1) Create a server with user wasadmin
2) Generate the plugin
3) Propagate the plugin
In the last step I get the error:
PLGC0049E: The propagation of the plug-in configuration file failed for the Web server. test2lsoa01-02Node01Cell.XXXXXXXXX-node.IHSWebserver.
Error A problem was encountered transferring the designated file. Make sure the file exists and has correct access permissions.
The file /u01/apps/IBM/WebSphere/profiles/ApplicationServerProfile1/config/cells/test2lsoa01-02Node01Cell/nodes/XXXXX-node/servers/IHSWebserver/plugin-cfg.xml exists.
As a test, I gave plugin-cfg.xml chmod 777 permissions.
Still the error is not going away.
Can someone help?
User wsadmin would be the user attempting to move the file. Ensure that ID can access /u01/apps/IBM/WebSphere/profiles/ApplicationServerProfile1/config/cells/test2lsoa01-02Node01Cell/nodes/XXXXX-node/servers/IHSWebserver/plugin-cfg.xml and there should be a target directory as well (in the webserver installation where plugin-cfg.xml is being moved to). Ensure that wsadmin has write access to this target location if propagating using node sync. If using IHS admin, ensure that the userid/password defined in the web server definition has write access to the target location.
A good test would be to access the source plugin-cfg.xml using wsadmin userid and attempt to manually move the file to the target location with the appropriate ID (based upon use of node sync or IHS admin).

Azure WebJob FileTrigger Path 'D:\home\data\...' does not exist

I've created a WebJob to read files from Azure Files when they are created.
When I run it locally it works but it doesn't when I publish the WebJob.
My Main() function is:
static void Main()
{
    string connection = "DefaultEndpointsProtocol=https;AccountName=MYACCOUNTNAME;AccountKey=MYACCOUNTKEY";
    JobHostConfiguration config = new JobHostConfiguration(connection);
    var filesConfig = new FilesConfiguration();
    if (config.IsDevelopment)
    {
        config.UseDevelopmentSettings();
        filesConfig.RootPath = @"c:\temp\files";
    }
    config.UseFiles(filesConfig);
    var host = new JobHost(config);
    // The following code ensures that the WebJob will be running continuously
    host.RunAndBlock();
}
The function to be triggered when the file is created is:
public void TriggerTest([FileTrigger(@"clients\{name}", "*.txt", WatcherChangeTypes.Created)] Stream file, string name, TextWriter log)
{
    log.WriteLine(name + " received!");
    // ...
}
And the error I get when the WebJob is published is:
[08/17/2016 00:15:31 > 4df213: ERR ] Unhandled Exception: System.InvalidOperationException: Path 'D:\home\data\clients' does not exist.
The idea is to make the WebJob trigger when new files are created in the "clients" folder of the Azure Files share.
Can someone help me?
Based on your requirements, I tested this on my side and reproduced your issue:
Unhandled Exception: System.InvalidOperationException: Path 'D:\home\data\clients' does not exist
When the WebJob is published, FilesConfiguration.RootPath is set to the D:\home\data directory when running in an Azure Web App. You could refer to the source code:
https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions/Extensions/Files/Config/FilesConfiguration.cs
As the following tutorial has mentioned, FilesConfiguration.RootPath should be set to a valid directory.
https://azure.microsoft.com/en-us/blog/extensible-triggers-and-binders-with-azure-webjobs-sdk-1-1-0-alpha1
Please check and make sure that the specified directory exists in the Web App which hosts your WebJob.
Trigger when new files are created in the "clients" folder of the Azure Files via WebJob
As far as I know, there are two triggers for Azure Storage:
QueueTrigger - When an Azure Queue Message is enqueued.
BlobTrigger – When an Azure Blob is uploaded.
The new WebJobs SDK provides a File trigger which can trigger functions based on file events.
However, a file trigger can monitor file additions/changes in a particular directory, but there seems to be no trigger for monitoring file additions/changes on Azure File Storage.
In the Azure environment, WebJobs are stored in the local folder known as D:\home, and D:\local is the local folder used by the web hooks. I needed a folder for temporary use: downloading a file from an SFTP server and then reading the file back from that temporary location to consume it in my application.
I used D:\local\Temp as the temporary folder. The code checks whether the folder exists and creates it if not, then downloads a file from the server to that location, reads it from the same location, and finally deletes the file from the temporary folder.

google api python client keeps using an old version of my local app engine endpoints

I have two Python projects running locally:
A Cloud Endpoints Python project using the latest App Engine version.
A client project which consumes the endpoint functions using the latest google-api-python-client (v1.5.1).
Everything was fine until I renamed one endpoint's function from:
@endpoints.method(MyRequest, MyResponse, path="save_ocupation", http_method='POST', name="save_ocupation")
def save_ocupation(self, request):
    [code here]
To:
@endpoints.method(MyRequest, MyResponse, path="save_occupation", http_method='POST', name="save_occupation")
def save_occupation(self, request):
    [code here]
Looking at the local console (http://localhost:8080/_ah/api/explorer) I see the correct function name.
However, when I run the client project that invokes the endpoint, it keeps saying that the new endpoint function does not exist. I verified this in the IPython shell: the dynamically generated Python code for invoking the Resource has the old function name, despite restarting both the server and client dozens of times.
How can I force the API client to always fetch the latest endpoint API document?
Help is appreciated.
Just after posting the question, I resumed my Ubuntu PC and started Eclipse and the Python projects from scratch, and now everything works as expected. This sounds like some kind of HTTP client cache, or a stale Python process, which prevented the client from getting the latest discovery document and generating the corresponding resource code.
This is odd, as I had tried running these projects both outside and inside Eclipse without success. But I prefer documenting this just in case someone else has this issue.
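If it is the discovery cache, one thing worth trying (an assumption on my part, not a confirmed root cause) is building the client with discovery caching disabled via the cache_discovery flag of googleapiclient.discovery.build; the API name, version, and URL below are placeholders for your own:

from googleapiclient.discovery import build

# cache_discovery=False skips the client library's cached copy of the
# discovery document, so every run fetches the current API description.
service = build(
    'myapi', 'v1',
    discoveryServiceUrl='http://localhost:8080/_ah/api/discovery/v1/apis/{api}/{apiVersion}/rest',
    cache_discovery=False)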

Neo4jServer in Neo4jConfiguration - 4.1.0?

I've been using the latest code in 4.1.0-BUILD-SNAPSHOT as I need some of the new bug fixes in the 4.1 branch and just noticed that "neo4jServer()" is no longer a method exposed by Neo4jConfiguration. What is the new way to initialize a server connection and an in-memory version for unit tests? Before I was using "RemoteServer" and "InProcessServer", respectively.
Please note, the official documentation will be updated shortly.
In the meantime:
What's changed
SDN 4.1 uses the new Neo4j OGM 2.0 libraries. OGM 2.0 introduces API changes, largely due to the addition of support for Embedded as well as Remote Neo4j. Consequently, connection to a production database is now accomplished using an appropriate Driver, rather than using the RemoteServer or the InProcessServer which are deprecated.
For testing, we recommend using the EmbeddedDriver. It is still possible to create an in-memory test server, but that is not covered in this answer.
Available Drivers
The following Driver implementations are currently provided
http : org.neo4j.ogm.drivers.http.driver.HttpDriver
embedded : org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver
A driver implementation for the Bolt protocol (Neo4j 3.0) will be available soon.
Configuring a driver
There are two ways to configure a driver - using a properties file or via Java configuration. Variations on these themes exist (particularly for passing credentials), but for now the following should get you going:
Configuring the Http Driver
The Http Driver connects to and communicates with a Neo4j server over Http. An Http Driver must be used if your application is running in client-server mode. Please note the Http Driver will attempt to connect to a server running in a separate process. It can't be used for spinning up an in-process server.
Properties file configuration:
The advantage of using a properties file is that it requires no changes to your Spring configuration.
Create a file called ogm.properties somewhere on your classpath. It should contain the following entries:
driver=org.neo4j.ogm.drivers.http.driver.HttpDriver
URI=http://user:password@localhost:7474
Java configuration:
The simplest way to configure the Driver is to create a Configuration bean and pass it as the first argument to the SessionFactory constructor in your Spring configuration:
import org.neo4j.ogm.config.Configuration;
...
@Bean
public Configuration getConfiguration() {
    Configuration config = new Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.http.driver.HttpDriver")
        .setURI("http://user:password@localhost:7474");
    return config;
}

@Bean
public SessionFactory getSessionFactory() {
    return new SessionFactory(getConfiguration(), <packages> );
}
Configuring the Embedded Driver
The Embedded Driver connects directly to the Neo4j database engine. There is no server involved, therefore no network overhead between your application code and the database. You should use the Embedded driver if you don't want to use a client-server model, or if your application is running as a Neo4j Unmanaged Extension.
You can specify a permanent data store location to provide durability of your data after your application shuts down, or you can use an impermanent data store, which will only exist while your application is running (ideal for testing).
Create a file called ogm.properties somewhere on your classpath. It should contain the following entries:
Properties file configuration (permanent data store)
driver=org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver
URI=file:///var/tmp/graph.db
Properties file configuration (impermanent data store)
driver=org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver
To use an impermanent data store, simply omit the URI property.
Java Configuration
The same technique is used for configuring the Embedded driver as for the Http Driver. Set up a Configuration bean and pass it as the first argument to the SessionFactory constructor:
import org.neo4j.ogm.config.Configuration;
...
@Bean
public Configuration getConfiguration() {
    Configuration config = new Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver")
        .setURI("file:///var/tmp/graph.db");
    return config;
}

@Bean
public SessionFactory getSessionFactory() {
    return new SessionFactory(getConfiguration(), <packages> );
}
If you want to use an impermanent data store (e.g. for testing) do not set the URI attribute on the Configuration:
@Bean
public Configuration getConfiguration() {
    Configuration config = new Configuration();
    config
        .driverConfiguration()
        .setDriverClassName("org.neo4j.ogm.drivers.embedded.driver.EmbeddedDriver");
    return config;
}
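For completeness, a minimal sketch of how the SessionFactory is then used; the entity package name is a placeholder for your own:

import org.neo4j.ogm.session.Session;
import org.neo4j.ogm.session.SessionFactory;
...
// "com.example.domain" stands in for the package containing your mapped entities
SessionFactory sessionFactory = new SessionFactory(getConfiguration(), "com.example.domain");

// Obtain a Session per unit of work, as before
Session session = sessionFactory.openSession();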

vss sample hardware provider

I've been trying to follow the instructions to install the sample VSS hardware provider that comes with the Windows SDK. I was able to compile the code successfully with VS2013 for the 64-bit platform. However, when I try to install the provider I get the following error:
Unregistering the existing application.
Create the catalog object
Get the Applications collection
Populate...
Search for VssSampleProvider application.
Saving changes.
Done.
Creating a new COM+ application
Creating the catalog object
Get the Applications collection
Populate.
Add new application object
Set app name = VssSampleProvider
Set app description = VSS HW Sample Provider
Set app access check = true
Set encrypted COM communication = true
Set secure references = true
Set impersonation = false
Save changes.
Create Windows service running as Local System
Add the DLL component
ERROR:
Error code: -2146368511 [0x80110401]
Exit code: 113
Description:
Source:
Help file:
Help context: 0
COM+ Errors detected: (1)
(COM+ ERROR 0) on c:\vsssampleprovider\VssSampleProvider.dll
ErrorCode: -2146368475 [0x80110425]
MajorRef: c:\vsssampleprovider\VssSampleProvider.dll
Looking up COM error code -2146368475 [0x80110425], the only information I could find was that a DLL failed to load.
Event Viewer logs show a warning saying:
Unable to load DLL c:\vsssampleprovider\VssSampleProvider.dll
Process Name: dllhost.exe Comsvcs.dll file version: ENU
2001.12.10530.16384
shp during component registration. Unable to validate DLL entry points.
Thanks in advance.
Managed to get it working.
Using Dependency Walker, I identified that MSVCP120.dll and MSVCR120.dll were not being found.
I copied these DLLs from C:\Windows\System32 to the folder containing VssSampleProvider.dll.