My application is hosted on a central server which serves many customers. It now needs to cross-reference information with the database servers that reside onsite at the customer's location.
I want to store the details of the customer's server alongside their account details (e.g. dbname, host, port, etc.).
However, depending on who is logged into the application, I need to supply their connection details to a <cfquery> tag in order to perform lookups. Something like this:
<cfquery name="rsOrders" datasource="{dynamically provided connection string}">
    SELECT *
    FROM CompanysDBTable
</cfquery>
I understand that there is an Administrator API which can create a data source programmatically; however, using that would add an extra process to my system, and what would happen if the customer updated their data source details?
So is there any way to do it on the fly like above? That is, supplying the data connection string within the <cfquery> tag.
Or is there a better way of doing this altogether?
You can create application-specific datasources at runtime in ColdFusion 11. See the docs: "Application-specific datasources in Application.cfc". I also discuss this in my blog: "Defining datasources in Application.cfc".
An example would be:
// Application.cfc
component {
    this.name = "DSNTest02";

    this.datasources = {
        scratch_mssql_app = {
            database = "scratch",
            host = "localhost",
            port = "1433",
            driver = "MSSQLServer",
            username = "scratch",
            password = "scratch"
        },
        scratch_embedded_app = {
            database = "C:\apps\adobe\ColdFusion\11\full\cfusion\db\scratch",
            driver = "Apache Derby Embedded"
        }
    };

    this.datasource = "scratch_mssql_app";
}
That's the closest you can get with ColdFusion.
If you were to use JDBC directly, you could just give a connection string when creating the connection, but then your code would need to contend with the record sets returned by the JDBC driver, which would not be CFML query objects.
When you get the connection credentials, set up a suitably named datasource in ColdFusion. Put the name of this datasource in the database that has the other account details you mentioned. For customers who did not send you credentials, use a default datasource.
When you run your application, get the datasource along with the other customer information and use it in your cfquery tags.
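For example, the lookup could work along these lines (a minimal sketch; the table, column, and session variable names are placeholder assumptions):

<!--- Sketch: look up the customer's datasource name from the central
      accounts database, then use it dynamically in the orders query.
      Table, column, and session variable names are illustrative. --->
<cfquery name="rsAccount" datasource="centralDB">
    SELECT datasourceName
    FROM CustomerAccounts
    WHERE accountID = <cfqueryparam value="#session.accountID#" cfsqltype="cf_sql_integer">
</cfquery>

<cfquery name="rsOrders" datasource="#rsAccount.datasourceName#">
    SELECT *
    FROM CompanysDBTable
</cfquery>

The datasource attribute accepts a variable, so once the datasource exists in ColdFusion (whether created through the Administrator API or this.datasources), the name can be chosen per request.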
Related
We are working with FHIR (Fast Healthcare Interoperability Resources).
We have followed "FHIR Works on AWS" and deployed the CloudFormation template provided by AWS in our AWS environment. The following is the template that we deployed:
https://docs.aws.amazon.com/solutions/latest/fhir-works-on-aws/aws-cloudformation-template.html
Requirement: we want to maintain client-specific (customized) IDs as the primary key on the server.
Problem: the server is not allowing us to override or maintain client-specific (customized) IDs as the primary key. In fact, at runtime it generates its own IDs and ignores the IDs we supply.
The FHIR spec allows for you to define your own IDs when using "update as create". This is when you create a new resource in the server, but use a PUT (update) request to the ID you want to create, such as Patient/1, instead of a POST (create) request to the resource URL. The server should return a 201 Created status instead of 200 OK. For more information see https://hl7.org/fhir/http.html#upsert
Not every FHIR server supports this, but if AWS does, this is likely how it would work. The field in the CapabilityStatement for this feature is CapabilityStatement.rest.resource.updateCreate.
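For illustration, an update-as-create request might look like this (a sketch; the base URL and the client-chosen ID are placeholder assumptions):

// Sketch of an "update as create" request. The base URL and the
// client-chosen ID ("client-id-123") are placeholder assumptions.
fetch("https://example.com/fhir/Patient/client-id-123", {
  method: "PUT", // PUT to the desired ID, rather than POST to /Patient
  headers: { "Content-Type": "application/fhir+json" },
  body: JSON.stringify({
    resourceType: "Patient",
    id: "client-id-123", // must match the ID in the URL
    name: [{ family: "Example" }],
  }),
}).then((response) => {
  // A server that supports update-as-create responds 201 Created here
  console.log(response.status);
});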
EDIT:
This is possible by modifying the parameters passed to the DynamoDbDataService constructor in the deployment repo's src/config.ts
By default, supportUpdateCreate (the second parameter) is set to false:
const dynamoDbDataService = new DynamoDbDataService(DynamoDb, false, { enableMultiTenancy });
but you can set it to true to enable this functionality:
const dynamoDbDataService = new DynamoDbDataService(DynamoDb, true, { enableMultiTenancy });
We've been getting started with Cube.js. We are using BigQuery, with the following hierarchy:
Project (all clients)
Dataset (corresponding to a single client)
Tables (different data types for a single client)
We'd like to use COMPILE_CONTEXT to allow different clients to access different Datasets based on the JWT that we issue them after authentication. The JWT includes the user info that'd cause our schema to select a different dataset:
const {
  securityContext: { dataset_id },
} = COMPILE_CONTEXT;

cube(`Sessions`, {
  sql: `SELECT * FROM ${dataset_id}.sessions_export`,

  measures: {
    // Count of all session objects
    count: {
      sql: `Status`,
      type: `count`,
    },
  },
});
In testing, we've found that the COMPILE_CONTEXT global variable is set when the server is launched, meaning that even if a different client submits a request to Cube with a different dataset_id, the old one is used by the server, sending info from the old dataset. The Cube docs on Multi-tenancy state that COMPILE_CONTEXT should be used in our scenario (at least, this is my understanding):
Multitenant COMPILE_CONTEXT should be used when users in fact access different databases. For example, if you provide SaaS ecommerce hosting and each of your customers have a separate database, then each ecommerce store should be modelled as a separate tenant.
SECURITY_CONTEXT, on the other hand, is set at Query time, so we tried to also access the appropriate data from SECURITY_CONTEXT like so:
cube(`Sessions`, {
sql: `SELECT * FROM ${SECURITY_CONTEXT.dataset_id}.sessions_export`,
But the query being sent to the database (found in the error log in the Cube dev server) is SELECT * FROM [object Object].sessions_export) AS sessions.
I'd love to inspect the SECURITY_CONTEXT variable, but I'm having trouble finding out how to do this, as it's only accessible within our cube SQL to my knowledge.
Any help would be appreciated! We are open to other routes besides those described above. In a nutshell, how can we deliver a specific dataset to a client using a unique JWT?
Given that all your datasets are in the same BigQuery database, I think your use-case reflects the Multiple DB Instances with Same Schema part of the documentation (that title could definitely be improved):
// cube.js
const PostgresDriver = require('@cubejs-backend/postgres-driver');

module.exports = {
  contextToAppId: ({ securityContext }) =>
    `CUBEJS_APP_${securityContext.dataset_id}`,
  driverFactory: ({ securityContext }) =>
    new PostgresDriver({
      database: `${securityContext.dataset_id}`,
    }),
};
// schema/Sessions.js
cube(`Sessions`, {
  sql: `SELECT * FROM sessions_export`,
});
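Since all your datasets live in the same BigQuery project, you may not even need driverFactory; the key piece is contextToAppId, which keys Cube's compiled-schema cache by tenant, so COMPILE_CONTEXT is evaluated once per dataset_id rather than once per server start. A minimal sketch under that assumption:

// cube.js sketch: a single BigQuery connection, with the schema
// recompiled (and cached) per tenant. Each JWT carries dataset_id,
// and each distinct app ID gets its own compilation.
module.exports = {
  contextToAppId: ({ securityContext }) =>
    `CUBEJS_APP_${securityContext.dataset_id}`,
};

With that in place, your original COMPILE_CONTEXT-based Sessions cube should pick up the right dataset per client.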
In the PBI Desktop file there are no errors; the error appears only in the PBI service, on refresh.
ERROR:
Query contains unsupported function. Function name: Odbc.DataSource
Parameter1 is a text parameter holding the value "Mydsn".

Used as literal text (not dynamic), no error:
= Odbc.DataSource("dsn=Mydsn", [HierarchicalNavigation=true])

Used via the text parameter (still not dynamic), no error:
= Odbc.DataSource("dsn=" & Parameter1, [HierarchicalNavigation=true])

Odbc_dsn is a query that reads the DSN name from a CSV:
= settings[Column2]{0}

Used via the CSV query, error "Query contains unsupported function. Function name: Odbc.DataSource":
= Odbc.DataSource("dsn=" & Odbc_dsn, [HierarchicalNavigation=true])

Used directly from the CSV table, same error:
= Odbc.DataSource("dsn=" & settings[Column2]{0}, [HierarchicalNavigation=true])
None of the privacy settings changes anything; I have tried all the available options (None, Private, Organizational, Public, disabling privacy settings, etc.).
How can I use an ODBC DSN name read from a CSV file?
(Answer to be expanded with additional info provided - see comments on original question)
While I have never imported a DSN name through a CSV, your saying that it works on your local machine makes me accept that this is at least possible, so we'll instead focus on issues with the gateway.
My first impression here as to why this might not be working is simply permissions and visibility.
Having worked with a number of PowerBI Service setups, an unrecognized ODBC DSN usually comes down to one of the following issues:
Is the DSN set up as a system DSN?
Is the gateway set up under a LocalService account or a PowerBI gateway host account?
Does the user the gateway runs under actually have permissions to the directory containing the data source (or custom connector) that the connection depends on?
So:
Fairly straightforward: all gateway-accessible ODBC sources need to be set up on the gateway host as system DSNs, not user DSNs. You can verify this in the ODBC Data Source Administrator on the gateway host.
Confirm the on-premises gateway "Log On" user on the gateway's host machine. Generally I recommend going to Windows Services and making sure it uses the "Local System account" (to inherit permissions), but just consider this during the next step of checking local permissions.
This applies to anything that is "self-hosted" on the local machine acting as the gateway host: whichever account is hosting the PowerBI gateway service must also be given explicit permissions to the local resources needed. For example, if you add a custom connector to the Documents directory on the gateway host under your user account, make sure the PowerBI default user has access to that directory and file (i.e. file Properties -> Security -> user permissions, etc.).
In my experience, 9 times out of 10 one of these things isn't set up right.
Additional note: every time you upgrade or re-install a PowerBI gateway host, you will have to re-set the service login account and double-check all permissions. I don't know why, but the installer overwrites that setting by default, disabling all refresh until it is restored.
Edit:
After further thinking, I believe you will eventually run into this roadblock regardless: PowerBI Service's gateway data source mappings are 1-to-1. After upload, the dataset settings will ask you to map each data source, which requires that the data source has already been defined in the PowerBI service's settings.
I don't believe that it is currently possible to make that definition a variably composed string per user's request.
The DSN name can only be a static string.
I am really new to DataPower GatewayScript and I have a small requirement.
GatewayScript was added to DataPower from firmware 7.0 onwards. Now I am trying to use GatewayScript to develop code that reports the domain names and their states.
The input is an export.xml file. From that file I need to capture the domains and the state of each domain, and display them as an HTML table.
If you have any ideas about this, please advise.
You can get the name of the domain from a service variable:
http://www-01.ibm.com/support/knowledgecenter/SS9H2Y_7.2.0/com.ibm.dp.doc/var-service-domain-name_reference.html
Not sure what you mean by 'state' but perhaps this is helpful:
You could interrogate the var://service/system/status/DomainStatus variable
http://www-01.ibm.com/support/knowledgecenter/SS9H2Y_7.2.0/com.ibm.dp.doc/var-service-system-status_reference.html
GatewayScript to get the variables:
var sm = require ('service-metadata');
var domain = sm.getVar('var://service/domain-name');
var domainState = sm.getVar('var://service/system/status/DomainStatus');
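If you need the HTML table output, a minimal sketch might look like this (how the DomainStatus value is structured varies, so rendering it directly is an assumption to verify on your firmware):

// Sketch: read the domain name and status, render them as a one-row
// HTML table, and write the result to the output context. The exact
// shape of the DomainStatus value is an assumption to verify.
var sm = require('service-metadata');

var domain = sm.getVar('var://service/domain-name');
var domainState = sm.getVar('var://service/system/status/DomainStatus');

var html = '<table border="1">' +
    '<tr><th>Domain</th><th>State</th></tr>' +
    '<tr><td>' + domain + '</td><td>' + domainState + '</td></tr>' +
    '</table>';

session.output.write(html);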
Is it possible to generate fixtures from an existing DB in Symfony2/Doctrine? How could I do that?
Example:
I have defined 15 entities and my Symfony2 application is working. Now some people are able to browse to the application, and using it has inserted about 5000 rows so far. Now I want that inserted data as fixtures, but I don't want to create them by hand. How can I generate them from the DB?
There's no direct way within Doctrine or Symfony2, but writing a code generator for it (either within or outside of sf2) would be trivial. Just pull each entity and generate a line of code to set each property, then put the result in your fixture loading method. Example:
<?php
$i = 0;
$code = '';
$entities = $em->getRepository('MyApp:Entity')->findAll();

foreach ($entities as $entity) {
    // Escape the $ so the generated code contains literal variable names
    $code .= "\$entity_{$i} = new MyApp\\Entity();\n";
    $code .= "\$entity_{$i}->setMyProperty('" . addslashes($entity->getMyProperty()) . "');\n";
    $code .= "\$manager->persist(\$entity_{$i});\n\$manager->flush();\n";
    ++$i;
}

// store code somewhere with file_put_contents
As I understand your question, you have two databases: the first is already in production and filled with 5000 rows; the second one is a new database you want to use for testing and development. Is that right?
If so, I suggest you create two entity managers in your test environment: the first will be the 'default' one, used throughout your project (controllers, etc.); the second will connect to your production database. You will find how to deal with multiple entity managers here: http://symfony.com/doc/current/cookbook/doctrine/multiple_entity_managers.html
Then, you should create a fixture class which has access to the container. There is a how-to here: http://symfony.com/doc/current/bundles/DoctrineFixturesBundle/index.html#using-the-container-in-the-fixtures
Using the container, you will have access to both entity managers. And this is the 'magic': you retrieve the objects from your production database and persist them with the second entity manager, which inserts them into your test database.
I draw your attention to two points:
If there are relationships between objects, you will have to take care of those dependencies: owning side, inverse side, etc.
If you have 5000 rows, keep an eye on the memory your script will use. Another solution would be to use native SQL to retrieve all the rows from your production database and insert them into your test database. Or a SQL script...
I do not have any code to suggest to you, but I hope this idea will help you.
I assume that you want to use fixtures (and not just dump the production or staging database into the development database) because (a) your schema changes, so the dumps would not work if you update your code, or (b) you don't want to dump the whole database but only want to extend some custom fixtures. An example I can think of: you have 206 countries in your staging database and users add cities to those countries; to keep the fixtures small you only have 5 countries in your development database, but you want to add the cities that users added to those 5 countries in the staging database to the development database.
The only solution I can think of is to use the mentioned DoctrineFixturesBundle and multiple entity managers.
First of all you should configure two database connections and two entity managers in your config.yml
doctrine:
    dbal:
        default_connection: default
        connections:
            default:
                driver:   %database_driver%
                host:     %database_host%
                port:     %database_port%
                dbname:   %database_name%
                user:     %database_user%
                password: %database_password%
                charset:  UTF8
            staging:
                ...
    orm:
        auto_generate_proxy_classes: %kernel.debug%
        default_entity_manager: default
        entity_managers:
            default:
                connection: default
                mappings:
                    AcmeDemoBundle: ~
            staging:
                connection: staging
                mappings:
                    AcmeDemoBundle: ~
As you can see, both entity managers map the AcmeDemoBundle (in this bundle I will put the code to load the fixtures). If the second database is not on your development machine, you could just dump the SQL from the other machine to the development machine. That should be possible since we are talking about 5000 rows and not about millions of rows.
What you can do next is to implement a fixture loader that uses the service container to retrieve the second entity manager and use Doctrine to query the data from the second database and save it to your development database (the default entity manager):
<?php

namespace Acme\DemoBundle\DataFixtures\ORM;

use Doctrine\Common\DataFixtures\FixtureInterface;
use Doctrine\Common\Persistence\ObjectManager;
use Symfony\Component\DependencyInjection\ContainerAwareInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Acme\DemoBundle\Entity\City;
use Acme\DemoBundle\Entity\Country;

class LoadData implements FixtureInterface, ContainerAwareInterface
{
    private $container;
    private $stagingManager;

    public function setContainer(ContainerInterface $container = null)
    {
        $this->container = $container;
        $this->stagingManager = $this->container->get('doctrine')->getManager('staging');
    }

    public function load(ObjectManager $manager)
    {
        $this->loadCountry($manager, 'Austria');
        $this->loadCountry($manager, 'Germany');
        $this->loadCountry($manager, 'France');
        $this->loadCountry($manager, 'Spain');
        $this->loadCountry($manager, 'Great Britain');

        $manager->flush();
    }

    protected function loadCountry(ObjectManager $manager, $countryName)
    {
        $country = new Country($countryName);

        $cities = $this->stagingManager->createQueryBuilder()
            ->select('c')
            ->from('AcmeDemoBundle:City', 'c')
            ->leftJoin('c.country', 'co')
            ->where('co.name = :country')
            ->setParameter('country', $countryName)
            ->getQuery()
            ->getResult();

        foreach ($cities as $city) {
            $city->setCountry($country);
            $manager->persist($city);
        }

        $manager->persist($country);
    }
}
What I did in the loadCountry method was load the cities from the staging entity manager, add a reference to the fixture country (the one that already exists in your current fixtures), and persist them using the default entity manager (your development database).
Sources:
DoctrineFixturesBundle
How to work with Multiple Entity Managers
You could use https://github.com/Webonaute/DoctrineFixturesGeneratorBundle
It adds the ability to generate fixtures for a single entity using commands like:
$ php bin/console doctrine:generate:fixture --entity=Blog:BlogPost --ids="12 534 124" --name="bug43" --order="1"
Or you can create a full snapshot:
php app/console doctrine:generate:fixture --snapshot --overwrite
The Doctrine fixtures are useful because they allow you to create objects and insert them into the database. This is especially useful when you need to create associations or, say, encode a password using one of the password encoders. If you already have the data in a database, you shouldn't really need to bring it out of that format and turn it into PHP code, only to have that PHP code insert the same data back into the database. You could probably just do an SQL dump and then re-insert it into your database again that way.
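For example (a sketch assuming MySQL; the user and database names are placeholders):

mysqldump -u username -p production_db > data_dump.sql
mysql -u username -p development_db < data_dump.sql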
Using a fixture would make more sense if you were initializing your project but wanted to use user input to create it. If you had the default user in your config file, you could read that and insert the object.
The AliceBundle can help you do this. It allows you to load fixtures defined in YAML (or PHP array) files.
For instance you can define your fixtures with:
Nelmio\Entity\Group:
    group1:
        name: Admins
        owner: '@user1->id'
Or with the same structure in a PHP array. It's WAY easier than generating working PHP code.
It also supports references:
Nelmio\Entity\User:
    # ...

Nelmio\Entity\Group:
    group1:
        name: Admins
        owner: '@user1'
In the Doctrine fixtures cookbook, you can see in the last example how to get the service container in your fixture class.
With this service container, you can retrieve the Doctrine service, then the entity manager. With the entity manager, you will be able to get all the data you need from your database.
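A minimal sketch (following the cookbook's container-aware fixture pattern; the repository name is a placeholder):

// Inside a container-aware fixture class: grab the entity manager
// and query the existing data. 'AcmeDemoBundle:City' is a placeholder.
$em = $this->container->get('doctrine')->getManager();
$cities = $em->getRepository('AcmeDemoBundle:City')->findAll();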
Hope this will help you!