BigQuery has session-bound (or script-bound) system variables, described here. Within a session, I can set the value of one of those variables with something like:
set @@dataset_id = 'my_dataset_id';
Now, I'd like to have a view (which I plan on running within a session) that includes something like:
create view foo
as
select ...
from @@dataset_id.my_table
... this doesn't work, nor does any form of quoting around the variable. It appears the variable simply can't be used to identify the namespace of my_table.
If that's true, I'm struggling to see the value of that variable at all. Does anyone know if I can use these variables this way, or how to avoid having to namespace-qualify every instance of my_table? I'd like to manage these query scripts outside of BQ itself, and ideally without templating everywhere (e.g. {dataset_id}.my_table).
I have tried to replicate your scenario. Please try the below:
set @@dataset_id = "SampleDataSetID";
EXECUTE IMMEDIATE format("""
  CREATE VIEW sampleView1 AS (
    SELECT
      *
    FROM
      %s.sampleTable
  );
""", @@dataset_id);
I used EXECUTE IMMEDIATE to run the SQL dynamically (using the variable that we set in the session).
Postman allows you to generate random dummy data by using pre-defined variables; for example, this one would be replaced by a random company name:
{{$randomCompanyName}}
Using a pre-defined variable multiple times returns different values per request.
The question is how to save a generated value to a variable for further use, for example in tests, with something like this (it doesn't work):
pm.variables.set("company", {{$randomCompanyName}});
Thanks.
You can use the .replaceIn() function with that {{...}} syntax in the sandbox.
pm.globals.set("company", pm.variables.replaceIn('{{$randomCompanyName}}'));
I've used a global variable to store the value as you would want to use it again. You could also use either the environment or collectionVariables scope to do the same thing.
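For example, a minimal sketch (the test name and the response assertion are only illustrative): resolve the dynamic variable once in the Pre-request Script, then read the stored value back wherever you need it, e.g. in the Tests tab:

// Pre-request Script: resolve the dynamic variable once and persist it
pm.globals.set("company", pm.variables.replaceIn('{{$randomCompanyName}}'));

// Tests tab: the same value is now available for assertions
const company = pm.globals.get("company");
pm.test("response echoes the company we sent", function () {
    pm.expect(pm.response.text()).to.include(company);
});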
We have a situation where we are dealing with a relational source (Oracle). The system is developed in such a way that we have to first execute a package which enables data reads from Oracle; only then is a user able to get results out of a select statement. I am trying to find a way to implement this in an Informatica mapping.
What we tried
1. In the Pre SQL we tried to execute the package, and in the SQL query we wrote the select statement - the data is not getting loaded into the target.
2. In the Pre SQL we wrote a block in which we execute the package and, just after that (within the same BEGIN...END block), an insert statement on top of the select statement - this does insert the data through the insert statement, however I am not in favor of this solution as both source and target are dummies, which will confuse people in the future.
Is there any possibility of implementing this somehow using the 1st option?
Please help and suggest.
Thanks
The Stored Procedure transformation is there for this purpose; configure it to execute as Source Pre-load.
Pre-SQL and the data read are not part of the same session. From what I understand, this needs to be done within the same session, as otherwise the read access applies only to the session that executed the package.
What you can do is create a stored procedure/package that will grant the read access and then return the data. Use it as a SQL Override on your Source Qualifier. This way the SQ will read the data as usual. The concept:
CREATE PROCEDURE ReadMyData AS
BEGIN
  -- call the package that enables the read access
  execute immediate 'BEGIN GiveMeTheReadAccess; END;';
  -- then return the data
  select * from MyTable;
END;
And use the ReadMyData on the Source Qualifier.
I have two tables in different databases. In table A is the data; in the other table B is the information needed for an incremental load of the data from the first table. I want to read from table B and store the date of the last successful load from table A in a mapping variable $$LOAD_DATE. To achieve this, I read a date from table B and use the SETVARIABLE() function in an expression to set the $$LOAD_DATE variable. The port in which I do this is marked as output and writes into a dummy flat file. I only read one row from this source!
Then I use this $$LOAD_DATE variable in the Source Filter of the Source Qualifier of table A to only load new records which are younger than the date stored in the $$LOAD_DATE variable.
My problem is that I am not able to set the $$LOAD_DATE variable correctly. It is always the date 1753-1-1-00.00.00, which is the default value for mapping variables of the type date/time.
How do I solve this? How can I store a date in that variable and use it later in a Source Qualifier's source filter? Is it even possible?
EDIT: Table A has too many records to read them all and filter them later. This would be too expensive, so they have to be filtered at the source filter level.
Yes, it's possible.
In the first mapping you have to initialize the variable (with SETVARIABLE, as you already do).
In the first session's configuration you have to define the Post-session on success variable assignment.
The second mapping (with your table A) will then pick the variable up via the Pre-session variable assignment in that session's configuration.
It will work.
It is not possible to set a mapping variable and use its value somewhere else in the same run, because the variable is actually set only when the session completes.
If you really want to implement it using mapping variables you have to create two mappings: one for setting the mapping variable and another for the actual incremental load. You can pass a mapping variable value from one session to another in a workflow using a workflow variable. https://stackoverflow.com/a/26849639/2626813
Other solutions could be to use a lookup on B and a filter after that.
You can also write a script to query table B and modify the parameter file with the latest $$LOAD_DATE value prior to executing the mapping, as sketched below.
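A rough sketch of that approach (everything here is an assumption - the driver, connection string, table and column names, and the parameter file layout - so adapt it to your environment), run before the workflow starts:

import cx_Oracle  # assumed client library; use whichever driver matches the DB holding table B

# Read the last successful load date from table B (object names are made up)
conn = cx_Oracle.connect("user/password@db_b")
cur = conn.cursor()
cur.execute("SELECT TO_CHAR(MAX(load_date), 'MM/DD/YYYY HH24:MI:SS') FROM table_b")
last_load = cur.fetchone()[0]
conn.close()

# Rewrite the parameter file the session reads, so $$LOAD_DATE is always current
with open("incremental_load.param", "w") as f:
    f.write("[MyFolder.WF:wf_incremental.ST:s_m_incremental]\n")
    f.write("$$LOAD_DATE=%s\n" % last_load)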
Since we have two different DBs, use two sessions: get the value in the first one and pass the parameter to the second one.
My workflow generates three files (header, detail, trailer) which I combine via a post-session command. There are two variables set in my mapping that I want to use in the post-session command, like so:
cat header1.out detail1.out trailer1.out > OUTPUT_$(date +%Y%m%d)_$$VAR1_$$VAR2.dat
But this doesn't work and the values are empty, so I get OUTPUT_20151117__.dat.
I've tried creating workflow variables and assigning them via pre-session variable assignment, but this doesn't work either.
What am I missing? Or was this never going to work?
Can you see the values assigned to those variables in the session log, or do they appear empty as well?
Creating workflow variables is what I'd try, but you need to assign the values with the post-session variable assignment.
Basically, you store values in a variable in your mapping and pass the values up to the workflow after the session succeeded. Here is how you can achieve that:
1. Define workflow variables $$VAR1 and $$VAR2.
2. Define the variables in your mapping, but choose different names, i.e. $$M_VAR1 and $$M_VAR2.
3. In your mapping, assign the values to your mapping variables through the function SETVARIABLE(var as char, value as data type).
4. In your session, select Post-session on success variable assignment.
In step 4, the current value from $$M_VAR1 (mapping variable) is stored in your workflow variable $$VAR1 and can then be used in the workflow in command tasks like you asked.
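For step 3 that typically means an output port in an Expression transformation whose expression is something like SETVARIABLE($$M_VAR1, YOUR_PORT) - YOUR_PORT is only a placeholder here for whichever port holds the value you want to keep.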
A few notes:
I'm not 100% sure if the variable assignment is executed before the post-session command. If the command is executed first, you could run your commands in a separate Command task after your session.
Pre-Session variable assignment is used if you pass a value from a workflow variable down to a mapping variable. You can use this if your variables $$VAR1 or $$VAR2 are used inside another mapping and need to be initialized at the beginning.
I am testing object coverage for a certain reporting solution. I have hundreds of reports and I need to see if the set of objects used in those reports covers the set of all possible objects. I figured out that I could use a Set collection to store distinct object names and then handle it in some way. As I use the free version of SoapUI for the time being, the structure of my test is to first invoke a method to get the XML view of a single report, then use a Groovy Script to append the found object names to a CSV file (File append method). However, I would like to append those objects only after I get rid of duplicates. So a suitable solution would be a Set variable where I could store object names from all reports and, in a last step, store this set in a file.
How to create such reusable collection? Is there any other way I missed?
You could just declare a set like below
def setOfNames = [] as Set
// set manipulation
setOfNames.add("a")
//
Or just declare a list first, manipulate it, then finally make a set out of it
def listOfNames = []
// list manipulation
def setOfNames = listOfNames as Set
Refer http://groovy.codehaus.org/JN1015-Collections for more details
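If you need the set to survive across several Groovy Script steps of the same test case run, one option (a sketch only - the property name, the sample values and the file path are arbitrary, and the XML parsing of the report is omitted) is to keep it in the run context and write it out once at the end:

// Groovy Script step executed once per report: accumulate distinct names
Set names = (Set) (context.getProperty("objectNames") ?: [] as Set)
names.addAll(["CUSTOMER", "ORDER_HEADER"])   // replace with the names extracted from the report's XML view
context.setProperty("objectNames", names)

// Final Groovy Script step: write the de-duplicated names once
new File("/tmp/report_objects.csv").text = names.join("\n")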