Cannot create index on non-empty table - amazon-web-services

I'm currently using AWS Lambda (NodeJS) with AWS QLDB.
The scenario is as follows.
I created the first table and its indexes when I deployed the service, so the table and indexes were created. My problem is that once I need to add a new table and its indexes, the index can't be created because a table already exists in the ledger.
My workaround for creating a new table even when there's already an existing table in my ledger is to query the list of tables I have:
const getTables = async (transactionExecutor: TransactionExecutor) => {
  const statement = `SELECT name FROM information_schema.user_tables`;
  return await transactionExecutor.execute(statement);
};
Then I have this condition to check whether the table already exists:
const tables = JSON.stringify(result.getResultList());
if (
  !JSON.parse(tables).some((object): boolean => object.name === process.env.TABLE_NAME)
) {
  console.log('TABLE A NOT EXISTING');
  await createTable(transactionExecutor, process.env.TABLE_NAME);
}
if (
  !JSON.parse(tables).some(
    (object): boolean => object.name === process.env.TABLE_NAME_1,
  )
) {
  console.log('TABLE B NOT EXISTING');
  await createTable(transactionExecutor, process.env.TABLE_NAME_1);
}
I don't know how to do the same for indexes; I tried using SQL commands in QLDB but it didn't work.
I hope you can help me.
Thank you

I'm not quite sure what your question is (the post title and body hint at different things), but I'm going to do my best to answer.
First, QLDB stores data in Ion, not JSON. So, please use the Ion APIs to parse data and not the JSON ones. The reason your code works at all is because Ion is a superset of JSON and the result set doesn't include types that are unknown to JSON. So, for example, if the result set was changed to include an Ion Timestamp, then your code would break.
Next, getting a list of tables has first-class support in the driver. Simply use driver.getTableNames.
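For example (a minimal sketch, assuming the amazon-qldb-driver-nodejs package and a placeholder ledger name):
import { QldbDriver } from 'amazon-qldb-driver-nodejs';

const driver = new QldbDriver('my-ledger'); // ledger name is a placeholder

const listTables = async (): Promise<string[]> => {
  // Returns the names of all user tables in the ledger
  return driver.getTableNames();
};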
Third, I think you have a question "can I add an index to a non-empty table?". The answer is "no". This is planned functionality and I will update this answer when it is available. UPDATE: Now you can! https://aws.amazon.com/about-aws/whats-new/2020/09/amazon-qldb-launches-index-improvements/
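With that restriction lifted, creating the index itself is a plain PartiQL statement. Here's a hedged sketch using the same TransactionExecutor pattern as your createTable calls (names are placeholders):
const createIndex = async (
  transactionExecutor: TransactionExecutor,
  tableName: string,
  field: string,
) => {
  // DDL statements don't accept bind parameters, so the names are interpolated directly
  const statement = `CREATE INDEX ON ${tableName} (${field})`;
  return await transactionExecutor.execute(statement);
};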
Finally, I think you're also asking if there is a way to list indexes on a table in the same way as you can list tables in a ledger. The answer to that is 'yes'. The documents returned in information_schema.user_tables look like this:
{
  tableId: "...",
  name: "THE_TABLE_NAME",
  indexes: [
    {
      expr: "[THE_FIELD_BEING_INDEXED]"
    }
  ],
  status: "ACTIVE"
}
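So, mirroring your table-existence check, you can read a table's indexes before deciding whether to create one. A minimal sketch, assuming the same TransactionExecutor as in your question:
const getIndexes = async (
  transactionExecutor: TransactionExecutor,
  tableName: string,
) => {
  // SELECT VALUE unwraps the indexes list from the matching table document
  const statement = `SELECT VALUE t.indexes FROM information_schema.user_tables AS t WHERE t.name = ?`;
  const result = await transactionExecutor.execute(statement, tableName);
  return result.getResultList();
};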


Google Healthcare API Nodejs - Filter Appointment using start and end date

I cannot filter an Appointment by start and end date using the Google Healthcare API.
I am trying to recreate the query below:
Appointment?date=ge2023-02-03T04:00:00.000Z&date=le2023-02-05T04:00:00.000Z
This is my JavaScript code using the library:
const parent = `projects/${projectId}/locations/${cloudRegion}/datasets/${datasetId}/fhirStores/${fhirStoreId}`;
const params = {
  parent,
  resourceType,
  date: `le${end}`,
  date: `ge${start}`,
};
const resource = await healthcare.projects.locations.datasets.fhirStores.fhir.search(params);
date: `ge${start}` is ignored because an object literal cannot have duplicate keys.
Is there any other way I can achieve this?
Thanks!
I was able to make it work.
const params = {
  parent,
  resourceType,
  "date:end": `le${end}`,
  "date:start": `ge${start}`,
};
Just change "date" to "date:start" and "date:end".
I see that you were able to make this work, but there's a better way. What you essentially want to do here is specify multiple date parameters. The solution you've come up with does this, but by using non-existent modifiers, :start and :end. By default the FHIR store drops modifiers it does not recognize, so your query is the same as date=le...&date=ge..., which is the end result you want. But if you were using the header Prefer: handling=strict, or had set defaultSearchHandlingStrict in your FHIR store config, you would get an error back from this search.
So what's the correct thing to do? If you want to specify multiple query parameters to be ANDed together (with the Node API), all you need to do is specify an array. In the future, if you want to OR them together, use a single parameter with a comma-separated list. So your code becomes:
const params = {
  parent,
  resourceType,
  date: [`le${end}`, `ge${start}`],
};
As a side note, the behaviour of search with the resourceType parameter is undefined; the method you want is searchType. These look the same due to the way the code is generated, but they function slightly differently.
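For illustration, the same request via searchType might look like this (an untested sketch; I'm assuming here that the generated Node client forwards the date parameter unchanged):
const resource = await healthcare.projects.locations.datasets.fhirStores.fhir.searchType({
  parent,
  resourceType: 'Appointment',
  date: [`le${end}`, `ge${start}`], // ANDed together, as above
});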

CloudWatch Metric Filter for checking JSON key exists

I'm trying to come up with a metric filter expression that filters CloudWatch Logs when a special JSON key attribute is present.
The use case is the following: the application does all kinds of logging (in JSON format), and whenever a log line contains a special JSON key (a nested JSON response from a third-party service), I would like to filter for it.
Example logs:
{"severity":"INFO","msg":"EVENT","event":{"key1":"value1"}}
{"severity":"INFO","msg":"FooService responded","response":{"response_code":800}}
Filter patterns that I've tried that don't work:
{ $.response }
{ $.response = *}
{ $.response = "*"}
{ $.response EXISTS }
{ $.response IS TRUE }
{ $.response NOT NULL }
{ $.response != NULL }
Expected filtering result:
{"severity":"INFO","msg":"FooService responded","response":{"response_code":800}}
{ $.response EXISTS } does the opposite of what I expect (it returns the 1st line rather than the 2nd), and I'm not sure how to negate it.
Reference material: Filter and pattern syntax # CloudWatch User Guide
I haven't found a good solution.
But I did find one at least.
If you search for a key being != a specific value, it seems to do a null check on it.
So if you say:
{$.response != "something_no_one_should_have_ever_saved_this_response_as"}
Then you get all entries where response exists in your json, and where it's not your string (hopefully all of the valid entries)
Definitely not a clean solution, but it seems to be pretty functional.
I don't have a solution to the task of finding records where a field exists. Indeed, the linked document in the question specifically calls this out as not supported.
but
If we simply reverse our logic, this becomes a more tractable problem. Looking at your data, you want all records where there's a response key, but that could also be stated as all records where there isn't an event key.
This means you could accomplish the task with {$.event NOT EXISTS}. Of course, this becomes more complicated the more types of log messages you get (I had to chain three different NOT EXISTS queries for my use case) but it does solve the problem.
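For example, with a second message type to exclude (the error key here is hypothetical), the chained filter would look like:
{ $.event NOT EXISTS && $.error NOT EXISTS }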

Can I use a ForAll and UpdateIf within a local offline Powerapps collection?

Can anyone help? I need assistance collecting multiple records in a gallery and saving them to a local collection when offline.
When my app is connected, my script uses a ForAll to go through all the gallery items then if the Question ID matches the ID in the gallery, it patches the records to the SQL database. This part works fine.
However, when offline, I collect the items and save them to a local collection called LocalAnswers. It only saves 1 record (instead of 20) and does not pull in the Question ID. I have tried inserting a ForAll and UpdateIf within my Collect function but can't seem to get it right. Any ideas?
If(
    Connection.Connected,
    ForAll(
        Gallery2.AllItems,
        UpdateIf(
            AuditAnswers,
            ID = Value(IDGal.Text),
            {
                AuditID: IDAuditVar,
                Answer: Radio1.Selected.Value,
                Action: ActionGal.Text,
                AddToActionPlan: tglAction.Value
            }
        )
    ),
    Collect(
        LocalAnswers,
        {
            AuditID: IDAuditVar,
            Answer: Radio1.Selected.Value,
            Action: ActionGal.Text,
            AddToActionPlan: tglAction.Value
        }
    )
);
Collect only pulls a single record because you only have a single record defined (everything between the {}).
I don't typically create collections from Gallery.AllItems but rather from a data source (SharePoint, SQL, another collection, etc.), so I'm not sure if this will work without testing.
Try something like:
ForAll(Gallery2.AllItems,
    Collect(colLocalAnswers,
        {
            AuditID: ThisRecord.AuditID, // or whatever the control's name is
            Answer: ThisRecord.Radio1.Selected.Value,
            Action: ThisRecord.ActionGal.Text,
            AddToActionPlan: ThisRecord.tglAction.Value
        }
    )
);
SaveData(colLocalAnswers, "localfile")

Drop column in Dynamo DB table

I've been looking through the AWS DynamoDB documentation and the Amazon DynamoDB interface, and it seems like there's no way to remove a column from a table, outside of deleting the entire table with its contents and starting over. Is that true?
If so, why would Amazon not support this?
Try removing all data from that column; it will automatically remove that column.
Using document client with javascript, we can do this:
const paramsUpdate = {
  TableName: tableName,
  Key: { HashKey: 'hashKey' },
  UpdateExpression: 'remove #c',
  ExpressionAttributeNames: { '#c': 'columnName' }
};
documentClient.update(paramsUpdate, (errUpdate) => {
  if (errUpdate) log.error(errUpdate);
});
Here we set UpdateExpression with a REMOVE clause.
There is a REMOVE action in the DynamoDB API.
DynamoDB does not have a schema definition, and so there is no such thing as a "column". It also means there is no way to delete all attributes with the same name without iterating over each record.
A solution I recommend is to keep these attributes, and to make your code refer to that same data using a fresh attribute name.
For example, attribute content could become content_v2. It might not look so clean, but it's cheap, quick and your old data would be backed up.
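If you do want to physically remove the attribute from every item, the iteration mentioned above could look like this. A minimal sketch using the AWS SDK v2 DocumentClient (table name, key name, and attribute name are assumptions):
const AWS = require('aws-sdk');
const documentClient = new AWS.DynamoDB.DocumentClient();

const removeAttributeFromAllItems = async () => {
  let lastKey;
  do {
    // Scan one page of items at a time
    const page = await documentClient.scan({
      TableName: 'myTable',             // assumption
      ExclusiveStartKey: lastKey,
    }).promise();
    for (const item of page.Items) {
      // REMOVE deletes the attribute from this one item
      await documentClient.update({
        TableName: 'myTable',
        Key: { HashKey: item.HashKey }, // assumes a simple hash-key schema
        UpdateExpression: 'REMOVE #c',
        ExpressionAttributeNames: { '#c': 'columnName' },
      }).promise();
    }
    lastKey = page.LastEvaluatedKey;
  } while (lastKey);
};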
Setting all instances of the column value to null clears the column.
In C#, this method does the trick using the persistence framework:
static void RemoveColumn()
{
    var myItems = context.ScanAsync<MyObjectType>(null).GetRemainingAsync().Result;
    // For each item, clear the unwanted attribute and save
    myItems.ForEach(myObject =>
    {
        myObject.UnwantedColumn = null;
        context.Save(myObject);
    });
}
Just remove all the data for that one column. On my end, it automatically refreshed; you might have to refresh the page.

Search Informatica for text in SQL override

Is there a way to search all the mappings, sessions, etc. in Informatica for a text string contained within a SQL override?
For example, suppose I know a certain stored procedure (SP_FOO) is being called somewhere in an INFA process, but I don't know where exactly. Somewhere I think there is a Post SQL on a source or target calling it. Could I search all the sessions for Post SQL containing SP_FOO? (Similar to what I could do with grep on source code.)
You can use repository queries against the REPO tables (if you have enough access) to get data related to all the mappings, transformations, sessions, etc.
The link below covers almost every kind of repository query, and your answer can be found there:
https://uisapp2.iu.edu/confluence-prd/display/EDW/Querying+PowerCenter+data
select * --distinct sbj.SUBJECT_AREA, m.PARENT_MAPPING_NAME
from REP_SUBJECT sbj, REP_ALL_MAPPINGS m, REP_WIDGET_INST w, REP_WIDGET_ATTR wa
where sbj.SUBJECT_ID = m.SUBJECT_ID
  and m.MAPPING_ID = w.MAPPING_ID
  and w.WIDGET_ID = wa.WIDGET_ID
  and sbj.SUBJECT_AREA in ('TLR','PPM_PNLST_WEB','PPM_CURRENCY','OLA','ODS','MMS','IT_METRIC','E_CONSENT','EDW','EDD','EDC','ABS')
  and (UPPER(ATTR_VALUE) like '%PSA_CONTACT_EVENT%'
    -- or UPPER(ATTR_VALUE) like '%PSA_MEMBER_CHARACTERISTIC%'
    -- or UPPER(ATTR_VALUE) like '%PSA_REPORTING_HH_CHRSTC%'
    -- or UPPER(ATTR_VALUE) like '%PSA_REPORTING_MEMBER_CHRSTC%'
  )
  --and m.PARENT_MAPPING_NAME like '%ARM%'
order by 1
Please let me know if you have any issues.
Another less scientific way to do this is to export the workflow(s) as XML and use a text editor to search through them for the stored procedure name.
If you have read access to the schema where the Informatica repository resides, try this:
SELECT DISTINCT f.subj_name folder, e.mapping_name, object_type_name,
b.instance_name, a.attr_value
FROM opb_widget_attr a,
opb_widget_inst b,
opb_object_type c,
opb_attr d,
opb_mapping e,
opb_subject f
WHERE a.widget_id = b.widget_id
AND b.widget_type = c.object_type_id
AND ( object_type_name = 'Source Qualifier'
OR object_type_name LIKE '%Lookup%'
)
AND a.widget_id = b.widget_id
AND a.attr_id = d.attr_id
AND c.object_type_id = d.object_type_id
AND attr_name IN ('Sql Query')--, 'Lookup Sql Override')
AND b.mapping_id = e.mapping_id
AND e.subject_id = f.subj_id
AND a.attr_value is not null
--AND UPPER (a.attr_value) LIKE UPPER ('%currency%')
Yes. There is a small Java-based tool called Informatica Meta Query.
Using that tool, you can search for any information that is present in the Informatica metadata tables.
If you cannot find that tool, you can write queries directly against the Informatica metadata tables to get the required information.
Adding a few more points to the solutions provided by Data Origin and Sandeep:
It is highly advisable not to query repository tables directly. Rather, you can create synonyms or views and then query those objects to avoid any damage to the repository tables.
In our dev/prod environments, application programmers are not granted any direct access to the repository tables.
As querying the Informatica database isn't the best idea, I would suggest you export all the workflows in your folder to XML using Repository Manager. From Repository Manager you can select all of them and export them at once. Then write a Java program to search for the pattern in the XMLs you have.
I have written a sample program here; please modify it as per your requirement:
Make a spec file with the workflow XML file names (specFileName).
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public static void main(String[] args)
{
    String specFileName = "<YourSpecFile>";
    String textToSearch = "<YourString>";
    try {
        File inFile = new File(specFileName);
        BufferedReader reader = new BufferedReader(new FileReader(inFile));
        String currentLine;
        while ((currentLine = reader.readLine()) != null)
        {
            // trim the newline before comparing with the search string
            String trimmedLine = currentLine.trim();
            if (trimmedLine.contains(textToSearch))
            {
                System.out.println(specFileName); // spec file name
            }
        }
        reader.close();
    }
    catch (IOException ex)
    {
        System.out.println("Error reading file '" + specFileName + "'");
    }
}