DynamoDB does not support batch updates; it supports batch puts only.
But is it possible to batch-put an item only if its key does not exist?
In the BatchWriteItem documentation, there is the following note:
For example, you cannot specify conditions on individual put and delete requests, and BatchWriteItem does not return deleted items in the response.
Instead, I would recommend using putItem with a conditional expression. Towards the bottom of the putItem documentation there is the following note:
[...] To prevent a new item from replacing an existing item, use a
conditional expression that contains the attribute_not_exists function
with the name of the attribute being used as the partition key for the
table [...]
So make sure to add the following to your ConditionExpression (using Node.js syntax here):
const params = {
  Item: {
    userId: {
      S: "Beyonce"
    }
  },
  ConditionExpression: "attribute_not_exists(userId)"
};
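A minimal sketch of the full call, assuming the AWS SDK for JavaScript v2 and a hypothetical table named Users; a failed condition surfaces as a ConditionalCheckFailedException:

const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB();

dynamodb.putItem({
  TableName: 'Users', // hypothetical table name
  Item: {
    userId: { S: 'Beyonce' }
  },
  ConditionExpression: 'attribute_not_exists(userId)'
}, (err) => {
  if (err && err.code === 'ConditionalCheckFailedException') {
    console.log('Item already exists; the put was skipped.');
  } else if (err) {
    console.error(err);
  }
});

Since BatchWriteItem offers no such hook, conditional puts for many items have to be issued as individual PutItem calls.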
I'm trying to come up with a metric filter expression that filters CloudWatch Logs entries in which a particular JSON key is present.
The use case is the following: the application does all kinds of logging (in JSON format), and whenever a log entry has a special JSON key (a nested JSON response from a third-party service), I would like to filter for it.
Example logs:
{"severity":"INFO","msg":"EVENT","event":{"key1":"value1"}}
{"severity":"INFO","msg":"FooService responded","response":{"response_code":800}}
Filter patterns that I've tried that don't work:
{ $.response }
{ $.response = *}
{ $.response = "*"}
{ $.response EXISTS }
{ $.response IS TRUE }
{ $.response NOT NULL }
{ $.response != NULL }
Expected filtering result:
{"severity":"INFO","msg":"FooService responded","response":{"response_code":800}}
{ $.response EXISTS } does the opposite of what I expect (it returns the 1st line rather than the 2nd), but I'm not sure how to negate it.
Reference material: Filter and pattern syntax - CloudWatch User Guide
I haven't found a good solution, but I did at least find one that works.
If you search for a key being != a specific value, it seems to do a null check on it.
So if you say:
{$.response != "something_no_one_should_have_ever_saved_this_response_as"}
Then you get all entries where response exists in your JSON and where it's not that string (hopefully all of the valid entries).
Definitely not a clean solution, but it seems to be pretty functional.
I don't have a solution to the task of finding records where a field exists. Indeed, the linked document in the question specifically calls this out as not supported.
but
If we simply reverse our logic, this becomes a more tractable problem. Looking at your data, you want all records where there's a response key, but that could also be stated as all records where there isn't an event key.
This means you could accomplish the task with {$.event NOT EXISTS}. Of course, this becomes more complicated the more types of log messages you get (I had to chain three different NOT EXISTS clauses for my use case, as sketched below), but it does solve the problem.
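For instance, several exclusions can be chained with && (the key names other than event are hypothetical here):

{ ($.event NOT EXISTS) && ($.audit NOT EXISTS) && ($.metrics NOT EXISTS) }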
I'm currently using AWS Lambda (NodeJS) with AWS QLDB.
The scenario is like this.
I had the first table and its indexes created when I deployed the service. My problem is that once I need to add a new table and its indexes, creation fails because a table already exists in the ledger.
My workaround to be able to create a new table even if there's an existing table in my ledger is to query the list of tables I have:
const getTables = async (transactionExecutor: TransactionExecutor) => {
  const statement = `SELECT name FROM information_schema.user_tables`;
  return await transactionExecutor.execute(statement);
};
Then I have this condition to check whether the table already exists:
const tables = JSON.stringify(result.getResultList());

if (
  !JSON.parse(tables).some((object): boolean => object.name === process.env.TABLE_NAME)
) {
  console.log('TABLE A NOT EXISTING');
  await createTable(transactionExecutor, process.env.TABLE_NAME);
}

if (
  !JSON.parse(tables).some(
    (object): boolean => object.name === process.env.TABLE_NAME_1,
  )
) {
  console.log('TABLE B NOT EXISTING');
  await createTable(transactionExecutor, process.env.TABLE_NAME_1);
}
I don't know how to do the same check for indexes; I tried using SQL commands in QLDB, but it's not working.
I hope you can help me.
Thank you
I'm not quite sure what your question is (the post title and body hint at different things), but I'm going to do my best to answer.
First, QLDB stores data in Ion, not JSON. So, please use the Ion APIs to parse data and not the JSON ones. The reason your code works at all is because Ion is a superset of JSON and the result set doesn't include types that are unknown to JSON. So, for example, if the result set was changed to include an Ion Timestamp, then your code would break.
Next, actually getting a list of tables has first class support in the driver. Simply use driver.getTableNames.
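A minimal sketch of that, assuming the amazon-qldb-driver-nodejs package and a hypothetical ledger name:

const { QldbDriver } = require('amazon-qldb-driver-nodejs');

const driver = new QldbDriver('my-ledger'); // hypothetical ledger name

const ensureTable = async (tableName) => {
  const tableNames = await driver.getTableNames();
  if (!tableNames.includes(tableName)) {
    // The table is missing, so it is safe to create it here.
  }
};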
Third, I think you have a question: "can I add an index to a non-empty table?". The answer is "no". This is planned functionality, and I will update this answer when it is available. UPDATE: Now you can! https://aws.amazon.com/about-aws/whats-new/2020/09/amazon-qldb-launches-index-improvements/
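With that launch, an index can be created on an existing (even populated) table with the usual statement; a sketch with assumed table and field names:

await transactionExecutor.execute('CREATE INDEX ON MyTable (myField)');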
Finally, I think you're also asking if there is a way to list indexes on a table in the same way as you can list tables in a ledger. The answer to that is 'yes'. The documents returned in information_schema.user_tables look like this:
{
  tableId: "...",
  name: "THE_TABLE_NAME",
  indexes: [
    {
      expr: "[THE_FIELD_BEING_INDEXED]"
    }
  ],
  status: "ACTIVE"
}
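So, to check whether an index already exists before creating it, you could query that same catalog. A minimal sketch using the Ion DOM APIs mentioned in the first point (field names follow the document shape above):

const indexExists = async (txn, tableName, fieldName) => {
  const result = await txn.execute(
    `SELECT VALUE indexes FROM information_schema.user_tables WHERE name = ?`,
    tableName
  );
  // Each returned value is an Ion list of structs like { expr: "[field]" }.
  return result.getResultList().some((indexList) =>
    indexList.elements().some(
      (index) => index.get('expr').stringValue() === `[${fieldName}]`
    )
  );
};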
Example here: https://codesandbox.io/s/j4mo8qpmrw
Docs here: https://www.apollographql.com/docs/link/links/state.html#default
TL;DR: This is a todo list; the @client query parameters don't filter the list.
This is the query, taking in $id as a parameter:
const GET_TODOS = gql`
  query todos($id: Int!) {
    todos(id: $id) @client {
      id
      text
    }
  }
`;
The Query component passes the variable in:
<Query query={GET_TODOS} variables={{ id: 1 }}>
  {/* Code */}
</Query>
But the default resolver doesn't use the parameter; you can see it in the codesandbox.io example above.
The docs say it should work, but I can't seem to figure what I'm missing. Thanks in advance!
For simple use cases, you can often rely on the default resolver to fetch the data you need. However, to implement something like filtering the data in the cache or manipulating it (like you do with mutations), you'll need to write your own resolver. To accomplish what you're trying to do, you could do something like this:
export const resolvers = {
  Query: {
    todos: (obj, args, ctx) => {
      const query = gql`
        query GetTodos {
          todos @client {
            id
            text
          }
        }
      `
      const { todos } = ctx.cache.readQuery({ query })
      return todos.filter(todo => todo.id === args.id)
    },
  },
  Mutation: {},
}
EDIT: Every Type we define has a set of fields. When we return a particular Type (or List of Types), each field on that type will utilize the default resolver to try to resolve its own value (assuming that field was requested). The way the default resolver works is simple -- it looks at the parent (or "root") object's value and if it finds a property matching the field name, it returns the value of that property. If the property isn't found (or can't be coerced into whatever Scalar or Type the field is expecting) it returns null.
That means we can, for example, return an object representing a single Todo and we don't have to define a resolver for its id or text fields, as long as the object has id and text properties on it. Looking at it another way, if we wanted to create an arbitrary field on Todo called textWithFoo, we could leave the cache defaults as is, and create a resolver like
(obj, args, ctx) => obj.text + ' and FOO!'
In this case, a default resolver would do us no good because the objects stored in the cache don't have a textWithFoo property, so we write our own resolver.
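A sketch of wiring that up as a type-level resolver (the Todo type and field names are taken from this example):

export const resolvers = {
  Todo: {
    textWithFoo: (obj) => obj.text + ' and FOO!',
  },
};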
What's important to keep in mind is that a query like todos is just a field too (in this case, it's a field on the Query Type). It behaves pretty much the same way any other field does (including the default resolver behavior). With apollo-link-state, though, the data structure you define under defaults becomes the parent or "root" value for your queries.
In your sample code, your defaults include a single property (todos). Because that's a property on the root object, we can fetch it with a query called todos and still get back the data even without a resolver. The default resolver for the todos field will look in the root object (in this case your cache), see a property called todos and return that.
On the flip side, a query like todo (singular) doesn't have a matching property in the root (cache). You need to write a resolver for it to have it return data. Similarly, if you want to manipulate the data before returning it in the query (with or without arguments), you need to include a resolver.
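A sketch of such a resolver for a singular todo query, mirroring the resolver above (names are assumed from the example):

export const resolvers = {
  Query: {
    todo: (obj, args, ctx) => {
      const query = gql`
        query GetTodos {
          todos @client {
            id
            text
          }
        }
      `
      const { todos } = ctx.cache.readQuery({ query })
      return todos.find(todo => todo.id === args.id) || null
    },
  },
}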
I want to create a view that returns only a subset of values from a document, each with its key and value within a JSON string. For example, if a given view returns a document like the following, is it possible to get only some of its fields in one request? Thank you.
{
  "total_rows": 10,
  "offset": 3,
  "rows": [{
    "id": "doc1",
    "key": "abc123",
    "value": {
      "_id": "aaaa",
      "_rev": "bbb",
      "field1": "abc",
      "field2": "bcd",
      "field3": "cde",
      "field4": "123",
      "field5": "789",
      "field6": "aa@email.com",
      "field7": "ttt",
      "field8": "iii",
      "field9": [{
        "field91": "tyui",
        "field92": "55555"
      }],
      "field10": "0000",
      "field11": "55555",
      "field12": "0030",
      ...
    }
  }]
}
I just want to create a view that returns only some of the fields, like the following:
{
  "field1": "abc",
  "field2": "bcd",
  "field3": "cde",
  "field4": "123",
  "field5": "789",
  "field6": "aa@email.com",
  "field7": "ttt",
  "field8": "iii",
  "field9": [{
    "field91": "tyui",
    "field92": "55555"
  }]
}
Write a map function that emits a new document with the selected fields only. As an example, let's map field1 (a string) and field9 (an array) only:
function map(doc) {
  emit(doc._id, {
    field1: doc.field1,
    field9: doc.field9
  });
}
In the above example, each document will be emitted with the key being the original doc ID and the value being the mapped fields you require.
This is useful if you are planning to add a reduce function later.
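You can then query the view over HTTP. A sketch using Node's built-in fetch, assuming a hypothetical database mydb with the view stored as myview in the design document mydesign:

const queryView = async () => {
  const response = await fetch(
    'http://localhost:5984/mydb/_design/mydesign/_view/myview?key=%22doc1%22'
  );
  const { rows } = await response.json();
  // rows[0].value holds only the mapped fields (field1 and field9).
  return rows;
};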
Depending on your use case, you may just want to emit the mapped objects:
function map(doc) {
  emit({
    field1: doc.field1,
    field9: doc.field9
  });
}
Please see http://guide.couchdb.org/draft/views.html
The documentation on building data views is pretty good; you can discover a lot by experimenting.
I've been looking through the AWS DynamoDB documentation and the Amazon DynamoDB interface, and it seems like there's no way to remove a column from a table, outside of deleting the entire table with its contents and starting over. Is that true?
If so, why would Amazon not support this?
Try removing all data from that column; it will automatically remove that column.
Using the document client with JavaScript, we can do this:
const AWS = require('aws-sdk');
const documentClient = new AWS.DynamoDB.DocumentClient();

const paramsUpdate = {
  TableName: tableName,
  Key: { HashKey: 'hashKey' },
  UpdateExpression: 'remove #c',
  ExpressionAttributeNames: { '#c': 'columnName' }
};

documentClient.update(paramsUpdate, (errUpdate) => {
  if (errUpdate) console.error(errUpdate);
});
Here we set UpdateExpression with a REMOVE clause.
There is a REMOVE action in the DynamoDB API.
DynamoDB does not have a schema definition, and so there is no such thing as a "column". It also means there is no way to delete all attributes with the same name without iterating over each record.
A solution I recommend is to keep these attributes, and to make your code refer to that same data using a fresh attribute name.
For example, attribute content could become content_v2. It might not look so clean, but it's cheap, quick and your old data would be backed up.
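A sketch of what the reading side could look like (attribute names from the example above):

// Prefer the new attribute; fall back to the legacy one for items written before the rename.
const content = item.content_v2 !== undefined ? item.content_v2 : item.content;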
Setting all instances of the column value to null clears the column.
In C#, this method does the trick using the persistence framework:
static void RemoveColumn()
{
    var myItems = context.ScanAsync<MyObjectType>(null).GetRemainingAsync().Result;

    // For each item, clear the unwanted attribute and save.
    myItems.ForEach(myObject =>
    {
        myObject.UnwantedColumn = null;
        context.Save(myObject);
    });
}
Just remove all the data for that one column. On my end it refreshed automatically; you might have to refresh the page.