I have read this page about Solace wildcards and have a question:
https://docs.solace.com/PubSub-Basics/Wildcard-Charaters-Topic-Subs.htm
I have a dynamic topic like this
produce/apple/seedless/color
In this case I want to get a list of apples: all the gala, delicious, ... apples.
I know I can create individual subscriptions for each apple type, but in my data there could be hundreds and I only want a subset of them. Is it possible to pass in a list of values for the apple level in a single subscription? This will be a subscription created on a physical queue listening to dynamic topics created programmatically based on the message payload.
example of the subscription I want:
produce/gala,seedless,delicious/seedless/color
not 3 subscriptions:
produce/gala/seedless/color
produce/seedless/seedless/color
produce/delicious/seedless/color
Thanks for all the help
Revisit your topic structure.
Does produce/> contain only apples? Then use produce/>, though it would probably be best to rename that level apples/.
If produce contains apples and oranges, add a topic level:
produce/apples/...
produce/oranges/...
Then you can just subscribe to
produce/apples/>
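A single Solace subscription can't carry a comma-separated list of values at one level, so it's either one wildcard subscription on the restructured topic or one subscription per variety added to the queue. Here is a rough sketch of adding those subscriptions programmatically through the SEMP v2 config API from Python; the broker URL, credentials, message VPN, queue name and the variety list are all placeholders.

import requests

# Placeholders: adjust to your broker, message VPN and queue.
SEMP_BASE = "http://localhost:8080/SEMP/v2/config"
MSG_VPN = "default"
QUEUE = "produce-queue"
AUTH = ("admin", "admin")

# Either one wildcard subscription on the restructured topic...
subscriptions = ["produce/apples/>"]
# ...or, with the original topic layout, one subscription per variety you care about:
# subscriptions = [f"produce/{variety}/seedless/color"
#                  for variety in ("gala", "delicious", "fuji")]

for topic in subscriptions:
    resp = requests.post(
        f"{SEMP_BASE}/msgVpns/{MSG_VPN}/queues/{QUEUE}/subscriptions",
        json={"subscriptionTopic": topic},
        auth=AUTH,
    )
    resp.raise_for_status()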
I watched the Amazon Personalize deep dive series on YouTube. At timestamp 8:33 in the video, it is mentioned that 'Personalize does not understand negative feedback' and that any interaction you submit is assumed to be a positive one.
But I think that giving negative feedback could improve the recommendations we give as a whole. If Personalize knew that a user does not like a given item 'A', it could help ensure that items similar to 'A' are not recommended in the future.
Is there any way we can give negative feedback (e.g., a user doesn't like items x, y, z) to Amazon Personalize?
A possible way to give negative feedback that I thought of:
Let's say users can rate movies out of 5. Every time a user gives a rating >= 3, we add an additional interaction to the dataset (i.e. we have two interactions saying the user rated the movie >= 3 in interactions.csv instead of just one). However, if they give a rating <= 2 (meaning they probably don't like the movie), we keep just the single interaction (i.e. only one row saying the user rated the movie <= 2 in interactions.csv).
Would this in any way help convey to Personalize that ratings <= 2 are not as important / that the user did not like those items?
Negative feedback, where the user explicitly indicates that they dislike an item, is currently not supported as training input for Amazon Personalize. Furthermore, there is currently no way to add weight/reward to specific interactions by event type or event value (see this answer for details).
With that said, you can use impressions with your interactions to indicate items that were seen by the user but that they chose not to interact with. Impressions are only supported by the User-Personalization recipe. From the docs:
Unlike other recipes, which solely use positive interactions (clicking, watching, or purchasing), the User-Personalization recipe can also use impressions data. Impressions are lists of items that were visible to a user when they interacted with (clicked, watched, purchased, and so on) a particular item.
Using this information, a solution created with the User-Personalization recipe can calculate the suitability of new items based on how frequently an item has been ignored, and change recommendations accordingly. For more information see Impressions data.
Impressions are not the same as an explicitly negative interaction but they do imply that the impressed items were considered less relevant/important to the user than the item that they chose to interact with.
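For real-time data, the PutEvents API lets you attach that impressions list to each interaction. A minimal boto3 sketch, assuming an existing event tracker; the tracking ID, user, session and item IDs are placeholders:

import time
import boto3

personalize_events = boto3.client("personalize-events")

personalize_events.put_events(
    trackingId="YOUR_EVENT_TRACKER_ID",      # placeholder event tracker ID
    userId="user-1",
    sessionId="session-1",
    eventList=[
        {
            "eventType": "watch",
            "itemId": "movie-42",            # the item the user actually chose
            "sentAt": int(time.time()),
            # Items that were shown to the user at the same time; only the
            # User-Personalization recipe makes use of this field.
            "impression": ["movie-42", "movie-7", "movie-13", "movie-99"],
        }
    ],
)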
Another approach that can be used to consider negative interactions when making recommendations is to have two event types: one event type for positive intent (e.g., "like", "watch", or "purchase") and one event type for dislike (e.g., "dislike"). Then create a Personalize solution that only trains on the positive event type. Finally, at inference time, use a Personalize filter to exclude items that the user has recently disliked.
EXCLUDE ItemID WHERE Interactions.EVENT_TYPE IN ("dislike")
In this case, Personalize is still training only on positive interactions but with the filter you won't recommend items that the user disliked.
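A minimal boto3 sketch of that setup (the dataset group and campaign ARNs, the filter name and the "dislike" event type are placeholders):

import boto3

personalize = boto3.client("personalize")
personalize_runtime = boto3.client("personalize-runtime")

# Create the filter once.
create_response = personalize.create_filter(
    name="exclude-disliked-items",
    datasetGroupArn="arn:aws:personalize:us-east-1:123456789012:dataset-group/movies",
    filterExpression='EXCLUDE ItemID WHERE Interactions.EVENT_TYPE IN ("dislike")',
)
filter_arn = create_response["filterArn"]

# At inference time, apply the filter so recently disliked items are dropped.
recommendations = personalize_runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/movies-campaign",
    userId="user-1",
    filterArn=filter_arn,
)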
This is my first time building a single-table design, and I was just wondering if anyone had any advice/feedback/better approaches for the following plan?
I'm going to build a basic 'meetup' clone, so e.g. users can create events, and then other users can attend those events.
How the entities in the app relate to each other:
Entities (I also added an 'ItemType' attribute to each entity, e.g. ItemType=Event):
Key Structure:
Access Patterns:
Get All Attendees for an event
Get All events for a specific user
Get all events
Get a single event
Global Secondary Indexes:
Inverted Index: SK-PK-index
ItemType-SK-Index
Queries:
1. Get all attendees for an event:
PK=EVENT#E1
SK=ATTENDEE# (begins with)
2. Get All Events for a specific User
Index: SK-PK-index
SK=ATTENDEE#User1
PK=EVENT# (Begins With)
3. Get All Events (I feel like there's a much better way to do this, if there is please let me know)
Index: ItemType-SK-Index
ItemType=Event
SK=EVENT# (Begins With)
4. Get a single event
PK=EVENT#E1
SK=EVENT#E1
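For reference, here is a rough boto3 sketch of those four queries against this key structure; the table name "MeetupTable" and the exact key values are placeholders:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("MeetupTable")  # table name is a placeholder

# 1. Get all attendees for an event
attendees = table.query(
    KeyConditionExpression=Key("PK").eq("EVENT#E1") & Key("SK").begins_with("ATTENDEE#")
)["Items"]

# 2. Get all events for a specific user (inverted index)
user_events = table.query(
    IndexName="SK-PK-index",
    KeyConditionExpression=Key("SK").eq("ATTENDEE#User1") & Key("PK").begins_with("EVENT#")
)["Items"]

# 3. Get all events by item type
all_events = table.query(
    IndexName="ItemType-SK-Index",
    KeyConditionExpression=Key("ItemType").eq("Event") & Key("SK").begins_with("EVENT#")
)["Items"]

# 4. Get a single event
event = table.get_item(Key={"PK": "EVENT#E1", "SK": "EVENT#E1"}).get("Item")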
A couple of questions I had:
When returning a list of attendees, I'd want to be able to get extra data for each attendee, e.g. first/last name, etc.
Based on this example: https://aws.amazon.com/getting-started/hands-on/design-a-database-for-a-mobile-app-with-dynamodb/module-5/
To avoid duplicating data and having to handle data changes (e.g. a user changes their name), should I use partial normalization and the BatchGetItem API to retrieve details?
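A rough sketch of that lookup with BatchGetItem, assuming the attendee items only store the user's key and the user items hold the profile data; the table name and the USER# key format are assumptions:

import boto3

dynamodb = boto3.resource("dynamodb")

# Keys pulled from the attendee items returned by query 1 above.
user_keys = [
    {"PK": "USER#User1", "SK": "USER#User1"},
    {"PK": "USER#User2", "SK": "USER#User2"},
]

response = dynamodb.batch_get_item(
    RequestItems={"MeetupTable": {"Keys": user_keys}}
)
users = response["Responses"]["MeetupTable"]  # full user items: first/last name, etc.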
For fuzzy searches etc., is the best approach to stream this data into e.g. Elasticsearch/OpenSearch?
If so, when building APIs, would you still use DynamoDB for some queries, or just use Elasticsearch for everything?
E.g. for Get All Events, would using an ItemType of 'Events' end up creating a hot partition if there's a huge number of events?
Sorry for the long post. I'd appreciate any feedback/advice/better ways to do things, thank you!
I want to query a List inside Marketing Cloud, and then use Automation Studio to send subscribers from that list to a Data Extension.
But I don't see a way to query a custom list.
The list is stored under "Subscribers > MyList".
MyList screenshot
The _ListSubscribers data view contains subscribers associated with all lists. You'll need to narrow it down by ListID.
You'll find the ID of the List in its properties.
All lists share Profile Attribute values, which can be found in the _EnterpriseAttribute data view. You can relate that back to _ListSubscribers using the SubscriberID.
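As a sketch, a SQL Query Activity in Automation Studio along these lines could populate the target Data Extension (the ListID value and the profile attribute column name are placeholders; the target Data Extension is chosen in the activity's settings):

SELECT
    ls.SubscriberKey,
    ls.EmailAddress,
    ls.Status,
    ls.DateJoined,
    ea.[First Name] AS FirstName   /* profile attribute; column name varies by account */
FROM _ListSubscribers ls
LEFT JOIN _EnterpriseAttribute ea
    ON ea._SubscriberID = ls.SubscriberID
WHERE ls.ListID = 1234             /* the ListID from the list's properties */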
Here's the relationship I'm trying to model in DynamoDB:
My service contains posts and topics. A post may belong to multiple topics, and a topic may have multiple posts. All posts have an interest value which is adjusted based on a combination of likes and time since posting; interest measures the popularity of a post at the current moment. If a post gets too old, its interest value drops to 0 and stays that way forever (archival).
The REST api end points work like this:
GET /posts/{id} returns a post object containing the title, text, author name, a link to the author's REST endpoint (doesn't matter for this example) and the number of likes (the interest value is not included)
GET /topics/{name} should return an object with both a list with the N newest posts of the topics as well as one for the N currently most interesting posts
POST /posts/ creates a new post where multiple topics can be specified
POST /topics/ creates a new topic
POST /likes/ creates a like for a specified post (does not actually create an object, just adds the user to the given post-object's list of likers, which is invisible to the users)
The problem now becomes: how do I model the relationship between topics and posts in DynamoDB's NoSQL model?
I thought about adding a list of copies of posts to the tag entries in DynamoDB, where every tag has a list of both the newest and the most interesting posts.
One way I could do this is by creating a CloudWatch-scheduled job that runs every 10 minutes and loops through every topic object, finding both the most interesting and the newest entries and then replacing the topic's old lists.
Another job would also have to regularly update the "interest" value of every non-archived post (keep in mind both likes and time affect the interest value).
One problem with this is that a lot of posts in the tag lists would be out of date for up to 10 minutes if the user makes a change or deletes a post. Likes would also not be properly tracked on the tags' post lists. This could perhaps be solved with transactions, although DynamoDB is limited to 10 objects per transaction.
Another problem is that the add-posts-to-tags job would have to load all the non-archived posts into memory in order to sort them by both time and interest, split them up by tag, and then add the first N of both sets to the tag lists every 10 minutes.
I also had another idea: by limiting a post to a single tag, I could use the tag as the partition key with the post time as the sort key, and use a GSI to add interest as a second sort key.
This does have several downsides though:
very popular tags may be limited to a single partition, since all of their posts share a single partition key
Tag limit is 1
A CloudWatch job to adjust the interest value of posts may still be required
It would require use of a GSI which may lead to dangerous race conditions
But it would have the advantage that there are no copies of the post objects aside from the GSI. It would also allow basically infinite paging of all posts by date, instead of being limited to just the N newest posts.
So what is a good approach here? It seems both of my solutions have horrible deal-breakers. Is this just one of those problems that NoSQL simply can't solve?
You are trying to model relational data using a non-relational DB.
To do this I would use two types of DB.
I would store the post information in DynamoDB;
in your example that would cover:
GET /posts/{id}
POST /posts/
POST /likes/
For the topic-related information I would use Elasticsearch (Amazon Elasticsearch Service).
GET /topics/{name}: the search index would store the full topic info as well as the post IDs, plus the relevant fields you want to search on (in your case the update date, to get the most recent posts).
What this entails is a background process (in DynamoDB this can be done via Streams) that takes changes to the DynamoDB table (new posts, updates to like counts, etc.) and populates the search index.
Note: this could also be solved using a graph DB, but for scaling purposes it is better to separate the source of the data (posts) from the data relations (topics).
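A rough sketch of that background process as a Lambda function attached to the posts table's DynamoDB Stream, indexing each new or updated post into Elasticsearch via its REST API; the domain endpoint, index name and attribute names are assumptions, and request signing/authentication is omitted:

import requests

# Placeholder endpoint for the Amazon Elasticsearch Service / OpenSearch domain.
ES_ENDPOINT = "https://search-mydomain.us-east-1.es.amazonaws.com"

def handler(event, context):
    """Lambda handler attached to the posts table's DynamoDB Stream (NEW_IMAGE view)."""
    for record in event["Records"]:
        if record["eventName"] == "REMOVE":
            continue  # a delete could instead issue a DELETE against the index
        new_image = record["dynamodb"]["NewImage"]
        post_id = new_image["postId"]["S"]          # attribute names are assumptions
        doc = {
            "title": new_image["title"]["S"],
            "topics": [t["S"] for t in new_image.get("topics", {}).get("L", [])],
            "likes": int(new_image.get("likes", {"N": "0"})["N"]),
            "createdAt": new_image["createdAt"]["S"],
        }
        # Index (or reindex) the post so topic pages can query newest / most interesting posts.
        requests.put(f"{ES_ENDPOINT}/posts/_doc/{post_id}", json=doc)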
For work I am going to exchange DNS records between two servers using software made by the company, mostly SRV, NAPTR and A records.
To propagate the information I have to create a new type of message, which is going to be sent by a function in our software that manages all messages.
Instead of creating 3 types of messages ("SRV", "NAPTR", "A"), I thought about creating only one kind, generic for all DNS records, with a part of the message dedicated to the type: NAPTR, A, SRV, MX, etc.
I would like advice on the fields needed in this message: for example, which fields are common to every DNS record type (so they can go in all messages), and which fields are specific to each record type? (Maybe creating a "data" field in the message for additional information specific to each type, e.g. prefix and protocol for NAPTR.)
Currently, for NAPTR (the only one I have done), I receive various variables like the TTL and zone.
I put them all into an update file and apply it with:
system("nsupdate update.txt")
The file, filled in by the OSS, looks like this:
update add test.zone 60 NAPTR 10 100 "S" "SIP+D2T" "" _sip._tcp.zone.
send
But I would like a more general message that adapts to various DNS record types, in case I someday need new ones.
Any help appreciated.
To solve the problem I created one message subdivided into two parts, a header and a body. The header has all the common info of the RR, like the TTL, domain, class and type, and depending on the type I create a "length of data" field and a "data" field for each RR. For example, NAPTR gets a data field with other fields inside it, like weight, priority, etc.
I did not find any better way to factor out the information.
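As an illustration only, that header/body split could look something like this (the field names follow the description above, the NAPTR fields mirror the nsupdate example, and the helper that renders an "update add" line is just a convenience):

from dataclasses import dataclass
from typing import Union

# Common header: fields shared by every resource record type.
@dataclass
class RRHeader:
    name: str          # owner name / domain, e.g. "test.zone"
    ttl: int           # e.g. 60
    rr_class: str      # usually "IN"
    rr_type: str       # "A", "SRV", "NAPTR", "MX", ...

# Type-specific bodies (the "data" part of the message).
@dataclass
class AData:
    address: str

@dataclass
class SRVData:
    priority: int
    weight: int
    port: int
    target: str

@dataclass
class NAPTRData:
    order: int
    preference: int
    flags: str         # e.g. "S"
    service: str       # e.g. "SIP+D2T"
    regexp: str
    replacement: str   # e.g. "_sip._tcp.zone."

@dataclass
class DNSUpdateMessage:
    header: RRHeader
    data: Union[AData, SRVData, NAPTRData]

    def to_nsupdate(self) -> str:
        """Render the record as an 'update add' line for nsupdate."""
        h = self.header
        if isinstance(self.data, NAPTRData):
            d = self.data
            rdata = f'{d.order} {d.preference} "{d.flags}" "{d.service}" "{d.regexp}" {d.replacement}'
        elif isinstance(self.data, SRVData):
            d = self.data
            rdata = f"{d.priority} {d.weight} {d.port} {d.target}"
        else:
            rdata = self.data.address
        return f"update add {h.name} {h.ttl} {h.rr_class} {h.rr_type} {rdata}"

# Example: the NAPTR record from update.txt above.
msg = DNSUpdateMessage(
    header=RRHeader(name="test.zone", ttl=60, rr_class="IN", rr_type="NAPTR"),
    data=NAPTRData(order=10, preference=100, flags="S", service="SIP+D2T",
                   regexp="", replacement="_sip._tcp.zone."),
)
print(msg.to_nsupdate())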