I have a problem deleting documents from Amazon CloudSearch.
When I send a document for deletion, I receive the response:
{"status": "success", "adds": 0, "deletes": 5}
The video then stays in the index with all of its fields reset to their default values, rather than being deleted.
The documentation is not clear on whether this is normal behaviour or a bug.
Has anyone else experienced this?
This surprised me too, but it appears to be normal behavior. The 'deleted' documents aren't searchable anymore since their fields are all null, so they shouldn't cause any problems.
The problem I have with this is that they can be returned if you search for something like "-zomgwtfbbq", since they don't contain the term "zomgwtfbbq".
It is also confusing because it makes your dashboard show one count (the "searchable" documents), but if you run a test search for -zomgwtfbbq (which I have been using as a proxy for "get all documents"), you get a different number. It took me a while to figure out why.
Despite what they say about setting the version to max uint32 "permanently removing" the document, it will still be there. The problem is that they consider these documents unsearchable, but they're not.
Are you specifying the version number when you delete the document?
When deleting documents, note that deleting version max(uint32_t) will permanently remove the document from your domain. Because it is not possible to specify a higher version number, there is no way to add a later version of the document.
http://docs.aws.amazon.com/cloudsearch/latest/developerguide/versioning.html
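For reference, here is a minimal sketch of what a versioned delete batch looks like on the wire, assuming the 2011-02-01 documents/batch endpoint; the domain endpoint and document id below are hypothetical placeholders:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CloudSearchDeleteSketch
{
    static async Task Main()
    {
        // Hypothetical document endpoint; substitute your own domain's doc endpoint.
        var endpoint = "http://doc-mydomain-abc123.us-east-1.cloudsearch.amazonaws.com/2011-02-01/documents/batch";

        // version 4294967295 is max(uint32_t), the "permanent" delete from the quote above.
        var batch = "[{\"type\": \"delete\", \"id\": \"tt0000001\", \"version\": 4294967295}]";

        using var client = new HttpClient();
        var response = await client.PostAsync(endpoint, new StringContent(batch, Encoding.UTF8, "application/json"));
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}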
Is there a way to get the list of all versions of a specific resource created in a FHIR store? I have used the following call:
<FHIR_URL>/<resource-type>/<resource-id>/_history
but it's not returning a response.
If I add a version to this URL:
<FHIR_URL>/<resource-type>/<resource-id>/_history/<version>
then it only shows that particular version of the resource. I need all versions of the resource; is there a way to get this?
When in doubt, I always try the reference implementations.
(all GET requests below)
http://wildfhir4.aegis.net/fhir4-0-1/Patient/example/_history
https://vonk.fire.ly/R4/Patient/pat1/_history
http://hapi.fhir.org/baseR4/Patient/616330/_history?_count=50
I got each system's Patient FHIR logical id ("example", "pat1", "616330") by using the search function and picking a random Patient. The search function is as simple as /Patient/? with no query string values.
While always subject to change and "re-doing the seed data", the aegis example above (today) returns multiple rows of history for a single patient.
If AWS does not work "mostly the same" as the 3 reference implementations, I would submit a bug report.
But based on your examples, it seems to fall in line with the reference implementation examples above and the HL7 documentation below.
https://build.fhir.org/http.html#history
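If you'd rather script the same check than click through a browser, here is a minimal sketch in C#; the URL and Patient id "example" are taken from the aegis server above, and any FHIR client library would work equally well:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class FhirHistorySketch
{
    static async Task Main()
    {
        using var client = new HttpClient();
        // Instance-level _history returns a Bundle of type "history" with one
        // entry per stored version of this single resource.
        var json = await client.GetStringAsync(
            "http://wildfhir4.aegis.net/fhir4-0-1/Patient/example/_history");
        Console.WriteLine(json);
    }
}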
Ok, I've been using Google Cloud Platform for some video files
that are viewable from a few web pages I built. I started this two or three years ago, and I have loved it.
But, now it appears they broke it, without warning/telling us.
So, in the platform's console, yesterday (for the first time in a month or two), I uploaded another video... that part went fine. But when it came time to click the checkbox to grant public access, the checkbox is now GONE. (The only part of the UI that looks NEW
is the column labeled 'public access'. Instead of just a checkbox to toggle on or off, there's now a yellow triangle and an oval-shaped symbol. Once or twice, I was able to get a popup to appear saying 'edit permission', but that quickly led into the weeds.)
After half an hour or so, I finally thought to call platform support, and explained my problem to a guy (with just enough Australian accent to cause me to have to ask for repeats quite a bit...sigh).
So they logged a case# for me; I said I was headed to bed and asked that we continue by email rather than phone. Just before bed, I got the case# and a query about whether it was OK for them to 'change my console'. I replied to the email saying yes, and went to bed.
So that was last night. This morning, re-reading their email, it seems to say that it could be 3 or 4 days before a more technical person contacts me.
After some re-reading of their platform-console docs, I'm now GUESSING that maybe they just nuked the public-access checkbox, and that now I'm supposed to spend hours (days?) taking a short course on IAM permissions and learn some new long-winded method.
(This whole mess could have been avoided if they'd just emailed us an informational warning of this UI change, with some new 5-step short list or tutorial on how to use their 'new, much more complicated,
way to specify public access'. From where I sit, this change is equivalent to Microsoft saying 'instead of that checkbox, you'll need to learn to make registry edits... see our platform docs on how to do that'.)
Right now, I have more than half a mind to bail out of Google's cloud storage and switch to one of the others. But I'm not quite ready yet to make that jump (from the frying pan into the fire?). :^)
Anyone else been down this road? What meeting did I miss? Is there a quicker way out of my dilemma, than just waiting for Google-support to get back to me?
It looks like the change you mention was introduced on July 18th. I'm not sure why, but judging by the change description, it is aimed at preventing sensitive information from accidentally being made public: "Objects can no longer be made public through one-click actions".
You can find the procedure to make a single object public here. It can be achieved through the Console and won't take you more than a few minutes. Once the object is shared publicly, you can use the icon in the “public access” column to get the URL for the object.
You can also make all the content of a bucket public using a similar approach.
When you upload your objects into a bucket, you can upload them with the ACL set to publicRead,
and all of your objects will have a public URL.
using System.IO;
using System.Threading.Tasks;
using Google.Cloud.Storage.V1;

public async Task UploadObjectAsync(string bucketName, string objectName, Stream source, string contentType = "image/jpeg")
{
    var storage = StorageClient.Create();
    // PredefinedObjectAcl.PublicRead makes the uploaded object publicly readable.
    await storage.UploadObjectAsync(bucketName, objectName, contentType, source, new UploadObjectOptions
    {
        PredefinedAcl = PredefinedObjectAcl.PublicRead
    });
}
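A hypothetical call site, assuming you have a local video file and an existing bucket (the names below are placeholders):

// Bucket and object names here are made up for the example.
using (var stream = File.OpenRead("myvideo.mp4"))
{
    await UploadObjectAsync("my-bucket", "videos/myvideo.mp4", stream, "video/mp4");
}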
As I suspected. (I still wonder if they even considered sending an email to each registered/existing customer.)
Ok, yes (finally, after some practice), this solves it! Thx for those two answers.
(But in my view, their UI change is still a work in progress.) So, I have a SUGGESTION for ya, Google. Once one is in the permissions-edit dialog and remembers to do an 'add', there are the 3 fields. The first and third are fine... drop-downs with choices. But that middle entry needs work... how about something like an auto-guess-ahead: initialize the field to a suggested value of 'allUsers', so we don't have to remember what to type and how to spell it, or something along those lines.
EDIT: [Actually, it ought to be possible to make that field a drop-down-list choice, with 'allUsers' as one suggested value, and a second value as a text entry (for specific user names, etc.).]
Unfortunately, it is not possible to list files without access to the bucket that contains them. This is due to the current design of the library, which requires that the bucket be loaded before listing its files.
Summary - Currently the Directory API User Photo's photo data (encoded as web-safe Base64) handles padding in a way that contradicts the documentation; per the docs, padding should be converted from = to ., but the API instead requires =. Looking for clarification on whether this is intended.
Details - I have been using the Google API to interface with user photos - retrieving and updating them. The documentation is clear about how the web-safe Base64 format for the photo data needs to be presented:
For padding, the period (.) character is used instead of the RFC-4648 baseURL definition which uses the equals sign (=) for padding. This is done to simplify URL-parsing.
However, recently this has stopped working. I'm unsure of exactly when this happened. (Edit: Based on similar comments from years ago that I'm finding, this may have never worked and I just never happened to test a photo that encoded into something with padding.)
To test this I downloaded an existing image and re-uploaded it, and got the error Invalid value for ByteString. If I intercept the Base64 being returned and pass that same data back directly, I get no error.
The issue turned out to be the padding - the documentation states that the = padding needs to be replaced with a . period. My example Base64 ended with two padding characters, which were converted to two periods as expected (and this gives the error). If I instead leave the padding as =, it works with no problem.
It turns out the Base64 returned from Google when you retrieve a user photo also pads with = characters, which seems to clearly contradict the documentation. I have also confirmed this through the Try It Now methods on the web, so it's not language- or API-client-specific.
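To make the mismatch concrete, here is a small sketch of both conversions; ToDocumentedWebSafe is what the docs describe (and what now fails with Invalid value for ByteString), while ToAcceptedWebSafe is what actually works, keeping the = padding:

using System;

static class WebSafeBase64Sketch
{
    // The documented conversion: web-safe alphabet plus '.' for padding.
    public static string ToDocumentedWebSafe(byte[] photo) =>
        Convert.ToBase64String(photo)
            .Replace('+', '-')
            .Replace('/', '_')
            .Replace('=', '.');

    // The conversion the API currently accepts: web-safe alphabet,
    // standard '=' padding left in place.
    public static string ToAcceptedWebSafe(byte[] photo) =>
        Convert.ToBase64String(photo)
            .Replace('+', '-')
            .Replace('/', '_');
}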
So, did the process change while the documentation (last updated Feb 26th, 2015) was left stale? Is this a permanent change or a bug?
Edit - According to some other posts, this looks like a longstanding issue; I may simply have never run into an image that ended up with padding before. The point stands - is the documentation accurate, or do I need to adjust for this?
Edit 2 - All signs point to this being either a bug or bad documentation. Either way, I was unable to find any issues in their tracking for this, so I have opened an issue for it. If I get official word in any case I will [try to remember to] come back and provide it as an answer.
We are experiencing some very odd errors in our installation.
Sometimes, out of nowhere, Sitecore throws an error:
Assert: Value Cannot be null. Parameter: Item.
The closest I have come to identifying the problem is narrowing it down to either an index or the web database.
Anyway, if I log into Sitecore the item is just missing. I can fix it in 3 ways:
Rebuild the index.
Recycle app pool
iisreset
Does any of you have an idea why this might be happening? We are running Sitecore.NET 6.5.0 (rev. 120706). Any help will be deeply appreciated.
You are describing a system stability issue, so I recommend opening a ticket with Sitecore support (http://support.sitecore.net). This sort of issue is difficult to troubleshoot over Stack Overflow, since we do not have access to your logs and configuration.
When opening the ticket, I recommend using the Support Package Generator which bundles up all the information (Web.config, App_Config files, IIS settings, Sitecore log files) that Sitecore Support needs to troubleshoot the issue. It's a pretty nifty tool.
That said, from what you describe, it sounds like the issue is related to caching. The fact that restarting IIS resolves the issue indicates that the item is in the Web database, but the runtime doesn't see it. You can prove out whether this is the issue by clearing cache using the /sitecore/admin/cache.aspx screen. If your cache is not getting updated properly, you should review your configuration against the guidelines in the SDN Scaling Guide.
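If you want to rule caching in or out from code rather than the admin page, Sitecore's API exposes the same operation (a diagnostic step, not a fix):

// Clears all Sitecore caches; same effect as /sitecore/admin/cache.aspx -> Clear all.
Sitecore.Caching.CacheManager.ClearAllCaches();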
Based on knowing you're using the Advanced Database Crawler, your issue may be how you're converting a SkinnyItem to an Item. I've had this issue before. If you look at the SkinnyItem.cs class, there's a GetItem() method to convert it into an Item. You can see it uses the database to get the item by its ID, language, and version number. It's possible that when you publish from master to web, you are publishing a new version # of an existing item, so the new version exists in the web DB but the index is not updated and still references the old version. In that case, the GetItem() call uses the previous version # and the item comes back null. One way to fix this is, instead of calling that GetItem() method, to use your own code to get the latest version of that item from Sitecore, e.g.
Item item = Sitecore.Context.Database.GetItem(someSkinnyItem.ItemID);
Instead of
Item item = someSkinnyItem.GetItem();
Here's an example flow:
Foo item created in master DB as version 1.
Publish Foo to web
Index will pick up version 1 in web DB and put in index.
Any querying code against index will convert the SkinnyItem to an Item via that GetItem() method and will pass 1 as the version #.
Page will load, no error in log
Back in master, create version 2 of Foo and publish.
The index may not get updated right away, or at all if it is configured incorrectly.
Code that queries the index will call GetItem() and still pass version 1, since that's what is in the index.
But when you published, web no longer had version 1; it now has version 2, and thus that specific version of item Foo is null.
Error shows in log
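In code, a defensive version of the fix above looks something like this (the null check is the illustrative part; GetItem() and ItemID come from the SkinnyItem class mentioned earlier):

Item item = someSkinnyItem.GetItem();
if (item == null)
{
    // The version recorded in the index no longer exists in web,
    // so fall back to the latest version of the item by ID.
    item = Sitecore.Context.Database.GetItem(someSkinnyItem.ItemID);
}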
On a similar note, here's a blog post by Alex Shyba (creator of the ADC) on how to sync HTML cache clearing with the index updates. That may help.
I've got a quick question regarding the use of repositories. The best way to ask is to show a bit of pseudocode and have you tell me what the result should be:
Get a record from the repository with ID of 1 (assume it exists)
Edit a couple of properties
Query the repository again for an item with ID of 1
Result = ??
Should I get the object with the updated values, or the object in its original state, bearing in mind that after editing the properties (step 2) I never told the repository to update this record?
I think I should get a copy of the original item and not a reference to the edited version.
Please tell me what is correct.
Cheers
The repository pattern is supposed to act like a collection of your objects, so ideally I think it should return the same object instance, which would have the updates in it.
Generally there is an identity map somewhere so your repositories can keep track of what has already been loaded. With an identity map, when you fetch an object with the same Id you should always get the already-loaded object back, no matter how many times you fetch it. This is how all of the more sophisticated ORMs work, and it is generally good practice. An identity map keeps things in sync within a transaction and saves you some data access.
NHibernate's session has an identity map it keeps track of so you don't have to worry about trying to implement your own in your repositories. Also I believe you can use NHibernate's stateless session if you want to load another instance without change tracking, but I'm not positive on that.
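For illustration, a bare-bones identity map inside a repository might look like this (all names are made up for the sketch, not taken from NHibernate or any other ORM):

using System.Collections.Generic;

public class CustomerRepository
{
    // Identity map: at most one in-memory instance per Id.
    private readonly Dictionary<int, Customer> _identityMap = new Dictionary<int, Customer>();

    public Customer GetById(int id)
    {
        if (_identityMap.TryGetValue(id, out var cached))
            return cached; // the same instance you may already have edited

        var customer = LoadFromDatabase(id); // hypothetical data-access call
        _identityMap[id] = customer;
        return customer;
    }

    private Customer LoadFromDatabase(int id)
    {
        // Stubbed out for the sketch.
        return new Customer { Id = id };
    }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}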
Judging from your past questions I'm assuming you are using LINQ/C#?
If you are using a DataContext and you haven't called SubmitChanges() then you should get back the original unchanged object.
Just tested it. I was wrong, you get back the changed object.
If you set ObjectTrackingEnabled = false on the DataContext you will get the unchanged object.
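Sketched out against a hypothetical Items table (MyDataContext and Items are placeholders), the two behaviours look like this:

using System.Linq;

// Same DataContext, tracking on (the default): the identity map hands back
// the same tracked instance, so the edit is visible.
using (var db = new MyDataContext())
{
    var a = db.Items.Single(i => i.Id == 1);
    a.Name = "edited";

    var b = db.Items.Single(i => i.Id == 1);
    // a and b are the same instance; b.Name == "edited" even before SubmitChanges()
}

// Tracking off: you get the values as they currently are in the database.
using (var db = new MyDataContext())
{
    db.ObjectTrackingEnabled = false; // must be set before the first query
    var c = db.Items.Single(i => i.Id == 1);
    // c.Name holds the original, unchanged value
}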