Magento API V2 Sales Orders List Not Working

I am using the API V2 "salesOrderList" call to retrieve a list of all the orders that have been placed in Magento, but the SOAP response gives me this error:
Item (Mage_Sales_Model_Order) with the same id "1" already exist
I am using the Magento Enterprise version 1.9.0.0.
After looking into the SQL and searching the database, I found that for each order the SQL returns four records with the same Order Entity ID, differing only in the name fields of the billing and shipping address areas. The query also does two left joins on the same table "sales_flat_order_address", using two different aliases (one for billing and another for shipping). From my understanding this should have worked, but it is not happening.
Can anyone please suggest what is happening and what can be done to recover from this error?
Any help is appreciated and thanks in advance.

Roughly speaking, Magento is creating an order collection for you and attempting to load all the records. The collection has a rule that allows only one order object per ID, so an exception is thrown when a second object with the same ID is loaded.
The left joins could be the issue, but it's hard to say off the bat. Could you post a little detail on how you are making the API call? An incorrect join can often cause this problem.
EDIT:
If you're using the default code, my first guess would be that there are erroneous records in the database, or that this is an upgraded Magento system that had a bad upgrade in the past. Try this on a clean copy of your EE version pointing at the same database. If the same problem occurs, you may need to spelunk through the database looking for the cause of the problematic data load. Since you already have the query, you may want to run parts of it separately to see whether some subquery returns too much data.
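For what it's worth, one quick way to hunt for the fan-out is to count the address rows per order directly, since each order should have at most one billing and one shipping row. A minimal sketch in Python, assuming the standard sales_flat_order_address columns (parent_id, address_type) and the MySQLdb driver; the connection details are placeholders:

import MySQLdb  # assumes the MySQL-python driver is installed

# Placeholder credentials; point these at the Magento database.
conn = MySQLdb.connect(host="localhost", user="magento",
                       passwd="secret", db="magento")
cur = conn.cursor()
# Any order listed here has duplicate billing or shipping rows and
# will fan out in the salesOrderList join.
cur.execute("""
    SELECT parent_id, address_type, COUNT(*)
    FROM sales_flat_order_address
    GROUP BY parent_id, address_type
    HAVING COUNT(*) > 1
""")
for parent_id, address_type, cnt in cur.fetchall():
    print("order %s has %s %s address rows" % (parent_id, cnt, address_type))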

Related

MDS business rule on more than one domain-based attribute

I'm stuck with this problem on Master Data Service (MDS).
I have an entity that has two domain-based attributes referencing two other entities.
I created the first business rule with the first domain-based attribute, and it works perfectly.
But when I try to create a second business rule with the second domain-based attribute, an error appears:
200095 : Cannot specify more than one entity in MetadataGet
400003 : The attribute reference is not valid. The attribute was not found.
400003 : The attribute reference is not valid. The attribute was not found.
Obviously the attribute is valid. In fact, if I delete the first business rule, the second one is published correctly.
I think MDS blocks a second business rule if you try to apply it to a second domain-based attribute.
This happened to us as well, and it seems that this error only occurs if a specific sequence of actions is taken:
1. We first restored the MDS 2012 database on SQL Server 2017.
2. We upgraded the database using the MDS management tool. Note that the multi-entity business rules worked fine at this point: they returned no errors on saving, and could be published and successfully evaluated.
3. We then realized that we were missing some code changes, so we decided to create a full model package using MDSModelDeploy.exe on our old MDS 2012.
4. We deployed that package using the MDSModelDeploy deployupdate command. After that, the existing multi-entity rules failed to publish, and it was also impossible to create new rules based on different entities within one entity. We found no fix for this, as there are simpler ways around it.
At this point we took a step back, restored and upgraded the old database once again, and it turned out that the rules worked, so it had to be the package that broke them. I do not know what your situation was; when we created a fresh model in SQL 2017, all of the multi-entity rules worked perfectly, so I am curious what steps reproduce the error in your case.
The only approach I can think of to fix the situation in point 4 would be to create an MDSModelDeploy update package from the corrupted model and another one from a new, healthy model, and then compare how the XML of the multi-entity business rules is structured. We did not try this, though, since we found the workaround described above.

How to change Sitecore Template field Item Id without data loss?

I recently noticed there is a difference in the Item Id of a Sitecore template field between two environments (Source and Target). Because of this, any data change to that field's value on data items using the template is not reflected in the target Sitecore database.
Hence, we manually copy the values from source to target, which takes a lot of time to keep the two environments in sync. Any idea how to change a template field's Item Id in Sitecore without data loss in the target instance?
Thanks
The template fields have most likely been created manually on the different servers, as @AdrianIorgu has suggested. I am going to suggest that you don't worry about merging fields and tools.
What you really care about is the content on the PRODUCTION instance of your site (assuming that this is Target). In any other environment, content should be regarded as throwaway.
With that in mind, create a package of the template from your PRODUCTION instance and install that in the other environments, deleting the duplicate field from the Source instance. The GUIDs of the field should then match across all environments. Check this into your source control (using TDS or Unicorn or whatever), and you can then correctly update any standard values, which will be reflected across the servers when you deploy again.
If your other environments (dev/qa/pre-prod) suffer data loss for that field, don't worry about it; restore a backup from PROD.
Most likely that happened because the field or the template was added manually on the second environment, without migrating the items using packages, serialization, or a third-party tool like TDS or Unicorn.
As @SitecoreClimber mentioned above, you can use Razl to sync the two environments and see the differences, but I don't think you will be able to change the field's GUID and make the two environments consistent without any data loss. Depending on the volume of your data, fixing this can be tricky.
What I would do:
make sure the target instance has the right template by installing a package with the correct template from Source (using a MERGE-MERGE operation), which will end up creating a duplicated field name
write a SQL query to get a list of all the items that have a value for the old field, and update them to use the new field
Warning: the SQL below is just a sample to get you started; extend and test it properly before running it on a CD instance
use YOUR_DATABASE
begin tran
declare @oldFieldId uniqueidentifier, @newFieldId uniqueidentifier
set @oldFieldId = '75577384-3C97-45DA-A847-81B00500E250' -- old field ID
set @newFieldId = 'A2F96461-DE33-4CC6-B758-D5183676509B' -- new field ID
/* versionedFields: inspect the affected rows first */
select ItemId, FieldId, Value
from [dbo].[VersionedFields] f with (nolock)
where f.FieldId = @oldFieldId
/* then repoint them at the new field */
update [dbo].[VersionedFields]
set FieldId = @newFieldId
where FieldId = @oldFieldId
/* repeat for SharedFields and UnversionedFields, then commit tran (or rollback tran) */
For this kind of stuff I suggest you use Sitecore Razl.
It's a tool for comparing and merging Sitecore databases.
Razl allows developers to have a complete side-by-side comparison between two Sitecore databases, highlighting features that are missing or not up to date. Razl also gives developers the ability to simply move an item from one database to another.
Whether it's finding that one missing template, moving your entire database, or just moving one item, Razl allows you to do it seamlessly and worry-free.
It's not a free tool; you can check how to buy it here:
https://www.razl.net/purchase.aspx

How to handle requests for non-existing dynamic segments in Ember?

If a user modifies the dynamic segment (object ID) in the URL of an Ember app with Ember Data, what's the best practice for handling these URLs, given that they might refer to non-existing model entries?
In a minimal example one can observe that for each call with a non-existent ID (for example http://emberjs.jsbin.com/hurozaju/9#/color/30) an empty object is added to the local Ember Data store. This is easily observable by the increasing number of "dots" in the output.
The error action of App.ColorRoute redirects (as intended) to "colors" in case a 404 occurs while fetching the model by ID.
Why is there a "new" Object in the store?
Shouldn't the data be left unmodified?
Is there a chance to prevent the creation of new objects in this case?
I spent some time with this problem, and I think this is an ember-data beta-7 bug. Please report this issue on GitHub.
Here is example code showing how to work around the issue: jsbin. It is tested with ember-data beta.7, where it works, and with beta.4, where it does not.
Sorry for not waiting as announced...
This issue is now reported to ember-data on GitHub.

RESTful API and Foreign key handling for POSTs and PUTs

I'm helping develop a new API for an existing database.
I'm using Python 2.7.3, Django 1.5 and the django-rest-framework 2.2.4 with PostgreSQL 9.1
I need/want good documentation for the API, but I'm shorthanded and I hate writing/maintaining documentation (one of my many flaws).
I need to allow consumers of the API to add new "POS" (points of sale) locations. In the Postgres database, there is a foreign key from pos to pos_location_type. So, here is a simplified table structure.
create table pos_location_type(
    id serial primary key,
    description text not null
);
create table pos(
    id serial primary key,
    pos_name text not null,
    pos_location_type_id int not null references pos_location_type(id)
);
So, to allow them to POST a new pos, they will need to give me a "pos_name" and a valid pos_location_type. I've been reading about this stuff all weekend, and there are lots of debates out there.
How are my API consumers going to know what a pos_location_type is, or what value to pass here?
It seems like I need to tell them where to get a valid list of pos_locations. Something like:
GET /pos_location/
As a quick note, examples of pos_location_type descriptions might be: ('school', 'park', 'office').
I really like the "Browseability" of the Django REST Framework, but it doesn't seem to address this type of thing. I actually had a very nice chat on IRC with Tom Christie earlier today, and he didn't really have an answer on what to do here (or maybe I never made my question clear).
I've looked at Swagger, and that's a very cool/interesting project, but take a look at their "pet" resource on their demo here. Notice it is pretty similar to what I need to do. To add a new pet, you need to pass a category, which they define as class Category(id: long, name: string). How is the consumer supposed to know what to pass here? What's a valid id or name?
In Django REST Framework, I can define/override what is returned in the OPTIONS call. I guess I could come up with my own little "system" here and return some information like:
pos-location-url: '/pos_location/'
in the generic form, it would be: {resource}-url: '/path/to/resource_list'
and that would sort of work for the documentation side, but I'm not sure that's really a nice solution programmatically. What if I change the resource's location? That would mean my consumers would need to programmatically make an OPTIONS call for the resource to figure out all of the relations. Maybe not a bad thing, but it feels a little weird.
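For illustration, that idea might look roughly like this in django-rest-framework (a sketch; the view name and the pos-location-url key are my own convention, not a DRF standard):

from rest_framework.views import APIView
from rest_framework.response import Response

class PosDetail(APIView):
    # Hypothetical endpoint for a single pos resource.

    def options(self, request, *args, **kwargs):
        # Answer OPTIONS with a hint telling the consumer where to
        # fetch the list of valid pos_location_type values.
        return Response({
            'name': 'Pos Detail',
            'pos-location-url': '/pos_location/',
        })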
So, how do people handle this kind of thing?
Final notes: I get that I don't really want a "leaking" abstraction here, with my database peeking through the API layer, but the fact remains that there is a foreign key constraint on this existing database, and any insert that doesn't have a valid pos_location_type_id raises an error.
Also, I'm not trying to open up the URI vs. ID debate. Whether the user has to use the pos_location_type_id int value or a URI doesn't matter for this discussion. In either case, they have no idea what to send me.
I've worked with this kind of stuff in the past. I think there are two ways of approaching this problem. The first you already said: allow an endpoint for users of the API to discover the id-like values of pos_location_type. Many APIs do this, because a person developing against your API is going to read your documentation and will know where to get the pos_location_type values from. End users should not have to worry about this, because they will have an interface that probably shows a dropdown list of text values.
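A minimal sketch of such a discovery endpoint with django-rest-framework 2.x (the app and model names are placeholders):

from rest_framework import generics, serializers
from myapp.models import PosLocationType  # hypothetical model over pos_location_type

class PosLocationTypeSerializer(serializers.ModelSerializer):
    class Meta:
        model = PosLocationType
        fields = ('id', 'description')

# GET /pos_location/ would return e.g. [{"id": 1, "description": "school"}, ...]
class PosLocationTypeList(generics.ListAPIView):
    queryset = PosLocationType.objects.all()
    serializer_class = PosLocationTypeSerializer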
On the other hand, there is another way I've handled this, which is not very RESTful-like. Let's suppose you have a location in New York; the POST could be something like:
POST /pos/new_york/
You can handle /pos/(location_name)/ by normalizing the text and then searching the database for that value or something similar; if the place does not exist, you just create a new one. That works if users can add new places; if not, then the user would have to know what fixed places exist, which again is the first situation we were in.
That way you can avoid pos_location_type in the request data; you could programmatically map it to a valid ID, as in the sketch below.
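A rough sketch of that mapping in Django (the model names are placeholders, and the normalization is deliberately naive):

from django.utils.text import slugify
from myapp.models import Pos, PosLocationType  # hypothetical models

def create_pos(pos_name, location_name):
    # Normalize "New York" -> "new-york" so lookups are stable.
    normalized = slugify(location_name)
    # Reuse an existing location type, or create it if users may add places.
    location_type, _ = PosLocationType.objects.get_or_create(
        description=normalized)
    return Pos.objects.create(pos_name=pos_name,
                              pos_location_type=location_type)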

Diagnosing a Sporadic Django/ Postgres "DatabaseError: current transaction is aborted"

I have a "DatabaseError: current transaction is aborted" that comes and goes (to be specific, 11 times out of 841) in a Django 1.3 project using Postgres. The project is a quiz site and the error occurs when a user submits the answer form in the view. From the database's perspective, the process involves a number of queries and looks like this:
Gather all of the correct answers for the question (they are multiple choice and may need more than one answer)*
Grab the user's profile
Save this answer
Query for the user's new point total
Save the total to their profile
Check to see if they qualify for a new reward
Award new reward if they do
Somewhere in that tortured process, this error crops up (I'm guessing because one query isn't waiting for the others). Is there a way for me, in production (i.e., DEBUG = False), to log the database errors just in this case? I'm on WebFaction and the Postgres error logs are not available to me. Could I steal something from this middleware example to fire in just this specific case?
Alternatively, is there a better way to find this error, or should I be wrapping the individual queries in transactions? (Unfortunately they aren't all in the same place in the code, and I'm not sure whether wrapping the view in a transaction decorator would help.)
*Just to confuse matters, the multiple-right-answers requirement was added in the middle of development and then dropped right before we went live, so I could simplify this process somewhat (basically skipping steps 1 and 4), but I'd like to know a general answer to this sort of mysterious issue.
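For reference, a minimal version of the exception-logging middleware idea might look like this (a sketch in the Django 1.3 middleware style; the logger name is arbitrary):

import logging

from django.db import DatabaseError

logger = logging.getLogger('quiz.db_errors')

class DatabaseErrorLoggingMiddleware(object):
    # Log database errors with the request path, even when DEBUG = False.

    def process_exception(self, request, exception):
        if isinstance(exception, DatabaseError):
            logger.error('DatabaseError on %s: %s', request.path, exception)
        return None  # let Django's normal error handling continue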
You haven't said where in your seven steps transactions begin and end. That would be helpful to know.
One source of "transaction aborted" messages is due to deadlocks. More details would be in the PostgreSQL logs.
But the bottom line is that you will continue to have a painful and time-consuming experience debugging PostgreSQL if you can't get access to your PostgreSQL error messages. Take that up with WebFaction; if they can't help and your time is worth much, your bottom-line costs will be lower if you move to an environment that provides this fundamental feature.
You have to enable autocommit. In your DATABASES entry, include:
'OPTIONS': {'autocommit': True,},
By default, Django opens a transaction at the first query. With this option, you have to start transactions manually (e.g. using @commit_on_success). Since there is no transaction left open anymore, you'll get the actual error that was previously masked by the transaction error.
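Put together, the relevant settings and view might look roughly like this (a sketch for Django 1.3-1.5; the database name and view are placeholders):

# settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'quizdb',  # placeholder
        'OPTIONS': {'autocommit': True},
    },
}

# views.py
from django.db import transaction

@transaction.commit_on_success  # one transaction around the whole submission
def submit_answer(request):
    # ... gather answers, save, update totals, award rewards ...
    pass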
The autocommit setting will be the new default for Django 1.6, see https://docs.djangoproject.com/en/dev/ref/databases/#postgresql-notes