I am retrieving order information from the Magento SOAP API. All of the information comes back correctly except the created_at and updated_at date fields.
I get a created_at value of 2014-02-18 11:36:06, but it should be 2014-02-18 17:06:06.
I spent several hours looking for an answer: the Magento API returns all date values in GMT. I have set the timezone to India Standard Time (Asia/Calcutta, GMT+5:30) in the admin panel; the website changed accordingly, but the web services still return GMT times.
Does anyone have an idea how to get dates in a given timezone from the SOAP web service?
$proxy = new SoapClient( SOAP_CLIENT_URL );
$sessionId = $proxy->login( SOAP_CLIENT_USERNAME, SOAP_CLIENT_PASSWORD );
$result = $proxy->call($sessionId, 'order.list', array(
    array('customer_id' => 2)
));
Can a parameter be set in the order web service call, or does Magento have another way to solve this?
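Since the SOAP API returns GMT regardless of the admin timezone setting, the usual workaround is to convert the timestamps client-side. A minimal sketch of that conversion in Python's standard library (the same logic applies in PHP with DateTime and DateTimeZone); the input and output match the values from the question:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def gmt_to_local(date_str, tz_name="Asia/Calcutta"):
    """Convert a 'YYYY-MM-DD HH:MM:SS' GMT string from the API into local time."""
    dt = datetime.strptime(date_str, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
    return dt.astimezone(ZoneInfo(tz_name)).strftime("%Y-%m-%d %H:%M:%S")

print(gmt_to_local("2014-02-18 11:36:06"))  # 2014-02-18 17:06:06 (GMT+5:30)
```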
I'm trying to query some records, such as vendor and customer, using SuiteQL with the REST API in Postman.
The issue is that it returns the same error every time:
"Invalid search query. Detailed unprocessed description follows. Search error occurred: Record 'customer' was not found."
I tried different spellings, such as Customer, CUSTOMER, customers, Customers, and CUSTOMERS, but nothing changed.
I also added customer access to the role.
Is there something that needs to be activated when using SuiteQL with the REST API?
First try SELECT * FROM customer and check whether any results come back; then append your conditions, for example date created greater than the start of this year:
SELECT id, companyname, email, datecreated
FROM customer
WHERE datecreated >= BUILTIN.RELATIVE_RANGES('TFY', 'START')
AND datecreated <= BUILTIN.RELATIVE_RANGES('TFY', 'END');
NetSuite doesn't document it, but for a record to be searchable the user's role needs the following permissions:
Transactions:
Find Transaction
All the records needed
Lists:
Perform Search
All the records needed
Setup:
REST Web Services
Log in using Access Tokens or OAuth 2.0
Reports:
SuiteAnalytics Workbook
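For reference, a SuiteQL query is sent as a POST to the suiteql REST endpoint with the query string in a JSON body and a Prefer: transient header. A hedged sketch of assembling such a request in Python (the account ID is a placeholder and the OAuth signing step is omitted; build_suiteql_request is an illustrative helper, not part of any SDK):

```python
import json

def build_suiteql_request(account_id, query):
    """Assemble the URL, headers, and JSON body for a SuiteQL POST (auth omitted)."""
    url = ("https://%s.suitetalk.api.netsuite.com"
           "/services/rest/query/v1/suiteql" % account_id)
    headers = {
        "Prefer": "transient",            # required by the SuiteQL endpoint
        "Content-Type": "application/json",
    }
    body = json.dumps({"q": query})
    return url, headers, body

url, headers, body = build_suiteql_request("123456", "SELECT id FROM customer")
```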
I am trying to import the product titles and review ratings from an Amazon listing into a Google Spreadsheet. I tried the IMPORTXML function with an XPath query, but that did not work, so I wrote the code below, which does. I have been able to get the listing data, but sometimes it gives me an error instead of displaying the data.
Error:
Request failed for https://www.amazon.co.uk returned code 503. Truncated server response: For information about migrating to ... (use muteHttpExceptions option to examine full response). (line 2).
When I re-run the code, or when I add/remove https:// from the URL, it works again, but when I refresh the sheet it sometimes fails and displays the error again.
Question:
Is there any way to get rid of the error?
Also, the star rating is stored in a span with a data-hook attribute, and I am unable to retrieve it. Is there any way to retrieve the star rating as well?
This is the function that I have created to get the product title and other data:
function productTitle(url) {
  // Fetch the page HTML and pull the text inside the productTitle span
  var content = UrlFetchApp.fetch(url).getContentText();
  var match = content.match(/<span id="productTitle".*>([^<]*)<\/span>/);
  return match && match[1] ? match[1] : 'Title not found';
}
You are receiving HTTP status code 503, which means the service you are trying to reach is either under maintenance or overloaded.
This is on Amazon's side. You should use the Amazon API instead of the public endpoints for processing this data.
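If you keep fetching the public pages anyway, a retry with exponential backoff at least smooths over transient 503s. A generic sketch of the pattern in Python (fetch is a hypothetical injected callable returning a status/body pair; in Apps Script the same idea wraps UrlFetchApp.fetch with muteHttpExceptions: true and Utilities.sleep):

```python
import time

def fetch_with_backoff(fetch, url, retries=4, base_delay=1.0):
    """Call fetch(url); on a 503 response, wait and retry with doubling delays."""
    for attempt in range(retries):
        status, body = fetch(url)
        if status != 503:
            return status, body
        time.sleep(base_delay * (2 ** attempt))   # 1s, 2s, 4s, ...
    return status, body  # give up after the last attempt
```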
I am able to create a service request in CA Service Desk using the web services. Now I have been given the task of updating request fields, specifically the status field, which is a dropdown.
I would like to know which web service method I can use to achieve this. I have been searching for the required method for quite some time; I tried the updateObject method, but it throws an exception.
Rubberduck Moment
I resolved this using the updateObject() method of the web service, passing a string array of alternating attribute names and values like the one below:
{ "description", "Some Test description", "category", "pcat:448727",
  "summary", "", "customer", "", "type", "R", "priority", "0",
  "znetid", affUserId, "status", "CNCL" };
I would like to request tweets on a specific topic (for example, "cancer") using Python and Tweepy, but the time range can usually only be specified to a specific day, for example:
startSince = '2014-10-01'
endUntil = '2014-10-02'
for tweet in tweepy.Cursor(api.search, q="cancer",
                           since=startSince, until=endUntil).items(999999999):
Is there a way to specify the time, so that I can collect "cancer" tweets between 2014-10-01 00:00:00 and 2014-10-02 12:00:00? This is for my academic research: I was able to collect cancer tweets for the last month, but the sudden burst in the volume of "breast cancer" tweets during cancer awareness month breaks my script, so I have to collect them in separate time segments. I will not be able to retrieve the tweets for Oct 01, 2014 if I can't figure this out soon.
There is no way that I've found to specify a time using since/until.
You can work around this using since_id and max_id.
If you can find tweets made at around the times you want, you can restrict your search to those made after since_id and before max_id.
import tweepy

consumer_key = 'aaa'
consumer_secret = 'bbb'
access_token = 'ccc'
access_token_secret = 'ddd'

# OAuth process, using the keys and tokens
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
api = tweepy.API(auth)

# Restrict the search to tweets posted between the two bounding IDs
results = api.search(q="cancer", since_id=518857118838181000, max_id=518857136202194000)
for result in results:
    print(result.text)
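To get since_id/max_id values for arbitrary times without hunting for real tweets posted at those moments, you can exploit the fact that tweet IDs are snowflakes: the upper bits encode milliseconds since the Twitter epoch (1288834974657). A sketch of the conversion (valid for tweets created after November 2010; the shift and epoch constants are the publicly documented snowflake layout):

```python
from datetime import datetime, timezone

TWITTER_EPOCH_MS = 1288834974657  # snowflake epoch: 2010-11-04T01:42:54.657Z

def time_to_snowflake(dt):
    """Build a synthetic tweet ID whose timestamp bits encode dt."""
    ms = int(dt.timestamp() * 1000)
    return (ms - TWITTER_EPOCH_MS) << 22

def snowflake_to_time(tweet_id):
    """Recover the UTC creation time encoded in a tweet ID."""
    ms = (tweet_id >> 22) + TWITTER_EPOCH_MS
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

since_id = time_to_snowflake(datetime(2014, 10, 1, 0, 0, tzinfo=timezone.utc))
max_id = time_to_snowflake(datetime(2014, 10, 2, 12, 0, tzinfo=timezone.utc))
```

These synthetic IDs can then be passed straight to api.search as in the snippet above.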
I am creating a simple CMS system whereby users can contribute inspirational posts made by brand pages on Facebook. All the user has to do is paste in a permalink to a post and the CMS will handle pinging the Graph API in order to acquire all the info it needs. (e.g. https://www.facebook.com/photo.php?fbid=10152047553868306&set=a.99394368305.88399.40796308305&type=1&relevant_count=1)
One of the requirements of the CMS is that like, comment, and share counts are included for each post. This is where I struggle.
Simply pinging the Graph endpoint with a photo ID only returns a paginated list of likes/comments; there is no parameter for total likes or total comments. Fortunately, the photo table includes like_info and comment_info columns that can be queried. This works great for getting totals on photos:
SELECT like_info, comment_info FROM photo WHERE object_id = 10152047553868306
One would expect that the same FQL SELECT could be applied to the status or video tables to get the like and comment info for status updates and video posts, but it cannot: like_info and comment_info exist only on the photo table.
At least right now I can get like/comment totals for photos, but I still see no way to get share totals for a photo.
Is there a way I can reliably acquire like, comment, and share counts for video posts, photo posts, and status posts, using any combination of the Graph and FQL APIs?
Any help would be extremely appreciated.
I have found something like this for the Facebook Graph API v2.8:
$accessToken = "XXXXXXXXXX";

$fb = new Facebook\Facebook(array(
    'app_id'                => 'Facebook App ID',
    'app_secret'            => 'Facebook Secret Key',
    'default_graph_version' => 'v2.8',
));

$params = array();
$request = $fb->request('GET', '/{post_id}/?fields=likes.limit(0).summary(true),comments.limit(0).summary(true)', $params, $accessToken);
$response = $fb->getClient()->sendRequest($request);
$graphNode = $response->getGraphNode();

$likeArr = $graphNode['likes']->getMetaData();
$commentArr = $graphNode['comments']->getMetaData();

echo "Total Post Likes: " . $likeArr['summary']['total_count'];
echo "Total Post Comments: " . $commentArr['summary']['total_count'];
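The same summary trick works from any HTTP client; all that matters is the fields parameter with limit(0).summary(true) on each edge. A small sketch that only builds the request URL in Python (post_counts_url is an illustrative helper; the post ID and token are placeholders):

```python
from urllib.parse import urlencode

def post_counts_url(post_id, access_token, version="v2.8"):
    """Build the Graph API URL that returns only like/comment summary counts."""
    fields = "likes.limit(0).summary(true),comments.limit(0).summary(true)"
    qs = urlencode({"fields": fields, "access_token": access_token})
    return "https://graph.facebook.com/%s/%s?%s" % (version, post_id, qs)

url = post_counts_url("1234567890", "XXXXXXXXXX")
```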
Alternatively, you can query the FQL like table directly:
SELECT user_id FROM like WHERE object_id = 10151751324059927 LIMIT 1000