Authorize.Net Accept Hosted Form constantly failing to display correctly

Background:
I'm currently developing a .NET MVC application and attempting to integrate the Authorize.Net payment gateway with it. I've downloaded the Authorize.Net SDK through NuGet to use within my app.
Primary order information is collected within my app and then the user is redirected to the Authorize.Net Accept Hosted form. (My client doesn't want any credit card entry on/from their site, so that eliminates the iframe/JavaScript solutions from AuthNet.)
Issue:
I've worked through all of the initial issues with the AuthNet integration, and I'm now dealing with the absolute frustration of figuring out why the hosted form shows correctly on occasion (VERY rarely), shows only an "Order Summary" header on other occasions, shows "Missing or invalid token" on yet other occasions, and most of the time just plain doesn't appear at all.
I can post the code, but the code isn't the problem. I'm using the same records to test with - to ensure consistency on my end - and they have all, at one point or another, been successfully sent to the AuthNet form and processed correctly... but that is few and far between at this point.
I'm using the Authorize.Net-provided SDK and I've already figured out its idiosyncrasies, which is why the form appears sometimes. It can't be what I'm sending up; if it were, the form would never render rather than rendering only some of the time.
Things I've Tried to Resolve the Problem
Set the Authorize.Net account to "Live" Mode
Downgraded to SDK version 1.9.6 instead of 2.0.1 because, apparently, the older version was better than the new one.
Ensured that all of my HostedPayment* options are correctly set and the values are appropriately formatted.
Checked the Authorize.Net developer forums
Checked on StackOverflow
Searched the web
Pulled all my hair out
Question:
Has anyone else run into problems getting the Accept Hosted Form to display consistently? If so, then how did you resolve the issue to make the form render correctly on a consistent basis? I mean REALLY consistently, not once every 30-40 tries.
Seriously! It's incredibly frustrating!

OK - I walked through my code and realized it wasn't the code; it was the refId that I was passing to the AuthNet form. AuthNet allows you to pass up a refId as a 20-character string. My app uses GUIDs for this, but a GUID is too long to fit in AuthNet's limited space, so I compressed it using Ascii85. This "compression" turns the alphanumeric value into a mix of letters, numbers, and special characters.
I had a feeling this might be the case, but the form was presented correctly, at times, even when the submitted refId contained special characters. I decided to shift to a different tracking strategy, and the form is now consistently appearing. It was the refId after all.
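For anyone hitting the same wall, here is a minimal sketch of the kind of "different tracking strategy" described above: keep the GUID as the real key on your own side and send AuthNet only a short, purely alphanumeric token. It is shown in Python purely as an illustration (the helper name and the in-memory map are hypothetical; in a real app the mapping would live in your database), but the same idea translates directly to C#.

import secrets
import string
import uuid

ALPHABET = string.ascii_letters + string.digits  # alphanumeric only, none of Ascii85's punctuation

def make_ref_id(length: int = 20) -> str:
    """Generate a short, AuthNet-safe reference token (hypothetical helper)."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Keep the real GUID on your side and map it to the short token you send up as refId.
order_guid = uuid.uuid4()
ref_id = make_ref_id()
ref_map = {ref_id: order_guid}  # in practice this mapping lives in your database

print(ref_id, "->", ref_map[ref_id])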

Related

Several Problems with Google SDK Speech-to-text (Transcription)

I've encountered several fairly serious problems in my efforts to develop a speech-to-text application. Some of them (I hope) may just be my lack of experience/common-sense/in-depth-reading/etc. Here's the list:
Long (>60 sec) transcriptions -- these force me to use a GS bucket, i.e. I first have to upload the sound file to the bucket. Trouble is:
a. I have to run "gcloud auth login" on each machine I need to run on. I have well over 50 machines. This appears to be a purely manual operation: you have to copy a long URL into your browser, hit enter, click on the right account, accept the permissions, then hand-copy and paste the key presented back into the gcloud prompt, and hit enter there. While it does appear to be persistent to some degree, it is subject to one interesting constraint: only 51 machines (maybe 50, I got tired trying to count) are allowed, and the earliest logged-in machine is logged out to make room for the new login. This was very odious. All this hassle is purely for using the buckets; a shorter transcription will not use GS and completes without complaint. Really! Is there no better way? Do we have to use gcloud auth login, manually, with a cap on the number of servers we can use with Google Storage? (See the sketch after this list for a non-interactive alternative.)
Another Google Storage issue: transcription requires the bucket to be "public". We are pretty worried about security and the privacy of our customers, whose recordings will be in the uploaded bucket, even if only briefly.
The transcription app offers transcriptions in multiple languages, but the "phone_call" model is fixed to en_US and seems to ignore the language setting. If I change the request to es_US and supply a Spanish recording, it behaves the same. (Everything works OK in the "command_and_search" model, though.) This seems to be in evolution; any idea when/if they will carry the multi-language features over to the phone_call model?
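Regarding the gcloud auth login pain in point 1a above: a minimal sketch of the non-interactive route, assuming you can create a service account with access to the bucket and distribute its JSON key to the machines (the bucket, key file, and object names below are placeholders). It only covers the bucket upload, which is the part the auth hassle is actually needed for.

from google.cloud import storage  # pip install google-cloud-storage

# Authenticate with a service-account key instead of an interactive gcloud login.
client = storage.Client.from_service_account_json("speech-uploader-key.json")

bucket = client.bucket("my-transcription-bucket")  # placeholder bucket name
blob = bucket.blob("recordings/call-0001.wav")     # object path inside the bucket
blob.upload_from_filename("call-0001.wav")         # local sound file to upload

print("Uploaded to", f"gs://{bucket.name}/{blob.name}")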
If anyone can help, Oh Wise Ones, please impart of thy wisdom!
murf

Pentaho / Salesforce: How to integrate SF-Enterprise-Web-Services-API V48.0 into PDI 9.0 that only supports v47.0

Actually, I am working with PDI 8.2; however, I am able to upgrade to 9.0.
The main issue is that a customer wants to pull data from Salesforce, which works well so far. But he is using the Enterprise Web Services API at version 48.0, while the latest Pentaho supports only v47.0.
I strongly assume that reading via v48.0 won't work with PDI, so I have to build a workaround. Could anyone point me to a feasible solution? To be honest, I don't even know whether the Enterprise or the Partner API is relevant for Pentaho. I have my own SF account, so I can experiment with the APIs.
Is the "Web Services lookup" the right step for the workaround?
Any answer would be appreciated! Thanks in advance!
Oh man, what a crazy question, all over the place.
I strongly assume that reading via v48.0 won't work
You'd have to try it, but it should work. Salesforce has three releases a year, and that's when they upgrade API versions. We're in Spring '20 now, which is v48. That doesn't mean anything below is deprecated; you should have no problems calling with any API version >= 20. From what I remember, their master service agreement states that a released API version will stay up for at least three years. Well, v20 is nine years old and still going strong...
Check, for example, https://baa.my.salesforce.com/services/data/ (if your client has "My Domain" enabled you can use their domain instead of some unknown company's); you should see a list similar to this one: https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_versions.htm (no login required - that would be a chicken-and-egg situation, since you need to choose the API version you want when making the login call).
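For reference, a minimal sketch of hitting that same /services/data/ endpoint programmatically to list the supported API versions (the instance URL is a placeholder; no login is required for this particular call):

import requests

instance = "https://yourInstance.my.salesforce.com"  # placeholder "My Domain" URL
versions = requests.get(f"{instance}/services/data/",
                        headers={"Accept": "application/json"}).json()

# Each entry has a label, a version number, and the URL prefix to use for that version.
for v in versions:
    print(v["version"], v["label"], v["url"])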
So... what does your integration do? I assume it reads from or writes to SF tables (objects) - pretty basic stuff. In that sense, 47 vs 48 won't matter much. You should still see Accounts, Contacts, custom objects... You won't see tables introduced specifically in v48, but unless you must see something mentioned in the Spring '20 release notes, I wouldn't worry too much.
If your client wrote a specific class (service) to present you with data and it's written in v48, it might not be visible when you log in as v47. But then they can just downgrade the version and all should be well. Such custom services are rarely usable by generic ETL tools anyway, so it would be a concern only if you do custom coding.
whether the Enterprise or the Partner API is relevant for Pentaho
Sounds like your ETL tool uses the SOAP API. Salesforce offers two versions of the WSDL file with service definitions.
"Partner" is generic: all SF orgs in the world produce the same WSDL file. It doesn't contain any concrete info about tables, columns, or custom services written on top of vanilla Salesforce, but it does contain info on how to call login() or run a "describe" that gives you all the tables your user can see, their names, their columns, data types... So you learn things at runtime. "Partner" is great when you're building a generic reusable app that can connect to any SF org, or when you want to be dynamic (say, a backup tool that learns the columns every day and can handle changes without problems, or a "connection wizard" where you specify which tables, which columns, what mapping... a new field comes in - just rerun the wizard).
"Enterprise" will be specific to this particular customer. It contains everything "Partner" has, but it will also have a description of the current state of the database tables, etc. So you don't have to call "describe"; you already have everything on the plate. You can use it to "consume" the WSDL file, generate your Java/PHP/C# classes out of it, and interact with them in your program like any other normal object instead of crafting XML messages.
The downside is that if a new field or a new table is added, you won't know unless your program calls "describe". You'd need to generate a fresh WSDL, consume it again, and recompile your program...
Picking the right one really depends on what you need to do. The ETL tools I've met are generally fine with "Partner".
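To make the "learn stuff at runtime" point concrete, here is a minimal sketch of the same describe idea, using the REST API through the simple_salesforce Python library rather than the Partner SOAP describeGlobal() call discussed above (credentials are placeholders):

from simple_salesforce import Salesforce  # pip install simple-salesforce

# Placeholder credentials; a real integration would read these from configuration.
sf = Salesforce(username="user@example.com",
                password="secret",
                security_token="token")

# Global describe: every object (table) this user can see.
for obj in sf.describe()["sobjects"]:
    print(obj["name"], "-", obj["label"])

# Per-object describe: the columns (fields) and their data types.
for field in sf.Account.describe()["fields"]:
    print(field["name"], field["type"])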
Is the "Web Services lookup" the right step
No idea - I've used Informatica, Azure Data Factory, Jitterbit, Talend... but I have no idea about this Pentaho thing. Just try it. If you pull data straight from SF tables without invoking any custom code (you can think of SF custom services like pulling data from stored procedures), the API version shouldn't matter that much. If you go below 41.0, I believe you won't see the Individual object, for example, but I doubt you need to be on such a cutting edge.

How many seconds may an API call take before I need to worry?

I have to ask a more or less non-typical SO question and hope you don't mind. Right now I am developing my very first web application. I set up an AJAX function that requests some data from a third-party API and populates my HTML containers with the returned data.
Right now I query one single object and populate 3 HTML containers, with around 15 lines of JavaScript code. When I trigger the process/function by clicking a button on my frontend, it takes around 6-7 seconds until the HTML content is updated.
Is this a reasonable time? The user experience honestly will be more than bad, considering that I will have to query and manipulate far more data (I'm building a single-page dashboard around soccer data).
There might be very controversial answers to that question, but what would be a fair enough time for the process to run on standard infrastructure? 1-2 seconds? (I will deploy the app on Heroku or DigitalOcean and will implement a proper caching environment to handle "regular visitors".)
Right now
I use a virtualenv and the Django development server,
and a demo server from the third-party API, which might be slowed down for whatever reason (how can I check this?),
which might affect the time currently needed (there will obviously be many more variables).
Looking forward to your answers.
I personally think (and probably a lot of people would agree) that 6-7 seconds is a significant delay for rendering a small page. The cause of this issue might not come from Django directly. Check the following:
I use a virtualenv and django server for the development
You may be running the Django dev server; a production server might make things a bit faster (use django-debug-toolbar to find what is causing the delay).
Add database indexes in your models.
a demo server from the third party API which might be slowed down for whatever reason
Use the Chrome developer tools "Network" tab to watch how long that third-party call takes. It might not be visible there if you call the API in your views.py; in that case, add some timing code there to calculate how long the call takes to return.
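On that last point, a minimal sketch of the kind of timing code meant here, assuming a Django view that calls the third-party API with requests (the URL and view name are placeholders):

import logging
import time

import requests
from django.http import JsonResponse

logger = logging.getLogger(__name__)

def match_data(request):
    # Hypothetical view that proxies the third-party API and logs how long the call takes.
    start = time.perf_counter()
    resp = requests.get("https://api.example.com/v1/matches", timeout=10)  # placeholder URL
    elapsed = time.perf_counter() - start

    logger.info("Third-party API call took %.2f seconds", elapsed)
    return JsonResponse(resp.json(), safe=False)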

How to create an API and then dynamically retrieve data from and add new data to it?

To start off, I am extremely sorry if my question is not clear, but I have very little knowledge about web services in general, and the vast amount of varying information available has driven me crazy over the past few weeks. So please do bear with me.
Summary: I want to create a live score update app for Android. (I haven't added android as a tag because I do know how to retrieve data from, say, Twitter's JSON API.) However, like the Twitter JSON API, I want to be able to add (POST maybe?) data to the Apache 7.0 service that I have running. I then want the app to be able to retrieve this data that I have posted.
I had asked a more generic question earlier and was told that I should look up some APIs. I did that, but I have still been unable to make a breakthrough.
So my questions are:
Is setting up an API on my local web service the correct way to do this?
If so, how can I set up an API that will return JSON objects to the Android app? Also, I would need to be able to constantly update this API with new data.
Additionally, would I also need to set up a database for all this?
Any links to well-explained material would be appreciated too.
Note: I would like to carry this out using a RESTful Web Service through Jersey and use JSON Objects during retrieval.
Again, I am sorry about my terrible knowledge of web services in general, despite trying my best to research a lot. The best I could do was get my RESTful web service to respond to a GET with some pre-defined text that I had set in Eclipse.
Thanks.
If I understand you correctly, what you're trying to do is something like this:
There will be a match, or multiple matches, of some sort. Whenever a team/player scores, someone (i.e. you) will use the app to update the score. People who previously subscribed to the match will be notified and see the updated score.
Even though I'm not familiar with backends based on Java, the implementation should be fairly similar to other programming languages.
First of all, a few words on REST in general. REST is generally needed when you have to share information between multiple devices and/or users, which seems to be the case here. To implement REST you are going to need an API of some sort. On the web, APIs are implemented by web servers answering certain predefined HTTP requests.
Thus setting up an API on a web server is the correct way.
Next, a few words on databases. A database is generally needed if you want to store information persistently. This might or might not be what you are planning to do. If there are just going to be a few matches at the same time and you don't care about persistence of the data, you can use Java to store a collection of match objects in memory. I'm just saying it is possible, not that it is a good idea: once your server crashes, or you run out of memory for whatever reason, the data is going to be lost. (Of course, within the actual implementation you will want to cache data for current matches in some way, and keeping objects in memory is one way to do so.)
I'd recommend to use a database.
Within the database, you can then store and access information about the matches like the score, which users subscribed, who played, etc.
JSON is just a way to represent the data/objects that will be shared between the server and the client. You can use JSON to encode request and response data/bodies.
The user has to be informed about the updated score. There are two basic ways to do so: push or pull. With pull, the client will check for updated scores at fixed intervals or after certain actions. With push, the server will notify the client about changed scores, which will cause it to update the information. Since you are planning a live application and are using Java anyway, push seems to be the better way to go.
Last but not least, let's have a look at a possible implementation using:
Webserver (API endpoints + database)
Administrator (keeps score updated)
User (receives updates)
We assume that the server will respond to HTTP requests (POST # /api/my-endpoint) with JSON objects.
Possible flow
1)
First the administrator creates a match
REQUEST
POST # /api/matches
body: team1=someteam&team2=someotherteam
The server now will create a match object and store it in the database. The response will contain information about the object and whether the action was successful.
2)
The user asks for a list of matches
REQUEST
GET # /api/matches/current
The response will be a JSON object containing a list of current matches.
RESPONSE
{
  "matches": [
    {"id": 1, "teams": ...},
    ...
  ]
}
3)
(If push)
A user subscribes to a match
REQUEST
GET # /api/SOME_MATCH_ID/observe
The user will now be added as an observer for the match. Again, the response contains information about whether the action was successful or not.
4)
The administrator updates a score
REQUEST
PUT # /api/SOME_MATCH_ID
body: team1scored...
The score now gets updated on the server (in memory/database) and the user will be notified about the updated score.
5)
The user gets the updated score
REQUEST
GET # /api/SOME_MATCH_ID
RESPONSE
... (Updated score in some way)
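A minimal sketch of endpoints 1), 2) and 5) above, shown in Python/Flask purely as a compact illustration (the asker plans to use Jersey/Java, and a real implementation would use the database recommended earlier instead of an in-memory dict):

from flask import Flask, jsonify, request

app = Flask(__name__)
matches = {}  # in-memory stand-in for the database recommended above

@app.route("/api/matches", methods=["POST"])
def create_match():
    # 1) The administrator creates a match: body team1=...&team2=...
    match_id = len(matches) + 1
    matches[match_id] = {"id": match_id,
                         "teams": [request.form["team1"], request.form["team2"]],
                         "score": [0, 0]}
    return jsonify(matches[match_id]), 201

@app.route("/api/matches/current", methods=["GET"])
def list_matches():
    # 2) The user asks for the list of current matches.
    return jsonify({"matches": list(matches.values())})

@app.route("/api/matches/<int:match_id>", methods=["GET"])
def get_match(match_id):
    # 5) The user fetches the (possibly updated) score for one match.
    return jsonify(matches[match_id])

if __name__ == "__main__":
    app.run(debug=True)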

Accessing World of Warcraft data from the web

I'm aware of the WoW add-on programming community, but what I can find no documentation on is any API for accessing WoW's databases from the web. I see third-party sites like WoWHeroes.com and Wowhead using game data (item and character databases), so I know it's possible. But I can't figure out where to begin. Is there a web service I can use, or are they doing some sort of under-the-hood work that requires running the WoW client in their server environment?
Sites like Wowhead and WoWHeroes use client-run add-ons from players which collect data. The data is then posted to their website. There is no way to access WoW's database. Your best bet is to hit the armory and extract the XML returned from your searches; the armory is just an XML transform applied to the XML data it returns.
Blizzard has recently (8/15/2011) published draft documentation for their RESTful APIs at the following location:
http://blizzard.github.com/api-wow-docs/
The APIs cover information about characters, items, auctions, guilds, PVP, etc.
Requests to the API are currently throttled to 3,000 per day for anonymous usage, but there is a process for registering applications that have a legitimate need for more access.
Update (January 2019): The new Blizzard Battle.net Developer Portal is here:
https://develop.battle.net/
Request throttling limits and authentication rules have changed.
Characters can be mined from the armory; the pages are XML.
Items are mined from the local game installation files; that's how Wowhead does it, at least.
It's actually really easy to get item data from the wow armory!
For example:
http://www.wowarmory.com/item-info.xml?i=33135
View the source of the page (not via Google Chrome, which displays transformed XML via XSLT) and you'll see the XML data!
You can use the search listing pages to retrieve all blue gems, for example, then use an XML parser to extract the data.
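A minimal sketch of the fetch-and-parse approach this answer describes, assuming the old wowarmory.com item-info endpoint quoted above still responded (it has long since been retired, so treat this purely as an illustration of the technique):

import requests
import xml.etree.ElementTree as ET

# Old armory endpoint from the answer above; long retired, shown only to illustrate the idea.
url = "http://www.wowarmory.com/item-info.xml?i=33135"

resp = requests.get(url, timeout=10)
root = ET.fromstring(resp.content)  # parse the raw XML (a browser would apply the XSLT instead)

# Walk the XML tree and print every element's tag and attributes.
for element in root.iter():
    print(element.tag, element.attrib)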
They are parsing the Armory information from www.wowarmory.com. There is no official Blizzard API for accessing it, but there is an open source PHP solution available (http://phparmory.sourceforge.net/)
Maybe a little late to the party, but for future reference check out the WoW API Documentation at http://blizzard.github.com/api-wow-docs/
Scraping HTML and XML is now pretty much obsolete and also discouraged by Blizzard.
The documentation:
http://blizzard.github.com/api-wow-docs/
enjoy
Sites like those actually get the data from the Armory. If you pull up any item, guild, character, etc. and do 'View Source' on the page you will see the XML data coming back. Here is a quick C# example of how to get the data.
These third-party sites collect data from players. I think this collection is based on WoW add-ons, or each player submits information manually.
The other option is scraping the WoW site and parsing the information from the HTML pages.
This is probably the wrong site for your question, but you're thinking of the Wowarmory XML stuff. There is no official WoW API; people just do HTTP requests and get the XML back to do their number crunching. Try googling around - there are some libs out there in different languages that are already written for you; I know there are implementations in PHP/Ruby. I was working on one in .NET a while back until I got distracted. Here's an article which kinda sums this all up.
http://www.wow.com/2008/02/11/mashing-up-wow-data-when-we-can-get-it-in-outside-applications/
Wowhead and other sites generally rely on data gathered by users with a WoW add-on.
Wowhead also has a way for other sites to reference that data in hover pop-ups, so their content gets reused on a number of sites.
Powered by Wowhead
For actual in-game data collection:
cosmos.exe is what Thottbot, for example, uses. It probably uses some form of Windows hack (DLL injection or something) or sniffs packets to determine what items have dropped, etc. (it intercepts traffic from the WoW server to your client and decodes it). It saves this data on the user's computer and then uploads it to a web server for storage. I don't know if any development libraries were created for this sort of thing.