How should I log external web services called from X++?

I’ve followed the standard tutorials to create an external web service reference. My calls contain transactional data and logging is imperative. I’d like to know what sort of logging I should employ when making these external calls. I’d really love to have these calls appear in-line with standard AIF document history & exceptions, but I don’t think that’s easily feasible. I also can’t find the SOAP request and response generated by my external service reference, making the logging even trickier. I’ve thought about creating a custom outbound adapter, but I’m not sure if that’s the right approach. Just want to see what the professionals recommend.
https://technet.microsoft.com/en-ca/library/hh500185.aspx
http://daxmusings.codecrib.com/2011/10/consuming-external-webservices-in-ax.html

Configure logging on the AIF port configuration: enable "All document versions" or "Original document".

The best way we've found so far is to create a logging table in AX, saving serialized requests, responses, URLs, and errors right into AX. We've included other descriptors as well, so it's useful anywhere external web services are called. Provide a form for your front end, and consider a batch job to purge old records; a sketch of the pattern follows.
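To make the pattern concrete, here is a minimal sketch in Python; the real implementation would be an AX table plus an X++ wrapper class around the service proxy, and every name below is hypothetical. A wrapper persists the serialized request, response, URL, and any error, and a purge routine plays the role of the cleanup batch job.

```python
# Sketch of the "logging table" pattern (names are hypothetical; in AX
# this would be an X++ table plus a wrapper class, not SQLite).
import sqlite3, datetime, traceback

db = sqlite3.connect("ws_log.db")
db.execute("""CREATE TABLE IF NOT EXISTS ws_log (
    created_at TEXT, service TEXT, url TEXT,
    request TEXT, response TEXT, error TEXT)""")

def call_and_log(service_name, url, request_xml, do_call):
    """Invoke an external service and persist request/response/errors."""
    response_xml, error = None, None
    try:
        response_xml = do_call(request_xml)   # the actual SOAP/HTTP call
        return response_xml
    except Exception:
        error = traceback.format_exc()
        raise
    finally:
        # Log whether the call succeeded or blew up.
        db.execute("INSERT INTO ws_log VALUES (?,?,?,?,?,?)",
                   (datetime.datetime.utcnow().isoformat(),
                    service_name, url, request_xml, response_xml, error))
        db.commit()

def purge_older_than(days):
    """Batch-job equivalent: delete log records past the retention window."""
    cutoff = (datetime.datetime.utcnow()
              - datetime.timedelta(days=days)).isoformat()
    db.execute("DELETE FROM ws_log WHERE created_at < ?", (cutoff,))
    db.commit()
```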

Related

Handle a Series of Web Requests in a specific way

I am sorry in advance; I am just learning Web development and my knowledge of it is quite limited.
I will describe my problem first.
I have a relatively large amount of data (1.8-2 GB) which should be hidden from public web access. However, a user should be able to request a specific small subset of that data via a URL call and see it on his or her webpage.
Ideally, I would like to write a program on a web server. Let's call it ./oracle; it stores the large amount of data in primary memory.
Each web user should be able to make specific string calls to oracle and see oracle's response on a web page as HTML elements.
There should be only one instance of oracle, and web users should make asynchronous calls to it.
Can I accomplish the above task with FastCGI or any other protocols?
If yes, could you please explain which tools/protocols I should use or learn?
I would recommend setting up an Apache server because it's very common and you'll be able to find a lot of answers to any specific questions here on StackOverflow already.
You could also look into things like http://Swagger.io which can help you generate your API.
Unfortunately, everything past this really depends on what you use to set up your server. Big picture though:
You'll need to open up a port to listen to incoming requests
You'll need to have requests include the parameters they want to send to oracle
You could accomplish this in the URI, like localhost/oracle-request?PARAMETER="foo"
You could alternatively use JSON in the body of the http request
Again, this largely depends on how you set up step 1
You'll need to route those requests to the oracle
This implementation depends entirely on step 1
You'll need to capture the output from the oracle and return it to the user
Once you decide on how you want to set up your server, feel free to edit your question and we may be able to provide more specific help.
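To illustrate the big picture, here is a minimal self-contained sketch using only Python's standard library: one long-running process loads the data once, listens on a port, reads the PARAMETER value from the URI, and returns the matching subset as HTML. In production you would sit this behind Apache (via a reverse proxy, WSGI, or FastCGI); the dataset, port, and route name are all made up for illustration.

```python
# Hedged sketch: a single "oracle" process holding the data in memory
# and answering small subset queries over HTTP.
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.parse import urlparse, parse_qs

# Loaded once at startup -- this stands in for the 1.8-2 GB dataset.
DATASET = {"foo": "subset for foo", "bar": "subset for bar"}

class OracleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /oracle-request?PARAMETER=foo
        qs = parse_qs(urlparse(self.path).query)
        key = qs.get("PARAMETER", [""])[0]
        body = DATASET.get(key, "not found").encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<p>" + body + b"</p>")

if __name__ == "__main__":
    # One instance; the HTTP server serializes access to the in-memory data.
    HTTPServer(("127.0.0.1", 8000), OracleHandler).serve_forever()
```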

Is it possible to use Django and Node.js?

I have a Django backend set up for user logins and user management, along with my entire set of templates, which are used by visitors to the site to display HTML files. However, I am trying to add real-time functionality to my site, and I found a perfect library for Node.js that allows two users to type in a text box and have the text appear on both their screens. Is it possible to merge the two backends?
It's absolutely possible (and sometimes extremely useful) to run multiple back-ends for different purposes. However, it opens up a few cans of worms, depending on what kind of rigour your system is expected to have, who's on your team, etc.:
State. You'll want session state to be shared between different app servers. The easiest way to do this is to store external session state in a framework-agnostic way. I'd suggest JSON objects in a key/value store, and you'll probably benefit from JSON Schema (see the sketch after this list).
Domains/routing. You'll need your login cookie to be available to both app servers, which means either a single domain routed by Apache/Nginx or separate subdomains routed via DNS. I'd suggest separate subdomains, for the following reason:
Websockets. I may be out of date, but to my knowledge neither Apache nor Nginx supports proxying of websockets, which means if you want to use them you'll sacrifice the flexibility of using an HTTP server as an app proxy and instead expose Node directly via a subdomain.
Non-specified requirements. Things like monitoring, logging, error notification, build systems, testing, continuous integration/deployment, documentation, etc. all need to be extended to support a new type of component
Skills. You'll have to pay in time or money for the skill-sets required to manage a more complex application architecture
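For the state point above, a minimal sketch of framework-agnostic session sharing, assuming a Redis instance reachable from both the Django and Node processes; the key naming and session schema are invented for illustration.

```python
# Store sessions as plain JSON so any backend (Python or JS) can read them.
import json, redis

r = redis.Redis(host="localhost", port=6379)

def save_session(session_id, data, ttl_seconds=3600):
    # setex writes the value with an expiry, so stale sessions self-clean.
    r.setex("session:" + session_id, ttl_seconds, json.dumps(data))

def load_session(session_id):
    raw = r.get("session:" + session_id)
    return json.loads(raw) if raw else None

save_session("abc123", {"user_id": 42, "username": "alice"})
print(load_session("abc123"))
```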
So, my advice would be to think very carefully about whether you need this. There can be a lot of time and thought involved.
Update: There are actually companies springing up that specialise in adding real-time functionality to existing sites. I'm not going to name any names, but if you look for 'real-time' on the add-on marketplace for hosting platforms (e.g. Heroku), you'll find them.
Update 2: Nginx now has support for Websockets
You can't merge them, but you can send messages from Django to Node.js through a queue system like Redis.
If you really want to use two backends, you could use a database that is supported by both backends.
Though I would not recommend it.
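As a rough illustration of the queue idea (assuming Redis and the redis-py client; the channel name and payload are hypothetical), the Django side could publish events that the Node.js process subscribes to:

```python
# Django side: publish an event to a Redis channel that Node.js listens on.
import json, redis

r = redis.Redis(host="localhost", port=6379)

def notify_node(event_type, payload):
    # The Node.js process would SUBSCRIBE to "django-events" and react.
    r.publish("django-events", json.dumps({"type": event_type,
                                           "data": payload}))

notify_node("text_changed", {"room": "doc-7", "text": "hello"})
```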

How can I programmatically find out how many resources an IIS server has and the traffic on each resource, etc.?

I want to write a program to find out what resources an IIS server has and how many hits there are on each resource. A resource can be anything from an HTML page to files like sound clips, pictures, etc. I want to find a list of all these resources and then calculate the traffic as well. Can this be done without using any existing tool? I am not allowing myself to use any tools. I looked into WMI classes, but they do not give data as detailed as I want. I also thought about using ISAPI filters to log each request, but I am finding them very difficult to learn. So is that a good way to go, or shall I look at something else?
It would be easiest to just analyze the server's access logs, but if you want to write code, an ISAPI filter would certainly do it. There are samples in the SDK. An ISAPI extension, installed as a wildcard script-map, would also work. If you use a filter, you should probably only register for preproc_headers. Some of the notifications have a significant performance penalty, even if you do very little inside them, since they re-route the request path inside IIS. (Thinking specifically of send_raw_data here.)
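If you go the log-analysis route first, here is a hedged sketch of counting hits per resource from a W3C-format IIS access log; the log path is hypothetical, and the column layout is read from the file's own #Fields: header rather than hard-coded.

```python
# Count hits per resource (cs-uri-stem) in a W3C-format IIS log.
from collections import Counter

def hits_per_resource(log_path):
    counts, fields = Counter(), []
    with open(log_path) as f:
        for line in f:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]        # e.g. cs-uri-stem, sc-status
            elif not line.startswith("#") and fields:
                row = dict(zip(fields, line.split()))
                counts[row.get("cs-uri-stem", "?")] += 1
    return counts

# Hypothetical log location; adjust to your IIS log directory.
for uri, n in hits_per_resource(r"C:\inetpub\logs\u_ex230101.log").most_common(10):
    print(n, uri)
```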

REST and web services - having trouble understanding them

Well, the title more or less says it all. I sort of understand what REST is - the use of existing HTTP procedures (POST, GET, etc.) to facilitate the creation/use of web services. I'm more confused on what defines what a web service is and how REST is actually used to make/expose a service.
For example, Twitter, from what I've read, is RESTful. What does that actually mean? How are the HTTP procedures invoked? When I write a tweet, how is REST involved, and how is it any different than simply using a server side language and storing that text data in a database or file?
This concept is also a bit vague to me but after looking at your question I decided to clarify it a bit more for myself.
Please refer to this link on MSDN and this.
Basically, it seems that it is about using HTTP methods (GET/POST/DELETE) to define which resources the application exposes and which operations it allows on them.
For example:
Say you have the URL:
http://Mysite.com/Videos/21
where 21 is the id of a video.
We can further define which methods are allowed on this URL: GET for retrieving the resource, POST for updating or creating, DELETE for deleting.
Generally, it seems like an organized and clean way to expose your application's resources, and the operations supported on them, through HTTP methods.
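As a client-side illustration of the verbs above (assuming the server actually implements them for this URL; the payloads are made up, and the third-party requests library is used for brevity):

```python
import requests

base = "http://Mysite.com/Videos/21"

video = requests.get(base).json()                   # retrieve resource 21
requests.post("http://Mysite.com/Videos",           # create a new video
              json={"title": "My clip"})
requests.put(base, json={"title": "Renamed clip"})  # update resource 21
requests.delete(base)                               # delete resource 21
```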
You may want to start with this excellent introductory write-up. It covers it all, from start to end.
A RESTful architecture provides a unique URL that identifies each resource individually and infers the appropriate action from the HTTP verb used on that URL.
The best way I can think to explain it is to think of each data model in your application as a unique resource, and REST helps route the requests without using a query string at the end of the URL, e.g., instead of /posts?id=1, you'd just use /posts/1.
It's easier to understand through example. The following is the REST architecture enforced by Rails, but it gives you a good idea of the context.
GET /tweets/1 ⇒ you want to get the tweet with an id of 1
GET /tweets/1/edit ⇒ you want to go to the edit action associated with the tweet with an id of 1
PUT /tweets/1 ⇒ update this tweet rather than fetch it
POST /tweets ⇒ create a new tweet; no id is supplied because one doesn't exist until the record is saved to the DB
DELETE /tweets/1 ⇒ delete it from the DB
Resources are often nested, though, so in Twitter it might be like:
GET /users/jedschneider/tweets/1 ⇒ users have many tweets; get the tweet with an id of 1 belonging to the user jedschneider
The architecture for implementing REST will be unique to the application, with some frameworks supporting it by default (like Rails).
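A rough sketch of how such verb-plus-path routing could work; this is purely illustrative and not how Rails is actually implemented.

```python
# Map (HTTP verb, path) pairs to named actions, Rails-route style.
import re

ROUTES = [
    ("GET",    r"^/tweets/(\d+)$",      "show"),
    ("GET",    r"^/tweets/(\d+)/edit$", "edit"),
    ("PUT",    r"^/tweets/(\d+)$",      "update"),
    ("POST",   r"^/tweets$",            "create"),
    ("DELETE", r"^/tweets/(\d+)$",      "destroy"),
]

def route(verb, path):
    for v, pattern, action in ROUTES:
        m = re.match(pattern, path)
        if v == verb and m:
            return action, m.groups()   # action name plus captured ids
    return "not_found", ()

print(route("GET", "/tweets/1"))      # ('show', ('1',))
print(route("DELETE", "/tweets/1"))   # ('destroy', ('1',))
```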
You're struggling because there are two relatively different understandings of the term "REST". I've attempted to answer this earlier, but suffice to say: Twitter's API isn't RESTful in the strict sense, and neither is Facebook's.
sTodorov's answer shows the common misunderstanding that it's about using all four HTTP verbs and assigning different URIs to resources (usually with documentation of what all the URIs are). So when Twitter invokes REST, they're merely doing just this, as do most other RESTful APIs.
But this so-called REST is no different than RPC, except that RPC (with IDLs or WSDLs) might introduce code generation facilities, at the cost of higher coupling.
REST is actually not RPC. It's an architecture for hypermedia-based distributed systems, which might not fit the bill for everyone making an API. In the linked MSDN article, the hypermedia kicks in when they talk about <Bookmarks>http://contoso.com/bookmarkservice/skonnard</Bookmarks>; the section ends with this sentence:
These representations make it possible to navigate between different types of resources
which is the core principle that most RESTful APIs violate. The article doesn't state how to document a RESTful API, and if it did, it would be a lot clearer that clients have to navigate links in order to do things (RESTful) rather than being handed a lot of URI templates (RPCish).
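A toy sketch of the difference: a truly RESTful client knows one entry URI and discovers everything else from links in the representations, rather than being compiled against URI templates. The JSON link format below is invented for the example.

```python
# The client is hard-coded to exactly one entry URI...
entry = {
    "bookmarks": {"href": "http://contoso.com/bookmarkservice/skonnard"},
}

# ...and every other URI arrives inside a response representation.
bookmark_list = {
    "items": [{"title": "REST intro",
               "links": {"self": {"href": "/bookmarks/1"},
                         "delete": {"href": "/bookmarks/1"}}}],
}

def follow(resource, rel):
    # A RESTful client navigates by relation name, not by URI template.
    return resource["links"][rel]["href"]

first = bookmark_list["items"][0]
print(entry["bookmarks"]["href"])   # the one URI the client knows up front
print(follow(first, "self"))        # discovered from the response, not hard-coded
```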

Identifying ASP.NET web service references

At my day job we have load balanced web servers which talk to load balanced app servers via web services (and lately WCF). At any given time, we have 4-6 different teams that have the ability to add new web sites or services or consume existing services. We probably have about 20-30 different web applications and corresponding services.
Unfortunately, given that we have no centralized control over this due to competing priorities, org structures, project timelines, financial buckets, etc., it is quite a mess. We have a variety of services that are reused, but a bunch that are specific to a front-end.
Ideally we would have better control over this situation, and we are trying to get control over it, but that is taking a while. One thing we would like to do is find out more about all of the inter-relationships between the web sites and the app servers.
I have used Reflector to find dependencies among assemblies, but would like to be able to see the traffic patterns between services.
What are the options for trying to map out web service relationships? For the most part, we are mainly talking about internal services (web to app, app to app, batch to app, etc.). Off the top of my head, I can think of two ways to approach it:
Analyze assemblies for any web references. The drawback here is that not everything is a web reference, and I'm not sure how WCF connections are listed. However, this would at least be a start for finding 80% of the connections. Does anyone know of any tools that can do that analysis? Like I said, I've used Reflector for assembly references but can't find anything for web references (see the sketch below).
Possibly tap into IIS and passively monitor the traffic coming in and out and somehow figure out what is being called and from where. We are looking at enterprise tools that could help, but it would be a while before they are implemented (and they cost a lot). But is there anything out there that could help out quickly and cheaply? One tool in particular (AmberPoint) can tap into IIS on the servers, monitor inbound and outbound traffic, add a little special sauce, and begin to build a map of the traffic. Very nice, but it costs a bundle.
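For the first option, a hedged sketch that sidesteps decompilation entirely: scan each application's .config files for WCF client endpoint addresses, which at least catalogs the configured service URLs. The paths and directory layout are hypothetical, and classic ASMX web references often keep their URLs in appSettings instead, which this does not cover.

```python
# Walk a web root and pull endpoint addresses out of web/app.config files.
import os
import xml.etree.ElementTree as ET

def find_endpoints(root_dir):
    for dirpath, _, files in os.walk(root_dir):
        for name in files:
            if name.lower() in ("web.config", "app.config"):
                path = os.path.join(dirpath, name)
                try:
                    tree = ET.parse(path)
                except ET.ParseError:
                    continue
                # <system.serviceModel><client><endpoint address="..."/>
                for ep in tree.iter("endpoint"):
                    addr = ep.get("address")
                    if addr:
                        yield path, addr

for config, address in find_endpoints(r"C:\inetpub\wwwroot"):
    print(config, "->", address)
```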
I know, I know, how the heck did you get into this mess in the first place? Beats me, just trying to help us get control of it and get out of it.
Thanks,
Matt
The easiest way is to look through the logs, but if that doesn't include the referrer then you may also want to monitor what is going out from your web server to the app server. You can use tools like Wireshark or Microsoft Network Monitor to see this traffic.
The other "solution" (and I use this loosely) is to bind a specific web server to an app server, then run through a bundle of pages and see what it hits on the app server. You could probably do this in a test environment to lessen the effects on the users of the site.
You need a service registry (UDDI??)... If you had a means to catalog these services and their consumers, it would make this job of dependency discovery a lot easier. That is not an easy solution, though. It takes time and documentation to get a catalog in place.
I think the quickest solution would be to query your IIS logs and find source URLs which originate from your own servers. You would at least be able to track down which servers your consumers are coming from.
Also, if you already have some kind of authentication mechanism in place, you could trace who is using a particular service based on login.
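A small sketch of that log-based approach: group service hits by client IP from the IIS logs, keeping only requests that come from your own server ranges. The subnets, path suffixes, and log path are all hypothetical.

```python
# Map (consumer IP, service URI) pairs to hit counts from an IIS W3C log.
from collections import Counter

INTERNAL_PREFIXES = ("10.", "192.168.")   # your own server subnets

def consumers(log_path):
    counts, fields = Counter(), []
    with open(log_path) as f:
        for line in f:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]
            elif not line.startswith("#") and fields:
                row = dict(zip(fields, line.split()))
                ip, uri = row.get("c-ip", ""), row.get("cs-uri-stem", "")
                if ip.startswith(INTERNAL_PREFIXES) and uri.endswith((".asmx", ".svc")):
                    counts[(ip, uri)] += 1
    return counts

for (ip, uri), n in consumers(r"C:\inetpub\logs\u_ex230101.log").items():
    print(ip, "->", uri, ":", n, "hits")
```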
You are right about AmberPoint. There are other tools that catalog the service traffic and provide reports showing what is happening to your services. Systinet, SOA Software, and Actional also have products similar to AmberPoint, but AmberPoint has a freeware version, I believe.