Is it possible to make a SAS Stored Process available via a clean, nicer looking URL, but still be hosted on the server?
The native URL is something like http://[yourMachineName]:8080/SASStoredProcess/do?_PROGRAM=/WebApps/MyWebApp/Foo.
I'd prefer a nicer looking URL like http://[yourMachineName]:8080/SASStoredProcess/WebApps/MyWebApp/Foo
The documentation for the overall process at http://documentation.sas.com/?docsetId=stpug&docsetTarget=dbgsrvlt.htm&docsetVersion=9.4&locale=en doesn't seem to address the issue.
Absolutely, yes, you can do this. One way is to use a front-end framework to provide a routing facility. Or, more simply, host an index.html file in a folder (corresponding to the _PROGRAM path) on your mid-tier, then use the onload JavaScript event to fire window.location.replace() with the full path to your STP as the argument.
Your URL could then be http://[yourMachineName]:8080/WebApps/MyWebApp/Foo.
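A minimal sketch of that index.html (the stored process path is taken from your example; the hosting location is an assumption):

<!DOCTYPE html>
<html>
  <body>
    <script>
      // On load, redirect to the real Stored Process URL.
      // replace() keeps this interstitial page out of the browser history.
      window.onload = function () {
        window.location.replace(
          '/SASStoredProcess/do?_PROGRAM=/WebApps/MyWebApp/Foo'
        );
      };
    </script>
  </body>
</html>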
I wrote a guide to building web applications with SAS, which is available here, and a quick blog post on the subject, available here.
As a general point, it is much more user friendly to build a nice-looking UX in a modern framework such as React or Angular, and have it call your SAS services as appropriate, displaying the results in a myriad of ways, than to call raw SAS programs directly (for surfacing data).
Angular routing: https://angular.io/guide/router
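For illustration, a minimal Angular route definition (the path and FooComponent are hypothetical placeholders for whatever component calls your SAS service):

import { Component, NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

// Placeholder component; in practice it would call the SAS service and render the result.
@Component({ template: '<p>Foo output goes here</p>' })
export class FooComponent {}

const routes: Routes = [
  { path: 'WebApps/MyWebApp/Foo', component: FooComponent },
];

@NgModule({
  declarations: [FooComponent],
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule],
})
export class AppRoutingModule {}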
You can't do this in the SAS Stored Process Web Application. The SAS URL must contain the SAS folder path and the name of your stored process.
Possible options within the Stored Process Web App are:
Use the Folders view in the SAS Stored Process Web Application URL, so each user can navigate to the desired stored process from there:
http://YourServer:8080/SASStoredProcess/do?_Action=index
If you have a web page or SAS Visual Analytics available to your users, you can hyperlink the SP URL from any text.
There is an online store built on Django. How do I set up an exchange of product data between Django and 1C? I have checked some documentation on the CommerceML format, but it is still unclear how to set it up on the 1C side. As far as I understand, the uploads are configured quite simply: you just need to register the URL of the handler, and then everything happens automatically.
Yes, 1C provides an algorithm for interacting with the upload script, by which files of product offers are uploaded to the server.
You can therefore configure data exchange with external sources and sites. To do this, go to the menu item Administration - Data Exchange in your solution and check the box "Data exchange with sites", where you specify the directories and documents involved in the exchange. After the XML files are downloaded, you just need to process them on the server side. There is information on the developer's forum (https://1c-dn.com/forum/forum11/topic2308/) about the exchange settings and the API; see if one of the methods suggested there helps you. They also have a blog article about different ways of integration: https://1c-dn.com/blog/work-with-http-services-in-1c-part-1-get-method/
I'm building a web application with the following URL structure:
/ is the landing page, not Angular based
/choose uses Angular; it basically contains search
/fund/<code>, also with Angular, contains specific data for a certain fund
There's no problem indexing /; it's just plain and simple HTML, already SEO optimized. But I need both /choose and /fund/... to be crawled by Google; that's the problem.
My app uses HTML5 mode, and we never point to the app URLs using hashbangs like foo.com#!/choose, always foo.com/choose.
Also, according to Google's docs on the matter, I put <meta name="fragment" content="!"> in the head of every Angular page we have. But using "Fetch as Google" to inspect my site, I can't figure out how Google is requesting the pages from my server. I'm using Django on the backend and I built a middleware to catch _escaped_fragment_ and act on it, but Google never sends it.
So, simply put, my questions are:
Why isn't Google fetching my URLs using _escaped_fragment_?
How will Google fetch the pages? Like this:
foo.com?_escaped_fragment_=/choose
foo.com/choose?_escaped_fragment_=
According to the Google specs, you should use
foo.com/choose?_escaped_fragment_=hashfragment.
But as mentioned here, you don't seem to need the hashfragment and the equals-sign part, since your URL is already mapped on your Django server side. So get rid of it and give it a try.
Your final URL should look like this: foo.com/choose?_escaped_fragment_.
Hope it helps!
I am concerned about page ranking on Google in the following situation:
I am looking to convert my existing site, with 150k+ unique page results, to an Ember app. Currently a URL is something like domain.com/model/id; with Ember and hash change it becomes /#/model/id. I really want history state, but the lack of IE support doesn't leave that as an option. My sitemap for Google has lots and lots of great results using the old model/id URLs. On the Rails side I will test the browser for compatibility before rendering either the JS-rich app or the plain HTML/CSS. Does anyone have good SEO suggestions for success with my current schema?
Linked below is my schema, for looking at the options:
http://static.allplaces.net/images/EmberTF.pdf
History state is awesome, but it looks like it's supported by only around 60% of browsers.
http://caniuse.com/history
Thanks guys for the suggestions; the Google guide is similar to what I'm going to try. I will roll it out to one client this month and see what Webmaster Tools and Analytics show.
Here is everything you need to make your hash links SEO friendly: https://developers.google.com/webmasters/ajax-crawling/
Basically, you write your whole app with hash links, but you have to add "!" to them, so you have #!/model/id. Next, you must have all pages generated somewhere, and if Google asks for them, return "plain html" as described here: https://developers.google.com/webmasters/ajax-crawling/docs/getting-started
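As a rough sketch of that server-side check (shown with Node/Express purely for illustration; the asker's backend would do the same thing, and the snapshots directory and its file-naming scheme are assumptions):

const express = require('express');
const path = require('path');
const app = express();

// Google rewrites #!/model/id into ?_escaped_fragment_=/model/id before fetching.
app.get('*', (req, res, next) => {
  const fragment = req.query._escaped_fragment_;
  if (fragment === undefined) return next(); // normal users get the JS app
  // Serve the pre-generated "plain html" snapshot for this route.
  const name = fragment.replace(/\//g, '_') || 'index';
  res.sendFile(path.join(__dirname, 'snapshots', name + '.html'));
});

app.use(express.static('public')); // the regular JS app
app.listen(3000);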
Use Google Webmaster Tools to check whether your site is crawlable.
I'm not sure if you're aware that you can configure Ember to use the browser history for the location API and keep using your pages the way they are referenced now. All you need to do is configure the Router's location property:
// Use the browser's History API instead of hash URLs
App.Router.reopen({
  location: 'history'
});
See more details about specifying the location API here.
I don't know if it's called data mining or something else.
Let's say I have a worldwide business listing site that lists all shops. And I saw this website ABC that also lists shops, but only in Australia. The listings are spread page by page, with no ID.
How do I start to write a program that will crawl their pages and put the selected information from each page into CSV format, which I can then import into my website?
At least, where can I learn this? Thank you.
What you are attempting to do is known as "web scraping"; here's a good starting point for information, including the legal issues:
http://en.wikipedia.org/wiki/Web_scraping
One common framework for writing crawlers like this is Scrapy: http://scrapy.org/
Yes, this process is called web scraping. If you are familiar with Java, the most useful tools here are HtmlUnit and WebDriver. You should use a headless browser to go through your pages and extract the important information using selectors (mostly XPath, or regular expressions over the HTML).
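A rough sketch of that flow, using WebDriver's Node bindings rather than Java (the URL, selectors and page count are all made up; inspect the real pages to find the right ones):

const { Builder, By } = require('selenium-webdriver');
const fs = require('fs');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  const rows = ['name,address'];
  try {
    // Walk the paginated listing.
    for (let page = 1; page <= 10; page++) {
      await driver.get('https://example.com/shops?page=' + page);
      for (const shop of await driver.findElements(By.css('.shop-listing'))) {
        const name = await shop.findElement(By.css('.shop-name')).getText();
        const address = await shop.findElement(By.css('.shop-address')).getText();
        rows.push('"' + name + '","' + address + '"');
      }
    }
  } finally {
    await driver.quit();
  }
  fs.writeFileSync('shops.csv', rows.join('\n')); // import this CSV into your site
})();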
Is there a way to set up multiple sites to run from URL paths rather than domains/subdomains?
I am developing a site that has a global site and multiple country-specific sites (the exact list of countries to be confirmed later). For development I have a global and a local site created and running on a temporary subdomain. If this works correctly we may run the entire application this way rather than on separate domains (similar to how apple.com appears to work).
I have successfully got the sites running locally as:
global.domain.com
a.domain.com
b.domain.com
but would like them to be able to run as:
www.domain.com/global
www.domain.com/a
www.domain.com/b
We will be implementing multiple languages on certain country sites as well, so locale will need to remain independent.
Could this be done using some sort of URL mapping rather than multiple sites, or something similar? Where can I find information about URL mapping?
There are settings for using virtual folders (see web.config under the sites node):
virtualFolder: The prefix to match for incoming URLs.
This value will be removed from the URL and the remainder will be treated as the item path.
How that works in practice I'm not sure - it's on a domain-by-domain basis, and all your sites will be operating from the same domain.
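For reference, a site definition using a virtual folder might look something like this (a hedged sketch, assuming Sitecore, whose documentation the quote above comes from; the names, host and paths are placeholders, and real site nodes take more attributes than shown):

<sites>
  <!-- Requests to www.domain.com/a/... match here; "/a" is stripped and the
       remainder is treated as the item path under rootPath. -->
  <site name="a" hostName="www.domain.com"
        virtualFolder="/a" physicalFolder="/a"
        rootPath="/sitecore/content/A" startItem="/home" database="web" />
</sites>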
But I think you might want to reconsider your approach. Subdomains have several advantages: they're simple to configure in the web.config (just add a domain and point it at the right bit of the content tree).
They simplify search engine optimisation - e.g. telling Google to target a specific subdomain to a geographical area in Google Webmaster Tools.
They're simple for visitors to understand.
Bear in mind that if you're going to use multiple languages per site, then you will probably want to keep the language parameter in the URL as part of the (virtual) file path (e.g. www.mysite.com/en-GB/products).
If you use both language and locale in the URL in that way, you end up with something like www.mysite.com/UK/en-GB/products.