I am trying to change our WSDL to use a secure URL as the endpoint. We are using the V2 WS-I compliant API, and this is the line I am trying to change:
<soap:address location="http://mydomain.com/index.php/api/v2_soap/index/"/>
I want to change it to:
<soap:address location="https://mydomain.com/index.php/api/v2_soap/index/"/>
I really need to find out where the {{var wsdl}} value is being passed in. I have tried hard-coding it in one place, but because of the way the WSDL is compiled (much like the way the config is generated), it appends the soap address and ends up with both the secure and unsecure addresses in the final product. That's not really the way I wanted to do it, anyway. I'm wondering if there's a design template driving all of this where I could declare a new variable or reset the wsdl.url bit. I've tried changing some code (just to see if this was the origin of the URL) in Mage_Api_Model_Server_V2_Adapter_Soap and Mage_Api_Model_Server_Adapter_Soap, to no avail. Does anybody have any advice?
I actually figured this out, and it was much easier than I expected. This is accomplished by going to System -> Configuration -> Web -> Secure and setting Use Secure URLs in Admin = Yes.
I am using the redirectTo() function with params to redirect to other pages with a query string in the URL. For security purposes this does not look appealing, because the user can change the parameters in the URL, thus altering what is inserted into the database.
My code is:
redirectTo(action="checklist", params="r=#r#&i=#insp#&d=#d#");
Is there any way around this? I am not using forms; I just wish to redirect, and I want the destination action/controller to know what I am passing without displaying it in the URL.
You can obfuscate the variables in the URL. CFWheels makes this really easy.
All you have to do is call set(obfuscateURLs=true) in the config/settings.cfm file to turn on URL obfuscation.
I am sure this works with the linkTo() function, and I hope it works with redirectTo() as well; I do not have a setup to check it right now. But if it doesn't work for redirectTo(), you can use the obfuscateParam() and deObfuscateParam() functions to do the job for you.
Caution: this will only make it harder for the user to guess the value. It does not encrypt the value.
To learn more about this, please read the documentation on configuration and defaults and on obfuscating URLs.
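To see why that caution matters, here is a toy JavaScript sketch of a reversible encoding. This is emphatically not CFWheels' actual obfuscateParam() algorithm, just an illustration of the general idea: anyone who reads or guesses the scheme can reverse it.

// Toy reversible encoding. NOT CFWheels' actual algorithm;
// shown only to illustrate that obfuscation is not encryption.
function toyObfuscate(n) {
    return (n + 12345).toString(36); // shift, then base-36 encode
}

function toyDeobfuscate(s) {
    return parseInt(s, 36) - 12345; // exact inverse: decode, then unshift
}

console.log(toyObfuscate(99));                 // "9lo" (looks opaque, is not)
console.log(toyDeobfuscate(toyObfuscate(99))); // 99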
A much better approach to this particular situation is to write the params to the flash. The flash is exactly the same thing as in Ruby on Rails, or the ViewBag in ASP.NET. It stores the data in a session or cookie variable, and it is deleted at the end of the next page's load. This prevents you from posting back long query strings like someone who has been coding for less than a year. obfuscateParam() only works with numbers and is incredibly insecure: any power user can easily deobfuscate it, and all the more so someone who actually makes a living stealing data.
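CFWheels' flash is CFML, but the pattern itself is framework-agnostic. Purely as an illustration, here is a minimal sketch of the same idea in JavaScript (Express with express-session; the route names and values are hypothetical), showing how data can survive exactly one redirect without ever touching the query string:

// Sketch of the flash pattern. NOT CFWheels' implementation.
const express = require('express');
const session = require('express-session');

const app = express();
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

app.get('/inspect', (req, res) => {
    // Store the params server-side instead of in the URL...
    req.session.flash = { r: 1, i: 42, d: '2013-01-01' }; // hypothetical values
    res.redirect('/checklist');
});

app.get('/checklist', (req, res) => {
    // ...then read them once and delete them, flash-style.
    const params = req.session.flash || {};
    delete req.session.flash;
    res.send('Checklist for inspection ' + params.i);
});

app.listen(3000);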
I'm working with JSF 2 and PrettyFaces to create SEO-friendly URLs.
Now I'm facing a problem: when I pass query parameters, PrettyFaces creates a new URL and those parameters are deleted, which I want to avoid.
I will explain it with an example:
Currently, when hitting this URL:
http://www.mysite.com/index.jsf?param1=value1&param2=value2
After PrettyFaces, I'm getting this URL:
http://www.mysite.com/
But I want it to work like this, so that when hitting this URL:
http://www.mysite.com/index.jsf?param1=value1&param2=value2
after PrettyFaces, I'll get this URL:
http://www.mysite.com/?param1=value1
Please note that I only want to pass specific parameters: in the example above, only param1 should be passed.
My configuration in pretty-config.xml:
<url-mapping>
<pattern>/</pattern>
<view-id>/jsp/index.jsf</view-id>
</url-mapping>
I'm actually surprised that the query string is not being preserved. I would guess that something else is going on, other than PrettyFaces. What version of PrettyFaces are you using? It's possible that this is a problem with PrettyFaces and was a bug in the version you're using, but I think that's unlikely.
The only thing the url-mapping you've pasted should do is perform an internal forward from "/" to "/jsp/index.jsf". It will not do any client redirection from "/index.jsf" to "/"; this is why I think there is something else at play here. (See the code for reference: https://github.com/ocpsoft/rewrite/blob/master/config-prettyfaces/src/main/java/org/ocpsoft/rewrite/prettyfaces/UrlMappingRuleAdaptor.java#L213)
With regard to stripping out certain query parameters and leaving others, I highly suggest looking at the Rewrite framework (which is the new core of PrettyFaces); you can use it to build very custom rewriting rules: http://ocpsoft.org/prettyfaces/ and http://ocpsoft.org/rewrite/
I hope this helps.
Where can I make a permanent change to the cookie path value for my website? Will that be in context.xml or web.xml, or only by using the newCookie.setPath() method? The server is Tomcat 6.0. I did look online but have not found anything to the point.
It's just that there is some problem with session tracking, and the admin thinks this requires changing the path of my session cookies from /site-folder to /. Is he wrong?
It might not be considered a good programming trick, but to change the session cookie path value, the web-app > META-INF > context.xml file is the place. For my particular problem, the following attribute helped:
<Context sessionCookiePath="">
This might be due to my website structure.
I would like to fetch the source of a file and wrap it in JSONP.
For example, I want to retrieve pets.txt as text from a host I don't own. I want to do that by using nothing but client-side JavaScript.
I'm looking for an online service which can convert anything to JSONP.
YQL
Yahoo Query Language is one of them.
http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20html%20where%20url%3D"http://elv1s.ru/x/pets.txt"&format=json&callback=grab
This works if the URL is not blocked by robots.txt; YQL respects robots.txt. For example, I can't fetch http://userscripts.org/scripts/source/62706.user.js because it is blocked via robots.txt:
http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20html%20where%20url%3D"http://userscripts.org/scripts/source/62706.user.js"&format=json&callback=grab
"forbidden":"robots.txt for the domain disallows crawling for url: http://userscripts.org/scripts/source/62706.user.js"
So I'm looking for other solutions.
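For reference, consuming such a JSONP endpoint from client-side JavaScript looks roughly like this (a minimal sketch: the grab callback name matches the callback=grab parameter in the YQL URLs above, and YQL nests the fetched document under query.results):

// Define the callback first, then inject a script tag pointing at the
// JSONP endpoint; the browser executes the grab(...) call it returns.
function grab(data) {
    console.log(data.query.results); // YQL puts the fetched content here
}

var script = document.createElement('script');
script.src = 'http://query.yahooapis.com/v1/public/yql' +
    '?q=' + encodeURIComponent('select * from html where url="http://elv1s.ru/x/pets.txt"') +
    '&format=json&callback=grab';
document.body.appendChild(script);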
I built jsonpwrapper.com.
It's unstable and slower than YQL, but it doesn't care about robots.txt.
Here's another one, much faster, built on DigitalOcean and CloudFlare, utilizing caching and more: http://json2jsonp.com
Nononono. No. Just please; no. That is not JSONP; it is JavaScript that executes a function with an object as its parameter, and that object contains more JavaScript. Aaah!
This is JSON because it's just one object:
{
    "one": 1,
    "two": 2,
    "three": 3
}
This is JSONP because it's just one object passed through a function; if you go to http://somesite/get_some_object?jsonp=grab, the server will return:
grab({
    "one": 1,
    "two": 2,
    "three": 3
});
This is not JSON at all. It's just JavaScript:
alert("hello");
And this? JavaScript code stored inside a string (ouch!) inside an object passed to a function that is supposed to evaluate the string (but may or may not):
grab({"body": "alert(\"Hello!\");\n"});
Look at all those semicolons and backslashes! I get nightmares from this kind of stuff. It's like a badly written Lisp macro because it's much more complicated than it needs to (and should!) be. Instead, define a function called grab in your code:
function grab(message) {
    alert(message.body);
}
and then use JSONP to have the server return:
grab({body: "Hello!"});
Don't let the server decide how to run your web page. Instead, let your web page decide how to run the web page, and just have the server fill in the blanks.
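To make that concrete, here is a minimal sketch of the server side (Node.js; the endpoint and payload are the hypothetical ones from above). The point is that it wraps plain data in a function call and never ships executable logic:

// Hypothetical JSONP endpoint: read the callback name from ?jsonp=...
// and wrap data, only data, in a function call. A real implementation
// should also whitelist the callback characters.
const http = require('http');
const url = require('url');

http.createServer((req, res) => {
    const query = url.parse(req.url, true).query;
    const cb = query.jsonp || 'grab';
    const payload = JSON.stringify({ body: 'Hello!' }); // data, not code
    res.writeHead(200, { 'Content-Type': 'application/javascript' });
    res.end(cb + '(' + payload + ');');
}).listen(8080);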
As for an online service that does this? I don't know of any, sorry.
I'm not sure what you're trying to do here, but nobody will use something like this. Nobody is going to trust your service to always execute as it should and output expected JavaScript code. You see Yahoo doing it because people trust Yahoo, but they will not trust you.
I have a resource at a URL that both humans and machines should be able to read:
http://example.com/foo-collection/foo001
What is the best way to distinguish between human browsers and machines, and return either HTML or a domain-specific XML response?
(1) The Accept header field in the request?
(2) An additional bit of URL? e.g.:
http://example.com/foo-collection/foo001 -> returns HTML
http://example.com/foo-collection/foo001?xml -> returns, er, XML
I do not wish to oblige machines reading the resource to parse HTML (or XHTML for that matter). Machines like Googlebot should receive the HTML response.
It is reasonable to assume I control the machine readers.
If this is under your control, rather than adding a query parameter, why not add a file extension:
http://example.com/foo-collection/foo001.html - return HTML
http://example.com/foo-collection/foo001.xml - return XML
Apart from anything else, that means if someone fetches it with wget or saves it from their browser, it'll have an appropriate filename without any fuss.
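A short sketch of how the extension might drive the response type (hypothetical JavaScript helper; the real mapping would live in your server or framework config):

// Map the requested extension to a content type; HTML is the default.
function contentTypeFor(path) {
    return path.endsWith('.xml') ? 'application/xml' : 'text/html';
}

console.log(contentTypeFor('/foo-collection/foo001.xml'));  // application/xml
console.log(contentTypeFor('/foo-collection/foo001.html')); // text/html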
My preference is to make it a first-class part of the URI. This is debatable, since there are, in a sense, multiple URIs for the same resource. And is "format" really part of the URI?
http://example.com/foo-collection/html/foo001
http://example.com/foo-collection/xml/foo001
These are very easy to deal with in a web framework that has URI parsing to direct the request to the proper application.
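For instance, a single pattern can pull out both the format and the resource id (a hypothetical JavaScript sketch):

// Extract the format segment and the resource id from the URI.
var match = '/foo-collection/xml/foo001'.match(/^\/foo-collection\/(html|xml)\/(\w+)$/);
console.log(match[1]); // "xml"    (which representation to render)
console.log(match[2]); // "foo001" (which resource to load)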
If this is indeed the same resource with two different representations, HTTP invites you to use the Accept header, as you suggest. This is probably a very reliable way to distinguish between the two scenarios. You can be plenty sure that user agents (including search engine spiders) send the Accept header properly.
As for the machine agents you are going to give XML: are they under your control? In that case you can be doubly sure that Accept will work. Even if they do not set this header properly, you can serve XML as the default, since browsers DO set the header properly.
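A minimal sketch of that logic (Node.js, hypothetical handler; note that a production server should parse q-values instead of substring-matching the header):

// Browsers advertising text/html get HTML; everyone else gets XML,
// which also acts as the default when no Accept header is sent.
const http = require('http');

http.createServer((req, res) => {
    const accept = req.headers['accept'] || '';
    if (accept.includes('text/html')) {
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end('<html><body>foo001</body></html>');
    } else {
        res.writeHead(200, { 'Content-Type': 'application/xml' });
        res.end('<?xml version="1.0"?><foo id="foo001"/>');
    }
}).listen(8080);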
I would try to use the Accept header for this, because that is exactly what the Accept header is there for.
The problem with having two different URLs is that it is not automatically apparent that they represent the same underlying resource. This can be bad if a user finds a URL in one program, which renders HTML, and pastes it into another, which needs XML. At that point a smart user could probably change the URL appropriately, but this is just a source of error that you don't need.
I would say adding a query string parameter is your best bet. The only way to automatically detect whether your client is a browser (human) or an application would be to read the User-Agent string from the HTTP request. But since this is easily set by any application to mimic a browser, you're not guaranteed that it will work.
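To illustrate why User-Agent sniffing is only a heuristic, here is a hypothetical JavaScript helper; any client can forge the header, so prefer the Accept header for real content negotiation:

// Crude guess based on the User-Agent string; easily fooled.
function looksLikeBrowser(userAgent) {
    return /Mozilla|Chrome|Safari/i.test(userAgent || '');
}

console.log(looksLikeBrowser('Mozilla/5.0 (Windows NT 10.0)')); // true
console.log(looksLikeBrowser('curl/7.68.0'));                   // false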