I'm using Flex 3 and I want to access a webservice on another server. I've imported the webservice (Data->Import) successfully into my application, but when I access its functions in the code itself I get the following error:
Warning: Domain ... does not specify a meta-policy. Applying default meta-policy "all".
This configuration is deprecated ...
Error: Request for resource at ... by requestor from ... is denied due to lack of policy file permissions
Security sandbox violation
Connection to ... halted - not permitted from ...
I've put the "crossdomain.xml" policy file in the root directory of the server that the web service is installed on. This is the content of this file:
<!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
<allow-access-from domain="*" secure="false" />
</cross-domain-policy>
I've called the Security.loadPolicyFile() in my code and am still getting this error. Any suggestions?
Try this:
<?xml version="1.0" ?>
<cross-domain-policy>
<site-control permitted-cross-domain-policies="master-only"/>
<allow-access-from domain="*"/>
<allow-http-request-headers-from domain="*" headers="*"/>
</cross-domain-policy>
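If the policy file lives anywhere other than the server root, you also have to tell the player where to find it before the first request goes out. A minimal ActionScript sketch of that call (the URL is a placeholder for your own server):

import flash.system.Security;

// Must run before the first webservice call, e.g. in the
// application's creationComplete handler.
Security.loadPolicyFile("http://your-ws-server/crossdomain.xml");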
Can you check that you are not getting a 404 when requesting the crossdomain.xml file? Just type http://servername:port/crossdomain.xml in your browser and confirm that you get the XML file back rather than a 404.
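For example, a quick check from the command line (a sketch, assuming curl is installed; replace servername:port with your own host):

curl -i http://servername:port/crossdomain.xml

The -i flag prints the status line and headers, so a 404 or an unexpected redirect is obvious even when some body comes back.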
I'm attempting to access a remote GraphQL server used by a publicly available web site. I've pieced together the appropriate code to interact with the database and can run it locally successfully. It involves me getting some createCognitoIdentity() credentials and then using those credentials to send a GraphQL query. Works like a charm and I get the data I'm looking for... until deployed to prod.
Once in prod, the same code produces a 404 error and I'm unable to even try to query the db because getting the credentials fails with:
Error retrieving credentials from the instance profile metadata service. (Client error: GET http://169.254.169.254/latest/meta-data/iam/security-credentials/ resulted in a 404 Not Found response: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://ww (truncated...) )
Here's my code to recreate it:
$sdk = new \Aws\Sdk([
    'region'  => 'us-east-2',
    'version' => 'latest',
]);

$result = $sdk->createCognitoIdentity()->getCredentialsForIdentity([
    'IdentityId' => 'us-east-2:3945b61f-5ad6-4e57-b7bf-2d01874e94d4',
]);
My production environment is hosted within AWS, so I suspect the 404 may be because it's running within AWS? It seems strange to add such a restriction. I'd like to rule out any potential XML present within the response body, but I'm having trouble obtaining the full body.
How can I echo out the response body when a 404 is encountered?
The issue you're having is that there are no IAM credentials associated with the EC2 instance. I have an instance with an IAM role tied to it; to check this I run:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
On instances with an IAM role attached I get the name of that role - nothing but the name (i.e. no HTML or anything else). On another instance with no role attached, running the same command gets me:
<?xml version="1.0" encoding="iso-8859-1"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
<head>
<title>404 - Not Found</title>
</head>
<body>
<h1>404 - Not Found</h1>
</body>
</html>
which looks like what you're getting. This is from the command line but cURL should get you almost the exact same thing in PHP.
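For completeness, a minimal PHP sketch of that same check (assuming the curl extension is enabled):

<?php
// Query the instance metadata service and dump whatever comes back:
// the role name if an IAM role is attached, the HTML 404 page if not.
$ch = curl_init('http://169.254.169.254/latest/meta-data/iam/security-credentials/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
$body = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
echo "HTTP $status\n", $body;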
Edit based on the comments
It sounds like the challenge is ultimately that your development environment has credentials set but your EC2 doesn't. The error is a bit misleading as it's ultimately a permission denied (as there are no credentials) but it's surfaced as a 404 because there isn't anything in the metadata.
There is more information here regarding the use of instance profiles. As you're using Docker to deploy, and based on this post, the container should be able to get the same profile as if you were running natively.
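If attaching an instance profile to the EC2 isn't an option, one fallback (a sketch, assuming the standard AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables are set in the container) is to hand the SDK credentials explicitly:

$sdk = new \Aws\Sdk([
    'region'      => 'us-east-2',
    'version'     => 'latest',
    // Explicit credentials bypass the instance metadata lookup entirely.
    'credentials' => [
        'key'    => getenv('AWS_ACCESS_KEY_ID'),
        'secret' => getenv('AWS_SECRET_ACCESS_KEY'),
    ],
]);

The SDK's default credential chain also reads those environment variables on its own, so just setting them in the container is often enough.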
I am trying to implement a database integration on a system that, after a user is created, triggers an account creation on the Zimbra service through ZimbraAdminService.
The server version is 8.6
On the Pentaho Web Service Lookup step, when I fill in the URL field with https://example.com/service/wsdl/ZimbraAdminService.wsdl and hit the "Load" button, I get the following error:
Could not load WSDL file: WSDLException (at /wsdl:definitions/wsdl:types/xsd:schema): faultCode=OTHER_ERROR: An error occurred trying to resolve schema referenced at 'zimbra.xsd'.: java.io.FileNotFoundException: This file was not found: file:/C:/Program Files/Pentaho/data-integration/zimbra.xsd
I already checked the documentation on https://wiki.zimbra.com/wiki/Wsdl
Has anyone faced this problem and found a solution? Thanks.
To solve the problem above, I had to open the following addresses in the browser, load each one, and save the generated XML of each schema with a .xsd extension:
https://example.com/service/wsdl/zimbra.xsd
https://example.com/service/wsdl/zimbraAdmin.xsd
https://example.com/service/wsdl/zimbraAdminExt.xsd
https://example.com/service/wsdl/zimbraMail.xsd
https://example.com/service/wsdl/zimbraRepl.xsd
https://example.com/service/wsdl/zimbraSync.xsd
https://example.com/service/wsdl/zimbraVoice.xsd
Put these files in /your-program-install-folder/Pentaho/data-integration (on Windows, C:\Program Files\Pentaho\data-integration).
After doing that, the problem will be solved.
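If you'd rather script those downloads than save each page by hand, something like this should work (a sketch: it assumes curl is installed and that example.com is replaced with your Zimbra server; -k skips certificate validation in case the server uses a self-signed certificate):

#!/bin/bash
# Download every Zimbra schema into the Pentaho data-integration folder.
cd /your-program-install-folder/Pentaho/data-integration || exit 1
for xsd in zimbra zimbraAdmin zimbraAdminExt zimbraMail zimbraRepl zimbraSync zimbraVoice; do
    curl -k -o "${xsd}.xsd" "https://example.com/service/wsdl/${xsd}.xsd"
done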
I'm trying to gracefully handle the 403 when visiting an S3 resource via an expired URL. Currently it returns an AWS XML error page. I have uploaded a 403.html resource and thought I could redirect to that.
The bucket resources are assets saved/fetched by my app. Still, following the docs, I set the bucket properties to serve the bucket as a static website and uploaded a 403.html to the bucket root. All public permissions are blocked, except public GET access to the resource 403.html. In the bucket properties' website settings I set 403.html as the error page. Visiting http://<bucket>.s3-website-us-east-1.amazonaws.com/some-asset.html redirects correctly to http://<bucket>.s3-website-us-east-1.amazonaws.com/403.html
However, when I use aws-sdk js/node and call the method getSignedUrl('getObject', params) to generate the signed URL, it returns a different host URL: https://<bucket>.s3.amazonaws.com/. Visiting expired resources via this method does not get redirected to 403.html. I'm guessing that the different host address is the reason it is not automatically redirecting.
I have also set up a static website routing rule:
<Condition>
  <HttpErrorCodeReturnedEquals>403</HttpErrorCodeReturnedEquals>
</Condition>
<Redirect>
  <ReplaceKeyWith>403.html</ReplaceKeyWith>
</Redirect>
Still, that's not redirecting the signed URLs, so I'm at a loss as to how to gracefully handle these expired URLs. Any help would be greatly appreciated.
S3 buckets have 2 public-facing interfaces, REST and website. That is the difference between the two hostnames, and the difference in behavior you are seeing.
They have two different feature sets.
feature            REST Endpoint    Website Endpoint
-----------------  ---------------  ------------------------------------
Access control     yes              no, public content only
Error messages     XML              HTML
Redirection        no               yes, bucket, rule, and object-level
Request types      all supported    GET and HEAD only
Root of bucket     lists keys       returns index document
SSL                yes              no
Source: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html
So, as you can see from the table, the REST endpoint supports signed URLs, but not friendly errors, while the website endpoint supports friendly errors, but not signed URLs. The two can't be mixed and matched, so what you're trying to do isn't natively supported by S3.
I have worked around this limitation by passing all requests for the bucket through HAProxy on an EC2 instance and on to the REST endpoint for the bucket.
When a 403 error message is returned, the proxy modifies the response body XML using the new embedded Lua interpreter, adding this before the <Error> tag.
<?xml-stylesheet type="text/xsl" href="/error.xsl"?>\n
The file /error.xsl is publicly readable, and uses browser-side XSLT to render a pretty error response.
The proxy also injects a couple of additional tags into the xml, <ProxyTime> and <ProxyHTTPCode> for use in the output. The resulting XML looks like this:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="/error.xsl"?>
<Error><ProxyTime>2015-10-13T17:36:01Z</ProxyTime><ProxyHTTPCode>403</ProxyHTTPCode><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>9D3E05D20C1BD6AC</RequestId><HostId>WvdkvIRIDMjfa/1Oi3DGVOTR0hABCDEFGHIJKLMNOPQRSTUVWXYZ+B8thZahg7W/I/ExAmPlEAQ=</HostId></Error>
Then I vary the output shown to the user with XSL tests to determine what error condition S3 has thrown:
<xsl:if test="//Code = 'AccessDenied'">
<p>It seems we may have provided you with a link to a resource to which you do not have access, or a resource which does not exist, or that our internal security mechanisms were unable to reach consensus on your authorization to view it.</p>
</xsl:if>
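For reference, a minimal error.xsl along these lines (a sketch only; the ProxyTime element matches the tag the proxy injects above, and the wording is whatever you want to show):

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/Error">
    <html>
      <head><title>Error</title></head>
      <body>
        <h1>Sorry, that didn't work</h1>
        <xsl:if test="Code = 'AccessDenied'">
          <p>It seems we may have provided you with a link to a resource
          to which you do not have access, or one that does not exist.</p>
        </xsl:if>
        <p>Request ID: <xsl:value-of select="RequestId"/></p>
        <p>Time: <xsl:value-of select="ProxyTime"/></p>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>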
The final result is a human-readable HTML error page in place of the raw XML. The message above covers the general "Access Denied" case where no credentials were supplied; an expired signature comes back with its own distinguishing message, which can be tested for in the same way.
I don't include the HostId in the output, since it's ugly and noisy; if I ever need it, the proxy has captured and logged it for me, and I can cross-reference it with the RequestId.
As a bonus, of course, running the requests through my proxy means I can use my own domain name and my own SSL certificate when serving up bucket content, and I have real-time access logs with no delay. When the proxy is in the same region as the bucket, there is no additional charge for the extra step of data transfer, and I've been very happy with this setup.
I'm facing the same problem as this question:
wsimport Xauthfile error
Since he didn't give any feedback, and I'm new here and can't ask him whether he solved his problem, I'm opening a new question.
I'm using Ubuntu and have Oracle JDK 7 installed.
I'm consuming a third-party web service. The password (...GT##ED...) for the webservice has a character that conflicts with the -Xauthfile syntax (http[s]://user:password@host:port//) because of the "#". The dots (...) represent the rest of my password.
Here is the command I'm running:
wsimport -p loa -Xauthfile "path_to_auth.txt" https://myWS?wsdl
In my auth.txt file I have:
https://user:...GT##ED...@myWS?wsdl
In return I get
parsing WSDL...
[ERROR] Server returned HTTP response code: 401 for URL: https://myWS?wsdl,
"https://myWS?wsdl" needs authorization, please provide authorization file with
read access at /home/user_name/.metro/auth or use -Xauthfile to give the
authorization file and on each line provide authorization information using this
format : http[s]://user:password@host:port//<url-path>
I searched all over the net, but no success.
When I try to import the WS using SoapUI, as in this tutorial, I get a
[ERROR] sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid
certification path to requested target
and I don't know where to specify the SSL certificate for SoapUI. I tried
preferences -> SSL Settings
but no luck.
That's it. I'd appreciate any help.
EDIT
OK, so I got past the authorization by changing the characters using the HTML URL Encoding Reference, but now I'm getting the following error:
[ERROR] Server redirected too many times (20), "https://ws?wsdl" needs
authorization, please provide authorization file with read access at /home/user
/.metro/auth or use -Xauthfile to give the authorization file and on each line
provide authorization information using this format :
http[s]://user:password@host:port//<url-path>
I am using Apache Axis2 instead of wsimport. When this problem first happened to me, I wrote a bash script and it worked.
#!/bin/bash
/axis2-1.7.9/bin/wsdl2java.sh -uri 'http://username:password@domain/x?wsdl'
I also URL-encode the username and password, as below:
@#%g3E99! -(URL encoding)-> %40%23%25g3E99%21
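If you don't want to work out the encodings by hand, one way to generate them (a sketch, assuming python3 is installed; the password shown is just the example above):

python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' '@#%g3E99!'
# prints: %40%23%25g3E99%21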
First create an auth.txt file, put the following in it, and save it on the C: drive:
http://username:password@localhost:port/wsdlurl
Now run the following command:
wsimport -Xauthfile C:\auth.txt -keep http://example.com/test?wsdl
This worked for me.
I'm using a webservice to get information from the server and got this error:
Error: Request for resource at http://backoffice.dev144.com/PPIWS/B144_MAPWS.ASMX by requestor from http://maps.localhost:10000/B144/Images_v2/b144_map.swf/[[DYNAMIC]]/4 is denied due to lack of policy file permissions.
*** Security Sandbox Violation ***
Connection to http://backoffice.dev144.com/PPIWS/B144_MAPWS.ASMX halted - not permitted from http://maps.localhost:10000/B144/Images_v2/b144_map.swf
Of course I put a crossdomain file in the main server directory; it looks like this:
<allow-access-from domain="*" secure="false"/>
Can anyone tell me why it's not working?
As I see it, you are trying to connect to a different subdomain, http://backoffice.dev144.com/.
If you have the crossdomain.xml only on the main //www website, this will not work.
You need to copy the crossdomain.xml to backoffice.dev144.com/crossdomain.xml as well.
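That is, something like this served at http://backoffice.dev144.com/crossdomain.xml (a minimal sketch; tighten domain="*" to the host actually serving your SWF if you can):

<?xml version="1.0"?>
<cross-domain-policy>
    <allow-access-from domain="*" secure="false"/>
</cross-domain-policy>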