Fully Qualified Domain Name in SMTPClient

In our application we use the SmtpClient component to send emails. The problem is that after an upgrade, our SMTP server requires a fully qualified domain name in the HELO command.
Currently our IT somehow created a workaround for this, but it is temporary, and I really want to comply with their requirement. I could not find how to set it up, though. Any ideas are welcome.

Made it work. Had to use "mailSettings" in the config files, under system.net.
More here: http://discoveringdotnet.alexeyev.org/2014/04/fqdn-for-smtpclient.html
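In case the linked post ever goes away, here is a minimal sketch of that config section (server and domain names are placeholders; as far as I can tell, the clientDomain attribute is what SmtpClient announces in HELO/EHLO, and it is available from .NET 4.0 onwards):
<system.net>
  <mailSettings>
    <smtp deliveryMethod="Network">
      <!-- clientDomain is the name sent in the HELO/EHLO command -->
      <network host="smtp.example.com" port="25" clientDomain="myhost.example.com" />
    </smtp>
  </mailSettings>
</system.net>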

Related

WSO2 API Manager 3.2.0 Registered callback does not match with customUrl behind a proxy

The problem I am facing is that after changing the hostname and configuring the reverse proxy as described here and here, as well as following the troubleshooting guide here to resolve the 'registered callback does not match' error, I am unable to get any further.
I've followed a number of other examples of how to configure nginx and add the reverseProxy property to the settings.js config, but with no luck.
As you can see below, if I go to https://example.com/publisher I keep getting the error 'The registered callback does not match'.
Here is what I have the callback regex set to:
regexp=(https://example.com/publisher/services/auth/callback/login|https://example.com/publisher/services/auth/callback/logout)
If I inspect the authorize request query I can see that the redirect_uri is being set to 127.0.0.1, and I suspect that is the problem: when I add that URL to the service provider callback regex it works, but that is not suitable in a non-local environment.
And here is the request query (where I suspect the main issue lies - note redirect_uri):
https://example.com/oauth2/authorize?response_type=code&client_id=1obvNiUMBcJwMa3euoHjrsckuGIa&scope=apim:api_create%20apim:api_delete%20apim:api_import_export%20apim:api_product_import_export%20apim:api_publish%20apim:api_view%20apim:app_import_export%20apim:client_certificates_add%20apim:client_certificates_update%20apim:client_certificates_view%20apim:document_create%20apim:document_manage%20apim:ep_certificates_add%20apim:ep_certificates_update%20apim:ep_certificates_view%20apim:external_services_discover%20apim:mediation_policy_create%20apim:mediation_policy_manage%20apim:mediation_policy_view%20apim:pub_alert_manage%20apim:publisher_settings%20apim:shared_scope_manage%20apim:subscription_block%20apim:subscription_view%20apim:threat_protection_policy_create%20apim:threat_protection_policy_manage%20openid&state=/&redirect_uri=https://127.0.0.1/publisher/services/auth/callback/login
Here is how my deployment.toml is configured (I've replaced my actual domain with example.com):
Note I had to remove the ports to work behind the proxy
And here is my settings.js:
I added the reverseProxy property as suggested in a github issue
And here is my nginx conf:
This is a known limitation. Please find the steps to resolve the issue - https://apim.docs.wso2.com/en/latest/troubleshooting/troubleshooting-invalid-callback-error/#troubleshooting-registered-callback-does-not-match-with-the-provided-url-error
The reason for this error comes down to a missing X-Forwarded-For header. I ended up changing the forwardedHeader in settings.js to Host, as that was the header being passed from my proxy server.
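For reference, the relevant part of my settings.js ended up looking roughly like this (a sketch based on the customUrl sample from the docs, just with the header name my proxy actually sends):
customUrl: {
    enabled: true,
    forwardedHeader: 'Host',
},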
Thanks for the detailed question "user3745065".
I was having exactly the same issue you described in this post, and I guess I've nailed the problem down.
As you mentioned, the issue is with the forwardedHeader, which in your case you switched to Host.
But checking the product documentation, the sample they provide is the following:
customUrl: { // Dynamically set the redirect origin according to the forwardedHeader host|proxyPort combination
    enabled: true,
    forwardedHeader: 'X-Forwarded-Host',
},
It took me a while to notice that the forwardedHeader is supposed to be 'X-Forwarded-Host', not 'X-Forwarded-For' as it comes by default.
A few other things I needed to tweak weren't clear in the documentation for changing the hostname (here): I had to remove the port variable ${mgt.transport.https.port} from the devportal URL.
That's also outlined in installation step 5, here. However, it's worth mentioning:
from:
[apim.devportal]
url = "https://{Your Domain}:${mgt.transport.https.port}/devportal"
to
url = "https://{Your Domain}/devportal"
otherwise, when it tries to redirect to the portal (for instance, from the publisher), it constructs the URL with the port number, and that default port 9443 isn't going to work on your proxy (tested on nginx with the settings provided in the documentation here), which is listening and expecting calls on port 443.
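Just to illustrate the shape of that setup, here is a minimal nginx sketch (the backend address, paths and certificate locations are my own placeholders, not the exact config from the documentation):
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    location /publisher {
        # forward the original host so the redirect_uri is built with the public domain
        proxy_set_header X-Forwarded-Host $host;
        proxy_pass https://127.0.0.1:9443/publisher;
    }
}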
Things that I noticed you configured but that are perhaps not necessary:
Set the apim.idp settings
Set the reverseProxy settings
Set the apim.gateway.environment settings (Not related to the callback url issue, this is meant for you to configure the runtime gateway urls)
Last but not least, following the "Troubleshooting 'Registered callback does not match with the provided url' error" steps, you again need to remove the port number from the URL, otherwise you will hit the same proxy issue mentioned above.
Just my 2 cents! ;)

Jetty Webservice - https protocol based address is not supported

I am using Jetty version 7.5.1.
My webservice works fine with a "http://..." endpoint, but when I change it to "https://..." things go wrong.
Endpoint e = Endpoint.create(webservice);
e.publish("https://localhost:" + serverPort + "/ws/mywebservice);
I get the following error message:
"https protocol based address is not supported".
I've tried using an SslChannelConnector, a SelectChannelConnector and the combination of both.
// plain HTTP connector
Connector connector = new SelectChannelConnector();
connector.setPort(59180);
// SSL configuration backed by the keystore
SslContextFactory factory = new SslContextFactory();
factory.setKeyStore("keystore");
factory.setKeyStorePassword("password");
factory.setKeyManagerPassword("password");
factory.setTrustStore("keystore");
factory.setTrustStorePassword("password");
// HTTPS connector on port 443
SslSelectChannelConnector sslConnector = new SslSelectChannelConnector(factory);
sslConnector.setPort(443);
sslConnector.setMaxIdleTime(30000);
// register both connectors with the server
server.setConnectors(new Connector[]{connector, sslConnector});
I also tried modifying the port in the publish path. But without success.
Could it be that something went wrong with the creation of my keystore file?
If I put in the wrong password, though, it does show a different error message, explaining that my password is wrong.
My options are running out. Any ideas?
EDIT: More information:
Servlets work fine with HTTPS now, but the webservices do not. Am I maybe publishing them the wrong way?
I found several threads on various forums with similar problems, but never found a solution. I would like to write down my solution for future victims:
The publish method only accepts the http protocol. Even if you are publishing for https, this should still be "http://...". On the other hand, you should use the port of your SSL connector.
Endpoint e = Endpoint.create(webservice);
e.publish("http://localhost:443/ws/mywebservice);
Use any other protocol and you will always get the "xxx protocol based address is not supported" exception. See source code.
Note 1: The webservice already works fine at this point. However, there is a point of discussion: the generated WSDL file (at https://localhost:443/ws/mywebservice?wsdl) will reference the http://... path. You could argue whether the WSDL file is a requirement or just documentation.
Correcting a hostname in a WSDL file is not that hard, but replacing the protocol is harder. The easiest solution is probably to just edit the wsdl file and host the file, which is not very "dynamic" of course.
Alternatively, I solved it by creating a WsdlServlet which replaces the address. On the other hand, it does feel bad to create an entire class just to fix 1 character. :)
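For what it's worth, here is a rough sketch of the shape such a servlet could take (this is not the original code: the class name, WSDL location and replaced string are illustrative, and it serves a bundled copy of the WSDL instead of proxying the live one):
import java.io.IOException;
import java.io.InputStream;
import java.util.Scanner;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
public class WsdlServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // load a copy of the generated WSDL bundled with the application
        InputStream in = getServletContext().getResourceAsStream("/WEB-INF/mywebservice.wsdl");
        String wsdl = new Scanner(in, "UTF-8").useDelimiter("\\A").next();
        in.close();
        // rewrite the advertised endpoint from http to https
        wsdl = wsdl.replace("http://localhost:443/ws/mywebservice",
                            "https://localhost:443/ws/mywebservice");
        resp.setContentType("text/xml");
        resp.getWriter().write(wsdl);
    }
}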
Note 2: Another bug in this Jetty release is the authentication. It's impossible to offer the webservice without any authentication. The best you can get, after turning off all possible authentication: you will still have to use 'preemptive authentication' and enter a random username and password.

Web App not Found - edit in DatasheetView

I came across the following error when my client tries to edit list data through Datasheet view from a terminal machine.
The Web application at xxx could not be found. Verify that you have typed the URL
correctly. If the URL should be serving existing content, the system administrator may
need to add a new request URL mapping to the intended application.
Note: this error occurs with only one list; all other lists are working fine. I am using SharePoint 2007 on 32-bit.
This may be related to alternate access mappings.
I had this issue, and the clue was that the datasheet was referencing a URL of the form:
http://hostname/site/...
instead of
http://hostname.domain/site/...
i.e. the datasheet was not referencing the fully qualified domain name (FQDN).
If the error message states "The Web application at http://hostname/site/...", i.e. the error doesn't use the FQDN, an alternate access mapping may resolve it. The end of the error message seems to hint at alternate access mappings, although it is not entirely explicit.
I resolved this by adding an alternate access mapping as follows:
internal url: http://hostname
public url: http://hostname.domain (FQDN)
Default zone in my case; it should work for other zones too.
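If you prefer the command line to Central Administration, I believe the equivalent stsadm operation is addalternatedomain; I'm writing this from memory, so double-check the parameters with stsadm -help addalternatedomain first:
stsadm -o addalternatedomain -url http://hostname.domain -incomingurl http://hostname -urlzone Default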
hope this helps :)

Error using ColdFusion cfexchangeconnection to connect to Exchange server

I am getting an error when trying to connect to an Exchange server using the cfexchangeconnection tag. First some code:
<cfexchangeconnection action="open"
server="****"
username="****"
password="****"
connection="myEX"
protocol="https"
port="443">
I know it's the right server because it fails when not connecting via https. I have tried:
Following all the instructions here http://help.adobe.com/en_US/ColdFusion/9.0/Developing/WSc3ff6d0ea77859461172e0811cbec14f31-7fed.html
Prefixing the username with a domain name, appending #domain name, etc., with no luck.
The error I get is:
**Access to the Exchange server denied.**
Ensure that the user name and password are correct.
Any ideas?
Here's an idea - this is what I needed to do to make my cfexchange connection work. Not entirely sure if it's the same problem. I think I had a 440 error, rather than your 401 error.
I'm using:
https
webdav
forms based auth
Exchange 2007
Coldfusion 8
Windows 2003 servers
Here's the connection string that worked for me. What was keeping my connection from working was the need for the formBasedAuthenticationURL. This attribute is poorly documented by both Adobe and Microsoft.
<cfexchangeconnection action="open"
username="first.last"
password="mypassword"
mailboxname="myAcctName"
server="my.mail.server"
protocol="https"
connection="sample"
formBasedAuthentication="true"
formBasedAuthenticationURL="https://my.mail.server/owa/auth/owaauth.dll">
<cfexchangecalendar action="get" name="mycal" connection="sample">
<cfexchangefilter name="startTime" from="#theDate#" to="#theEndDate#">
</cfexchangecalendar>
<cfexchangeConnection action="close" connection="sample">
Additional notes:
Make sure IIS and WebDAV are enabled on the target Exchange server.
Make sure the username and password you're using have the appropriate permissions for a WebDAV connection. (I'm not the Exchange admin, so I'm not sure exactly what they are, but I think the account needs to be allowed to connect to OWA - please correct me if I am wrong.)
Optional (don't use these if you don't have to):
If HTTPS is required, use the appropriate argument.
If Forms Based Authentication is on in Exchange 2007 (as was my case), you'll have to work around it using the formBasedAuthenticationURL argument.
Not sure if that's it, but I hope it is!

How to detect which web service protocol an ASP.NET request is using?

I have an ASP.NET (1.1) web service which authenticates clients using a SoapExtension.ProcessMessage(SoapMessage) override as described in:
http://www.codeguru.com/columns/experts/article.php/c5479
However, if the web.config is not set up such that HttpSoap is the only protocol allowed, then ProcessMessage will never get called for requests coming in over other protocols, which bypasses security.
Is there any way to programmatically ensure SOAP is being used (as opposed to relying on the web.config being correct)?
Thanks.
If it's of any use to anyone, I ended up checking:
Request.ServerVariables["HTTP_SOAPAction"] != null
which isn't ideal but seemed to do the trick.
Look in Request.ServerVariables, specifically the SERVER_PROTOCOL variable.
http://www.aspcode.net/List-of-RequestServerVariables.aspx
You could try to read and parse the web.config at startup, to see if it's set the way you'd like it to be.
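For reference, "set up such that HttpSoap is the only protocol allowed" looks roughly like this in web.config (a sketch of the standard ASMX protocols section; it disables the non-SOAP protocols rather than whitelisting HttpSoap):
<system.web>
  <webServices>
    <protocols>
      <!-- disable the non-SOAP protocols so the SoapExtension is always in the pipeline -->
      <remove name="HttpPost" />
      <remove name="HttpGet" />
    </protocols>
  </webServices>
</system.web>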