How to set an explicit NetworkCredential for authenticating to a workgroup computer instead of a domain - web-services

I am using the following form of the NetworkCredential constructor to set explicit credentials before invoking a webservice that requires a specific identity:
myWebService.Credentials = new System.Net.NetworkCredential(userName, password, domain);
This has been working fine in our IIS 6.0 development and IIS 7.5 staging environments where the various servers are part of our domain.
Now this code has been deployed to a production environment where the servers are NOT part of a domain but just members of a WORKGROUP and the proper authentication is not working. At runtime, this effective substitution is failing:
myWebService.Credentials = new System.Net.NetworkCredential("localuserName", "XyZ!XyZ", "myServerName");
I don't have complete access to these various workgroup machines and the sysadmin who configured things there appears to have set up the local accounts and application pools correctly.
So, in summary: can the above technique work in a WORKGROUP by simply using the name of the server instead of the domain name? If the code should work in either case, then there must be some other configuration problem, and I will have to chase down more information.

I'm using IIS 7 and there is no problem with the following:
1. Find the IP address of the machine running IIS and check the web service bindings in IIS; they look like http://192.368.228.1:8051/
2. Use the server name or the machine IP as the host in the URL, i.e. http://servername:port/ or http://machine-ip:port/
You can also set the web service URL in code:
myWebService.Url = "http://192.368.228.1:8051/service1.asmx";
myWebService.Credentials = new System.Net.NetworkCredential("user", "pass");
No domain is used this way.
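If the two-argument form is not enough because the workgroup server negotiates a specific scheme such as NTLM, a CredentialCache can bind the credential to that scheme explicitly. This is only a sketch, not part of the original answer: it reuses the names from the question, and the URL and the "NTLM" scheme are assumptions to verify against your server.

// Sketch: bind an explicit credential to the NTLM scheme for one endpoint.
// The URL and auth scheme are illustrative; adjust to your environment.
var cache = new System.Net.CredentialCache();
cache.Add(new Uri("http://myServerName:8051/service1.asmx"),
          "NTLM",
          new System.Net.NetworkCredential("localuserName", "XyZ!XyZ", "myServerName"));
myWebService.Credentials = cache;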
For more information about this subject, have a look at the following link, in the section "Passing Credentials for Authentication to Web Services":
http://msdn.microsoft.com/en-us/library/ff649362.aspx#secnetch10_usingclientcertificates
Hope this is helpful.

Related

Getting User DNS Domain - Django/Angular app

Good morning!
I currently have a Django/Angular application where my users use LDAP auth with multiple domains. I would like to get (server-side or via the frontend) the user's DNS domain, meaning the equivalent of the environment variable %USERDNSDOMAIN% in Windows, so I can use this variable dynamically on the server side instead of having a separate URL for each domain, giving me a global LDAP auth.
For now I have the user IP, which is obtained directly from the request via django request:
request.META.get('REMOTE_ADDR')
So far I have tried getting the DNS name for the given IP using the Python dns library (dnspython):
dns.reversename.from_address("REQUEST_IP")
and also using the Python socket library:
socket.gethostbyaddr('REQUEST_IP')
The first one works but does not give the result I am looking for, and the second one does not work properly:
---> 10 socket.gethostbyaddr('REQUEST_IP')
herror: [Errno 1] Unknown host
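For reference, chaining the reverse lookup and stripping the host label would look roughly like this. It is only a sketch of the idea: it assumes the client IP has a PTR record in DNS (which the herror above suggests may not be the case on this network), and the function name is mine:

import socket

def user_dns_domain(ip):
    # Reverse-resolve the client IP to an FQDN, then drop the host label.
    # Raises socket.herror (as above) when the IP has no PTR record.
    fqdn, _aliases, _addresses = socket.gethostbyaddr(ip)
    return fqdn.split('.', 1)[1] if '.' in fqdn else fqdn

# e.g. user_dns_domain(request.META.get('REMOTE_ADDR'))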
Have a good day!

Configuring WSO2 IS behind a reverse proxy at some context

I am trying to set up WSO2 Identity Server behind a reverse proxy for SSL offloading. For example, if WSO2 IS is available at https://<some-ip>:9443/, I am trying to put it behind a reverse proxy with an address such as https://<domain name>/is/. Note the context path /is and SSL port 443. I thought this would be trivial, but sadly I am unable to find any conclusive documentation for achieving it.
My applications use OIDC to connect to WSO2 IS, with Azure Application Gateway as the reverse proxy. Typically all API calls work well, but none of the UI (or flows involving redirections) works, due to the context path. I can fix redirects by URL rewriting at the reverse proxy, but that still doesn't solve the problem: for example, the login page appears, but the XHR calls it makes go to /logincontext instead of /is/logincontext. Where can I set the proxy context path in WSO2 IS? I already tried setting it in the .toml file (the equivalent of setting it in carbon.xml), but it seems to affect only the Management Portal.
The WSO2 IS documentation talks about setting it up behind nginx, but without any context path. I could find reverse proxy documentation for other WSO2 products such as WSO2 API Manager, but it only involves updating carbon.xml, and that doesn't work for WSO2 IS. I am not a Java person and hence find it difficult to figure out the web app organization of WSO2.
Any help/link to documentation/guide to set up with proxy context will be useful.
I know this answer comes a little bit late, but recently I had a similar issue and here is how I made it work; maybe it will be helpful for someone. I was using WSO2 IS 5.11.0.
Note:
I checked similar questions on Stack Overflow and found a few, but none was enough by itself for my case.
Maybe the solution I came up with is not the best or the most correct, but it is the only one I could make work.
Here's what I did, assuming the context path is /is:
Open Carbon Management Console and go to Identity Providers -> Resident. Then, go to Inbound Authentication Configuration -> OAuth2/OpenID Connect Configuration. Here, change the hostname under Identity Provider Entity ID to https://domain_name:443/is/<remaining path>.
Make sure that the port number is present or absent both here and in the client application. If there is a mismatch between the two, for some reason, it won't work (or at least it didn't for me).
Open the file deployment.toml and modify it as follows (see the consolidated fragment below):
- under the [server] section, add your proxy context at the end of the base_path URL, e.g. base_path = "https://$ref{server.hostname}:${carbon.management.port}/is";
- also add proxy_context_path = "is" (actually, this last line alone should be enough, but for some reason in my case it wasn't, so I had to modify the base path too);
- under [transport.https.properties], add proxyPort="443".
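Putting those edits together, the relevant deployment.toml fragment looks roughly like this. It is a sketch only: the hostname value is illustrative, and any other keys already in your [server] section stay as they are:

[server]
hostname = "domain_name"
base_path = "https://$ref{server.hostname}:${carbon.management.port}/is"
proxy_context_path = "is"

[transport.https.properties]
proxyPort = "443"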
For the record, I also turned off compression, by adding:
[transport.http.properties]
compression="off"
[transport.https.properties]
...
compression="off"
and set the token issuer URL equal to the entity id set up in Carbon, with:
[oauth]
use_entityid_as_issuer_in_oidc_discovery = true
but found out that these last two steps (turning off compression and setting the entity id as issuer) weren't needed.
Disable the CSRF guard by setting org.owasp.csrfguard.Enabled = false
in the file /repository/resources/conf/templates/repository/conf/security/Owasp.CsrfGuard.Carbon.properties.j2.
This step was necessary for me to avoid a 403 error after logging in to the Carbon Console (turning off compression didn't help).
Lastly, if you use nginx as the reverse proxy (as I did), add these two lines in the location block used for WSO2:
proxy_redirect https://domain_name/oauth2/ https://domain_name/is/oauth2/;
proxy_redirect https://domain_name/carbon/ https://domain_name/is/carbon/;
These are needed (or at least were for me) because some URLs are not under the context path. In particular, the last one allows you to open the Carbon Console at https://domain_name/is/carbon/.
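For context, a minimal location block with these redirects might look like the following. This is a sketch, not the configuration from the original setup: the backend address reuses the placeholder from the question, and other proxy directives (headers, timeouts, buffers) are omitted:

location /is/ {
    # Strip the /is/ prefix when forwarding to the IS backend.
    proxy_pass https://<some-ip>:9443/;
    # Re-map redirects that escape the context path.
    proxy_redirect https://domain_name/oauth2/ https://domain_name/is/oauth2/;
    proxy_redirect https://domain_name/carbon/ https://domain_name/is/carbon/;
}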
References:
wso2 api manger carbon page gives 403 Forbidden
WSO2 Identity Server login returns a 403
WSO2 Identity Server port configuration
To understand the template-based configuration model adopted from version 5.9.0 onwards, see:
https://apim.docs.wso2.com/en/latest/reference/understanding-the-new-configuration-model/
https://mcvidanagama.medium.com/understand-wso2-api-managers-new-configuration-model-6425a2710faa
Here are some useful configuration mappings from the old xml to the new toml based model:
https://github.com/ayshsandu/samples/tree/master/config-mapping

HTTP 407 Proxy Authentication Required while accessing Amazon S3

I have tried everything but I can't seem to fix this issue, which is happening for only one client behind a corporate proxy/firewall. Our Silverlight application connects to Amazon S3 for downloading/uploading some documents. On one client, and one client only, it returns a 407 error and after that the application fails to save anything.
Inner Exception:
System.ServiceModel.ProtocolException: [UnexpectedHttpResponseCode]
Arguments: 407,Proxy Authentication Required
We had something similar at a different client, but that was more of a CORS issue. To resolve it I used CloudFront to fake a sub-domain that then accesses the S3 bucket, and that solved the issue. I was hoping it would fix things for this client as well, but it didn't.
I have tried adding this code to web.config as suggested by a lot of answers
<system.net>
  <defaultProxy useDefaultCredentials="true">
  </defaultProxy>
</system.net>
I have read articles about passing proxy headers with basic authentication using a username and password, but I am not sure how this would help us. The proxy server is used by the client, and any authentication it requires is outside our domain.
**Additional Information**
The Silverlight code references two services. One is our WCF service that retrieves all the data for the application. The other is the Amazon S3 service, which uses the Amazon SOAP API; its endpoint is http://s3.amazonaws.com/doc/2006-03-01/AmazonS3.wsdl
If I go into our app and only use parts of the system that don't make any calls to the Amazon S3 API, the application works fine. As soon as I go to a part of the system that makes a call to S3, the problem starts. Funnily enough, the call to S3 itself goes fine and I can retrieve the document, but any subsequent calls to our WCF service return 407.
Any ideas?
**Update 2**
Based on comments from Elliot Nelson, I checked the stack we were using for making HTTP requests in our application. It turns out we are using the client HTTP stack for both http and https requests by default. Here is the code we have in the App.xaml.cs constructor:
public App()
{
    Startup += Application_Startup;
    UnhandledException += Application_UnhandledException;
    InitializeComponent();

    // Force the client HTTP stack for all http/https requests.
    WebRequest.RegisterPrefix("http://", WebRequestCreator.ClientHttp);
    WebRequest.RegisterPrefix("https://", WebRequestCreator.ClientHttp);
}
Now I need to understand the differences between ClientHttp and BrowserHttp and when to use each, as well as the potential impacts/issues of switching to BrowserHttp.
**Update 3**
Is there a way to request browsers to run your in-browser Silverlight application in trusted mode and would it help bypass this issue?
(Answer #2)
So, most likely (for corporate environments like this network), almost nothing can be done without whatever custom proxy settings are set in IE, usually pushed by corporate policy. To take advantage of these proxy settings, you want to use WebRequestCreator.BrowserHttp, which automatically uses the browser's default settings when making requests.
There's a table of the differences between these two clients available in the Microsoft docs. I'm guessing you were using something (maybe setting custom headers or reading the raw response body) that wasn't supported in BrowserHttp.
For security reasons, you can't "ask" the browser what its proxy settings are and use them, so this is a tricky situation. You can specify Browser vs Client handling by domain, or even for a specific request (the same page above describes how); you may be able in this case to get away with just using ClientHttp for your service calls and BrowserHttp for your S3 calls, and avoid the problem altogether!
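As an illustration of that split, the registration might look something like the sketch below. This is not code from the question: the S3 host is taken from the WSDL URL above, "services.mycompany.com" is a hypothetical stand-in for the WCF host, and it relies on more specific (per-domain) prefixes taking precedence over the scheme-wide ones:

// Sketch: browser stack (inherits IE proxy settings) for S3 calls,
// client stack for our own service. Hostnames are illustrative.
WebRequest.RegisterPrefix("http://s3.amazonaws.com/", WebRequestCreator.BrowserHttp);
WebRequest.RegisterPrefix("https://s3.amazonaws.com/", WebRequestCreator.BrowserHttp);
// Hypothetical placeholder for the WCF service host:
WebRequest.RegisterPrefix("http://services.mycompany.com/", WebRequestCreator.ClientHttp);
WebRequest.RegisterPrefix("https://services.mycompany.com/", WebRequestCreator.ClientHttp);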
For next steps, I'd try that approach; if it doesn't work, I'd try switching wholesale to BrowserHttp just to see if it bypasses the proxy issue (there's almost no chance the application will actually work, since you're probably using ClientHttp-only options).
Long term, you may want to consider making changes to your services so they are usable by a BrowserHttp-only application (this would require you to be pretty basic in your requests/responses, but using only BrowserHttp would be a guarantee you'd work in pretty much any corp network).
Running in trusted mode is probably a group policy thing which would require their AD admins to approve / whitelist your app.
I think the underlying issue you are facing is that the proxy requires NTLM authentication and for whatever reason the browser declines to provide your app with that context.
One way to prove that it's an NTLM auth issue is to test with curl: get it to make a request through the proxy, and then it should be a bit easier to code against. E.g. the following curl command will get you through 99% of Windows corporate proxies (assuming the proxy is at proxy-host.corp:3128):
C:\> curl.exe -v --proxy proxy-host.corp:3128 --proxy-user : --proxy-ntlm https://www.google.com
NOTE The --proxy-user : tells curl to use the current user session to perform the NTLM challenge.
So if you can get the client to run that, you can at least verify that NTLM works; then it's just a matter of getting the app to perform the NTLM challenge using the default credentials (which may or may not be provided by the browser session).
Since you described this as a Silverlight application, I'm going to assume you can't use classic browser-proxy troubleshooting like "move browser to public network" or "try a different browser" to isolate the problem.
You should try to isolate the proxy server, and have the customer use the required proxy-auth.
The application is making the request, but it might be intercepted by a transparent proxy, or the response might be coming from something other than what you consider the web server.
In the early days, the 401 error was pretty strictly associated with web-auth, and 407 was for proxy-auth.
Architecturally, the separation is a convenience: a single host can combine web server, proxy, and reverse-proxy behaviors.
What happens is that your customer's environment makes a web connection to the destination, but receives an HTTP 407 status from some host, probably on their network, or sometimes at the provider. Almost certainly the request is received but not forwarded. The HTTP client your application lives in needs to provide the credentials that host requires. Corporate environments are complex enough that your customer will often say this is the first time they have heard of this (some proxy-auth is also dynamic or destination-specific).
Also, in some corporate environments, the operator will allow temporary or permanent white-listing from the proxy-auth service. You should see if they can do this, even temporarily, to confirm there aren't going to be other problems.
In the end, it sounds like your application might not robustly support proxy-auth, or the proxy-auth type they use in their environment.

Why am I getting "Internal Server Error" running two Odoo instances (same domain but different ports)?

I have two instances of Odoo on a server in the cloud. If I perform the following steps, I get "Internal Server Error":
I log in to the first instance (http://111.222.33.44:3333)
I close the session
I load the address of the second instance in the same browser (http://111.222.33.44:4444)
If I want to work in the second instance (on another port), I need to remove the browser cookies first to access the other Odoo instance. If I do this, everything works fine.
If I load them in different browsers (Firefox and Chromium) at the same time, they also work well.
It's not an Nginx issue, because I tried with and without it.
Is there a way to solve this permanently? Is this the expected behaviour?
If you have access to the source code, you can change addons/web/controllers/main.py as shown below and check whether the issue is solved. The file contains this block:

if db != request.session.db:
    request.session.logout()
    request.session.db = db
    abort_and_redirect(request.httprequest.url)

Delete the line request.session.db = db below the if statement, leaving:

if db != request.session.db:
    request.session.logout()
    abort_and_redirect(request.httprequest.url)
Try the following change in:
openerp/addons/base/ir/ir_http.py
In the method _handle_exception, somewhere around line 140, you will find this piece of code:

attach = self._serve_attachment()
if attach:
    return attach

Replace it with:

if isinstance(exception, werkzeug.exceptions.HTTPException) and exception.code == 404:
    attach = self._serve_attachment()
    if attach:
        return attach
You can perfectly well serve all the databases with a single OpenERP server on your machine. Unfortunately you did not mention what error you were seeing and what you expected as a result, which makes it a bit harder to help you ;-)
Anyway, here are some random ideas based on the information you provided:
If you have a problem with OpenERP not listening on all interfaces, try specifying 0.0.0.0 as the xmlrpc_interface in the configuration file; this should have OpenERP listening on port 8069 on all IPs.
Note that Apache is not relevant if you're connecting to e.g. http://www.sample.com:8069/?db=openerp because you're directly connecting to OpenERP. If you want to go through Apache, you need to setup ReverseProxy rules in your vhost configs, and OpenERP does not need to listen to all public IPs then.
OpenERP 6.1 and later can autodetect the database name based on the virtual host name and filter the list of available databases: you need to start it with the --db-filter parameter, a pattern used to filter that list. %h represents the domain name and %d is its first component. So for example with --db-filter=^%d$ I will only see the test database if I reach the server via http://test.example.com:8069. If only one database matches, the list is not displayed and the user ends up directly on the right database. This works even behind Apache reverse proxies if you make sure that OpenERP sees the external hostname, i.e. by setting an X-Forwarded-Host header in your Apache proxy config and enabling the --proxy mode of OpenERP.
The port reuse problem comes because you are trying to start multiple OpenERP servers on the same interface/port combination. This is simply not possible unless you are careful to start just one server per IP with the IP set in the xmlrpc_interface parameter, and I don't think you need that. The named-based virtual hosts that Apache supports are all handled by a single master process that listens on port 80 on all the interfaces. If you want to do the same with OpenERP you only need to start one OpenERP server for all your domains, and make it listen on 0.0.0.0, port 8069, as I explained above.
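Putting those pieces together, a single-server start-up could look something like the line below. Treat it as a sketch only: --db-filter and --proxy-mode come from the explanation above, while the exact spelling of the interface flag varies between OpenERP versions (it is the xmlrpc_interface option in the config file), so check openerp-server --help for your release.

openerp-server --xmlrpc-interface=0.0.0.0 --db-filter='^%d$' --proxy-mode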
On top of that, it's not clear what you would have set differently in the various config files. Running 40 different OpenERP servers on the same machine with identical code sounds like overkill. OpenERP is designed to be multi-tenant, so that many (read: hundreds of) databases can be served from the same server.
Finally, I think this is the expected behaviour. Browsers store cookies per website (per domain), not per port. So if I only change the port, the cookies of the first instance conflict with the cookies of the other instance, because they have the same domain (111.222.33.44 in my example).
So there are some workarounds:
Change Domain Locally
Create a couple of domain names on my laptop in /etc/hosts:
111.222.33.44 cloud01
111.222.33.44 cloud02
Then the cookies don't interfere with each other anymore. To access each instance:
http://cloud01:3333
http://cloud02:4444
Browser Extension: Multilogin or Multi-account
There is another workaround. If I use this Chromium extension, the problem disappears because the sessions are treated separately:
SessionBox

WSO2 API Manager - Displaying correct IP in UI

I have installed API Manager 1.4.0 on a single machine and got everything running. However, I have found that the IP addresses shown within the management console and store sites are incorrect: for instance, the 'Host' and 'Server URL' on the management console home page, and also on an API's page in the store (both the URLs provided in the overview and the IP used in the 'try it' feature).
Looking into this, it seems my network adapter is supplying a privately accessible IP instead of a public one (this cannot be changed). This value is propagated between API Manager components on startup, but is also used to provide the links for accessing the services externally.
I have looked into the configuration and changed some values, but cannot get all the IPs in the UI to display correctly. Settings I've changed include:
repository\conf\carbon.xml HostName, MgtHostName, ServerURL
repository\conf\api-manager.xml APIGateway-->APIEndpointURL (also updated APIKeyManager-->ThriftServerHost)
Is there any way to solve this? In particular, is there a way to set an IP that will be published for external access without changing any configuration used for communications within the host?
Instead of an IP address, I would use a domain name and add it first to your hosts file, like:
192.168.1.2 apimanager.example.net
Then edit some carbon.xml parameters to look like:
<HostName>apimanager.example.net</HostName>
<MgtHostName>apimanager.example.net</MgtHostName>
<ServerURL>https://apimanager.example.net:${carbon.management.port}${carbon.context}/services/</ServerURL>