I have a web application in which a web service resides in a folder. The whole web application can be accessed from anywhere, while the web service should only be accessible from certain IP addresses. I can't separate them and move the web service into another IIS web site, so I need to restrict access to the web service while it stays inside that site. However, I have no limitation on creating virtual directories. What should I do? Can I do it at all?
To understand the scenario better, suppose that the domain of the website is www.sample.com, and every address on this website is accessible to the entire Internet. For example, www.sample.com/path1 and www.sample.com/path2 are browsable by everyone and every IP address out there.
But the address of the web service, www.sample.com/services/user.asmx, should only be accessible from certain IP addresses, such as 217.218.192.50 and 107.50.27.30 for example.
How can I achieve this configuration in IIS7?
OK, it turned out to be a very simple action.
Simply select the folder in IIS7 and, from the right-hand side, select IP Address and Domain Restrictions (if it is not visible, it must be reached via the Features View tab).
Now you can allow or deny any single IP address, or a range of IP addresses, access to your folder and anything inside it.
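If you prefer to keep the rule in source control instead of clicking through the GUI, the same restriction can be expressed in a web.config placed inside the services folder. This is a minimal sketch, assuming the IP and Domain Restrictions feature is installed, that the ipSecurity section is delegated as Read/Write in Feature Delegation, and using the example addresses from the question:

<configuration>
  <system.webServer>
    <security>
      <!-- Deny everyone except the listed IP addresses -->
      <ipSecurity allowUnlisted="false">
        <add ipAddress="217.218.192.50" allowed="true" />
        <add ipAddress="107.50.27.30" allowed="true" />
      </ipSecurity>
    </security>
  </system.webServer>
</configuration>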
I have built a basic web application using HTML, CSS and PHP (it is a library with query, modify, etc. capabilities). I have built the databases containing the book information, subscriber information, etc. with phpMyAdmin from the WampServer installation. On localhost (C:\wamp\www) everything works fine (I can add, modify, make queries, etc.).
Now I would like to make this web application available online, but I have no idea how this can be done. Access to the database must also be available online (for searches, queries, etc. on the databases).
Can somebody help me?
Access to your database can stay local, since the PHP files that use your database run on the same machine.
You only need to allow online access to your Apache server, if it isn't accessible yet, and make sure no firewall is blocking it. Then you should be able to connect to your server by IP. You'll also need a domain and a DNS entry if you don't want to have to type the public IP to connect.
You need a public IP address, or you need to route outside web traffic to your own web server.
Most routers have an advanced section called IP/Port Forwarding: find yours. If you don't have this, I'm afraid you cannot be reached from the outside.
Besides that, find your private IP with:
C:\>ipconfig
Take note of the IP address: that's your private address, which uniquely identifies you in your local network.
In httpd.conf change:
ServerName localhost:80
With:
ServerName <private IP>:80
Also find this line:
Require local
And change it to:
Require all granted
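For reference, the Require local line normally sits inside the <Directory> block for the document root in WAMP's httpd.conf; after the change, that block might look roughly like this (the path is WAMP's default and may differ on your installation):

<Directory "c:/wamp/www/">
    Options Indexes FollowSymLinks
    AllowOverride all
    # Allow connections from other machines, not just from localhost
    Require all granted
</Directory>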
Restart your web server. Find out what your current public IP address is (the public address of your router: https://www.whatismyip.com ) and visit:
http://<public IP>:<port>/
Or, in case you have not changed the default http port (80) just visit:
http://<public IP>/
I've been thinking about the concept of an ad blocker that runs at the OS level, rather than as a browser extension. I know that I can place x.com in Windows' %windir%\system32\drivers\etc\hosts file and point it to the IP of y.com, and on y.com I can serve up content that says, "This ad blocked by Example Ad Blocker". However, the domain list I have is quite large -- literally a thousand domains and growing -- so this wouldn't work well with file lookups. Does Windows permit some way to programmatically (for example from Qt/C++) add a DNS reroute rule in a speedier way?
There's a risk in doing domain intercepts and DLL hooks using the Windows APIs, because AV products and/or Microsoft would have to whitelist and certify you so that your activity doesn't look like a virus. The odds of them doing that are low (unless you're a multimillion-dollar company), and they want to protect their own ad business too.
The best option is to make a browser extension for each of the browsers. You can even check the source code of the AdBlock Chrome extension to see how it works. The trouble with that in 2017, however, is that there's no common browser extension platform just yet. It's getting much closer, but it isn't fully standardized: the emerging standard is based on Chrome's extension model, and Opera, Firefox, Edge, and of course Chrome support it to some degree, though it's still a bit rough. Anyone outside of that, such as IE11 or earlier, won't run your Chrome-style browser extension, and you'll have to go the seriously hard route of making one just for those earlier browsers, or ask the customer to upgrade when your ad-blocking product installs.
If you want something that doesn't require a browser extension, then the option you want is to add another DNS server entry in the user's DNS client settings. I don't know how to do this directly via C#, Qt/C++, or C++. However, you can shell out from those languages and use the netsh command to create those DNS entries (see the sketch after the command notes below). Probably a good strategy would be to find the user's default gateway IP. Then, set the DNS priority like so:
your DNS server that redirects x.com to y.com so that you can do ad blocking from y.com via a web server
the user's default gateway IP
Google's DNS (8.8.8.8) in case the default gateway IP has changed for the user
So, it would be something like these 4 netsh commands:
netsh interface ip delete dns "Wireless Network Connection" all
netsh interface ip add dns name="Wireless Network Connection" addr=1.1.1.1 index=1
netsh interface ip add dns name="Wireless Network Connection" addr=192.168.254.254 index=2
netsh interface ip add dns name="Wireless Network Connection" addr=8.8.8.8 index=3
Change "Wireless Network Connection" to "Local Area Connection" if they are using a cable for their computer instead of wireless. (Few do that these days.)
Change 1.1.1.1 to the IP address of your special DNS server.
Change 192.168.254.254 to the IP address of their default gateway.
The third entry (8.8.8.8) tells the computer to use Google's DNS if all else fails. This is important because the user could disconnect their laptop at home and take it to a café or somewhere else, and DNS resolution still needs to work.
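If you end up shelling out to netsh from Java rather than C# or Qt/C++, a minimal sketch of the idea could look like the following (the connection name and DNS addresses are the same placeholders as in the commands above, and the program has to run with administrator rights); the same approach ports directly to QProcess in Qt/C++:

import java.io.IOException;

public class DnsConfigurator {

    // Runs a single netsh command, shows its output, and waits for it to finish.
    private static int runNetsh(String... args) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(args);
        pb.inheritIO();
        return pb.start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        String connection = "Wireless Network Connection"; // placeholder interface name

        // Clear the existing DNS entries for the interface
        runNetsh("netsh", "interface", "ip", "delete", "dns", connection, "all");

        // Priority 1: your ad-blocking DNS server (placeholder address)
        runNetsh("netsh", "interface", "ip", "add", "dns", connection, "1.1.1.1", "index=1");
        // Priority 2: the user's default gateway (placeholder address)
        runNetsh("netsh", "interface", "ip", "add", "dns", connection, "192.168.254.254", "index=2");
        // Priority 3: Google's public DNS as a fallback
        runNetsh("netsh", "interface", "ip", "add", "dns", connection, "8.8.8.8", "index=3");
    }
}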
Now, once you get the DNS client settings right, you need a cheap Linux cloud host to serve up the DNS server and web server. You might even need more than one in case one goes down for maintenance, possibly in a different cloud zone or even with a different cloud hosting provider.
For the DNS product, if you have Linux skills, you can install and configure dnsmasq pretty easily to get a cheap and easy to manage DNS server on Linux. Or, if you search your Linux repositories, you can find other DNS servers, some more robust than others, some harder to use than others.
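As a rough idea of the dnsmasq side, a couple of lines in dnsmasq.conf are enough to point a blocked domain at your "ad blocked" web server and pass everything else to a normal resolver (the domain and IP below are placeholders):

# Answer queries for a blocked ad domain with the IP of your "ad blocked" web server
address=/x.com/203.0.113.10
# Forward everything else to an upstream resolver
server=8.8.8.8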
For the web product, you can install NGINX or Apache on each of the two DNS servers. Then, you can make a configuration where a request for any domain can come to it and it will serve a web page for that domain. The web page can say something like, "Ad Blocked By X Ad Blocker", or whatever you want, in a very small font (small enough to fit in the ad spot).
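On the web side, one possible approach is a catch-all NGINX server block that returns the same tiny message no matter which ad domain was requested; a minimal sketch (the message text is a placeholder):

server {
    listen 80 default_server;
    server_name _;
    default_type text/html;

    # Serve the same tiny "ad blocked" message for every domain pointed here by the DNS server
    location / {
        return 200 '<small>Ad Blocked By X Ad Blocker</small>';
    }
}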
Once this is all in place, you'll have to reboot the Win PC client and also clear their browser cache and history so that DNS will route through the new arrangement.
The end result is that when people on that Windows PC surf the web and load an ad, their OS will make a DNS request to translate the domain name to an IP address. The first DNS server they'll reach will be your private DNS server. It can then answer that x.com (an example ad domain) resolves to the IP address of your server, where your web server is listening. That web server will then be contacted and it will display the ad block message. All other requests not answered by your DNS servers will go to their default gateway. If that isn't serving up DNS as needed, they'll fall back to the Google DNS at 8.8.8.8. So, web browsing will work fine, minus ads.
As for a bad domain list, there's a community-maintained bad domains list on GitHub.
The trouble with a private DNS server that you host is that you're now paying a bandwidth bill for gobs of connections to it. That's probably undesirable unless you've got a proper way to monetize it. A better strategy would be NOT to use a private DNS server on the web, and instead use a local DNS server and a local web server. You'd have to code both of those or use some third-party product, and the trouble there is that you may run into commercial licensing problems or increased costs, and it won't work for some web developers who already run a web server on their workstation.
Therefore, as you can see from the added costs, hassle, and workstation configuration quirks, the best strategy would be to use a browser extension for ad blocking.
However, even then, how are you going to differentiate your product from the free ad blockers out there that are already doing a sensational job?
I have Shibboleth configured on an IIS server and am using it to protect a .NET application.
I need authenticated access for users accessing the application over the web, and for that Shibboleth is working fine.
The application also hosts web services which need to be accessed by other applications on the same server, and there working with Shibboleth is a challenge, since web service clients cannot deal with the login page.
Is it possible to configure Shibboleth to ignore requests coming from the same server, for example by checking the IP address?
This won't directly answer your question, but I can share a workaround I found, and I hope it can help with your problem too.
Define another website in IIS pointing to the same folder as the initial one, and make it only respond to a different domain (like something.local). Then in IP Address and Domain Restrictions, make sure only 127.0.0.1 is allowed to access it.
In C:\Windows\System32\drivers\etc, open the file "hosts" in Notepad (running with Administrator privileges). Add the line "127.0.0.1 something.local" (no quotes; make sure the domain is the same one you defined before).
Now, make the web services call the application using the new domain.
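Just to illustrate that last step with a hedged Java sketch (the host name and service path are placeholders; any HTTP or SOAP client works the same way), the only thing that changes on the client side is the host part of the URL:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class LocalServiceClient {
    public static void main(String[] args) throws Exception {
        // Call the service through the locally-bound site, which skips the Shibboleth login page
        URL url = new URL("http://something.local/services/SomeService.asmx?WSDL");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        try (InputStream in = connection.getInputStream()) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
    }
}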
Assuming my REST API URL is
http://myshop.com/rest/api/product/1
I would like this to return data only when it is called from within the corporate network; everyone else should not get any result back.
Here are the use cases where it should or should not be accessible:
User accessing it from outside the network but using it via a JSF/CDI application deployed on a JBoss server. (Should be accessible)
User directly accessing the URL from inside the network (via a REST client or by typing the URL directly in a browser window). (Should be accessible)
User directly accessing the URL from outside the network (via a REST client or by typing the URL directly in a browser window). (Should NOT be accessible)
Thanks for taking a look.
I'd suggest getting the IP address from the request and then checking it against the permitted IPs or a subnet mask. If you're using the JAX-RS API, you can find out how to get the incoming IP address here: How to find out incoming RESTful request's IP using JAX-RS on Heroku?
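As a rough sketch of that first idea, a JAX-RS ContainerRequestFilter can reject requests whose source address is outside the permitted ranges. The allowed prefixes below are placeholders, the HttpServletRequest injection assumes a servlet container (such as JBoss), and note that getRemoteAddr() returns the proxy's address if a reverse proxy sits in front of the server:

import java.util.Arrays;
import java.util.List;

import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.Provider;

@Provider
public class CorporateNetworkFilter implements ContainerRequestFilter {

    // Placeholder prefixes for the corporate network and the local server
    private static final List<String> ALLOWED_PREFIXES =
            Arrays.asList("10.", "192.168.", "127.0.0.1");

    @Context
    private HttpServletRequest servletRequest;

    @Override
    public void filter(ContainerRequestContext requestContext) {
        String ip = servletRequest.getRemoteAddr();
        boolean permitted = ALLOWED_PREFIXES.stream().anyMatch(ip::startsWith);
        if (!permitted) {
            // Stop the request before it reaches the resource class
            requestContext.abortWith(Response.status(Response.Status.FORBIDDEN).build());
        }
    }
}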
Another option, of course, is to block incoming requests with a firewall or with the server's settings.
We have a web service that is deployed on 2 separate machines in different locations. Is it possible to monitor the URL that a person used to call our web service, using Java code? We have a 3DNS URL set up, and we want all clients to use this URL as opposed to hitting the boxes directly with the correct port numbers in the URL.
Thanks
Damien
Have you taken a look at:
@Resource
WebServiceContext wsContext;
This will give you the context of the current message sent to your web service. I've been able to get the IP address of the caller from it.
This is assuming that you are using Java.
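Building on that, here is a hedged sketch of reading the requested URL and the caller's address from the message context in a JAX-WS endpoint (class and method names are made up; the servlet request is available when the service is deployed in a servlet container):

import javax.annotation.Resource;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.servlet.http.HttpServletRequest;
import javax.xml.ws.WebServiceContext;
import javax.xml.ws.handler.MessageContext;

@WebService
public class MonitoredService {

    @Resource
    private WebServiceContext wsContext;

    @WebMethod
    public String whoCalledMe() {
        MessageContext mc = wsContext.getMessageContext();
        // The underlying servlet request carries the URL and host name the client actually used
        HttpServletRequest request =
                (HttpServletRequest) mc.get(MessageContext.SERVLET_REQUEST);
        String requestedUrl = request.getRequestURL().toString(); // e.g. the 3DNS URL or a direct box URL
        String clientIp = request.getRemoteAddr();
        return requestedUrl + " called from " + clientIp;
    }
}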
You might look into something like OWSM (Oracle Web Services Manager)... there may be open source alternatives.
OWSM creates a virtual endpoint that it handles and routes to the actual service hosts. This way, your service hosts can be hidden behind the firewall, with only the OWSM host visible to the world. When a user hits the virtual endpoint, OWSM can authenticate and pass them along to the balanced service host.
An alternative might be to use servlet filters on the real endpoints. The filter could do a couple of different things. It could simply log the requested URL from the HttpServletRequest, or it could even redirect to the correct URL for you (I'm not sure what the implications of that are for a web service, though).
All you would have to do is map the filter to the same context path as the web service (Axis uses /services/*, for example).
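A minimal sketch of such a logging filter might look like this (mapping it to /services/* in web.xml would cover the Axis endpoints; a redirect instead of a log statement would go in the same place):

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public class RequestedUrlLoggingFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        // Log the URL (including the host name) that the client actually used
        System.out.println("Service called via: " + httpRequest.getRequestURL());
        chain.doFilter(request, response);
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}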