FIDO U2F in an offline environment

I want to use the U2F protocol for an offline application.
This application has no connection to the internet, and I was wondering if it's even possible to use U2F in an offline environment, as the protocol requires some origin. Please note that localhost is allowed and could possibly be used as the origin, but I'm unsure whether that is secure or insecure, and whether it may lead to people being able to copy the key.

I found the answer myself: yes, it's possible, but you will have to program your own host that handles the appID according to your needs.
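To make that a little more concrete, here is a minimal Python sketch of what such an appID host could look like: the U2F appID may be a URL that returns a Trusted Facet List, and an offline deployment can serve that document itself from localhost. The port, facet IDs, and plain-HTTP setup are illustrative assumptions, not requirements taken from the original answer.

```python
import http.server
import json

# Trusted Facet List for the appID. The facet IDs below are placeholders;
# list whichever local origins your application actually uses.
TRUSTED_FACETS = {
    "trustedFacets": [
        {
            "version": {"major": 1, "minor": 0},
            "ids": ["https://localhost:8443"],
        }
    ]
}


class FacetListHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(TRUSTED_FACETS).encode()
        self.send_response(200)
        # MIME type the FIDO AppID and Facet specification expects.
        self.send_header("Content-Type", "application/fido.trusted-apps+json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # A real deployment would serve this over TLS; plain HTTP is used here
    # only to keep the sketch short.
    http.server.HTTPServer(("127.0.0.1", 8000), FacetListHandler).serve_forever()
```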

Related

How to pentest REST APIs using Burp Suite?

I want to pen test REST APIs. The use case I have is a client (a desktop app with a username and password) connecting to a server, so I am confused about where to start and how to configure Burp. Usually I use Burp to pen test websites, which is much easier to configure: you only set the proxy and intercept in the browser. But now the use case is different.
Furthermore, while searching on Google I noticed Postman is mentioned many times. I know it's a tool for building APIs, but is it also used in pentesting alongside Burp?
It may be useful to first confirm that the application is communicating via HTTP/HTTPS to ensure Burp is the right tool to use.
Postman is only useful for penetration testing if you already have Postman docs. It doesn't sound like that's the case here, so I wouldn't worry about that.
Assuming the desktop app does use HTTP, there are two things you will need to do:
Change system-level proxy settings to point to Burp (127.0.0.1:8080)
Install and trust the Burp CA Certificate (available locally from http://burp:8080).
In some cases you might need to enable 'invisible proxying' in Burp.
Depending on the type of client, this may not always work at first, but if the client supports a proxy, you should see the traffic in your Burp window. Do pay attention to the Dashboard in Burp: if you see TLS warnings, it may be an indicator that the client uses certificate pinning, and some reverse engineering of the client may be needed.
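If the desktop client does honor the system proxy, a quick way to sanity-check the setup is to replay one of its requests yourself through Burp. A rough Python sketch, assuming a hypothetical API endpoint and the Burp CA exported (and converted to PEM) from http://burp:8080:

```python
import requests

# Hypothetical endpoint; substitute whatever the desktop client actually calls.
API_URL = "https://api.example.com/v1/login"

# Route the request through Burp's default proxy listener.
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

response = requests.post(
    API_URL,
    json={"username": "testuser", "password": "testpass"},
    proxies=proxies,
    # Trust the Burp CA so TLS interception doesn't fail verification.
    verify="burp-ca.pem",
)
print(response.status_code)
print(response.text[:200])
```

If that request shows up in Burp's Proxy history, the interception chain works and the desktop client's own proxy and pinning behavior is the only remaining variable.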
As you know, Burp intercepts HTTP/S traffic; it isn't a tool for intercepting arbitrary network traffic. So to achieve your goal, you can use Wireshark (or something similar) to find the software's REST API endpoint.
After that, you can start your penetration testing with Burp as you did before.
So how can you find the REST API endpoint in Wireshark?
You can filter the captured traffic using this pattern:
tcp.port==443
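If you would rather script the capture than click through the Wireshark GUI, the same display filter can be used from Python via pyshark (a wrapper around tshark). The interface name and packet count below are assumptions for illustration:

```python
import pyshark

# Capture HTTPS traffic on a hypothetical interface; adjust to your machine.
capture = pyshark.LiveCapture(interface="eth0", display_filter="tcp.port == 443")

# Print each packet's destination so the API host and port stand out.
for packet in capture.sniff_continuously(packet_count=50):
    print(packet.ip.dst, packet.tcp.dstport)
```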

Silverlight calling 3rd party web service. How to avoid cross-domain issues?

Well, I created the service reference and tested on my local machine; all was well. Then I deployed the solution to the production server and here we go:
An error occurred while trying to make a request to URI
This could be due to attempting to access a service in a cross-domain
way without a proper cross-domain policy in place, or a po
From what I gathered, it's a security measure to prevent something (I'm not sure what). Well, I can't make the provider put clientaccesspolicy.xml and crossdomain.xml in place.
What are my options? It looks like either running the Silverlight app in elevated mode or...?
I don't want to require elevated trust.
The only way I know is to call my server and make the call to the web service from my server, returning the data back to the client. That seems like too much overhead. Is there any better way? Really frustrating.
I'm afraid your options are limited here. In compliance with the Same-Origin Policy, the cross-domain policy file is a must. Personally, I would go down the route of proxying the remote web service via your hosting server if you can't influence the provider.
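Such a proxy does not have to be large. Purely to illustrate the idea, here is a hedged Python/Flask sketch of a server-side pass-through (in your IIS/ASP.NET setup the equivalent would be a small handler or WCF service); the third-party URL is a placeholder:

```python
import requests
from flask import Flask, Response, request

app = Flask(__name__)

# Placeholder for the third-party service you cannot get a policy file onto.
REMOTE_SERVICE = "https://thirdparty.example.com/api"


@app.route("/proxy/<path:path>", methods=["GET", "POST"])
def proxy(path):
    # Forward the call server-side, so the Silverlight client only ever talks
    # to its own origin and no cross-domain policy file is needed.
    upstream = requests.request(
        method=request.method,
        url=f"{REMOTE_SERVICE}/{path}",
        params=request.args,
        data=request.get_data(),
        headers={"Content-Type": request.headers.get("Content-Type", "")},
        timeout=10,
    )
    return Response(
        upstream.content,
        status=upstream.status_code,
        content_type=upstream.headers.get("Content-Type"),
    )


if __name__ == "__main__":
    app.run(port=5000)
```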

Embedded HTTP server in C++ for a Chrome extension Native Client

I was trying to find some examples that would give me some pointers on how to create an HTTP server within a Chrome extension, but I haven't had any luck. Does anyone know how to start an NPAPI/NaCl HTTP server?
Thanks
Short answer: not possible.
If you want to open a port on the local machine to accept connections, that is not allowed by the web security model. NaCl runs with the same privileges as JavaScript, with no extra holes. However, you may pass extra flags to Chrome at startup to grant NaCl more permissions, such as opening a debug port or getting access to raw network sockets.
If you want to 'emulate' an HTTP server so that your extension keeps using it regardless of being offline, then it is easier to use the postMessage API.

Building a secure web service without buying (and renewing) a certificate

The goal: a web service, secure, that will be called by exactly two clients, both outside the local network. The most obvious way to secure a web service is via https, obtaining a certificate from some CA. The problem is that this is a silly waste of money. The whole point of a CA is that it is a publicly trusted authority, so I don't have to verify my identity to every single person who wants to use my web page, the CA is doing that for them. However, when I'm dealing with a very small number of known clients, rather than the wide open public, I don't need anyone to vouch for me. We can do verification through our own channels.
Is there any way to accomplish this? Ideally, I'd be able to operate https with a certificate recognized by those calling my service, and if nobody else recognizes the certificate as valid, I don't care. I don't want them calling this service anyway. This should be a fairly common need in B2B data transfers (fixed-endpoint communications, rather than services intended for public consumption), and it is easy to do if you're transferring actual files (PGP-style encryption lets you simply verify and import one another's keys directly). But it isn't clear to me that this is possible with web sessions. It sure should be, if it is not. I have found some documentation of self-signed certificates, but they all seem to be intended for development purposes only, or internal use only, and expire quickly or require being on the same network.
Is there a good way to achieve this? Or am I going to have to encrypt the contents of the web service call instead? The latter is less desirable, because it would require the users of this service to add encryption code to their client applications (which assumes they are building these on a platform which easily can add support for common encryption routines, something that may or may not be true) rather than just relying on the standard, https framework.
I'm working on the Windows (IIS/ASP.NET) platform, if that makes any difference.
Creating your own CA and generating self-signed certificates is the way to go. There is no reason why they must be for development only, or expire quickly. You will be in control of this.
When I implemented this in a Java environment, the most useful resource I found was on Baban's Weblog. You can probably find a resource more relevant to your IIS environment.
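The client side is then straightforward: each of your two known callers ships a copy of your certificate (or of your private CA's root) and uses it as the only trust anchor for this service. A minimal Python sketch, assuming a hypothetical service URL and a PEM file exchanged out of band:

```python
import requests

# Hypothetical endpoint on your IIS server.
SERVICE_URL = "https://b2b.example.com/orders"

# The self-signed certificate (or private CA root) you exchanged with the
# client out of band; nothing else on the client machine has to trust it.
response = requests.get(SERVICE_URL, verify="partner-ca.pem", timeout=30)
response.raise_for_status()
print(response.json())
```

Because the certificate never needs to be publicly trusted, it can be as long-lived as you like; you control renewal.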
To offer a secure service you don't need any certificate, only an HTTPS link. You are right that, in your case, a certificate does nothing for you. If your visitor insists on a certificate, then I second #sudocode's answer.
Our old authorization service used certificates, but in rebuilding it we got rid of the certificates and went to Amazon EC2-style security for the services.

Windows Integrated Authentication fails ONLY if the web services client is on the same machine as the IIS server

I have a web service running under IIS7 on a server with a host header set so that it receives requests made to http://myserver1.mydomain.com.
I've set Windows Integrated Authentication to Enabled and everything else (Basic, Anonymous, etc.) to Disabled.
I'm testing the web service using a powershell script, and it works fine when I run it from my workstation against http://myserver1.mydomain.com
However, when I run the same exact script on the IIS server itself, I get a 401-Unauthorized message.
In addition, I've tried installing the web service on a second server, myserver2.mydomain.com. Again I can call my test script fine from BOTH my workstation and from myserver1.
So it seems the only issue is when the client is on the same box as the web server itself - somehow the windows credentials are not being passed or recognized.
I tried playing with IE settings on myserver1 (checked and unchecked 'Enable Windows Integrated Authentication', and added the URL to Local Sites). That did not seem to have an effect.
When I look at the IIS logs, I see the 401 unauthorized line but very little other information.
I see basically the same behavior when testing with IE (v9) - works from my workstation but not when IE is running on the IIS server.
I found the answer after several hours:
By default, there is something called the loopback check, which will reject Windows authentication if the host header used for the site does not match the local machine's name. This behavior is only seen when the client is on the local host. The check is there to defeat possible reflection attacks.
More details here:
http://support.microsoft.com/kb/896861
The KB article discusses ways to disable the loopback check, but I ended up just switching from host headers to ports to distinguish the different sites on the IIS server.
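For reference, here is a minimal sketch of the KB article's BackConnectionHostNames workaround using Python's winreg module (run elevated, and treat KB896861 itself as authoritative; the host name below is just the one from this question):

```python
import winreg

# Method from KB896861: list the host headers that are allowed to
# authenticate against the local machine, then restart the IISAdmin service.
KEY_PATH = r"SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0"
HOST_NAMES = ["myserver1.mydomain.com"]

with winreg.CreateKeyEx(
    winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE
) as key:
    winreg.SetValueEx(
        key, "BackConnectionHostNames", 0, winreg.REG_MULTI_SZ, HOST_NAMES
    )
```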
Thanks to those who gave assistance.
Try checking the actual credential that is being passed when you are running on the server itself. Often times you will be running on some system account that doesn't have access to the resource in question.
For example, on your box your credentials are running as...
MYDOMAIN\MYNAME
and the server will be something like...
SYSTEM\SYSTEM_ACCOUNT
and so this will fail because 'SYSTEM\SYSTEM_ACCOUNT' doesn't have credentials.
If this is the case, you can fix the problem in one of two ways.
Give 'SYSTEM\SYSTEM_ACCOUNT' access to the resource in question. Most people would avoid this strategy due to security concerns (which is why the account has no access in the first place).
Impersonate, or change the credentials of the client manually to something that does have access to the resource, 'MYDOMAIN\MYNAME' for example. This is what most people would probably go with, including myself.