In relation to SIP registration, I'm simply trying to add a Termination URI:
Termination URI
Configure a SIP Domain Name to uniquely identify your Termination SIP
URI for this Trunk. This URI will be used by your communications
infrastructure to direct SIP traffic towards Twilio. When you point
your infrastructure toward this URI, Twilio uses a Geo DNS lookup to
intelligently direct your traffic to our closest POP. Learn more about
Termination Settings
I can add <foo>.pstn.twilio.com fine, but get an error or suggestion:
Please add at least one IP Access Control List or Credential List
before saving your Termination settings.
Fair enough. Adding an IP Access Control List by clicking in the box generates:
Cannot rename subdomain null because the parent domain does not exist
or this account doesn't own it.
I don't even know what to make of that. The IP Access Control List is already in the Twilio dashboard; I just selected a value that was already stored.
It's the same error message regardless of whether an IP Access Control List or a Credential List is used. In both cases, the values are configured and visible in the web dashboard.
see also:
Twilio origination and termination SIP URI's with Java
Currently, I am writing server-side functionality to verify a JWT provided by the GCP Metadata Server (see https://cloud.google.com/compute/docs/instances/verifying-instance-identity for details).
In my first (dirty) implementation, I fetched Google's certificates from https://www.googleapis.com/oauth2/v1/certs for every incoming request. This works like a charm, but does not really scale. So I want to cache the certificates.
One approach would be to create a cache that stores certificates keyed by kid. However, this would allow an adversary to make the server issue many requests to Google by sending forged JWTs with random kid values.
So what I would rather do is store the complete response from the certificate endpoint. However, for this to work, I need to know how far in advance of their first use the certificates are published.
I could not find anything about this in any RFC, nor in the GCP documentation. Does anyone know if this is specified somewhere?
I asked this question to the GCP support and they came back with the following:
We cannot provide any guarantees about certificate rotation. The keys may rotate and additional valid certs may appear within the max-age. The server should pull [1] again to refresh the cert cache if it receives a token with an unknown kid.
https://www.googleapis.com/oauth2/v1/certs
So to summarize: it is only possible to cache individual keys, not the complete result, and any cache miss should trigger a fresh fetch from Google's endpoint.
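A minimal Python sketch of that strategy (the rate limit on refetches is my own addition to bound the forged-kid attack described in the question, not something stated in the GCP answer; the fetcher is injected so it can be any HTTP client that returns the JSON from the certs endpoint):

```python
import time

# The endpoint quoted in the answer above.
GOOGLE_CERTS_URL = "https://www.googleapis.com/oauth2/v1/certs"

class CertCache:
    """Cache Google's signing certs by key ID (kid).

    On an unknown kid the full cert list is refetched, but no more than
    once per `min_refetch_interval` seconds, so forged JWTs with random
    kids cannot force a request to Google for every incoming token.
    """

    def __init__(self, fetch_certs, min_refetch_interval=60.0):
        self._fetch_certs = fetch_certs  # callable: () -> {kid: PEM cert}
        self._min_refetch_interval = min_refetch_interval
        self._certs = {}
        self._last_fetch = float("-inf")  # force a fetch on first use

    def get(self, kid):
        if kid not in self._certs:
            now = time.monotonic()
            if now - self._last_fetch >= self._min_refetch_interval:
                self._certs = dict(self._fetch_certs())
                self._last_fetch = now
        return self._certs.get(kid)  # None means: reject the token
```

A token whose kid is still unknown after a (rate-limited) refresh is simply rejected, which matches the support answer's "pull again on unknown kid" guidance without letting attackers dictate your request rate.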
I have a data processing web service that accepts a google spreadsheet as input. A spreadsheet owner enables my data service to read the spreadsheet by sharing the sheet with the service email. This works well and was surprisingly easy to setup.
But the service email is not a valid email address and generates a DNS error in the user's mailbox. The service also does not receive a notification that a spreadsheet has been shared.
Is there a way to associate a valid public email address with my Google project that would allow it to receive the sharing notification sent by sharing the spreadsheet? This would ideally also be the email address that the spreadsheet owner used to share the sheet with the service.
This is currently not implemented. Service accounts have email addresses like example@[project-id].iam.gserviceaccount.com, but these are currently only identifiers and don't have mailboxes (in fact, their domains don't resolve to an IP address and there is no MX record). This might eventually change, but currently it is not possible to receive mail for service accounts.
A possible workaround is to use 3-legged OAuth to get access to your users' complete drive data (instead of just that one document). This might not be a viable option since giving an application access to the drive scope is a very serious commitment.
There is one workaround that I want to mention since people might think of it but it's a bad idea: you could create a Google consumer (GMail) account and get a 3-legged OAuth token for it using the GMail scope (to receive email) and the Drive scope (to access documents shared with it). I strongly recommend against using something like this in a production environment since consumer accounts are not built for service account scenarios (e.g. mailbox size, email reception QPS before abuse protection is triggered, credential exchanges, ...). A solution like this will eventually blow up.
I'm not very familiar with web service security concepts, but as a provider of web services we have to update the public cert in our .jks file.
Should we share anything with the consumers of this service to update at their end?
Consumers sign their messages and send the request. The service endpoint is served over plain HTTP.
Consumers sign their messages and send the request
Signing messages involves the private key of the one who does the signing, in your case the client. See here for some intro details of how this stuff works. Changing the web service certificate (the public key) might not cause problems (certificates are updated on a constant basis on the internet for HTTPS sites and no browser starts spewing errors, for example) but at the same time your clients might fail if you are using message level security.
If you encrypt at the message level (the data that gets exchanged) instead of encryption at the transport level (how the data is sent - over HTTPS instead of HTTP) then you need to notify your clients.
You don't mention how the exchange is secured, so maybe find out about that first. If the endpoint is on HTTP, as you mentioned, then it's message-level security, which means your service might sign or encrypt the message itself; changing the keys will alter the signature, and your clients will no longer trust the response.
If you are still in doubt about what you need to do then find someone who does know what to do, then notify your clients before doing the change so they have time to make changes themselves if needed. They can decide for themselves if this has or hasn't an impact on them. Whatever you do though, don't give them your new private key.
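To make the trust-breaking effect concrete, here is a small Python illustration (my own sketch, using a symmetric HMAC as a stand-in for the asymmetric signatures message-level security actually uses; the key names and message are invented, but the effect of a key change on verification is the same):

```python
import hashlib
import hmac

def sign(key, message):
    # Produce a signature over the message with the given key.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key, message, signature):
    # Recompute and compare in constant time.
    return hmac.compare_digest(sign(key, message), signature)

old_key, new_key = b"old-service-key", b"new-service-key"
msg = b"<response>ok</response>"

sig = sign(old_key, msg)              # service signs with its current key
assert verify(old_key, msg, sig)      # client holding the matching key: trusted
assert not verify(new_key, msg, sig)  # after a key change: trust is broken
```

This is why clients must be told about (and given) the new public certificate before the switch, while the private key never leaves the service.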
So, we just got word today that one of our clients' firewalls is blocking our HTTP requests because "The [software] is sending anonymous packets to our firewall (a Microsoft TMG firewall) so the firewall is dropping the packets as anonymous access is [not] allowed."
For our connection code we are using C++ with curl, and we fall back to IEDownloadToFile if needed. I didn't write the original code, nor am I really a network programmer, so I came here for help. My questions are: What are anonymous packets? What am I doing in curl that could cause anonymous packets? Where can I find more information about solving this problem? Thanks!
What they mean is that your app has to authenticate with the firewall. Microsoft's TMG documentation provides a wealth of information on the product. Your client probably has this configuration:
Require users to authenticate whenever they request Web access. Every Web session requires authentication. When using this method, note the following:
Anonymous Web access is disabled. Forefront TMG requests user credentials and validates them before it checks the request against the Firewall policy. If users fail to authenticate, their access request is denied.
This method is defined per network.
Most non-interactive clients, such as the Windows Update client, cannot authenticate, and are therefore denied access.
So when the user opens their web browser and tries to access a web page, they'll get a pop-up window asking for credentials because the firewall has intercepted their web request and sent its own authentication page. When the user authenticates, the firewall passes web traffic.
Your automated app does not authenticate with the firewall, so the firewall drops packets and your traffic is classified as anonymous.
Sorry, I don't know the solution on how to make your application authenticate with the firewall. If your app goes to specific URLs, the site operators could whitelist them.
According to this page, you should be getting error 407: proxy authentication required from curl. Try adding these options to the curl initialization, but you still have the problem of asking the user for their network credentials interactively:
CURLOPT_HTTPAUTH: add CURLAUTH_NTLM
CURLOPT_PROXYAUTH: add CURLAUTH_NTLM
set CURLOPT_FOLLOWLOCATION
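For reference, the same options as a configuration sketch using Python's pycurl binding (my own translation of the libcurl options above; the proxy address and credentials are placeholders, and interactively obtaining the user's network credentials remains the unsolved part):

```python
import pycurl

c = pycurl.Curl()
c.setopt(pycurl.URL, "http://example.com/")
c.setopt(pycurl.HTTPAUTH, pycurl.HTTPAUTH_NTLM)   # CURLOPT_HTTPAUTH = CURLAUTH_NTLM
c.setopt(pycurl.PROXYAUTH, pycurl.HTTPAUTH_NTLM)  # CURLOPT_PROXYAUTH = CURLAUTH_NTLM
c.setopt(pycurl.FOLLOWLOCATION, True)             # CURLOPT_FOLLOWLOCATION
c.setopt(pycurl.PROXY, "proxy.example.com:8080")        # placeholder proxy
c.setopt(pycurl.PROXYUSERPWD, "DOMAIN\\user:password")  # placeholder credentials
# c.perform() would then issue the request through the authenticating proxy.
```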
There is no such thing as an 'anonymous packet' in standard networking parlance. Your client's firewall is making up terms, or there was a miscommunication somewhere along the line before the message got to you. Either way, you're going to need to get clarification from your client or the firewall's vendor or documentation.
I agree with bdonlan. In the context of http requests, "anonymous packets" is vague and ambiguous at best. Maybe they mean there is no referrer code? Or they require http-authentication? Or you need to establish a session key before being able to access the specific url you are requesting? You need actual technical details from your client.
Does anyone know whether I can set a session value for the current domain, and use this session for another domain?
For example:
when I set a session in domain www.aabc.com, I wish this session to work in domain www.ccc.com as well -- say, I click a button on www.aabc.com that redirects (via a Location header) to www.ccc.com?
You can only set cookies for your own domain (and, if I remember correctly, for other sites on your domain, like subdomains).
This is (mainly?) for security reasons: otherwise, anyone could set cookies for any website... I let you imagine the mess ^^
(The only way to set cookies for another domain seems to be by exploiting a browser security hole -- see http://en.wikipedia.org/wiki/Cross-site_cooking for instance; so, in normal cases, not possible -- happily.)
I had to set this up at my last job. The way it was handled was through some hand-waving and semi-secure hash passing.
Basically, each site, site A and site B, has an identical gateway set up on its domain. The gateway accepts a user ID, a timestamp, a redirect URL, and a hash. The hash is computed from a shared key, the timestamp, and the user ID.
Site A generates the hash and sends all of the information listed above to the gateway at site B. Site B then hashes the received user ID and timestamp with the shared key.
If the generated hash matches the received hash, the gateway logs the user in, loads their session from a shared memory table or memcached pool, and redirects the user to the received redirect URL.
Lastly, the timestamp is used to determine an expiration time for the hash (i.e. the hash is only valid for x time). We used a TTL of around 2.5 minutes (to account for network lag and perhaps a refresh or two).
The key points here are:
Having a shared resource where sessions can be serialized
Using a shared key to create and confirm hashes (prefer a proper HMAC such as HMAC-SHA256 over a bare md5 hash)
Only allow the hash to be valid for a small, but reasonable amount of time.
This requires control of both domains.
Hope that was helpful.
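Under those assumptions, the hash exchange might be sketched like this in Python (the function names, the `|` separator, and the key value are my own; the answer above only specifies the inputs and the roughly 2.5-minute TTL):

```python
import hashlib
import hmac
import time

SHARED_KEY = b"hypothetical-shared-key"  # identical on site A and site B
TTL_SECONDS = 150                        # ~2.5 minutes, as in the answer

def make_token(user_id, now=None):
    """Site A: build the (hash, timestamp) pair sent to site B's gateway."""
    ts = int(now if now is not None else time.time())
    mac = hmac.new(SHARED_KEY, f"{user_id}|{ts}".encode(), hashlib.sha256)
    return mac.hexdigest(), ts

def check_token(user_id, ts, received_hash, now=None):
    """Site B: recompute the hash with the shared key and enforce the TTL."""
    current = int(now if now is not None else time.time())
    if current - ts > TTL_SECONDS:
        return False                     # token expired
    expected = hmac.new(SHARED_KEY, f"{user_id}|{ts}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_hash)
```

On a match, site B would proceed to load the session from the shared store and redirect; on a mismatch or an expired timestamp, the gateway simply refuses the login.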
You cannot access both domains sessions directly, however, there are legitimate solutions to passing session data between two sites that you control. For data that can be tampered with you can simply have a page on domain abc.com load a 1px by 1px "image" on xyz.com and pass the appropriate data in the querystring. This is very insecure so make sure the user can't break anything by tampering with it.
Another option is to use a common store of some kind. If they have access to the same database this can be a table that domain abc.com stores a record in and then passes the id of the record to domain xyz.com. This is a more appropriate approach if you're trying to pass login information. Just make sure you obfuscate the ids so a user can't guess another record id.
Another take on the common-store method, if the two domains are on different servers or cannot reach the same database, is to implement a cache-store service that holds information for a limited time and is accessible by both domains. Domain abc.com passes in some data and the service hands back an ID; abc.com sends that ID to xyz.com, which then asks the service for the data. Again, if you develop this service yourself, make sure the IDs are unguessable.