500 SSL negotiation failed.
I have a problem when I try to connect to a web service that requires a certificate plus a username and password as credentials. Can anyone help me?
Code:
$ENV{HTTPS_CA_FILE} = '/pass/cert.crt';
$ENV{HTTPS_CA_DIR} = '/pass/';
$ENV{HTTPS_PROXY_USERNAME} = 'username';
$ENV{HTTPS_PROXY_PASSWORD} = 'password';
use SOAP::Lite +trace => 'debug';
$soap = SOAP::Lite->new()->on_action( sub { join '/', @_ } )->proxy("https://ip:port/SendSMS");
sub SOAP::Transport::HTTP::Client::get_basic_credentials {
    return 'username' => 'password';
}
$som = $soap->call(
    SOAP::Data->name('SendSMSR')->attr( { xmlns => 'https://ip:port' } ),
    SOAP::Data->name('parm1')->value('1040'),
    SOAP::Data->name('param2')->value('22222'),
    SOAP::Data->name('param3')->value('1600')
);
SOAP::Lite uses LWP to do HTTP(S) requests. With version 6 (released 2011) LWP started to use IO::Socket::SSL as the backend, but it took until version 6.06 (04/2014) to get HTTPS proxy support right. The HTTPS_PROXY_* settings you use work only with the old Crypt::SSLeay backend.
Please first check which version of LWP::Protocol::https you are using:
perl -MLWP::Protocol::https -e 'warn $LWP::Protocol::https::VERSION'
If it reports a version smaller than 6.06 you need to upgrade to get proper HTTPS proxy support. That is, unless you are on Debian/Ubuntu and have version 6.04: then it might still work, because Debian included the necessary fixes before 6.06 was released (Ubuntu 14.04 should be OK).
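If an upgrade is needed, installing the current module from CPAN should be enough; for example, with cpanm (assuming it is available on your system):
cpanm LWP::Protocol::https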
HTTPS_PROXY_* will no longer work with the newer backend. From reading the code of SOAP::Transport::HTTP, it looks like it has no special handling for HTTPS proxies, but instead just calls env_proxy from LWP::UserAgent if the environment variable HTTP_proxy is set. You might try setting the following environment variables to configure the proxy:
HTTP_proxy=whatever; # set to trigger call of env_proxy
https_proxy=http://user:pass@your-https-proxy/
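For example, a minimal untested sketch (the proxy host and credentials are placeholders) that sets both variables in Perl before any request is made:
$ENV{HTTP_proxy}  = 'http://proxy.example.com:8080/';   # presence of this triggers env_proxy
$ENV{https_proxy} = 'http://user:pass@proxy.example.com:8080/';
use SOAP::Lite;
my $soap = SOAP::Lite->proxy('https://ip:port/SendSMS');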
If you still have problems, enable debugging. For IO::Socket::SSL you can enable debugging by calling your code with
perl -MIO::Socket::SSL=debug4 your-program.pl
See also the documentation for SOAP::Transport::HTTP, although there is no specific documentation on how to use an HTTPS proxy.
We already have a release version in the Play Store and it's working just fine.
But now, when we try to build and run the code again because we want to add new functionality, it suddenly no longer communicates with our backend.
I searched the net using the error as a keyword, and practically every result and answer says you need INTERNET permission. We already have it in both the debug and live manifests, so that was not helpful at all.
The server is up: we can access it in the browser, with Postman, and with the dig command.
I searched further to no avail. I found suggestions that it could be a proxy issue and tried both the client side and the server side, but we don't have a proxy.
We only use a simple request, like this:
static Future getDriver(String phone) {
  var url = baseUrl + "/mobile/driverPhone";
  return http.post(url, body: {
    "phone": phone,
  });
}
Some suggestions say to use Dio, but I want to know the reason first before I give up on the http plugin. Can someone with a good heart explain and help me with this?
P.S. We are on the master channel; here are some error logs:
The error SocketException: Failed host lookup: 'api.xyz.com' (OS Error: No address associated with hostname, errno = 7) usually means that the DNS lookup is failing. As the OS Error part suggests, this is a system-level error and nothing specific to http or Dart/Flutter.
You might need to adjust some DNS settings. Also, if you are on a Mac/Linux system, you can run dig api.xyz.com to see whether the name resolves.
Running your app on an actual device is also worth trying, since the error may only occur on your virtual device, either because of its DNS settings or because it is not connected to the internet. That way you can isolate the problem.
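To rule DNS in or out from inside the app, a quick probe with InternetAddress.lookup from dart:io can help. A minimal sketch, assuming a non-web platform and with 'api.xyz.com' standing in for your real host:

import 'dart:io';

Future<void> checkDns() async {
  try {
    final addresses = await InternetAddress.lookup('api.xyz.com');
    print('Resolved to: ${addresses.map((a) => a.address).join(', ')}');
  } on SocketException catch (e) {
    // The same "Failed host lookup" error from the logs would surface here.
    print('Lookup failed: $e');
  }
}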
I have two Python projects running locally:
A Cloud Endpoints Python project using the latest App Engine version.
A client project which consumes the endpoint functions using the latest google-api-python-client (v1.5.1).
Everything was fine until I renamed one endpoint's function from:
@endpoints.method(MyRequest, MyResponse, path = "save_ocupation", http_method='POST', name = "save_ocupation")
def save_ocupation(self, request):
[code here]
To:
@endpoints.method(MyRequest, MyResponse, path = "save_occupation", http_method='POST', name = "save_occupation")
def save_occupation(self, request):
[code here]
Looking at the local console (http://localhost:8080/_ah/api/explorer) I see the correct function name.
However, when I execute the client project that invokes the endpoint, it keeps saying that the new endpoint function does not exist. I verified this in the IPython shell: the dynamically generated Python code for invoking the Resource still has the old function name, despite restarting both the server and the client dozens of times.
How can I force the API client to always fetch the latest endpoint API discovery document?
Help is appreciated.
Just after posting the question, I resumed my Ubuntu PC and started Eclipse and the Python projects from scratch, and now everything works as expected. This sounds like some kind of HTTP client cache, or a stale Python process, that prevented the client from getting the latest discovery document and generating the corresponding resource code.
This is odd, as I had tested running these projects both outside and inside Eclipse without success. But I prefer to document this just in case someone else runs into the issue.
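If it happens again, one thing worth trying is to bypass the client library's discovery cache when building the service. A rough sketch (the API name, version, and discovery URL are placeholders; cache_discovery is available in recent google-api-python-client releases):

from googleapiclient.discovery import build

service = build(
    'myapi', 'v1',
    discoveryServiceUrl='http://localhost:8080/_ah/api/discovery/v1/apis/myapi/v1/rest',
    cache_discovery=False,  # skip any locally cached discovery document
)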
I have installed GitLab Omnibus Community Edition 8.0.2 for evaluation purposes. I am trying to connect GitLab (Linux AMI on AWS) to our on-premise LDAP server running on Windows Server 2008 R2, but I am unable to do so. I am getting the following error: Could not authorize you from Ldapmain because "Invalid credentials".
Here's the config I'm using for LDAP in gitlab.rb:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-'EOS' # remember to close this block with 'EOS' below
  main: # 'main' is the GitLab 'provider ID' of this LDAP server
    label: 'LDAP'
    host: 'XX.YYY.Z.XX'
    port: 389
    uid: 'sAMAccountName'
    method: 'plain' # "tls" or "ssl" or "plain"
    bind_dn: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
    password: 'pwd1234'
    active_directory: true
    allow_username_or_email_login: true
    base: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
    user_filter: ''
EOS
There are two users: gitlab (a newly created AD user) and john.doe (an old AD user).
Both users are able to query all AD users using the ldapsearch command, but when I use their respective details (one at a time) in gitlab.rb and run gitlab-rake gitlab:ldap:check, it displays info about that particular user only, not all users.
Earlier, gitlab-rake gitlab:ldap:check displayed the first 100 results from AD when my credentials (john.doe) were configured in gitlab.rb. Since these were my personal credentials, I asked my IT team to create a new AD user (gitlab) for GitLab. After I configured the new user (gitlab) in gitlab.rb and ran gitlab-rake gitlab:ldap:check, it only displayed that particular user's record. I thought this might be a permission issue with the newly created user, so I restored my personal credentials in gitlab.rb. Surprisingly, when I now run gitlab-rake gitlab:ldap:check, I get only one record for my user instead of the 100 records I was getting earlier. This is really weird! Somehow, GitLab seems to be "forgetting" the previous details.
Any help will really be appreciated.
The issue is resolved now. It seems it was a bug in the version (8.0.2) I was using; upgrading to 8.0.5 fixed it.
Also, values of bind_dn and base that worked for me are:
bind_dn: 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
base: 'OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local'
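For anyone debugging something similar: before touching gitlab.rb, you can verify the same bind_dn and base outside GitLab with ldapsearch, roughly like this (untested; host, bind_dn, password, and base are the values from this question):
ldapsearch -H ldap://XX.YYY.Z.XX:389 \
  -D 'CN=git lab,OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local' \
  -w 'pwd1234' \
  -b 'OU=users,OU=Service Accounts,OU=corp,OU=India,OU=Users,OU=UserId&Rooms,DC=india,DC=local' \
  '(sAMAccountName=john.doe)'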
I have FusionReactor ENT v5 on my new server and FusionReactor STD Edition v5 on my old server.
The only problem I am having is that WebRequest Runtime Protection is not working.
I have checked the settings:
http://docs.intergral.com/display/FR50/Protection+Settings
Request Runtime Protection Strategy
This defines what happens when this protection type is triggered. The individual survival strategies are defined as follows:
Abort (with Email Notification): Protection will attempt to abort any requests that have run for too long and have triggered Request Runtime Protection. Optionally sends an email notification containing details about the triggering request.
Email Notification Only: Send an email notification (as long as notification is enabled in FusionReactor Settings) but take no further action.
My reactor.conf from my old server:
fac.archive.retention.value=100
crashprotection.pagelist.0.track_stats=true
user.0=Administrator,administrator,XXXXXXXXXXXXXXXXXXXXXXXX,?p\=running&static\=&flavor\=WebRequest&__toc\=requests
crashprotection.pagelist.0.string=/directory1/directory2/SiteFile1.cfm
crashprotection.pagelist.1.string=directory1/directory2/SiteFile2.cfm
crashprotection.pagelist.count=2
crashprotection.email.address.to=TEST@domain.com
crashprotection.pagelist.1.scope=ALL
version=7
crashprotection.pagelist.0.scope=TIMEOUT
fruid=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
crashprotection.pagelist.1.track_stats=false
crashprotection.pagelist.1.regex=false
crashprotection.pagelist.0.regex=false
fac.archive.retention.strategy=SIZE
crashprotection.email.active=true
crashprotection.pagelist.0.append_parameters=false
crashprotection.requests.level.min=5
crashprotection.pagelist.1.prepend_hostname=false
crashprotection.pagelist.0.prepend_hostname=false
crashprotection.pagelist.1.append_parameters=false
fac.scheduler.mailjob.enable=true
crashprotection.email.server=127.0.0.1
crashprotection.request_timeout=60
crashprotection.email.address.from=fusionreactor@domain.com
gruid=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
My reactor.conf from my new server:
user.0=Administrator,administrator,XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
metrics.slow_threshold=2
crashprotection.email.active=true
crashprotection.email.server=127.0.0.1
crashprotection.request_timeout=10
email.hostname=local.domain.com
crashprotection.email.address.from=fusionreactor@domain.com
version=6
crashprotection.requests.level.min=5
metric.recent_slow_pages.statusthreshold.ok2w=1
crashprotection.email.address.to=testuser@domain.com
gruid=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
fruid=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
The test email works fine and the crash notification email works fine.
The slow web request notification does not.
By the looks of things, your new server config looks OK for the Crash Protection settings. It might just be that the protection system has not picked up the settings correctly.
Have you tried restarting your server?
It looks like there is a bug in the current FR 5 agent where the Crash Protection settings do not update correctly if Quantity protection is enabled. A server restart should correct this issue.
If you do not wish to restart your server, you can try putting all the protection settings back to the defaults and saving them, then setting up Runtime protection first.
Hopefully this will solve your issue.
If you have any other problems I suggest you contact the FusionReactor support team at support@fusion-reactor.com.
Kind Regards,
Ben Donnelly
FusionReactor Support
I'm using Logstash 1.4.1 with Elasticsearch 1.1.1 (installed as an EC2 cluster) and the Elasticsearch AWS plugin 2.1.1.
To check whether Logstash is properly talking to Elasticsearch, I use:
bin/logstash -e 'input { stdin { } } output { elasticsearch { host => <ES_cluster_IP> } }'
and I get:
log4j, [2014-06-10T18:30:17.622] WARN: org.elasticsearch.discovery: [logstash-ip-xxxxxxxx-20308-2010] waited for 30s and no initial state was set by the discovery
Exception in thread ">output" org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [30s]
at org.elasticsearch.action.support.master.TransportMasterNodeOperationAction$3.onTimeout(org/elasticsearch/action/support/master/TransportMasterNodeOperationAction.java:180)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(org/elasticsearch/cluster/service/InternalClusterService.java:492)
at java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:615)
at java.lang.Thread.run(java/lang/Thread.java:744)
But when I use:
bin/logstash -e 'input { stdin { } } output { elasticsearch_http { host => <ES_cluster_IP> } }'
it works fine, with the warning below:
Using milestone 2 output plugin 'elasticsearch_http'. This plugin should be stable, but if you see strange behavior, please let us know! For more information on plugin milestones, see http://logstash.net/docs/1.4.1/plugin-milestones {:level=>:warn}
I don't understand why I can't use elasticsearch instead of elasticsearch_http, even though the versions are compatible.
I'd take care to set the protocol option to one of "http", "transport" or "node". The documentation on this is contradictory: on the one hand it states that the option is optional and has no default, while at the end it says the default differs depending on the runtime:
The ‘node’ protocol will connect to the cluster as a normal
Elasticsearch node (but will not store data). This allows you to use
things like multicast discovery. If you use the node protocol, you
must permit bidirectional communication on the port 9300 (or whichever
port you have configured).
The ‘transport’ protocol will connect to the host you specify and will
not show up as a ‘node’ in the Elasticsearch cluster. This is useful
in situations where you cannot permit connections outbound from the
Elasticsearch cluster to this Logstash server.
The ‘http’ protocol will use the Elasticsearch REST/HTTP interface to
talk to elasticsearch.
All protocols will use bulk requests when talking to Elasticsearch.
The default protocol setting under java/jruby is “node”. The default
protocol on non-java rubies is “http”
The problem here is that the protocol setting has a pretty significant impact on how you connect to Elasticsearch and how the output will operate, yet it's not clear what happens when you don't set it. Better to pick one and set it explicitly:
http://logstash.net/docs/1.4.1/outputs/elasticsearch#protocol
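For example, the test command from the question with the protocol pinned to "http" (a sketch; substitute your actual cluster IP):
bin/logstash -e 'input { stdin { } } output { elasticsearch { host => <ES_cluster_IP> protocol => "http" } }'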
The Logstash elasticsearch plugin page mentions:
VERSION NOTE: Your Elasticsearch cluster must be running Elasticsearch 1.1.1. If you use
any other version of Elasticsearch, you should set protocol => http in this plugin.
So it is not a version incompatibility.
Elasticsearch uses port 9300 for multicast discovery and for communication with other nodes and native clients. So the problem is probably that your Logstash can't talk to your Elasticsearch cluster. Please check your server configuration to see whether a firewall is blocking port 9300.
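As a quick check from the Logstash host (assuming netcat is installed; substitute your actual cluster IP):
nc -zv <ES_cluster_IP> 9300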
Alternatively, set the protocol in the Logstash elasticsearch output:
output {
  elasticsearch { host => "localhost" protocol => "http" port => "9200" }
  stdout { codec => rubydebug }
}