Problem Uploading Files Using Chilkat SFTP

File upload returns a "Status Code 8 - Invalid Parameter" response. Looking for some advice on what might be causing this.
I'm using Chilkat SFTP to send and receive files to and from multiple partners without issue, but for a new partner I'm seeing the following error. The partner's tech team are asking whether a passive connection is being invoked, but I can't see any property within Chilkat that would let me change this.
Log message:
ChilkatLog:
  OpenFile:
    DllDate: Jul 31 2014
    ChilkatVersion: 9.5.0.43
    UnlockPrefix: NORVICSSH
    Username: LVPAPP005:scheduleradminprod
    Architecture: Little Endian; 64-bit
    Language: .NET 2.0 / x64
    VerboseLogging: 0
    SshVersion: SSH-2.0-FTP Server ready
    SftpVersion: 3
    sftpOpenFile:
      remotePath: \GIB_DAILY_CENTAUR_POSITIONS_20190403.CSV
      access: writeOnly
      createDisposition: createTruncate
      v3Flags: 0x1a
      Sent FXP_OPEN
      StatusResponseFromServer:
        Request: FXP_OPEN
        InformationReceivedFromServer:
          StatusCode: 8
          StatusMessage: Invalid parameter
        --InformationReceivedFromServer
      --StatusResponseFromServer
    --sftpOpenFile
    Failed.
  --OpenFile
--ChilkatLog

You're confusing the SSH/SFTP protocol with the FTP protocol. The two are entirely different protocols. The concept of "passive" data transfers does not exist in SSH/SFTP as it does in FTP.

Akka Http turn off header parsing

I'm trying to implement a transparent proxy with Akka-Http & Akka-Stream.
However, I'm running into an issue where Akka-Http manipulates and parses the response headers from the upstream server.
For example, when the upstream server sends the following header:
Expires: "0"
Akka will parse this into an Expires header and correct the value to:
Expires: "Wed, 01 Jan 1800 00:00:00 GMT"
Although the start of Unix time is better than "0", I don't want this proxy to touch any of the headers. I want the proxy to be transparent and not "fix" any headers passing through.
Here is the simple proxy:
Http().bind("localhost", 9000).to(Sink.foreach { connection =>
  logger.info("Accepted new connection from " + connection.remoteAddress)
  connection handleWith pipeline
}).run()
The proxy flow:
Flow[HttpRequest]
  .map(x => (x, UUID.randomUUID().toString))
  .via(Http().superPool[String]())
  .map(_._1.get) // unwrap the Try[HttpResponse] the pool emits
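For reference, a self-contained version of the above might look like the following sketch (assuming Akka 2.4-era APIs; the object name is illustrative, the .get unwrapping is deliberately blunt, and superPool expects requests with absolute URIs):

import java.util.UUID
import akka.NotUsed
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.model.{HttpRequest, HttpResponse}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink}

object TransparentProxy extends App {
  implicit val system: ActorSystem = ActorSystem("proxy")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // Tag each request with a correlation key, run it through the client
  // pool, then unwrap the Try[HttpResponse] that superPool emits.
  val pipeline: Flow[HttpRequest, HttpResponse, NotUsed] =
    Flow[HttpRequest]
      .map(request => (request, UUID.randomUUID().toString))
      .via(Http().superPool[String]())
      .map { case (tryResponse, _) => tryResponse.get }

  Http().bind("localhost", 9000).to(Sink.foreach { connection =>
    println("Accepted new connection from " + connection.remoteAddress)
    connection.handleWith(pipeline)
  }).run()
}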
I noticed that the http-server configuration allows me to keep the raw request URI, but there doesn't seem to be an equivalent option for the http-client.
raw-request-uri-header = off
Is there a way I can configure Akka to leave the header values as-is when I respond to the client?
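(For reference, the server-side setting mentioned above lives under akka.http.server in application.conf; a sketch with the flag turned on:)

akka.http {
  server {
    # keep the original, unparsed request URI in a synthetic Raw-Request-URI header
    raw-request-uri-header = on
  }
}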
This is not possible currently.
I wonder how hard it would be to expose such a mode, and how much complexity we'd have to pay for it; however, I err on the side of this feature not being able to pull its weight.
Feel free to open a ticket for it on http://github.com/akka/akka where we could discuss it further. Some headers are treated specially, so we really do want to parse them into the proper model (think WebSocket upgrades, Connection headers, etc.), so there would have to be a strong case behind this feature request to make it pull its weight, IMO.
(I'm currently maintaining Akka HTTP).
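To illustrate the "proper model" point, Expires is one of the headers akka-http models, while RawHeader is the untyped escape hatch (a minimal sketch; the values are illustrative):

import akka.http.scaladsl.model.DateTime
import akka.http.scaladsl.model.headers.{Expires, RawHeader}

// Modeled: the value is parsed into a DateTime and re-rendered
// in RFC 1123 form when the header is written out.
val modeled = Expires(DateTime(1970, 1, 1))

// Untyped: keeps exactly the string it is given, but akka-http
// reserves this form for headers it does not model itself.
val raw = RawHeader("Expires", "0")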

server1 instance in websphere shuts down regularly

I have a WSDL web service in the server1 instance of WebSphere.
This server1 instance shuts down regularly, and no error logs are generated when the shutdown occurs.
However, whenever the server1 instance of WebSphere is started, these errors and exceptions are generated:
The certificate (Owner: "CN=SOAPRequester, OU=TRL, O=IBM, ST=Kanagawa, C=JP") with alias "soaprequester" from keystore "D:\IBM\WEBSPH~1\APPSER~1\etc\ws-security\samples\dsig-sender.ks" has expired: java.security.cert.CertificateExpiredException: NotAfter: Sat Oct 01 19:24:06 CST 2011
The certificate (Owner: "CN=SOAPProvider, OU=TRL, O=IBM, ST=Kanagawa, C=JP") with alias "soapprovider" from keystore "D:\IBM\WEBSPH~1\APPSER~1\etc\ws-security\samples\dsig-receiver.ks" has expired: java.security.cert.CertificateExpiredException: NotAfter: Sat Oct 01 19:30:39 CST 2011
Method createManagedConnctionWithMCWrapper caught an exception during creation of the ManagedConnection for resource jms/BPECF, throwing ResourceAllocationException. Original exception: javax.resource.spi.ResourceAdapterInternalException: createQueueConnection failed
com.ibm.mqservices.MQInternalException: MQJE001: An MQException occurred: Completion Code 2, Reason 2063
MQJE027: Queue manager security exit rejected connection with error code 23
javax.jms.JMSSecurityException: MQJMS2013: invalid security authentication supplied for MQQueueManager
My questions are:
1. Is MQ required by the WSDL service?
2. Are any of these 5 errors a possible cause of the frequent downtime?
As far as I understand, you have WebSphere Process Server configured with WebSphere MQ as the message bus.
An MQ queue might be represented as a JMS binding in a SOAP over JMS configuration; see IBM's article on the subject.
Regarding the errors:
The first 2 errors are simple: the certificates have expired, and you should renew them (see the keytool example below).
I assume exceptions 3-5 are really 1 error; there is an answer to that question on Stack Overflow.
2063 is MQRC_SECURITY_ERROR, i.e. a security-related problem.
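For the certificate errors, the keystores named in the log can be inspected with the JDK's keytool to confirm the validity dates, for example:

keytool -list -v -keystore D:\IBM\WEBSPH~1\APPSER~1\etc\ws-security\samples\dsig-sender.ks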

RTMP in ColdFusion 10 - Type 'coldfusion.flash.messaging.CFRTMPEndPoint' not found

I'm trying to use RTMP with ColdFusion 10 and the embedded LiveCycle ES. The other channels and endpoints work fine, but when the server starts up I get an error complaining that the endpoint class cannot be found for the cf-rtmp channel.
I'm using the standard setup with no special configuration.
As I said, the AMF channel, polling channel, etc. work just fine, and there are no complaints about those channels or endpoints when the server starts up.
This is the error in the logs when the server starts up:
INFO: ColdFusionStartUpServlet: ColdFusion: VM version = 23.7-b01
java.lang.NullPointerException
at coldfusion.server.jrun4.metrics.SimpleLoadMetric.run(SimpleLoadMetric.java:157)
at coldfusion.scheduling.ThreadPool.run(ThreadPool.java:211)
at coldfusion.scheduling.WorkerThread.run(WorkerThread.java:71)
Nov 21, 2013 9:34:14 AM org.apache.catalina.core.ApplicationContext log
INFO: CFMxmlServlet: Macromedia Flex Build: 87315.134646
**** MessageBrokerServlet in application 'Adobe ColdFusion 10' failed to initialize due to runtime exception:
Exception: flex.messaging.MessageException:
Cannot create class of type 'coldfusion.flash.messaging.CFRTMPEndPoint'.
Type 'coldfusion.flash.messaging.CFRTMPEndPoint' not found.
at flex.messaging.util.ClassUtil.createClass(ClassUtil.java:70)
Here are the first lines of the rtmp channel definition in my services-config file referencing the channel and endpoint classes.
<channel-definition id="cf-rtmp" class="mx.messaging.channels.RTMPChannel">
  <endpoint uri="rtmp://{server.name}:2048" class="coldfusion.flash.messaging.CFRTMPEndPoint"/>
  ....
</channel-definition>

How to find source of Connection Reset Error

Where can I go look to find the source of a connection reset error? Here are the details:
I have a Clojure applet that uses clj-http.client.
I need to track down what is causing the following error:
Feb 14, 2013 5:16:04 PM
org.apache.http.impl.client.DefaultRequestDirector execute
INFO: I/O exception (java.net.SocketException)
caught when processing request: Connection reset
Feb 14, 2013 5:16:04 PM
org.apache.http.impl.client.DefaultRequestDirector execute
INFO: Retrying request
We have looked through the server's IIS logs, and cannot find any error indicating a connection reset. We've also looked at the server's Event Logs, and cannot find an error that matches the error I'm getting in the client. As a matter of fact, the IIS logs look OK. I can see my address verification "GET" requests right in the log.
It's just a guess, though I often get that error message when the web server is configured to respond to the wrong host name. If it is serving for www.example.com/my/service and I open a connection to 1.2.3.4/my/service then it hangs up with "connection reset".
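One way to test that guess is to request the raw IP while forcing the expected Host header (the hostname and IP here are the placeholders from the example above):

curl -v -H "Host: www.example.com" http://1.2.3.4/my/service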

SSLSniff error: "SSL Accept Failed"

I'm trying to use the sslsniff tool, and I have some technical issues... I've been looking for similar problems, but the only results are from Twitter feeds, with no useful public answer. So, here it is:
(My version of SSLSniff is 0.8) I'm launching sslsniff with args:
sslsniff -a -c cert_and_key.pem -s 12345 -w out.log
where the cert_and_key.pem file is my authority's certificate concatenated with my unencrypted private key (in PEM format, of course), and 12345 is the port where I redirect traffic with my iptables rule.
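(For reference, a typical redirect rule for this kind of setup, assuming HTTPS traffic arriving on port 443, looks something like:)

iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 12345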
So sslsniff is correctly running:
INFO sslsniff : Certificate ready: [...]
[And any time I connect with a client, the following 2 lines appear:]
DEBUG sslsniff : SSL Accept Failed!
DEBUG sslsniff : Got exception: Error with SSL connection.
On the client side, I've registered my CA as a trusted authority (in Firefox). Then, when I connect through SSL, I get the error:
Secure Connection Failed.
Error code: ssl_error_bad_cert_domain
What is really strange (beyond the fact that the certificate is not automatically accepted, even though it should be signed by my trusted CA) is that I cannot accept the forged certificate by clicking "Add exception...": I always end up back at the error page asking me to add an(other) exception...
Moreover, when I try to connect to, for example, https://www.google.com, SSLSniff's log gains a new line:
DEBUG sslsniff : Encoded Length: 7064 too big for session cache, skipping...
Does anyone know what I'm doing wrong?
-- Edit to sum up the different answers --
The problem is that SSLSniff does not handle alternative names (subjectAltNames) when it forges certificates. Apparently, Firefox refuses any certificate as soon as the Common Name doesn't exactly match the domain name.
For example, for google.com: CN = www.google.com and there is no alternative name. So when you connect to https://www.google.com, it works fine.
But for google.fr: CN = *.google.fr, with these alternative names: *.google.fr and google.fr. So when you connect to https://www.google.fr, FF looks for the alternative names in the forged certificate and, since it obviously doesn't find any, refuses the certificate as malformed.
... So a solution would be to patch/commit... I don't know whether Moxie Marlinspike intentionally left this functionality out because it was too complicated, or was simply not aware of the issue. Anyway, I'll try to have a look at the code.
The session encoded length error message: When caching the SSL session fails, it means that SSL session resumption on subsequent connections will fail, resulting in degraded performance, because a full SSL handshake needs to be done on every request. However, despite using the CPU more heavily, sslsniff will still work fine. The caching fails because the serialized representation of the OpenSSL session object (SSL_SESSION) was larger than the maximum size supported by sslsniff's session cache.
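If you want to observe the resumption failure directly, openssl's s_client can exercise it: the -reconnect flag reconnects to the same server several times and reports whether each session was reused, e.g.:

openssl s_client -connect host:443 -reconnect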
As for your real problem, note that sslsniff does not support X.509v3 subjectAltNames, so if you are connecting to a site whose hostname does not match the subject common name of the certificate, but instead matches only a subjectAltName, then sslsniff will generate a forged certificate without subjectAltNames, which will cause a hostname verification mismatch on the connecting client.
If your problem happens only for some specific sites, let us know the site so we can examine the server certificate, e.g. with:
openssl s_client -connect host:port -showcerts
openssl x509 -in servercert.pem -text
If it happens for all sites, then the above is not the explanation.
Try a straight MITM with a cert you fully control, and make sure you don't have some OCSP/Perspectives/Convergence stuff meddling with things. Other than that, maybe add the cert to the OS trusted roots. I think FF on Windows uses the Windows cert store (Start -> Run -> certmgr.msc). It may also be worth trying with something like Burp to see if the error is localized to SSLSniff or affects all MITM attempts.