Railo with SFTP - ColdFusion

Railo supports SFTP mappings, which we can use with the cffile tag for various operations; more information at
https://github.com/getrailo/railo/wiki/Railo-Resources#mappings
I can create a mapping like
this.mappings["/ftpdir"] = "ftp://username:password#server.com/dir";
and it works perfectly fine for me for copying/moving, etc.
The only issue is: if I want to use SFTP instead of FTP, what should be changed in the mapping?
Update
I noticed that SFTP uses port 22, so I tried supplying the same in the FTP path as below:
this.mappings["/ftpdir"] = "ftp://username:password#server.com:22/dir";
but it doesn't work. Railo tries to look for the directory on my local computer instead of on the FTP server.
Thanks,
Pritesh

If you want to use SFTP instead of FTP, change the protocol in the URL to sftp://.
I do not think you need to specify port 22 explicitly; it is the default for SSH/SFTP.
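For example, a minimal sketch of the changed mapping in Application.cfc, reusing the placeholder credentials and directory from the question (standard URL syntax separates the credentials from the host with @):
// Application.cfc -- sketch only; username, password, server.com and /dir are placeholder values
this.mappings["/ftpdir"] = "sftp://username:password@server.com/dir";
The mapping can then be used with cffile the same way as the FTP mapping, e.g. <cffile action="copy" source="/ftpdir/report.txt" destination="/ftpdir/backup/report.txt"> (report.txt is a made-up file name).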

Related

Cannot log in to SFTP using the Bizzflow.net SFTP extractor

I cannot successfully log in to our SFTP server using ex-sftp, which is part of Bizzflow.net. When I try the credentials in any SFTP client, it works, but using the Bizzflow.net extractor ends with an incorrect name or password error message.
The issue occurs when the username contains non-alphabetical characters. Typically this happens when the SFTP server is hosted on Windows and the login uses the domain\username format. The reason is that ex-sftp does not encode the username correctly. The best solution is to use a local username without the domain\ prefix. You can also submit a bug to ex-sftp at https://gitlab.com/bizzflow-extractors/ex-sft
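For illustration only (this is not the extractor's code), the backslash in a domain\username login is exactly the kind of character that has to be percent-encoded before it can appear in a connection URL; the username value below is a made-up example:
from urllib.parse import quote

username = r"CORP\jdoe"             # made-up Windows-style login
encoded = quote(username, safe="")  # percent-encode everything, including the backslash
print(encoded)                      # prints CORP%5Cjdoe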

Why am I getting "Internal Server Error" running two Odoo instances (same domain but different ports)?

I have two instances of Odoo on a server in the cloud. If I perform the following steps, I get "Internal Server Error":
I log in to the first instance (http://111.222.33.44:3333)
I close the session
I load the address of the second instance in the same browser (http://111.222.33.44:4444)
If I want to work in the second instance (on another port), I need to remove the browser cookies first to access the other Odoo instance. If I do this, everything works fine.
If I load them in different browsers (Firefox and Chromium) at the same time, they also work fine.
It's not an Nginx issue, because I tried with and without it.
Is there a way to solve this permanently? Is this the expected behaviour?
If you have access to the source code, you can change this file as shown below and check whether the issue is solved:
addons/web/controllers/main.py
if db != request.session.db:
    request.session.logout()
    request.session.db = db
    abort_and_redirect(request.httprequest.url)
Then delete the line request.session.db = db that appears below this if statement.
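A rough sketch of the region in question, marking which occurrence to remove (the exact surrounding code depends on your Odoo version):
if db != request.session.db:
    request.session.logout()
    request.session.db = db                      # this assignment inside the if stays
    abort_and_redirect(request.httprequest.url)
request.session.db = db                          # this standalone assignment below the if is the one to delete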
Try the following change in:
openerp/addons/base/ir/ir_http.py
In the method _handle_exception, somewhere around line 140, you will find this piece of code:
attach = self._serve_attachment()
if attach:
    return attach
Replace it with:
if isinstance(exception, werkzeug.exceptions.HTTPException) and exception.code == 404:
    attach = self._serve_attachment()
    if attach:
        return attach
You can perfectly well serve all the databases with a single OpenERP server on your machine. Unfortunately, you did not mention what error you were seeing and what you expected as a result, which makes it a bit harder to help you ;-)
Anyway, here are some random ideas based on the information you provided:
If you have a problem with OpenERP not listening on all interfaces, try specifying 0.0.0.0 as the xmlrpc_interface in the configuration file; this should have OpenERP listen on port 8069 on all IPs.
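For example, a sketch of the relevant lines in the server configuration file (the path to the file depends on your install):
; openerp-server.conf -- sketch only
[options]
xmlrpc_interface = 0.0.0.0
xmlrpc_port = 8069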
Note that Apache is not relevant if you're connecting to e.g. http://www.sample.com:8069/?db=openerp because you're directly connecting to OpenERP. If you want to go through Apache, you need to setup ReverseProxy rules in your vhost configs, and OpenERP does not need to listen to all public IPs then.
OpenERP 6.1 and later can autodetect the database name based on the virtual host name and filter the list of available databases: you need to start it with the --db-filter parameter, which is a pattern used to filter the list of available databases. %h represents the domain name and %d is the first component of that domain. So for example with --db-filter=^%d$ I will only see the test database if I reach the server via http://test.example.com:8069. If only one database matches, the list is not displayed and the user will directly end up on the right database. This works even behind Apache reverse proxies if you make sure that OpenERP sees the external hostname, i.e. by setting an X-Forwarded-Host header in your Apache proxy config and enabling the --proxy mode of OpenERP.
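As a concrete sketch (option spellings are from OpenERP 6.1/7.0; check them against your version):
# one server for all domains: listen everywhere, filter databases by the first
# domain component, and trust the headers set by the reverse proxy
./openerp-server --xmlrpc-interface=0.0.0.0 --db-filter='^%d$' --proxy-mode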
The port reuse problem comes because you are trying to start multiple OpenERP servers on the same interface/port combination. This is simply not possible unless you are careful to start just one server per IP with the IP set in the xmlrpc_interface parameter, and I don't think you need that. The named-based virtual hosts that Apache supports are all handled by a single master process that listens on port 80 on all the interfaces. If you want to do the same with OpenERP you only need to start one OpenERP server for all your domains, and make it listen on 0.0.0.0, port 8069, as I explained above.
On top of that it's not clear what you would have set differently in the various config files. Running 40 different OpenERP servers on the same machine with identical code sounds like a lot of overkill. OpenERP is designed to be multi-tenant so that many (read: hundreds) of databases can be served from the same server.
Finally, I think this is the expected behaviour. The cookies of all websites are stored per website (per domain) in the web browser. So if I only change the port, the cookies of the first instance conflict with the cookies of the other instance because they have the same domain (111.222.33.44 in my example).
So there are some workarounds:
Change Domain Locally
Create a couple of domain names on my laptop in /etc/hosts:
111.222.33.44 cloud01
111.222.33.44 cloud02
Then the cookies no longer interfere with each other. To access each instance:
http://cloud01:3333
http://cloud02:4444
Browser Extension: Multi-login or Multi-account
There is another workaround. If I use this Chromium extension, the problem disappears because the sessions are treated separately:
SessionBox

How to retrieve an SSL certificate in Django?

Is it possible to retrieve the client's SSL certificate from the current connection in Django?
I don't see the certificate in the request context passed from lighttpd.
My setup has lighttpd and django working in fastcgi mode.
Currently, I am forced to manually connect back to the client's IP to verify the certificate.
Is there a clever technique to avoid this? Thanks!
Update:
I added these lines to my lighttpd.conf:
ssl.verifyclient.exportcert = "enable"
setenv.add-request-header = (
    "SSL_CLIENT_CERT" => env.SSL_CLIENT_CERT
)
Unfortunately, the env.SSL_CLIENT_CERT fails to dereference (does not exist?) and lighttpd fails to start.
If I replace the "env.SSL_CLIENT_CERT" with a static value like "1", it is successfully passed to django in the request.META fields.
Anything else I could try? This is lighttpd 1.4.29.
Yes, though this question is not Django-specific.
Usually web servers have an option to export client-side SSL certificate data as environment variables or HTTP headers. I have done this myself with Apache (not lighttpd).
This is how I did it:
On Apache, export the SSL certificate data to environment variables
Then, add new HTTP request headers containing these environment variables
Read the headers in Python code (see the sketch at the end of this answer)
http://redmine.lighttpd.net/projects/1/wiki/Docs_SSL
Looks like the option name is ssl.verifyclient.exportcert.
Though I am not sure how to do step 2 with lighttpd, as I have little experience with it.
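For step 3, a minimal sketch of the Django side, assuming the web server was configured in step 2 to forward the certificate in a header named SSL_CLIENT_CERT (Django then exposes it as HTTP_SSL_CLIENT_CERT in request.META; adjust the name to whatever header you actually set):
# views.py -- sketch only; the header name is an assumption carried over from step 2
from django.http import HttpResponse, HttpResponseForbidden

def whoami(request):
    pem = request.META.get("HTTP_SSL_CLIENT_CERT")
    if not pem:
        return HttpResponseForbidden("No client certificate was forwarded")
    # the PEM text can now be parsed or verified as needed
    return HttpResponse("Received a client certificate of %d bytes" % len(pem))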

Komodo Edit and SSH Private Keyfile in Pageant for Amazon EC2

I would like to use Komodo Edit to edit files on my Ubuntu Amazon EC2 instance (running Django).
According to this old Nabble post from 2008 (oh boy), Komodo Edit should support SSH authentication via Pageant keyfiles.
So, I imported my .pem keyfile in PuTTYGen, converted it to .ppk (no password) and loaded it into Pageant. I am able to use PuTTY just fine to SSH into my instance.
I can also use Notepad++'s reasonable NppFTP with the AWS instance by adding the server and using the original .pem file directly (NppFTP doesn't seem to use pageant.)
However, I would like to use Komodo Edit, so I loaded it up, went to Edit -> Preferences -> Servers, and put in my public DNS address (ec2-174-129-xxx-xxx.compute-2.amazonaws.com) and my username, which was required ('ubuntu').
When attempting to connect, however, I get a "Javascript Application Error: ''" (a seemingly empty error) from Komodo Edit. I can't find any sort of logs or console to watch the handshake (Notepad++'s NppFTP plugin had a nice one).
Obviously I can just use NppFTP but I would like to get this feature working. Any ideas?
Use PuTTY 0.60. According to this site, there's an incompatibility between the versions you are using.

cfhttp dns resolution

I'm trying to get CFHTTP to talk to a domain that I have created for testing purposes on my test server. The address of the domain is "mydomain.example.com". Every time I try to connect using cfhttp I get an error stating:
Your requested host "mydomain.example.com" could not be resolved by DNS.
I have already added the entry to the Windows hosts file:
127.0.0.1 mydomain.example.com
I've also made sure that java.net.InetAddress can resolve the domain by doing the following in a ColdFusion page:
<cfset loc.javaInet = createObject("java","java.net.InetAddress")>
<cfset loc.dnsLookup = loc.javaInet.getByName("mydomain.example.com")>
for which I get back
mydomain.example.com/127.0.0.1
I've even tried starting and stopping the ColdFusion service and changing the value of networkaddress.cache.ttl in runtime\jre\lib\security\java.security to 0.
I'm at a loss as to why everything seems to be resolving at the JRE level but not at the CFHTTP level. Any ideas?
Why is it that after I post a question, I figure it out? Go fig.
The issue was that, for some reason, I still had an old proxy configuration set up on my java.args line in runtime\bin\jvm.config.
After removing the old configuration setting and restarting the ColdFusion service, I'm back in business.
For those who want to know, you can set the proxy information for cfhttp to use by adding the following arguments to the java.args line in the jvm.config file:
-Dhttp.proxyHost=<ip address>
-Dhttp.proxyPort=<portnumber>
-Dhttp.proxyUser=<username>
-Dhttp.proxyPassword=<password>
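For illustration, a sketch of how those flags sit on the existing java.args line (the existing JVM options are elided, and the host, port, and credential values are placeholders):
# runtime\bin\jvm.config -- sketch only; keep your existing options and append the proxy flags
java.args=-server -Xmx512m ... -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=8080 -Dhttp.proxyUser=myuser -Dhttp.proxyPassword=mypassword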
Your problem may have to do with the way DNS look-ups are cached by ColdFusion. CFHTTP keeps a copy of the DNS look-up indefinitely. You could try flushing this by restarting ColdFusion.
Also, Windows won't pick up changes to the hosts file easily; the easy way is a reboot of the Windows machine.
I agree, the problem is a DNS one, and using a proxy just masks it. Try setting your DNS resolver on Windows to something stable and public, like 8.8.8.8, which is a Google DNS server.