This is driving me nuts:
My 'ftp.company.com' FTP account structure:
/root/
    public_html
    domains
        mysubdomain.com
            public_html
I set up my git config:
[git-ftp]
url = ftp.company.com/domains/mysubdomain.com/public_html
user = company.com
password = ******
but 'git ftp init/push' keeps uploading to:
//public_html/domains/mysubdomain.com/public_html
Why is git-ftp referring to my FTP ROOT/public_html as the base URL?
What am I doing wrong here?
thanks and regards
Tom
My site opens normally for the first several hours after deployment, but later it starts returning 502 Bad Gateway, which is so weird. The site uses Django, Nginx, and uWSGI. I have done a lot of research on Google but found nothing.
Here is my configuration:
1. Nginx configuration
# mysite_nginx.conf
upstream django {
server 127.0.0.1:8004; # for a web port socket (we'll use this first)
}
server {
listen 80;
server_name www.example.com; # substitute your machine's IP address or FQDN
charset utf-8;
client_max_body_size 75M; # adjust to taste
location /media {
alias /home/blender_font_project/django_file/Blender_website/media;
}
location /static {
alias /home/blender_font_project/django_file/Blender_website/static;
}
location / {
uwsgi_pass 127.0.0.1:8003;
include /etc/nginx/uwsgi_params;
}
}
2. uWSGI configuration
# mysite_uwsgi.ini file
[uwsgi]
chdir = /home/blender_font_project/django_file/Blender_website
module = djangoTest5.wsgi
master = true
processes = 10
socket = :8003
vacuum = true
harakiri=60
daemonize=/home/blender_font_project/uwsgi_file/real3dfont_logfile
3. This is my Nginx error log:
231 connect() failed (111: Connection refused) while connecting to upstream
BTW, I have set Django's DEBUG to True, and I can access static resources at www.example.com/static/example.jpg, but the web page shows 502.
I really don't know why. Thanks for any help!
(...After a million years of struggle and strife, with inspiration from a superhero in the comments named @Atul Mishra, I finally figured it out...)
It was a problem in Django itself: I had forgotten to install the MySQL module used by my view. I would have expected a Django error page if it were a Django problem, but there was none, so I mistakenly attributed it to Nginx or uWSGI.
But the weird thing is that Django should have reported the error, and it didn't. What an irresponsible dude!!
So:
1. Remember to set up Django's error logging (a sketch follows below), it saves your life, and
2. Test Django with runserver before Nginx enters the stage, even when a comet is striking the earth!!
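For point 1, a minimal sketch of what such a logging setup could look like (my own illustration, not the poster's actual settings; the log path is a placeholder):
# settings.py (sketch; the log path is a placeholder)
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "filename": "/var/log/django/error.log",  # placeholder path
        },
    },
    "loggers": {
        # django.request logs unhandled exceptions raised while serving views
        "django.request": {
            "handlers": ["file"],
            "level": "ERROR",
            "propagate": True,
        },
    },
}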
My Django website is served over HTTPS. When I try to POST data to the website from a script, I get this error: "Referer checking failed - no Referer". It seems to be a CSRF issue but I do not know how to solve it.
Example :
import requests
r = requests.post('https://mywebsite/mypage', data = {'key':'value'})
print r.text
gives me this output:
[...]
<p>Reason given for failure:</p>
<pre>
Referer checking failed - no Referer.
</pre>
<p>In general, this can occur when there is a genuine Cross Site Request Forgery, or when
<a
href="https://docs.djangoproject.com/en/1.8/ref/csrf/">Django's
CSRF mechanism</a> has not been used correctly. For POST forms, you need to
ensure:</p>
<ul>
<li>Your browser is accepting cookies.</li>
<li>The view function passes a <code>request</code> to the template's <code>render</code>
method.</li>
<li>In the template, there is a <code>{% csrf_token
%}</code> template tag inside each POST form that
targets an internal URL.</li>
<li>If you are not using <code>CsrfViewMiddleware</code>, then you must use
<code>csrf_protect</code> on any views that use the <code>csrf_token</code>
template tag, as well as those that accept the POST data.</li>
</ul>
[...]
Do I need to pass a Referer header before sending the POST data (which would not be convenient)? Or should I disable CSRF for this page?
Thanks
AFAIK, this is the purpose of CSRF: to prevent posting data from unknown, untrusted sources. You need a CSRF token to make this POST, which Django generates dynamically.
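A minimal sketch of how the script could satisfy the check with requests (assuming the page at this URL sets the csrftoken cookie; the URL and form field are the placeholders from the question):
import requests

session = requests.Session()

# GET the page first so Django sets the csrftoken cookie
session.get('https://mywebsite/mypage')
csrf_token = session.cookies.get('csrftoken')

# send the token back in the X-CSRFToken header and include a Referer,
# since Django performs strict referer checking for HTTPS requests
r = session.post(
    'https://mywebsite/mypage',
    data={'key': 'value'},
    headers={
        'X-CSRFToken': csrf_token,
        'Referer': 'https://mywebsite/mypage',
    },
)
print(r.text)
Alternatively, if the endpoint is meant to be called by scripts rather than browsers, exempting just that one view with Django's csrf_exempt decorator is the usual way to disable CSRF for a single page.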
Upgrading Django might fix the missing Referer error.
As of Django 4.0 (release notes), the backend will first check the Origin header before falling back to the Referer header (source):
CsrfViewMiddleware verifies the Origin header, if provided by the browser, against the current host and the CSRF_TRUSTED_ORIGINS setting. This provides protection against cross-subdomain attacks.
In addition, for HTTPS requests, if the Origin header isn’t provided, CsrfViewMiddleware performs strict referer checking. This means that even if a subdomain can set or modify cookies on your domain, it can’t force a user to post to your application since that request won’t come from your own exact domain.
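If you do upgrade, a minimal settings sketch for the related CSRF_TRUSTED_ORIGINS setting (the domains below are placeholders; from Django 4.0 each entry must include the scheme):
# settings.py (sketch; domains are placeholders)
CSRF_TRUSTED_ORIGINS = [
    "https://example.com",
    "https://www.example.com",
]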
It's possible you have a reverse proxy running, for example an nginx proxy_pass to 127.0.0.1:8000?
In this case, Django expects the Cross-Site Request Forgery protection tokens to match hostname 127.0.0.1, but they will be coming from a normal domain (for example example.com):
Expected source: http://127.0.0.1
Actual source: https://example.com
HTTP reverse proxy (example.com:80 -> localhost:3000) is a common way to use nginx with NodeJS applications, but it doesn't work well with Django:
Client-facing URL: https://example.com
Server proxy URL: http://127.0.0.1:3000
It is better to run Django through a Unix socket rather than a port (example.com:80 -> <socket>). You can do this with Gunicorn:
Client-facing URL: https://example.com
Server proxy URL: unix:/run/example.com.sock
Here's how to do this with Django, Gunicorn, and nginx:
Let's say you've got a Django project root, which contains a system folder (the one where settings.py and wsgi.py are):
export DJANGO_PROJECT_PATH=/path/to/django/root
export DJANGO_SETTING_FOLDER=system
First, make sure you have Gunicorn installed and that you are using a virtual environment:
cd $DJANGO_PROJECT_PATH
source .venv/bin/activate # <- Use a virtual environment
pip3 install gunicorn # <- install Gunicorn in the venv
Run Gunicorn. This will start the Django project similar to running python3 manage.py runserver, except that you can listen for requests on a Unix socket:
# note the --bind option: that is the Unix socket
$DJANGO_PROJECT_PATH/.venv/bin/gunicorn \
--workers=3 \
--access-logfile - \
--bind unix:/run/example.com.sock \
--chdir=$DJANGO_PROJECT_PATH/ \
$DJANGO_SETTING_FOLDER.wsgi:application
Then create an HTTP proxy using nginx that passes HTTP requests from clients through the Gunicorn-created socket:
/etc/nginx/sites-enabled/example.com:
server {
listen 80;
listen [::]:80;
server_name example.com;
# serve static files directly through nginx
location /static/ {
autoindex off;
root /path/to/django/root;
}
# serve user-uploaded files directly through nginx
location /media/ {
autoindex off;
root /path/to/django/root;
}
# You can do fun stuff like aliasing files from other folders
location /robots.txt {
alias /path/to/django/root/static/robots.txt;
}
# here is the proxy magic
location / {
include proxy_params;
proxy_pass http://unix:/run/example.com.sock; # <- the socket!
}
}
Make sure to restart nginx:
sudo service nginx restart
After all this, your csrf tokens should match the domain name of your site and you'll be able to log in and submit forms.
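If Django still sees the wrong scheme or host behind the proxy, a hedged settings sketch that may also be needed (it assumes nginx's proxy_params forwards the Host and X-Forwarded-Proto headers; add the corresponding proxy_set_header lines if yours does not):
# settings.py (sketch; example.com is a placeholder)
ALLOWED_HOSTS = ["example.com"]
USE_X_FORWARDED_HOST = True  # trust the Host header forwarded by nginx
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")  # treat proxied requests as HTTPS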
Amazon has instructions for Postfix and Sendmail, but not OpenSMTPD, so I'm adding them here.
Tested with OpenBSD 5.8
1. Verify your domain and a sender in the AWS SES console. Save your SMTP settings.
2. Set up the SMTP authentication details in the mail secrets database (replacing $smtpUsername:$smtpPassword with the values from step 1):
# touch /etc/mail/secrets
# chmod 640 /etc/mail/secrets
# chown root:_smtpd /etc/mail/secrets
# echo "ses $smtpUsername:$smtpPassword" >> /etc/mail/secrets
# makemap /etc/mail/secrets
3. Configure OpenSMTPD:
# nano /etc/mail/smtpd.conf
listen on lo0
table aliases db:/etc/mail/aliases.db
table secrets db:/etc/mail/secrets.db
accept for local alias <aliases> deliver to mbox
accept from local for any relay via tls+auth://ses@email-smtp.us-east-1.amazonaws.com auth <secrets>
4. Restart OpenSMTPD:
# rcctl restart smtpd
5. Test it:
# sendmail -v -f verified-sender@verified-domain.com to@example.com
Subject: test subject
test body
^D
Errors?
watch your line-breaks in smtpd.conf
# smtpd -n to check for syntax errors in smtpd.conf
Try port 587 if your machine is blocking port 25 (add :587 to the end of the AWS URL in smtpd.conf, e.g. relay via tls+auth://ses@email-smtp.us-east-1.amazonaws.com:587).
I am in the process of setting up Kerberos on CentOS 7 (more specifically: the Hortonworks HDP 2.3 sandbox) running in a VirtualBox VM. My problem is that kinit seems to be unable to reach my KDC: if I add an address to my /etc/hosts file the answer is "Resource temporarily unavailable while getting initial credentials", and if I leave that file as is I get the message "could not contact any host for realm mycompany while getting initial credentials".
The KDC is running (I can find it with ps, and the service starts with an "okay" message); the same goes for kadmin.
As a guide for setting up Kerberos I followed these two guides:
CentOS guide
Guide 2
My config files:
krb5.conf
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmin.log
[libdefaults]
default_realm = MYCOMPANY.COM
dns_lookup_realm = true
dns_lookup_kdc = true
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
[realms]
MYCOMPANY.COM = {
kdc = kerberos.mycompany.com
admin_server = kerberos.mycompany.com
}
[domain_realm]
.mycompany.com = MYCOMPANY.COM
mycompany.com = MYCOMPANY.COM
kdc.conf
[kdcdefaults]
kdc_ports = 88
kdc_tcp_ports = 88,750
[realms]
MYCOMPANY.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
kadm5.acl
*/admin@MYCOMPANY.COM *
/etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.96.140 sandbox.hortonworks.com sandbox ambari.hortonworks.com
192.168.1.3 mycompany.com kerberos.mycompany.com
I get the "Resource..." error if I have any address in the third line of the hosts file, if that line is missing I get the "could not contact..." error.
I could trace the kinit command with something along the lines of KRB5_TRACE (unfortunately I can't find the link I got it from any more, nor remember the exact command) to the address specified in the hosts file, so kinit seems to contact the right address; it's just that the KDC does not listen there.
Netstat shows that the KDC is listening on the ports specified in the kdc.conf
Any help would be appreciated
Okay so it does work now. Things I did to fix it:
/etc/resolv.conf
mycompany.com 127.0.0.1
/etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.96.140 sandbox.hortonworks.com sandbox ambari.hortonworks.com
127.0.0.1 mycompany.com kerberos.mycompany.com
And, most embarrassing: I used kinit mycompany/admin for the principal user/admin@mycompany.com, which is of course wrong.
The right call is of course kinit user/admin
I am trying to create a function to upload two XML files to another website once a day. I can make a connection fine using this code:
<cfftp action = "open"
username = "xxxx"
connection = "MyConnection"
password = "xxxx"
server = "xxx"
passive="yes"
secure="true">
but then when I try to put the file using this code
<cfftp
action="putFile"
connection="MyConnection"
localfile="xxx"
remotefile="xxx">
then I get this error
An error occurred during the sFTP putFile operation.
Error: Permission denied.
The error occurred in xxxxx: line 13
11 : connection="MyConnection"
12 : localfile="xxxx"
13 : remotefile="xxxx">
Additional background info: I can upload via FileZilla.
Just use passive="yes" with the putFile operation:
<cfftp
action="putFile"
connection="MyConnection"
localfile="xxx"
remotefile="xxx"
passive="yes">
I just had a look at some code I wrote a while ago that uses cfftp, and my remotefile contains the full path. Can you confirm whether yours does too?
The "Permission denied" error would also make sense if it was trying to upload into the wrong directory. Let us know how you go.