How to create ftp (vsftpd) in google cloud compute engine? - google-cloud-platform

How do I set up an FTP server in Google Cloud Compute Engine? I can connect via SFTP without any issues, but my company uses software that connects via FTP to download an XML file from the server. Unfortunately, that software has no SFTP support.
All the examples I found on the internet cover connecting via SFTP, not FTP.
Any ideas or tutorials?

I found a way to do this. Please advise whether there are any risks.
apt-get install vsftpd libpam-pwdfile
nano /etc/vsftpd.conf
Then put the following inside the vsftpd.conf config file:
# vim /etc/vsftpd.conf
listen=YES
listen_ipv6=NO
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
nopriv_user=vsftpd
chroot_local_user=YES
allow_writeable_chroot=yes
guest_username=vsftpd
virtual_use_local_privs=YES
guest_enable=YES
user_sub_token=$USER
local_root=/var/www/$USER
hide_ids=YES
listen_address=0.0.0.0
pasv_min_port=12000
pasv_max_port=12100
pasv_address=888.888.888.888 # My server IP
listen_port=211
Next, open the PAM service file that vsftpd uses (typically /etc/pam.d/vsftpd), remove everything from it, and add these lines instead:
auth required pam_pwdfile.so pwdfile /etc/ftpd.passwd
account required pam_permit.so
Create the main user that will be used by the virtual users to authenticate:
useradd --home /home/vsftpd --gid nogroup -m --shell /bin/false vsftpd
Once that is done we can create our users/passwords file.
htpasswd -cd /etc/ftpd.passwd helloftp
Next, add the directories for the users since vsftpd will not create them automatically.
mkdir /var/www/helloproject
chown vsftpd:nogroup /var/www/helloproject
chmod +w /var/www/helloproject
Finally, start the vsftpd daemon and set it to start automatically on system boot:
systemctl start vsftpd && systemctl enable vsftpd
Check the status to make sure the service is started:
systemctl status vsftpd
● vsftpd.service - vsftpd FTP server
Loaded: loaded (/lib/systemd/system/vsftpd.service; enabled)
Active: active (running) since Sat 2016-12-03 11:07:30 CST; 23min ago
Main PID: 5316 (vsftpd)
CGroup: /system.slice/vsftpd.service
├─5316 /usr/sbin/vsftpd /etc/vsftpd.conf
├─5455 /usr/sbin/vsftpd /etc/vsftpd.conf
└─5457 /usr/sbin/vsftpd /etc/vsftpd.conf
Finally, add firewall rules in Google Cloud to allow access to the FTP ports.
Later I changed the listen address from 0.0.0.0 to something more restrictive.
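For reference, a minimal sketch of such a firewall rule with the gcloud CLI, assuming the default network and the ports from the config above (the rule name is a placeholder; tighten --source-ranges to your client's IP):
# Allow the FTP control port and the passive port range used above
gcloud compute firewall-rules create allow-ftp \
    --allow=tcp:211,tcp:12000-12100 \
    --source-ranges=0.0.0.0/0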

Yes, it is possible to host an FTP server on Google Cloud. In fact, I wrote an in-depth blog post about how to set up an FTP server on Google Cloud.
If your VM is based on Linux, then you have to use an application like vsftpd to set up an FTP server.
Here are the steps:
Step 1: Deploy a Virtual Instance on Google Cloud
Step 2: Open SSH terminal
Step 3: Installing VSFTPD
Step 4: Create a User
Step 5: Configure vsftpd.conf file
Step 6: Preparing an FTP Directory
Step 7: FTP/S or FTP over SSL setup (optional)
Step 8: Opening Ports in Google Cloud Firewall
Step 9: Test and Connect
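As a rough sketch of steps 3-5 on a Debian/Ubuntu VM (the user name is a placeholder; the blog post covers each step in detail):
# Step 3: install vsftpd
sudo apt-get update && sudo apt-get install -y vsftpd

# Step 4: create an FTP user (placeholder name)
sudo adduser ftpuser

# Step 5: key settings in /etc/vsftpd.conf for passive mode behind GCP's NAT
#   pasv_enable=YES
#   pasv_min_port=10000
#   pasv_max_port=10100
#   pasv_address=<the VM's external IP>

# Restart to pick up the changes
sudo systemctl restart vsftpd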

Related

System has not been booted with systemd as init system (PID 1). Can't operate. Failed to connect to bus: Host is down

I am trying to activate the service after creating a systemd service, using the following commands in the Google Cloud terminal:
vim /etc/systemd/system/app.service
Pasted the contents below into this file:
#vim /etc/systemd/system/app.service
[Unit]
# specifies metadata and dependencies
Description=Gunicorn instance to serve myproject
After=network.target
# tells the init system to only start this after the networking target has been reached
# We will give our regular user account ownership of the process since it owns all of the relevant files
[Service]
# The [Service] section specifies the user and group under which our process will run.
User=clashgamers2021
# give group ownership to the www-data group so that Nginx can communicate easily with the Gunicorn processes.
Group=www-data
# We'll then map out the working directory and set the PATH environment variable so that the init system knows where the executables for the process are located
WorkingDirectory=/home/clashgamers2021/clashgamers/
Environment="PATH=/home/clashgamers2021/clashgamers/env/bin"
# We'll then specify the command to start the service
ExecStart=/home/clashgamers2021/clashgamers/env/bin/gunicorn --workers 3 --bind unix:app.sock -m 007 wsgi:app
# This will tell systemd what to link this service to if we enable it to start at boot. We want this service to start when the regular multi-user system is up and running:
[Install]
WantedBy=multi-user.target
For activating this service, I typed:
sudo systemctl start app
sudo systemctl enable app
However I got this error:
clashgamers2021@cloudshell:~/clashgamers (clash-gamers-318206)$ sudo systemctl start app
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
You're trying to run the commands in the Cloud Shell:
Cloud Shell is an interactive shell environment for Google Cloud that makes it easy for you to learn and experiment with Google Cloud and manage your projects and resources from your web browser.
Create a new VM (specify hardware & OS) and connect to it using the SSH button in the Cloud Console, or use other methods described in the documentation.
Then run your commands, and if they don't work, update your question with more details.
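For example, a minimal sketch with the gcloud CLI (instance name, zone, machine type, and image are placeholders):
# Create a small Debian VM and connect to it over SSH
gcloud compute instances create app-vm \
    --zone=us-central1-a \
    --machine-type=e2-small \
    --image-family=debian-11 \
    --image-project=debian-cloud

gcloud compute ssh app-vm --zone=us-central1-a

# Inside the VM (which runs systemd as PID 1) the unit can be managed normally:
sudo systemctl daemon-reload
sudo systemctl start app
sudo systemctl enable app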

Setting up passwordless ssh failed for all the HAWQ hosts

We have 3 nodes and are trying to set up HDFS and Pivotal HAWQ with Ambari. I have already enabled passwordless SSH for all 3 machines, but when I start the HAWQ service I get the error "Setting up passwordless ssh failed for all the HAWQ hosts". Please help me resolve this issue.
On all of your hosts, edit your /etc/ssh/sshd_config file and change "PasswordAuthentication no" to "PasswordAuthentication yes". This can be done with sed too.
sudo sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config
Then restart sshd on all of the hosts:
sudo /etc/init.d/sshd restart
Now you can proceed with the installation of HAWQ. The installation uses a command called gpssh-exkeys. This process uses password authentication to communicate with the hosts so that it can create and exchange keys for the gpadmin account. Once the keys have been exchanged, the gpadmin account no longer needs password authentication.
Also, after the installation is complete, you can revert and disable password authentication again if you like.
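For example, a sketch of reverting it afterwards on each host:
# Turn password authentication back off once the keys have been exchanged
sudo sed -i 's/PasswordAuthentication yes/PasswordAuthentication no/g' /etc/ssh/sshd_config
sudo /etc/init.d/sshd restart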
Lastly, I've asked the PM for HDB at Pivotal to enhance Ambari to do these steps for you automatically. There is a similar process for iptables being disabled during the installation of Hadoop so this would be like that. Ambari would enable password authentication, install HDB, and then disable password authentication.

haproxy in docker container

I'm new to Docker and HAProxy. I tried to follow the example from the official Docker Hub repo.
So, I have this Dockerfile:
FROM haproxy:1.5
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
and a simple HAProxy config (which I expect to redirect local calls to my EB instance):
global
    # daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    server server1 {my-app}.elasticbeanstalk.com:80 maxconn 32
Build and run
$ docker build .
$ docker run --rm d4598bcc293f
The container starts and hangs; Ctrl+C doesn't stop it. Only "docker kill" helps.
My EB resource is up and running
$ curl {my-app}.elasticbeanstalk.com/status
{
"status": "OK"
}
But local calls fail
$ boot2docker ip
192.168.59.104
$ curl 192.168.59.104/status
curl: (7) Failed to connect to 192.168.59.104 port 80: Connection refused
What am I missing or doing wrong?
Thank you!
UPDATE: I've found the problem with the call redirection: a wrong port number in haproxy.cfg.
But this problem still annoys me... the container starts and hangs, Ctrl+C doesn't stop it, and only "docker kill" helps.
If you want to be able to exit with control-c, do docker run -i <image>. The -i means to pass input to the containerized program, and if HAProxy gets a control-c then it will terminate which will stop the container.
HAProxy doesn't produce any output unless you run it in debug mode, so there's not really much point to running attached, though. You might have a better time with docker run -d <image>, which will detach from the container and let it run in the background. To stop it, use docker kill.
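For example, a sketch assuming you tag the image and publish port 80 so the boot2docker/host IP can reach HAProxy (the image/container name is a placeholder):
$ docker build -t my-haproxy .
$ docker run -d -p 80:80 --name my-haproxy my-haproxy
# Later, to stop it:
$ docker kill my-haproxy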

Installing and Viewing Neo4j on Existing AWS EC2 Instance

I'm trying to install the enterprise edition of Neo4j on an existing EC2 (Amazon Linux) instance. So far I've:
wget "link to enterprise"
untarred the file
renamed and moved the folder to NEO4J_HOME
Then I went into the neo4j.properties config file to make the following changes:
# Enable shell server so that remote clients can connect via Neo4j shell.
remote_shell_enabled=true
# The network interface IP the shell will listen on (use 0.0.0.0 for all interfaces)
remote_shell_host=127.0.0.1
# The port the shell will listen on, default is 1337
remote_shell_port=1337
EDITED: Christophe Willemsen pointed out that for my original error I had forgotten to restart the server at that point, but I was still unable to access the web server while it was running. To make it clearer, I've edited the rest of the post:
I went to neo4j-server.properties and uncommented:
org.neo4j.server.webserver.address=0.0.0.0
And started the server:
NEO4J_HOME/bin/neo4j start
WARNING: Max 1024 open files allowed, minimum of 40 000 recommended. See the Neo4j manual.
Using additional JVM arguments: -server -XX:+DisableExplicitGC -Dorg.neo4j.server.properties=conf/neo4j-server.properties -Djava.util.logging.config.file=conf/logging.properties -Dlog4j.configuration=file:conf/log4j.properties -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:-OmitStackTraceInFastThrow
Starting Neo4j Server...WARNING: not changing user
process [28557]... waiting for server to be ready..... OK.
http://localhost:7474/ is ready.
checking the status:
NEO4J_HOME/bin/neo4j status
Neo4j Server is running at pid 28557
I can run the shell, but when I go to localhost:7474 I still cannot connect.
Any help would be appreciated. The only tutorials or help I've found assumed I was starting from scratch with a new instance. If someone could provide instructions for installing, or fix my configuration, that would be great.
Thanks!
You have to edit neo4j-server.properties and uncomment the line with:
org.neo4j.server.webserver.address=0.0.0.0
so that the database listens on an external interface, not just localhost, and you have to open port 7474 in your firewall rules (on EC2, the instance's security group).
Make sure to secure access to the db though:
http://neo4j.com/docs/stable/security-server.html
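For example, a hedged sketch of opening port 7474 with the AWS CLI (the security group ID and source CIDR are placeholders; the same can be done from the EC2 console):
# Allow inbound access to the Neo4j web/REST port from a trusted address only
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 7474 \
    --cidr 203.0.113.10/32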

Setting up JMeter for Distributed testing in AWS with connectivity issues

I have to do distributed testing using JMeter. The objective is to have multiple remote servers in AWS, controlled by one local server, send a file download request to another server in AWS.
How can I set up the different servers in AWS?
How can I connect to them remotely?
Can someone provide some step by step instructions on how to do it?
I have tried several things but keep running into connectivity issues across networks.
We had a similar task and we ran into a bunch of issues as well. Here are the details of the whole process and what we did to resolve the issues we encountered. Hope it helps.
We needed to send requests from 5 servers located in various regions of the world. So we launched 5 micro instances in AWS, each in a different region. We chose the regions to be as geographically apart as possible.
Remote (server) JMeters config
Here is how we set up each instance.
Installed java:
$ sudo apt-get update
$ sudo apt-get install default-jre
Installed JMeter:
$ mkdir jmeter
$ cd jmeter;
$ wget ftp://apache.mirrors.pair.com//jmeter/binaries/apache-jmeter-2.9.tgz
$ gunzip apache-jmeter-2.9.tgz;tar xvf apache-jmeter-2.9.tar
Edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the server.rmi.localport setting. We changed the port to 50000:
server.rmi.localport=50000
Started the JMeter server. Make sure the address and the port the server reports listening on are correct.
$ cd ~/jmeter/apache-jmeter-2.9/bin
$ ./jmeter-server
Local (client) JMeter config
Then we set up JMeter to run tests remotely on these instances on our local client machine:
Ensured we used the same version of JMeter as was running on the servers. Installed Java and JMeter as described above.
Enabled remote testing by editing the jmeter.properties file that can be found in the bin folder of the JMeter installation. The remote_hosts parameter needed to be set to the public DNS of the remote servers we were connecting to.
remote_hosts=54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x,54.x.x.x
We were now able to tell our client JMeter instance to run tests on any or all of our specified remote servers.
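For example (the test plan name is a placeholder):
# Run the test plan on all hosts listed in remote_hosts
$ ./jmeter -n -t test_plan.jmx -r -l results.jtl

# Or on a specific subset of the remote servers
$ ./jmeter -n -t test_plan.jmx -R 54.x.x.x,54.x.x.x -l results.jtl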
Issues and resolutions
Here are the issues we encountered and how we resolved them:
The client failed with:
ERROR - jmeter.engine.ClientJMeterEngine: java.rmi.ConnectException: Connection refused to host: 127.0.0.1
It was due to the server host returning the private IP address as its address because of Amazon NAT.
We fixed this by setting the RMI_HOST_DEF parameter that the /usr/local/jmeter/bin/jmeter-server script passes when starting the server:
RMI_HOST_DEF=-Djava.rmi.server.hostname=54.xx.xx.xx
Now, the AWS instance returned the server’s external IP, and we could start the test.
When the server node attempted to return the result and tried to connect to the client, it tried to connect to the external IP address of my local machine, but it threw a connection refused error:
2013/05/16 12:23:37 ERROR - jmeter.samplers.RemoteListenerWrapper: testStarted(host) java.rmi.ConnectException: Connection refused to host: xxx.xxx.xxx.xx;
We resolved this issue by setting up reverse tunnels at the client side.
First, we edited the jmeter.properties file in the /bin folder of the JMeter installation and uncommented the line containing the client.rmi.localport setting. We changed the port to 60000:
client.rmi.localport=60000
Then we connected to each of the servers using SSH and set up a reverse tunnel to port 60000 on the client:
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -R 60000:localhost:60000 ubuntu@54.x.x.x
We kept each of these sessions open, as the JMeter server needs to be able to deliver the test results to the client.
Then we set up the JVM_ARGS environment variable on the client, in the jmeter.sh file in the /bin folder:
export JVM_ARGS="-Djava.rmi.server.hostname=localhost"
By doing this, JMeter will tell the servers to connect to localhost:60000 for delivering their results. This ends up being tunneled back to the client.
The SSH connections to the servers kept dropping after staying idle for a little bit. To prevent that from happening, we added a parameter to each SSH tunnel command, directing the client to wait 60 seconds of inactivity before sending a null packet to the server to keep the connection alive:
$ ssh -i ~/.ssh/54-x-x-x.us-east.pem -o ServerAliveInterval=60 -R 60000:localhost:60000 ubuntu@54.x.x.x
(.ssh/config version of all required SSH settings:
Host 54.x.x.x
    HostName 54.x.x.x
    Port 22
    User ubuntu
    ServerAliveInterval 60
    RemoteForward 127.0.0.1:60000 127.0.0.1:60000
    IdentityFile ~/.ssh/54-x-x-x.us-east.pem
    IdentitiesOnly yes
Just use ssh 54.x.x.x after setting this up.)
I just went through this on OpenStack and found the same issues... no idea why the JMeter remoting documentation only covers half the required steps. You can do it without tunnels or touching the properties files.
You need
All nodes to advertise their public IP - on AWS/OpenStack this defaults to the private IP
Ingress rules for the RMI port, which defaults to 1099 - I use the default
Ingress rules for the RMI "local" port, which defaults to a dynamic port. Below I use 4001 for the client and 4000 for the servers. The port can be the same, but note the properties are different.
If you are using your workstation as the client, you probably still need tunnels. Archana Aggarwal's answer above has good tips for tunnels.
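A hedged sketch of those ingress rules with the AWS CLI (the security group ID and CIDR are placeholders; restrict the CIDR to the network your JMeter nodes share):
# RMI registry port (1099) plus the fixed "local" ports used below (4000/4001)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 1099 --cidr 10.0.0.0/16
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 4000-4001 --cidr 10.0.0.0/16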
Remote servers
Set java.rmi.server.hostname and server.rmi.localport inline or in the properties file.
jmeter-server -Djava.rmi.server.hostname=publicip -Dserver.rmi.localport=4000
Sneaky server on client
You can also run one on the same machine as the client. For clarity I've set java.rmi.server.hostname but left server.rmi.localport as dynamic
jmeter-server -Djava.rmi.server.hostname=localip
Client
Set java.rmi.server.hostname and client.rmi.localport inline or in the properties file. Use -R etc like so:
jmeter -n -t Test.jmx -Rremotepublicip1,remotepublicip2 -Djava.rmi.server.hostname=clientpublicip -Dclient.rmi.localport=4001 -GmypropA=1 -GmypropB=2 -lresults.jtl
When you go for distributed testing using JMeter in AWS, I would suggest using Docker, which will help you set up the JMeter test infrastructure very quickly. This way we can also ensure that the same version of Java and JMeter is installed in all of the Amazon instances, which is very important for JMeter distributed testing.
Ensure that you set the properties below and that the ports are open for jmeter-server. [They do not have to be exactly 1099 and 50000.]
server.rmi.localport=50000
server_port=1099
java.rmi.server.hostname=SERVER_IP
For the client:
client.rmi.localport=60000
java.rmi.server.hostname=SERVER_IP - this step is very important, as the container in the AWS instance will have its own IP address in the Docker network, so master and slave cannot communicate. That is why we explicitly set this property.
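A minimal sketch of how a server container might be started so the RMI ports are reachable (my-jmeter-server is a hypothetical image that launches jmeter-server with the properties above):
# Publish the fixed RMI ports so the client instance can reach the container
docker run -d \
    -p 1099:1099 \
    -p 50000:50000 \
    --name jmeter-slave \
    my-jmeter-server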
More info:
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker-in-aws/