I run an AWS EC2 instance with a shell script that pulls the latest code from Git. GitHub changed to personal access token authentication, so when I execute the shell script I have to enter my username and password manually, but I would like to run the script automatically with a cron job.
#!/bin/sh
# pull the latest code, run the job, then commit and push the results
cd ~/code/NLP
git pull
python3 main.py
git add .
git commit -m "raspberry pi run"
git push origin master
I am looking for a way to supply my Git username and password automatically.
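One common way to make this non-interactive (a sketch, not the only option) is to store a GitHub personal access token with git's credential helper:

# cache credentials on disk so cron runs don't prompt
git config --global credential.helper store
# the next interactive git pull asks once, then saves a line like this
# (username and token are placeholders) to ~/.git-credentials:
# https://<username>:<personal-access-token>@github.com

After that one-time pull, the cron job can run the script without any prompt.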
I am trying to run a GCE startup script that downloads all dependencies, clones a repository, and runs a Python program. Here is the code:
#!/usr/bin/bash
apt-get update
apt-get -y install python3.7
apt-get -y install git
export HOME=/home/codingassignment
echo $HOME
cd $HOME
rm -rf sshlogin-counter/
git clone https://rutu2605:************@github.com/rutu2605/sshlogin-counter.git
nohup python3 -u ./sshlogin-counter/alphaclient.py > output.log 2>&1 &
When I run echo $HOME, it displays the path in the log file. However, when I cd into it, it says the directory is not found:
May 08 23:15:18 alphaclient google_metadata_script_runner[488]: startup-script: /home/codingassignment
May 08 23:15:18 alphaclient google_metadata_script_runner[488]: startup-script: /tmp/metadata-scripts701519516/startup-script: line 7: cd: /home/codingassignment: No such file or directory
That's because at the time the script is executed, the /home/codingassignment directory doesn't exist yet. To quote the answer you referred to in the comment:
The startup script is executed as root, when the user has not been created yet and no user is logged in.
The home directory for the codingassignment user is created later, when you try to log in through SSH, for example by using the SSH button in Cloud Console or the gcloud compute ssh command.
My suggestion:
a) Download the code to some "neutral" directory, like /assignment, and set proper permissions on that folder so the codingassignment user can access it later.
b) Try creating the user first with adduser; this might solve your problem. Create the user, then use su codingassignment to drop root permissions if you don't need them while executing the script (see the sketch below).
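A minimal sketch of suggestion (b), assuming a Debian-based image where adduser is available; the access token is omitted from the clone URL:

# create the user non-interactively so /home/codingassignment exists
adduser --disabled-password --gecos "" codingassignment
# drop root privileges and run the clone as that user
su - codingassignment -c "git clone https://github.com/rutu2605/sshlogin-counter.git"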
I have deployed my Django app on Heroku via GitHub.
It is a test server, so I am using SQLite.
But because of the dyno manager, my SQLite database resets every day.
So I want to download just the database from Heroku.
I tried this command.
heroku git:clone -a APP-NAME
But this clones an empty repository.
And when I run the heroku run bash -a APP-NAME command, I get an ETIMEOUT error.
Is there any other way to download the source code on heroku?
What you want to do with git is not possible, because changes to the database are not versioned.
The command to run bash on Heroku is heroku run bash, not heroku bash run. You may have to specify the app using the -a flag: https://devcenter.heroku.com/articles/heroku-cli-commands#heroku-run
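For example, with the app specified via the -a flag:

heroku run bash -a APP-NAME   # opens a one-off dyno running your app's code

Note that a one-off dyno starts from a fresh copy of the slug, so it will not contain changes your web dyno has made to a SQLite file.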
I solved this by downloading the application slug.
If you have not used git to deploy your application, or heroku git:clone has only created an empty repository, you can download the slug that was built when your application was last deployed.
First, install the heroku-slugs CLI plugin with heroku plugins:install heroku-slugs,
then run:
heroku slugs:download -a APP_NAME
This will download and decompress your slug into a directory with the same name as your application.
I created a script to deploy, but every time it throws this error:
"Pseudo-terminal will not be allocated because stdin is not a terminal.
Host key verification failed."
My .gitlab-ci.yml:
make_deploy:
  stage: deploy
  script:
    - apk update
    - apk add bash
    - apk add git
    - apk add openssh
    - bash scripts/deploy.sh
    - echo "Deploy succeeded!"
  only:
    - master
deploy.sh:
#!/bin/bash
user=gitlab+deploy-token-44444
pass=passwordpass
gitlab="https://"$user":"$pass"@gitlab.com/repo/project.git"
ssh-keygen -R 50.200.50.15
chmod 600 key.pem
ssh -tt -i key.pem ubuntu@ec2-50-200-50-15.compute-1.amazonaws.com << 'ENDSSH'
rm -rf project
git clone $gitlab
cd project
npm i
pm2 restart .
ENDSSH
exit
You need to change your authentication type: instead of using a username and password, use SSH key exchange.
This way, your script will not be prompted for username and password input.
But before you do that, you should first create SSH keys and upload the public key to your repository settings; it will serve as the primary authentication between the instance and the GitLab server.
More info is in GitLab's SSH key documentation.
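A minimal sketch of that setup, assuming an Ed25519 key at the default path, run on the machine that performs the clone (here, the EC2 instance):

# generate a key pair with no passphrase so scripts can use it non-interactively
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""
# print the public key, then paste it into the repository's deploy key settings
cat ~/.ssh/id_ed25519.pub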
Test your connection.
ssh -T git@gitlab.com
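With the key in place, deploy.sh can clone over SSH instead of embedding the username and password in the URL:

git clone git@gitlab.com:repo/project.git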
So here is what I want to do:
1. Push to master in git
2. Have gitlab-ci hear that push and start a pipeline
3. The pipeline builds code and pushes a docker container to the gitlab registry
4. The pipeline logs into a digital ocean droplet via ssh
5. The pipeline pulls the docker container from the gitlab registry
6. The pipeline starts the container
I can get up to step 4 with no problem, but step 4 just fails every which way. I've tried the ssh key approach:
https://gitlab.com/gitlab-examples/ssh-private-key/blob/master/.gitlab-ci.yml
But that did not work.
So I tried a plain text password approach like this:
image: gitlab/dind:latest

before_script:
  - apt-get update -y && apt-get install sshpass

stages:
  - deploy

deploy:
  stage: deploy
  script:
    - sshpass -p "mypassword" ssh root@x.x.x.x 'echo $HOME'
This version just exits with code 1, like so:
Pseudo-terminal will not be allocated because stdin is not a terminal.
ln: failed to create symbolic link '/sys/fs/cgroup/systemd/name=systemd': Operation not permitted
/usr/local/bin/wrapdocker: line 113: 54 Killed docker daemon $DOCKER_DAEMON_ARGS &> /var/log/docker.log
Timed out trying to connect to internal docker host.
Is there a better way to do this? How can I at the very least access my droplet from inside the gitlab-ci build environment?
I just answered this related question: Create react app + Gitlab CI + Digital Ocean droplet - Pipeline succeeds but Docker container is deleted right after
Here's the solution he is using to get SSH credentials set up:
before_script:
  ## Install ssh agent (so we can access the Digital Ocean Droplet) and run it.
  - apk update && apk add openssh-client
  - eval $(ssh-agent -s)
  ## Write the environment variable value to the agent store, create the ssh directory and give the right permissions to it.
  - echo "$SECRETS_DIGITAL_OCEAN_DROPLET_SSH_KEY" | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  ## Make sure that ssh will trust the new host, instead of asking
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  ## Test it!
  - ssh -t ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} 'echo $HOME'
Code credit goes to https://stackoverflow.com/users/6655011/leonardo-sarmento-de-castro
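To cover steps 4-6, a deploy job can then pull and restart the container over that same SSH connection. A sketch under assumptions: the image path, the container name myapp, and the use of GitLab's built-in CI_JOB_TOKEN for registry login are mine, not from the original answer.

deploy:
  stage: deploy
  script:
    # assumes the before_script above has already loaded the SSH key
    - >
      ssh ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP}
      "docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com &&
      docker pull registry.gitlab.com/<group>/<project>:latest &&
      (docker rm -f myapp || true) &&
      docker run -d --name myapp registry.gitlab.com/<group>/<project>:latest"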
I'm using Django with Sqlite3 on OpenShift and I need to reset my database (clear all the tables). How do I do that?
You can run the flush command to clear data from all the tables:
python manage.py flush
Note that this command will IRREVERSIBLY DESTROY all data currently in the database.
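flush asks for confirmation before deleting anything; in a non-interactive session (for example over a scripted SSH connection) you can skip the prompt with the --noinput flag:

python manage.py flush --noinput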
To run a manage.py command on OpenShift, make sure you have SSH access to the application.
Step 1
Method 1: With RedHat Client
The easiest way is to install rhc.
You can install rhc by following the official guide.
After installation and configuration, run:
rhc ssh <app name>
If everything goes correctly, this logs you into your app's repo.
(or) Method 2: Without RedHat Client
Add your public key in the console settings.
Copy the SSH command from the Remote Access section of the console.
The command looks like:
ssh <some random string>@your-domain.rhcloud.com
Paste the command into a terminal window and press Enter.
Step 2
Now navigate to your source directory by running:
cd app-root/repo/
Step 3
Now that you are in the repo, you can run your manage.py tasks:
python manage.py makemigrations
or
python3 manage.py migrate
This is how you run a manage.py command in an OpenShift repo.
Make sure you don't share your keys.