I wanted to ask something regarding a Debian virtual machine on Google Cloud. I've set some configurations on my Compute Engine instance and on my laptop such that I can SSH into the VM from my laptop with ssh account1@IPAddress, and I'm then logged in as account1@VM-name $ in my terminal. After doing so, I cloned my own GitHub repo to start my Node.js server (the Node.js server might be unimportant in this case, but the git clone is important to the story line). Then I disconnected from the terminal and decided to try the SSH-in-browser method. With that I'm logged in with my email, let's say as account2@VM-name $. However, when I typed ls, the GitHub folder was not there. Is this normal with Debian VMs across cloud services, such that different accounts cannot see each other's folders, or is there actually a way to share the same files between different "SSH accounts"? Or maybe I should sudo git clone so that it is saved in the root folder of the VM?
Thanks a lot for the help!
When you logged in using account1@vm-name and cloned your GitHub repo, it was placed in the /home/account1 directory.
Now that you are logged in as account2@vm-name, ls lists the contents of /home/account2, so it is expected that nothing is there.
You can list the first account's files with the command below. Note that sudo cd /home/account1 would not work, because cd is a shell builtin and cannot be run through sudo; pass the path to ls directly instead:
ls /home/account1
# or, if the directory permissions deny access:
sudo ls -la /home/account1
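If both accounts genuinely need to share the same working copy, a common approach is a shared group and directory. A minimal sketch, assuming a group named devs and the path /srv/shared (both placeholders):

sudo groupadd devs
sudo usermod -aG devs account1
sudo usermod -aG devs account2
sudo mkdir -p /srv/shared
sudo chgrp devs /srv/shared
sudo chmod 2775 /srv/shared   # setgid bit: new files inherit the devs group

After logging out and back in (so the group membership takes effect), either account can git clone into /srv/shared and the other can read and write the files.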
I had a few questions about automatic git pulls on a remote server. I am aware there are several questions like this, but I wasn't sure what steps to take exactly, and I don't want to mess up my current setup with a mistake :/
To wit, the environment is on a Google Cloud VM. I am running a Flask-based website that renders each page with the render_template() function.
The website resides inside its git folder, i.e. I never set up a bare repo and copied stuff over. When I set it up a couple of years ago, I just did git clone repo-url, then inside the repo directory did flask run. Then I set up nginx to connect to the site's socket created with uwsgi inside the repo directory.
It has been working fine. I make changes locally to the content, push to GitHub, then log in to the VM and perform a git pull.
I want to do this automatically. I tried adding a cron job to do this, where the job basically ran a script, and the script did the git pull. Script content was:
cd /repo
git pull
Running the script in the server worked, but cron never managed to do the pull.
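(For reference: cron runs jobs with a minimal environment, no SSH agent, and a bare-bones PATH, so a pull that works interactively often fails under cron. A sketch of a crontab entry that sidesteps the usual pitfalls; every path here is illustrative:

*/5 * * * * cd /home/user/repo && GIT_SSH_COMMAND="ssh -i /home/user/.ssh/id_ed25519" /usr/bin/git pull >> /home/user/gitpull.log 2>&1

The redirect captures git's output to a log file, so a failing pull can actually be diagnosed.)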
I have been reading about web hooks, and there is a bunch of stuff about post-receive hooks, post-update hooks, and making bare repos. At this point, I am embarrassed to say I have no idea what I should be doing.
Any help is greatly appreciated.
Another option would be to consider a GitHub Action, which, from GitHub, could interact with your Google Cloud VM.
For example, actions-hub/gcloud.
- uses: actions-hub/gcloud@master
  env:
    PROJECT_ID: test
    APPLICATION_CREDENTIALS: ${{ secrets.GOOGLE_APPLICATION_CREDENTIALS }}
  with:
    args: cp your-file.txt gs://your-bucket/
    cli: gsutil
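For the deployment described in the question, the same action can run gcloud itself. A sketch of the command to pass as args, where the instance name, zone, and repository path are placeholders, and which assumes the credentials used are allowed to SSH into the VM:

gcloud compute ssh your-vm --zone=your-zone --command="cd /repo && git pull"

Triggered on every push to master, this would replace the manual log-in-and-pull step.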
I've got a Django project which works great. Previously we just cloned down and used password authentication. I changed the remote to git@bitbucket.org:myteam/our_repo.git
Recently we started requiring 2FA, so now we can only clone down over SSH.
For this project, I created an access key (read-only, which is all I need for cloning down on a staging server) and I was able to clone the repo (git clone git@bitbucket.org:myteam/our_repo.git) without issue and get it all set up. This appeared to have worked.
The other server admin remoted in and tried to run git pull origin master, and he got a permission error. His Windows user is part of the Administrators group, but for some reason that didn't matter: his local user had to be added to the directory with full access before he could run git pull origin master.
It appears that this permission issue is causing other problems, too. File uploads (from the Django admin) are no longer actually uploading the files into the directory on the server; my guess is that this is related to the permissions issue as well. Nothing was changed that should impact this; the project was just cloned down over SSH.
Does cloning something down over SSH change the permissions on the directories or somehow lock it down more? This wasn't an issue before - only since we've switched over to SSH.
Any feedback is helpful!
Does cloning something down over SSH change the permissions on the directories or somehow lock it down more?
No, it does not change anything locally.
And 2FA only impacts HTTPS URLs (where your password must be a PAT, a Personal Access Token).
It has no bearing on SSH URLs.
Check first the output of ssh -Tv git@bitbucket.org.
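If that authentication test passes, the remaining issue is most likely plain Windows file ownership: the clone was created under one user account, so its files are owned by that account. A possible fix, with the path and user name as placeholders, is to grant the other admin full access recursively:

icacls "C:\path\to\our_repo" /grant OtherAdmin:(OI)(CI)F /T

The (OI)(CI) flags make the grant inherit to files and subfolders, which should also cover the Django upload directory.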
Is there a way to add multiple git repositories in the same Google cloud project?
You currently cannot do this. We know this is a useful feature, and we're working hard on it. Stay tuned!
We've added the ability to have multiple Cloud Source Repositories for every cloud project.
You can read about how to add a new repo to your project here: https://cloud.google.com/source-repositories/docs/setting-up-repositories
There is no way of doing this as of today. Every project can only have one remote repository.
Git submodules should do the trick. Add the git repositories as submodules.
See
https://git-scm.com/docs/git-submodule
https://git-scm.com/book/en/v2/Git-Tools-Submodules
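A minimal sketch, with the repository URLs and folder names as placeholders:

# Run inside the main repository
git submodule add https://github.com/example/lib-one.git lib-one
git submodule add https://github.com/example/lib-two.git lib-two
git commit -m "Add lib-one and lib-two as submodules"

Collaborators then clone with git clone --recursive, or run git submodule update --init after a plain clone.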
No, there isn't, but you can use Git subtree merges to add multiple "subrepositories" as folders in your main repository, which will do the trick.
See details here https://help.github.com/articles/about-git-subtree-merges/
(There are also submodules, as @Shishir stated. Note that submodules are recorded in a committed .gitmodules file, so they are included in clones done by others; collaborators just have to run git submodule update --init after cloning to fetch their contents.)
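For reference, a minimal subtree-merge sketch following the article above; the remote name, URL, and target folder are placeholders:

git remote add sub https://github.com/example/sub-repo.git
git fetch sub
git merge -s ours --no-commit --allow-unrelated-histories sub/master
git read-tree --prefix=sub-dir/ -u sub/master
git commit -m "Merge sub-repo as a subtree under sub-dir/"

The subrepository's files then live in sub-dir/ of the main repository and are included in every ordinary clone.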
Every Google Cloud project can only have one remote repository.
However, it's definitely possible to have multiple local repositories that correspond to the same remote Google Cloud repository.
The official documentation describes the following procedure for using a Cloud Source Repository as a remote for a local Git repository:
Create a local Git repository
Now, create a repository in your environment using the Git command line tool and pull the source files for a sample application into the repository. If you have real-world application files, you can use these instead.
$ cd $HOME
$ git init my-project
$ cd my-project
$ git pull https://github.com/GoogleCloudPlatform/appengine-helloworld-python
Add the Cloud Source Repository as a remote
Authenticate with Google Cloud Platform and add the Cloud Source Repository as a Git remote.
On Linux or Mac OS X:
$ gcloud auth login
$ git config credential.helper gcloud.sh
$ git remote add google https://source.developers.google.com/p/<project-id>/
On Windows:
$ gcloud auth login
$ git config credential.helper gcloud.cmd
$ git remote add google https://source.developers.google.com/p/<project-id>/
The credential helper scripts provide the information needed by Git to connect securely to the Cloud Source Repository using your Google account credentials. You don't need to perform any additional configuration steps (for example, uploading SSH keys) to establish this secure connection.
Note that the gcloud command must be in your $PATH for the credential helper scripts to work.
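Once the remote is added, it behaves like any other Git remote; for example, assuming a master branch, pushing is just:
$ git push google master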
It also explains how to create a local Git repository by cloning a Cloud Source Repository:
Clone a Cloud Source Repository
Alternatively, you can create a new local Git repository by cloning the contents of an existing Cloud Source Repository:
$ gcloud init
$ gcloud source repos clone default <local-directory>
$ cd <local-directory>
The gcloud source repos clone command adds the Cloud Source Repository as a remote named origin and clones it into a local Git repository located in <local-directory>.
I've been following along with this blog to set up a SHOUTcast server on OpenShift using the DIY cartridge. After replacing the destip with my server's OPENSHIFT_DIY_IP and editing the action and stop hooks, I find that the server isn't starting when I visit the application's URL; instead I'm getting:
503 Service Temporarily Unavailable
Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later.
When I checked the log file used in the action hook, I found:
server.log
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv.exe': Permission denied
(while using the Windows SHOUTcast distribution) and
nohup: failed to run command `/var/lib/openshift/xxxx app-id xxxx/app-root/runtime/repo//diy/sc_serv': Permission denied
(while using the Linux SHOUTcast distribution)
I've read on several forums that OpenShift often resets the chmod file permissions and prevents applications from being executed, and that's exactly what I found my OpenShift application doing (after using FileZilla to inspect and edit the file permissions). Since sc_serv or sc_serv.exe is the main application (a command-line application) that keeps the server going, I'm wondering how I could get around this odd permissions error.
start action hook (when I used the Windows SHOUTcast distribution)
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv.exe $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
start action hook (when I used the Linux SHOUTcast distribution)
nohup $OPENSHIFT_REPO_DIR/diy/sc_serv $OPENSHIFT_REPO_DIR/diy/sc_serv.conf > $OPENSHIFT_DIY_LOG_DIR/server3.log 2>&1 &
I'd like to note that the blogger used Linux while I'm using Windows to edit the OpenShift repository, and I assume that the files extracted from the Linux distribution of SHOUTcast are the same whether extracted on Windows or Linux, but I clearly can't test that. All I can tell so far is that OpenShift is blocking the main executable (whether Linux or Windows), which essentially runs the whole service. I've tested the server myself on my own localhost and found it working perfectly, so I have no doubt that if it were to run (with the right settings listed in this blog) it would work.
Edit: Solved
In order to have the permissions changed and kept that way, they need to be edited from git using:
git update-index --chmod=+x filename
git commit -m 'update file permissions etc.'
git push origin master
After stumbling across more Stack Overflow answers (feel free to link one that explains this; I don't remember which one I used), I read that OpenShift resets all file permissions on every git push (to retain the safety of the code, I assume). So the only way to solve the permissions issue is in fact with git, not through FTP software like FileZilla or through SSH. This way the chmod change remains permanent.
In the end, what I have in OpenShift's diy folder is the Linux distribution of SHOUTcast (which can be extracted with 7-Zip), modified so that it can be reached through port-forwarding as in this blog. To reach the server (having set up OpenShift's tools), all you have to do before broadcasting is run this on the command line:
rhc port-forward [app-name]
If you're using SAM broadcasting software, the good news is that you can easily add a MySQL database and port-forward into it as well using that same command. Port-forwarding means that instead of using the ip:port for your stream and MySQL on OpenShift, you use localhost or 127.0.0.1 and whatever ports rhc port-forward indicates. You could also be using your other favorite broadcasting software, in which case I'd recommend setting up a batch file like so:
cd C:\YourSoftwarePath
start YourSoftware.exe
start rhc port-forward [app-name]
If you have hardware doing the streaming, such as a Barix box, there will probably be some way of doing this in some other tricky manner.
I have a Jenkins server on OS X 10.7, which polls a Subversion server, builds the code and packages the app. The last step that I need to complete is deploying the app on a remote host, which is a Windows share. Note that my domain account has write access to the target folder and the volume is mounted. I've tried using a shell script build step:
sudo cp "path/to/app" "/Volumes/path/to/target"
However, I get a "no tty" response. I was able to run this command successfully in Terminal, but not as a build step in Jenkins.
Does this have something to do with the user being used when starting up Jenkins? As a side note, the default user.name is jenkins and my JENKINS_HOME resides in /Users/Shared/Jenkins. I would appreciate any help as to how to achieve this.
Your immediate problem seems to be that you are running Jenkins in the background and sudo wants to prompt for a password. Run Jenkins in the foreground with $ java -jar jenkins.war.
However, this most probably won't solve your problem, as you'll be asked to enter the password from the terminal you started Jenkins from whenever the command runs (presumably not what you want). You need to find a way to copy your files without needing root permissions. In general, it is not a good idea to rely on administrative permissions in your builds (there are exceptions, but your case is not one of them).
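One way to do that, assuming the share can be mounted by the user running Jenkins (the server, share, and paths below are placeholders), is to mount the volume as that user so the copy needs no elevated rights:

# Mount the share as the user running Jenkins, then copy without sudo
mkdir -p "$HOME/deploy-mount"
mount_smbfs "//DOMAIN;youruser@fileserver/share" "$HOME/deploy-mount"
cp -R "path/to/app" "$HOME/deploy-mount/target/"

Since the mount then belongs to the Jenkins user, the cp in the build step succeeds without sudo and without a tty.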