In the top right corner of the SSH-in-Browser terminal for a Compute Engine instance, there is a download button. What I'd like to do is trigger this functionality directly from the VM terminal.
Why?
Every once in a while, I'm asked to make a copy of a database for one reason or another. So far, this requires me to 1) make the dumps, 2) zip the dumps, 3) download the zip and 4) send it to whoever requested it. I decided to automate this since it's becoming more frequent.
So far, 1 and 2 are done, and I would usually leave it at that, but I'm wondering if I can just trigger the "downloading function" from the script that handles the dumps and zipping, just to save me a few clicks and writing down the path. For example:
#!/bin/bash
here=$(pwd)
# Work area for the dumps
mkdir -p /home/me/back/data
/home/me/saveTables "$1"
/home/me/saveRecords "$1"
cd /home/me/back
zip -r "$here/$1.zip" ./* > /dev/null
cd "$here"
sudo rm -rf /home/me/back
# and then download it with something like
# ('gcpdk' is made up just for example purposes)
gcpdk --download "$here/$1.zip"
Is this possible from the SSH-in-Browser at all? So far I'm guessing that the download function is not accessible from within the VM terminal, but I have not found documentation about the downloads themselves yet, so I can't tell. Or perhaps this is something I should do with a different tool?
By the way, I can't copy files to Cloud Storage (I lack the permissions), so at least for now that is not an option.
Any help, link or confirmation of whether or not this is doable is welcome.
And just to be extra sure: this is not something critical. It's just me being lazy. Cheers!
I have 5 Terraform configuration files, all using remote-exec, in the same folder:
node1.tf
node2.tf
node3.tf
node4.tf
node5.tf
Now I want node1.tf to run first and, as soon as its provisioning is completed, run node2.tf, and so on.
How can I achieve this?
If I run terraform apply now, it will provision all the files in whatever order Terraform chooses, possibly seemingly at random, but I want my files to be applied one by one.
As hinted already in the comments, there seems to be a misunderstanding on your side about how Terraform works. Terraform is not a programming language, and even less a procedural one - i.e. it's not the case that what you have in the .tf files is executed top down (or in any specific order).
Instead Terraform code is declarative. I.e. you specify the desired state and the Terraform tool's job is to identify how to get from the current state to the desired one.
Most likely, either Terraform is not the right tool for your task, or you should reconsider your constraints.
As @Marcin suggests in the comments, you can also hack around this with a file structure like this:
dir1/
node1.tf
dir2/
node2.tf
dir3/
node3.tf
...
And then run:
cd dir1 && terraform apply && cd ..
cd dir2 && terraform apply && cd ..
cd dir3 && terraform apply && cd ..
which in theory is what you are asking for. But be warned that this opens you up to a lot of trouble with maintaining inconsistent state (e.g. a problem happening during dir2's apply, etc.). A wrapper script for this approach is sketched below.
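If you do go this route, a rough sketch (assuming each dirN has already been initialized with terraform init, and that you want to stop at the first failure) might be:

#!/bin/bash
# Apply each configuration in order; abort at the first failure.
set -e
for dir in dir1 dir2 dir3; do
    (cd "$dir" && terraform apply -auto-approve)
done

The subshell keeps the directory change local, and set -e stops the loop if any apply fails - but note that nothing rolls back whatever the failed directory had already created.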
Can gsutil's rsync use the gzip'd size for change detection?
Here's the situation:
Uploaded non-gzip'd static site content to a bucket using cp -Z so it's compressed at rest in the cloud.
Modify HTML files locally.
Need to rsync only the locally modified files.
So the upshot is that the content is compressed in the cloud and uncompressed locally. Can rsync be used to figure out what's changed?
From what I've tried, I'm thinking no, because of the way rsync does its change detection:
If -c is used, compare checksums but ONLY IF file sizes are the same.
Otherwise use times.
And it doesn't look like -J/-j affects the file size comparison (the local uncompressed file size is compared against the compressed cloud version, which of course never matches), so -c won't kick in. Then the times won't match either, and thus everything is uploaded again.
This seems like a fairly common use case. Is there a way of solving this?
Thank you,
Hans
To figure out how rsync identifies what has changed when using gsutil, please check the Change Detection Algorithm section of the documentation.
I am unsure how you want to compare gzip'd and non-gzip'd content, but maybe gsutil compose could be used as a middle step to compare files before they are compressed.
Also take into account gsutil rsync's 4th limitation:
The gsutil rsync command copies changed files in their entirety and does not employ the rsync delta-transfer algorithm to transfer portions of a changed file. This is because Cloud Storage objects are immutable and no facility exists to read partial object checksums or perform partial replacements.
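One possible workaround (my own sketch, not something from the gsutil docs): keep a local checksum manifest and only re-upload files whose checksum has changed, sidestepping gsutil's size/mtime comparison entirely. The paths and bucket name below are placeholders, and it assumes file names without whitespace:

#!/bin/bash
# Re-upload only files whose local checksum changed since the last run.
# /path/to/site and gs://my-bucket are placeholders.
cd /path/to/site
manifest=.upload-manifest
touch "$manifest"
find . -type f ! -name "$manifest" ! -name "$manifest.new" -exec md5sum {} + | sort > "$manifest.new"
# Lines that are new or changed compared to the previous run:
comm -13 "$manifest" "$manifest.new" | awk '{print $2}' | while read -r f; do
    gsutil cp -Z "$f" "gs://my-bucket/${f#./}"
done
mv "$manifest.new" "$manifest"

The trade-off is that you maintain your own state file instead of relying on rsync's change detection.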
I have been testing an older AWS Tools install using AWSToolsAndSDKForNet_sdk-3.3.398.0_ps-3.3.390.0_tk-1.14.4.1.msi and a newer install using AWSToolsAndSDKForNet_sdk-3.5.2.0_ps-4.1.0.0_tk-1.14.5.0.msi. The code that I am using to test with is
Set-AWSCredential -AccessKey:$ACCESSKEY -SecretKey:$SECRETKEY -StoreAs:default
$items = Get-S3Object -BucketName:$BUCKETNAME -Region:'eu-west-1' -Key:'revit/2020'
Write-Host "$($items.Length) items"
$count = 1
foreach ($item in $items) {
Write-Host "$count $($item.key)"
$count ++
}
I am seeing VERY different behavior, and can't figure out why. With 3.3 the code works as intended: I end up with a list of files in my bucket and key. Performance is pretty decent; it takes a moment, but I have about 5000 files in my "subfolders".
When I run this with 4.1 it takes 3-5 times as long and returns nothing.
It seems that Help is a bit different too. A first run of get-help Get-S3Object -detailed can take as long as 10 minutes, with CPU, memory and disk access often at 99% utilization. A second run is quite quick. 3.3 does nothing of the sort.
So, is this current build of AWS Tools for Powershell just not ready for prime time? My searches for AWS Tools 4.1 performance have turned up nothing.
For what it is worth, I am using the MSI installer because I need the install to actually work consistently, and the NuGet approach has been very problematic on a number of production workstations. But if there is another option I would love to look at it. The main issue is that I ultimately need to do the install and then immediately load the modules and work with AWS. I don't have that working with the MSI-based install yet, but that's for a different thread.
It looks like they changed the results from Get-S3Object. You will need to add -Select S3Objects.Key to get the results you're looking for (or just -select *). Here's the excerpt from the change notes:
Most cmdlets have a new parameter: -Select. Select can be used to change the value returned by the cmdlet. For example the service API used by Get-S3Object returns a ListObjectsResponse object but the cmdlet is configured to return only the S3Objects field. Now you can specify -Select * to receive the full API response. You can also specify the path to a nested result property like -Select S3Objects.Key. In certain situations it may be useful to return a cmdlet parameter, this can be achieved with -Select ^ParameterName.
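Applied to the code in the question, that would look something like this (untested):

$items = Get-S3Object -BucketName:$BUCKETNAME -Region:'eu-west-1' -Key:'revit/2020' -Select S3Objects.Key

With that selection, each element of $items should already be the key string, so the loop would print $item directly rather than $item.key.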
Found by going to the Change Notes and doing a CTRL+F for Get-S3Object. Hope this resolves it for you!
Hi all,
I'm using Kettle 4.0.1 community version. I'm comfortable with Spoon, but to run jobs and so on I need to use Pan and Carte. My problem is that, other than spoon.bat, neither pan.bat nor carte.bat opens, and I'm unable to run kitchen.bat either. Can someone suggest the best solution?
First of all, in order to run jobs you will need to use kitchen.bat (unless you want to execute them remotely). Kitchen, Pan and Carte are command line tools, therefore you will also need to specify your parameters on the command line.
For example, say you want to run a file called job.kjb located in C:/jobs/ with a minimal log level. You would execute kitchen.bat from the command line as follows:
kitchen.bat -file=C:/jobs/job.kjb -level=minimal
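Similarly, a transformation (.ktr file) would be run with Pan; the path here is just a made-up example:

pan.bat -file=C:/transformations/transformation.ktr -level=minimal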
Please see also more information here:
http://wiki.pentaho.com/display/EAI/Kitchen+User+Documentation
There is a directory where a buddy adds new builds of a product.
The listing looks like this
$ ls path-to-dir/
01
02
03
04
$
where the numbers listed are not files but names of directories containing the builds.
I have to manually go and check every time whether there is a new build or not. I am looking for a way to automate this, so that the program can send an email to some people (including me) whenever path-to-dir/ is updated.
Do we have an already existing utility or a Perl library that does this?
inotify.h does something similar, but it is not supported on my kernel (2.6.9).
I think there can be an easy way in Perl.
Do you think this will work?
Keep a loop running in Perl that does an ls path-to-dir/ every, say, 5 minutes and stores the results in an array. If it finds that the new results differ from the old results, it sends out an email using Mail or Email.
If you're going for Perl, I'm sure the excellent File::ChangeNotify module will be extremely helpful to you. It can use inotify, if available, but also all sorts of other file-watching mechanisms provided by different platforms. Also, as a fallback, it has its own watching implementation, which works on every platform but is less efficient than the specialized ones.
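A minimal sketch of its blocking interface, based on the module's synopsis (the directory path is a placeholder):

use strict;
use warnings;
use File::ChangeNotify;

my $watcher = File::ChangeNotify->instantiate_watcher(
    directories => ['path-to-dir'],
);

# Blocks until something changes, then reports each event.
while ( my @events = $watcher->wait_for_events() ) {
    for my $event (@events) {
        print $event->path, ' ', $event->type, "\n";
        # send your notification email here
    }
}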
Checking for different ls output would send a message even when something is deleted or renamed in the directory. You could instead look for files with an mtime newer than the last message sent.
Here's an example in bash; you can run it every 5 minutes:
now=`date +%Y%m%d%H%M.%S`
if [ ! -f "/path/to/cache/file" ] || [ -n "`find /path/to/build/dir -type f -newer /path/to/cache/file`" ]
then
    touch /path/to/cache/file -t "$now"
    sendmail -t <<< "To: aaa@bbb.ccc, xxx@yyy.zzz
Subject: New files found

Dear friend,

I have found a couple of new files.
"
fi
Can't it be a simple shell script?
old_n=0
while :; do
    n=$(ls -al path-to-dir | wc -l)
    if [ "$n" -gt "$old_n" ]; then
        # your Mail code here
        old_n=$n
    fi
    sleep 5
done
Yes, a loop in Perl as described would do the trick.
You could keep track of when the directory was last modified; if it hasn't changed, there isn't a new build. If it has changed, an old build might have been deleted or a new one added. You probably don't want to send alerts when old builds are removed; it is crucial that the email is sent when new builds are added. A rough sketch of the mtime check is shown below.
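For example (the path is a placeholder; this just checks the directory's own mtime, which changes when entries are added or removed):

last=$(stat -c %Y path-to-dir)
# ... some time later ...
if [ "$(stat -c %Y path-to-dir)" -ne "$last" ]; then
    echo "directory changed"   # send the email here
    last=$(stat -c %Y path-to-dir)
fi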
However, I think that msw has the right idea; the build should notify when it has completed the copy out to the new directory. It should be a script that can be changed to notify the correct list of people - rather than a hard-wired list of names in the makefile or whatever other build control system you use.
You could use dnotify; it is the predecessor of inotify and should be available on your kernel. It is still supported by newer kernels.