Why does this work:
gcloud config list project --format 'value(core.project)'
but not this:
gcloud config list project --format=value(core.project)
The documentation uses the = notation, but when I use it I get the error "number expected". My guess is that it's trying to evaluate the projection value(core.project) as a number, and the quotes tell it to evaluate it as a string.
I'm unfamiliar with zsh, but bash reports syntax error near unexpected token '(' if you do this.
This is not an issue with gcloud per se.
The issue is that shells such as zsh and bash have their own interpretation of (...). In bash, for example, parentheses introduce a subshell.
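You can reproduce the complaint without gcloud being involved at all; in bash:
$ echo --format=value(core.project)
bash: syntax error near unexpected token `('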
The solution is to ensure that the flag values are passed to the command as-is rather than being evaluated by the shell.
As @hobbs correctly points out, you can wrap the flag value in '...' or "..." and it will get through intact. My preference is --flag=value and, when using bash, '...' when no variable expansion is ever desired and "..." when it is. In practice I always use = and default to "...":
gcloud config list project \
--format="value(core.project)"
I want to execute shell commands inside an if-block in Gnuplot. I tried the following:
datatype = 'full'
if ( datatype eq 'full' ) {
# Run shell command
!echo 'full'
} else {
# Run different shell command
!echo 'not full'
}
However, this gives me the error "filename.plt" line xx: invalid command
FYI, I already know that instead of !echo I can use print to do the same thing. That's not the point. I want to use shell commands with the ! symbol inside the if-block. Any help will be appreciated.
@choroba has given the correct solution: use the system("command") function instead of !command.
As to why this is necessary, in brief:
(1) The ! operator is interpreted as indicating a shell command only if it is the first token on a gnuplot input line. Otherwise it is interpreted as a logical NOT.
(2) Gnuplot bracketed clauses { lots of commands } are treated internally as a single long input line. This is an implementation detail that has changed over time. The safest general guideline is that a bracketed clause should not contain any syntax that is documented as acting on or affecting a single line of input. This includes @ macro substitution, single-line if statements, and, as you found, the ! operator.
(1+2) Therefore a ! inside a bracketed clause is not seen as the first token, and is interpreted as a logical NOT operator instead.
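For reference, here is the snippet from the question rewritten with system(); the logic is unchanged, only the shell calls differ:
datatype = 'full'
if ( datatype eq 'full' ) {
    # Run shell command
    system("echo 'full'")
} else {
    # Run different shell command
    system("echo 'not full'")
}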
I would like to automate zsh's installation and configuration, and I am currently unable to check whether a file whose name begins with '~' exists and is not empty.
zsh_rcfile='~/.zshrc'
set -- $zsh_rcfile
if [ -s "${zsh_rcfile}" ]; then
printf "Zsh is already configured."
fi
When I execute the script in the terminal, no error is returned, but no output is produced either.
I tried hard-coding the pathname and using it without the curly braces, but the result is the same.
I also tried not using the set command (which prevents nasty surprises with empty names or names beginning with a dash).
The 'if' statement works without the tilde symbol (i.e. ~), but that would be an inferior solution, as I need to automate other processes that traverse the whole system tree (not just the 'home' partition).
Can anyone help me achieve this comparison against a path beginning with '~'?
N.B.: I'm using zsh 5.7.1 (x86_64-debian-linux-gnu).
Tilde expansion isn't performed on the result of a parameter expansion. You want to leave the ~ unquoted so that it is expanded when you define the parameter.
% zsh_rcfile='~/.zshrc'
% print $zsh_rcfile
~/.zshrc
% zsh_rcfile=~/.zshrc
% print $zsh_rcfile
/Users/<user>/.zshrc
The -s operator returns false if the file doesn't exist in the first place (which makes sense, since a nonexistent file is trivially empty).
-s file
true if file exists and has size greater than zero.
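Applied to the script in the question, the fix is just the assignment; quoting the expansion in the test afterwards is still fine:
zsh_rcfile=~/.zshrc
if [ -s "${zsh_rcfile}" ]; then
    printf "Zsh is already configured."
fi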
I've designed a data transformation in Dataprep and am now attempting to run it by using the template in Dataflow. My flow has several inputs and outputs - the Dataflow template provides them as a JSON object with key/value pairs for each input and its location. They look like this (line breaks added for easy reading):
{
"location1": "project:bq_dataset.bq_table1",
#...
"location10": "project:bq_dataset.bq_table10",
"location17": "project:bq_dataset.bq_table17"
}
I have 17 inputs (mostly lookups) and 2 outputs (one csv, one bigquery). I'm passing these to the gcloud CLI like this:
gcloud dataflow jobs run job-201807301630 \
--gcs-location=gs://bucketname/dataprep/dataprep_template \
--parameters inputLocations={"location1":"project..."},outputLocations={"location1":"gs://bucketname/output.csv"}
But I'm getting an error:
ERROR: (gcloud.dataflow.jobs.run) unrecognized arguments:
inputLocations=location1:project:bq_dataset.bq_table1,outputLocations=location2:project:bq_dataset.bq_output1
inputLocations=location10:project:bq_dataset.bq_table10,outputLocations=location1:gs://bucketname/output.csv
From the error message, it appears to be merging the inputs and outputs, so that with two outputs, the inputs are paired off against them alternately:
input1:output1
input2:output2
input3:output1
input4:output2
input5:output1
input6:output2
...
I've tried quoting the input/output objects (single and double quotes, plus removing the quotes inside the objects), wrapping them in [], and using tildes, but no joy. Has anyone managed to execute a Dataflow job with multiple inputs?
I finally found a solution for this via a huge process of trial and error. There are several steps involved.
Format of --parameters
The --parameters argument is a dictionary-type argument. There are details on these in a document you can read by typing gcloud topic escaping in the CLI, but in short it means you'll need an = between --parameters and the arguments, and then the format is key=value pairs with the value enclosed in quote marks ("):
--parameters=inputLocations="object",outputLocations="object"
Escape the objects
Then, the quotes inside the objects need escaping so they don't end the value prematurely, so
{"location1":"gcs://bucket/whatever"...
Becomes
{\"location1\":\"gcs://bucket/whatever\"...
Choose a different separator
Next, the CLI gets confused because while the key=value pairs are separated by a comma, the values also have commas in the objects. So you can define a different separator by putting it between carets (^) at the start of the argument and between the key=value pairs:
--parameters=^*^inputLocations="{\"location1\":\"...\"}"*outputLocations="{\"location1\":\"...\"}"
I used * because ; didn't work - maybe because it marks the end of the CLI command? Who knows.
Note also that the gcloud topic escaping info says:
In cmd.exe and PowerShell on Windows, ^ is a special character and
you must escape it by repeating it. In the following examples, every time
you see ^, replace it with ^^^^.
Don't forget customGcsTempLocation
After all that, I'd forgotten that customGcsTempLocation needs adding to the key=value pairs in the --parameters argument. Don't forget to separate it from the others with a * and enclose it in quote marks again:
...}*customGcsTempLocation="gs://bucket/whatever"
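Putting all of the above together, the final command is shaped something like this (the table and output names are from my job; the temp location here is a placeholder):
gcloud dataflow jobs run job-201807301630 \
--gcs-location=gs://bucketname/dataprep/dataprep_template \
--parameters=^*^inputLocations="{\"location1\":\"project:bq_dataset.bq_table1\"}"*outputLocations="{\"location1\":\"gs://bucketname/output.csv\"}"*customGcsTempLocation="gs://bucketname/temp"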
Pretty much none of this is explained in the online documentation, so that's several days of my life I won't get back - hopefully I've helped someone else with this.
We want to migrate from Bitbucket Pipelines to Google Cloud Build to test, build and push Docker images.
How can we use environment variables without a CryptoKey? For example:
- printf "https://registry.npmjs.org/:_authToken=${NPM_TOKEN}\nregistry=https://registry.npmjs.org" > ~/.npmrc
To use environment variables in the args portion of your build steps you need:
"a shell to resolve environment variables with $$" (as mentioned in the example code here)
and you also need to be careful with your usage of quotes (use single quotes)
See below the break for a more detailed explanation of these two points.
While the Using encrypted resources docs that David Bendory also linked to (and which you probably based your assumption on) show how to do this using an encrypted environment variable specified via secretEnv, this is not a requirement and it works with normal environment variables too.
In your specific case you'll need to modify your build step to look something like this:
# you didn't show us which builder you're using - this is just one example of
# how you can get a shell using one of the supported builder images
- name: 'gcr.io/cloud-builders/docker'
entrypoint: 'bash'
args: ['-c', 'printf "https://registry.npmjs.org/:_authToken=%s\nregistry=https://registry.npmjs.org" $$NPM_TOKEN > ~/.npmrc']
Note the usage of %s in the string to be formatted and how the environment variable is passed as an argument to printf. I'm not aware of a way that you can include an environment variable value directly in the format string.
Alternatively you could use echo as follows:
args: ['-c', 'echo -e "https://registry.npmjs.org/:_authToken=$${NPM_TOKEN}\nregistry=https://registry.npmjs.org" > ~/.npmrc']
Note that echo needs -e here so that bash turns \n into an actual newline.
Detailed explanation:
My first point at the top can actually be split in two:
you need a shell to resolve environment variables, and
you need to escape the $ character so that Cloud Build doesn't try to perform a substitution here
If you don't do 2. your build will fail with an error like: Error merging substitutions and validating build: Error validating build: key in the template "NPM_TOKEN" is not a valid built-in substitution
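To make that concrete, here is the same kind of step in both forms (a hypothetical echo step):
# fails validation: Cloud Build tries to substitute $NPM_TOKEN itself
args: ['-c', 'echo $NPM_TOKEN']
# works: $$ becomes a literal $, so bash sees and expands $NPM_TOKEN
args: ['-c', 'echo $$NPM_TOKEN']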
You should read through the Substituting variable values docs and make sure that you understand how that works. Then you need to realise that you are not performing a substitution here, at least not a Cloud Build substitution. You're asking the shell to perform a substitution.
In that context, 2. is actually the only useful piece of information that you'll get from the Substituting variable values docs (that $$ evaluates to the literal character $).
My second point at the top may be obvious if you're used to working with the shell a lot. The reason for needing to use single quotes is well explained by these two questions. Basically: "You need to use single quotes to prevent interpolation happening in your calling shell."
That sounds like you want to use Encrypted Secrets: https://cloud.google.com/cloud-build/docs/securing-builds/use-encrypted-secrets-credentials
I'm writing a program, foo, in C++. It's typically invoked on the command line like this:
foo *.txt
My main() receives the arguments in the normal way. On many systems, argv[1] is literally *.txt, and I have to call system routines to do the wildcard expansion. On Unix systems, however, the shell expands the wildcard before invoking my program, and all of the matching filenames will be in argv.
Suppose I wanted to add a switch to foo that causes it to recurse into subdirectories.
foo -a *.txt
would process all text files in the current directory and all of its subdirectories.
I don't see how this is done, since, by the time my program gets a chance to see the -a, the shell has already done the expansion and the user's *.txt input is lost. Yet there are common Unix programs that work this way. How do they do it?
In Unix land, how can I control the wildcard expansion?
(Recursing through subdirectories is just one example. Ideally, I'm trying to understand the general solution to controlling the wildcard expansion.)
Your program has no influence over the shell's command-line expansion. Which program will be called is determined only after all the expansion is done, so it's already too late to change anything about the expansion programmatically.
The user calling your program, on the other hand, can construct whatever command line they like. Shells make it easy to prevent wildcard expansion, usually by putting the argument in single quotes:
program -a '*.txt'
If your program is called like that it will receive two parameters -a and *.txt.
On Unix, you should just leave it to the user to manually prevent wildcard expansion if it is not desired.
As the other answers said, the shell does the wildcard expansion - and you stop it from doing so by enclosing arguments in quotes.
Note that options -R and -r are usually used to indicate recursive - see cp, ls, etc for examples.
Assuming you organize things appropriately so that wildcards are passed to your program as wildcards and you want to do recursion, then POSIX provides routines to help:
nftw - file tree walk (recursive access).
fnmatch, glob, wordexp - to do filename matching and expansion
There is also ftw, which is very similar to nftw but it is marked 'obsolescent' so new code should not use it.
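As a sketch of the glob() route: assuming the user quoted the pattern (e.g. foo '*.txt') so that it reaches the program unexpanded, the program can expand it itself. Option handling is omitted for brevity:
#include <glob.h>
#include <cstdio>

int main(int argc, char* argv[])
{
    if (argc < 2) return 1;
    glob_t g;
    // argv[1] is the still-unexpanded pattern, e.g. *.txt
    if (glob(argv[1], 0, nullptr, &g) == 0) {
        for (size_t i = 0; i < g.gl_pathc; ++i)
            std::printf("%s\n", g.gl_pathv[i]);  // one matching name per line
        globfree(&g);
    }
    return 0;
}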
Adrian asked:
But I can say ls -R *.txt without single quotes and get a recursive listing. How does that work?
To adapt the question to a convenient location on my computer, let's review:
$ ls -F | grep '^m'
makefile
mapmain.pl
minimac.group
minimac.passwd
minimac_13.terminal
mkmax.sql.bz2
mte/
$ ls -R1 m*
makefile
mapmain.pl
minimac.group
minimac.passwd
minimac_13.terminal
mkmax.sql.bz2
mte:
multithread.ec
multithread.ec.original
multithread2.ec
$
So, I have a sub-directory 'mte' that contains three files. And I have six files with names that start 'm'.
When I type 'ls -R1 m*', the shell notes the metacharacter '*' and uses its equivalent of glob() or wordexp() to expand that into the list of names:
makefile
mapmain.pl
minimac.group
minimac.passwd
minimac_13.terminal
mkmax.sql.bz2
mte
Then the shell arranges to run '/bin/ls' with 9 arguments (program name, option -R1, plus 7 file names and terminating null pointer).
The ls command notes the options (recursive and single-column output), and gets to work.
The first 6 names (as it happens) are simple files, so there is nothing recursive to do.
The last name is a directory, so ls prints its name and its contents, invoking its equivalent of nftw() to do the job.
At this point, it is done.
This uncontrived example doesn't show what happens when there are multiple directories, and so the description above over-simplifies the processing.
Specifically, ls processes the non-directory names first, and then processes the directory names in alphabetic order (by default), and does a depth-first scan of each directory.
foo -a '*.txt'
Part of the shell's job (on Unix) is to expand command line wildcard arguments. You prevent this with quotes.
Also, on Unix systems, the "find" command does what you want:
find . -name '*.txt'
will list all files recursively from the current directory down.
Thus, you could do
foo `find . -name '*.txt'`
I wanted to point out another way to turn off wildcard expansion: you can tell your shell to stop expanding wildcards with the noglob option.
With bash use set -o noglob:
> touch a b c
> echo *
a b c
> set -o noglob
> echo *
*
And with csh, use set noglob:
> echo *
a b c
> set noglob
> echo *
*