Time operator in kubectl jsonpath

I set a label "mytime" in timestamp format on my pod. Now I want to select all pods whose time has expired, something like this:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.labels.mytime<$now()}{.metadata.name}{ "\n"}{end}'
but I see this error:
error: error executing jsonpath "{range .items[*]}{.metadata.labels.mytime<$now()}{.metadata.name}{ \"\\n\"}{end}": Error executing template: unrecognized identifier now(). Printing more information for debugging the template:
template was:
{range .items[*]}{.metadata.creationTimestamp>$now()}{.metadata.name}{ "\n"}{end}
object given to jsonpath engine was: ...
How do I use time in a condition?

kubectl -n test get deployment -o jsonpath='{.items[?(@.metadata.labels.mytime<"2020-10-08_14-15-07")].metadata.name}'

This did the trick for me:
I couldn't get the result with now(), maybe due to a difference in format.
kubectl get pods -o=jsonpath="{range .items[?(@.metadata.labels.mytime<=\"2022-12-19\")]}[{.metadata.labels.mytime},{.metadata.namespace},{.metadata.name}] {end}"
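kubectl's JSONPath has no now() function, so one workaround is to compute the current time in the shell and splice it into the filter. A minimal sketch, assuming the label uses a lexically sortable, label-safe format like the ones above and that your kubectl version supports string comparison in filters (which the answers above rely on):
# Compute "now" in the same sortable format as the label value, then filter on it.
now=$(date +%Y-%m-%d_%H-%M-%S)
kubectl get pods -o jsonpath="{range .items[?(@.metadata.labels.mytime<=\"$now\")]}{.metadata.name}{\"\n\"}{end}"
Label values cannot contain colons, which is why a format such as 2020-10-08_14-15-07 is used here instead of an RFC 3339 timestamp.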

Related

Error: change the --out parameter error in R

I am trying to generate a file using PLINK with the independently statistically significant hits. I have run the command:
system("plink/plink --bfile data/ BB5707 --clump results/results_1741182.assoc.log --clump-p1 5e-08 --clump-p2 0.05 --clump-r2 0.1 --clump-kb 250 --out results/results_1741182.assoc.linear_clumped.clumped")
But I am getting the following error:
Error: Failed to open results/results_1741182.assoc.linear_clumped.clumped.log. Try changing the --out parameter.
[1] 2
What do you suggest is wrong with this?
You can try removing ".clumped" from the end of your output name; PLINK appends its own extensions (such as .log and .clumped) to the --out prefix.
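Following that suggestion, the shell command inside system() would look something like this (paths copied from the question; the space in data/ BB5707 is presumably a typo for data/BB5707):
plink/plink --bfile data/BB5707 --clump results/results_1741182.assoc.log \
  --clump-p1 5e-08 --clump-p2 0.05 --clump-r2 0.1 --clump-kb 250 \
  --out results/results_1741182.assoc.linear_clumped
PLINK then writes its own .log and .clumped files under that --out prefix.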

cloudbuild.yaml does not unmarshal when using base64-encoded value on build trigger

On my cloudbuild.yaml definition, I used to have a secrets section to get environment values from Google KMS. The secretEnv fields had keys mapping to 'encrypted + base64-encoded' values:
...
secrets:
- kmsKeyName: <API_PATH>
  secretEnv:
    <KEY>: <ENCRYPTED+BASE64>
I've tried to put this value in a substitution instead, which gets replaced when a build trigger is used:
...
secrets:
- kmsKeyName: <API_PATH>
  secretEnv:
    <KEY>: ${_VALUE}
With that I intend to keep the file generic.
However, the build process keeps failing with the message failed unmarshalling build config cloudbuild.yaml: illegal base64 data at input byte 0. I've checked several times that the base64 value was not copied incorrectly into the substitution on the trigger.
Thank you in advance.
https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values
After reading the Using user-defined substitutions section carefully, I've seen that:
The length of a parameter key is limited to 100 bytes and the length of a parameter value is limited to 4000 bytes.
Mine was a 253-character long string.
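To check how long your own value is, a quick byte count in the shell works (using the placeholder from the question):
# Count the bytes of the encrypted+base64 string you paste into the trigger.
printf '%s' '<ENCRYPTED+BASE64>' | wc -c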
I managed to reproduce an error similar to yours (exactly this one: "Failed to trigger build: failed unmarshalling build config cloudbuild.yaml: json: cannot unmarshal string into Go value of type map[string]json.RawMessage"), but only when my variable was written like "name:content" instead of "name: content". Notice the whitespace; it matters.
Then, going back to your point... user-defined substitutions are limited to 255 characters (yes, the docs are currently wrong and this has been reported). But, for example, if you use something like:
substitutions:
  _VARIABLE_NAME: cool_really_long_content_but_still_no_255_chars
And then you do this:
steps:
- name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/$PROJECT_ID/$_VARIABLE_NAME", "."]
It will still fail if "gcr.io/$PROJECT_ID/$_VARIABLE_NAME", once expanded, is in fact more than 255 characters, even though the substitution value itself stays under 255. That error appears in Build details > Logs, rather than as the popup you see when you click "Run trigger" in the Build triggers section of Google Cloud Build, which is where the kind of error you reported shows up (in that case the logs in the Build details section appear disabled).

What is the entry point/command required to run an etcd container in ECS?

With all of the entry points and commands I've tried so far, I'm getting this error "no such file or directory."
I need to:
1) Set an env variable for HostIP using a special curl request to AWS
2) Run the etcd container, giving it arguments that use $HostIP
It seems that it takes a string array, but I'm not sure how it works. I'm thinking the commands are:
/bin/sh -c "export HostIP=$(curl -s 169.254.169.254/latest/meta-data/local-ipv4)"
and
etcd -advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001
-other-similar-args...
but I need to change them to be comma separated.
1) How do I escape the commas/quotes?
2) Do I need to use a comma and start a new string for every space?
3) Does anyone have a working example???
Update: I made a custom container with an entrypoint.sh which contains this:
#!/bin/sh
export HOST=$(curl -s 169.254.169.254/latest/meta-data/local-hostname)
export HostIP=$(curl -s 169.254.169.254/latest/meta-data/local-ipv4)
/usr/local/bin/etcd -name etcd0 \
  -advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001 \
  -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
  -initial-advertise-peer-urls http://${HostIP}:2380 \
  -listen-peer-urls http://0.0.0.0:2380 \
  -initial-cluster-token etcd-cluster-1 \
  -initial-cluster etcd0=http://${HostIP}:2380 \
  -initial-cluster-state new
Now the issue is that the container starts up but uses localhost instead of 0.0.0.0 or the IP we fetch with curl from AWS. It seems to hit some kind of error and fall back to localhost.
When running tasks from the ECS dashboard, command-line arguments need to be separated by commas, but keep in mind that the whole string passed to sh -c is a single argument interpreted by the sub-shell.
In other words, your task definition should look as follows:
/bin/sh,-c,export HostIP=$(curl -s 169.254.169.254/latest/meta-data/local-ipv4) && etcd ...
However, if you have control over the image then you could consider writing a custom ENTRYPOINT script that encapsulates all of that.
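For illustration, a minimal sketch of such an entrypoint script, trimmed to the client URLs and reusing the metadata endpoint and etcd path from the question:
#!/bin/sh
# Resolve this instance's private IP from the EC2 metadata service.
HostIP=$(curl -s 169.254.169.254/latest/meta-data/local-ipv4)
export HostIP
# exec replaces the shell so etcd becomes PID 1 and receives stop signals from ECS.
exec /usr/local/bin/etcd \
  -advertise-client-urls "http://${HostIP}:2379,http://${HostIP}:4001" \
  -listen-client-urls "http://0.0.0.0:2379,http://0.0.0.0:4001"
With the image's ENTRYPOINT pointing at this script, the task definition no longer needs any comma-separated command gymnastics.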

Pig's "dump" is not working on AWS

I am trying Pig commands on AWS EMR, but even small commands are not working as I expect. What I did is the following.
Save the following 6 lines as ~/a.csv.
1,2,3
4,2,1
8,3,4
4,3,3
7,2,5
8,4,3
Start Pig
Load the csv file.
grunt> A = load './a.csv' using PigStorage(',');
16/01/06 13:09:09 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
Dump the variable A.
grunt> dump A;
But this command fails. I expected it to produce the 6 tuples described in a.csv. Instead, the dump command outputs a lot of INFO lines and ERROR lines. The ERROR lines are the following.
91711 [main] ERROR org.apache.pig.tools.pigstats.PigStats - ERROR 0: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
16/01/06 13:10:08 ERROR pigstats.PigStats: ERROR 0: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
91711 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
16/01/06 13:10:08 ERROR mapreduce.MRPigStatsUtil: 1 map reduce job(s) failed!
[...skipped...]
Input(s):
Failed to read data from "hdfs://ip-xxxx.eu-central-1.compute.internal:8020/user/hadoop/a.csv"
Output(s):
Failed to produce result in "hdfs://ip-xxxx.eu-central-1.compute.internal:8020/tmp/temp-718505580/tmp344967938"
[...skipped...]
91718 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias A. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
16/01/06 13:10:08 ERROR grunt.Grunt: ERROR 1066: Unable to open iterator for alias A. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
(I have altered the IP-like hostnames.) The error message seems to say that the load operator also fails.
I have no idea why even the dump operator fails. Can you give me any advice?
Note
I also used TABs in a.csv instead of commas and executed A = load './a-tab.csv';, but it did not help.
I also tried local mode: $ pig -x local, then A = load 'a.csv' using PigStorage(','); and dump A;. Then I get:
Input(s):
Failed to read data from "file:///home/hadoop/a.csv"
If I use the full path, namely A = load '/home/hadoop/a.csv' using PigStorage(',');, then I get
Input(s):
Failed to read data from "/home/hadoop/a.csv"
I have encountered the same problem. You may try su root to use the root user, then run ./bin/pig from PIG_HOME to start Pig in MapReduce mode. Alternatively, you can keep the current user and start Pig with sudo ./bin/pig from PIG_HOME, but then you must export JAVA_HOME and HADOOP_HOME in the ./bin/pig file.
If you want to use your local file system, you have to start Pig in step 2 as below:
bin/pig -x local
If you start it simply as bin/pig, it will look for the file in HDFS. That's why you get the error Failed to read data from "hdfs://ip-xxxx.eu-central-1.compute.internal:8020/user/hadoop/a.csv".
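Alternatively, if you want to stay in MapReduce mode, copy the local file into HDFS first so that the path from the error message actually exists:
# Copy the local CSV into the HDFS home directory Pig is looking in.
hadoop fs -put ~/a.csv /user/hadoop/a.csv
After that, the original load './a.csv' and dump A; from the question should resolve to hdfs://.../user/hadoop/a.csv and succeed.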

Fabric: raise error if grep returns results

I am using Fabric to deploy Django (of course). I want to be able to run a local command which greps for a string and, if it returns any results, raise an exception and halt the deploy.
Something like:
local('grep -r -n "\s console.log" .')
So if I get > 0 results, I want to halt progress.
What is the best way to handle this?
Run it like this:
with settings(warn_only=True):
    local('grep -r -n "\s console.log" .')
This will prevent Fabric from aborting the script execution when the call returns a non-zero exit code. Since grep exits with status 0 when it finds matches, you can then check the return code of that local() call and raise an error yourself (for example with Fabric's abort()) to actually halt the deploy.