I am trying to generate a file using Plink with my independently statistically significant hits. I have run the command:
system("plink/plink --bfile data/ BB5707 --clump results/results_1741182.assoc.log --clump-p1 5e-08 --clump-p2 0.05 --clump-r2 0.1 --clump-kb 250 --out results/results_1741182.assoc.linear_clumped.clumped")
But I am getting the following error:
Error: Failed to open results/results_1741182.assoc.linear_clumped.clumped.log. Try changing the --out parameter.
[1] 2
What do you suggest is wrong with this?
Try removing ".clumped" from the end of your --out name. Plink treats --out as an output prefix and appends the .clumped and .log extensions itself.
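For example, the corrected call might look like this (a sketch based on your command; I have also assumed you meant to point --clump at the association results file, e.g. results_1741182.assoc.linear, rather than the .log file, since clumping needs SNP and P columns):
system("plink/plink --bfile data/BB5707 --clump results/results_1741182.assoc.linear --clump-p1 5e-08 --clump-p2 0.05 --clump-r2 0.1 --clump-kb 250 --out results/results_1741182.assoc.linear_clumped")
Plink will then write results/results_1741182.assoc.linear_clumped.clumped and a matching .log file.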
I set label "mytime" in timestemp format for my pod. Now i want select all pods with expired time? some think like this:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.labels.mytime<$now()}{.metadata.name}{ "\n"}{end}'
but I get this error:
error: error executing jsonpath "{range .items[*]}{.metadata.labels.mytime<$now()}{.metadata.name}{ \"\\n\"}{end}": Error executing template: unrecognized identifier now(). Printing more information for debugging the template:
template was:
{range .items[*]}{.metadata.creationTimestamp>$now()}{.metadata.name}{ "\n"}{end}
object given to jsonpath engine was: ...
How do I use time in a condition?
kubectl -n test get deployment -o jsonpath='{.items[?(@.metadata.labels.mytime<"2020-10-08_14-15-07")].metadata.name}'
This did the trick for me:
I couldn't get the result with now, maybe due to a difference in format.
kubectl get pods -o=jsonpath="{range .items[?(@.metadata.labels.mytime<=\"2022-12-19\")]}[{.metadata.labels.mytime},{.metadata.namespace},{.metadata.name}] {end}"
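Since jsonpath has no built-in now() function, a workaround is to compute the timestamp in the shell first and splice it into the filter. A minimal sketch, assuming the label uses the same YYYY-MM-DD_HH-MM-SS format as above (which sorts lexicographically, so string comparison works):
now=$(date +"%Y-%m-%d_%H-%M-%S")
kubectl get pods -o jsonpath="{.items[?(@.metadata.labels.mytime<\"$now\")].metadata.name}"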
I am trying Pig commands on AWS EMR, but even small commands do not work as I expect. Here is what I did.
1. Save the following 6 lines as ~/a.csv.
1,2,3
4,2,1
8,3,4
4,3,3
7,2,5
8,4,3
2. Start Pig.
3. Load the CSV file.
grunt> A = load './a.csv' using PigStorage(',');
16/01/06 13:09:09 INFO Configuration.deprecation: fs.default.name is deprecated. Instead, use fs.defaultFS
4. Dump the variable A.
grunt> dump A;
But this command fails. I expected it to produce the 6 tuples described in a.csv. Instead, dump prints a lot of INFO lines and ERROR lines. The ERROR lines are the following.
91711 [main] ERROR org.apache.pig.tools.pigstats.PigStats - ERROR 0: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
16/01/06 13:10:08 ERROR pigstats.PigStats: ERROR 0: java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
91711 [main] ERROR org.apache.pig.tools.pigstats.mapreduce.MRPigStatsUtil - 1 map reduce job(s) failed!
16/01/06 13:10:08 ERROR mapreduce.MRPigStatsUtil: 1 map reduce job(s) failed!
[...skipped...]
Input(s):
Failed to read data from "hdfs://ip-xxxx.eu-central-1.compute.internal:8020/user/hadoop/a.csv"
Output(s):
Failed to produce result in "hdfs://ip-xxxx.eu-central-1.compute.internal:8020/tmp/temp-718505580/tmp344967938"
[...skipped...]
91718 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias A. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
16/01/06 13:10:08 ERROR grunt.Grunt: ERROR 1066: Unable to open iterator for alias A. Backend error : java.lang.IllegalStateException: Job in state DEFINE instead of RUNNING
(I have anonymized the IP-like hostname.) The error message seems to say that the load operator also fails.
I have no idea why even the dump operator fails. Can you give me any advice?
Note
I also tried TABs in a.csv instead of commas and executed A = load './a-tab.csv';, but it did not help.
I also tried local mode: $ pig -x local, then A = load 'a.csv' using PigStorage(','); and dump A;. Then I get:
Input(s):
Failed to read data from "file:///home/hadoop/a.csv"
If I use the full path, namely A = load '/home/hadoop/a.csv' using PigStorage(',');, then I get
Input(s):
Failed to read data from "/home/hadoop/a.csv"
I have encountered the same problem. You can try switching to the root user with su root, then run ./bin/pig from PIG_HOME to start Pig in MapReduce mode. Alternatively, keep the current user and run sudo ./bin/pig from PIG_HOME, but then you must export JAVA_HOME and HADOOP_HOME in the ./bin/pig file.
If you want to use your local file system, you have to start Pig in step 2 as below:
bin/pig -x local
If you start it just as bin/pig, it will look for the file in HDFS. That's why you get the error Failed to read data from "hdfs://ip-xxxx.eu-central-1.compute.internal:8020/user/hadoop/a.csv".
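Alternatively, if you want to stay in MapReduce mode, copy the file into HDFS first. A sketch (the target path mirrors the one in your error message):
hadoop fs -put ~/a.csv /user/hadoop/a.csv
pig
grunt> A = load 'a.csv' using PigStorage(',');
grunt> dump A;
In MapReduce mode the relative path 'a.csv' resolves to /user/hadoop/a.csv, so the load should now find the file.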
I am running a query where $sale_ids could contain hundreds to thousands of sale IDs. I am looking for a way to fix the regex error without modifying the core of CI 3.
This didn't happen in version 2 and is NOT considered a bug in CI 3 (I brought up the issue before).
Is there a way I can get this to work? I could change the logic of the application but this would require days of work.
I am looking for a way to extend/override a class so I can allow this query to work. If there is no way to do this by overriding, I will have to hack the core (which I don't know how to do).
$this->db->select('sales_payments.*, sales.sale_time');
$this->db->from('sales_payments');
$this->db->join('sales', 'sales.sale_id=sales_payments.sale_id');
$this->db->where_in('sales_payments.sale_id', $sale_ids);
$this->db->order_by('payment_date');
Error is:
Severity: Warning
Message: preg_match(): Compilation failed: regular expression is too large at offset 53249
Filename: database/DB_query_builder.php
Line Number: 2354
Backtrace:
File: /Applications/MAMP/htdocs/phppos/PHP-Point-Of-Sale/application/models/Sale.php
Line: 123
Function: get
File: /Applications/MAMP/htdocs/phppos/PHP-Point-Of-Sale/application/models/Sale.php
Line: 48
Function: _get_all_sale_payments
File: /Applications/MAMP/htdocs/phppos/PHP-Point-Of-Sale/application/models/reports/Summary_payments.php
Line: 60
Function: get_payment_data
File: /Applications/MAMP/htdocs/phppos/PHP-Point-Of-Sale/application/controllers/Reports.php
Line: 1887
Function: getData
File: /Applications/MAMP/htdocs/phppos/PHP-Point-Of-Sale/index.php
Line: 323
Function: require_once
There wasn't a good way to modify the core, so I came up with a small change to the code that builds large where_in's: start a group and create the where_in's in smaller chunks.
$this->db->group_start();                      // wrap the OR'd IN clauses in parentheses
$sale_ids_chunks = array_chunk($sale_ids, 25); // split the ids into batches of 25
foreach ($sale_ids_chunks as $chunk)
{
    $this->db->or_where_in('sales_payments.sale_id', $chunk);
}
$this->db->group_end();
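If you need this in more than one place, a hypothetical helper could wrap the pattern (a sketch; chunked_where_in is not a CodeIgniter method, just a name I made up):
// Sketch of a reusable helper: each chunk becomes its own short condition
// string, which appears to keep what DB_query_builder.php feeds to
// preg_match() under the PCRE size limit, while group_start()/group_end()
// keep the OR'd INs parenthesized so the SQL stays equivalent.
function chunked_where_in($db, $field, array $ids, $chunk_size = 25)
{
    $db->group_start();
    foreach (array_chunk($ids, $chunk_size) as $chunk) {
        $db->or_where_in($field, $chunk);
    }
    $db->group_end();
}
// Usage in the model:
chunked_where_in($this->db, 'sales_payments.sale_id', $sale_ids);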
I am using command line arguments, and my question is related to my last question: I want to test through the webcam, but when I pass the command line argument for the camera (0), I get the error invalid argument 0.
My last question explains the setup, but there I was using an image path; now I want to use the webcam for testing. This is the sample program I am using for testing and checking the result.
To use the default camera, simply do not pass the -i argument.
This command line works for me:
path\\to\\cascade\\facefinder -m 128 -M 1024 -a 0.0 -q 5.0 -c 1.1 -t 0.1
All arguments but the first are optional.
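Conversely, to test against a still image again, add -i back (a sketch; photo.png is a placeholder for your image path):
path\\to\\cascade\\facefinder -i photo.png -m 128 -M 1024 -a 0.0 -q 5.0 -c 1.1 -t 0.1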
I am reading the images from a folder using the code below in MATLAB:
[folder1] = uigetdir();
f = dir(folder1);
for k = 1:size(f,1)-2
    new_file{k} = f(k+2).name;  % skip '.' and '..'; collect names in a cell array
end
[idx,idx] = sort(cellfun(@(x) str2num(char(regexp(x,'\d*','match'))), new_file));
filename1 = new_file(idx);
This code works for images in a folder with names "test_base1", "test_base2", ....
I am facing an error with a different input to this expression.
For file = {'test_30min1','test_30min10','test_30min2'};
it gives the following error:
??? Error using ==> cellfun
Non-scalar in Uniform output, at index 1, output 1.
Set 'UniformOutput' to false.
I tried regexp(x,'\w*','match') and many other combinations in this expression, but I am not able to find the solution. What is the solution for this?
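The problem is that names like 'test_30min1' contain two digit runs ('30' and '1'), so regexp(x,'\d*','match') returns two matches; char() stacks them into a two-row char array, str2num() then produces a non-scalar, and cellfun's default UniformOutput fails. If you want to sort by the trailing number only, a minimal sketch (assuming every name ends in digits):
nums = cellfun(@(x) str2double(regexp(x,'\d+$','match','once')), new_file);
[idx,idx] = sort(nums);
filename1 = new_file(idx);
The 'once' option makes regexp return a plain char vector instead of a cell, and str2double gives a scalar, so cellfun can collect the results into a numeric array.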