I need to import an existing order into Terraform state.
For example:
Suppose I need to pass both an ID and an Environment value to the import command.
If only one argument, say the ID, has to be passed, we can use the command below:
terraform import hashicups_order.sample {id}
In my case I need to pass two arguments, say id and environmentValue. How can I do that?
terraform import hashicups_order.sample {id} {one more argument???}
terraform import has the following form:
terraform import [options] ADDRESS ID
ADDRESS identifies the resource in your configuration, and ID is a single value (not multiple values) that uniquely identifies the resource to be imported.
If you wish to pass any other values to the import, you have to use -var in [options], as explained in the docs.
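For example, assuming the environment is exposed as an input variable named environment (the variable name and value here are only illustrative), the call could look like:
terraform import -var="environment=production" hashicups_order.sample {id}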
I want to choose the CreateDisposition option dynamically, depending on the arguments.
In the DataflowPipelineOptions I am accepting the load type in a ValueProvider via arguments. However, I am not able to get the string out of the ValueProvider to decide which create disposition option to use.
withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
I want 'CREATE_IF_NEEDED' to be dynamic and would like to replace it with something like the following. Note that this is just pseudocode; I am looking for a solution here.
create_disp = options.getLoad()
withCreateDisposition(create_disp)
You can pass a program argument representing the createDisposition.
Program argument (CREATE_NEVER or CREATE_IF_NEEDED):
--bqCreateDisposition=CREATE_NEVER
In the options interface in Java, you can expose this field as an enum (with CREATE_IF_NEEDED as the default value in this case):
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.options.Default;
import org.apache.beam.sdk.options.Description;
import org.apache.beam.sdk.options.PipelineOptions;

public interface MyOptions extends PipelineOptions {

    @Description("BQ create disposition")
    @Default.Enum("CREATE_IF_NEEDED")
    BigQueryIO.Write.CreateDisposition getBqCreateDisposition();

    void setBqCreateDisposition(BigQueryIO.Write.CreateDisposition value);
}
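For completeness, here is a minimal sketch of wiring that option into a BigQuery write; the table spec, write disposition, and the rows PCollection are placeholders, not part of the original answer:

import org.apache.beam.sdk.options.PipelineOptionsFactory;

MyOptions options = PipelineOptionsFactory.fromArgs(args).withValidation().as(MyOptions.class);

rows.apply("WriteToBigQuery",
    BigQueryIO.writeTableRows()
        .to("my-project:my_dataset.my_table") // placeholder table spec
        .withCreateDisposition(options.getBqCreateDisposition())
        // a schema would also be needed if the table may be created
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));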
I want to create an import for my CodeQL query.
I want this import to be named Utils, and inside it I will create a predicate named isNumber.
How can I create such an import?
This is how I want my code to look:
import cpp
import Utils
where
if exists(...)
then isNumber(size.(VariableAccess).getTarget())
else ...
select ...
I don't know how to create the Utils import; it says:
Could not resolve module Utils
I tried to create a folder named Utils next to my query (code.ql), but it didn't work.
I found out how to do it.
You need to create a file named Utils.qll in the same folder as your CodeQL query.
This is its code:
import cpp
predicate isNumber(Variable v) {
  v.getUnspecifiedType() instanceof IntegralType
}
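For reference, a minimal stand-alone query using that predicate could look like this (purely illustrative, separate from the if/then query in the question):

import cpp
import Utils

from Variable v
where isNumber(v)
select v, "This variable has an integral type."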
I am trying to make an ancestor query like this example and convert it to the template version.
The problem is that the parameter ancestor_id is used by the function make_query during pipeline construction.
If I don't pass it when creating and staging the template, I get RuntimeValueProviderError: RuntimeValueProvider(option: ancestor_id, type: int).get() not called from a runtime context. But if I pass it at template creation time, it behaves like a StaticValueProvider that never changes when I execute the template.
What is the correct way to pass a parameter to a template for pipeline construction?
import apache_beam as beam
from apache_beam.io.gcp.datastore.v1.datastoreio import ReadFromDatastore
from apache_beam.options.pipeline_options import PipelineOptions
from google.cloud.proto.datastore.v1 import entity_pb2
from google.cloud.proto.datastore.v1 import query_pb2
from googledatastore import helper as datastore_helper
from googledatastore import PropertyFilter

class TestOptions(PipelineOptions):
    @classmethod
    def _add_argparse_args(cls, parser):
        parser.add_value_provider_argument('--ancestor_id', type=int)

def make_query(ancestor_id):
    ancestor = entity_pb2.Key()
    datastore_helper.add_key_path(ancestor, KIND, ancestor_id)
    query = query_pb2.Query()
    datastore_helper.set_kind(query, KIND)
    datastore_helper.set_property_filter(query.filter, '__key__', PropertyFilter.HAS_ANCESTOR, ancestor)
    return query

pipeline_options = PipelineOptions()
test_options = pipeline_options.view_as(TestOptions)

with beam.Pipeline(options=pipeline_options) as p:
    entities = p | ReadFromDatastore(PROJECT_ID, make_query(test_options.ancestor_id.get()))
Two problems.
The ValueProvider.get() method can only be called in a runtime method such as ParDo.process(). See the example.
Further, your challenge is that you are using Google Cloud Datastore IO (a query from Datastore). As of today (May 2018), the official documentation indicates that Datastore IO does NOT accept runtime template parameters yet.
For Python in particular:
The following connectors accept runtime parameters.
File-based IOs: textio, avroio, tfrecordio
A workaround: you can probably first run a query without any templated parameters to get a PCollection of entities. From there, since any transform can accept a templated parameter, you might be able to apply it as a filter, as sketched below. Whether this helps depends on your use case, and it may not be applicable to you.
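A rough sketch of that workaround, assuming the entities are entity_pb2.Entity protos with numeric ids in their key paths; the DoFn, the helper make_query_without_ancestor, and the filtering logic are illustrative, not part of the Datastore IO API:

import apache_beam as beam

class FilterByAncestor(beam.DoFn):
    def __init__(self, ancestor_id_vp):
        # Keep the ValueProvider itself; .get() is only legal at runtime.
        self.ancestor_id_vp = ancestor_id_vp

    def process(self, entity):
        ancestor_id = self.ancestor_id_vp.get()
        # The ancestor is one of the leading elements of the key path
        # (this assumes numeric ids rather than names).
        if any(element.id == ancestor_id for element in entity.key.path[:-1]):
            yield entity

# make_query_without_ancestor() is a hypothetical variant of make_query
# that omits the ancestor filter, so the query is fixed at construction time.
entities = (p
            | ReadFromDatastore(PROJECT_ID, make_query_without_ancestor())
            | beam.ParDo(FilterByAncestor(test_options.ancestor_id)))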
Pyomo solver invocation can be done from the command line or from a Python script.
How does the command-line call with the summary flag
pyomo solve model.py input.dat --solver=glpk --summary
translate to e.g. the usage of a SolverFactory class in a Python script?
Specifically, in the following example, how can one specify a summary option? Is it an (undocumented?) argument to SolverFactory.solve?
from pyomo.opt import SolverFactory
import pyomo.environ
from model import model
opt = SolverFactory('glpk')
instance = model.create_instance('input.dat')
results = opt.solve(instance)
The --summary option is specific to the pyomo command. It is not a solver option. I believe all it really does is execute the line
pyomo.environ.display(instance)
after the solve, which you can easily add to your script. A more direct way of querying the solution is just to access the value of model variables or the objective by "evaluating" them. E.g.,
instance.some_objective()
instance.some_variable()
instance.some_indexed_variable[0]()
or
pyomo.environ.value(instance.some_objective)
pyomo.environ.value(instance.some_variable)
pyomo.environ.value(instance.some_indexed_variable)
I prefer the former, but the latter is more appropriate if you are accessing the values of immutable, indexed Param objects. Also, note that variables have a .value attribute that you can access directly (and update if you want to provide a warmstart).
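Putting that together with the script from the question, a minimal sketch (assuming model.py defines model as above) would be:

from pyomo.opt import SolverFactory
import pyomo.environ
from model import model

opt = SolverFactory('glpk')
instance = model.create_instance('input.dat')
results = opt.solve(instance)
pyomo.environ.display(instance)  # roughly what --summary prints after the solve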
By default, the --summary command option stores a results file in JSON format in the directory of your model.
You can achieve the same result by adding the following to your code:
results = opt.solve(instance, load_solutions=True)
results.write(filename='results.json', format='json')
or:
results = opt.solve(instance)
instance.solutions.store_to(results)
results.write(filename='results.json', format='json')
I'm trying to get a Hudson job to build in a custom workspace path that is automatically generated using yyyyMMdd-HHmm. I can get the $BUILD_ID variable expanded as mentioned in bug 3997, and that seems to work fine. However, the workspace path is incorrect, as it is of the format yyyy-MM-dd_HH-mm-ss. I've tried using the ZenTimestamp plugin v2.0.1, which changes the $BUILD_ID, but this only seems to take effect after the workspace is created.
Is there a way to define a custom workspace in the format that I want?
You can use a Groovy script to achieve that.
import hudson.model.*;
import hudson.util.*;
import java.util.*;
import java.text.*;
import java.io.*;

// Part 1: Recover build parameters
AbstractBuild currentBuild = (AbstractBuild) Thread.currentThread().executable;
def envVars = currentBuild.properties.get("envVars");
def branchName = envVars["BRANCH_NAME"];

// Part 2: Define the new workspace path
def newWorkspace = "C:\\Build\\" + branchName;

// Part 3: Change the current build's workspace
def newWorkspaceFilePath = new FilePath(new File(newWorkspace));
currentBuild.setWorkspace(newWorkspaceFilePath);
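To get the yyyyMMdd-HHmm format specifically, one option (a sketch only; the path layout is illustrative) is to build the timestamp in the same script and append it to the workspace path:

// Build the timestamp yourself instead of relying on $BUILD_ID.
def stamp = new SimpleDateFormat("yyyyMMdd-HHmm").format(new Date());
def timestampedWorkspace = "C:\\Build\\" + branchName + "\\" + stamp;
currentBuild.setWorkspace(new FilePath(new File(timestampedWorkspace)));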