I'm a little baffled; I don't seem to be able to change any of the properties associated with an existing AWS::AutoScaling::Trigger.
This seems like an awfully basic thing you would want to do once you've got things set up; perhaps adjusting the period, breach duration, increments, or the upper and lower thresholds.
If I take my template, just change the "UpperThreshold" property, and try to update my existing stack with it, I get this error and the update fails:
The following resource(s) failed to update: [AutoscalingTrigger].
Updating the properties of AWS::AutoScaling::Trigger resources is not supported.
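For reference, the trigger declaration looks something like this (the resource name, group reference, and values here are illustrative placeholders):

"AutoscalingTrigger" : {
    "Type" : "AWS::AutoScaling::Trigger",
    "Properties" : {
        "MetricName" : "CPUUtilization",
        "Namespace" : "AWS/EC2",
        "Statistic" : "Average",
        "Period" : "300",
        "BreachDuration" : "600",
        "UpperThreshold" : "75",
        "LowerThreshold" : "25",
        "UpperBreachScaleIncrement" : "1",
        "LowerBreachScaleIncrement" : "-1",
        "AutoScalingGroupName" : { "Ref" : "AutoScalingGroup" },
        "Dimensions" : [
            { "Name" : "AutoScalingGroupName", "Value" : { "Ref" : "AutoScalingGroup" } }
        ]
    }
}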
Suggestions?
Thanks...
I'm not sure if this helps or not, but I came across this note on the UpdateAutoScalingGroup page:
"To update an Auto Scaling group with a launch configuration that has the InstanceMonitoring flag set to False, you must first ensure that collection of group metrics is disabled. Otherwise, calls to UpdateAutoScalingGroup will fail. If you have previously enabled group metrics collection, you can disable collection of all group metrics by calling DisableMetricsCollection."
You have specified an Apollo key but have not specified a graph ref; usage reporting is disabled. To enable usage reporting, set the APOLLO_GRAPH_REF environment variable to your-graph-id@your-graph-variant. To disable this warning, install ApolloServerPluginUsageReportingDisabled
I am struggling to find instructions for how to 'finish' the setup for an Apollo classic graph. All of this worked fine last week but no longer does (I can see on Apollo's instructions page that something changed on 4 Oct and may now require people to change their graph reference).
I'm trying to solve these problems one step at a time, but cannot make sense of the instructions for apollo.
The format shown in the error message has two fragments separated by an '@' symbol. The format of the Apollo key in the Federation 2 instructions also has two fragments, but they are separated by a colon.
To find your-graph-id@your-graph-variant, log in to Apollo Studio, go to Settings, and under the This Graph tab look at the first section of General settings; there you should find your-graph-id.
The variant usually defaults to current, i.e. variant=current. If you have specified a different variant, be sure to use it.
In your .env you'd have to specify these two keys:
APOLLO_KEY=<your-apollo-api-key>
APOLLO_GRAPH_REF=<your-graph-id>@<current-or-your-variant>
Restart your server and you should be good.
I'm working with existing Terraform code that has been run successfully against AWS, and I'd like to reuse it in a different region without maintaining a second copy of the same code. Some of the code affects global services, which means it doesn't need to be rerun in the other regions, so I would like to include count = "${var.alreadyrun}" == "yes" ? 1 : 0 in some of the Terraform modules.
However, when I add the above line to the existing code for the specific modules and run terraform plan against the same region it was already run against, it tells me it's going to destroy and re-add those modules. I don't want to destroy and recreate those modules; I just want to skip them and move on to the next. Is there a way I can do this?
Adding count to a module block causes Terraform to track multiple instances for that block, so the address of the module changes from something like module.example to module.example[0]. By default, Terraform will therefore assume you want to destroy the old module instance (the one with no instance key) and create a new one with instance key zero.
However, if you are using Terraform v1.1 or later you can add an additional declaration to tell Terraform that you want to "move" the existing module instance to a new address instead. For a module "example" block, that would look like this:
module "example" {
source = "./modules/example"
count = var.enable_example ? 1 : 0
# ...
}
moved {
from = module.example
to = module.example[0]
}
There are more details on moved blocks in the Terraform documentation section Refactoring.
As a side note, when declaring a conditional module or resource based on an input variable like this, it's more typical to name it something like enable_example (as I showed above) rather than something like alreadyrun, because a Terraform configuration should typically declare a desired state rather than describe how to reach that state.
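For completeness, a minimal sketch of the matching variable declaration (enable_example is just the placeholder name used above):

variable "enable_example" {
  description = "Whether to create the example module in this region."
  type        = bool
  default     = false
}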
You might also wish to investigate the possibility of splitting your Terraform configuration into multiple parts so that there's a "global" configuration that you use only once and then a "regional" configuration that you use for each region. That will then avoid the need to treat one of the regions as "special", also being responsible for the global infrastructure, and thus create a clearer dependency graph between all of your configurations for future maintainers to understand.
Both of those suggestions are away from your direct question, though; a moved block as I described above is the more direct answer.
I am trying to install one of two features based on a value that is set inside a custom action.
Firstly, I set the value of a property:
UINT __stdcall ConfigurationCheckAction(MSIHANDLE hInstall)
{
    HRESULT hr = S_OK;
    UINT er = ERROR_INSTALL_FAILURE;

    hr = WcaInitialize(hInstall, "ConfigurationCheckAction");
    ExitOnFailure(hr, "Failed to initialize ConfigurationCheckAction");

    // "condition" is a placeholder for the actual configuration check.
    if (condition) {
        MsiSetProperty(hInstall, TEXT("STREAM"), TEXT("RED"));
    }
    else {
        MsiSetProperty(hInstall, TEXT("STREAM"), TEXT("BLUE"));
    }

    er = ERROR_SUCCESS; // without this, the action always reports failure

LExit:
    return WcaFinalize(er);
}
Secondly, I define a condition for each of the two features:
<Feature Id="Complete" Level="1">
  <Feature Id="Red" ConfigurableDirectory="TARGETDIR" Title="F1" Level="0">
    <Condition Level="1">STREAM</Condition>
  </Feature>
  <Feature Id="Blue" ConfigurableDirectory="TARGETDIR" Title="F2" Level="0">
    <Condition Level="1">NOT STREAM</Condition>
  </Feature>
</Feature>
Note that I don't define the property in the .wxs file beforehand, as I would like to set it from the custom action.
My custom action is called before InstallInitialize and Execute is immediate.
From the installation log I have confirmation that the property is set.
However, my conditional installation does not work; it seems like the condition is always evaluated as false.
I tried evaluating conditions:
STREAM, STREAM=RED, STREAM="RED", <![CDATA[STREAM=RED]]>
I am running out of ideas and would appreciate any help.
Too late to test all of this, but here goes with some information. I will check back tomorrow. Essentially I think the problem is your custom action sequencing. Try before Costing.
Some things to consider:
Custom action sequencing: you need to sequence your custom action right and it needs to be present in both silent and interactive installation modes.
Did you try sequencing the set-property custom action before CostInitialize? You state you set it before InstallInitialize, but try it before CostInitialize instead (you might have tried that already).
And did you remember to insert this custom action in the InstallUISequence as well as the InstallExecuteSequence? You need to insert in both sequences in case the setup runs in silent mode. Before CostInitialize in both sequences I believe.
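As a sketch, the WiX v3 authoring for that scheduling might look like the following (the Binary Id CustomActionsDll is hypothetical; the DllEntry matches your C++ entry point):

<CustomAction Id="ConfigurationCheckAction" BinaryKey="CustomActionsDll"
              DllEntry="ConfigurationCheckAction" Execute="immediate" Return="check" />

<InstallUISequence>
  <Custom Action="ConfigurationCheckAction" Before="CostInitialize" />
</InstallUISequence>
<InstallExecuteSequence>
  <Custom Action="ConfigurationCheckAction" Before="CostInitialize" />
</InstallExecuteSequence>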
Feature Level: manipulating features via the feature level and INSTALLLEVEL is just one way to do feature control; you can also set features via the command line or from a custom action.
Setting a feature level to 0 should hide the feature from view in the setup's custom dialog.
Setting a feature level higher than the setup's INSTALLLEVEL will deselect the feature from installation.
And the other way around setting a feature level lower or equal to the setup's INSTALLLEVEL will select the feature for installation.
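As a quick sketch of those three rules (INSTALLLEVEL defaults to 1 unless you set it; the feature Ids here are made up):

<Property Id="INSTALLLEVEL" Value="100" />
<Feature Id="Standard" Title="Standard" Level="1" />   <!-- 1 <= 100: selected -->
<Feature Id="Extras" Title="Extras" Level="200" />     <!-- 200 > 100: deselected -->
<Feature Id="Hidden" Title="Hidden" Level="0" />       <!-- hidden, not installed -->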
The conditional syntax allowed is quite flexible and could provide the functionality you need outright, but I have never used it properly. Here is an example from the InstallShield forum.
ADDLOCAL & REMOVE: you can manipulate the feature selection by changing the values of the ADDLOCAL and REMOVE properties from a custom action (technically also REINSTALL and ADVERTISE) - and these properties can be set via the command line as well.
Win32: you can also use the Win32 functions MsiGetFeatureState and MsiSetFeatureState - from a C++ custom action - to set feature selection.
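For example, a rough sketch from an immediate-mode C++ custom action (using the feature Id "Red" from the question; hInstall is the usual custom action handle):

INSTALLSTATE installed = INSTALLSTATE_UNKNOWN;
INSTALLSTATE action = INSTALLSTATE_UNKNOWN;
// Query the current selection, then force the feature to install locally.
if (MsiGetFeatureState(hInstall, TEXT("Red"), &installed, &action) == ERROR_SUCCESS)
{
    MsiSetFeatureState(hInstall, TEXT("Red"), INSTALLSTATE_LOCAL);
}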
Frankly, the whole thing is a bit mad. Also keep in mind that there are feature action states (what is going to happen to a feature) and feature installed states (what state it is currently in). The Win32 function documentation should explain this.
Cross-linking for easy retrieval:
Unselected Feature Being Installed
I have done something similar, but we ended up controlling this at the component level (adding the condition to the <Component/> elements instead of the feature element, using a transform during heat). But our condition uses CDATA while also using double quotes around the value, which you don't list in what you've tried. So first I'd try the following conditions in your features:
<Condition><![CDATA[STREAM="RED"]]></Condition>
<Condition><![CDATA[STREAM="BLUE"]]></Condition>
If that still does not work, I would try the following:
Add the STREAM property with a default value to your WiX authoring: <Property Id="STREAM" Value="RED"/>. Then test with that default value to see if having it set from the start makes the conditions work. If so, that could mean you need to set the property sooner, possibly off a UI event.
As a last resort, you could add the conditions to each component as I did, but we only did that for very specific reasons; hopefully you can get the conditional feature to work with the above suggestions!
I hope the above fixes your problem, or at least leads you to the answer!
Thank you for your replies. In the end, a combination of your suggestions helped me.
I want to state what did and what did not work:
Adding the property to WiX with a default value was not necessary (nor was marking the property with Secure='yes').
Calling the custom action before CostInitialize did not solve the problem on its own, but I believe it was one of the factors that resolved the issue.
The conditional syntax was corrected by:
a) Putting the condition inside CDATA and adding quotes around the property value, as suggested: <Condition><![CDATA[STREAM="RED"]]></Condition>
b) Reversing the levels, so the feature has Level 1 and the condition has Level 0. This means the feature is always installed, unless the condition expression evaluates to true, which drops the feature's level to 0.
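For illustration, the corrected authoring might look something like this (same feature Ids as above; the conditions are negated so that a true condition is what removes a feature):

<Feature Id="Red" ConfigurableDirectory="TARGETDIR" Title="F1" Level="1">
  <Condition Level="0"><![CDATA[NOT (STREAM="RED")]]></Condition>
</Feature>
<Feature Id="Blue" ConfigurableDirectory="TARGETDIR" Title="F2" Level="1">
  <Condition Level="0"><![CDATA[NOT (STREAM="BLUE")]]></Condition>
</Feature>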
Concerning the correct ordering of the custom actions, the description of custom action type 51 contains the decisive hint:
"To affect a property used in a condition on a component or feature, the custom action must come before the CostFinalize action in the action sequence."
I have been learning Kubernetes recently, and I am not very clear about the difference between "kubectl apply" and "kubectl replace". Is there any situation where we can only use one of them?
I have written up a thorough explanation of the differences between apply, replace, and patch: Kubernetes Apply vs. Replace vs. Patch. It includes an explanation that the current top-ranked answer to this question is wrong.
Briefly, kubectl apply uses the provided spec to create a resource if it does not exist and to update, i.e., patch, it if it does. The spec provided to apply need only contain the required parts of a spec; when creating a resource the API will use defaults for the rest, and when updating a resource it will keep its current values.
kubectl replace completely replaces the existing resource with the one defined by the provided spec. replace wants a complete spec as input, including read-only properties supplied by the API, like .metadata.resourceVersion, .spec.nodeName for pods, .spec.clusterIP for services, and .secrets for service accounts. kubectl has some internal tricks to help you get that right, but typically the use case for replace is getting a resource's spec, changing a property, and then using that changed, complete spec to replace the existing resource.
The kubectl replace command has a --force option which actually does not use the replace, i.e., PUT, API endpoint. It forcibly deletes (DELETE) and then recreates (POST) the resource using the provided spec.
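To make that typical replace workflow concrete, a minimal sketch (the deployment name my-app is hypothetical):

# Fetch the complete live spec, including the API-supplied read-only fields.
kubectl get deployment my-app -o yaml > my-app.yaml
# Edit a property in my-app.yaml, then push the complete spec back.
kubectl replace -f my-app.yaml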
Updated Answer
My original answer was rather controversial and, in hindsight, I would even say half incorrect. So here is an updated answer which I hope will be more helpful:
Commands like kubectl patch, replace, delete, create, and even edit are all imperative: they tell kubectl exactly what to do.
The kubectl apply command is OTOH "declarative": it tells Kubernetes "here is a desired state (the YAML from the file provided to the apply command), now figure out how to get there": create, patch, or replace the object, whatever it takes... you get the idea.
So the 2 commands are hugely different.
E.g., with apply you can give it just the changes you want: it will figure out which properties of the object need to be changed and leave the others alone; if those properties are "immutable" (e.g. the nodeName of a pod), it will complain, and if you then repeat the command with --force, it is smart enough to do the equivalent of a replace --force.
In general, you should favor apply (with --force when necessary), and only use the imperative commands when the declarative approach does not give the expected result (although I would love to see examples of this -- I'm guessing this would happen only when you would need several steps because of interdependencies that will have negative consequences if done with apply).
The difference between apply and replace is similar to the difference between apply and create.
create / replace uses the imperative approach, while apply uses the declarative approach.
If you used create to create the resource, then use replace to update it. If you used apply to create the resource, then use apply to update it.
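As a minimal illustration of the two pairings (the file name is hypothetical):

# imperative pairing
kubectl create -f deployment.yaml
kubectl replace -f deployment.yaml

# declarative pairing: creates the resource if absent, patches it if present
kubectl apply -f deployment.yaml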
Note that both replace and apply require a complete spec, and both create the new resources first before deleting the old ones (unless --force is specified).
You can add the option -v=8 when using kubectl, and you will see the underlying API calls in the log, like this:

apply --force:
    patch 422
    delete 200
    get 200
    get 200
    get 404
    post 201

replace --force:
    get 200
    delete 200
    get 404
    post 201
kubectl apply will use various heuristics to selectively update only the values specified within the resource.
kubectl replace will replace / overwrite the entire object with the values specified. This should be preferred, as you avoid the complexity of the selective heuristic update. However, some resources like ingresses/load balancers can't really be replaced, as they're immutable.
Example of the heuristic update leading to non obvious operation: https://github.com/kubernetes/kubernetes/issues/67135
From: https://github.com/kubernetes/website/blob/master/content/en/docs/concepts/cluster-administration/manage-deployment.md
Disruptive updates
In some cases, you may need to update resource fields that cannot be updated once initialized, or you may just want to make a recursive change immediately, such as to fix broken pods created by a Deployment. To change such fields, use replace --force, which deletes and re-creates the resource.
I'm trying to perform some actions in the "httpRequestBegin" pipeline only when necessary.
My processor is executed after Sitecore resolves the user (processor type="Sitecore.Pipelines.HttpRequest.UserResolver, Sitecore.Kernel"), as I'm resolving the user too if Sitecore is not able to resolve it first.
Later, I want to add a rendering in the "insertRenderings" pipeline, but only if the actions in the previous pipeline were executed (if I resolved the user, show a message), so I'm trying to save some "flag" in the first step to check in the second.
My question is: where can I store that flag? I'm trying to find some kind of "per request" cache...
So far, I've tried:
The session: wrong, it's too early; the session doesn't exist yet.
Items (HttpContext.Current.Items): it doesn't work either; my item is not there in the second step.
So far I'm using the application cache (HttpContext.Current.Cache) with some unique key, but I don't like this solution.
Does anybody know a better approach to share this "flag"?
You could add a flag to the request headers and then check for its existence in the later pipelines, e.g.:
// in HttpRequest pipeline
HttpContext.Current.Request.Headers.Add("CustomUserResolve", "true");

// in InsertRenderings pipeline
var customUserResolve = HttpContext.Current.Request.Headers["CustomUserResolve"];
if (Sitecore.MainUtil.GetBool(customUserResolve, false))
{
    // custom logic goes here
}
This feels a little dirty; I think adding to Request.QueryString or Request.Params would have been nicer, but those are read-only. However, if you only need this as a one-time deal (i.e. only the first time the user is resolved), then it will work, since on the next request the headers are back to default without your custom header added.
HttpContext.Current.Cache or HttpRuntime.Cache could be the fastest solution here. Though this approach would not preserve data when the AppPool gets recycled.
If you add only a few keys to the cache and then maintain them, this solution might work for you. If each request puts an entry into the cache, it may eventually overflow the memory used by the worker process in the long run.
As an alternative, you may try the Sitecore.Context.ClientData property. It uses the ClientDataStore, which employs a database (look for the clientDataStore section in the web.config file) to store data. These entries can survive an AppPool recycle.
Though if you use it a lot, it may become a bottleneck under load when you need to write to and/or read from the entries.
If you know that a lot of entries could be created for sharing purposes, I'd create a scheduled task to clean obsolete entries out of the data store.
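A minimal sketch of that approach (the SetValue/GetValue method names are assumed from Sitecore.Kernel; verify against your Sitecore version):

// Persist a flag via the ClientDataStore-backed property...
Sitecore.Context.ClientData.SetValue("CustomUserResolve", true);
// ...and read it back later, possibly in a later request.
var resolved = Sitecore.Context.ClientData.GetValue("CustomUserResolve");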
I know this is a very old question, but I just want to post the solution I ended up with.
The following will hold data on a per-HTTP-request basis:
HttpContext.Current.Items["ModuleInfo"] = "Custom Module Info";
We can store data in the HttpContext in one Sitecore pipeline and retrieve it in another...
https://www.codeproject.com/Articles/146455/When-Can-We-Use-HttpContext-Current-Items-to-Store
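For completeness, a minimal sketch of the two processors (base class and namespace names assumed from Sitecore.Kernel; the flag key and class names are made up):

using System.Web;
using Sitecore.Pipelines.HttpRequest;
using Sitecore.Pipelines.InsertRenderings;

public class CustomUserResolver : HttpRequestProcessor
{
    public override void Process(HttpRequestArgs args)
    {
        // ... resolve the user here if Sitecore could not ...
        // Store a per-request flag for later pipelines.
        HttpContext.Current.Items["CustomUserResolved"] = true;
    }
}

public class ShowMessageRendering : InsertRenderingsProcessor
{
    public override void Process(InsertRenderingsArgs args)
    {
        // Read back the flag set earlier in the same request.
        if (HttpContext.Current.Items["CustomUserResolved"] is bool resolved && resolved)
        {
            // ... add the message rendering here ...
        }
    }
}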