I'm using kube-aws to run a Kubernetes cluster on AWS, and everything works as expected.
Now, I realize that cron jobs aren't turned on in the version I'm using (v1.7.10_coreos.0), while the documentation for Kubernetes only states the following:
For previous versions of cluster (< 1.8) you need to explicitly enable batch/v2alpha1 API by passing --runtime-config=batch/v2alpha1=true to the API server (see Turn on or off an API version for your cluster for more).
And the documentation that text points to only says the following (this is the actual, full documentation):
Specific API versions can be turned on or off by passing --runtime-config=api/ flag while bringing up the API server. For example: to turn off v1 API, pass --runtime-config=api/v1=false. runtime-config also supports 2 special keys: api/all and api/legacy to control all and legacy APIs respectively. For example, for turning off all API versions except v1, pass --runtime-config=api/all=false,api/v1=true. For the purposes of these flags, legacy APIs are those APIs which have been explicitly deprecated (e.g. v1beta3).
I have been unable to find information about how to change the configuration of a running cluster, and of course I don't want to just try re-running the command on the API server.
Note that kube-aws still uses hyperkube, not kubeadm. Also, the /etc/kubernetes/manifests directory only contains the ssl directory.
The setting I want to apply is this: --runtime-config=batch/v2alpha1=true
What is the proper way, preferably using kubectl, to apply this setting and have the apiservers restarted?
Thanks.
batch/v2alpha1=true is set by default in kube-aws. You can find it here
I am looking for a way to run docker compose create ecs without having to manually select where it gets AWS credentials from (as it's being run from a build agent).
The following AWS blog shows it being used with a --from-env flag (which is exactly what I want); however, that flag doesn't seem to actually exist, either in the official docs or by trial and error. Is there something I am missing?
Apparently it's a known issue
https://github.com/docker/docker.github.io/issues/11845
You have to enable experimental support for the Docker CLI on Linux to create an ECS context :S
My (python) code runs inside a docker container.
The container is deployed on AWS EC2 for our production and testing purposes, but sometimes on our local machines or other cloud vendors for development and CICD purposes.
For some functionality, I want my python code to be able to distinguish between an EC2 deployment and non-EC2. Is this possible?
I found this answer, which uses the EC2 instance metadata endpoint, but I'm wondering:
a) Would this also work from within a docker container?
b) Isn't there a more elegant solution? Issuing an HTTP request and waiting for it seems a bit too much.
(I'm aware that a simple solution is probably to add some proprietary environment variable or flag; I'm trying to find a more native way to check this.)
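For reference, the metadata-endpoint check from (a) is roughly what I have in mind (just a sketch, assuming the requests package is available; 169.254.169.254 should normally be reachable from inside a container on the instance, although IMDSv2 hop limits can block it):

import requests  # assumption: requests is installed in the image

def running_on_ec2(timeout=0.2):
    """Best-effort check: probe the EC2 instance metadata service (IMDSv2)."""
    try:
        # Request a short-lived IMDSv2 session token first.
        token = requests.put(
            "http://169.254.169.254/latest/api/token",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
            timeout=timeout,
        ).text
        # Any successful metadata read means we are on EC2.
        resp = requests.get(
            "http://169.254.169.254/latest/meta-data/instance-id",
            headers={"X-aws-ec2-metadata-token": token},
            timeout=timeout,
        )
        return resp.status_code == 200
    except requests.exceptions.RequestException:
        return False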
I recommend going with a custom environment variable. That way you will be able to easily reproduce the required behaviour outside of AWS (on your workstation or with another cloud provider).
Using curl or checking for the presence of /etc/cloud would make your application's behaviour dependent on third-party services/tools. Besides the added logic complexity (you'd have to handle possible curl errors, like invalid response codes), that can lead to bugs you surely don't want to meet.
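A minimal sketch of that approach (DEPLOY_TARGET is just an example name you would define yourself, not anything AWS provides):

import os

def is_ec2_deployment():
    # DEPLOY_TARGET is a self-defined variable, set e.g. in the task
    # definition or the docker run command: -e DEPLOY_TARGET=ec2
    return os.environ.get("DEPLOY_TARGET", "").lower() == "ec2"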
I'm using Google Cloud Profiler (located at https://console.cloud.google.com/profiler) and would like to know how my profiling data changes across different builds of my application.
One way to do that would be to check the range of dates during which a particular commit was running on production, but that's time consuming because I have to:
Get the start date/time of release, determine the date/time of the next release
Set those dates manually in the profiler interface from the link above
That's really not terrible, but it'd be great to be able to set a BUILD_ID environment variable like I can in Cloud Build and then be able to access that from the UI. Is something like this possible? Or is my approach the best way to do this at the moment?
Comparing across service versions would likely be a simpler and more precise way to do this (as opposed to using the time interval to select for profiles). To compare across service versions, it is necessary that the profiling agents set the service version.
The service version can be specified in the configuration passed to the agent (for the Go, Python, or Node.js agent) or via the -cprof_service_version flag (for the Java agent). If one is setting the service version using the configuration passed to the agent (applicable for the Go, Python, and Node.js agents), it may be convenient to use a flag or command line argument to set the service version so that the source code won't need to be updated with each new version.
If one is running on Knative or App Engine standard, the service version should be auto-populated. These environments set the K_REVISION and GAE_VERSION environment variables (respectively), and the profiling agents (for all supported languages) use these environment variables to populate the service version. If one is running in another environment and modifying the source code is inconvenient or infeasible, one can set either the K_REVISION or GAE_VERSION environment variable in the environment running the application with the agent enabled to specify the service version.
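For example, with the Python agent, wiring the version in from the environment could look roughly like this (a sketch; APP_VERSION is an illustrative variable you would set yourself at deploy time, not something GCP defines):

import os
import googlecloudprofiler

# APP_VERSION is a self-defined deploy-time variable (e.g. a git tag or
# build number); it is not populated automatically.
googlecloudprofiler.start(
    service="my-service",
    service_version=os.environ.get("APP_VERSION", "unversioned"),
)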
My understanding is that the BUILD_ID is available at build time, but not at run time, so I don't know that it's possible for agents to use that directly.
(Disclosure: I work on Cloud Profiler at Google)
You can set the service version for this purpose. Please refer to the agent documentation for how to set it for supported languages.
For example, this shows using ServiceVersion for Go services.
I am just looking into using GCP for cloud computing. So far I have been using AWS with the boto3 library, and I was trying to use the Google Python client API for launching instances.
So an example I came across was from their docs here. The instance machine type is specified as:
machine_type = "zones/%s/machineTypes/n1-standard-1" % zone
and then it is passed to the configuration as:
config = {
'name': name,
'machineType': machine_type,
....
I wonder how one goes about specifying machines with GPUs, custom RAM, processors, etc. from the Python API?
The Python API is basically a wrapper around the REST API, so in the example code you are using, the config object is being built using the same schema as would be passed in the insert request.
Reading that document shows that the guestAccelerators structure is the relevant one for GPUs.
Custom RAM and CPUs are more interesting. There is a format for specifying a custom machine type name (you can see it in the gcloud documentation for creating a machine type). The format is:
[GENERATION]custom-[NUMBER_OF_CPUs]-[RAM_IN_MB]
Generation refers to the "n1" or "n2" in the predefined names. For n1, this block is empty; for n2, the prefix is "n2-". That said, experimenting with gcloud seems to indicate that "n1-" as a prefix also works as you would expect.
So, a 1-CPU n1 machine with 5 GB of RAM would be custom-1-5120. This is what you would replace n1-standard-1 with in your example.
You are, of course, subject to the limits on how a custom machine can be specified, such as the fact that RAM must be a multiple of 256 MB.
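Putting that together with the sample you started from, the relevant parts of the config could look roughly like this (a sketch only; the zone, instance name, and accelerator type are examples, and GPU instances also need their host-maintenance policy set to TERMINATE since they cannot live-migrate):

zone = "us-central1-a"  # example zone; GPUs are only available in certain zones

# Custom machine type: 1 vCPU and 5 GB (5120 MB) of RAM on the n1 generation.
machine_type = "zones/%s/machineTypes/custom-1-5120" % zone

config = {
    "name": "my-gpu-instance",   # example name
    "machineType": machine_type,

    # One NVIDIA Tesla T4 (example accelerator type).
    "guestAccelerators": [
        {
            "acceleratorType": "zones/%s/acceleratorTypes/nvidia-tesla-t4" % zone,
            "acceleratorCount": 1,
        }
    ],

    # Required for GPU instances: terminate on host maintenance.
    "scheduling": {"onHostMaintenance": "TERMINATE"},

    # ... disks, networkInterfaces, etc. as in the original sample ...
}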
Finally, there's a neat little feature at the bottom of the console's "create instance" page: clicking the relevant link will show you the exact REST object you need to create the machine you have defined in the console at that very moment, so it can be very useful for seeing how a particular parameter is used.
You can create a Compute Engine instance using the Compute Engine API, specifically the insert API request. It accepts a JSON payload in a REST request that describes the VM instance you desire. A full specification of the request is found in the docs. It includes:
machineType - specs of different (common) machines including CPUs and memory
disks - specs of disks to be added including size and type
guestAccelerators - specs for GPUs to add
many more options ...
You can also create a template describing the machine structure you want, and then simplify instance creation by simply naming the template to use, thereby abstracting the configuration details out of your code and into configuration.
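For instance, with the google-api-python-client used in your sample, launching from an existing instance template might look roughly like this (a sketch; the project, zone, and template names are placeholders):

from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# Placeholders: substitute your own project, zone, and template.
project = "my-project"
zone = "us-central1-a"
template = "global/instanceTemplates/my-template"

operation = compute.instances().insert(
    project=project,
    zone=zone,
    sourceInstanceTemplate=template,
    body={"name": "instance-from-template"},
).execute()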
Beyond using REST requests (which can be issued from Python), you can also create Compute Engine instances from:
GCP Console - web interface
gcloud - command line (which I suspect can also be driven from within Python)
Deployment Manager - configuration driven deployment which includes Python as a template language
Terraform - popular environment for creating Infrastructure as Code environments
Is it possible to setup auto-scaling capabilities for an app depending on the workload?
I haven't found anything useful in either the Developer Console or the docs. Is there maybe a hidden possibility via the CLI?
Just wondering if this is possible as I'm doing a basic evaluation on Swisscom Application Cloud.
There are several open-source autoscaling projects in various states of production readiness, like:
https://github.com/cloudfoundry-incubator/app-autoscaler
https://github.com/cloudfoundry-samples/cf-autoscaler
Pivotal Cloud Foundry supports auto-scaling of the applications out of the box (http://docs.pivotal.io/pivotalcf/1-8/appsman-services/autoscaler/autoscale-configuration.html)
This capability is not present at the moment, and it is not part of the (open source) Cloud Foundry platform either. Some platforms provide it, but it has not been released to the community yet!
There are various ways you can do that.
As described by Anatoly, you can obviously use the "Auto Scaler" service, if it is offered by your provider.
(You can figure that out by calling this feature-flags API check: https://apidocs.cloudfoundry.org/253/feature_flags/get_the_app_scaling_feature_flag.html)
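If you'd rather do that check programmatically than with curl, a rough sketch (the API endpoint and token are placeholders; the token can be obtained with cf oauth-token):

import requests  # assumption: requests is installed

CF_API = "https://api.example.com"       # placeholder: your CF API endpoint
TOKEN = "<output of `cf oauth-token`>"   # placeholder

resp = requests.get(
    CF_API + "/v2/config/feature_flags/app_scaling",
    headers={"Authorization": TOKEN},
)
resp.raise_for_status()
print("app_scaling enabled:", resp.json().get("enabled"))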
Another option is actually writing your own small auto-scaler based on the custom scaling behaviours you need for your application (DIY ;)). A rough sketch combining the steps is shown after the list below.
Get load: First you need information about the current "load" of the app (e.g. memory usage, CPU usage, etc.). You can easily get that by pulling data from the v2/apps//stats API. See details here:
https://apidocs.cloudfoundry.org/253/apps/get_detailed_stats_for_a_started_app.html
Write some magic: Now you need to write some logic to determine whether the app is under heavy load. That could be based on CPU, memory, or other bottlenecks you pull out of the stats API.
Scale up/down: With the PUT v2/apps// API you can now easily change the number of instances of your app by setting the "instances" parameter accordingly.
https://apidocs.cloudfoundry.org/253/apps/updating_an_app.html
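Putting the three steps above together, a very rough sketch of such a DIY scaler (every URL, GUID, threshold, and the token handling are placeholders to adapt; error handling is omitted):

import time

import requests  # assumption: requests is installed

CF_API = "https://api.example.com"        # placeholder: your CF API endpoint
APP_GUID = "<app-guid>"                   # placeholder: e.g. from `cf app myApp --guid`
HEADERS = {"Authorization": "<output of `cf oauth-token`>"}  # placeholder

CPU_HIGH, CPU_LOW = 0.80, 0.20            # example thresholds
MIN_INSTANCES, MAX_INSTANCES = 2, 10      # example limits

def current_state():
    # Step 1: pull per-instance stats and derive the average CPU usage.
    stats = requests.get("%s/v2/apps/%s/stats" % (CF_API, APP_GUID),
                         headers=HEADERS).json()
    instances = len(stats)
    avg_cpu = sum(s["stats"]["usage"]["cpu"] for s in stats.values()) / instances
    return instances, avg_cpu

def set_instances(count):
    # Step 3: update the "instances" parameter of the app.
    requests.put("%s/v2/apps/%s" % (CF_API, APP_GUID),
                 headers=HEADERS, json={"instances": count})

while True:
    instances, avg_cpu = current_state()
    # Step 2: the "magic" -- decide whether to scale up or down.
    if avg_cpu > CPU_HIGH and instances < MAX_INSTANCES:
        set_instances(instances + 1)
    elif avg_cpu < CPU_LOW and instances > MIN_INSTANCES:
        set_instances(instances - 1)
    time.sleep(60)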
For PCF, you can take a look at https://github.com/Pivotal-Field-Engineering/autoscaling-cli-plugin. It should give you what you are looking for.
You will need to install it via:
cf install-plugin https://github.com/pivotal-cf-experimental/autoscaling-cli-plugin
and configure it using steps similar to those below.
Get the details of the autoscaler from your marketplace
cf m | grep app-autoscaler
Install the auto scaler plugin using service & plan from above
cf create-service <service> <plan> myAutoScaler
Bind the service to your app (or you can do this via your deployment manifest)
cf bind-service myApp myAutoScaler
Configure your scaling parameters
cf configure-autoscaling --min-threshold ## --max-threshold ## --max-instances # --min-instances # myApp myAutoScaler