vmc instances appname 3 gives error: Unknown app '3' - cloud-foundry

I have installed Ruby and Gems and also installed VMC following the documentation on the Cloud Foundry website. I could deploy a simple hello world application successfully. Several commands seem to work fine; however, a few commands just fail and I have no clue why.
When I run the following command:
vmc instances hellor 3
I get an error: Unknown app '3'
When I just run:
vmc instances hellor
It retrieves the instance fine and displays it without any error. But when I specify a number after that to increase the instances, it just seems to treat that number as an app name and gives me the error. What could be the reason? I could not find anyone else facing this issue on any of the forums. Any help on this will be highly appreciated. I am deploying on cloudfoundry.com.

The behavior of this command depends on the version of vmc you are using. You can see the version of vmc you are running with vmc --version.
With vmc version 0.3.x, the instances command works as you are expecting it to in your question. If you run vmc help with version 0.3.x, you will see this among other output:
instances <appname> <num|delta> Scale the application instances up or down
With vmc version 0.4.x (also known as vmc-ng), the instances command works differently and the scale command is introduced, as Hitesh says. If you run vmc help --all with version 0.4.x, you will see this among other output:
instances APPS... List an app's instances
scale [APP] Update the instances/memory limit for an application
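As a quick sketch of both syntaxes (the printed version and the +1 delta form are illustrative, following the <num|delta> usage above):
vmc --version
# e.g. vmc 0.3.23 -- a 0.3.x client
# With 0.3.x, pass the target count (or a delta) directly:
vmc instances hellor 3
vmc instances hellor +1
# With 0.4.x, use scale instead:
vmc scale hellor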

"vmc instances [APP]" is used to list the number of instances you have. To actually scale your application you can do "vmc scale [APP]" as shown below:
hghia#SEA-007~/workgalaxy/hello$ vmc scale hello
Instances> 3
1: 64M
2: 128M
3: 256M
4: 512M
5: 1G
6: 2G
Memory Limit> 64M
Scaling hello... OK
hghia#SEA-007~/workgalaxy/hello$ vmc instances hello
Getting instances for hello... OK
instance #0: running
started: 2012-12-10 03:41:39 PM
instance #1: running
started: 2012-12-10 03:46:56 PM
instance #2: running
started: 2012-12-10 03:46:56 PM
Thanks,
- Hitesh

Related

My GKE pods stopped with error "no command specified: CreateContainerError"

Everything was OK and the nodes were fine for months, but suddenly some pods stopped with an error.
I tried deleting the pods and nodes, but the issue remained.
Try the possible solutions below to resolve your issue:
Solution 1:
Check for a malformed character in your Dockerfile that could cause the container to crash.
The first thing to check when you encounter CreateContainerError is that you have a valid ENTRYPOINT in the Dockerfile used to build your container image. However, if you don't have access to the Dockerfile, you can configure your pod object by using a valid command in the command attribute of the object, as sketched below.
Another workaround is to not specify any workerConfig explicitly, which makes the workers inherit all configs from the master.
Refer to Troubleshooting the container runtime, the similar questions SO1 and SO2, and this similar GitHub issue for more information.
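A minimal sketch of the command workaround (the pod name, image path, and command are all hypothetical; adapt them to your app):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: gcr.io/MY_PROJECT/my-app:latest
    # command substitutes for a missing or invalid ENTRYPOINT in the image
    command: ["node", "server.js"]
EOF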
Solution 2:
The kubectl describe pod <podname> command provides detailed information about each pod in your cluster. With its help you can check the Events section for clues, as in the sketch below; if you see Insufficient CPU, follow the solutions listed after it.
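A minimal example (the pod name and event text are illustrative):
kubectl describe pod my-app-5d9c7b6f4-abcde
# In the Events section, look for messages such as:
#   Warning  FailedScheduling  0/3 nodes are available: 3 Insufficient cpu.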
The solution is to either:
1) Upgrade the boot disk: if using a pd-standard disk, it's recommended to upgrade to pd-balanced or pd-ssd.
2) Increase the disk size.
3) Use a node pool with a machine type that has more CPU cores.
See Adjust worker, scheduler, triggerer and web server scale and performance parameters for more information.
If you still have the issue, you can then update the GKE version of your cluster by manually upgrading the control plane to one of the fixed versions.
Also check whether you have updated your setup in the last year to use the new kubectl authentication plugin coming in GKE v1.26.
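If the missing authentication plugin turns out to be the problem, installing it is a one-liner (assuming gcloud was installed via the Cloud SDK installer rather than a package manager):
gcloud components install gke-gcloud-auth-plugin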
Solution 3:
If you have a pipeline on GitLab that deploys an image to a GKE cluster, check the version of the GitLab runner that handles the jobs of your pipeline.
It turns out that every image built through a GitLab runner running on an old version causes this issue at container start. Simply deactivate the old runners, let only runners on the latest version remain in the pool, and replay all pipelines.
Also check whether the GitLab CI script uses an old Docker image like docker:19.03.5-dind; updating to docker:dind helps Kubernetes start the pod again.

Cloud Run / Cloud Code deployment error in IntelliJ

I'm trying to follow the Getting Started instructions for Deploying a Cloud Run service with Cloud Code in IntelliJ (deploying the HelloWorld Flask app container with Cloud Run: Deploy), but I'm getting the following error. Any idea why this might be happening?
It worked initially, i.e. it deployed the app to the Cloud Run service using the same steps, and then started throwing this error after a week or so when I tried to redeploy; there was no change in project settings.
The IntelliJ and Docker versions are the latest.
I authenticated to the Google Cloud project with gcloud auth login --update-adc.
The local run works fine (Cloud Run: Run Locally), but running Cloud Run: Deploy throws this "code 89" error:
Preparing Google Cloud SDK (this may take several minutes for first time setup)...
Creating skaffold file: /var/.../skaffold8013155926954225609.tmp
Configuring image push settings in /var/.../skaffold8013155926954225609.tmp
../Library/Application Support/cloud-code/bin/versions/../
skaffold build --filename /var/.../skaffold8013155926954225609.tmp --tag latest --skip-tests=true
invalid skaffold config: getting minikube env:
running [/Users/USER/Library/Application Support/google-cloud-tools-java/managed-cloud-sdk/LATEST/google-cloud-sdk/bin/
minikube docker-env --shell none -p minikube --user=skaffold]
- stdout: "false exit code 89"
- stderr: ""
- cause: exit status 89
Failed to build and push Cloud Run container image.
Please ensure your builder settings are correct, network is available, you are logged in to a valid GCP project, and try again.
Edit: I see minikube error code 89: ExGuestUnavailable, an error code specific to the guest host; it's still unclear what might be causing this.
Looks like an issue with skaffold attempting to communicate with minikube (which could be used for building images as well). Please try cleaning minikube:
minikube stop
minikube delete --all --purge
and try again.
OK, I still don't know why it fails to deploy to Cloud Run from IntelliJ, but I got it to deploy from the command line:
cd my-flask-app
# step 1: build the container image from the Dockerfile and submit it to Container Registry
gcloud builds submit --tag gcr.io/GCP_PROJECT_ID/my-flask-app
# step 2: deploy the image on Cloud Run (reference)
gcloud run deploy --image gcr.io/GCP_PROJECT_ID/my-flask-app
references:
https://cloud.google.com/build/docs/building/build-containers
https://cloud.google.com/container-registry/docs/quickstart
Edit: the answer above did the trick: minikube delete --all --purge

How do I get a podman/buildah container to run under CentOS on GCE?

1. Summarize the problem
I am following this simple tutorial from Red Hat Developer to get a simple node/express container working.
I cannot get a container to run under a CentOS 7 VM on GCE.
I have a CentOS 7 GCE virtual machine, where I have Docker installed.
I am able to successfully build and run Docker containers and push them to Google's container registry with no problem.
Now I am trying to build podman/buildah containers, and do the same.
I have buildah/podman installed. When I run this:
podman build -t hello-world-nodejs .
I get the following error message:
cannot clone: Invalid argument
user namespaces are not enabled in /proc/sys/user/max_user_namespaces
Error: could not get runtime: cannot re-exec process
any ideas?
Additionally, if there are any guides into getting this image into Google's container registry, and running under Cloud Run, it would be greatly appreciated.
Ultimately the destination for some containers is a cloud service.
2. Provide background including what you've already tried
I have tried doing a web search for a solution, nothing found that has solved the problem so far.
3. Show some code
podman build -t hello-world-nodejs .
4. Describe expected and actual results including any error messages
I can create and run docker images/containers on this GCE VM, I am trying to do the same with buildah/podman.
The following solved this issue for me:
sudo bash -c 'echo 10000 > /proc/sys/user/max_user_namespaces'  # allow user namespaces (not persistent across reboots)
sudo bash -c "echo $(whoami):110000:65536 > /etc/subuid"  # allocate a subordinate uid range for your user
sudo bash -c "echo $(whoami):110000:65536 > /etc/subgid"  # allocate a subordinate gid range for your user
And then if you encounter any errors related to lchown, run the following:
sudo rm -rf ~/.{config,local/share}/containers /run/user/$(id -u)/{libpod,runc,vfs-*}
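For the registry and Cloud Run part of the question, a hedged sketch (the project id and service name are placeholders):
# authenticate podman against gcr.io using your gcloud credentials
podman login -u oauth2accesstoken -p "$(gcloud auth print-access-token)" gcr.io
# tag and push the image built earlier
podman tag hello-world-nodejs gcr.io/MY_PROJECT/hello-world-nodejs
podman push gcr.io/MY_PROJECT/hello-world-nodejs
# deploy the pushed image to Cloud Run
gcloud run deploy hello-world-nodejs --image gcr.io/MY_PROJECT/hello-world-nodejs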
I have spun up a CentOS 7 VM on GCE and got the same issue. The issue is caused because user namespaces are not enabled in the kernel by default. You have two options: either run podman as root (or use sudo), or enable user namespaces in your CentOS VM (the hard way), as sketched below.
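A minimal sketch of both options (the limit value is illustrative; unlike the echo above, the sysctl file persists across reboots):
# Option 1: run podman as root
sudo podman build -t hello-world-nodejs .
# Option 2: enable user namespaces persistently
echo 'user.max_user_namespaces=10000' | sudo tee /etc/sysctl.d/99-userns.conf
sudo sysctl --system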
According to the post here, rootless containers rely on user namespaces and the allocation of the uid and gid ranges that are required to make them work securely in your environment.
Stack Overflow is probably not the best place to ask this question. It's better to ask on the Server Fault site, since it's a server problem and not a coding problem.

Cloud Composer GKE Node upgrade results in Airflow task randomly failing

The problem:
I have a managed Cloud Composer environment, under a 1.9.7-gke.6 Kubernetes cluster master.
I tried to upgrade it (as well as the default-pool nodes) to 1.10.7-gke.1, since an upgrade was available.
Since then, Airflow has been acting randomly. Tasks that were working properly are failing for no given reason. This makes Airflow unusable, since the scheduling becomes unreliable.
Here is an example of a task that runs every 15 minutes and for which the behavior is very visible right after the upgrade:
airflow_tree_view
On hover on a failing task, it only shows an Operator: null message (null_operator). Also, there is no log at all for that task.
I have been able to reproduce the situation with another Composer environment in order to ensure that the upgrade is the cause of the dysfunction.
What I have tried so far:
I assumed the upgrade might have screwed up either the scheduler or Celery (Cloud Composer defaults to the CeleryExecutor).
I tried restarting the scheduler with the following command:
kubectl get deployment airflow-scheduler -o yaml | kubectl replace --force -f -
I also tried to restart Celery from inside the workers, with
kubectl exec -it airflow-worker-799dc94759-7vck4 -- sudo celery multi restart 1
Celery restarts, but it doesn't fix the issue.
So I tried to restart Airflow completely, the same way I did with airflow-scheduler.
None of these fixed the issue.
Side note: I can't access Flower to monitor Celery when following this tutorial (Google Cloud - Connecting to Flower). Connecting to localhost:5555 stays in a 'waiting' state forever. I don't know if it is related.
Let me know if I'm missing something!
1.10.7-gke.2 is available now [1]. Can you further upgrade to 1.10.7-gke.2 to see if the issue persists?
[1] https://cloud.google.com/kubernetes-engine/release-notes
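For reference, a hedged sketch of the upgrade commands (the cluster name and zone are placeholders):
# upgrade the master first
gcloud container clusters upgrade my-composer-cluster --zone us-central1-a --master --cluster-version 1.10.7-gke.2
# then upgrade the nodes
gcloud container clusters upgrade my-composer-cluster --zone us-central1-a --cluster-version 1.10.7-gke.2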

CloudFoundry nodeJS tutorial is out of date?

I'm following the Node.js tutorial, and as I get to the point of pushing the application to cloudfoundry.com, the push flow in the tutorial and the one I see are very different.
I use vmc 0.999 and this is what I see:
Name> hello-node
Instances> 1
1: node
2: other
Framework> node
1: node
2: node06
3: node08
4: other
Runtime> 3
1: 64M
2: 128M
3: 256M
4: 512M
5: 1G
6: 2G
Memory Limit> 64M
Creating ido-hello-node... OK
1: ido-hello-node.cloudfoundry.com
2: none
URL> ido-hello-node.cloudfoundry.com
Updating ido-hello-node... OK
Create services for application?> n
Save configuration?> y
Saving to manifest.yml... OK
Uploading ido-hello-node... OK
Using manifest file manifest.yml
Starting ido-hello-node... OK
Checking ido-hello-node
Am I doing something wrong, or is the tutorial simply outdated?
There is no vmc 0.999, so I am not sure what version you actually have. The latest version as of now (obtained by typing gem install vmc --pre) is vmc-0.5.0-rc1. You can check which version you have using vmc --version.
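A quick sketch of updating and verifying (the printed version is just an example):
gem install vmc --pre
vmc --version
# e.g. vmc 0.5.0.rc1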
Yes, the tutorials at http://docs.cloudfoundry.com are outdated and we are busy working on updated versions at http://cloudfoundry.github.com.
You can find the latest Node tutorial here: http://cloudfoundry.github.com/docs/using/deploying-apps/javascript/
If you find any mistakes or would like to suggest additions please feel free to contribute by sending a Github pull request.
It looks like your application pushed successfully. Was there a problem running it?
The vmc command is changing, and the docs you reference may be getting out of date. New docs are being created and published here: http://cloudfoundry.github.com/docs/using/deploying-apps/javascript/. These docs are a work-in-progress, but are more up to date.