`docker compose create ecs` without user input - amazon-web-services

I am looking for a way to run docker compose create ecs without having to manually select where it gets AWS credentials from (as it's being run from a build agent).
In the following AWS blog it is shown being used with a --from-env flag (which is exactly what I want); however, that flag doesn't seem to actually exist, either in the official docs or by trial and error. Is there something I am missing?

Apparently it's a known issue
https://github.com/docker/docker.github.io/issues/11845
You have to enable experimental support for the Docker CLI on Linux to create an ECS context :S
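For what it's worth, on Linux that roughly means enabling the experimental CLI and then creating the context non-interactively. The exact flags depend on the Docker/Compose CLI version you have installed, so treat this as a sketch rather than gospel:
$ export DOCKER_CLI_EXPERIMENTAL=enabled   # or set "experimental": "enabled" in ~/.docker/config.json
$ docker context create ecs myecscontext --from-env   # picks up AWS_ACCESS_KEY_ID etc. from the environment
$ docker context use myecscontext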

Related

Dataproc custom image: Cannot complete creation

For a project, I have to create a Dataproc cluster on one of the outdated image versions (for example, 1.3.94-debian10) that contain the Apache Log4j 2 vulnerability. The goal is to trigger the related alert (DATAPROC_IMAGE_OUTDATED), in order to check how SCC works (it is just for a test environment).
I tried to run the command gcloud dataproc clusters create dataproc-cluster --region=us-east1 --image-version=1.3.94-debian10 but got the following message: ERROR: (gcloud.dataproc.clusters.create) INVALID_ARGUMENT: Selected software image version 1.3.94-debian10 is vulnerable to remote code execution due to a log4j vulnerability (CVE-2021-44228) and cannot be used to create new clusters. Please upgrade to image versions >=1.3.95, >=1.4.77, >=1.5.53, or >=2.0.27. For more information, see https://cloud.google.com/dataproc/docs/guides/recreate-cluster, which makes sense, in order to protect the cluster.
I did some research and discovered that I will have to create a custom image with said version and generate the cluster from that. The thing is, I have tried to read the documentation and find some tutorials, but I still can't understand how to start or how to run the file generate_custom_image.py, for example, since I am not comfortable with Cloud Shell (I prefer the console).
Can someone help? Thank you
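For reference, generate_custom_image.py lives in the GoogleCloudDataproc/custom-images repository and is typically run from any machine with gcloud configured, roughly like this. The flag names are taken from that repository's README, and the bucket, zone, script and image names below are placeholders, so verify everything against the current docs:
$ git clone https://github.com/GoogleCloudDataproc/custom-images
$ cd custom-images
$ echo '#!/bin/bash' > customize.sh   # no-op customization script, just to satisfy the required flag
$ python generate_custom_image.py \
    --image-name custom-1-3-94-debian10 \
    --dataproc-version 1.3.94-debian10 \
    --customization-script customize.sh \
    --zone us-east1-b \
    --gcs-bucket gs://my-staging-bucket
$ gcloud dataproc clusters create dataproc-cluster --region=us-east1 --image=custom-1-3-94-debian10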

Application information missing in Spinnaker after re-adding GKE accounts - using spinnaker-for-gke

I am using a Spinnaker implementation set up on GCP using the spinnaker-for-gcp tools. My initial setup worked fine. However, we recently had to reconfigure our GKE clusters (independently of Spinnaker), so I deleted and re-added our GKE accounts. After doing that, the Spinnaker UI appears to show the existing GKE-based applications, but if I click on any of them there are no clusters or load balancers listed anymore! Here are the spinnaker-for-gcp commands that I executed:
$ hal config provider kubernetes account delete company-prod-acct
$ hal config provider kubernetes account delete company-dev-acct
$ ./add_gke_account.sh # for gke_company_us-central1_company-prod
$ ./add_gke_account.sh # for gke_company_us-west1-a_company-dev
$ ./push_and_apply.sh
When the above didn't work, I did an experiment where I deleted the two accounts and added an account with a different name (but the same GKE cluster), then ran push_and_apply. As before, the output messages seemed to indicate that everything worked, but the Spinnaker UI continued to show all the old account names, despite the fact that I had deleted them and added new ones (which did not show up). And, as before, no details could be seen for any of the applications. Also note that hal config provider kubernetes account list did show the new account name and did not show the old ones.
Any ideas for what I can do, other than complete recreating our Spinnaker installation? Is there anything in particular that I should look for in the Spinnaker logs in GCP to provide more information?
Thanks in advance.
-Mark
The problem turned out to be that the data that was in my .kube/config file in Cloud Shell was obsolete. Removing that file, recreating it (via the appropriate kubectl commands) and then running the commands mentioned in my original description fixed the problem.
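For anyone hitting the same thing, a minimal sketch of the recovery steps, assuming the cluster, location and project names implied by the context names above (substitute your own), and using gcloud container clusters get-credentials to rewrite the kubeconfig entries:
$ rm ~/.kube/config
$ gcloud container clusters get-credentials company-prod --region us-central1 --project company
$ gcloud container clusters get-credentials company-dev --zone us-west1-a --project company
$ ./add_gke_account.sh   # once per account, as in the original steps
$ ./push_and_apply.sh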
Note, though, that it took a lot of shell-script and GCP log reading by our team to figure out the problem. Ultimately, it would have been nice if the add_gke_account.sh or push_and_apply.sh scripts could have detected the issue, presumably by verifying that the expected changes did, in fact, correctly occur in the running Spinnaker.

Is there a gcloud command to disable all previous versions of a secret?

I am looking for a gcloud command to disable all the previous secret versions except the latest one.
Let me explain my entire use case:
So, I've a bitbucket pipeline which creates a new version every time I run this pipeline. I am using the following command to add a new version to already existing secret:
gcloud secrets versions add api-server-versions --data-file=./new.json
Now, this command creates a new version of the secret every time the pipeline is run, leaving the previous versions still enabled.
So, what I want to do is disable all the previous versions of the secret as soon as the new secret version is created.
Is there any gcloud command to achieve this or any other way to do this using commands?
Posting this as a community wiki, since it's based on the discussion between @sethvargo and the OP in the comments to the question:
Unfortunately there is no way to do this with a single command, nor by using | or a filter in the gcloud command, so what you have to do in order to achieve this is list the secret's versions and disable each one individually.
All that being said, this might be considered a good feature for a gcloud command, so if you'd like, you can create a feature request so that it is considered for implementation by the Google Cloud team in the future.
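As a workaround inside the pipeline, a small shell loop can do that listing and disabling right after the new version is added. The --filter and --format expressions below are assumptions about the gcloud output, so verify them against your gcloud version:
# Add the new version and capture its version number (assumes value(name.basename()) yields just the number)
NEW_VERSION=$(gcloud secrets versions add api-server-versions --data-file=./new.json --format="value(name.basename())")
# Disable every other version that is still enabled
for v in $(gcloud secrets versions list api-server-versions --filter="state:enabled" --format="value(name.basename())"); do
  if [ "$v" != "$NEW_VERSION" ]; then
    gcloud secrets versions disable "$v" --secret=api-server-versions
  fi
done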
Note that disabling a secret version requires the Secret Manager admin role on the project or organization, and IAM roles cannot be granted on individual secret versions; refer to this link for more details.
Another way is to disable the previous versions and then add the new version using the gcloud commands.

Enabling Google Cloud Shell "boost" mode via gcloud cli

I use the method mentioned in this excellent answer https://stackoverflow.com/a/49515502/10690958 to connect to Google Cloud Shell via SSH from my Ubuntu workstation. Occasionally, I need to enable "boost mode". In that case, I currently have to open Cloud Shell via Firefox (https://console.cloud.google.com/cloudshell/editor?shellonly=true), then log in and enable boost mode. After that I can close Firefox and use the gcloud method to access the Cloud Shell VM in boost mode.
I would like to do this (access boost-mode) purely through the gcloud cli, since using the browser is quite cumbersome.
The official docs don't mention any method of enabling boost mode via gcloud. There seem to be only three options, i.e. ssh/scp/sshfs via gcloud alpha cloud-shell. Is there perhaps a way to enable this via some configuration option?
thanks
There does not seem to be any option to enable the boost mode from either the v1 or v1alpha1 versions of the Cloud Shell API (both versions undocumented).
The gcloud command actually uses the API to get the status of your Cloud Shell environment, which contains information about how to connect through SSH, updates the SSH keys if needed, and then connects using that info (use gcloud alpha cloud-shell ssh --log-http if you want to check it by yourself).
As far as I can see, when you click the "Boost mode" button, the browser makes a call to https://ssh.cloud.google.com/devshell?boost=true&forceNewVm=true (and some more parameters), but I can't make it work on the command line, so I'm guessing it's doing some other stuff that I can't identify.
If you need this for your workflow, you could raise a feature request on Google's issue tracker.
It is now possible to access Cloud Shell in boost mode from the CLI with this command: gcloud alpha cloud-shell ssh --boosted. Other possible arguments are documented here. Just a warning: the first time I tried it, my home directory became unreadable and started returning "Input/output error"; logging out and in again fixed the issue.
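For completeness, a boosted session is started the same way as a regular one, just with the extra flag (assuming your installed alpha component is recent enough to know about it):
$ gcloud components update   # make sure the alpha component is up to date
$ gcloud alpha cloud-shell ssh --boosted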

Does sagemaker use nvidia-docker or docker runtime==nvidia by default or user need to manually set up?

As stated in the question: does SageMaker use nvidia-docker or docker runtime==nvidia by default, or does the user need to set it up manually?
A common error message is "CannotStartContainerError. Please ensure the model container for variant variant-name-1 starts correctly when invoked with 'docker run serve'.", and the container didn't show as running with the NVIDIA driver.
So, do we need to set it up manually?
I'm using a tensorflow-gpu image as the base image for my containers, and I can use the GPU without specifying anything GPU-related. When building Docker containers for SageMaker you have to beware of the folder structure, and your container must be able to start with the command serve (which is what the error suggests).
If you have problems setting this up, I find this example the most useful one for getting the hang of it.
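As a quick local sanity check (assuming Docker 19.03+ with the NVIDIA container toolkit installed on the host, and your-image:latest as a placeholder for your container), you can mimic what SageMaker does and confirm the GPU is visible:
$ docker run --rm your-image:latest serve               # this is how SageMaker starts the inference container
$ docker run --rm --gpus all your-image:latest nvidia-smi   # should list the GPU if the base image and host runtime are set up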