E: Malformed entry 7 in list file /etc/apt/sources.list.d/google-cloud-sdk.list (Suite) E: The list of sources could not be read - google-cloud-platform

I get this error when trying to run sudo apt-get update:
E: Malformed entry 7 in list file /etc/apt/sources.list.d/google-cloud-sdk.list (Suite)
E: The list of sources could not be read.
I tried running sed to remove the entry, but no luck.
Please help.
Okay, after following the first 5 steps in the link: cloud.google.com/sdk/docs/quickstart-debian-ubuntu I received the following output:
Your Google Cloud SDK is configured and ready to use!
Commands that require authentication will use cloud#postaprayer.org by default
Commands will reference project post-a-prayer by default
Compute Engine commands will use region us-west2 by default
Compute Engine commands will use zone us-west2-a by default
Run gcloud help config to learn how to change individual settings
This gcloud configuration is called [postaprayerdns]. You can create additional configurations if you work with multiple accounts and/or projects.
Run gcloud topic configurations to learn more.
Some things to try next:
Run gcloud --help to see the Cloud Platform services you can interact with. And run gcloud help COMMAND to get help on any gcloud command.
Run gcloud topic --help to learn about advanced features of the SDK like arg files and output formatting
Okay. I was able to get into the google-cloud-sdk.list file and edit it using sudo nano /etc/apt/sources.list.d/google-cloud-sdk.list
From there I edited the .list file and deleted line 7 (which read "clear").
I followed these instructions to solve this error: https://askubuntu.com/questions/332669/unable-to-edit-etc-apt-sources-list-file
sudo nano /etc/apt/sources.list.d/google-cloud-sdk.list

Solved. Used nano to edit the .list file, deleted the corrupt entry 7, and then saved.
Summary:
sudo nano /etc/apt/sources.list.d/google-cloud-sdk.list
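For reference, a well-formed google-cloud-sdk.list normally contains a single deb line; a stray word like "clear" on a line by itself is exactly what makes apt report a malformed entry. The entry typically looks something like the following (the keyring path may differ depending on how the SDK repository was added):
deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main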


Error while trying to authenticate with `gcloud init`

I am trying to authenticate to the gcloud SDK using gcloud init.
I get a URL I'm supposed to open in order to copy a token and return it to the CLI... but instead of a token, I get this error:
Authorization Error
Error 400: invalid_request
Missing required parameter: redirect_uri
Is this a bug?
gcloud version info:
Google Cloud SDK 377.0.0
alpha 2022.03.10
beta 2022.03.10
bq 2.0.74
bundled-python3-unix 3.8.11
core 2022.03.10
gsutil 5.8
I am running gcloud init on WSL2 (Ubuntu 18.04). This error occurs right after installing gcloud with sudo apt install google-cloud-sdk.
I had the same problem; gcloud has slightly changed the way its auth flow works.
Run gcloud auth login and then copy the whole output (not just the URL) to a terminal on a computer that has both a web browser and gcloud CLI installed. The command you should copy looks like
gcloud auth login --remote-bootstrap="https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=****.apps.googleusercontent.com&scope=openid+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth&state=****&access_type=offline&code_challenge=****&code_challenge_method=S256&token_usage=remote"
When you run that on your computer that has a web browser, it will open a browser window and prompt you to log in. Once you authorize your app in the web browser you get a new URL in your terminal that looks like
https://localhost:8085/?state=****&code=****&scope=email%20openid%20https://www.googleapis.com/auth/userinfo.email%20https://www.googleapis.com/auth/cloud-platform%20https://www.googleapis.com/auth/appengine.admin%20https://www.googleapis.com/auth/compute%20https://www.googleapis.com/auth/accounts.reauth&authuser=0&hd=****&prompt=consent
Paste this new URL back into the prompt on your headless machine after Enter the output of the above command: (in your case, this would be in your WSL2 terminal). Press Enter and you get the output:
You are now logged in as [****].
Your current project is [None]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
[8]+ Done code_challenge_method=S256
Try
gcloud init --console-only
Then you will get a URL that will work.
You must log in to continue. Would you like to log in (Y/n)? y
WARNING: The --[no-]launch-browser flags are deprecated and will be removed on June 7th 2022 (Release 389.0.0). Use --no-browser to replace --no-launch-browser.
Go to the following link in your browser:
https://accounts.google.com/o/o....
Update 2022-06-20: the --console-only option was removed in version 389.0.0.
So instead use
gcloud init --no-browser
There are some workarounds and they depend on your particular Windows environment.
In this post and in this one you can find the issues most closely related to gcloud running in WSL.
Here you can find some related Google Groups threads that might be helpful.
Finally, you could check some Windows troubleshooting guides that can help with WSL2-related issues in your own environment.
EDIT:
It seems this answer and the one from #K.I. give other commands that don't rely on implementation details. I've tested these 3 commands:
gcloud init --console-only
gcloud auth login --no-launch-browser
gcloud init --no-launch-browser
Original answer, another workaround (17/07/2022):
DISPLAY=":0" gcloud auth login
is a workaround mentioned in this issue. Instead of requiring you to install gcloud CLI outside WSL2, it pretends there is a browser.
A link is printed, click it, login on your browser, and you're authenticated with the CLI.
Then run gcloud init again.
You can avoid the error by using another method of installing gcloud:
curl https://sdk.cloud.google.com | bash
exec -l $SHELL #restart shell
gcloud init

Unable to execute a step on a running EMR

I have an EMR cluster (5.28.1) running in AWS, but I forgot to install some Python libraries as part of the bootstrap action. Now that the cluster is running, I was simply attempting to add a step via the EMR console. Here are my settings:
JAR: s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar
Main class: None
Arguments: s3://xxxx/install_python_libraries.sh
Unfortunately, I get the following error.
Cannot run program "s3://xxxxx/install_python_libraries.sh" (in directory "."): error=2, No such file or directory
I am not sure what I am doing wrong. The shell script looks like this.
#!/bin/bash -xe
# Non-standard and non-Amazon Machine Image Python modules:
sudo pip-3.6 install boto3
sudo pip-3.6 install xmltodict
I also tried this by simply using 'command-runner.jar' but I get the same error. Can you please help me figure out the problem so I can do this via the console? I would like to install the libraries on all nodes - master and core.
Thanks
The issue is the xxx.sh file's EOL/carriage-return type.
In other words, if it is Windows ("\r\n") then it will not work and will return the ./ file not found error.
Convert it to Unix type ("\n") using something like Notepad++ and it will run fine.
(In Notepad++: Edit > EOL Conversion > Unix (LF), hit save, and try again.)
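If you prefer to fix the line endings from a shell instead of Notepad++, a quick sketch (the script name and S3 path are the placeholders from the question):
sed -i 's/\r$//' install_python_libraries.sh   # strip the CR from CRLF endings
dos2unix install_python_libraries.sh           # alternative, if dos2unix is installed
aws s3 cp install_python_libraries.sh s3://xxxx/install_python_libraries.sh   # re-upload before re-running the step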

invalid argument "gcr.io//hello-app:v1" for "-t, --tag" flag: invalid reference format

I'm following this tutorial: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app for Google Cloud Platform. I'm using the Google Cloud Shell command line. When I got to the step:
To build the container image of this application and tag it for uploading, run the following command:
docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .
I get an error:
invalid argument "gcr.io//hello-app:v1" for "-t, --tag" flag: invalid reference format
Bear in mind I already have a 3-instance cluster (created in Kubernetes Engine) and one standalone VM instance, both left over from previous tutorials and visible in my VM instances. Not sure if this has anything to do with the error.
Thanks in advance.
You missed setting PROJECT_ID. In the "Before you begin" section of the tutorial you linked to, it has you run
gcloud config set project [PROJECT_ID]
and then in Step 1 you run
export PROJECT_ID="$(gcloud config get-value project -q)"
After those two commands you should have the shell variable set correctly.
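A quick way to confirm the variable is set before building (the project ID shown is a placeholder):
gcloud config set project my-project-id
export PROJECT_ID="$(gcloud config get-value project -q)"
echo "PROJECT_ID=${PROJECT_ID}"   # should print your project ID, not an empty string
docker build -t "gcr.io/${PROJECT_ID}/hello-app:v1" .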
I also got the same error when running
docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .
but changing it to (my PROJECT_ID is, say, deepworld123)
docker build -t gcr.io/deepworld123/hello-app:v1 .
fixed it for me, even though I did set PROJECT_ID=deepworld123.
Your tutorial link doesn't work (it's a link to a GCP dashboard, not a tutorial), but presumably there was a step where you were supposed to set the PROJECT_ID variable, which you skipped. The error message shows nothing between the two slashes where ${PROJECT_ID} appears in your command.
I had a very similar issue involving PROJECT_ID not being set correctly. The solution has to do with formatting, as the error message says.
My PROJECT_ID string has the following format: companyname.com:companyname-1. After I followed all the steps in the accepted answer, the error message was the same.
It turns out the : needs to be replaced by a /. The final gcr.io string looks like:
gcr.io/companyname.com/companyname-1/hello-app:v1
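If your project ID is domain-scoped like this, one way to build the tag without editing it by hand is to swap the ':' for a '/' in the shell (a sketch, not from the tutorial):
PROJECT_ID="$(gcloud config get-value project -q)"
IMAGE="gcr.io/$(echo "${PROJECT_ID}" | tr ':' '/')/hello-app:v1"   # companyname.com:companyname-1 becomes companyname.com/companyname-1
docker build -t "${IMAGE}" .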

Cannot chmod file on Openshift online v3 : Operation not permitted

I am migrating a Django application from Openshift v2 to v3 (In case you don't know, RedHat is shutting down v2 on September 30th, see: https://blog.openshift.com/migrate-to-v3-v2-eol/)
So, I am following this blog post to help me: https://blog.openshift.com/migrating-django-applications-openshift-3/ . I am new to all these Docker / Kubernetes concepts the new version is built upon.
I was able to make some progress: I managed to get a successful build of my app. Yet it crashes at deployment time:
---> Running application from script (app.sh) ...
/usr/libexec/s2i/run: line 42: /opt/app-root/src/app.sh: Permission denied
Indeed, app.sh has lost its x permission. I log into the failing container in debug mode and see it:
> oc debug dc/<my app>
> (app-root)sh-4.2$ ls -l /opt/app-root/src/app.sh
-rw-rw-r--. 1 default root 127 Sep 6 21:20 /opt/app-root/src/app.sh
The blog post states "Ensure that the app.sh file is executable by running chmod +x app.sh.", which I did on my local repo. Anyway, I want to do it again directly in the pod, but it doesn't work:
(app-root)sh-4.2$ chmod +x /opt/app-root/src/app.sh
chmod: changing permissions of ‘/opt/app-root/src/app.sh’: Operation not permitted
So, how can I set the x permission on app.sh? Thank you.
Without looking into more details: any S2I builder image will gladly use a custom-supplied run script to start the application in an alternative way.
Create .s2i/bin/ (mind the dot) in your source code directory, place the run script into it and rebuild the app in OpenShift - it will automatically use your custom run script upon deployment.
This is the preferred way of starting applications using custom commands in OpenShift.
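For example, a minimal custom run script might look like the sketch below; the gunicorn command and module name are placeholders standing in for whatever app.sh currently does, not details taken from the question:
#!/bin/bash
# .s2i/bin/run -- replaces the default S2I run behaviour
exec gunicorn wsgi --bind 0.0.0.0:8080 --workers 3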
Regarding your immediate problem, there is a very simple reason why you can not change the permissions of the script: you were trying to modify the permissions in the deployed pod, and not the builder pod. Deployed pods run using different UIDs, usually somewhere in the range of 100000000, and definitely do not match the file ownership as generated by the build. Hence permission denied.
The root cause of your problem (app.sh losing executable permissions) must be in the way the build process installs those files, and indeed looking at the /usr/libexec/s2i/assemble script in the base image does seem to reveal the culprit. The last two lines are:
# set permissions for any installed artifacts
fix-permissions /opt/app-root
If you wanted to change this part of the build instead of using a custom run script, I suggest you then create .s2i/bin/assemble in your project's source code and make it look sort of like this:
#!/bin/bash
echo "Running stock build:"
${STI_SCRIPTS_PATH}/assemble
echo "Fixing the mess:"
chmod 755 /opt/app-root/src/app.sh
This will fix whatever the stock build process does to file permissions, and will do it using the same UID as the rest of the build, so file ownership shouldn't be an issue.
As I stumbled upon this issue myself, I've found a way to resolve it.
You have to make your app.sh file executable and push it to your repo as such.
If git does not track this modification (as was the case for me), you have to use git update-index --chmod=+x app.sh for it to work.
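Putting it together, the sequence looks roughly like this (the commit message is just an example):
chmod +x app.sh
git update-index --chmod=+x app.sh
git commit -m "Make app.sh executable"
git push
# then trigger a new build in OpenShift so the image picks up the permission change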

Google cloud compute startup script ignored with no logging

I have a standard Debian 8.9 instance on google cloud compute (GCE) where my startup script is ignored.
In the custom metadata field, for startup-script, I am trying to run an Rscript (which is used for batch execution of R files), followed by a system shutdown, with the following:
#! /bin/bash
sudo /usr/bin/Rscript /home/myuser/launch_script.R
sudo shutdown -h now
Starting the instance is immediately followed by a shutdown, and the Rscript is ignored. Removing the last shutdown line causes the GCE instance to start, but the Rscript is still ignored. Running just "sudo /usr/bin/Rscript /home/myuser/launch_script.R" from the terminal results in the script being run. It has a chmod of 755, so I don't think this is a permissions issue.
In addition to this problem, I have read elsewhere that logging should happen in /var/log/, but there is nothing there. Instead, I have a bunch of log files (that only contain the start-up script and nothing else) in the root of my instance.
I got in touch with Google cloud support, who gave the following response:
The script definition is kept under /var/run/google.startup.script.
If the script does not run initially, you can force it manually with:
sudo google_metadata_script_runner --script-type startup   # for Debian
sudo /usr/share/google/run-startup-scripts                 # on Ubuntu and older images
I'm posting this information here, because it is not in their documentation (as of August 2017). I'm not sure how helpful it is, since the google.startup.script didn't exist in my case (using the latest Debian image on GCE), but I did run the other commands.
However, I think my main issues were:
I was using autossh to connect to a remote database, and the startup-script was running before autossh. Building a 40-second delay into the script and running it as a user (not sudo-type root) seems to have solved this problem for now (see the sketch after this list). Autossh was being run as the main user, which I think gets loaded before lower-privilege user-defined scripts get loaded.
I was using some gcloud commands from the user account which had its own authentication issues. Running gcloud auth login as the user and ensuring correct permissions on my private key solved this.
Always remember to check the messages and syslog files in /var/log for troubleshooting. This allowed me to see the order of things being loaded at system-boot.
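For reference, the adjusted startup script described above would look roughly like this; the 40-second delay, user name, and script path are the ones mentioned in the question, so adjust them to your setup:
#! /bin/bash
sleep 40                                                       # give autossh time to establish its connection
sudo -u myuser /usr/bin/Rscript /home/myuser/launch_script.R   # run as the regular user, not root
sudo shutdown -h now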