Permission denied: '/models/default' - google-cloud-ml

I followed this page:
https://cloud.google.com/vertex-ai/docs/export/export-model-tabular
I trained the model in the Google Cloud Platform console and then exported it per the instructions. However, when I run the docker run command, I get the following:
sudo docker run -v /home/grgcp8787/feature-model/model-2101347040586891264/tf-saved-model/emotion-feature:/models/default -p 8080:8080 -it europe-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server-v1:latest
Unable to find image 'europe-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server-v1:latest' locally
Trying to pull repository europe-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server-v1 ...
latest: Pulling from europe-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server-v1
Digest: sha256:6b2ac764102278efa467daccc80acbcdcf119e3d7be079225a6d69ba5be7e8c5
Status: Downloaded newer image for europe-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server-v1:latest
ERROR:root:Caught exception: [Errno 13] Permission denied: '/models/default'
Traceback (most recent call last):
File "/google3/third_party/py/cloud_ml_autoflow/prediction/launcher.py", line 311, in main
if is_ucaip_model():
File "/google3/third_party/py/cloud_ml_autoflow/prediction/launcher.py", line 279, in is_ucaip_model
contents = os.listdir(local_model_artifact_path())
PermissionError: [Errno 13] Permission denied: '/models/default'
Is my Google Cloud CLI not initialized?
I am not sure what I did wrong, or what I need to change to fix it.
Thank you in advance for your help.
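A hedged suggestion, not from the original post: the PermissionError comes from the container trying to list the mounted /models/default directory, so it is worth checking that the host directory is readable by the container user and, on an SELinux-enforcing host (the "Trying to pull repository ..." output looks like a RHEL/CentOS docker build, which is only a guess), relabeling the volume with the :z option:
ls -ld /home/grgcp8787/feature-model/model-2101347040586891264/tf-saved-model/emotion-feature   # check ownership and mode of the mounted directory
sudo chmod -R a+rX /home/grgcp8787/feature-model/model-2101347040586891264/tf-saved-model/emotion-feature   # make the model artifacts world-readable
sudo docker run -v /home/grgcp8787/feature-model/model-2101347040586891264/tf-saved-model/emotion-feature:/models/default:z -p 8080:8080 -it europe-docker.pkg.dev/vertex-ai/automl-tabular/prediction-server-v1:latest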

Related

Elastic Beanstalk server using 97% memory warning

I just discovered that my Elastic Beanstalk server's health status has a warning: "97% of memory is in use". Because of this, I cannot deploy updates or SSH in and run the Django shell. I just receive the following error:
MemoryError
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/var/app/venv/staging-LQM1lest/lib/python3.7/site-packages/sentry_sdk/worker.py", line 70, in start
self._thread.start()
File "/var/app/venv/staging-LQM1lest/lib/python3.7/site-packages/sentry_sdk/integrations/threading.py", line 54, in sentry_start
return old_start(self, *a, **kw) # type: ignore
File "/usr/lib64/python3.7/threading.py", line 852, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
I received more memory errors when trying AWS-documented troubleshooting suggestions, which required installing a package:
sudo amazon-linux-extras install epel
From here I don't know what else I can do to troubleshoot this issue or how to fix it.
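A hedged starting point, not from the original question: if an SSH session can still be opened at all (the question suggests even that may hit MemoryError, so this may only work after stopping the heaviest process), the usual first steps are to see what is consuming memory and, on a small instance, to add a temporary swap file so installs and deploys have headroom. These are standard Linux commands, nothing Elastic-Beanstalk-specific:
free -m                          # overall memory and swap usage
ps aux --sort=-%mem | head -15   # processes holding the most memory
sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048   # create a 2 GB swap file
sudo chmod 600 /var/swapfile
sudo mkswap /var/swapfile
sudo swapon /var/swapfile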

Error while installing the VC++ redistributable using .ebextensions

I have an ASP.NET Core web application which is published to AWS using Elastic Beanstalk. To configure the Windows environment I am using .ebextensions, which copies the VC++ redistributables from S3 and installs them while creating the environment.
When publishing, I get the error "Error occurred during build: Command 01_instlVCx64 failed". Below is the command in my .ebextensions:
files:
  "c:\\vcpp-redistributables\\vc_redist_x64.exe":
    source: https://<bucket_name>.s3.eu-west-2.amazonaws.com/vcpp-redistributables/vc_redist_x64.exe
    authentication: S3Access
commands:
  01_instlVCx64:
    command: c:\\vcpp-redistributables\\vc_redist_x64.exe /q /norestart
Below is the traceback from the logs:
2022-03-22 15:31:35,876 [ERROR] Error encountered during build of prebuild_0_GWebApp: Command 01_instlVCx64 failed
Traceback (most recent call last):
File "cfnbootstrap\construction.pyc", line 578, in run_config
File "cfnbootstrap\construction.pyc", line 146, in run_commands
File "cfnbootstrap\command_tool.pyc", line 127, in apply
cfnbootstrap.construction_errors.ToolError: Command 01_instlVCx64 failed
2022-03-22 15:31:35,876 [ERROR] -----------------------BUILD FAILED!------------------------
Could you please let me know what I am missing?
Thanks in advance.
I found the issue a couple of days ago, so I thought I would answer my own question so that it may be useful for others.
The issue is that the Elastic Beanstalk instance (Windows Server 2019) already has the VC++ redistributables installed, at a later version than the one I was trying to install as part of .ebextensions. So when I tried to install it, the command failed.
I figured it out by enabling an RDP connection to the EC2 instance created as part of Elastic Beanstalk and running the scripts manually, which gave a detailed error message.
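If the install command has to stay in .ebextensions (for example, to support older platform images that do not ship the redistributable), one hedged mitigation, sketched here as an assumption rather than the fix actually applied, is to let environment creation continue when the installer exits non-zero because a newer version is already present; ignoreErrors is a standard key for .ebextensions commands:
commands:
  01_instlVCx64:
    command: c:\\vcpp-redistributables\\vc_redist_x64.exe /q /norestart
    ignoreErrors: true  # sketch: keep the install step but do not fail the build if it errors out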
Hope it helps someone in the future.

Submit a pyspark job with a config file on Dataproc

I'm a newbie on GCP and I'm struggling with submitting a pyspark job in Dataproc.
I have a python script that depends on a config.yaml file. And I noticed that when I submit the job, everything is executed under /tmp/.
How can I make that config file available in the /tmp/ folder?
At the moment, I get this error:
12/22/2020 10:12:27 AM root INFO Read config file.
Traceback (most recent call last):
File "/tmp/job-test4/train.py", line 252, in <module>
run_training(args)
File "/tmp/job-test4/train.py", line 205, in run_training
with open(args.configfile, "r") as cf:
FileNotFoundError: [Errno 2] No such file or directory: 'gs://network-spark-migrate/model/demo-config.yml'
Thanks in advance
The snippet below worked for me. The --files flag stages demo-config.yml into the job's working directory on the cluster, so the script can open it with the relative path ./demo-config.yml instead of the gs:// URI:
gcloud dataproc jobs submit pyspark gs://network-spark-migrate/model/train.py \
    --cluster train-spark-demo \
    --region europe-west6 \
    --files=gs://network-spark-migrate/model/demo-config.yml \
    -- --configfile ./demo-config.yml

googlecloudsdk.core.exceptions.RequiresAdminRightsError: You cannot perform this action because you do not have permission

I am trying to install google-cloud-sdk on Ubuntu 18.04. I am following the official docs given here. When I run ./google-cloud-sdk/install.sh I get the following error:
Welcome to the Google Cloud SDK!
To help improve the quality of this product, we collect anonymized usage data
and anonymized stacktraces when crashes are encountered; additional information
is available at <https://cloud.google.com/sdk/usage-statistics>. This data is
handled in accordance with our privacy policy
<https://policies.google.com/privacy>. You may choose to opt in this
collection now (by choosing 'Y' at the below prompt), or at any time in the
future by running the following command:
gcloud config set disable_usage_reporting false
Do you want to help improve the Google Cloud SDK (y/N)? N
Traceback (most recent call last):
File "/home/vineet/./google-cloud-sdk/bin/bootstrapping/install.py", line 225, in <module>
main()
File "/home/vineet/./google-cloud-sdk/bin/bootstrapping/install.py", line 200, in main
Prompts(pargs.usage_reporting)
File "/home/vineet/./google-cloud-sdk/bin/bootstrapping/install.py", line 123, in Prompts
scope=properties.Scope.INSTALLATION)
File "/home/vineet/google-cloud-sdk/lib/googlecloudsdk/core/properties.py", line 2406, in PersistProperty
config.EnsureSDKWriteAccess()
File "/home/vineet/google-cloud-sdk/lib/googlecloudsdk/core/config.py", line 198, in EnsureSDKWriteAccess
raise exceptions.RequiresAdminRightsError(sdk_root)
googlecloudsdk.core.exceptions.RequiresAdminRightsError: You cannot perform this action because you do not have permission to modify the Google Cloud SDK installation directory [/home/vineet/google-cloud-sdk].
Re-run the command with sudo: sudo /home/vineet/google-cloud-sdk/bin/gcloud ...
I tried searching on Stack Overflow and GitHub issues, but in vain.
I would appreciate any hint to solve it.
As stated in the error message:
Re-run the command with sudo: sudo /home/vineet/google-cloud-sdk/bin/gcloud ...
The install.sh script should be run using sudo.
There are also other alternatives for installing the Google Cloud SDK on Ubuntu 18.04, such as installing the package with apt-get as explained in the documentation.
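For reference, a sketch of that apt-based route on Ubuntu 18.04, roughly following the Cloud SDK install documentation (package names, keys and repository details change over time, so treat this as a sketch and check the current docs):
sudo apt-get install apt-transport-https ca-certificates gnupg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
sudo apt-get update && sudo apt-get install google-cloud-sdk
Installing it this way also sidesteps the home-directory permission issue, since apt manages the SDK files system-wide and you never run install.sh yourself.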

gcloud 403 permission errors with wrong project

I used to work at a company and had set up my gcloud previously with gcloud init or gcloud auth login (I don't recall which one). We were using Google Container Engine (GKE).
I've since left the company and been removed from the permissions on that project.
Now today, I wanted to setup a brand new app engine for myself unrelated to the previous company.
Why is it that I can't run any commands without getting the error below? gcloud init, gcloud auth login, or even gcloud --help or gcloud config list all display errors. It seems like it's trying to log in to my previous company's project with gcloud container clusters, but I'm not typing that command at all, and I am in a different zone and interested in a different project. Where is my gcloud config getting these defaults?
Is this a case where I need to delete my .config/gcloud folder? That seems like a rather extreme solution just to log in to a different project.
Traceback (most recent call last):
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/gcloud.py", line 65, in <module>
main()
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/gcloud.py", line 61, in main
sys.exit(googlecloudsdk.gcloud_main.main())
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/gcloud_main.py", line 130, in main
gcloud_cli = CreateCLI([])
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/gcloud_main.py", line 119, in CreateCLI
generated_cli = loader.Generate()
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 329, in Generate
cli = self.__MakeCLI(top_group)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 517, in __MakeCLI
log.AddFileLogging(self.__logs_dir)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 676, in AddFileLogging
_log_manager.AddLogsDir(logs_dir=logs_dir)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 365, in AddLogsDir
self._CleanUpLogs(logs_dir)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 386, in _CleanUpLogs
self._CleanLogsDir(logs_dir)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 412, in _CleanLogsDir
os.remove(log_file_path)
OSError: [Errno 13] Permission denied: '/Users/terence/.config/gcloud/logs/2017.07.27/19.07.37.248117.log'
And the log file:
/Users/terence/.config/gcloud/logs/2017.07.27/19.07.37.248117.log
2017-07-27 19:07:37,252 DEBUG root Loaded Command Group: ['gcloud', 'container']
2017-07-27 19:07:37,253 DEBUG root Loaded Command Group: ['gcloud', 'container', 'clusters']
2017-07-27 19:07:37,254 DEBUG root Loaded Command Group: ['gcloud', 'container', 'clusters', 'get_credentials']
2017-07-27 19:07:37,330 DEBUG root Running [gcloud.container.clusters.get-credentials] with arguments: [--project: "REMOVED_PROJECT", --zone: "DIFFERENT_ZONE", NAME: "REMOVED_CLUSTER_NAME"]
2017-07-27 19:07:37,331 INFO ___FILE_ONLY___ Fetching cluster endpoint and auth data.
2017-07-27 19:07:37,591 DEBUG root (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Required "container.clusters.get" permission for "projects/REMOVED_PROJECT/zones/DIFFERENT_ZONE/clusters/REMOVED_CLUSTER_NAME".
Traceback (most recent call last):
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 712, in Execute
resources = args.calliope_command.Run(cli=self, args=args)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 871, in Run
resources = command_instance.Run(args)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/surface/container/clusters/get_credentials.py", line 69, in Run
cluster = adapter.GetCluster(cluster_ref)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/api_lib/container/api_adapter.py", line 213, in GetCluster
raise api_error
HttpException: ResponseError: code=403, message=Required "container.clusters.get" permission for "projects/REMOVED_PROJECT/zones/DIFFERENT_ZONE/clusters/REMOVED_CLUSTER_NAME".
2017-07-27 19:07:37,596 ERROR root (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Required "container.clusters.get" permission for "projects/REMOVED_PROJECT/zones/DIFFERENT_ZONE/clusters/REMOVED_CLUSTER_NAME".
I had to delete my .config/gcloud folder to make this work, although I don't believe that is a good "solution".
Okay, so not sure if things have changed, but I ran into a similar issue. Please try this before nuking your configuration.
gcloud supports multiple accounts and you can see what account is active by running gcloud auth list.
  ACTIVE  ACCOUNT
  *       Work-Email@company.com
          Personal-Email@gmail.com
If you are not on the correct one, you can run:
$ gcloud config set account Personal-Email@gmail.com
That will set the correct account. Running gcloud auth list again should show ACTIVE next to your personal account.
If you haven't authenticated with your personal account, you'll need to log in. You can run gcloud auth login Personal-Email@gmail.com, follow the flow from there, and then return to the above.
Make sure to set PROJECT_ID or whatever things you may need when switching.
Now, from there I found it's STILL possible that you might not be authenticated correctly. I think you may need to restart your terminal session, or simply running source ~/.bash_profile may be sufficient. (Perhaps I needed to do this to refresh the GOOGLE_APPLICATION_CREDENTIALS environment variable, but I'm not sure.)
Hope this helps. Try this before nuking your configuration.
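To put the above into a single hedged sequence (the account and project ID below are placeholders, substitute your own):
gcloud auth list                                   # see which accounts gcloud knows about and which is active
gcloud auth login Personal-Email@gmail.com         # only needed if the personal account is not listed yet
gcloud config set account Personal-Email@gmail.com
gcloud config set project my-personal-project      # placeholder project ID
gcloud config list                                 # verify the active account, project and compute defaults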
Rename or delete the .config/gcloud/logs folder and try again, instead of deleting the whole .config/gcloud folder.
This solution worked for me :)