I'm following this tutorial to run SageMaker notebooks in the PyCharm IDE (I have PyCharm Pro, if that matters). The tutorial mentions installing Docker and the AWS CLI, but not what to do with them; the instructions just say to install Docker, configure the AWS CLI, and run the code. I have Docker and the AWS CLI installed on my laptop, and I downloaded the linked GitHub repo example, but I'm unable to run it in PyCharm. The line os.listdir(path=args.train) throws the error:
FileNotFoundError: [Errno 2] No such file or directory: '/path/to/project/tensorflow-sagemaker-on-pycharm-main/tf_code/data'
And that's probably because I'm not running the Docker container? I've never used Docker before, so I'm not sure how to continue from here.
You're trying to run a training job on your machine instead of in the cloud (a.k.a. SageMaker Local Mode). To do that, you need to have Docker running.
I suggest following this blog post, which is more focused on the task you're trying to achieve. It also links to a GitHub repo with many examples for local mode.
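For reference, here's a minimal sketch of what a local-mode training call looks like with the SageMaker Python SDK, assuming the repo's tf_code folder holds the training script; the entry-point name, role ARN and framework versions below are placeholders rather than values from the tutorial:

from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point='train.py',      # placeholder: the repo's training script
    source_dir='tf_code',
    role='arn:aws:iam::123456789012:role/MySageMakerRole',  # placeholder role ARN
    instance_count=1,
    instance_type='local',       # 'local' runs the job in a Docker container on your machine
    framework_version='2.3',
    py_version='py37',
)

# A file:// channel is mounted into the container and exposed to the script
# as SM_CHANNEL_TRAIN, which is what args.train resolves to.
estimator.fit({'train': 'file://./tf_code/data'})

With instance_type='local', the SDK builds and runs the training container through Docker on your machine, which is why the Docker daemon has to be up before estimator.fit() is called.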
Related
This last week I have been trying to deploy a Flask app using AWS Elastic Beanstalk.
The main problem for me was including a very heavy library as part of the bundle (there is a 500 MB limit on the uploaded bundle code).
Instead, I tried to use a requirements.txt file so the library would be downloaded directly on the server.
Unfortunately, every time I included the library's name (torch) in the requirements file, the deployment failed to install it.
On the PythonAnywhere server there is a console which lets you access the virtual environment and simply type
pip install torch
which was very useful and convenient.
I am looking for something similar in AWS Elastic Beanstalk, so that I can install the library directly instead of relying on the requirements.txt file.
I have been at it for a few days now and can't make any progress.
Your help would be much appreciated.
Another question:
Is it possible to upload the venv to Amazon S3 and then access the folder from the Beanstalk environment?
It's not good practice to "manually" install your dependencies or configure your EB environment from the inside; this is only useful for testing and debugging purposes, so keep that in mind.
To get to your venv, you have to SSH into your EB instance, either with regular ssh or with one of the web-based clients available in the AWS EC2 console once you locate your EB EC2 instance. Session Manager should work out of the box to let you log in to the instance.
Once you're logged in to the instance, activate your venv with:
# start bash
bash
# source venv
source /var/app/venv/staging-*/bin/activate
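With the venv active you can then install packages into it by hand, e.g. the torch install you were after (again, for debugging only, and assuming the instance has enough disk space for the wheel):

# installs directly into the app's venv; lost whenever the instance is replaced
pip install torch

Keep in mind that anything installed this way disappears when Beanstalk replaces or scales the instance, so requirements.txt is still the way to go for real deployments.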
All of a sudden I cannot get GCP local cloud builds running.
I've tried updating to the latest versions of the various pieces:
Docker Desktop 2.5.0.1 Engine 19.03.13
Google Cloud SDK 317.0.0
cloud-build-local 0.5.2
And I have applied all available Windows updates for my current build, 2004 (19041.572).
If I do a dry run, all is successful and there are no issues.
When I do a full run, a busybox container fires up, its status changes to EXITED (0), and that's where it all just stops.
Terminal output below:
cloud-build-local --config=cloudbuild-hosting-prod.yaml --dryrun=false .
2020/11/10 15:12:44 Warning: The server docker version installed (19.03.13) is different from the one used in GCB (19.03.8)
2020/11/10 15:12:44 Warning: The client docker version installed (19.03.13) is different from the one used in GCB (19.03.8)
A colleague of mine is having the same issue, yet if I run the same build on my laptop, all works fine, so it's not the YAML file (which hasn't changed anyway) or a code issue. The laptop that works has:
Docker Desktop 2.5.0.0 Engine 19.03.13
Google Cloud SDK 301.0.0
cloud-build-local
Any advice on how to troubleshoot what the issue could be?
I set up my Selenium project (Maven, Java, TestNG) in a GitHub repo and connected it to Jenkins. I am able to execute the Maven project via Jenkins and do the testing. This requires all the dependent tools (Maven, Java, Jenkins) to be set up on my local machine.
But we have a requirement to do this in the cloud. I know we can use Selenium Grid with Docker, BrowserStack, or GCP to execute the tests in the cloud, but what we need is to have everything installed in the cloud so that any external user with access can execute any test via a UI or an executable file, without installing anything on their local machine.
Is this possible at all? If yes, how?
I searched a lot and couldn't find anything. One of my friends said it can be done using AWS but doesn't know how. I just need guidance on the path to take here and I'm willing to learn and implement it myself.
Solved this by deploying the code to AWS EC2.
Here's what I did.
I created a TestNG-Maven project and uploaded it to GitHub. Then I created an AWS EC2 t2.micro Linux instance and installed Chrome and Jenkins on it. I accessed Jenkins from my local machine and connected it to the GitHub repo. When I build the project from Jenkins, everything gets downloaded onto the EC2 instance and the execution happens there, as headless Chrome.
I'm having the same problem as Vaclav. I've followed the GCR quick start to the letter, which entailed creating a new project (called gcr-project) and copying the code for a Flask (Python) app.
After building the Docker image, I entered the commands:
gcloud auth configure-docker
docker tag quickstart-image gcr.io/gcr-project/quickstart-image:tag1
docker push gcr.io/gcr-project/quickstart-image:tag1
The response was:
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
So it would be nice to know whether the issue is with the credentials (I'm using the Cloud SDK fine for other projects) or with permissions. The documentation here suggests you need storage-admin rights, but the project already has them; see the screen cap here.
I would appreciate any tips for troubleshooting this, as I was looking forward to using GCR, but this problem is a hard stop for me.
UPDATE:
I tried the same process from Cloud Shell:
me@cloudshell:~ (gcr-project-XXXXXX)$ docker push gcr.io/gcr-project/quickstart-image:tag1
The push refers to repository [gcr.io/gcr-project/quickstart-image]
4399528b7213: Preparing
1d10b1eeca74: Preparing
75156020d862: Preparing
c5697656a146: Preparing
2a435270de82: Preparing
c35f70b5c25a: Waiting
28e260baaf1b: Waiting
556c5fb0d91b: Waiting
denied: Token exchange failed for project 'gcr-project'. Please enable Google Container Registry API in Cloud Console at https://console.cloud.google.com/apis/api/containerregistry.googleapis.com/overview?project=gcr-project before performing this operation.
me@cloudshell:~ (gcr-project-XXXXXX)$
This prompted me to check the APIs & Services dashboard to confirm that the Container Registry API was enabled. It is.
UPDATE 2:
I'm having these problems on a machine running Ubuntu 19.04. Per the comments below, I was able to do a push via Cloud Shell. So I then went through the same exercise on a MacBook Pro, and it worked with no problems.
So I then uninstalled the Cloud SDK per the docs (having used the standard Linux install instructions previously) and re-installed it using the Debian/Ubuntu install instructions (version 274.0.1-0)... still no go.
When I do a docker pull on the image (because push worked on MBP) I get this error: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
And when I do a push I get this error: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
So at this stage, given the success on the MBP and the lack thereof on the Linux/Ubuntu machine, the problem is constrained to Linux/Ubuntu installs.
UPDATE 3:
I got onto a separate Ubuntu server, did a clean install with sudo snap install google-cloud-sdk --classic, did everything else per the docs, and still had the exact same problem. So I reckon this is a Linux-specific Google Cloud SDK problem.
Is there anyone out there in Ubuntu land who has been able to install and use the Cloud SDK with GCR recently?
I was able to replicate this issue on multiple Ubuntu machines. I tried again after the most recent Cloud SDK update (276.0.0) but had no luck.
In the end, as a workaround, I went with the JSON key file authentication described in the docs here, which worked fine.
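For anyone else hitting this, the key-file workaround boils down to authenticating Docker with the downloaded service-account key instead of the gcloud credential helper; a sketch, assuming the key was saved as keyfile.json and the service account has the required Storage permissions:

cat keyfile.json | docker login -u _json_key --password-stdin https://gcr.io

After that, the docker push to gcr.io goes through with the service account's credentials.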
I want to run my .bat file on Windows with the help of GCP Composer, but I am not sure whether we can communicate with a Windows machine, as Composer is fully based on a Linux environment. Please help me if you have any solution.
There are a couple of possible solutions described in this thread, basically:
Install SSH on your Windows machine and then connect to it to run commands remotely using Airflow's SSHOperator.
Install a package like pywinrm, which allows you to run Windows commands on a target machine from Python code, and then use the PythonOperator within your DAG to make the call (see the sketch below). You may refer to the GCP documentation for steps on installing additional Python packages in Composer.
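Here is a minimal sketch of the second option, assuming pywinrm has been added to the Composer environment's PyPI packages and that the Windows machine is reachable from Composer's network; the host, credentials and .bat path are placeholders:

from datetime import datetime

import winrm  # pywinrm, installed as an extra PyPI package in Composer
from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def run_bat_on_windows():
    # Open a WinRM session to the Windows machine and run the .bat file.
    session = winrm.Session('10.0.0.5', auth=('my_user', 'my_password'))
    result = session.run_cmd('C:\\scripts\\my_job.bat')
    if result.status_code != 0:
        raise RuntimeError(result.std_err.decode())
    print(result.std_out.decode())


with DAG('run_windows_bat',
         start_date=datetime(2021, 1, 1),
         schedule_interval=None,
         catchup=False) as dag:
    PythonOperator(task_id='run_bat', python_callable=run_bat_on_windows)

The Windows side needs WinRM enabled and reachable from the Composer environment (e.g. over VPC peering or VPN), and in practice the credentials belong in an Airflow connection or variable rather than hard-coded as they are here for brevity.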