How to download jar from artifact registry (GCP)? - google-cloud-platform

I have a Maven Artifact Registry and am able to add a dependency in pom.xml and get the jar.
I have another use case where I would like to only download the jar using the CLI, something you can easily do with other external Maven repos, e.g. curl https://repo1.maven.org/maven2/org/apache/iceberg/iceberg-spark-runtime/0.7.0-incubating/iceberg-spark-runtime-0.7.0-incubating.jar --output temp.jar
I don't see any instructions about how to do this.

I needed this too.
I have configured a service account following the GCP guide.
Then, I executed the following command to get basic auth credentials:
gcloud artifacts print-settings gradle \
[--project=PROJECT] \
[--repository=REPOSITORY] \
[--location=LOCATION] \
--json-key=KEY-FILE \
[--version-policy=VERSION-POLICY] \
[--allow-snapshot-overwrites]
In the output you have the artifactRegistryMavenSecret.
Finally you get your artifact with :
curl -L -u _json_key_base64:{{ artifactRegistryMavenSecret }} https://{{ region }}-maven.pkg.dev/{{ projectId }}/{{ repository }}/path/of/artifact/module/{{ version }}/app-{{ version }}.jar -o file.jar
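For reference, a minimal sketch of the same download for a hypothetical artifact com.example:app:1.2.3 in a repository called my-repo (region, project and coordinates below are placeholders); the group ID dots become path segments, and the secret is just the base64-encoded key file:
# All values below are placeholders
SECRET="$(base64 -w0 key.json)"   # on macOS: base64 < key.json | tr -d '\n'
curl -L -u "_json_key_base64:${SECRET}" \
  -o app-1.2.3.jar \
  https://us-central1-maven.pkg.dev/my-project/my-repo/com/example/app/1.2.3/app-1.2.3.jar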

Based on this open feature request (which currently has no ETA), this feature does not yet exist for Artifact Registry. However, you can set up a Cloud Build automation that not only saves your built artifacts in Artifact Registry, but also stores them in Google Cloud Storage or another storage repository, so you can easily access the JARs (Cloud Storage supports direct downloading).
In order to do this, you would need to integrate Cloud Build with Artifact Registry. The documentation page has instructions to use Maven projects with Cloud Build and Artifact Registry. In addition, you can configure Cloud Build to store built artifacts in Cloud Storage.
Both of these integrations are configured through a Cloud Build configuration file. In this file, the steps for building a project are defined, including integrations to other serverless services. This integration would involve defining a target Maven repository:
steps:
- name: gcr.io/cloud-builders/mvn
  args: ['deploy']
And a location to deploy the artifacts into Cloud Storage:
artifacts:
  objects:
    location: [STORAGE_LOCATION]
    paths: [[ARTIFACT_PATH], [ARTIFACT_PATH], ...]
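Once Cloud Build has copied the JAR into a bucket you own, downloading it from the CLI is straightforward; a sketch (bucket and object paths are placeholders):
# Placeholders throughout; requires read access to the bucket
gsutil cp gs://my-artifacts-bucket/builds/app-1.0.0.jar temp.jar
# or over plain HTTPS with an OAuth token:
curl --oauth2-bearer "$(gcloud auth print-access-token)" \
  -o temp.jar \
  https://storage.googleapis.com/my-artifacts-bucket/builds/app-1.0.0.jar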

In addition to @Nicolas Roux's answer:
artifactRegistryMavenSecret is basically the base64 encoding of the service account JSON key.
So instead of running gcloud artifacts print-settings gradle and curl -u _json_key_base64:{{ artifactRegistryMavenSecret }}, another way is to directly use the token from gcloud auth print-access-token and pass it to cURL.
For example:
1. gcloud auth activate-service-account SERVICE_ACCOUNT@DOMAIN.COM \
--key-file=/path/key.json --project=PROJECT_ID
2. curl --oauth2-bearer "$(gcloud auth print-access-token)" \
-o app-{{ version }}.jar \
-L https://{{ region }}-maven.pkg.dev/{{ projectId }}/{{ repository }}/path/of/artifact/module/{{ version }}/app-{{ version }}.jar
With that, if you're working with the Google Auth Action (google-github-actions/auth@v0) in a GitHub Actions workflow, you can easily run the curl command without needing to extract artifactRegistryMavenSecret.
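For example, a minimal sketch of such workflow steps (the secret name, registry path and action inputs below are assumptions; token_format: access_token makes the auth action expose a ready-to-use token):
- id: auth
  uses: google-github-actions/auth@v0
  with:
    credentials_json: ${{ secrets.GCP_SA_KEY }}
    token_format: access_token
- name: Download jar
  run: |
    curl --oauth2-bearer "${{ steps.auth.outputs.access_token }}" \
      -o app.jar \
      -L https://us-central1-maven.pkg.dev/my-project/my-repo/com/example/app/1.0.0/app-1.0.0.jar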

Related

How to copy artifacts between projects in Google Artifact Registry

Previously, using Container Registry, one could copy a container between projects using this method.
However, I am unable to get this working with Artifact Registry. If I try
gcloud artifacts docker tags add \
us-east4-docker.pkg.dev/source-proj/my-repo/my-image:latest \
us-east4-docker.pkg.dev/dest-proj/my-repo/my-image:latest
It gives the error
ERROR: (gcloud.artifacts.docker.tags.add) Image us-east4-docker.pkg.dev/source-proj/my-repo/my-image
does not match image us-east4-docker.pkg.dev/dest-proj/my-repo/my-image
I have searched and cannot find any examples or documentation on how to do this.
You can use the gcrane tool to copy images in Artifact Registry.
For example, the following command copies image my-image:latest from the repository my-repo in the project source-proj to the repository my-repo in another project called dest-proj.
gcrane cp \
us-east4-docker.pkg.dev/source-proj/my-repo/my-image:latest \
us-east4-docker.pkg.dev/dest-proj/my-repo/my-image:latest
Here is the link to the Google Cloud official documentation.
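Before running it, you may need to install gcrane and make Artifact Registry credentials available to it; a rough sketch (the install method is an assumption, and gcrane reads credentials from your Docker config):
# One way to install gcrane (requires Go)
go install github.com/google/go-containerregistry/cmd/gcrane@latest
# Let the Docker credential helper handle Artifact Registry auth
gcloud auth configure-docker us-east4-docker.pkg.dev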
This can be done using
gcrane cp us-east4-docker.pkg.dev/source-proj/my-repo/my-image:latest us-east4-docker.pkg.dev/dest-proj/my-repo/my-image:latest

How do i continue working with Amplify on a new machine?

I'm using React Native for my project. On my old machine, when I ran amplify status, I had Auth, Api and Storage services listed.
I moved to my new machine, installed Node, Watchman, Homebrew etc., then navigated to my React Native project and ran react-native run-ios, and voila, my app is running. All the calls to my AWS Api, Auth and Storage are working perfectly.
Now I can run some Amplify commands, such as amplify status. I tried amplify env add; here's what I got:
Users-MBP-2:projectname username$ amplify env add
Note: It is recommended to run this command from the root of your app directory
? Do you want to use an existing environment? Yes
? Choose the environment you would like to use: dev
Using default provider awscloudformation
✖ There was an error initializing your environment.
init failed
Error: ENOENT: no such file or directory, open '/Users/username/.aws/credentials'
at Object.openSync (fs.js:462:3)
at Proxy.readFileSync (fs.js:364:35)
at Object.readFileSync (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/util.js:95:26)
at IniLoader.parseFile (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/shared-ini/ini-loader.js:6:47)
at IniLoader.loadFrom (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/shared-ini/ini-loader.js:56:30)
at Config.region (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/node_loader.js:100:36)
at Config.set (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/config.js:507:39)
at Config.<anonymous> (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/config.js:342:12)
at Config.each (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/util.js:507:32)
at new Config (/usr/local/lib/node_modules/@aws-amplify/cli/node_modules/aws-sdk/lib/config.js:341:19) {
errno: -2,
syscall: 'open',
code: 'ENOENT',
path: '/Users/username/.aws/credentials'
}
Do you think credentials info needs to be brought/configured to my new machine?
When I run amplify configure project, it's like doing an amplify init and building a project from scratch. I'm being asked:
? Enter a name for the project: ProjectName
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using (Use arrow keys)
angular
ember
ionic
react
❯ react-native
vue
none
etc....
I also already have a region, username and accessKey, secretAccess key etc..
I do not want to replace or ruin anything in my current backend or current project! What's going on?
Ensure the Amplify CLI is installed and you're logged in with your AWS details.
npm install -g @aws-amplify/cli
amplify configure
Running amplify configure is mainly to give the cli knowledge of your AWS account so subsequent commands can have access to things.
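Concretely, amplify configure ends up writing a profile to ~/.aws/credentials, which is exactly the file the error above says is missing (you could also copy that file over from your old machine). It looks roughly like this (keys are placeholders):
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx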
If you get amplify: command not found errors, try restarting your terminal. If there's still no luck, you will need to check that amplify has been added to your PATH variable.
Run amplify env add, but choose an existing environment. This will let you choose the environment you created on your other machine so you can pull those settings down to your new machine.
amplify env add
? Do you want to use an existing environment? Yes
Production
Follow up with:
amplify pull
You don't need to run amplify add auth again or anything. All of that will pull down automatically after you've done the above.
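Put together, the sequence on the new machine looks roughly like this (a sketch; the environment you pick is whatever you created on the old machine):
npm install -g @aws-amplify/cli   # install the CLI
amplify configure                 # register your AWS credentials with the CLI
amplify env add                   # answer Yes to using an existing environment
amplify pull                      # sync the backend definition to this machine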
You DO NOT need to redo all of the configuration, but some of it for sure.
You have to install the Amplify CLI: npm install -g @aws-amplify/cli
use amplify pull
https://docs.amplify.aws/cli/start#amplify-pull
Follow the rest of the steps:
-- provide the accessKeyId, secretAccessKey
-- region
-- select the Amplify project
and then the rest of the app-related things like IDE, directory, ...
I tried every solution, then I found this (on a MacBook):
% sudo -i
Password:
~ root# npm install -g @aws-amplify/cli
-- Ctrl+D to exit from the root user
% amplify pull --appId xxxx --envName yyyy
Note: To get --appId xxxx --envName yyyy, log in to the AWS console, choose AWS Amplify and click your app. Go to Backend environments, find the backend environment you wish to pull, and click Edit backend. At the top right, click 'Local setup instructions' (amplify pull --appId YOUR_APP_ID --envName YOUR_ENV_NAME).
Wait until it asks you to verify your Amplify login.
✔ Successfully received Amplify Studio tokens.
? Choose your default editor: Visual Studio Code
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using react
? Source Directory Path: src
? Distribution Directory Path: build
? Build Command: npm run-script build
? Start Command: npm run-script start
✔ Synced UI components.
? Do you plan on modifying this backend? Yes
⠴ Building resource api/xxxx
✅ GraphQL schema compiled successfully.
Edit your schema at ....
✔ Successfully pulled backend environment yyyy from the cloud.
✅ Successfully pulled backend environment staging from the cloud.
Run 'amplify pull' to sync future upstream changes.
% amplify pull
% npm install
% npm start
Hope this helps everyone!
Happy coding :)

Google Cloud Function Environment Variables Directory

I've got a Cloud Function working properly, but now I'd like to obfuscate some credentials with environment variables. When I try running this command:
gcloud beta functions deploy my-function --trigger-http --set-env-vars user=username,pass=password --runtime nodejs6 --project my-project
I get this error:
ERROR: (gcloud.beta.functions.deploy) OperationError: code=3, message=Function load error: File index.js or function.js that is expected to define function doesn't exist in the root directory.
I created the function using the GCP web UI, and I can't find the directory where the function lives to cd into. Presumably running the same command from the directory that the function lives in would work.
Where do cloud functions live in my project?
If you check the details of the function deployment in the logs, you'll notice the field protoPayload.request.function.sourceUploadUrl contains a URL where your source code is uploaded during the deployment.
This URL is in the form https://storage.googleapis.com/gcf-upload-<region>-<random>/<random>.zip. This means that the function is uploaded to that GCS bucket. That GCS bucket is not in your project (it belongs to Google), so you won't have direct access to the files. You can download the files stored in that bucket through the console (the "Download zip" button on the source page).
The upload bucket can also be found through
gcloud functions list --format='table[](name,sourceUploadUrl)'
Knowing this, you have 2 paths:
Use another way of deploying the function (e.g. from a source repo)
Use the API to patch the function
I'm partial to the second option, since it's really easy to execute:
curl -XPATCH -H"Authorization: Bearer $(gcloud auth print-access-token)" -H'content-type:application/json' 'https://cloudfunctions.googleapis.com/v1/projects/<PROJECT_ID>/locations/<REGION>/functions/<FUNCTION_ID>?updateMask=environmentVariables' -d'{"environmentVariables":{"user":"<USERNAME>", "password":"<PASSWORD>"}}'
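To confirm the variables actually landed on the function, something like the following should work (the function name is a placeholder):
gcloud functions describe my-function --format='yaml(environmentVariables)'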
However, if you need to access anything Google-related, rather than passing user/password it's best to use the application default credentials and grant the service account access on the resource. To find out the service account used by your function, you can run:
gcloud functions list --format='table[](name,serviceAccountEmail)'
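For example, a sketch of granting that service account read access to a Cloud Storage bucket instead of baking credentials into the function (the account and bucket names are placeholders):
# Grant the function's service account read access to a bucket
gsutil iam ch serviceAccount:my-project@appspot.gserviceaccount.com:objectViewer gs://my-bucket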

Fetching Tags in Google Cloud Builder

In the newly created Google Container Builder, I am unable to fetch git tags during a build. During the build process, the default cloning does not seem to fetch git tags. I added a custom build step which calls git fetch --tags, but this results in the error:
Fetching origin
git: 'credential-gcloud.sh' is not a git command. See 'git --help'.
fatal: could not read Username for 'https://source.developers.google.com': No such device or address
# cloudbuild.yaml
#!/bin/bash
openssl aes-256-cbc -k "$ENC_TOKEN" -in gcr_env_vars.sh.enc -out gcr_env_vars.sh -
source gcr_env_vars.sh
env
git config --global url.https://${CI_USER_TOKEN}@github.com/.insteadOf git@github.com:
pushd vendor
git submodule update --init --recursive
popd
docker build -t gcr.io/project-compute/continuous-deploy/project-ui:$COMMIT_SHA -f /workspace/installer/docker/ui/Dockerfile .
docker build -t gcr.io/project-compute/continuous-deploy/project-auth:$COMMIT_SHA -f /workspace/installer/docker/auth/Dockerfile .
This worked for me, as the first build step:
- name: gcr.io/cloud-builders/git
  args: [fetch, --depth=100]
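If the goal is specifically tags, a variant of that step that fetches them explicitly might look like this (a sketch using standard git flags, not taken from the original answer):
- name: gcr.io/cloud-builders/git
  args: [fetch, --tags, --depth=100]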
To be clear, you want all tags to be available in the Git repo, not just to trigger on tag changes? In the latter case, the triggering tag should be available, IIUC.
I'll defer to someone on the Container Builder team for a more detailed explanation, but that error tells me that they used gcloud to clone the Google Cloud Source Repository (GCSR), which configures a Git credential helper named as such. They likely did this in another container before invoking yours, or on the host. Since gcloud and/or the gcloud credential helper aren't available in your container, you can't authenticate properly with GCSR.
You can learn a bit more about the credential helper here.

How to uninstall Cloud SDK?

First I installed stand-alone gsutil on Fedora 25; it ran nicely for months.
Then I installed Cloud SDK, and my Google Cloud credentials have been broken ever since.
I don't need Cloud SDK after all. I just want to use gsutil again.
Is there a way to uninstall Cloud SDK and credentials from Linux?
Or maybe uninstall all Google Cloud products and reinstall the stand-alone gsutil?
To explain the likely reason this is happening:
When you install the Cloud SDK, it takes some steps to make sure that when you type gsutil from the shell, it resolves to the Cloud SDK version (depending on the installation method, it might place some executable scripts in /usr/local/bin/, or put /path/to/cloud/sdk/bin at the front of your PATH environment variable). This Cloud SDK wrapper script for gsutil does some extra auth logic, loading an extra .boto file which contains credentials produced from running gcloud auth login. You can see this extra .boto file when running gsutil version -l:
$ gsutil version -l
[...]
using cloud sdk: True
config path(s): /home/USER/.boto, /home/USER/.config/gcloud/legacy_credentials/USER@gmail.com/.boto
[...]
It's likely that the auth credentials in that extra .boto file are overriding the credentials in your $HOME/.boto file.
How to use standalone gsutil again:
You'll need to ensure that the first gsutil your shell finds is the standalone version. This essentially means that the directory containing the standalone gsutil executable should come before the Cloud SDK directory in your PATH environment variable. This can be done by prepending it to your PATH variable, e.g. by adding something like this to the end of your .bashrc file:
if [ -d "/path/to/standalone/gsutil/directory" ]; then
PATH="/path/to/standalone/gsutil/directory:$PATH"
fi
After doing this, you can run this command to reload your .bashrc file and check the "using cloud sdk" value of your gsutil info:
$ source "$HOME/.bashrc"; gsutil version -l
If this still shows that you're using the Cloud SDK version of gsutil, you might have an alias defined for gsutil - you can check for this by running:
$ type gsutil
If you still encounter auth issues when using the standalone version of gsutil, you'll need to generate new credentials:
$ gsutil config
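That writes a fresh ~/.boto by default; afterwards, a quick sanity check (a sketch) is to confirm which configuration the standalone gsutil now picks up:
$ gsutil version -l | grep -E 'using cloud sdk|config path'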