File In Storage Bucket loses "Share publicly" permission after automated build - google-cloud-platform

I'm a bit new to Google Cloud and am using a storage bucket to host a static website.
I've integrated automated builds via a build trigger when my master branch gets updated. I can successfully see the changes when I push to GitHub, but when a preexisting file such as index.html gets updated, the file loses the "Share publicly" permission.
I've followed the tutorial below, with the only difference being that the object permissions are now handled at the individual file level on the platform rather than at the top level for the bucket.
https://cloud.google.com/community/tutorials/automated-publishing-container-builder
This is my cloudbuild.yaml file
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "rsync", "-r", "-c", "-d", ".", "gs://www.mysite.com"]

If you don't configure the bucket so that all objects in it are publicly readable by default, you'll need to re-apply the permission to each newly uploaded file.
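For reference, if you would rather make everything in the bucket publicly readable by default, so the build never has to touch per-object permissions, a minimal sketch (assuming the bucket name from the question) is:
# Bucket-level IAM grant: makes all objects in the bucket readable by anyone.
gsutil iam ch allUsers:objectViewer gs://www.mysite.com
# Or, for buckets using fine-grained (per-object) ACLs: set the default object ACL,
# which only affects objects uploaded after this change.
gsutil defacl ch -u AllUsers:R gs://www.mysite.com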
If you know all your updated files need to be publicly readable, you can use the -a option with your rsync command together with the canned ACL "public-read". Your cloudbuild.yaml file would look like this:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "rsync", "-a", "public-read", "-r", "-c", "-d", ".", "gs://www.mysite.com"]
If you don't want to set all objects publicly readable at once, you'll need to set permissions on a per-object basis, listing objects and applying permissions with the following command:
gsutil acl ch -u AllUsers:R gs://nameBucket/dir/namefile.ext
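If there are many objects to fix, the same ACL change can be applied recursively to a whole prefix instead of one file at a time; a hedged sketch using the same placeholder bucket and path:
# -r recurses into the prefix, -m parallelizes the ACL updates.
gsutil -m acl ch -r -u AllUsers:R gs://nameBucket/dir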

Related

Use DockerImageAsset image in Asset bundling

I want to upload a local file from the repository to S3 after it has been processed by a custom Docker image, using AWS CDK. I don't want to make the Docker image public (it's not a big restriction, though). Also, I don't want to build the image for each S3 deployment.
Since I don't want to build the Docker image for each bucket deployment, I have created a DockerImageAsset and tried to pass its image URI to the BucketDeployment's bundling property. The code is below:
const image = new DockerImageAsset(this, "cv-builder-image", {
  directory: join(__dirname, "../"),
});
new BucketDeployment(this, "bucket-deployment", {
  destinationBucket: bucket,
  sources: [
    Source.asset(join(__dirname, "../"), {
      bundling: {
        image: DockerImage.fromRegistry(image.imageUri),
        command: [
          "bash",
          "-c",
          'echo "heloo" >> /asset-input/cv.html && cp /asset-input/cv.html /asset-output/cv.html',
        ],
      },
    }),
  ],
});
The DockerImageAsset is deployed fine, but it throws this error during the BucketDeployment's deployment:
docker: invalid reference format: repository name must be lowercase
I can see the image being deployed to AWS.
Any help is appreciated. Have a nice day!
As far as I understand - to simplify - you have a Docker image which you use to launch a utility container that just takes a file and outputs an artifact (another file).
Then you want to upload the artifact to S3 using the BucketDeployment construct.
This is a common problem when dealing with compiling apps like Java to .jar artifacts or frontend applications (React, Angular) to static output (HTML, CSS, JS) files.
The way I've approached this in the past is: split the artifact generation into a separate step in your pipeline and THEN trigger "cdk deploy" as a subsequent step.
You would have fewer headaches, and you would control all parts of the process, including having access to the low-level Docker commands like docker build ... and docker run ..., and in effect leverage local layer caching in the best possible way. If you rely on CDK to do the bundling for you, there's a bit of magic behind the scenes that's not always obvious. I'm not saying it's impossible, it's just more "work".
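As a rough sketch of that split, assuming the utility image only needs to produce cv.html (the cv-builder tag and the ./dist output folder are placeholders, not names from the question):
# 1. Build the utility image once (placeholder tag).
docker build -t cv-builder .
# 2. Run it locally to produce the artifact into ./dist
#    (here just a copy, mirroring the question's bundling command).
mkdir -p dist
docker run --rm \
  -v "$PWD:/asset-input" -v "$PWD/dist:/asset-output" \
  cv-builder bash -c 'cp /asset-input/cv.html /asset-output/cv.html'
# 3. Deploy with CDK, pointing the BucketDeployment at a plain Source.asset("./dist")
#    so no bundling image is needed at deploy time.
npx cdk deploy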

VM Manager - OS Policy Assignment for a Windows VM in GCP

I am trying to create a couple of OS policy assignments to configure - run some scripts with PowerShell - and install some security agents on a Windows VM (Windows Server 2022), by using VM Manager. I am following the official Google documentation to set up the OS policies. VM Manager is already enabled; nevertheless, I have difficulties creating the appropriate .yaml file which is required for the policy assignment, since I haven't found any detailed examples.
Related topics I have found:
Google documentation offers a very simple example of installing an .msi file - Example OS policies.
An example of a fixed policy assignment in Terraform registry - google_os_config_os_policy_assignment, from where I managed to better comprehend the required structure for the .yaml file even though it is in a .json format.
A few examples provided in the GCP GitHub repository (OSPolicyAssignments).
OS Policy resources in JSON representation - REST Resource, from where you can navigate to sample cases based on the selected resource.
But it is still not very clear how to create the desired .yaml file (i.e. copy some files, run a PowerShell script to perform an installation or an authentication). According to the Google documentation, pkg, repository, exec, and file are the supported resource types.
Are there any more detailed examples I could use to understand what is needed? Have you already tried something similar?
Update: Adding an additional source.
You need to follow these steps:
Ensure that the OS Config agent is installed in your VM by running the below command in PowerShell:
Get-Service google_osconfig_agent
You should see output like this:
Status   Name                 DisplayName
------   ----                 -----------
Running  google_osconfig...   Google OSConfig Agent
If the agent is not installed, refer to this tutorial.
Set the metadata values to enable the OS Config agent with this Cloud Shell command:
gcloud compute instances add-metadata $YOUR_VM_NAME \
--metadata=enable-osconfig=TRUE
Generate an OS policy and OS policy assignment .yaml file. As an example, I am generating an OS policy that installs an MSI file retrieved from a GCS bucket, and an OS policy assignment to run it on all Windows VMs:
# An OS policy assignment to install a Windows MSI downloaded from a Google Cloud Storage bucket
# on all VMs running Windows Server OS.
osPolicies:
  - id: install-msi-policy
    mode: ENFORCEMENT
    resourceGroups:
      - resources:
          - id: install-msi
            pkg:
              desiredState: INSTALLED
              msi:
                source:
                  gcs:
                    bucket: <your_bucket_name>
                    object: chrome.msi
                    generation: 1656698823636455
instanceFilter:
  inventories:
    - osShortName: windows
rollout:
  disruptionBudget:
    fixed: 10
  minWaitDuration: 300s
Note: every file has its own generation number; you can get it with the command gsutil stat gs://<your_bucket_name>/<your_file_name>.
Apply the policy assignment created in the previous step using this Cloud Shell command:
gcloud compute os-config os-policy-assignments create $POLICY_NAME --location=$YOUR_ZONE --file=/<your-file-path>/<your_file_name.yaml> --async
Refer to the Examples of OS policy assignments for more scenarios, and check out this example of a PowerShell script.
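Optionally, once the assignment is created, you can confirm it registered and inspect it; a hedged sketch using the same placeholders as above (both subcommands should be available in current gcloud releases):
gcloud compute os-config os-policy-assignments list --location=$YOUR_ZONE
gcloud compute os-config os-policy-assignments describe $POLICY_NAME --location=$YOUR_ZONE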
Below you can find the .yaml file that worked in my case. It copies a file and executes a PowerShell command in order to configure and deploy a sample agent (TrendMicro) - again, this is specifically for a Windows VM.
.yaml file:
id: trendmicro-windows-policy
mode: ENFORCEMENT
resourceGroups:
  - resources:
      - id: copy-exe-file
        file:
          path: C:/Program Files/TrendMicro_Windows.ps1
          state: CONTENTS_MATCH
          permissions: '755'
          file:
            gcs:
              bucket: [your_bucket_name]
              generation: [your_generation_number]
              object: Windows/TrendMicro/TrendMicro_Windows.ps1
      - id: validate-running
        exec:
          validate:
            interpreter: POWERSHELL
            script: |
              $service = Get-Service -Name 'ds_agent'
              if ($service.Status -eq 'Running') {exit 100} else {exit 101}
          enforce:
            interpreter: POWERSHELL
            script: |
              Start-Process PowerShell -ArgumentList '-ExecutionPolicy Unrestricted','-File "C:\Program Files\TrendMicro_Windows.ps1"' -Verb RunAs
To elaborate a bit more, this .yaml file:
copy-exe-file: It copies the necessary installation script from GCS to a specified location on the VM. The generation number can easily be found under "VERSION HISTORY" when you select the object in GCS.
validate-running: This resource contains two different steps. In the validate step it checks whether the specific agent is up and running on the VM. If not, it proceeds with the enforce step, which executes the "TrendMicro_Windows.ps1" file with PowerShell. This .ps1 file downloads, configures and installs the agent. Note 1: this command is executed as Administrator and the full path of the file is specified. Note 2: instead of Start-Process PowerShell, Start-Process pwsh can also be used; this was vital in one of my cases.
Essentially, a PowerShell command can be run directly in the enforce step; nonetheless, I found it much easier to put it in a .ps1 file first and then just run that file. There are some restrictions with the .yaml file anyway.
PS: passing osconfig-log-level: debug as a key-value pair in Metadata - either directly on a VM or applied to all of them (Compute Engine > Settings > Metadata > EDIT > ADD ITEM) - provides some additional information and may help you deal with errors.
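For example, a hedged sketch of setting that key on a single VM from Cloud Shell (VM name as in the earlier placeholder; add --zone if your configuration requires it):
gcloud compute instances add-metadata $YOUR_VM_NAME \
  --metadata=osconfig-log-level=debug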

gcloud builds submit of Django website results in error "does not have storage.objects.get access"

I'm trying to deploy my Django website with Cloud Run, as described in Google Cloud Platform's documentation, but I get the error Error 403: 934957811880@cloudbuild.gserviceaccount.com does not have storage.objects.get access to the Google Cloud Storage object., forbidden when running the command gcloud builds submit --config cloudmigrate.yaml --substitutions _INSTANCE_NAME=trouwfeestwebsite-db,_REGION=europe-west6.
The full output of the command is: (the error is at the bottom)
Creating temporary tarball archive of 119 file(s) totalling 23.2 MiB before compression.
Some files were not included in the source upload.
Check the gcloud log [C:\Users\Sander\AppData\Roaming\gcloud\logs\2021.10.23\20.53.18.638301.log] to see which files and the contents of the default gcloudignore file used (see `$ gcloud topic gcloudignore` to learn more).
Uploading tarball of [.] to [gs://trouwfeestwebsite_cloudbuild/source/1635015198.74424-eca822c138ec48878f292b9403f99e83.tgz]
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: could not resolve source: googleapi: Error 403: 934957811880@cloudbuild.gserviceaccount.com does not have storage.objects.get access to the Google Cloud Storage object., forbidden
At the level of my storage bucket, I granted 934957811880@cloudbuild.gserviceaccount.com the Storage Object Viewer role, since I see on https://cloud.google.com/storage/docs/access-control/iam-roles that this covers storage.objects.get access.
I also tried granting Storage Object Admin and Storage Admin.
I also added the "Viewer" role on the IAM level (https://console.cloud.google.com/iam-admin/iam) for 934957811880@cloudbuild.gserviceaccount.com, as suggested in https://stackoverflow.com/a/68303613/5433896 and https://github.com/google-github-actions/setup-gcloud/issues/105, but it seems fishy to me to give the account such a broad role.
I enabled Cloud Run in the Cloud Build permissions tab: https://console.cloud.google.com/cloud-build/settings/service-account?project=trouwfeestwebsite
With these changes, I still get the same error when running the gcloud builds submit command.
I don't understand what I could be doing wrong in terms of credentials/authentication (https://stackoverflow.com/a/68293734/5433896). I didn't change my Google account password or revoke permissions of that account for the Google Cloud SDK since I initialized that SDK.
Do you see what I'm missing?
The content of my cloudmigrate.yaml is:
steps:
- id: "build image"
  name: "gcr.io/cloud-builders/docker"
  args: ["build", "-t", "gcr.io/${PROJECT_ID}/${_SERVICE_NAME}", "."]
- id: "push image"
  name: "gcr.io/cloud-builders/docker"
  args: ["push", "gcr.io/${PROJECT_ID}/${_SERVICE_NAME}"]
- id: "apply migrations"
  name: "gcr.io/google-appengine/exec-wrapper"
  args:
    [
      "-i",
      "gcr.io/$PROJECT_ID/${_SERVICE_NAME}",
      "-s",
      "${PROJECT_ID}:${_REGION}:${_INSTANCE_NAME}",
      "-e",
      "SETTINGS_NAME=${_SECRET_SETTINGS_NAME}",
      "--",
      "python",
      "manage.py",
      "migrate",
    ]
- id: "collect static"
  name: "gcr.io/google-appengine/exec-wrapper"
  args:
    [
      "-i",
      "gcr.io/$PROJECT_ID/${_SERVICE_NAME}",
      "-s",
      "${PROJECT_ID}:${_REGION}:${_INSTANCE_NAME}",
      "-e",
      "SETTINGS_NAME=${_SECRET_SETTINGS_NAME}",
      "--",
      "python",
      "manage.py",
      "collectstatic",
      "--verbosity",
      "2",
      "--no-input",
    ]
substitutions:
  _INSTANCE_NAME: trouwfeestwebsite-db
  _REGION: europe-west6
  _SERVICE_NAME: invites-service
  _SECRET_SETTINGS_NAME: django_settings
images:
- "gcr.io/${PROJECT_ID}/${_SERVICE_NAME}"
Thank you very much for any help.
The following solved my problem.
DazWilkin was right in saying:
it's incorrectly|unable to reference the bucket
(comment upvoted for that, thanks!). In my secret (configured in Secret Manager; alternatively, you can put this in a .env file at the project root folder level, making sure you don't exclude that file from deployment in a .gcloudignore file), I now have set:
GS_BUCKET_NAME=trouwfeestwebsite_sasa-trouw-bucket (project ID + underscore + storage bucket ID)
instead of
GS_BUCKET_NAME=sasa-trouw-bucket
Whereas the tutorial in fact stated I had to set the former, I had set the latter because I found the underscore splitting weird; nowhere in the tutorial had I seen something similar, so I thought it was an error in the tutorial.
Adapting the GS_BUCKET_NAME changed the error of gcloud builds submit to:
Creating temporary tarball archive of 412 file(s) totalling 41.6 MiB before compression.
Uploading tarball of [.] to [gs://trouwfeestwebsite_cloudbuild/source/1635063996.982304-d33fef2af77a4744a3bb45f02da8476b.tgz]
ERROR: (gcloud.builds.submit) PERMISSION_DENIED: service account "934957811880@cloudbuild.gserviceaccount.com" has insufficient permission to execute the build on project "trouwfeestwebsite"
That would mean that at least the bucket is now found, and only a permission is missing.
Edit (a few hours later): I noticed this GS_BUCKET_NAME=trouwfeestwebsite_sasa-trouw-bucket (project ID + underscore + storage bucket ID) setting then caused trouble in a later stage of the deployment, when deploying the static files (last step of the cloudmigrate.yaml). This seemed to work for both (notice that the project ID is no longer in the GS_BUCKET_NAME, but in its separate environment variable):
DATABASE_URL=postgres://myuser:mypassword@//cloudsql/mywebsite:europe-west6:mywebsite-db/mydb
GS_PROJECT_ID=trouwfeestwebsite
GS_BUCKET_NAME=sasa-trouw-bucket
SECRET_KEY=my123Very456Long789Secret0Key
Then, it seemed that there also really was a permissions problem:
for the sake of completeness, afterwards, I tried adding the permissions as stated in https://stackoverflow.com/a/55635575/5433896, but it didn't prevent the error I reported in my question.
This answer, however, helped me: https://stackoverflow.com/a/33923292/5433896 =>
Setting the Editor role on the Cloud Build service account allowed the gcloud builds submit command to continue further without throwing the permissions error.
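For reference, a hedged sketch of that grant from the CLI, using the project and service account mentioned above (roles/editor is broad, so consider scoping it down once the build works):
gcloud projects add-iam-policy-binding trouwfeestwebsite \
  --member="serviceAccount:934957811880@cloudbuild.gserviceaccount.com" \
  --role="roles/editor"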
If you have the same problem: I think a few things mentioned in my question can also help you - for example I think doing this may also have been important:
I enabled Cloud Run in the Cloud Build permissions tab:
https://console.cloud.google.com/cloud-build/settings/service-account?project=trouwfeestwebsite

Text Compression when serving from a Github Trigger

I'm trying to figure out how to serve my js, css and html as compressed gzip from my Google Cloud Storage bucket. I've set up my static site properly, and also built a Cloud Build Trigger to sync the contents from the repository on push. My problem is that I don't want to have gzips of these files on my repository, but rather just serve them from the bucket.
I might be asking too much for such a simple setup, but perhaps there is a command I can add to my cloudbuild.yaml to make this work.
At the moment it is just this:
steps:
- name: gcr.io/cloud-builders/gsutil
  args: ["-m", "rsync", "-r", "-c", "-d", ".", "gs://my-site.com"]
As far as I'm aware this just syncs the repo contents to the bucket. Is there another command that could ensure that the aforementioned files are transferred as gzip? I've seen use of gsutil cp, but not within this specific Cloud Build pipeline setup from GitHub.
Any help would be greatly appreciated!
The gsutil setmeta command lets you set metadata on the objects that overrides the default HTTP headers served, which is handy for the Content-Type and Cache-* options.
gsutil setmeta -h "Content-Encoding: gzip" gs://bucket_name/folder/*
For more info about Transcoding with gzip-uploaded files: https://cloud.google.com/storage/docs/transcoding
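If you'd rather compress at upload time instead of tagging already-gzipped objects, gsutil cp has a -z option that gzips files with the listed extensions locally and sets Content-Encoding: gzip on the uploaded objects; a hedged sketch with the same bucket (note that cp, unlike rsync -d, will not delete objects for files removed from the repo):
# Recursively upload the checkout, gzip-compressing html/css/js on the way up.
gsutil -m cp -r -z html,css,js ./* gs://my-site.com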

How do I retrieve assets from a Google Storage bucket within a Google Container Registry automated build?

I've created a mirrored GitHub repo in Google's Container Registry and then created a Build Trigger. The dockerfile in the repo includes gsutil -m rsync -r gs://asset-bucket/ local-dir/ so that I can move shared private assets into the container.
But I get an error:
ServiceException: 401 Anonymous caller does not have storage.objects.list access to asset-bucket
I have an automatically created service account (@cloudbuild.gserviceaccount.com) for building and it has the Cloud Container Builder role. I tried adding Storage Object Viewer, but I still get the error.
Shouldn't the container builder automatically have the appropriate permissions?
Are you using the gcr.io/cloud-builders/gsutil build step to do this? That should use default credentials properly and it should Just Work.
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: [ "-m", "rsync", "gs://asset-bucket/", "local-dir/" ]
Alternatively, you could try the GCS Fetcher.
Just to be specific about the answer from @david-bendory: privileged calls cannot occur inside a Dockerfile. I created a cloudbuild.yaml that looks like this:
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: [ "-m", "rsync", "-r", "gs://my-assets/", "." ]
  dir: "static"
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/project-name', '.']
images: ['gcr.io/$PROJECT_ID/project-name']
and a Dockerfile that includes
COPY static/* www/
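For completeness, if the builder service account still lacks read access to the asset bucket, a hedged sketch of granting it at the bucket level (the project number is a placeholder, not a value from the question):
gsutil iam ch \
  serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com:roles/storage.objectViewer \
  gs://asset-bucket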