I build artifacts with Jenkins builds on CloudBees, and for the dev and test environments (which are on RUN@cloud) the deployments are done from Jenkins.
However, for production deployments I need to download the artifact (as a URL) on the production machine. Is there a way to set this up so that it does not ask for a CloudBees login?
If you don't want artifacts to be public, you need some credentials to access CloudBees Jenkins (this isn't a FOSS project, is it?).
You can use a Jenkins API token to authenticate, so you don't need to put your CloudBees password on the production server.
See https://[account].ci.cloudbees.com/user/[me@mycompany.com]/configure to retrieve the token, then you can access Jenkins using
wget http://[me%40mycompany.com]:[token]@<account>.ci.cloudbees.com/...
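A minimal sketch of a non-interactive download using the API token. The account, user, token, job name, and artifact path below are all hypothetical placeholders; substitute your own:

```shell
# All values are hypothetical placeholders -- substitute your own.
ACCOUNT="myaccount"
USER="me%40mycompany.com"   # the '@' must be URL-encoded as %40 in the userinfo part
TOKEN="0123456789abcdef"
JOB="my-job"

# Build the authenticated URL to an artifact of the last successful build.
URL="https://${USER}:${TOKEN}@${ACCOUNT}.ci.cloudbees.com/job/${JOB}/lastSuccessfulBuild/artifact/target/app.war"
echo "$URL"

# On the production machine, download without an interactive login:
# wget -O app.war "$URL"
```

Note that a token embedded in the URL ends up in shell history and process listings; a `~/.netrc` entry (which wget also reads) is a safer place to keep it.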
I'm using GCP build triggers connected to Bitbucket repositories. The connection is made using user credentials. Bitbucket has announced they're ending support for account password usage:
Beginning March 1, 2022, Bitbucket users will no longer be able to use
their Atlassian account password when using Basic authentication with
the Bitbucket API or Git over HTTPS. For security reasons, we require
all users to use Bitbucket app passwords.
Problem is, when trying to connect to a repository in GCP, the only option for supplying Bitbucket credentials is a web login (and, to the point of app passwords, you cannot log in via bitbucket.org with an app password).
GCP Bitbucket login prompt via bitbucket.org
Expected behavior: GCP provides an option to submit app password credentials when connecting to a Bitbucket repository.
I followed directions for GCP Cloud Build integration with Bitbucket Cloud and successfully built out a functioning trigger for my repository here. I only built the trigger in GCP and used the generated webhook URL when creating the webhook in Bitbucket: I didn't create SSH keys, nor is my cloudbuild.yaml entirely valid - so the builds are failing.
Access to the Bitbucket repository was provided through GCP GUI in Cloud Build.
I have been informed of this change as well, and I am trying to understand its scope and impact. The announcement states that you cannot log in with your Atlassian account password. However, besides using app passwords, you can also authenticate using OAuth2: https://developer.atlassian.com/cloud/bitbucket/oauth-2/
In the case of GCP Build Triggers, when I first set up the Bitbucket repository to connect to, I need to go through the "Authorization Code Grant" flow and acknowledge what access I am granting to Google Cloud Source Repository. If you check the Bitbucket API endpoints being called, they are URLs that are being used for "Authorization Code Grant" flow.
Based on these findings, am I right to say that there is no necessity to change existing triggers or mirrored repositories on GCP since they are using OAuth2 in the first place instead of Atlassian accounts and passwords?
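For reference, the Authorization Code Grant flow mentioned above has two steps. The authorize and access_token endpoints below are Bitbucket's published OAuth2 endpoints; the client ID and secret are hypothetical placeholders:

```shell
# Hypothetical OAuth consumer credentials issued by Bitbucket.
CLIENT_ID="abc123"
CLIENT_SECRET="s3cret"

# Step 1: send the user to Bitbucket's authorize endpoint; the user grants
# access and Bitbucket redirects back with a short-lived authorization code.
AUTH_URL="https://bitbucket.org/site/oauth2/authorize?client_id=${CLIENT_ID}&response_type=code"
echo "$AUTH_URL"

# Step 2: exchange the code for an access token (requires network; uncomment to run):
# curl -s -X POST -u "${CLIENT_ID}:${CLIENT_SECRET}" \
#   https://bitbucket.org/site/oauth2/access_token \
#   -d grant_type=authorization_code -d code=<code>
```

The key point is that no Atlassian account password appears anywhere in this flow, which is why OAuth2-based integrations were unaffected by the deprecation.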
If you can set up the build trigger to be fired by a webhook, you can configure the build with an SSH key. But if you have to configure it as a manual trigger, then using the Bitbucket login credentials is the only option. Personally, I don't like this configuration with a user login, though.
The only good thing is that even now (after Bitbucket stopped supporting login credentials for code checkout), the code checkout in GCP is working fine.
I am trying to fully automate Cloud Build trigger creation via a shell script.
As the source I use GitHub.
So far it's possible to create the trigger:
gcloud beta builds triggers create github \
--repo-name=organisation/repo \
--repo-owner=organisation \
--branch-pattern="^main$" \
--build-config=cloudbuild.yaml
BUT each repo has to be authorized manually first, otherwise you get the error:
ERROR: (gcloud.beta.builds.triggers.create.github) FAILED_PRECONDITION: Repository mapping does not exist. Please visit https://console.cloud.google.com/cloud-build/triggers/connect?project=********* to connect a repository to your project
Which links me to the UI to create the authorization manually
Is there a way to also automate that step?
Currently there is no way to connect to external repositories using the API, but there is an ongoing feature request for this to be implemented.
There are two options you can adopt now:
Connect all the repositories at once from the Cloud Console. This way, you will be able to automate the creation of triggers for those repositories.
Use Cloud Source Repositories, which are connected to Cloud Build by default, as indicated here. Check this documentation on how to create a remote repository in CSR from a local git repository.
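A sketch of the second option, mirroring a local git repository into Cloud Source Repositories. The project and repository names are placeholders, and the gcloud/git commands are commented out because they need authenticated access:

```shell
# Hypothetical project and repository names -- substitute your own.
PROJECT_ID="my-project"
REPO_NAME="my-app"

# Create the repository in Cloud Source Repositories:
# gcloud source repos create "$REPO_NAME" --project="$PROJECT_ID"

# CSR remotes follow this URL scheme:
REMOTE_URL="https://source.developers.google.com/p/${PROJECT_ID}/r/${REPO_NAME}"
echo "$REMOTE_URL"

# Point the local repository at it and push:
# git remote add google "$REMOTE_URL"
# git push google main
```

Since CSR repositories are visible to Cloud Build by default, triggers on them can then be created entirely from a script with `gcloud beta builds triggers create cloud-source-repositories`, with no manual authorization step.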
If you use another hosted Git provider, such as GitHub or Bitbucket, and still need to mirror the repository into Cloud Source Repositories, you must have the cloudbuild.builds.create permission for the Google Cloud project you're working with. This permission is typically granted through the cloudbuild.builds.editor role.
Here are some links to this information.
Creating and managing build triggers
I am trying to set up a Cloud Build trigger from a public GitHub repository with the Cloud Build GitHub App. I installed the app on my repository and authorized it, but when I was redirected to GCP to connect the repository to a project, this error message came up:
Failed to retrieve GitHub repositories.
The caller does not have permission
I suspect it may have something to do with having two-factor authentication enabled on my GitHub account, which I need for an organization.
I was able to mirror the same GitHub repository from Cloud Source Repositories without any issues, though. I am the owner of the repository and the GCP project.
*edit
Looks like the issue was due to having two-factor authentication enabled on my GitHub account. I disabled it and Cloud Build was able to connect to my repository. However, I need two-factor enabled, as my GitHub organization requires it.
*edit
I hadn't mentioned that the GitHub organization I was part of had an IP whitelist configured on top of requiring two-factor auth. I left the organization and re-enabled two-factor auth, and Cloud Build was able to connect to my repo. I'm not sure why I got the original issue if the repo is not in the GitHub organization.
After looking more into this problem: you either need to add the GCE IP address ranges to the GitHub organization's IP whitelist (https://cloud.google.com/compute/docs/faq#find_ip_range) or disable the whitelist if you can.
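If you go the whitelist route, Google publishes its Cloud IP ranges as JSON at https://www.gstatic.com/ipranges/cloud.json. A sketch of extracting the CIDR blocks; the live `curl` call is commented out, and the inline sample is abridged illustrative data, not the real list:

```shell
# Abridged illustrative sample of the published JSON (not real, current data).
SAMPLE='{"prefixes":[{"ipv4Prefix":"34.80.0.0/15","scope":"asia-east1"},{"ipv4Prefix":"35.185.128.0/19","scope":"us-east1"}]}'

# In practice, fetch the live list (requires network and jq):
# curl -s https://www.gstatic.com/ipranges/cloud.json | jq -r '.prefixes[].ipv4Prefix'

# Extract the CIDR blocks from the sample without jq:
CIDRS="$(echo "$SAMPLE" | grep -o '"ipv4Prefix":"[^"]*"' | cut -d'"' -f4)"
echo "$CIDRS"
```

Bear in mind the published ranges change over time, so a whitelist built from them needs periodic refreshing.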
I have a React application that I am bundling using Webpack. The app relies on a MongoDB database and a Node/Express server for the backend (API requests, etc.).
I want to set up continuous integration and deployment (CI/CD), but am not sure where to start. As my app's Git repo is on Bitbucket and I have had experience with AWS in the past, it would be good to enable CI/CD using these. How do I go about this?
You can use Jenkins to build your project from Bitbucket.
Make use of AWS CodePipeline and AWS CodeDeploy for continuous delivery on AWS.
Jenkins gives you the flexibility to work with any source control system, and has plugins for AWS CodePipeline.
From AWS CodePipeline, you can configure a stage to call a Jenkins build job.
I've been using this system in production for quite some time now, without any issues.
I am a novice at Jenkins. My demo project is hosted on GitHub, and with AWS CodeDeploy I can deploy it successfully. If I use AWS CodePipeline without Jenkins, whatever changes in GitHub is automatically integrated and the project runs. Now I want to use Jenkins, so that the project is deployed only if the code has built successfully. But when I add Jenkins to AWS CodePipeline and integrate it with my Jenkins server, the pipeline does not progress and just stays processing in the build stage. What is the error, or is it not integrated with Jenkins? What should I do? Kindly help me.
If your project is a simple single HTML page, then there is no need for a build provider.
If your project is based on Maven or Gradle, then Jenkins will build the job and generate the output artifact as a zip file stored in the Jenkins workspace. That output artifact is then taken as the input artifact for the next stage, usually deployment.
To use Jenkins as a build provider in AWS CodePipeline, you should use an IAM role to grant access between the Jenkins server and AWS CodePipeline.
Purpose of the IAM role:
The Jenkins server gets input artifact files from the source provider, such as an AWS S3 bucket or GitHub.
The Jenkins server polls SCM based on the build trigger in your job.
After a successful build, the Jenkins server stores the output artifact as a zip file in the Jenkins workspace, as mentioned earlier.
That output artifact is taken as input for the next stage; for example, the artifact might be deployed with AWS CodeDeploy.
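A sketch of the IAM policy such a role might carry. The role name is hypothetical, and the actions listed are the CodePipeline job-worker actions a polling Jenkins agent typically needs; verify the exact set against your setup:

```shell
# Hypothetical role name -- substitute your own.
ROLE_NAME="JenkinsCodePipelineRole"

# Minimal inline policy: poll for jobs, acknowledge them, fetch details,
# and report success or failure back to CodePipeline.
POLICY='{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["codepipeline:PollForJobs","codepipeline:AcknowledgeJob","codepipeline:GetJobDetails","codepipeline:PutJobSuccessResult","codepipeline:PutJobFailureResult"],"Resource":"*"}]}'
echo "$POLICY"

# Attach it to the role (requires AWS credentials; uncomment to run):
# aws iam put-role-policy --role-name "$ROLE_NAME" \
#   --policy-name jenkins-pipeline --policy-document "$POLICY"
```

A pipeline stuck in the build stage is often a symptom of this role (or the Jenkins-side AWS credentials) missing, since Jenkins then never polls for or acknowledges the job.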
Thanks