How to add a GitLab private repository in Argo CD

I am facing an issue. Let's assume I have created a GitLab repository and added it to Argo CD using a CRD with my username and password. How will other developers access that repository, or create a project or an application using it? Please suggest how we can solve this.
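One common approach (a sketch, assuming a standard Argo CD installation in the `argocd` namespace; the URL, username, and token below are placeholders) is to register the repository declaratively as a Kubernetes Secret labeled `argocd.argoproj.io/secret-type: repository`. Argo CD picks up such Secrets as repository credentials, so developers' Applications can reference the repo URL without ever seeing the password:

```yaml
# Hypothetical example: declarative repository credential for Argo CD.
# Replace the url, username, and password values with your own;
# prefer a GitLab project/deploy token over a personal password.
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-private-repo
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: https://gitlab.com/my-group/my-project.git
  username: my-gitlab-user
  password: my-gitlab-access-token
```

Once this Secret exists, any Application that points at the same `url` inherits the credentials, so individual developers never need to handle them.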

Related

GCP yum artifact registry 403 when imported from packer instance

I am trying to install a package from a yum repository created with GCP Artifact Registry, from within a Packer instance. I can install the package if the repository grants public access to allUsers; however, it fails if access is limited to a service account, even though that service account has the roles/artifactregistry.admin or roles/artifactregistry.reader role. The Packer build uses the default network with the "https://www.googleapis.com/auth/cloud-platform" scope and the appropriate service_account_email and account json options.
Errors during downloading metadata for repository 'MyRepository':
- Status code: 403 for https://us-central1-yum.pkg.dev/projects/project-xyz/repo-rhel8/repodata/repomd.xml (IP: 142.250.125.82)
Error: Failed to download metadata for repo 'MyRepository': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
I would appreciate any help with this problem.
There are several possible reasons for this error:
You need to verify that the VM has an associated service account:
Go to the VM instances page.
In the list of VMs, find your VM; on its details tab, the service account and access scopes appear under "API and identity management". By default, VMs use the Compute Engine default service account; change this to the account you intend to use. Please check this document.
You also need to check that the VM's service account has read permission on the repository, as well as the cloud-platform API access scope.
The problem was solved by installing the Artifact Registry yum plugin. I was using RHEL 8, and the yum-plugin-artifact-registry package was not found. After looking into the PR (https://github.com/GoogleCloudPlatform/artifact-registry-yum-plugin/pull/14), I found that I had to install dnf-plugin-artifact-registry instead, which is available in the default repositories; after that I was able to reach my custom repo.
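For reference, the repository definition that the plugin authenticates looks roughly like the sketch below. The project and location values are taken from the error message in the question; verify the exact layout against the Artifact Registry documentation for your setup:

```ini
# /etc/yum.repos.d/artifact-registry.repo (sketch; values are placeholders
# based on the URL in the error above). With dnf-plugin-artifact-registry
# installed, dnf fetches OAuth credentials from the VM's service account
# when downloading from *.pkg.dev.
[MyRepository]
name=MyRepository
baseurl=https://us-central1-yum.pkg.dev/projects/project-xyz/repo-rhel8
enabled=1
repo_gpgcheck=0
gpgcheck=0
```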

Error on trying to add Private key with SHA-2 to AWS Opsworks

I am using OpsWorks Chef 11. It was working fine until 15 March 2022; now I am getting:
ERROR: You’re using an RSA key with SHA-1, which is no longer allowed. Please use a newer client or a different key type.
Please see Improving Git protocol security on GitHub | The GitHub Blog for more information.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I recreated the key with SHA-2 and updated it in GitHub, but I am unable to update it in OpsWorks.
Is there any way to pass the new SHA-2 key to OpsWorks?
If you mean accessing a GitHub repository using an SSH URL from OpsWorks, then the relevant documentation is "AWS OpsWorks / Using Git Repository SSH Keys".
Reminder: AWS OpsWorks Stacks does not support SSH key passphrases.
Enter the private key in the Repository SSH Key box when you add an app or specify a cookbook repository, and select Git under Source Control.
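Since OpsWorks Stacks does not support passphrases, the replacement key has to be generated without one. A minimal sketch (the file path and comment are arbitrary; an ECDSA key avoids the RSA/SHA-1 signature issue entirely):

```shell
# Generate a fresh ECDSA key with no passphrase (-N ""); OpsWorks Stacks
# rejects keys that require a passphrase.
rm -f /tmp/opsworks-deploy-key /tmp/opsworks-deploy-key.pub
ssh-keygen -t ecdsa -b 521 -N "" -f /tmp/opsworks-deploy-key -C "opsworks-deploy"
# The public half goes into GitHub (repository Deploy keys or account SSH keys);
# the private half is pasted into the Repository SSH Key box in OpsWorks.
cat /tmp/opsworks-deploy-key.pub
```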

aws cdk push image to ecr

I am trying to do something that seems fairly logical and straightforward.
I am using the AWS CDK to provision an ecr repo:
from aws_cdk import aws_ecr as ecr, core

repository = ecr.Repository(
    self,
    id="Repo",
    repository_name=ecr_repo_name,
    removal_policy=core.RemovalPolicy.DESTROY,
)
I then have a Dockerfile which lives at the root of my project that I am trying to push to the same ECR repo in the deployment.
I do this in the same service code with:
from pathlib import Path

from aws_cdk.aws_ecr_assets import DockerImageAsset

assets = DockerImageAsset(
    self,
    "S3_text_image",
    directory=str(Path(__file__).parent.parent),
    repository_name=ecr_repo_name,
)
The deployment goes ahead fine and the ECR repo is created, but the image is pushed to the default location, aws-cdk/assets.
How do I make the deployment push my image to the exact ECR repo I want it to live in?
AWS CDK deprecated the repositoryName property on DockerImageAsset. There are a few issues on GitHub referencing the problem. See this comment from one of the developers:
At the moment the CDK comes with 2 asset systems:
The legacy one (currently still the default), where you get to specify a repositoryName per asset, and the CLI will create and push to whatever ECR repository you name.
The new one (will become the default in the future), where a single ECR repository will be created by doing cdk bootstrap and all images will be pushed into it. The CLI will not create the repository any more, it must already exist. IIRC this was done to limit the permissions required for deployments. @eladb, can you help me remember why we chose to do it this way?
There is a request for a new construct that will allow you to deploy to a custom ECR repository at (aws-ecr-assets) ecr-deployment #12597.
Use Case
I would like to use this feature to completely deploy my local image source code to ECR for me using an ECR repo that I have previously created in my CDK app or more importantly outside the app using an arn. The biggest problem is that the image cannot be completely abstracted into the assets repo because of auditing and semantic versioning.
There is also a third party solution at https://github.com/wchaws/cdk-ecr-deployment if you do not want to wait for the CDK team to implement the new construct.
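As an illustration, that third-party construct can copy the CDK-built asset image into the repository created earlier. This is only a sketch against the cdk-ecr-deployment API as described in that project's README (not verified against your CDK version), reusing the `repository` and `assets` variables from the question:

```python
# Sketch: copy the asset image into the explicitly created ECR repository
# using the third-party cdk-ecr-deployment construct.
import cdk_ecr_deployment as ecrdeploy

ecrdeploy.ECRDeployment(
    self,
    "CopyToCustomRepo",
    src=ecrdeploy.DockerImageName(assets.image_uri),
    dest=ecrdeploy.DockerImageName(f"{repository.repository_uri}:latest"),
)
```

The asset still lands in the bootstrap/assets repository first; this construct then copies it to the destination repo during deployment.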

How to create an administration account for Keycloak on AWS ECS

I am working on AWS ECS. I have uploaded a Keycloak image to ECS, but when I run the task and open it using the public IP, I have a problem with the administration account: there is no admin account at first, and I am not able to create one.
What I have done: I created a task definition using the jboss/keycloak:latest image URL, then created a cluster and ran a task using that task definition.
Issue: creating an admin account on the running task.
Thanks for your help.
If you are using the official Keycloak image (here), you can pass the following environment variables to generate the admin user:
KEYCLOAK_USER=admin
KEYCLOAK_PASSWORD=password
Or, if you are not using that Keycloak image (maybe you built your own), you can run the following command directly, which does the same:
/opt/jboss/keycloak/bin/add-user-keycloak.sh --user "admin" --password "password"
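On ECS, those environment variables go into the container definition of the task definition. A sketch of the relevant JSON fragment (container name and values are placeholders; for a real deployment, pass the password via the `secrets` field and AWS Secrets Manager rather than in plain text):

```json
{
  "name": "keycloak",
  "image": "jboss/keycloak:latest",
  "environment": [
    { "name": "KEYCLOAK_USER", "value": "admin" },
    { "name": "KEYCLOAK_PASSWORD", "value": "password" }
  ]
}
```

Note that the image only creates the admin user on first startup against an empty database, so re-run the task after updating the task definition.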

GitLab to AWS S3 Integration

I am trying to build CI/CD using AWS CodePipeline.
I am integrating GitLab with AWS S3, using this guide:
https://aws.amazon.com/quickstart/architecture/git-to-s3-using-webhooks/
When code is pushed to a specific branch, the AWS API is called (I can see this in the CloudWatch logs), but I am getting the error below:
Failed to authenticate SSH session: Waiting for USERAUTH response:
GitError
Do I need to configure the GitLab username/keys anywhere on the AWS/S3/CloudFormation side?
I have configured the Git pull URL (GitPullWebHookApi) on the GitLab webhooks side.
I have configured the PublicSSHKey from the AWS CloudFormation stack as the secret token in GitLab.
Am I missing any step?
Is there any document which specifies the steps to configure the Gitlab keys/user credentials for this integration?
Add the SSH public key resource "PublicSSHKey", generated by the CloudFormation stack, to the GitLab user's public key settings. Remember that the public key needs to be added to the account of each user who needs to invoke the pipeline when committing a change to the Git repository. The Outputs tab for the CloudFormation stack contains the two webhook endpoint URLs, the output bucket name, and the public SSH key [1].
[1] https://aws-quickstart.s3.amazonaws.com/quickstart-git2s3/doc/git-to-amazon-s3-using-webhooks.pdf