I can't get the images from us.gcr.io in Spinnaker pipeline trigger configuration - google-cloud-platform

I followed this tutorial from Google (https://cloud.google.com/solutions/continuous-delivery-spinnaker-container-engine). It worked fine, but at the pipeline-creation step (automated triggers with Docker Registry), I can't get the images from us.gcr.io. Has anyone had the same problem? Are there any (microservice) logs that could help me diagnose this?

It's possible that the Docker registry address specified in your Spinnaker configuration is gcr.io. It should be us.gcr.io.
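If your Spinnaker installation is managed with Halyard, the registry address can be checked and corrected roughly like this (a sketch: `my-gcr-account` and `my-project/my-app` are placeholder names, substitute your own):

```shell
# Show the currently configured Docker registry accounts
hal config provider docker-registry account list

# Point the account at the regional registry instead of gcr.io
# (my-gcr-account and my-project/my-app are placeholders)
hal config provider docker-registry account edit my-gcr-account \
  --address us.gcr.io \
  --repositories my-project/my-app

# Push the updated configuration to Spinnaker
hal deploy apply
```

After the apply completes, the trigger's image dropdown should be able to list repositories under us.gcr.io.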

Related

Customised Google Cloudshell image does not launch

Customised Google Cloud Shell image fails to launch, error is 'Cloud Shell is experiencing some issues provisioning a VM to you. Please try again in a few minutes'. Repeated attempts to launch also fail.
I created a custom Google Cloud Shell image with an Ansible lab environment and setup tutorial. When I tested this approximately 10 days ago, it worked as expected. Setup was performed using the following guide
Project is hosted with the 'Open in Google Cloud Shell' button here
For convenience, this is the launch button as a link
The customised Cloud Shell image is hosted at gcr.io/diveintoansible/diveintoansible-lab-gcp-cloudshell
I've checked the permissions and these appear to be open to the public as desired.
Any advice on resolving this, greatly appreciated.
This usually happens because the custom image was built against an out-of-date base image. If your image worked a few weeks ago, rebuilding it against the latest base should fix it.
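As a sketch, the rebuild looks like this (the custom image name is taken from the question; the base image path in the `FROM` line of the Dockerfile is assumed to be Google's published Cloud Shell base, and `--pull` forces Docker to refresh it):

```shell
# Refresh the Cloud Shell base image referenced by the Dockerfile's FROM line
docker pull gcr.io/cloudshell-images/cloudshell:latest

# Rebuild the custom image against the freshly pulled base and push it
# (image name taken from the question)
docker build --pull -t gcr.io/diveintoansible/diveintoansible-lab-gcp-cloudshell .
docker push gcr.io/diveintoansible/diveintoansible-lab-gcp-cloudshell
```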

MWAA - environments constantly loading

I'm currently trying to set up an Airflow environment via MWAA. I've gone through the create-environment steps twice, both times ending at the page listing Airflow environments with a banner saying the creation was successful. However, for the past two days, this page has shown only "Loading Environments" (as in the screenshot below), and the environment count reads (0).
So far, I've added two VPC interface endpoints, for ECR and for the environment's API, but no luck. Has anyone else run into this issue, or have any clue what might be happening? Thanks!
Were you able to find a solution to this issue? I had similar problems when I set up MWAA for the first time on an AWS account.
Here's a link to a tool for verifying that all the resources required by MWAA are set up correctly: https://github.com/awslabs/aws-support-tools/tree/master/MWAA
If you run the script in that repo, you should be able to see where the issue lies.
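For example (the script path, flags, and environment name below are assumptions based on that repo; check its README in case they have moved):

```shell
# Fetch the AWS support tools and run the MWAA environment checker
git clone https://github.com/awslabs/aws-support-tools.git
cd aws-support-tools/MWAA/verify_env

# The script inspects the VPC, security groups, endpoints, and IAM setup
# for the named environment (replace my-mwaa-env with your environment name)
pip3 install -r requirements.txt
python3 verify_env.py --envname my-mwaa-env
```

It prints a check-by-check report, which usually points at the missing endpoint, route, or security-group rule that keeps the environment stuck.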

Error in uploading code on AWS CodeCommit

I want to integrate AWS CodeCommit with AWS Elastic Beanstalk, but I am stuck uploading my code to CodeCommit. The repository is around 900 MB. The push appears to complete and then hangs; I am attaching an image showing the problem.
I have already increased the buffer size with the following command:
git config --global http.postBuffer 157286400
The main issue is how to successfully upload code of this size (approaching a GB) to AWS CodeCommit.
Even after that change I still hit the problem, so if you have any idea, please help me. Thanks in advance.
This is the image containing my problem definition
Based on your information, you may be hitting one of the CodeCommit limits, which are listed here: https://docs.aws.amazon.com/codecommit/latest/userguide/limits.html. I strongly suggest reviewing them in case you are running into one.
Could you also provide more details about your Git client and AWS region, and try running git push again with the GIT_CURL_VERBOSE=1 GIT_TRACE=1 options?
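Something like the following captures a full trace of the failing push (the remote and branch names are placeholders), and also raises http.postBuffer beyond the ~150 MB value set earlier; the value is in bytes:

```shell
# Raise the HTTP POST buffer to ~500 MB (value in bytes)
git config --global http.postBuffer 524288000
git config --global http.postBuffer

# Re-run the push with full curl/git tracing, keeping the output for inspection
# (origin and master are placeholders for your remote and branch)
GIT_CURL_VERBOSE=1 GIT_TRACE=1 git push origin master 2>&1 | tee push-trace.log
```

The tail of push-trace.log should show whether the hang is client-side buffering or the server closing the connection.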

AWS Elastic Beanstalk EbExtensionPostBuild and EbExtensionPreBuild executors

Good day,
I am in the process of deploying some of my applications to Elastic Beanstalk on AWS. From reading the documentation and tutorials I got everything deployed and working, but there is one big thing missing from the AWS documentation that I need to know, and I cannot find the information anywhere. Can someone please give me a link to documentation explaining this, or just explain it to me?
Who or what executes the EbExtensionPreBuild and EbExtensionPostBuild actions, what calls them, what do they run, and where do they get their commands from?
There are six actions in total, and nowhere does AWS explain what happens in them:
InfraWriteConfig...
DownloadSourceBundle...
EbExtensionPreBuild...
AppDeployPreHook...
EbExtensionPostBuild...
InfraCleanEbextension...
Can someone please explain these actions and link them to the bits of code they execute from the .config files in the .ebextensions folder?
Thank you
The environment used to answer your question is PHP 7.3 running on 64bit Amazon Linux/2.9.2, but for other platforms, such as Docker, the answer is likely the same, or at least the way of finding it is.
In the log file /var/log/eb-commandprocessor.log you can find a record of all tasks executed on your server; the most common is the deployment task, CMD-AppDeploy.
This task executes the following scripts:
CMD-AppDeploy
First stage: AppDeployStage0
DownloadSourceBundle
- /opt/elasticbeanstalk/bin/download-source-bundle
EbExtensionPreBuild
- /opt/elasticbeanstalk/eb_infra/infra-embedded_prebuild.rb
AppDeployPreHook
- /opt/elasticbeanstalk/hooks/appdeploy/pre
EbExtensionPostBuild
- /opt/elasticbeanstalk/eb_infra/infra-embedded_postbuild.rb
InfraCleanEbextension
- /opt/elasticbeanstalk/eb_infra/infra-clean_ebextensions_dir.rb
Second stage: AppDeployStage1
AppDeployEnactHook
- /opt/elasticbeanstalk/hooks/appdeploy/enact
AppDeployPostHook
- /opt/elasticbeanstalk/hooks/appdeploy/post
There is more than one task available in Beanstalk; you can find the full configuration in the file /opt/elasticbeanstalk/deploy/configuration/containerconfiguration.
Each script is a small part of the deployment process. If you need more detail on how the deployment is done, I suggest checking each script individually.
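To see these actions for yourself on a running instance, the command processor log and the task configuration can be inspected directly (paths as in the answer above; the grep pattern is a rough guess at the log format, adjust as needed):

```shell
# List the deployment tasks and stages recorded by the command processor
sudo grep -E 'AppDeploy|Stage' /var/log/eb-commandprocessor.log | tail -n 40

# Full mapping of tasks to the scripts they execute
sudo less /opt/elasticbeanstalk/deploy/configuration/containerconfiguration
```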

Deploying cloud foundry with BOSH - what does a "bosh delete deployment" clean up?

We've just gone through the process of deploying a multi node (34 nodes) cloud foundry using BOSH, with a few hiccups along the way. One in particular was that it took us several "bosh deploy" runs to get through the initial compilation steps. We'd start the bosh deploy, it would start compiling, get through a few components and then fail. There is no doubt that we have some configuration issues with our VMWare based infrastructure and I suspect we are running out of resources. But here is my main question for now.
We were able to get through the compiles by issuing a "bosh delete deployment ourcloud --force" after a failure.
What does this command clear out? It obviously left successfully compiled stuff in place, but what is cleaned? Temporary storage? Anything else?
Thanks.
bosh delete deployment deletes an entire deployment: it deletes the VMs in vCenter, clears the director's database of the deployment's info, and deletes the manifest. After it's done there should be no trace of the deployment (except logs).
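With the (v1-era) BOSH CLI used here, you can confirm what remains after the delete, roughly as follows (a sketch; exact subcommand names vary between CLI versions):

```shell
# Confirm the deployment and its VMs are gone
bosh deployments
bosh vms

# Uploaded releases, stemcells, and compiled packages are NOT removed by
# "delete deployment"; unused ones can be reclaimed separately:
bosh cleanup
```

Note that compiled packages survive the delete, which is why your subsequent deploys could skip the components that had already compiled.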