Termux repository no longer has a Release file - termux

I was trying to run the command:
pkg install imagemagick
And getting an error message:
The repository 'https://dl.bintray.com/termux/termux-packages-24
stable Release' does no longer have a Release file. Updating from such
a repository can't be done securely, and is therefore disabled by
default. See apt-secure(8) manpage for repository creation and user
configuration details.

The package repository is currently offline due to exceeding the bandwidth quota on Bintray. We have asked for it to be re-enabled, and it should hopefully come back online within a day.
You can track the progress at:
https://github.com/termux/termux-packages/issues/4358
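Once the repository is back online, refreshing the package index and retrying the install should be enough (a minimal sketch):
apt update
pkg install imagemagick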

Related

eb platform create fails with Ruby SDK deprecated error

When trying to create a custom ElasticBeanstalk platform that uses Python3.10.5, I keep running across this error:
[2022-07-01T05:50:06.466Z] INFO [5419] - [CMD-PackerBuild/PackerBuild/PackerBuildHook/build.rb] : Activity execution failed, because: Version 2 of the Ruby SDK will enter maintenance mode as of November 20, 2020. To continue receiving service updates and new features, please upgrade to Version 3. More information can be found here: https://aws.amazon.com/blogs/developer/deprecation-schedule-for-aws-sdk-for-ruby-v2/
'packer build' failed, the build log has been saved to '/var/log/packer-builder/Python3.10_Ubuntu:1.0.8-builder.log' (ElasticBeanstalk::ExternalInvocationError)
caused by: Version 2 of the Ruby SDK will enter maintenance mode as of November 20, 2020. To continue receiving service updates and new features, please upgrade to Version 3. More information can be found here: https://aws.amazon.com/blogs/developer/deprecation-schedule-for-aws-sdk-for-ruby-v2/
'packer build' failed, the build log has been saved to '/var/log/packer-builder/Python3.10_Ubuntu:1.0.8-builder.log' (Executor::NonZeroExitStatus)
I'm not sure how to get around it, as none of my actual code for this uses Ruby at all.
I have tried SSHing into the packer build box and running gem install aws-sdk to get the latest version, but it eventually hangs and never completes.
I'm really unsure of what to do at this point. Any advice?
Update: I was finally able to get gem install aws-sdk -V to finish after changing the version to ruby2.4, but the problem above still persists.
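One thing that may be worth trying on the build box is pinning the install to version 3 of the SDK instead of letting gem resolve it (an untested sketch, not a confirmed fix; it assumes the build hook just needs the v3 gem present):
gem install aws-sdk -v '~> 3'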

Azure DevOps: reuse the artifacts already downloaded in previous runs

Currently we are using NuGet packages as our Azure Artifacts, and during the release process we download the artifact using the "Download Package" task. It is working perfectly. But we noticed that even though we have already downloaded the package, during the next run of the pipeline on the same agent we have to download it again. This takes a lot of time. So we want to prevent the package from being downloaded if it is already present. Could you provide a way to reuse the already downloaded package?
In a release pipeline, System.DefaultWorkingDirectory (for example C:\agent\_work\r1\a; the same as Agent.ReleaseDirectory and System.ArtifactsDirectory) is the directory to which artifacts are downloaded during deployment of a release. The directory is cleared before every deployment if the release requires artifacts to be downloaded to the agent. This is the default behavior and unfortunately it cannot be changed.

Unable to push to Google Container Registry - Permission issue

I'm having the same problem as Vaclav. I've followed the GCR quick start to the letter, which entailed creating a new project (called gcr-project) and copying the code for a Flask (Python) app.
After building the docker image, I entered the commands:
gcloud auth configure-docker
docker tag quickstart-image gcr.io/gcr-project/quickstart-image:tag1
docker push gcr.io/gcr-project/quickstart-image:tag1
The response was:
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
So it would be nice to know whether the issue is with the credentials (I'm using the Cloud SDK fine for other projects) or with permissions. The documentation here suggests you need storage-admin rights, but the project already has them, see screen cap here
Would appreciate any tips for troubleshooting this, as I was looking forward to using GCR, but this problem is a hard stop for me.
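One quick sanity check, assuming Docker's default config location, is whether gcloud auth configure-docker actually registered the gcloud credential helper for gcr.io:
cat ~/.docker/config.json
# expect an entry like "credHelpers": { "gcr.io": "gcloud", ... }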
UPDATE:
I tried the same process with the cloud shell
me@cloudshell:~ (gcr-project-XXXXXX)$ docker push gcr.io/gcr-project/quickstart-image:tag1
The push refers to repository [gcr.io/gcr-project/quickstart-image]
4399528b7213: Preparing
1d10b1eeca74: Preparing
75156020d862: Preparing
c5697656a146: Preparing
2a435270de82: Preparing
c35f70b5c25a: Waiting
28e260baaf1b: Waiting
556c5fb0d91b: Waiting
denied: Token exchange failed for project 'gcr-project'. Please enable Google Container Registry API in Cloud Console at https://console.cloud.google.com/apis/api/containerregistry.googleapis.com/overview?project=gcr-project before performing this operation.
me@cloudshell:~ (gcr-project-XXXXXX)$
This prompted me to check the API & Services dashboard to confirm the container-registry API was enabled - It is.
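The same check can be scripted, assuming the gcloud services commands are available in that SDK version (a sketch using the project name from above):
gcloud services list --enabled --project gcr-project | grep containerregistry
# if it is missing, enable it explicitly:
gcloud services enable containerregistry.googleapis.com --project gcr-project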
UPDATE 2:
I'm having these problems on a machine running Ubuntu 19.04. Per the comments below I was able to do a push via the cloud shell. So I then went through the same exercise on a MacBook Pro - it worked, no problems.
So I then uninstalled the Cloud SDK per the doco, having used the standard Linux install instructions previously. I then re-installed using the debian-ubuntu install instructions (version 274.0.1-0)... STILL no go.
When I do a docker pull on the image (because push worked on MBP) I get this error: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
And when I do a push I get this error: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
So at this stage, given the success on the MBP and the lack thereof on the linux/ubuntu machine, the problem is constrained to linux/ubuntu installs.
UPDATE 3:
I got onto a separate Ubuntu server, did a clean install with sudo snap install google-cloud-sdk --classic, did everything else per the docs, and still had the exact same problem. So I reckon this is a Linux Google Cloud SDK specific problem.
Is there anyone out there in Ubuntu land who has been able to install and use the Cloud SDK with GCR recently?
I was able to replicate this issue on multiple Ubuntu machines. I tried again after the most recent Cloud SDK update (276.0.0) but had no luck.
In the end I went with the JSON key file authentication described in the docs here as a workaround, which worked fine.
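For completeness, the JSON key workaround amounts to logging Docker in as the _json_key user with the key file as the password (a sketch; the key file path is a placeholder):
docker login -u _json_key --password-stdin https://gcr.io < /path/to/keyfile.json
docker push gcr.io/gcr-project/quickstart-image:tag1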

'spinnaker-igor' was not found

I am installing Spinnaker on AWS for the first time. I am following the Spinnaker documentation.
https://www.spinnaker.io/setup/install/providers/aws/
But when I run the "hal deploy apply" command it gives an error.
Reading package lists...
Building dependency tree...
Reading state information...
E: Version '0.7.0-20171002182452' for 'spinnaker-igor' was not found
! ERROR Error encountered running script. See above output for more
details.
I checked the install.sh in the /home/ubuntu/.hal/default directory and I see it is configured with the Spinnaker repository.
The repo is "https://dl.bintray.com/spinnaker-releases/debians". I checked that repo and I could find the correct version of spinnaker-igor.
Could you please give me an idea of how to fix this issue?
Thanks
It's possible that Bintray was being flaky. The exact package you were trying to download exists here and has been published since Oct 2. You can retry hal deploy apply, and if it fails again, let us know.
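If it does fail again, one way to see what version the configured repo is actually advertising on that box is to refresh the index and query apt directly (a sketch; run on the machine where install.sh failed):
sudo apt-get update
apt-cache policy spinnaker-igor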

How to do automatic releases/nightlies of C++ software with GitHub?

What I'm looking for is something that builds C++ code every night or on every commit, and then, crucially, runs some commands to create a zip or a package which can then be added to a "Release" on GitHub.
I know there's travis-CI, which automatically compiles commits, and it can run for example a CMake INSTALL target and then CPack, which would create a zip or installer package. But it's not possible to upload these files to GitHub or display them somewhere.
I was thinking that maybe there was another service available for this which integrates with GitHub, but I couldn't find any Google hits whatsoever. Preferably this would be separate from travis-CI, since on travis you would run debug-like builds (static analysers etc.), while for a release you want to deploy you'd use release flags, build documentation, etc.
This is for an open source project so I'm looking for something that does this free for open source projects, preferably without setting up own server infrastructure.
There are a few related posts like Travis-CI Auto-Tag Build for GitHub Release or a travis section on deployment but they haven't really answered my question.
You can use travis-CI for this; check out "build artifacts" in the documentation.
https://docs.travis-ci.com/user/deployment/releases/
At time of writing it looks like this:
GitHub Releases Uploading
Travis CI can automatically upload assets from your $TRAVIS_BUILD_DIR to your git tags on your GitHub repository.
Please note that deploying GitHub Releases works only for tags, not for branches.
For a minimal configuration, add the following to your .travis.yml:
deploy:
  provider: releases
  api_key: "GITHUB OAUTH TOKEN"
  file: "FILE TO UPLOAD"
  skip_cleanup: true
  on:
    tags: true
Basically you would have to tag each commit that you want to get uploaded, so you could make a cron job that does that regularly, or do it manually, only on days when interesting work happened.
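A nightly tag from such a cron job could be as simple as the following sketch (the tag naming scheme is just an example):
git tag "nightly-$(date +%Y%m%d)"
git push origin "nightly-$(date +%Y%m%d)"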
Alternatively, you could make it upload all builds to a Google Cloud Storage account or an Amazon S3 account, and then you can cron job it from there. See docs for instance here.