Issue downloading dependency from Amazon S3

I am currently trying to download a dependency from an Amazon S3 bucket for a Maven framework project, but IntelliJ is unable to download it when I compile. The .m2 repository shows the folder for the dependency, it just doesn't contain the required files. There is also a settings file in .m2 providing a username and password for the S3 bucket. In the IntelliJ Maven window all dependencies are underlined in red, but only the two dependencies relying on S3 are not being imported. Also, when I install them locally they are found and work fine.
Some of the actions I have taken:
Deletion of the repository
Deletion of the .m2 folder
Invalidating caches and restarting
Reloading all projects
Downloading sources and documentation
Rebuilding
Installing locally (as mentioned above)
Reinstalling IntelliJ
Deleting the project and re-cloning it from CodeCommit
If anyone has any ideas then I would be very grateful to try them out!

You can find the proper Maven dependencies in the POM file located in the AWS SDK examples GitHub repository here:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/javav2/example_code/s3
That POM file is valid within an IntelliJ project.
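The original POM contents are not reproduced here. As a rough sketch of how an S3-backed Maven repository is usually wired up (the wagon extension coordinates, bucket path, and server id below are illustrative placeholders, not taken from the question), the pom.xml declares the repository and a wagon extension that understands s3:// URLs:

<!-- pom.xml sketch: S3-backed repository plus a wagon able to resolve s3:// URLs.
     Coordinates, version and bucket path are placeholders; verify them for your setup. -->
<repositories>
  <repository>
    <id>my-s3-repo</id>
    <url>s3://my-bucket/release</url>
  </repository>
</repositories>
<build>
  <extensions>
    <extension>
      <groupId>com.github.seahen</groupId>
      <artifactId>maven-s3-wagon</artifactId>
      <version>1.3.3</version>
    </extension>
  </extensions>
</build>

and ~/.m2/settings.xml supplies the credentials for the matching server id:

<!-- settings.xml sketch: the id must match the repository id used in the POM -->
<servers>
  <server>
    <id>my-s3-repo</id>
    <username>YOUR_AWS_ACCESS_KEY_ID</username>
    <password>YOUR_AWS_SECRET_ACCESS_KEY</password>
  </server>
</servers>

One common cause of the behaviour described above is a mismatch between the server id in settings.xml and the repository id in the POM, in which case Maven never sends the credentials.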

Related

Repository as dependency doesn't reflect changes

My Next.js front-end app on AWS has a back-end dependency linked in package.json this way:
"api-client": "git+https://username:password#bitbucket.org/username/api_client_dev.git".
When I update my back-end repository with changes, everything works locally (npm run dev), but when the app builds on AWS (with Amplify) it reports an error about a variable referring to something I have not implemented yet.
My front-end doesn't recognize the updated repository.
If I check my repo on Bitbucket, it is updated.
No problems with branches.
I don't understand why. Any suggestion?
Thank you
The problem was in amplify.yml.
Adding npm update to the preBuild phase forces Amplify to refresh the cached dependencies in node_modules, my dependency included.
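For reference, a minimal sketch of what that amplify.yml change can look like (the phase layout follows the standard Amplify build spec; the baseDirectory and build command are assumptions for a Next.js app, so adjust them to your project):

version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm install
        - npm update   # forces Amplify to refresh cached dependencies in node_modules, including git dependencies
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: .next
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*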

Azure DevOps: reuse artifacts already downloaded in previous runs

Currently we are using NuGet packages as our Azure Artifacts, and during the release process we download the artifact using the "Download Package" task. It works perfectly. But we noticed that even though we have already downloaded the package, during the next run of the pipeline on the same agent we have to download it again. This takes a lot of time. So we want to prevent the package from being downloaded if it is already present. Could you provide a way to reuse the already downloaded package?
In a release pipeline, System.DefaultWorkingDirectory (for example C:\agent\_work\r1\a; the same as Agent.ReleaseDirectory and System.ArtifactsDirectory) is the directory to which artifacts are downloaded during deployment of a release. The directory is cleared before every deployment if the release requires artifacts to be downloaded to the agent. This is the default behavior and unfortunately it cannot be changed.
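As a possible workaround where the pipeline is defined in YAML (classic release pipelines do not support this), the built-in Cache task can keep previously downloaded packages on the agent between runs; the key and path below are hypothetical and only sketch the idea:

steps:
  - task: Cache@2
    displayName: Restore previously downloaded package
    inputs:
      # include the package version in the key so a new version invalidates the cache
      key: 'mypackage | "$(Agent.OS)" | 1.2.3'
      path: '$(Pipeline.Workspace)/downloaded-package'
      cacheHitVar: PACKAGE_RESTORED
  # guard the existing "Download Package" step with a condition such as
  # ne(variables.PACKAGE_RESTORED, 'true') so it only runs on a cache miss

Whether this pays off depends on how large the package is compared with the time needed to restore the cache.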

Nexus 3.5.1 proxies nothing but Maven metadata files from snapshot repo

I have upgraded my Nexus repository from 2.x to 3.x through the following path:
2.4.14 -> 3.4.0 -> 3.5.1
All Nexus services are packed in Docker, with the data directory mapped from the host. For all services I use the default sonatype/nexus or sonatype/nexus3 images. The Nexus web interface is hidden behind nginx with simple reverse proxying.
I use the Nexus service with boot-cj tools (with no credentials), which manage dependencies the same way as Maven. Anyway, the tool first downloads nexus-maven.xml with the relevant sha1 files and then tries to download the JARs. This worked fine with all the 2.x versions I had.
I created a proxy repository against the remote sonatype-snapshots repo. When I start compilation I get a Could not find artifact error. I found that the metadata files are cached, but not the POMs and JARs.
I have tried to fix it by cleaning the cache with the clean_cache file trick, and more roughly with rm -rfv /srv/nexus3/nexus-data/cache/*, with no success. There are no error entries in the logs. I have also checked manually that the required artifact exists in the remote repository. The more obvious Rebuild index button did not help either. I do not think it is a problem with nginx, but who knows? Leaving it overnight so the scheduled tasks could run did not help either.
The expected artifact is org.eclipse.rdf4j:rdf4j:pom:2.3-20170901.145510-11.
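For comparison, when a Maven-style client resolves snapshots through a Nexus proxy, the repository definition on the client side normally has to enable snapshots explicitly before snapshot POMs and JARs are fetched at all; a rough sketch with a placeholder id and URL (not taken from the question's setup):

<!-- client-side repository definition sketch; id and URL are placeholders -->
<repositories>
  <repository>
    <id>nexus-snapshots-proxy</id>
    <url>https://nexus.example.com/repository/sonatype-snapshots/</url>
    <releases>
      <enabled>false</enabled>
    </releases>
    <snapshots>
      <enabled>true</enabled>
      <updatePolicy>always</updatePolicy>
    </snapshots>
  </repository>
</repositories>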

How to do automatic releases/nightlies of C++ software with GitHub?

What I'm looking for is something that builds C++ code every night or on every commit, and then, crucially, runs some commands to create a zip or a package which can then be added to a "Release" on GitHub.
I know there's travis-CI, which automatically compiles commits, and it can run for example a CMake INSTALL target and then CPack, which would create a zip or installer package. But it's not possible to upload these files to GitHub or display them somewhere.
I was thinking there might be another service available that integrates with GitHub, but I couldn't find any Google hits whatsoever. Preferably this would be separate from Travis CI, since on Travis you would run debug-like builds (static analysers etc.), while for a release deployment you'd use release flags, build documentation, etc.
This is for an open source project so I'm looking for something that does this free for open source projects, preferably without setting up own server infrastructure.
There are a few related posts like Travis-CI Auto-Tag Build for GitHub Release or a travis section on deployment but they haven't really answered my question.
You can use Travis CI for this; check out "build artifacts" in the documentation.
https://docs.travis-ci.com/user/deployment/releases/
At time of writing it looks like this:
GitHub Releases Uploading
Travis CI can automatically upload assets from your $TRAVIS_BUILD_DIR to your git tags on your GitHub repository.
Please note that deploying GitHub Releases works only for tags, not for branches.
For a minimal configuration, add the following to your .travis.yml:
deploy:
  provider: releases
  api_key: "GITHUB OAUTH TOKEN"
  file: "FILE TO UPLOAD"
  skip_cleanup: true
  on:
    tags: true
Basically you would have to tag each commit that you want to get uploaded, so you could make a cron job that does that regularly, or do it manually, only on days when interesting work happened.
Alternatively, you could have it upload all builds to a Google Cloud Storage or Amazon S3 account and then run a cron job from there. See the docs, for instance here.

Sitecore Package upload error

I am trying to install a Sitecore package from the dev to the staging environment. I used Package Designer to create this package, but when I try to upload it on the staging site it results in the following error:
The File exists.
I have also tried uploading the package created using the Sitecore Rocks plugin, which results in the same error.
I am installing the package using the Installation Wizard, uploading the package, and I am not overwriting the existing files.
Kindly, help!
This error occurs if the Windows temp folder has more than 65K files. When we cleared those files, the issue was resolved.
Maybe there is a package with the same name as your new package. Try renaming your package zip file and uploading again.
Make sure you are installing in the right environment.
Make sure the file does not already exist (you can look for it under the packages folder).
Restart the app pool and try again. Maybe overwrite the installation file.
There was an issue with Sitecore on the staging environment (probably a corrupted install), so we took a risk and installed it on live. It works fine! Thank you all for the help. Much appreciated.