Jenkins: automatic tool installers missing JSON

Normally, in ${JENKINS_HOME}/updates/ there are several JSON files for automatically installing various tools. Namely, the one I need is hudson.tasks.Maven.MavenInstaller. Two others are suddenly missing as well: the ones for Ant and for the JDK.
The end result is that my build fails because it can't install Maven from Apache automatically (as detailed here).
I am deploying Jenkins to AWS. What's strange is that I have an AMI (image) that previously worked fine but is suddenly encountering this problem. I've banged my head on this one extensively with no solution.
Looks like you can find the JSON that I'm failing to download here:
http://mirrors.jenkins-ci.org/updates/current/updates/
Except the JSON there is prepended with "downloadService.post()", indicating that hudson.model.DownloadService is probably doing something (other hints point to that, as well).
Any ideas?
EDIT: Actually, it looks like the last AMI that worked does, in fact, still work.
I should also mention: the project is creating a Jenkins AMI via Chef and Packer.

Found the answer to this about a week after posting. It turns out the issue was on the Jenkins update center's side of things: it suddenly changed to a smaller RSA key:
https://issues.jenkins-ci.org/browse/JENKINS-31089
At the time, the workaround was this:
sed -i s/'jdk.certpath.disabledAlgorithms=MD2, RSA keySize < 1024'/'jdk.certpath.disabledAlgorithms=MD2, RSA keySize < 512'/ /usr/lib/jvm/jre/lib/security/java.security
This allowed Java to grab the updates even though the update center was using a smaller RSA key.
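Since the project bakes the AMI with Chef and Packer, one natural place to apply that workaround is a shell provisioner in the Packer template. A minimal sketch (the provisioner block is an assumption based on the setup described above; the JRE path matches the sed command):
{
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo sed -i 's/jdk.certpath.disabledAlgorithms=MD2, RSA keySize < 1024/jdk.certpath.disabledAlgorithms=MD2, RSA keySize < 512/' /usr/lib/jvm/jre/lib/security/java.security"
      ]
    }
  ]
}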

Related

'TypeError at /api/chunked_upload/ Unicode-objects must be encoded before hashing' error when using botocore in Django project

I have hit a dead end with this problem. My code works perfectly in development but when I deploy my project and configure DigitalOcean Spaces & S3 bucket I get the following error when uploading media:
TypeError at /api/chunked_upload/
Unicode-objects must be encoded before hashing
I'm using django-chunked-upload and it doesn't play well with botocore.
I'm using Python 3.7
My code is taken from this demo: https://github.com/juliomalegria/django-chunked-upload-demo
Any help would be massively appreciated.
This library was implemented for Python 2, so there might be a couple of things that don't work out of the box with Python 3.
The issue that you're facing is one of them, since files in Python 3 can be read directly as Unicode (py3's str is py2's unicode). The md5 hashing is the part of the code triggering this exception (this line) because it doesn't expect Unicode strings.
If you have created your own model inheriting from AbstractChunkedUpload, you can override the md5 property to encode the chunks before updating the hash. See this other SO question on how to solve this specific issue.
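A minimal sketch of such an override (treat it as illustrative rather than tested; the property follows the shape described above, and the import path assumes the library's standard layout):
import hashlib

from chunked_upload.models import AbstractChunkedUpload

class MyChunkedUpload(AbstractChunkedUpload):
    @property
    def md5(self):
        # Compute the checksum lazily, encoding any str chunks first,
        # since files in Python 3 may be read as Unicode strings.
        if getattr(self, '_md5', None) is None:
            md5 = hashlib.md5()
            for chunk in self.file.chunks():
                if isinstance(chunk, str):
                    chunk = chunk.encode('utf-8')
                md5.update(chunk)
            self._md5 = md5.hexdigest()
        return self._md5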
Hopefully this helped!
Disclaimer: I'm the creator of this library. However, I haven't maintained it in a long time to the point that it might be no longer usable.

AWS Tools for Powershell, version differences

I have been testing an older AWS Tools install using AWSToolsAndSDKForNet_sdk-3.3.398.0_ps-3.3.390.0_tk-1.14.4.1.msi and a newer install using AWSToolsAndSDKForNet_sdk-3.5.2.0_ps-4.1.0.0_tk-1.14.5.0.msi. The code that I am using to test with is
Set-AWSCredential -AccessKey:$ACCESSKEY -SecretKey:$SECRETKEY -StoreAs:default
$items = Get-S3Object -BucketName:$BUCKETNAME -Region:'eu-west-1' -Key:'revit/2020'
Write-Host "$($items.Length) items"
$count = 1
foreach ($item in $items) {
    Write-Host "$count $($item.Key)"
    $count++
}
I am seeing VERY different behavior and can't figure out why. With 3.3 the code works as intended: I end up with a list of files in my bucket and key. Performance is pretty decent; it takes a moment, but I have about 5000 files in my "subfolders".
When I run this with 4.1 it takes 3-5 times as long and returns nothing.
It seems that Help is a bit different too. A first run of get-help Get-S3Object -detailed can take as long as 10 minutes, with CPU, memory, and disk access often at 99% utilization. A second run is quite quick. Version 3.3 does nothing of the sort.
So, is this current build of AWS Tools for Powershell just not ready for prime time? My searches for AWS Tools 4.1 performance have turned up nothing.
For what it is worth, I am using the MSI installer because I need the install to actually work consistently, and the NuGet approach has been very problematic on a number of production workstations. But if there is another option I would love to look at it. The main issue is I need ultimately to do the install and immediately load the modules and work with AWS. I don't have that working with the MSI based install yet, but that's for a different thread.
It looks like they changed the results from Get-S3Object. You will need to add -Select S3Objects.Key to get the results you're looking for (or just -Select *). Here's the excerpt from the change notes:
Most cmdlets have a new parameter: -Select. Select can be used to change the value returned by the cmdlet. For example the service API used by Get-S3Object returns a ListObjectsResponse object but the cmdlet is configured to return only the S3Objects field. Now you can specify -Select * to receive the full API response. You can also specify the path to a nested result property like -Select S3Objects.Key. In certain situations it may be useful to return a cmdlet parameter, this can be achieved with -Select ^ParameterName.
Found by going to the Change Notes and doing a CTRL+F for Get-S3Object. Hope this resolves it for you!
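Applied to the snippet in the question, that would look something like this (an untested sketch using the same parameters as above):
$items = Get-S3Object -BucketName:$BUCKETNAME -Region:'eu-west-1' -Key:'revit/2020' -Select S3Objects.Key
# With -Select S3Objects.Key each result is the object key string itself
$count = 1
foreach ($key in $items) {
    Write-Host "$count $key"
    $count++
}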

Optimus: load-assets not working with Regex

I'm trying to build a static site with stasis and serve my assets with Optimus. Pictures reside under /resources/public/imgs/. I can serve individual pictures after loading them as follows:
(optimus.assets/load-assets "public"
["/imgs/pic1.jpg"
"/imgs/pic2.jpg"])
The following attempt to serve pictures by regex does not work though:
(optimus.assets/load-assets "public"
[#"/imgs/.*\.jpg"])
I'm getting "No files matched regex /imgs/.*\.jpg", which seems implausible.
I've done some digging around the Optimus code and may have found the culprit. When called with a regex the optimus.assets/load-assets function starts building the paths from the return value of (optimus.class-path/file-paths-on-class-path), which – in my case – consists of only the following:
optimus.class-path/file-paths-on-class-path
=> ("boot/" "boot/tag-release.properties" "boot/bin/" "boot/bin/ParentClassLoader.class" "Boot.class" "META-INF/" "META-INF/MANIFEST.MF")
Since resources isn't a subdirectory of any of these, I'm not surprised that I don't get a match. So maybe my question ultimately is: why am I getting only these directories here? Is it because I'm using Boot rather than Leiningen, which the tutorials on Optimus presuppose?
EDIT:
It definitely has something to do with Boot, or at least the way I've set it up. Following Alan Thompson's advice I created a minimal Leiningen project, and load-assets worked flawlessly. The very same setup with Boot, however, does not. Ultimately it boils down to (System/getProperty "java.class.path" ".") returning wildly different things: Boot gives me "/home/phylax/bin/boot", i.e. my boot binary, whereas Leiningen gives me a plethora of directories in my actual project… any idea as to what I'm doing wrong? How can I set up Boot to work with Optimus?
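For reference, the comparison looks like this (output abbreviated to what I actually saw):
;; Boot REPL:
(System/getProperty "java.class.path" ".")
;; => "/home/phylax/bin/boot"

;; Leiningen REPL, same project:
(System/getProperty "java.class.path" ".")
;; => many entries, including the project's resources directory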
Many thanks for any guidance you may give me on this
Oliver

Got error 'invalid UTF-8 string at offset 1' from regexp

I need help. I have tried to find a solution, but so far all I have found is stuff related to regex syntax, and I think the problem might be somewhere else.
I have a project running locally (Windows 10, latest XAMPP version with Apache & MySQL). I use CodeIgniter as the framework, and I developed a function which searches my database using REGEXP (via the query builder).
It works fine locally. Here I searched for saltarín (note the accent on the letter i).
Now that it works I decided to update the online website, but as soon as I started testing the online project I noticed that an error is thrown when I search for something with accented characters, or in this case the letter ñ, which also works locally.
I checked my database configuration: in database.php I have dbcollat set to utf8_spanish_ci, and my online database and tables are set to utf8_spanish_ci too. I think this must be a server configuration issue, but I have no idea what it really could be.
In case you need it, this is the piece of code which uses REGEXP:
$this->db->where("lower(secret_colum_name) REGEXP", $this->secret_hehe);
Thanks a lot for your time, I really appreciate your help.
EDIT: I forgot to mention that I'm using Hostinger to host my website.
It's me again. It was just as I suspected: it was something about the server. After some hours of research I found out that my remote server didn't have the same extensions and configuration as my local one. You can use the php -m command to list your local extensions so you can then enable them on your remote server, which in my case I had to do through cPanel, but your case could be different.
I also changed the PHP version on the remote server. I'm not really sure about the next thing, but it might have helped.
I had a setter defined in my model which did the following:
$this->my_var = strtolower($my_var);
I removed strtolower, and after all the steps previously mentioned I reloaded my site, and now it works.
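If the lowercasing is still needed, a multibyte-safe variant (assuming the mbstring extension is enabled on the server) might look like this:
// mb_strtolower is UTF-8 aware, so accented characters and ñ are
// lowercased without corrupting the byte sequence, unlike the
// byte-based strtolower.
$this->my_var = mb_strtolower($my_var, 'UTF-8');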

AWS ec2 to run a python program using latex and OpenCV

A friend and I are working on a machine learning project together. We've managed to collect about 5,000 tex documents (we hope to get up to around 100,000 soon). We have a python script that we run on each document to do some text manipulation, extract particular parts of the tex code, compile the parts, convert the compiled parts to cropped PNG images, and search a converted PNG of the full tex for the cropped images using OpenCV. The code takes between 30 seconds and 2 minutes on the documents we've tried so far, so we really need to speed it up.
I've been tasked with gaining access to a computer cluster and figuring out how to implement our code on such a cluster. Someone suggested I look into using AWS, so I've made an account and have been trying to figure out how to use EC2 for the past few hours. Am I on the right track, or is there some other part of AWS or something else entirely that would be better suited to my task?
Whatever I use, it has to have access to the various Python libraries in our code and to pdflatex and the full set of tex packages. Is this possible on EC2? I have almost no idea how to go about using EC2 (I've managed to start some instances, but how do I use them to run my script? Do I need to change my Python script to accommodate the parallel processing, or does EC2 take care of that somehow? Is it as easy as starting a Linux instance and installing the programs I need like I would on any other Linux machine?). None of the tutorials are immediately useful, and I'm still not even sure if EC2 is capable of doing what I'm looking for. Any advice is appreciated.
I wouldn't normally answer this kind of question, but it sounds like you are doing something interesting. So let's have a go.
Q1.
"We have a python script that we run on each document to do some text
manipulation, extract particular parts of the tex code, compile the
parts, convert the compiled parts to cropped PNG images, and search a
converted PNG of the full tex for the cropped images using OpenCV.. we
really need to speed it up"
You could probably split the 100,000 documents into 10 parts, set up 10 instances of the processing software, and do the run in parallel; a minimal sketch of the split follows below.
To set up 10 identical instances there are many methods, but one of the simpler ways is to set up one machine as desired, take a snapshot, make an AMI, and then use the AMI to launch many more copies.
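A minimal sketch of that split (all names are placeholders: SHARD_INDEX would be set differently on each instance, documents.txt is an assumed manifest of .tex paths, and process_document stands in for your existing pipeline):
import os

SHARD_COUNT = 10                              # total number of instances
shard_index = int(os.environ["SHARD_INDEX"])  # 0-9, unique per instance

with open("documents.txt") as f:              # one .tex path per line
    docs = [line.strip() for line in f if line.strip()]

# Each instance handles every 10th document, offset by its shard index,
# so the 10 machines cover the full set with no overlap.
for doc in docs[shard_index::SHARD_COUNT]:
    process_document(doc)                     # placeholder for the per-document pipeline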
There might be an extra step of putting the results of the search into some kind of central database.
I don't know anything about OpenCV, but there are several suggestions that with a G3 instance type (which has a GPU) it might go faster. Google for "OpenCV on AWS".
Q2.
"trying to figure out how to use EC2 for the past few hours. Am I on
the right track, or is there some other part of AWS or something else
entirely that would be better suited to my task?"
EC2 is a general-purpose virtual machine, so if you already have code that runs on some other machine it is easy to move it to EC2.
EC2 has many features, but one you might find interesting is "spot instances": these are short-lived but cheap instances, typically around 10% of the on-demand price.
Q3.
"Whatever I use, it has to have access to the various python libraries in our code and to pdflatex and the full set of tex packages. Is this possible on EC2?"
Yes; they will pip install or install from OS packages just like on any other system.
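For example, on an Ubuntu-based instance (an assumption; adjust for your distribution) the setup might look something like this. texlive-full is the heavyweight option, and a smaller TeX Live scheme may be enough:
# Install TeX (pdflatex plus packages) and pip, then the Python libraries
sudo apt-get update
sudo apt-get install -y texlive-full python3-pip
pip3 install opencv-python numpy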
Q4.
"how do I use them to run my script? and do I need to change my python script to accommodate the parallel processing, or does EC2 take care of that somehow? is it as easy as starting a linux instance and installing the programs I need like I would on any other linux machine?"
As described above, your basic task seems to scale well; you may need a step to collate the results. And yes, it is basically the same as any other Linux machine.