How do I download the artifacts from a Jenkins build to my local machine using Python?
I will assume that you want to download it over HTTP.
You might want to use GNU wget, for example, but if you really want to use Python, check out How do I download a file over HTTP using Python?. urllib2 (urllib.request in Python 3) provides an easy way to handle HTTP requests.
This assumes that you will not need to perform any additional actions to get to the file (authentication, etc.).
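As a rough sketch, assuming anonymous access and a hypothetical job and artifact path (substitute your own server, job, and file names), the download could look like this with the standard library:

from urllib.request import urlretrieve  # Python 3 successor to urllib2

# Hypothetical Jenkins artifact URL -- adjust to your own setup.
url = ("http://jenkins.example.com/job/my-job/"
       "lastSuccessfulBuild/artifact/target/app.jar")
urlretrieve(url, "app.jar")  # saves the artifact to the current directory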
I have a question regarding the possibility of downloading an artifact from Artifactory through Django.
Is it possible to use a GET request with the requests library, like:
import requests
r = requests.get("http://localhost:8081/artifactory/libs-release-local/ch/qos/logback/logback-classic/0.9.9/logback-classic-0.9.9.jar?skipUpdateStats=true")
or is there another way to download the artifact in python?
If curl can download the artifact from that URL, then the requests library is sufficient to make the equivalent request. I would recommend referring to this StackOverflow question for more info.
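As a minimal sketch, assuming the repository allows anonymous reads (add auth=(user, password) or an API-key header if it does not), a streamed download could look like:

import requests

# URL from the question above; adjust host and path to your Artifactory instance.
url = ("http://localhost:8081/artifactory/libs-release-local/"
       "ch/qos/logback/logback-classic/0.9.9/logback-classic-0.9.9.jar")

r = requests.get(url, params={"skipUpdateStats": "true"}, stream=True)
r.raise_for_status()  # fail loudly on 4xx/5xx instead of writing an error page to disk

with open("logback-classic-0.9.9.jar", "wb") as f:
    for chunk in r.iter_content(chunk_size=8192):
        f.write(chunk)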
I'm using AWS Lambda to generate a PDF file from a Jinja2 template. I am trying to use pdfkit to convert my HTML into a PDF. I realize pdfkit has an internal dependency, wkhtmltopdf, which needs to be used as a binary or installed via a package manager. I am not sure how to make this work on AWS Lambda.
With my current template and Python code using pdfkit, I am getting the following error:
{
"errorMessage": "No wkhtmltopdf executable found: \"b''\"\nIf this file exists please check that this process can read it. Otherwise please install wkhtmltopdf - https://github.com/JazzCore/python-pdfkit/wiki/Installing-wkhtmltopdf",
"errorType": "OSError",
.....
.....
}
Any ideas on how I can make pdfkit work on Lambda?
Any suggestions for wkhtmltopdf replacements?
Thanks
I've made a simple demo of using pdfkit on AWS Lambda with the Serverless framework and a layer. Check out https://medium.com/@crespo.wang/create-pdf-using-pdfkit-on-serverless-aws-lambda-with-layer-721ca86724b2
AWS Lambda has the concept of Layers, which allows you to upload your custom dependencies as a zip; they are then available as if they were installed on the box. For more information, see here:
https://docs.aws.amazon.com/lambda/latest/dg/configuration-layers.html
In your case, you could upload the binaries for wkhtmltopdf as a layer and, while creating the Lambda function, specify that layer for it to use.
There are multiple projects on GitHub for running wkhtmltopdf on Lambda, for example:
https://github.com/lubos/aws-lambda-wkhtmltopdf
https://github.com/dimiro1/lambda-wkhtmltopdf
https://github.com/jpaolin/aws-lambda-s3-wkhtmltopdf
Download the wkhtmltopdf binaries required for AWS Lambda from https://wkhtmltopdf.org/downloads.html.
Add the zip file as a layer to your Lambda function and set the pdfkit configuration to point to the executable path inside the layer (/opt/bin/wkhtmltopdf), as in the sketch below.
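A minimal handler sketch, assuming the layer unpacks the binary to /opt/bin/wkhtmltopdf and that your HTML comes from your Jinja2 template:

import pdfkit

# Layers are mounted under /opt, so a binary shipped in bin/ ends up at /opt/bin/.
config = pdfkit.configuration(wkhtmltopdf="/opt/bin/wkhtmltopdf")

def handler(event, context):
    html = "<h1>Hello from Lambda</h1>"  # in practice, render your Jinja2 template here
    # /tmp is the only writable path in the Lambda filesystem
    pdfkit.from_string(html, "/tmp/out.pdf", configuration=config)
    return {"statusCode": 200}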
I am trying to deploy a C++ HTTP web server on OpenShift 3, and I referred to this.
The problem is:
Shall I put the source code on OpenShift, or compile it first and then put the executable file on OpenShift?
Is it possible to access the OpenShift 3 server via Xshell or FTP?
Is there any way to get an OpenShift 2 account?
It is no longer possible to get accounts on OpenShift 2.
For OpenShift 3, if you wanted to use a custom HTTP server you would need to be able to build a Docker image which includes it and any other files you need. If you can get the Docker image built, then you can deploy it to OpenShift 3.
Although you can get an interactive terminal in the container which runs your application, it doesn't work like traditional web hosting. That is, it isn't a shell access account where you would upload files using FTP or some other means.
Can you explain more about what it is you want to host? Depending on what you are doing there may be builder images already supported by OpenShift which can pull down files from a Git repository and build an image for you.
If OpenShift is new to you, I would suggest you try out:
https://learn.openshift.com
so you understand some of what it can do and how you interact with it.
Also grab the free eBook and read it:
https://www.openshift.com/promotions/for-developers.html
I'm wondering how you can sync your Postman config with a git repository.
I know you can export and import from Postman to a folder - which is OK - but I wondered if there was something more effortless.
I'm not exactly sure how you're trying to use this, but a few options would be:
First Option
Use their add-on CLI called Newman. You can run collections from a URL or a local file with Newman using:
newman run http://some.url.here
Then, if you make the remote URL part of a git repository, it will update/change with each commit/pull.
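For example, if the collection file lives in a GitHub repository, you could (hypothetically, with your own user, repo, and file names) point Newman at the raw file URL:

newman run https://raw.githubusercontent.com/your-user/your-repo/main/collection.json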
Second Option
Try this with extreme caution, and only if you feel comfortable with the process. It may also not be compliant with their terms of use, so I don't suggest you try it without doing some research first.
If you can find the directory in which the Postman collections are held, you could create a hard link with the command line from a git repository on your machine to the directory or specific file you need to link. Whenever you change the source file, the one in the Postman config will change too.
The way in which you accomplish this will depend on the system you use and version of Postman.
In addition to exporting and cloud syncing as mentioned in the other answers, there's some other options too.
Postman added a Git sync in Postman app v9 so you can manage version control with forking, merging, and pull requests.
There are also built-in integrations to sync your Postman collections with GitHub, with GitLab, and other services for version control. These integrations are for users on the paid plans.
Postman also has an API, so you can GET the latest version of your collection or environment and run it with Newman or continuous integration tools, or build your own integration (sketched below).
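A minimal sketch of pulling a collection via the Postman API, assuming a hypothetical API key and collection UID (both come from your Postman account):

import requests

API_KEY = "your-postman-api-key"        # hypothetical; generate one in your Postman account
COLLECTION_UID = "your-collection-uid"  # hypothetical; shown in the collection's details

resp = requests.get(
    f"https://api.getpostman.com/collections/{COLLECTION_UID}",
    headers={"X-Api-Key": API_KEY},
)
resp.raise_for_status()

# Save it locally so Newman or a CI job can pick it up and run it.
with open("collection.json", "w") as f:
    f.write(resp.text)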
Postman is not designed for that case. They offer a cloud service which keeps you and your collaborators in sync. You can try their cloud plan for 30 days for free. Check here: https://www.getpostman.com/cloud_trial_faq
You can use Postman integrations (Home > Integrations) to link Postman to your remote git repository.
The following article explains how to integrate your gitlab repo to Postman:
https://learning.postman.com/docs/integrations/available-integrations/gitlab/
You can also use Postman API versioning to do something similar:
https://learning.postman.com/docs/designing-and-developing-your-api/versioning-an-api/
For non-free plans, Postman now (version 9 and up) supports automatic sync of collections with a git repository on several popular git services.
(Again, it's currently only available for paid plans)
See the documentation for how to integrate Postman with GitHub, GitLab and Bitbucket.
The process is roughly:
create a dedicated repo on your git provider (e.g. my-postman-collections-repo)
create a personal access token for the provider (e.g. GitHub) with the expected scope (e.g. repo and user)
define an integration (using the Postman UI) for each collection you want to keep in sync
I'm working with the GitHub integration and it works great.
I am using nginx to serve files via a Django view, and I need to be able to detect when the file download is complete. Is there any way to reliably read the download status?