It is my goal to get warnings, like actual compile warnings or clang-tidy output, surfaced in a pull request.
The output is properly annotated, and in the build the warnings are all displayed prominently. But unfortunately most developers do not check the build; they mostly care whether it's green. Treating warnings as errors is not feasible in all cases.
I would like to make the warnings review comments that need to be actively acknowledged. I would implement this as a small service that listens on a webhook, takes the output, and posts it to the PR via the REST API. (I already have a service that does similar things for other reasons.)
To me this sounds like a problem that should already be solved, for example by an existing plugin. Is there a simple drop-in solution for this?
As far as I know, there is currently no out-of-the-box method (existing tasks or extensions) to send the warnings to a pull request comment.
As you said, you could use a webhook + REST API to achieve it.
The other way is to use the REST API Timeline - Get to get the warning messages, and another REST API, Pull Request Thread Comments - Create, to create a comment on the pull request.
Then in the pipeline (pull request trigger), you could add a PowerShell task to call the two REST APIs together.
For example:
- task: PowerShell@2
  condition: eq(variables['Build.Reason'], 'PullRequest')
  displayName: Post Message to PR
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
  inputs:
    targetType: filePath
    filePath: Comment.ps1
In this case, when the pipeline is triggered by a pull request, the task will run and send the warning messages to a comment.
PowerShell sample to get the warning messages:
$token = "PAT"
$url="https://dev.azure.com/{OrganizationNAME}/{ProjectName}/_apis/build/builds/{Build.buildid}/timeline?api-version=6.0"
$token = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes(":$($token)"))
$response = Invoke-RestMethod -Uri $url -Headers #{Authorization = "Basic $token"} -Method GET -ContentType application/json
echo $response.records.issues.message
..... Send the message to PR Comment....
...
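PowerShell to send the messages to a PR comment, a rough sketch only (here it opens a new thread via the Pull Request Threads - Create API; {RepositoryId} and {PullRequestId} are placeholders, which in a pipeline could come from the Build.Repository.ID and System.PullRequest.PullRequestId variables):
$prUrl = "https://dev.azure.com/{OrganizationNAME}/{ProjectName}/_apis/git/repositories/{RepositoryId}/pullRequests/{PullRequestId}/threads?api-version=6.0"
$body = @{
    comments = @(
        @{
            parentCommentId = 0
            content         = ($response.records.issues.message -join "`n")
            commentType     = 1
        }
    )
    status = 1   # "active", so the comment has to be resolved by a reviewer
} | ConvertTo-Json -Depth 10
Invoke-RestMethod -Uri $prUrl -Headers @{Authorization = "Basic $token"} -Method POST -ContentType application/json -Body $body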
Here is a ticket you could refer to.
On the other hand, this requirement is valuable.
You could add a request for this feature on our UserVoice site, which is our main forum for product suggestions. Hopefully this feature can become available out of the box.
Related
I am trying to get a returns report from Amazon, but my request is always cancelled. I have a working report request using
'ReportType' => 'GET_MERCHANT_LISTINGS_DATA',
'ReportOptions' => 'ShowSalesChannel=true'
I modified it by changing ReportType and removing ReportOptions. MWS accepts the request, but it's always cancelled. I also tried to find a working example of it on Google, but without success. Maybe someone has a working example? I can download the report when I send the request from the Amazon webpage. I suppose it requires ReportOptions, but I don't know what to put in this place (I only get ReportProcessingStatus CANCELLED). Normally I choose Day, Week, Month. I checked the Amazon docs but there isn't much information: https://docs.developer.amazonservices.com/en_US/reports/Reports_RequestReport.html
Any ideas?
I am running an npm package, youtube-dl, through a Lambda function, as I want to create an online converter.
I have suddenly started to run into the following error message:
{
"errorMessage": "Command failed: /var/task/node_modules/youtube-dl/bin/youtube-dl --dump-json --format=best[ext=mp4] https://www.youtube.com/watch?v=MfTbHITdhEI\nERROR: Unable to download webpage: HTTP Error 429: Too Many Requests (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n",
"errorType": "Error",
"stackTrace": ["ERROR: Unable to download webpage: HTTP Error 429: Too Many Requests (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.", "", "ChildProcess.exithandler (child_process.js:275:12)", "emitTwo (events.js:126:13)", "ChildProcess.emit (events.js:214:7)", "maybeClose (internal/child_process.js:925:16)", "Process.ChildProcess._handle.onexit (internal/child_process.js:209:5)"]
}
Edit: I have run this a few times when I was testing the other day, but today I only ran it once.
I think that the IP address used by my Lambda function has now been blacklisted. I'm unsure how to proceed as I am a junior and very new to all this.
Is there a way to resolve this? Can I get a new IP address? Is this going to be super costly?
youtube-dl lacks a delay (requests-per-time limit) option
(see the suggestions at the bottom of my post).
NEVER download more than one video with youtube-dl.
You can look up the youtube-dl author's contact details (e-mail etc.) and write to them directly, and also open an issue about it on the GitHub page; the more requests they get, the sooner they will be pleased to fix it.
Currently they have plenty of identical requests about this issue on GitHub, but they tend to lock discussions and close tickets on this problem.
This is some sort of misbehaviour, I believe.
I also found that the developer suggests using a proxy instead of introducing a delay option in the code, which is extremely funny.
OK, regarding using a proxy: this does not actually solve the problem, since it is a flaw in the program design, and whether you use a proxy or not, YouTube's limits are still there.
Please note:
This causes not only the error in question but also your IP being blocked by YouTube.
Once you hit this situation, YouTube will block your IP as suspicious again and again, even with a small number of requests. This causes tremendous problems, since the IP is marked as suspicious.
Without an option to limit requests per unit of time (with a safe default value), I consider youtube-dl dangerous software that is bound to cause problems, and I have stopped using it until this option is introduced.
RECOMMENDATIONS:
Use Ctrl+S (suspend) and Ctrl+Q (resume) when youtube-dl is collecting the digest for many videos (when you have already downloaded many videos of a channel but new ones are still there). I suspend it for a few minutes after each 10.
And use --limit-rate 150K (or as low as is sane); this may help you avoid hitting the limit, since the whole transmission is shaped.
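For example (the URL is just a placeholder):
youtube-dl --limit-rate 150K https://www.youtube.com/watch?v=...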
Ok, so I found this response: https://stackoverflow.com/a/45339683/9793169
I am wondering if it's possible that, because our volume is low, we just always end up using the same container, and hence the same IP address?
Yes, that is exactly the reason. A container is only spawned if no containers are already available. After a few minutes of no further demand, excess/unneeded containers are destroyed.
If so is there any way to prevent this?
No, this behavior is by design.
SOLUTION:
I logged out for 20 minutes, went back to the function and ran it again. It worked.
Not my solution; it took me a while to understand what he meant (reading is an art). It worked for me.
(see: https://askubuntu.com/questions/1220266/youtube-dl-do-not-working-http-error-429-too-many-requests-how-can-i-solve-this)
You have to use the --cookies option in combination with a current/correct cookie file.
Here are the steps I followed:
1. if you use Firefox, install the cookies.txt add-on and enable it
2. clear your browser cache and clear your browser cookies (privacy reasons)
3. go to google.com, and log in with your google account
4. go to youtube.com
5. click on the cookies.txt addon, and export the cookies, save it as cookies.txt (in the same directory from where you are going to run youtube-dl)
6. this worked for me ... youtube-dl --cookies cookies.txt https://www.youtube.com/watch?v=....
Hope it helps.
Use the --force-ipv4 option in the command:
youtube-dl --force-ipv4 ...
What you should do is handle that error by retrying the requests that are throttled.
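A rough sketch of that idea in Node.js, assuming the Lambda calls the youtube-dl npm package's getInfo() (the --dump-json invocation shown in the error above); the retry count and delays are arbitrary illustration values:
const youtubedl = require('youtube-dl');

function getInfoWithRetry(url, attempt = 0, maxAttempts = 4) {
  return new Promise((resolve, reject) => {
    youtubedl.getInfo(url, ['--format=best[ext=mp4]'], (err, info) => {
      if (err && /429/.test(String(err)) && attempt < maxAttempts) {
        // Back off exponentially before retrying a throttled request.
        const delayMs = 1000 * Math.pow(2, attempt);
        setTimeout(() => {
          getInfoWithRetry(url, attempt + 1, maxAttempts).then(resolve, reject);
        }, delayMs);
      } else if (err) {
        reject(err);
      } else {
        resolve(info);
      }
    });
  });
}
Keep in mind the retries themselves count against the Lambda timeout, and if the IP is already blocked they may not help.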
Is there a way to send a POST from a "Code by Zapier" Zap to MailChimp to add a subscriber to a list and have it reliably complete in less than 1.00 second?
I spent the weekend at a volunteer hackathon for non-profit organizations. My non-profit client needs some data parsed out of an email and used to add a subscriber to a list in MailChimp (the Commerce portion of SquareSpace emails the data but doesn't allow setting storage on the purchase form to MailChimp -- even though that works in SquareSpace if you're not in the Commerce area). We found we could do that with Zapier -- except we ran up against the limits of what one can do with a free account on Zapier, and the non-profit couldn't purchase a paid account right now (the Zapier discount for non-profits is a 15% reduction).
The first limitation was that we couldn't do a 3-step zap (maximum 2 steps for free accounts) to go from (1) a Gmail trigger to (2) "Code by Zapier" to parse the email contents and then (3) to MailChimp. The workaround we came up with was to delete step #3 and send to MailChimp directly via an HTTP POST to the MailChimp API from a Python script in "Code by Zapier". This worked in test mode in Zapier.
But once the Zap was turned on and we ran an end-to-end test with the site, the Zap failed. There is a 1.00 second runtime limitation to free Zaps: after that Zapier kills the job. The POST to MailChimp took long enough that the Zap timed out.
I used "Code by Zapier" with Python to send the post. They use Python 2.7.10. I was able to import requests to do the post, and I found several other modules worked too, such as json, httplib, and urllib.
What I'm wondering is whether there's a way to get the POST to happen reliably in under 1 second. For example, is there a way to use an async send and then not wait for the response. And I'm constrained to Python 2.7.10 and the Zapier environment. Zapier also allows JavaScript as an alternative to Python, so that might be another path to investigate if there's no solution in Python.
David here, from the Zapier Platform team.
I can't speak to the speed of Python specifically, but I know that JavaScript can fire off requests without waiting for a response. We've got a basic example here, which you'd modify to send the request and then immediately end execution (by calling the callback function). This won't be a great experience because errors will happen silently, but it'll almost certainly fit in the 1 second window.
Separately, the whole Python stdlib is available, as well as the requests module (docs).
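A minimal sketch of that JavaScript fire-and-forget approach for a "Code by Zapier" step; the data-centre prefix, list id, API key and the inputData field name are placeholders, not real values:
const url = 'https://<dc>.api.mailchimp.com/3.0/lists/<LIST_ID>/members';

fetch(url, {
  method: 'POST',
  headers: {
    'Authorization': 'Basic ' + Buffer.from('anystring:<API_KEY>').toString('base64'),
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ email_address: inputData.email, status: 'subscribed' })
});

// End the step immediately instead of awaiting the response so it stays well
// under the 1 second limit; any MailChimp error is dropped silently, as noted above.
callback(null, { queued: true });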
I am using Postman to run a Runner on some specific requests. Is it possible to execute them on a schedule (for example, every day at a specific hour)?
You can set up a Postman Monitor on your collection and schedule it to execute the requests on a per-minute/hourly/weekly basis.
This article can get you started on creating your monitor. Postman allows 1,000 free monitoring requests per month.
PS: Postman gives you details about the responses, such as the number of successful requests, response codes, response size, etc. I wanted the actual response for my test, so I just printed the response body as shown below. Hope it helps someone out there :)
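A sketch of what that could look like in the request's Tests script (using the pm scripting API):
console.log(pm.response.text());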
Well, if there is no other possibility, you can actually try doing this:
- launch the Postman Runner
- configure the highest possible number of iterations
- configure the delay (in milliseconds) to fit your scheduling requirement
It is absolutely awful, but if the delay variable can be set high enough, it might work.
It implies that Postman is continuously running.
You may do this using a scheduling tool that can launch command lines and use Newman ...
I don't think Postman can do it on its own.
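For example, a rough sketch using cron as the scheduler (the collection path is a placeholder, and Newman is assumed to be installed via npm install -g newman); this would run the collection every day at 09:00:
0 9 * * * newman run /path/to/my_collection.json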
Alexandre
EDIT:
Check this Postman feature: https://www.getpostman.com/docs/postman/monitors/intro_monitors
From Postman v10.2.1 onwards, you can schedule your collections to run directly (without using monitors) at the specified times.
Check it out here: https://learning.postman.com/docs/running-collections/scheduling-collection-runs/
I have a Rails app with an RSpec feature spec using Selenium that always passes locally and periodically fails on Travis. It fails on click_link("my link") with a Net::ReadTimeout error. The stack trace isn't all that helpful, and it'd be nice if there were a way to tail the log (tail -f log/test.log) to see if that's helpful... or at least view the log output. Is this possible using Travis CI? I'm already waiting for AJAX to finish, which suggests something external, so ultimately I'm trying to find out which request it's getting hung up on.
I believe you can cat the logs to the console as the last step of your test task, or you can use Travis's artifacts option to get them uploaded to S3; see http://docs.travis-ci.com/user/uploading-artifacts/
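For the first approach, a minimal sketch of a .travis.yml addition, assuming the default Rails test log path; after_failure runs only when the build fails, so the log is dumped just for the failing runs:
after_failure:
  - cat log/test.log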