Is there some cron-like library that would let me schedule a function to run at a certain time (15:30, for example, not x hours from now, etc.)? If there isn't such a library, how should this be implemented? Should I just set a callback to be called every second, check the time, and start the jobs scheduled for that time?
node-cron does just what I described.
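For example, a minimal sketch (the cron expression fires daily at 15:30):

var cron = require('node-cron');

// Fields: minute hour day-of-month month day-of-week.
cron.schedule('30 15 * * *', function () {
  console.log('It is 15:30');
});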
node-schedule is a cron-like and not-cron-like job scheduler for Node.
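A minimal sketch (runs daily at 15:30):

var schedule = require('node-schedule');

var job = schedule.scheduleJob('30 15 * * *', function () {
  console.log('It is 15:30');
});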
agenda is a lightweight job-scheduling library for Node. This will help you.
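A minimal sketch (assumes a local MongoDB at mongodb://localhost/agenda for persistence; the job name is illustrative):

var Agenda = require('agenda');
var agenda = new Agenda({ db: { address: 'mongodb://localhost/agenda' } });

// Define the work, then schedule it with a cron expression (daily at 15:30).
agenda.define('afternoon job', function (job, done) {
  console.log('It is 15:30');
  done();
});

agenda.on('ready', function () {
  agenda.every('30 15 * * *', 'afternoon job');
  agenda.start();
});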
later.js is a pretty good JavaScript "scheduler" library. It can run on Node.js or in a web browser.
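A minimal sketch (note later uses UTC by default; localTime() switches it):

var later = require('later');

later.date.localTime(); // interpret schedules in local time
var sched = later.parse.text('at 3:30pm');

later.setInterval(function () {
  console.log("It's 3:30 pm");
}, sched);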
I am using kue: https://github.com/learnboost/kue . It is pretty nice.
The official features and my comments:
delayed jobs.
If you want to let the job run at a specific time, calculate the milliseconds between that time and now, then call job.delay(milliseconds). (The doc says minutes, which is wrong.) Don't forget to call "jobs.promote();" when you initialize jobs (see the sketch after this list).
job event and progress pubsub.
I don't understand it.
rich integrated UI.
Very useful. You can check job status (done, running, delayed) in the integrated UI without writing any code. And you can delete old records in the UI.
infinite scrolling
Sometimes not working. Have to refresh.
UI progress indication
Good for the time-consuming jobs.
job specific logging
Because they are delayed jobs, you should log useful info in the job and check later through UI.
powered by Redis
Very useful. When you restart your node.js app, all job records are still there and the scheduled jobs will execute too!
optional retries
Nice.
full-text search capabilities
Good.
RESTful JSON API
Sounds good, but I never used it.
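Putting the delayed-jobs notes above together, a minimal sketch (the job name and the 15:30 target time are illustrative):

var kue = require('kue');
var jobs = kue.createQueue();

// Schedule for 15:30 today: delay() takes milliseconds (not minutes).
var target = new Date();
target.setHours(15, 30, 0, 0);

jobs.create('reminder', { title: 'afternoon job' })
    .delay(target.getTime() - Date.now())
    .save();

jobs.promote(); // required so delayed jobs get moved onto the queue

jobs.process('reminder', function (job, done) {
  console.log('running at', new Date());
  done();
});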
Edit:
kue is not a cron-like library.
By default kue does not support jobs which run repeatedly (e.g. every Sunday).
You can use timexe
It's simple to use, lightweight, has no dependencies, has an improved syntax over cron with a resolution in milliseconds, and works in the browser.
Install:
npm install timexe
Use:
var timexe = require('timexe');
// Fields: year month day hour minute — "* * * 15 30" fires daily at 15:30.
var res = timexe("* * * 15 30", function(){ console.log("It's now 3:30 pm"); });
(I'm the author)
node-crontab allows you to edit system cron jobs from node.js. Using this library will allow you to run programs even after your main process terminates. Disclaimer: I'm the developer.
I am the author of node-runnr. It has a very simple approach to creating jobs. It's also very easy and clear to declare times and intervals.
For example, to execute a job at every 10min 20sec,
Runnr.addIntervalJob('10:20', function(){...}, 'myjob')
To do a job at 10am and 3pm daily,
Runnr.addDailyJob(['10:0:0', '15:0:0'], function(){...}, 'myjob')
It's that simple.
For further detail: https://github.com/Saquib764/node-runnr
All these answers, and no one has pointed to the most popular npm package: cron.
https://www.npmjs.com/package/cron
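A minimal sketch (with six fields, the first field is seconds):

var CronJob = require('cron').CronJob;

var job = new CronJob('0 30 15 * * *', function () {
  console.log('It is 15:30');
});
job.start();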
Both node-schedule and node-cron can be used to implement cron-based schedulers.
NOTE: for generating cron expressions, you can use this cron_maker.
This won't be suitable for everyone, but if your application is already set up to take commands via a socket, you can use netcat to issue commands via cron proper.
echo 'mycommand' | nc -U /tmp/myapp.sock
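For example, a crontab entry like this (the command and socket path are illustrative) would send the command daily at 15:30:

30 15 * * * echo 'mycommand' | nc -U /tmp/myapp.sock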
Related
I'm practicing for the Data Engineer GCP certification exam and got the following question:
You have a Google Cloud Dataflow streaming pipeline running with a
Google Cloud Pub/Sub subscription as the source. You need to make an
update to the code that will make the new Cloud Dataflow pipeline
incompatible with the current version. You do not want to lose any
data when making this update.
What should you do?
Possible answers:
Update the current pipeline and use the drain flag.
Update the current pipeline and provide the transform mapping JSON object.
The correct answer according to the website is 1; my answer was 2. I'm not convinced my answer is incorrect, and these are my reasons:
Drain is a way to stop the pipeline and does not solve the incompatibility issues.
Mapping solves the incompatibility issue.
The only way that I see 1 as the correct answer is if you don't care about compatibility.
So which one is right?
I'm studying for the same exam, and the two core points of this question are:
1. Don't lose data ← Drain is perfect for this, because you process all buffered data and stop receiving messages; normally a message is retained for up to 7 days of retries, so when you start a new job you will receive everything without losing any data.
2. Incompatible new code ← Mapping solves some incompatibilities, like changing the name of a ParDo, but not a version issue. So launching a new job with the new code is the only option.
So option 1 (drain) is correct.
I think the main point is that you cannot solve all the incompatibilities with the transform mapping. Mapping can be done for simple pipeline changes (for example, names), but it doesn't generalize well.
The recommended solution is to drain the pipeline running the legacy version: draining stops it from taking any new data from its sources, finishes all work pending on the workers, and then shuts down.
When you start a new pipeline, you don't have to worry about state compatibility, as workers are starting fresh.
However, the question is indeed ambiguous; it should be more precise about the type of incompatibility, or state that it means the general case. Arguably you can always try to update the job with a mapping, and if Dataflow finds the new job to be incompatible, it will not affect the running pipeline; then your only choice would be the drain option.
My question sounds pretty simple, but the answer has been very difficult for me to find myself.
The question is as follows: Is it possible to somehow execute your code/script during the execution of Windows Automatic Repair? If so, how?
Yes, this is possible. You're looking for the Windows Recovery Environment, and you can add tools to that environment.
The basic steps involve:
Create an XML file describing your tool
Add the tool and its XML to the Recovery Image (WinRE.wim)
Add the tool to the Recovery boot menu
This is more like a strategy question:
I need to regularly check a long list of websites to see if they have a certain code installed. I can't check all of them at once. How should I do it? I have PHP on my server.
Should I use cURL and a regex, or file_get_contents()?
You could use cron jobs ( http://www.webmasters-central.com/article-blog/tutorials/cron-tutorial-managing-cron-tab-or-cron-job-is-easy/ )
First, create a table in a database that will keep track of when the latest "check" occurred for each website (use Unix time).
Figure out how many website "checks" you want each time you run the cron job (let's say 10).
Create a PHP script that gathers the 10 oldest "checked" websites and uses cURL to check them.
Update the table with the current "checked" time for each website.
Figure out how often this PHP script has to run to update the whole list.
Let's say you have 100 websites and you want ALL of them to be updated every hour. Each time the cron job runs it will check 10 websites, so the cron job has to run at least 10 times an hour to accomplish this...
This will loop over all the websites while limiting the number of requests per run.
You'll have to do some trial tests to see how many websites you can check each time, and then implement this with a cron job...
Good luck!
I think either one is fine. To make sure the code actually runs, though, and assuming you have some control over the installed code, make the code request something on your server; then you can have PHP hit the website and wait for that request to arrive.
I did the following, based on #Paolo_NL_FR's answer:
added an is_live flag for each item, default 0
added a last_check date for each item
created a file that checks the 10 sites with the oldest last_check date
for each item, checking is done through cURL with a regular expression
after checking, items are marked as live or not
added a cron job to run the file daily
If you have 100 items to check, every item will be checked every 10 days. If you want them checked more often, you have a few choices:
- run the cron job twice a day, every 6 hours, etc.
- check more than 10 sites on every run
That's it. Cool, thanks guys!
I am an Electrical Engineer about to start grad school for Comp Sci. Currently I work in the defense industry, and as a result most services and websites are blocked here. I'm trying to come up with a solution that will allow me to do my homework/projects while at work, since they give us 2 hours a day on the clock to do school work if attending grad school. I don't have the necessary software tools on my work computer, nor will I be able to get them. I would like to set up my build system on an Ubuntu box, and the best solution I could think of would be to use email, and possibly FTPmail, to automate the build process and email me back any errors that the compiler may return.
Has anyone ever done this before, or does someone know of a software package that already implements this solution?
I'd suggest you look on some web-based virtual machine/desktop tools. Some I've seen in the wild are icloud and eyeOS.
Also, since installing any software is basically a no-no, you might want to check for Linux live-CDs. You can just pre-configure the disc with the necessary tools (SCM, IDE, etc.) and boot the computer from the Live disk during your 2 hours. Of course, that won't give you a hard drive to save your stuff, but you can just commit whatever you have before that 2 hours expires.
Edit: whatever you do, get this solution approved by your superior(s) before you attempt it.
It sounds like you will be able to access stuff outside of your network, even if you cannot install any software on your work system. One thing you can do:
Install a version control system (CVS, SVN, etc) on your Ubuntu box. You can store your projects/homework there.
Use Hudson (http://hudson-ci.org/) on your Ubuntu box as your build system. You can create a job for it to check out from your version control system and build. Anytime you want to build a project (let's say you made a change to some class), all you have to do is press the "Build Now" button.
Hudson itself is almost entirely web-gui so it is easy to configure, and if you open up a port for Hudson, you should be able to access it directly from work (unless they block external websites).
Could you use a virtual machine at work? Even if you don't have administrator access to your work machine, you may be able to use Qemu and something like Puppy Linux. See, for example, http://www.erikveen.dds.nl/qemupuppy/
Along the lines of your original question, if you can host a machine that receives e-mail at home, you could certainly configure procmail (e.g., see http://www.perlcode.org/tutorials/procmail/proctut/) to match e-mails from you with a certain subject and run a command (say, make). But you'd also need to set up filters to fetch and submit files, etc.
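For instance, a procmail recipe along these lines (the subject, project path, and address are illustrative) would pipe matching mail into a build and mail back the output:

:0
* ^Subject:.*build-request
| cd $HOME/project && make 2>&1 | mail -s "build results" you@example.com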
Can you use something like VNC to remotely control your desktop, or do you have restrictions on this kind of software too?
http://www.realvnc.com/
If I recall correctly, the client does not need to be installed, it could run from a pendrive...
http://www.pendriveapps.com/portable-vnc-viewer-realvnc/
This is not a remote system, but it might work if you can select a boot medium on the computers you work on. Your employer might not like this.
It is possible to install a Linux system on a USB hard disk and then boot from that. On it you can install all sorts of development tools and projects. You would just be borrowing their hardware a bit...
I wouldn't advise this if you have not worked on Linux before, though. Linux can be a royal pain in the ass, and you might not get your development environment up and running in a year if you only have 2 hours per day to spend...
Good luck!
Set your project up on GitHub. You can edit directly there through a web browser.
Then set up continuous integration with Jenkins on your home system, or use Travis CI and/or AppVeyor to monitor your GitHub repo and build your project when there are changes. If there are errors, you can set them up to send notifications.
The advantage of Travis or AppVeyor is that they are web based, so you'd be able to look at the console output of broken builds, whereas Jenkins running at home probably wouldn't let you (I don't recall whether you can get the whole output by email or not).
A git tool that meets the specs below is needed. Does one already exist? If not, I will create a script and make it available on GitHub for others to use or contribute to. Is there a completely different and better way to solve the need to build/test every commit to a branch in a git repository? Not just the latest, but each one back to a certain starting point.
Background: Our development environment uses a separate continuous integration server, which is wonderful. However, it is still necessary to do full builds locally on each developer's PC to make sure a commit won't "break the build" when pushed to the CI server. Unfortunately, with automated unit tests, those builds force the developer to wait 10 or 15 minutes every time.
To solve this we have set up a "mirror" git repository on each developer's PC. We develop in the main repository, but anytime a local full build is needed, we run a couple of commands in the mirror repository to fetch, check out the commit we want, and build. It works extremely well, since we can continue working in the main repository while the build runs in parallel.
There's only one main concern now. We want to make sure every single commit builds and passes tests. But we often get busy and neglect to build several fresh commits. Then, if the build fails, you have to do a bisect or manually build each interim commit to figure out which one broke the build.
Requirements for this tool:
The tool will look at another repo (origin by default), fetch, and compare all commits that are in branches against 2 lists of commits. One list holds successfully built commits and the other lists commits that failed.
It identifies any commits not yet in either list and begins to build them in a loop in the order that they were committed. It stops on the first one that fails.
The tool appropriately adds each commit to either the successful or failed list after it has attempted to build each one.
The tool will ignore any "legacy" commits which are prior to the oldest commit in the success list. This makes the starting point, described next, possible.
Starting point: the tool can build a specific commit so that, if successful, it gets added to the success list. If it is the earliest commit in the success list, it becomes the "starting point", so that none of the commits prior to it are examined for builds.
Only linear tree support? Much like bisect, this tool works best on a commit tree which is, at least from its starting point, linear without any merges. That is, it should be a tree which was built and updated entirely via rebase and fast-forward commits.
If it fails on one commit in a branch, it will stop without building the rest that followed after that one. Instead it will just move on to another branch, if any.
The tool must do these steps once by default but allow a parameter to loop with an option to set how many seconds between loops. Other tools like Hudson or CruiseControl could do more fancy scheduling options.
The tool must have good defaults but allow optional control.
Which repo? origin by default.
Which branches? all of them by default.
What tool? By default, an executable file provided by the user named "buildtest", "buildtest.sh", "buildtest.cmd", or "buildtest.exe" in the root folder of the repository.
Loop delay? Run once by default, with an option to loop with a given number of seconds between iterations.
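For illustration, a minimal Node.js sketch of the core loop such a tool might run (the passed.txt/failed.txt files, the "start" tag, and the origin/master branch are all assumptions for this example):

var execSync = require('child_process').execSync;
var fs = require('fs');

function run(cmd) { return execSync(cmd, { encoding: 'utf8' }); }
function readList(file) {
  return fs.existsSync(file) ? fs.readFileSync(file, 'utf8').split('\n') : [];
}

var passed = readList('passed.txt');
var failed = readList('failed.txt');

run('git fetch origin');
// Oldest-first commits after the starting point (a tag named "start" here).
var commits = run('git rev-list --reverse start..origin/master').trim().split('\n');

for (var i = 0; i < commits.length; i++) {
  var sha = commits[i];
  if (!sha || passed.indexOf(sha) !== -1 || failed.indexOf(sha) !== -1) continue;
  run('git checkout -q ' + sha);
  try {
    run('./buildtest'); // user-supplied build/test script, per the spec above
    fs.appendFileSync('passed.txt', sha + '\n');
  } catch (e) {
    fs.appendFileSync('failed.txt', sha + '\n');
    break; // stop at the first failing commit on this branch
  }
}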
I wrote a tool I called git test-sequence a while back to do exactly this.
I was just considering blogging about it because a friend was telling me it saved him a ton of work this week.