Crontab email through msmtp -> Amazon SES

I have a crontab that includes a MAILTO=my.email@example.com. My server uses msmtp to forward the email to Amazon Simple Email Service. My problem is that output from cron commands never arrives in my mailbox. This is what the msmtp log says:
Mar 06 14:26:02 host=email-smtp.us-east-1.amazonaws.com tls=on auth=on user=MY.SES.USER from=my.email@example.com recipients=my.email@example.com smtpstatus=554 smtpmsg='554 Transaction failed: User name is missing: ?Cron Daemon ?.' errormsg='the server did not accept the mail' exitcode=EX_UNAVAILABLE
What do I need to do in order to make Amazon SES accept the cron emails?

The suggested solution from the AWS Developer Forums:
It turns out that cron has the "from" address hard-coded in the source (q.v. "do_command.c" in the cron source), so one does not have influence over what cron transmits to sendmail (which in our case is symlinked to "/usr/bin/msmtp").
However, due to the magic of Linux, we do have the ability to alter the stream of text that goes into sendmail.
The way I worked around this cron limitation was to move the "msmtp" binary to "msmtp.bin" and then create "/usr/bin/msmtp" as a shell script:
#! /bin/bash
sed -e 's/root .Cron Daemon./user@example.com/' | /usr/bin/msmtp.bin "$@"
This is also, AFAIK, the only means one has to set the "debug" flag to msmtp when used in a "global" setting (such as cron, or other cases where sendmail is invoked with arguments you don't control).
While the script above is rather simplistic, you can also conditionally alter the text by checking the input arguments for the magic "-FCronDaemon" which is also hard coded in the cron binary. I would be stunned if any other program calls sendmail with "-FCronDaemon".
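For illustration, a minimal sketch of such a conditional wrapper (the file names and the From address are assumptions; the sed pattern mirrors the one above):
#!/bin/bash
# Hypothetical /usr/bin/msmtp wrapper: only rewrite the From line when cron
# invokes sendmail with its hard-coded -FCronDaemon argument.
if [[ " $* " == *" -FCronDaemon "* ]]; then
    sed -e 's/root .Cron Daemon./user@example.com/' | /usr/bin/msmtp.bin "$@"
else
    exec /usr/bin/msmtp.bin "$@"
fi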

I had the exact same problem, but the answer above didn't work for me, so here's what I did...
As the problem is isolated to cron, and symlinking sendmail to msmtp works everywhere else, I didn't want to change the msmtp command globally for everything. So I first created /usr/bin/msmtp_cron.bin and made it executable.
Next I had to tell cron to use this as its mail path, by editing /etc/sysconfig/crond to read:
CRONDARGS="-m '/usr/bin/msmtp_cron.bin -t'"
Then not forgetting to restart crond (reloading alone isn't enough):
$ sudo systemctl restart crond.service
Back to /usr/bin/msmtp_cron.bin, in this file, I first had to find out what cron was actually streaming to sendmail/msmtp to know what substitution I could make:
sed '' > /tmp/cron-mail-capture.txt
This yielded these headers:
From: "(Cron Daemon)" <root>
It's slightly different from the other answer so my sed script ended up looking like so:
sed -e 's/..Cron Daemon.* <root>/me@mydomain.com/' | /usr/bin/msmtp -t "$@"
Now it successfully sends cron messages via AWS SES to my email account.
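Putting the pieces together, the final /usr/bin/msmtp_cron.bin is thus a tiny script along these lines (a sketch; adapt the address and sed pattern to whatever your capture file shows):
#!/bin/bash
# Rewrite cron's hard-coded From header, then hand the message to msmtp.
sed -e 's/..Cron Daemon.* <root>/me@mydomain.com/' | /usr/bin/msmtp -t "$@"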

Related

AWS Cron is not working for removing files [duplicate]

I have set up a cron job for the root user in an Ubuntu environment as follows by typing crontab -e:
34 11 * * * sh /srv/www/live/CronJobs/daily.sh
0 08 * * 2 sh /srv/www/live/CronJobs/weekly.sh
0 08 1 * * sh /srv/www/live/CronJobs/monthly.sh
But the cron job does not run. I have tried checking whether cron is running using pgrep cron, and that gives process id 3033. The shell script calls a Python file and is used to send an email. Running the Python file directly works fine and there's no error in it, but the cron job doesn't run. The daily.sh file has the following code in it.
python /srv/www/live/CronJobs/daily.py
python /srv/www/live/CronJobs/notification_email.py
python /srv/www/live/CronJobs/log_kpi.py
WTF?! My cronjob doesn't run?!
Here's a checklist guide to debug not running cronjobs:
Is the Cron daemon running?
Run ps ax | grep cron and look for cron.
Debian: service cron start or service cron restart
Is cron working?
* * * * * /bin/echo "cron works" >> /tmp/file
Syntax correct? See below.
You obviously need to have write access to the file you are redirecting the output to. A unique file name in /tmp which does not currently exist should always be writable.
Probably also add 2>&1 to include standard error as well as standard output, or separately output standard error to another file with 2>>/tmp/errors
Is the command working standalone?
Check if the script has an error, by doing a dry run on the CLI
When testing your command, test as the user whose crontab you are editing, which might not be your login or root
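For example, to approximate cron's stripped-down environment while testing as that user (the user and paths here are illustrative):
# run the job as www-data with an almost empty environment, similar to cron's
sudo -u www-data env -i SHELL=/bin/sh PATH=/usr/bin:/bin HOME=/var/www \
    /bin/sh -c '/var/www/app/cron/do-stuff.php'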
Can cron run your job?
Check /var/log/cron.log or /var/log/messages for errors.
Ubuntu: grep CRON /var/log/syslog
Redhat: /var/log/cron
Check permissions
Set executable flag on the command: chmod +x /var/www/app/cron/do-stuff.php
If you redirect the output of your command to a file, verify you have permission to write to that file/directory
Check paths
Check she-bangs / hashbangs line
Do not rely on environment variables like PATH, as their value will likely not be the same under cron as under an interactive session. See How to get CRON to call in the correct PATHs
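Two common fixes, sketched here: set PATH explicitly at the top of the crontab, or call every binary by its absolute path (the values shown are examples only):
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
30 1 * * * /usr/bin/python /srv/www/live/CronJobs/daily.py >> /tmp/daily.log 2>&1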
Don't suppress output while debugging
Commonly used is this suppression: 30 1 * * * command > /dev/null 2>&1
Re-enable the standard output or standard error message output by removing >/dev/null 2>&1 altogether; or perhaps redirect to a file in a location where you have write access: >>cron.out 2>&1 will append standard output and standard error to cron.out in the invoking user's home directory.
If you don't redirect output from a cron job, the daemon will try to send you any output or error messages by email. Check your inbox (maybe simply more $MAIL if you don't have a mail client). If mail is not available, maybe check for a file named dead.letter in your home directory, or system log entries saying that the output was discarded. Especially in the latter case, probably edit the job to add redirection to a file, then wait for the job to run, and examine the log file for error messages or other useful feedback.
If you are trying to figure out why something failed, the error messages will be visible in this file. Read it and understand it.
Still not working? Yikes!
Raise the cron debug level
Debian
in /etc/default/cron
set EXTRA_OPTS="-L 2"
service cron restart
tail -f /var/log/syslog to see the scripts executed
Ubuntu
in /etc/rsyslog.d/50-default.conf
add or uncomment the line cron.* /var/log/cron.log
reload logger sudo /etc/init.d/rsyslog restart
re-run cron
open /var/log/cron.log and look for detailed error output
Reminder: deactivate log level, when you are done with debugging
Run cron and check log files again
Cronjob Syntax
# Minute Hour Day of Month Month Day of Week User Command
# (0-59) (0-23) (1-31) (1-12 or Jan-Dec) (0-6 or Sun-Sat)
0 2 * * * root /usr/bin/find
This syntax is only correct for the root user. Regular user crontab syntax doesn't have the User field (regular users aren't allowed to run code as any other user);
# Minute Hour Day of Month Month Day of Week Command
# (0-59) (0-23) (1-31) (1-12 or Jan-Dec) (0-6 or Sun-Sat)
0 2 * * * /usr/bin/find
Crontab Commands
crontab -l
Lists all the user's cron tasks.
crontab -e, for a specific user: crontab -e -u agentsmith
Starts edit session of your crontab file.
When you exit the editor, the modified crontab is installed automatically.
crontab -r
Removes your crontab file from the cron spool; all of your entries are deleted.
Another reason crontab will fail: Special handling of the % character.
From the manual page:
The entire command portion of the line, up to a newline or a
"%" character, will be executed by /bin/sh or by the shell specified
in the SHELL variable of the cronfile. A "%" character in the
command, unless escaped with a backslash (\), will be changed into
newline characters, and all data after the first % will be sent to
the command as standard input.
In my particular case, I was using date --date="7 days ago" "+%Y-%m-%d" to produce parameters to my script, and it was failing silently. I finally found out what was going on when I checked syslog and saw my command was truncated at the % symbol. You need to escape it like this:
date --date="7 days ago" "+\%Y-\%m-\%d"
See here for more details:
http://www.ducea.com/2008/11/12/using-the-character-in-crontab-entries/
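For instance, a complete crontab line using the escaped form might look like this (the backup script path is invented for illustration):
0 3 * * * /usr/local/bin/backup.sh "$(date --date="7 days ago" "+\%Y-\%m-\%d")"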
Finally I found the solution. Following is the solution:
Never use relative paths in Python scripts that are executed via crontab.
I did something like this instead:
import os
import sys
import time, datetime
CLASS_PATH = '/srv/www/live/mainapp/classes'
SETTINGS_PATH = '/srv/www/live/foodtrade'
sys.path.insert(0, CLASS_PATH)
sys.path.insert(1,SETTINGS_PATH)
import other_py_files
Never suppress the crontab output; instead, use a mail server and check the mail for the user. That gives clearer insight into what is going on.
I want to add 2 points that I learned:
Cron config files put in /etc/cron.d/ should not contain a dot (.). Otherwise, it won't be read by cron.
If the user running your command is not in /etc/shadow, it won't be allowed to schedule cron jobs.
Refs:
http://manpages.ubuntu.com/manpages/xenial/en/man8/cron.8.html
https://help.ubuntu.com/community/CronHowto
To add another point, a file in /etc/cron.d must contain an empty new line at the end. This is likely related to the response by Luciano which specifies that:
The entire command portion of the line, up to a newline or a "%"
character, will be executed
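As an illustration only, a well-formed /etc/cron.d file that respects both rules (no dot in the file name, a user field, and a trailing newline) might look like:
# /etc/cron.d/dbbackup  (hypothetical file; note there is no dot in the name)
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
15 2 * * * root /usr/local/bin/db-backup.sh >> /var/log/db-backup.log 2>&1
# (the file must end with a newline)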
I found useful debugging information on an Ubuntu 16.04 server by running:
systemctl status cron.service
In my case I was kindly informed I had left a comment '#' off of a remark line:
Aug 18 19:12:01 is-feb19 cron[14307]: Error: bad minute; while reading /etc/crontab
Aug 18 19:12:01 is-feb19 cron[14307]: (*system*) ERROR (Syntax error, this crontab file will be ignored)
It might also be a timezone problem.
Cron uses the local time.
Run the command timedatectl to see the machine time and make sure that your crontab is in this same timezone.
https://askubuntu.com/a/536489/1043751
I had a similar problem to the link below.
similar to my problem
my original post
My Issue
My issue was that cron / crontab wouldn't execute my bash script. That bash script executed a Python script.
original bash file
#!/bin/bash
python /home/frosty/code/test_scripts/test.py
python file (test.py)
from datetime import datetime

def main():
    dt_now = datetime.now()
    string_now = dt_now.strftime('%Y-%m-%d %H:%M:%S.%f')
    with open('./text_file.txt', 'a') as f:
        f.write(f'wrote at {string_now}\n')
    return None

if __name__ == '__main__':
    main()
the error I was getting
File "/home/frosty/code/test_scripts/test.py", line 7
string_to_write = f'wrote at {string_now}\n'
^
SyntaxError: invalid syntax
This error didn't make sense because the code executed without error when I ran the bash file and the Python file manually.
Note: ensure that in the crontab -e file you don't suppress the output. I sent the output to a file by adding >>/path/to/cron/output/file.log 2>&1 after the command. Below is my crontab -e entry:
*/5 * * * * /home/frosty/code/test_scripts/echo_message_sh >>/home/frosty/code/test_scripts/cron_out.log 2>&1
The issue
Cron was using the wrong Python interpreter, probably Python 2, judging from the syntax error on the f-string.
How I solved the problem
I changed my bash file to the following
#!/bin/bash
conda_shell=/home/frosty/anaconda3/etc/profile.d/conda.sh
conda_env=base
source ${conda_shell}
conda activate ${conda_env}
python /home/frosty/code/test_scripts/test.py
And I changed my python file to the following
from datetime import datetime

def main():
    dt_now = datetime.now()
    string_now = dt_now.strftime('%Y-%m-%d %H:%M:%S.%f')
    string_file = '/home/frosty/code/test_scripts/text_file.txt'
    string_to_write = 'wrote at {}\n'.format(string_now)
    with open(string_file, 'a') as f:
        f.write(string_to_write)
    return None

if __name__ == '__main__':
    main()
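As a quick way to confirm which interpreter cron actually picks up, before or instead of switching to an explicit environment like this, a throwaway entry such as the following can help (the log path is arbitrary):
* * * * * /bin/sh -c 'which python; python -V' >> /tmp/cron-python.log 2>&1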
No MTA installed, discarding output
I had a similar problem with a PHP file executed as a CRON job.
When I manually executed the file it worked, but not from the crontab.
I got the output message: "No MTA installed, discarding output"
Postfix is the default Mail Transfer Agent (MTA) in Ubuntu and can be installed using
sudo apt-get install postfix
But this same message can also be output when you redirect to a log file as below and the cron user does not have write permission to /path/to/logfile.log:
/path/to/php -f /path/to/script.php >> /path/to/logfile.log
The permission issue can occur if you create the cron log file manually (for example with touch) while logged in as a different user, but add the cron entries for another user or group such as www-data using sudo crontab -u www-data -e. The cron daemon then tries to write to the log file and fails, tries to send the output as an email using the system's MTA, and when no MTA is found, outputs "No MTA installed, discarding output".
To prevent this:
Create the file with proper permission.
Or avoid creating the cron log file manually: add the redirection in the crontab and let the log file be created automatically when the job runs.
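If you do pre-create the log file, a minimal sketch of giving it the right owner (www-data here, matching the crontab user in the example above):
sudo touch /var/log/my-cron-job.log
sudo chown www-data:www-data /var/log/my-cron-job.log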
I've found another reason for user's crontab not running: the hostname is not present on the hosts file:
user@ubuntu:~$ cat /etc/hostname
ubuntu
Now the hosts file:
user@ubuntu:~$ cat /etc/hosts
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
This is on Ubuntu 14.04.3 LTS; the way to fix it is to add the hostname to the hosts file so it resembles something like this:
user@ubuntu:~$ cat /etc/hosts
127.0.0.1 ubuntu localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
For me, the solution was that the file cron was trying to run was in an encrypted directory, more specifically a user directory under /home/. Although the crontab was configured as root, because the script being run existed in an encrypted user directory in /home/, cron could only read this directory when the user was actually logged in. To see if the directory is encrypted, check whether this directory exists:
/home/.ecryptfs/<yourusername>
If so, then you have an encrypted home directory.
The fix for me was to move the script into a non-encrypted directory, and everything worked fine.
As this is becoming a canonical for troubleshooting cron issues, allow me to add one specific but rather complex issue: If you are attempting to run a GUI program from cron, you are probably Doing It Wrong.
A common symptom is receiving error messages about DISPLAY being unset, or the cron job's process being unable to access the display.
In brief, this means that the program you are trying to run is attempting to render something on an X11 (or Wayland etc) display, and failing, because cron is not attached to a graphical environment, or in fact any kind of input/output facility at all, beyond being able to read and write files, and send email if the system is configured to allow that.
For the purposes of "I'm unable to run my graphical cron job", let's just point out in broad strokes three common scenarios for this problem.
Probably identify the case you are trying to implement, and search for related questions about that particular scenario to learn more, and find actual solutions with actual code.
If you are trying to develop an interactive program which communicates with a user, you want to rethink your approach. A common, but nontrivial, arrangement is to split the program in two: A back-end service which can run from cron, but which does not have any user-visible interactive facilities, and a front-end client which the user runs from their GUI when they want to communicate with the back-end service.
Probably your user client should simply be added to the user(s)' GUI startup script if it needs to be, or they want to, run automatically when they log in.
I suppose the back-end service could be started from cron, but if it requires a GUI to be useful, maybe start it from the X11 server's startup scripts instead; and if not, probably run it from a regular startup script (systemd these days, or /etc/rc.local or a similar system startup directory more traditionally).1
If you are trying to run a GUI program without interacting with a real user 2, you may be able to set up a "headless" X11 server 3 and run a cron job which starts up that server, runs your job, and quits.
Probably your job should simply run a suitable X11 server from cron (separate from any interactive X11 server which manages the actual physical display(s) and attached graphics card(s) and keyboard(s) available to the system), and pass it a configuration which runs the client(s) you want to run once it's up and running. (See also the next point for some practical considerations.)
You are running a computer for the sole purpose of displaying a specific application in a GUI, and you want to start that application when the computer is booted.
Probably your startup scripts should simply run the GUI (X11 or whatever) and hook into its startup script to also run the client program once the GUI is up and running. In other words, you don't need cron here; just configure the startup scripts to run the desktop GUI, and configure the desktop GUI to run your application as part of the (presumably automatic, guest?) login sequence.4
There are ways to run X11 programs on the system's primary display (DISPLAY=:0.0) but doing that from a cron job is often problematic, as that display is usually reserved for actual interactive use by the first user who logs in and starts a graphical desktop. On a single-user system, you might be able to live with the side effects if that user is also you, but this tends to have inconvenient consequences and scale very poorly.
An additional complication is deciding which user to run the cron job as. A shared system resource like a back-end service can and probably should be run by root (though ideally have a dedicated system account which it switches into once it has acquired access to any privileged resources it needs) but anything involving a GUI should definitely not be run as root at any point.
A related, but distinct problem is to interact in any meaningful way with the user. If you can identify the user's active session (to the extent that this is even well-defined in the first place), how do you grab their attention without interfering with whatever else they are in the middle of? But more fundamentally, how do you even find them? If they are not logged in at all, what do you do then? If they are, how do you determine that they are active and available? If they are logged in more than once, which terminal are they using, and is it safe to interrupt that session? Similarly, if they are logged in to the GUI, they might miss a window you spring up on the local console, if they are actually logged in remotely via VNC or a remote X11 server.
As a further aside: On dedicated servers (web hosting services, supercomputing clusters, etc) you might even be breaking the terms of service of the hosting company or institution if you install an interactive graphical desktop you can connect to from the outside world, or even at all.
1
The #reboot hook in cron is a convenience for regular users who don't have any other facility for running something when the system comes up, but it's just inconvenient and obscure to hide something there if you are root anyway and have complete control over the system. Use the system facilities to launch system services.
2
A common use case is running a web browser which needs to run a full GUI client, but which is being controlled programmatically and which doesn't really need to display anything anywhere, for example to scrape sites which use Javascript and thus require a full graphical browser to render the information you want to extract.
Another is poorly designed scientific or office software which was not written for batch use, and thus requires a GUI even when you just want to run a batch job and then immediately quit without any actual need to display anything anywhere.
(In the latter case, probably review the documentation to check if there isn't a --batch or --noninteractive or --headless or --script or --eval option or similar to run the tool without the GUI, or perhaps a separate utility for noninteractive use.)
3
Xvfb is the de facto standard solution; it runs a "virtual framebuffer" where the computer can spit out pixels as if to a display, but which isn't actually connected to any display hardware.
4
There are several options here.
The absolutely simplest is to set up the system to automatically log in a specific user at startup without a password prompt, and configure that user's desktop environment (Gnome or KDE or XFCE or what have you) to run your script from its "Startup Items" or "Login Actions" or "Autostart" or whatever the facility might be called. If you need more control over the environment, maybe run bare X11 without a desktop environment or window manager at all, and just run your script instead. Or in some cases, maybe replace the X11 login manager ("greeter") with something custom built.
The X11 stack is quite modular, and there are several hooks in various layers where you could run a script either as part of a standard startup process, or one which completely replaces a standard layer. These things tend to differ somewhat between distros and implementations, and over time, so this answer is necessarily vague and incomplete around these matters. Again, probably try to find an existing question about how to do things for your specific platform (Ubuntu, Raspbian, Gnome, KDE, what?) and scenario. For simple scenarios, perhaps see Ubuntu - run bash script on startup with visible terminal
I experienced the same problem where cron jobs were not running. We fixed it by changing permissions and ownership:
the cron files were made root-owned, since root was the user mentioned in the crontab, AND
the cron job files were given 644 permissions.
There is already a lot of answers, but none of them helped me so I'll add mine here in case it's useful for somebody else.
In my situation, my cron jobs were working fine until a power outage cut the power to my Raspberry Pi. Cron got corrupted; I think it was running a long Python script exactly when the outage happened. Nothing in the main answer above worked for me. The solution, however, was quite simple. I just had to force reinstallation of cron with:
sudo apt-get --reinstall install cron
It worked right away after this.
Copying my answer for a duplicated question here.
cron may not know where to find the Python interpreter because it doesn't share your user account's environment variables.
There are 3 solutions to this:
If Python is at /usr/bin/python, you can change the cron job to use an absolute path: /usr/bin/python /srv/www/live/CronJobs/daily.py
Alternatively you can also add a PATH value to the crontab with PATH=/usr/bin.
Another solution would be to specify an interpreter in the script file, make it executable, and call the script itself in your crontab:
a. Put shebang at the top of your python file: #!/usr/bin/python.
b. Set it to executable: $ chmod +x /srv/www/live/CronJobs/daily.py
c. Put it in crontab: /srv/www/live/CronJobs/daily.py
Adjust the path to the Python interpreter if it's different on your system.
Reference
CRON uses a different TIMEZONE
A very common issue: cron's time settings may be different from yours. In particular, the timezone may not be the same:
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
You can run:
* * * * * echo $(date) >> /tmp/test.txt
This should generate a file like:
# cat test.txt
Sun 03 Apr 2022 09:02:01 AM UTC
Sun 03 Apr 2022 09:03:01 AM UTC
Sun 03 Apr 2022 09:04:01 AM UTC
Sun 03 Apr 2022 09:05:01 AM UTC
Sun 03 Apr 2022 09:06:01 AM UTC
If you are using a TZ other than UTC, you can try:
timedatectl set-timezone America/Sao_Paulo
replace America/Sao_Paulo according to your settings.
I'm not sure if it is actually necessary, but you can run:
sudo systemctl restart cron.service
After that, cron works as I expected:
# cat test.txt
Sun 03 Apr 2022 09:02:01 AM UTC
Sun 03 Apr 2022 09:03:01 AM UTC
Sun 03 Apr 2022 09:04:01 AM UTC
Sun 03 Apr 2022 09:05:01 AM UTC
Sun 03 Apr 2022 09:06:01 AM UTC
Sun 03 Apr 2022 09:07:01 AM UTC
Sun 03 Apr 2022 09:08:01 AM UTC
Sun 03 Apr 2022 09:09:01 AM UTC
Sun 03 Apr 2022 09:10:01 AM UTC
Sun 03 Apr 2022 06:11:01 AM -03
Sun 03 Apr 2022 06:12:01 AM -03
Sun 03 Apr 2022 06:13:01 AM -03
Sun 03 Apr 2022 06:14:01 AM -03
Try
service cron start
or
systemctl start cron
In my case I was trying to run cron locally.
I checked status:
service cron status
It showed me:
* cron is not running
Then I simply started the service:
service cron start
Sometimes the command that cron needs to run is in a directory where cron has no access, typically on systems where users' home directories' permissions are 700 and the command is in that directory.
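A minimal illustration, with an invented user and path: either let cron traverse the directory, or move the command somewhere readable:
chmod 711 /home/alice                                 # allow traversal into the home directory
sudo mv /home/alice/bin/job.sh /usr/local/bin/job.sh  # or relocate the script instead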
Although an answer has already been accepted for this question, I would like to add what worked for me.
It's a good idea to quote the URL; if it contains a query string it may not work without everything being quoted.
Don't forget to put a URL that contains "?", "=", "#" or "%" in quotes.
Example.
https://paystack.com/indexphp?docs/api/#transaction-charge-authorization&date=today
should be quoted like so:
"https://paystack.com/indexphp?docs/api/#transaction-charge-authorization&date=today"

Cannot assign instance name to concurrent workflow in Informatica

In Informatica, I can start a workflow but cannot get it to recognize my instance name in the session log and Workflow Monitor.
The workflow starts but in the session log it displays this:
Workflow wf_Temp started with run id [22350], run instance name [], run type [Concurrent Run with Unique Instance Name]
Instance name is blank.
My command is:
pmcmd startworkflow -sv <service> -d <domain> -u <user> -p <password> -f <folder> -rin INST1 -paramfile <full param file path name> wf_Temp
I have edited the workflow and enabled concurrent execution. Inside the Configure Concurrent Execution dialog, I have created three instances: INST1, INST2, INST3, but without any associated parameter files. All parameter files are blank.
I understand, I think, that in order to start a workflow with PMCMD I must pass in one of the configured instance names (i.e. INST1, INST2, INST3, etc.)
If I execute the pmcmd command from PuTTY a second time to see the second instance run, I receive a message that the workflow is still running and I have to wait. Why? I have checked the concurrent execution box in the workflow.
ERROR: Workflow [wf_Temp]: Could not start execution of this workflow because the current run on this Integration Service has not completed yet.
Disconnecting from Integration Service
So, I think I'm close, but am missing something. The workflow runs with the parameter file I pass in PMCMD but the instance name seems to be ignored.
Further: do I have to pre-configure instance names in the Workflow Manager? Are the pmcmd instance name and parameter file arguments enough? It doesn't seem very dynamic if instances have to be pre-defined in the workflows.
Thanks.
@MacieJG
Here are the screenshots from PuTTY when I run the command. You can see the instance name DALLAS is being passed through pmcmd OK. No combination ever gets the instance name. I did not include the pics of your suggested Test 1, but the results were the same: still no instance.
Here's my complete test as requested in a comment above. I tried my best to put everything you may need here, but if I missed anything, just let me know. So here goes...
I've created a very simple workflow to run with an instance name. It uses a timer to wait and a command task to write the instance name to a file:
The concurrent execution has been set up in the most simple way:
Now, I've prepared the following batch to run the workflow (just user & password removed):
SET "PMCMD=C:\Informatica\9.5.1\clients\PowerCenterClient\CommandLineUtilities\PC\server\bin\pmcmd"
%PMCMD% startworkflow -sv Dev_IS -d Domain_vic-vpc -u ####### -p ####### -f Dev01 -rin GLASGOW wf_Instance_Test
%PMCMD% startworkflow -sv Dev_IS -d Domain_vic-vpc -u ####### -p ####### -f Dev01 -rin FRANKFURT wf_Instance_Test
%PMCMD% startworkflow -sv Dev_IS -d Domain_vic-vpc -u ####### -p ####### -f Dev01 -rin GLASGOW wf_Instance_Test
It runs three instances, two of them with the same name, just to test it. I run the batch the following way to capture the output:
pmStartTestWF.bat > c:\MG\pmStartTestWF.log
Once I execute it, here what I see in workflow monitor:
Just as expected, three instances executed and properly displayed. File output looks fine as well:
The output of pmcmd can be found here. Full definition of my test workflow is available here.
I really hope this will help you somehow. Feel free to let me know if you'd find anything missing here. Good luck!
You don't need to pre-configure instance names in workflow. Passing the instance name in pmcmd along with parameter filename is enough.
try this: pmcmd startworkflow -sv (service) -d (domain) -u (user) -p (password) -f (folder) -paramfile (full param file path name) -rin INST1 wf_Temp
To be precise: when you configure Concurrent Execution, you can specify if you:
allow concurrent run with same instance name
allow concurrent run only with unique instance name
In addition to that you may, but don't have to, indicate which instance should use which parameter file, so it won't be need to mention it while executing. But that's a separate feature.
Now, if you've chosen the first one, you will be able to invoke the WF multiple times with the very same command. If you've chosen the second one and try this, you will get the 'WF is already running' error.
The trouble is that your example seems correct at first glance. As per the log message:
Workflow wf_Temp started with run id [22350], run instance name [], run type [Concurrent Run with Unique Instance Name]
So you're allowing unique instances only. It seems that the instance name has not been used. The first execution does not set the instance name, so a similar second execution won't use it either and will get rejected, as it carries the same (empty) instance name.
You may try to change the setting to Allow concurrent run with same instance name; this will allow the second execution, but it does not solve the main issue. For some reason the instance name does not get passed.
Please verify your command against the docs referenced below. Try to match the order perhaps. Please share some more info if it still fails.
Looking at the docs:
pmcmd StartWorkflow
<<-service|-sv> service [<-domain|-d> domain] [<-timeout|-t> timeout]>
<<-user|-u> username|<-uservar|-uv> userEnvVar>
<<-password|-p> password|<-passwordvar|-pv> passwordEnvVar>
[<<-usersecuritydomain|-usd> usersecuritydomain|<-usersecuritydomainvar|-usdv>
userSecuritydomainEnvVar>]
[<-folder|-f> folder]
[<-startfrom> taskInstancePath]
[<-recovery|-norecovery>]
[<-paramfile> paramfile]
[<-localparamfile|-lpf> localparamfile]
[<-osprofile|-o> OSUser]
[-wait|-nowait]
[<-runinsname|-rin> runInsName]
workflow

xcodebuild running tests headless?

As we all know by now, the only way to run tests on iOS is by using the simulator. My problem is that we are running jenkins and the iOS builds are running on a slave (via SSH), as a result running xcodebuild can't start the simulator (as it runs headless). I've read somewhere that it should be possible to get this to work with SimLauncher (gem sim_launcher). But I can't find any info on how to set this up with xcodebuild. Any pointers are welcome.
Headless and xcodebuild do not mix well. Please consider this alternative:
You can configure the slave node to launch via jnlp (webstart). I use a bash script with the .command extension as a login item (System Preferences -> Users -> Login Items) with the following contents:
#!/bin/bash
slave_url="https://gardner.company.com/jenkins/jnlpJars/slave.jar"
max_attempts=40 # ten minutes
# fetch the slave jar, retrying until curl succeeds and the jar verifies
curl -fO "${slave_url}" >>slave.log
rc=$?
while [ $rc -ne 0 -a $max_attempts -gt 0 ]; do
    echo "Waiting to try again. curl returned $rc"
    sleep 5
    curl -fO "${slave_url}" >>slave.log
    rc=$?
    if [ $rc -eq 0 ]; then
        zip -T slave.jar
        rc=$?
    fi
    let max_attempts-=1
done
# Simulator
java -jar slave.jar -jnlpUrl https://gardner.company.com/jenkins/computer/buildmachine/slave-agent.jnlp -secret YOUR_SECRET_KEY
The build user is set to automatically log in. You can see the arguments to the slave.jar app by executing:
gardner:~ buildmachine$ java -jar slave.jar --help
"--help" is not a valid option
java -jar slave.jar [options...]
-auth user:pass : If your Jenkins is security-enabled, specify
a valid user name and password.
-connectTo HOST:PORT : make a TCP connection to the given host and
port, then start communication.
-cp (-classpath) PATH : add the given classpath elements to the
system classloader.
-jar-cache DIR : Cache directory that stores jar files sent
from the master
-jnlpCredentials USER:PASSWORD : HTTP BASIC AUTH header to pass in for making
HTTP requests.
-jnlpUrl URL : instead of talking to the master via
stdin/stdout, emulate a JNLP client by
making a TCP connection to the master.
Connection parameters are obtained by
parsing the JNLP file.
-noReconnect : Doesn't try to reconnect when a communication
fail, and exit instead
-proxyCredentials USER:PASSWORD : HTTP BASIC AUTH header to pass in for making
HTTP authenticated proxy requests.
-secret HEX_SECRET : Slave connection secret to use instead of
-jnlpCredentials.
-slaveLog FILE : create local slave error log
-tcp FILE : instead of talking to the master via
stdin/stdout, listens to a random local
port, write that port number to the given
file, then wait for the master to connect to
that port.
-text : encode communication with the master with
base64. Useful for running slave over 8-bit
unsafe protocol like telnet
gardner:~ buildmachine$
For a discussion about OSX slaves and how the master is launched please see this Jenkins bug: https://issues.jenkins-ci.org/browse/JENKINS-21237
Erik - I ended up doing the items documented here:
Essentially:
The first problem is that you do have to have the user that runs the builds also logged in to the console on that Mac build machine. It needs to be able to pop up the simulator, and will fail if you don't have a user logged in, as it can't do this entirely headless without a display.
Secondly, the Xcode developer tools require elevated privileges in order to execute all of the tasks for the unit tests. Sometimes you may not notice it, but without these, the Simulator will give you an authentication prompt that never clears.
A first solution to this (on Mavericks) is to run:
sudo security authorizationdb write system.privilege.taskport allow
This will eliminate one class of these authentication popups. You’ll also need to run:
sudo DevToolsSecurity --enable
Per Apple’s man page on this tool:
On normal user systems, the first time in a given login session that
any such Apple-code-signed debugger or performance analysis tools are
used to examine one of the user’s processes, the user is queried for
an administrator password for authorization. Use the DevToolsSecurity tool to
change the authorization policies, such that a user who is a member of
either the admin group or the _developer group does not need to enter
an additional password to use the Apple-code-signed debugger or
performance analysis tools.
The only issue is that these same things seem to have broken once I upgraded to Xcode 6. Back to the drawing board...

Where do I set the maximum attachment size for mutt?

I have a script which emails me a database backup on a daily basis, but it recently stopped working.
Here's an example command line...
/usr/bin/mutt -s 'Backup' -f /dev/null -e 'set copy=no' -e 'set from="noreply@email.com"' -a 'backup/db.sql.gz' 'admin@email.com' </dev/null 2>&1;
When I run it manually, it shows...
Error sending message, child exited 1 ().
Could not send message.
I tried running it attaching the backup from a few days ago (when it was working), and it worked fine. It seems to be an issue with the size of the backup.
The last one that worked was 77,464 KB. A more recent one is 79,386 KB - and that fails.
I'm running on CENTOS 5.8.
I suspect it's just a setting from mutt, or possibly the mail service that mutt is using, but I'm not even sure where to find the config files to look at.
Any help appreciated!
Mutt doesn't set a limit itself, but it is affected by the limit of the mail service being used, in this case Exim.
For Exim you can increase the limit like so:
Log in WHM.
Go to Home » Service Configuration » Exim Configuration Manager.
Select Advanced Editor.
Change message_size_limit to 200M (it was 100M).
Save.
After these changes my backup worked.
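If you manage Exim directly rather than through WHM, the same limit can usually be raised in the main Exim configuration and the service restarted; a sketch (the file location varies by distribution):
# in the main section of the Exim configuration, e.g. /etc/exim/exim.conf
message_size_limit = 200M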
I tried a different approach. I deleted all files bigger than a certain size.
You can delete larger files, move them or rename them.
With a command similar to this:
find . -name "*.tif" -size +160k -delete
160k = 160 kilobytes, 160c = 160 bytes..

Building project from cron task

When I build the project from the terminal by using the 'xcodebuild' command it succeeds, but when I try to run the same script from a cron task I receive the error
"Code Sign error: The identity '****' doesn't match any valid certificate/private key pair in the default keychain"
I think the problem is in the settings and permissions of the crontab utility; it seems crontab does not see my keychain.
Can anyone provide a terminal command to make my keychain visible to crontab?
I encountered a similar issue with trying to build nightly via cron. The only resolution I found was to create a plist in /Library/LaunchDaemons/ and load it via launchctl. The key necessary is "SessionCreate" otherwise you will quickly run in to problems similar to what was encountered with trying to use cron -- namely that your user login.keychain is not available to the process. "SessionCreate" is similar to "su -l" in that (as far as I understand) it simulates a login and thus default keychains you expect will be available; otherwise, you are stuck with only the System keychain despite the task running as your user.
I found the answers (though not the top answer currently) here useful in troubleshooting this issue: Missing certificates and keys in the keychain while using Jenkins/Hudson as Continuous Integration for iOS and Mac development
Which account do you execute your cron job with? That is most probably the problem.
You can add
echo `whoami`
at the beginning of your script to see with which user the script is launched.
Also, when a Bash script is launched from cron, it doesn't use the same environment variables (non-login shell) as when you launch it as a user.
When the script launches from cron, it doesn't load your $HOME/.profile (or .bash_profile). Anything you run from cron has to be 100% self-sufficient in terms of its environment. I'd suggest you make yourself a file called something like "set_build_env.sh". It should contain everything from your .profile that you need to build, such as $PATH, $HOME, $CLASSPATH etc. Then in your build script, load set_build_env.sh using the dot notation or the source command, as ericc said. You should also remove the build-specific lines from your .profile and then source set_build_env.sh from there too, so there is only one place to maintain. Example:
source /home/dmitry/set_build_env.sh #absolute path
. /home/dmitry/set_build_env.sh #dot-space notation same as "source"
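A hedged sketch of what such a set_build_env.sh might contain; every value here is an assumption to adapt to your own .profile:
#!/bin/bash
# set_build_env.sh - build environment shared by interactive logins and cron
export HOME=/home/dmitry
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
export CLASSPATH=/home/dmitry/lib/build-tools.jar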