Challenge:
I have a Groovy script running a few AWS instances as nodes in parallel, with a security group enabling SSH and HTTP. The public IPv4 address is available from within each instance via http://169.254.169.254/latest/meta-data/public-ipv4. I need to save the output of this request to a variable for later use.
This is how my current code looks (on the node):
String ip = sh([returnStdout: true, label: 'save ip', script: 'curl -s http://169.254.169.254/latest/meta-data/public-ipv4']).toString().trim()
println("Status:", ip)
The build exits with ERROR: script returned exit code 1.
Failed Experiments:
I tried:
sh label: 'save ip', script: 'wget -qO ipv4.txt http://169.254.169.254/latest/meta-data/public-ipv4'
String ip = readFile 'ipv4.txt'
println("Status:", ip)
and now it also fails with ERROR: script returned exit code 1, but in readFile. I even tried moving the readFile part further down in the code, and it passes all steps after sh until it reaches readFile. If I only run the sh part, without saving the output to a variable or reading the file, the build finishes successfully (or as successfully as it can without this information). Example console output below:
+ curl -s http://169.254.169.254/latest/meta-data/public-ipv4
18.202.253.1[Pipeline] unstash
[Pipeline] sh
I checked:
if the file exists (it does)
the content of the file (something like: 34.244.77.254 without line break)
the output of the URL (something like: 34.244.77.254 without line break)
the output of sh with returnStatus (the crazy thing is, in this case I got the IP in the console log and the build still failed with exit code 1)
I also tried (just as a check, suspecting some error caused by having an IP address as the return value):
sh label: 'save ip', script: 'wget -qO ipv4.txt http://169.254.169.254/latest/meta-data/ami-id'
String ip = readFile 'ipv4.txt'
println("Status:", ip)
and it behaved the same as with public-ipv4.
Additional setup information:
Jenkins 2.150.3 LTS
Amazon EC2 plugin 1.42
AMIs used to start instances: Ubuntu 16.04 LTS, Ubuntu 18.04 LTS, Debian 9.7
I'm thankful for any ideas. I'm a bit stuck here, and neither Stack Overflow nor Google returned any results.
Edit
Regarding @JRichardsz's comment, I tried:
String ip = sh([returnStatus: true, label: 'save ip', script: 'curl -s ipv4.txt https://www.google.com']).toString().trim()
println("Status:", ip)
and got ERROR: script returned exit code 1, too.
Solution
tl;dr: The devil was in the details. This is the working snippet:
String ip = sh([returnStdout: true, label: 'save ip', script: 'curl -s http://169.254.169.254/latest/meta-data/public-ipv4']).toString().trim()
println(ip)
The watchful observer will notice that only the second line changed. println in Groovy takes only one argument and doesn't concatenate multiple arguments the way, e.g., Python's print does. To keep the label, string interpolation works: println("Status: ${ip}").
The long version: I tried again with this snippet:
def ip = sh returnStdout: true, script: "date"
println("Status:", ip)
on a local machine, just to be sure that returnStdout works at all, and got:
java.lang.NoSuchMethodError: No such DSL method 'println' found among steps [<java stack trace goes here ... ]
So the error wasn't inside sh at all.
Finally, I can say that having the Java stack trace also appear when executing Groovy on the AWS nodes would have been helpful in this matter.
Thank you to all that have read this question and have tried to find an answer!
Related
I need to create AWS CentOS 7 instance images for a customer, and need them to automatically send the IP and instance ID to our AWS server every time an instance boots. For example, this is the very basic test version of the script I need to run:
#!/bin/bash
serverIP=""
curl "https://$serverIP/myphp.php?id=sentid&ip=sentip"
If the script is run directly, it works fine and is received by the server and processed there. But I can't get it to run at boot. I can't put the script directly in the "User Data" due to security concerns, since the customer could then see it easily; it needs to be in a script in the filesystem of the image.
I've tried several things that work fine on a physical Linux server, but not on AWS. I know profile.d runs every time someone logs in, but over-sending like that is fine.
/etc/profile.d/myscript.sh
This stops the AWS instance from booting. Even just
#!/bin/bash/
echo "hello world"
prevents it from booting. The instance starts, but when you go to SSH into it you get 'Network Error: connection timed out', which is the standard error if you put a wrong IP in, or upset it by leaving a service like httpd enabled.
However, a blank bash script with just #!/bin/bash will allow the instance to start. Removing the script via user data usually makes it boot, sometimes it just dies.
The first thing I tried was crontab. I did:
crontab -e
@reboot /var/ook/myscript.sh
systemctl enable crond.service
But the instance wouldn't start. So I put "systemctl disable crond.service" in the User Data and one instance booted, but another still stayed dead. myscript.sh was just another echo "doob" >> file, which worked fine when run directly.
I tried putting this in /etc/systemd/system/my-startup.service:
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/var/ook/writedood.sh
[Install]
WantedBy=multi-user.target
then:
systemctl enable my-startup.service
But this did nothing. My script "writedood.sh" was just echo "doob" >> ./file.txt ensuring file.txt was chmod 777. At least it didn't prevent the instance from starting.
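One thing that can bite here, in hindsight: systemd runs services with / as the working directory, so a relative path like ./file.txt in the script lands in the filesystem root rather than next to the script. A minimal sketch of the unit with the script writing to an absolute path instead (the Description line is illustrative):
[Unit]
Description=Run startup script once at boot

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/var/ook/writedood.sh

[Install]
WantedBy=multi-user.target
with writedood.sh changed to echo "doob" >> /tmp/file.txt.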
To give context, an instance won't start if httpd is left enabled on shutdown, but will if you disable it in User Data.
I wanted to have a go at putting something in init.d but I'm not sure how to simply tell it to run a script once in the background, and given the plethora of success I've had so far with the instance not restarting, I'm not holding out much hope that that would work.
Thanks in advance!
EDIT: I realised that sometimes the AWS EC2 Instances Console is causing the problem where I can't SSH in after stopping and starting. It blanks the public IPv4 address when I click stop, but when I start, it shows the old address and hangs. If I refresh the page, or uncheck/check the instance, the IP changes to the new address. This has caused much consternation.
Crontab worked once I placed the script and output file in different folders. It's very finicky; with any error, such as not being able to write to the output file, the instance won't start. I put startscript.sh in /usr/local/src and sent output to /tmp/ to ensure there were no permission problems, and now the instance starts and runs the script on boot, as sketched below.
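A minimal sketch of that layout (the output filename is illustrative):
# root's crontab (crontab -e): absolute paths for both the script and its output
@reboot /usr/local/src/startscript.sh >> /tmp/output.out 2>&1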
I am using a cloudbuild.yml file.
I am trying to grab the build output from inside the Cloud Build and push it to a file. This is how my step looks:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
  args: ['gcloud', 'builds', 'log', '$BUILD_ID', '>buildlog.log']
  id: 'fetch-build-log'
This throws an error: ERROR: (gcloud.builds.log) unrecognized arguments: >buildlog.log
If I execute that command in Cloud Shell, it works fine: gcloud builds log xxxxx-xxxx-xxxx-xxxx-xxxxxxx > buildlog.log
I am not sure why Cloud Build considers >buildlog.log an argument when it is meant to redirect the output to a file.
Am I missing something here or is there another way of doing it?
In Cloud Build, each builder has a default entrypoint, which typically correlates to that builder's purpose.
In your example, you are using the cloud-sdk builder's default entrypoint and the positional args syntax, so each array element must be a single argument.
That's why you receive the error ERROR: (gcloud.builds.log) unrecognized arguments: >buildlog.log: the redirection is passed to gcloud as a literal argument instead of being interpreted by a shell.
I put together a working example changing the entrypoint to /bin/bash:
steps:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
  entrypoint: '/bin/bash'
  args: ['-c', 'gcloud builds log $BUILD_ID > buildlog.log']
  id: 'fetch-build-log'
- name: 'alpine'
  id: 'OUTPUT_LOG'
  args: ['sh', '-c', 'cat /workspace/buildlog.log']
In that example I'm using the -c option; in case you want to understand why:
Quoting from man bash:
-c string If the -c option is present, then commands are read from
string. If there are arguments after the string, they are
assigned to the positional parameters, starting with $0.
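For instance, any arguments after the command string are assigned to the positional parameters starting with $0:
bash -c 'echo "$0 $1"' hello world   # prints: hello world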
Let me know if it works for you.
At the shell prompt, when you run the command:
gcloud builds log XXXX > buildlog.log
The actual command you are running is
gcloud builds log XXXX
with the additional instruction to the shell that you want any output from the command redirected to a local file. The gcloud binary (the application you are actually running) isn't passed the trailing > buildlog.log and, if it were, would give you the error message you reported. It is the shell that interprets the file output redirection.
What I think you want to do is simply remove the > buildlog.log from your arguments.
Since you are no longer passing in this parameter, the next question becomes "Where is the output from the command going?" The answer is GCP Cloud Logging, where you should be able to see the output from the commands.
Should you really want to create a local file of output, consider using the Cloud Build entrypoint:
https://cloud.google.com/cloud-build/docs/build-config#entrypoint
and specify bash as its value. This starts a shell and passes your parameters to the shell rather than doing a raw fork/exec.
Finally, it appears that your question may be a variant of:
How can I save google cloud build step text output to file
I’m setting up a patch process for EC2 servers running a web application.
I need to build an automated process that installs system updates but reverts to the last working EC2 instance if the web application fails a status check.
I’ve been trying to do this using an Automation Document in EC2 Systems Manager that performs the following steps:
Stop EC2 instance
Create AMI from instance
Launch new instance from newly created AMI
Run updates
Run status check on web application
If check fails, stop new instance and restart original instance
The Automation Document runs the first five steps successfully, but I can't figure out how to trigger step 6. Can I do this within the Automation Document? What output would I be able to call from step 5? If it uses aws:runCommand, should the runCommand trigger a new Automation Document or another AWS tool?
I tried the following to solve this, which more or less worked:
Included an aws:runCommand action in the automation document
This ran the DocumentName "AWS-RunShellScript" with the following parameters:
Downloaded the script from s3:
sudo aws s3 cp s3://path/to/s3/script.sh /tmp/script.sh
Set the file to executable:
chmod +x /tmp/script.sh
Executed the script using variables set in, or generated by, the Automation Document:
bash /tmp/script.sh -o {{VAR1}} -n {{VAR2}} -i {{VAR3}} -l {{VAR4}} -w {{VAR5}}
The script included the following getopts block to read the passed-in options into variables:
while getopts o:n:i:l:w: option
do
  case "${option}" in
    o) VAR1=${OPTARG};;
    n) VAR2=${OPTARG};;
    i) VAR3=${OPTARG};;
    l) VAR4=${OPTARG};;
    w) VAR5=${OPTARG};;
  esac
done
The bash script used the variables to run the status check, and roll back to last working instance if it failed.
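For reference, a rough sketch of what that aws:runCommand step might look like inside the Automation Document (the step name and the InstanceId parameter are illustrative, not taken from the original document):
- name: runStatusCheckScript
  action: aws:runCommand
  inputs:
    DocumentName: AWS-RunShellScript
    InstanceIds:
      - '{{ InstanceId }}'
    Parameters:
      commands:
        - sudo aws s3 cp s3://path/to/s3/script.sh /tmp/script.sh
        - chmod +x /tmp/script.sh
        - bash /tmp/script.sh -o {{VAR1}} -n {{VAR2}} -i {{VAR3}} -l {{VAR4}} -w {{VAR5}}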
I am having a problem getting a simple cron task set up on Elastic Beanstalk. I have found some of the other questions on here useful, but I still can't seem to get the cron to execute. I am unsure if it is an AWS issue, or if the script itself is not executing. The script is set up inside Yii as a Console Command. I am not finding any PHP errors, and the EC2 instance loads without errors. Here is what I have done so far:
I have created a folder on the root of my application called .ebextensions.
Within that folder I have created a configuration file with the contents:
# Installing dos2unix in case files are edited on a Windows PC
packages:
  yum:
    dos2unix: []

container_commands:
  01-command:
    command: dos2unix -k cron_setup.sh
  02-command:
    command: chmod 700 cron_setup.sh
  03-command:
    command: "cat .ebextensions/cron_task.txt > /etc/cron.d/cron_task && chmod 644 /etc/cron.d/cron_task"
    # leader_only prevents problems when EB auto-scales
    leader_only: true
The file cron_task.txt exists inside the .ebextensions folder with the contents:
# The newline at the end of this file is extremely important. Cron won't run without it.
* * * * * /bin/php /var/www/html/crons.php test > /dev/null
crons.php is a file at the root of the application that includes the Yii framework:
defined('YII_DEBUG') or define('YII_DEBUG',true);
// relative path to the Yii framework directory (adjust as necessary)
$yii='/framework';
// including Yii
require_once(dirname(__FILE__).$yii.'/yii.php');
// we'll use a separate config file
$configFile=dirname(__FILE__).'/protected/config/cron.php';
// creating and running console application
Yii::createConsoleApplication($configFile)->run();
The config/cron.php file is a setup file for the framework; it includes the database connection, model inclusions, etc.
The console command referenced by the cron_task.txt entry looks like this:
class TestCommand extends CConsoleCommand {
    public function run($args) {
        $message = new Crontasks();
        $message->timestamp = date("Y-m-d H:i:s");
        $message->message = "test";
        $message->insert();
    }
}
Here I am just trying to get a record into the database to prove the cron was executed successfully, and I can't seem to get a record added.
The problem is, I don't know where this is failing. I am not getting any instance errors, and I took a snapshot log and can't find any relevant errors in there either. Should PHP errors be logged there, or do I have to set up error logging myself? On top of that, I am also having trouble getting into EC2 via SSH: I get a permission denied (public key) error, even though I have set up the security group and key pair and am using the correct public DNS for the instance!
If anyone can see anything obvious that I'm doing wrong here, please let me know! Otherwise, could you give any advice on where to look for errors that might be preventing this cron task from executing? Many thanks!!
While creating a new AWS EC2 instance using the EC2 command line API, I passed some user data to the new instance.
How can I know whether that user data executed or not?
You can verify using the following steps:
SSH into the launched EC2 instance.
Check the log of your user data script in:
/var/log/cloud-init.log and
/var/log/cloud-init-output.log
There you can see all the logs from your user data script; cloud-init also creates the /etc/cloud folder.
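For example, a quick way to check the end of the user data output after logging in:
tail -n 50 /var/log/cloud-init-output.log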
Just for reference, you can check whether the user data executed by taking a look at the system log from the EC2 console. Right-click on your instance:
In the new interface: Monitor and Troubleshoot > Get System Log
In the old interface: Instance Settings > Get System log
This should open a modal window with the system logs
It might also be useful to see what the user data looks like when it's executed during the bootstrapping of the instance. This is especially true if you are passing in environment variables or flags from the CloudFormation template. You can see how the UserData is executed in two different ways:
1. From within the instance:
# Get instance ID
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# Print user data
sudo cat /var/lib/cloud/instances/$INSTANCE_ID/user-data.txt
2. From outside the instance
Note: this will only work if you have configured the UserData shell in such a way that it will output the commands it runs.
For bash, you can do this as follows:
"#!/bin/bash\n",
"set -x\n",
Right-click on the EC2 instance from the EC2 console -> Monitor and Troubleshoot -> Get system log. Download the log file and look for a section that looks like this:
ip-172-31-76-56 login: 2021/10/25 17:13:47Z: Amazon SSM Agent v3.0.529.0 is running
2021/10/25 17:13:47Z: OsProductName: Ubuntu
2021/10/25 17:13:47Z: OsVersion: 20.04
[ 45.636562] cloud-init[856]: Cloud-init v. 21.2-3...
[ 47.749983] cloud-init[896]: + echo hello world
this is what you would see if the UserData was configured like this:
"#!/bin/bash\n",
"set -x\n",
"echo hello world"
Debugging user data scripts on Amazon EC2 is a bit awkward indeed, as there is usually no way to actively hook into the process, so ideally one would like to gain real-time access to the user-data script output, as summarized in Eric Hammond's article Logging user-data Script Output on EC2 Instances:
The recent Ubuntu AMIs still send user-data script to the console
output, so you can view it remotely, but it is no longer available in
syslog on the instance. The console output is only updated a few
minutes after the instance boots, reboots, or terminates, which forces
you to wait to see the output of the user-data script as well as not
capturing output that might come out after the snapshot.
Depending on your setup you might want to ship the logs to a remote logging facility like Loggly right away, but getting this installed early enough can obviously be kind of a chicken/egg problem (though it works great if the AMI happens to be configured like so already).
Enable logging for your user data
Eric Hammond, in Logging user-data Script Output on EC2 Instances (2010), suggests:
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
Take care to put a space between the two > > characters at the beginning of the statement.
Here’s a complete user-data script as an example:
#!/bin/bash -ex
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
echo BEGIN
date '+%Y-%m-%d %H:%M:%S'
echo END
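Once the instance is up, that same output can then be inspected on the instance itself, e.g.:
cat /var/log/user-data.log          # captured by the tee
grep user-data /var/log/syslog      # tagged by logger (syslog path may vary by distro)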
Put this in your user data:
touch /tmp/file2.txt
Once the instance is up, you can check whether the file was created. Based on that you can tell whether the user data executed.
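For example:
ls -l /tmp/file2.txt   # present only if the user data ran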
Have your user data create a file on the instance (here in the ec2-user home directory) to see if it works:
bob.txt:
#!/bin/sh
echo 'Woot!' > /home/ec2-user/user-script-output.txt
Then launch with:
ec2-run-instances -f bob.txt -t t1.micro -g ServerPolicy ami-05cf5c6d -v
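Then, once the instance is up, SSH in and check for the file (assuming the default ec2-user login for this AMI):
cat /home/ec2-user/user-script-output.txt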