I am trying to use the Puppet file function (not the type) in the following way:
class iop_users {
  include 's3file::curl'
  include 'stdlib'

  $secretpath = file('/etc/secret','dev/null')

  notify { 'show secretpath':
    message => "secretpath is $secretpath",
  }

  s3file { '/opt/utab.yaml':
    source => "mybucket/$secretpath/utab.yaml",
    ensure => 'latest',
  }

  exec { 'fix perms':
    command => '/bin/chmod 600 /opt/utab.yaml',
    require => S3file['/opt/utab.yaml'],
  }

  if ( $::virtual == 'xenhvm' and defined(S3file['/opt/utab.yaml']) ) {
    $uhash = loadyaml('/opt/utab.yaml')
    create_resources(iop_users::usercreate, $uhash)
  }
}
If I run this, here is some typical output. The manifest fails because the initial "secret" used to build the path is not loaded:
https_proxy=https://puppet:3128 puppet agent -t
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for ip-10-40-1-68.eu-west-1.compute.internal
Info: Applying configuration version '1431531382'
Notice: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml]/returns: % Total % Received % Xferd Average Speed Time Time Time Current
Notice: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml]/returns: Dload Upload Total Spent Left Speed
Notice: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml] 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Notice: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml]/returns: curl: (56) Received HTTP code 404 from proxy after CONNECT
Error: curl -L -o /opt/utab.yaml https://s3-eu-west.amazonaws.com/mybucket//utab.yaml returned 56 instead of one of [0]
Error: /Stage[main]/Iop_users/S3file[/opt/utab.yaml]/Exec[fetch /opt/utab.yaml]/returns: change from notrun to 0 failed: curl -L -o /opt/utab.yaml https://s3-eu-west.amazonaws.com/mybucket//utab.yaml returned 56 instead of one of [0]
Notice: /Stage[main]/Iop_users/Exec[fix perms]: Dependency Exec[fetch /opt/utab.yaml] has failures: true
Warning: /Stage[main]/Iop_users/Exec[fix perms]: Skipping because of failed dependencies
Notice: secretpath is
Notice: /Stage[main]/Iop_users/Notify[show secretpath]/message: defined 'message' as 'secretpath is '
Notice: Finished catalog run in 1.28 seconds
However, on the same host where the above puppet agent run fails, if I use "apply" to try it outside the context of a manifest, it works fine:
puppet apply -e '$z=file("/etc/secret") notify { "z": message => $z}'
Notice: Compiled catalog for ip-x.x.x.x.eu-west-1.compute.internal in environment production in 0.02 seconds
Notice: wombat
Notice: /Stage[main]/Main/Notify[z]/message: defined 'message' as 'wombat
'
Notice: Finished catalog run in 0.03 seconds
What am I doing wrong? Are there any better alternative approaches I could take?
As usual, I was confused about the way Puppet works.
Apparently, functions are always executed on the master.
So any files being loaded this way must exist on the master.
As soon as I added an /etc/secret file to the puppetmaster, it all worked.
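Since file() runs during catalog compilation on the master, an alternative when the secret must live on the agent is to expose it as an external fact and reference that fact in the manifest. A minimal sketch (the fact name and script path are illustrative, not part of the original setup):

#!/bin/sh
# External fact script placed on the agent, e.g. /etc/facter/facts.d/secretpath.sh (must be executable).
# Facter turns the key=value output into a fact named "secretpath".
echo "secretpath=$(cat /etc/secret)"

The class can then use the agent-side value instead of calling file() on the master:

$secretpath = $::secretpath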
Elastic Beanstalk is adding and removing instances one after the other. Googling around points to checking the "State transition message", which comes up as "Client.UserInitiatedShutdown: User initiated shutdown". https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/troubleshooting-launch.html#troubleshooting-launch-internal lists some possible reasons, but none of them apply. No one has touched any settings, etc. Any ideas?
UPDATE: Did a bit more digging and found out that app deployment is failing. Relevant log errors are below:
eb-engine.log
2021/08/05 15:46:29.272215 [INFO] Executing instruction: PreBuildEbExtension
2021/08/05 15:46:29.272220 [INFO] Starting executing the config set Infra-EmbeddedPreBuild.
2021/08/05 15:46:29.272235 [INFO] Running command /bin/sh -c /opt/aws/bin/cfn-init -s arn:aws:cloudformation:us-east-1:345470085661:stack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f -r AWSEBAutoScalingGroup --region us-east-1 --configsets Infra-EmbeddedPreBuild
2021/08/05 15:50:44.538818 [ERROR] An error occurred during execution of command [app-deploy] - [PreBuildEbExtension]. Stop running the command. Error: EbExtension build failed. Please refer to /var/log/cfn-init.log for more details.
2021/08/05 15:50:44.540438 [INFO] Executing cleanup logic
2021/08/05 15:50:44.581445 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1628178644,"severity":"ERROR"}]}]}
2021/08/05 15:50:44.620394 [INFO] Platform Engine finished execution on command: app-deploy
2021/08/05 15:51:22.196186 [ERROR] An error occurred during execution of command [self-startup] - [PreBuildEbExtension]. Stop running the command. Error: EbExtension build failed. Please refer to /var/log/cfn-init.log for more details.
2021/08/05 15:51:22.196215 [INFO] Executing cleanup logic
eb-cfn-init.log
[2021-08-05T15:42:44.199Z] Completed executing cfn_init.
[2021-08-05T15:42:44.226Z] finished _OnInstanceReboot
+ RESULT=1
+ [[ 1 -ne 0 ]]
+ sleep_delay
+ (( 2 < 3600 ))
+ echo Sleeping 2
Sleeping 2
+ sleep 2
+ SLEEP_TIME=4
+ true
+ curl https://elasticbeanstalk-platform-assets-us-east-1.s3.amazonaws.com/stalks/eb_php74_amazon_linux_2_1.0.1153.0_20210728213922/lib/UserDataScript.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 4627 100 4627 0 0 24098 0 --:--:-- --:--:-- --:--:-- 24098
+ RESULT=0
+ [[ 0 -ne 0 ]]
+ SLEEP_TIME=2
+ /bin/bash /tmp/ebbootstrap.sh 'https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/arn%3Aaws%3Acloudformation%3Aus-east-1%3A345470085661%3Astack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f/AWSEBInstanceLaunchWaitHandle?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20200528T171102Z&X-Amz-SignedHeaders=host&X-Amz-Expires=86399&X-Amz-Credential=AKIAIIT3CWAIMJYUTISA%2F20200528%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=57c7da0aec730af1b425d1aff68517c333cf9d5432c984d775419b415cac8513' arn:aws:cloudformation:us-east-1:345470085661:stack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f 65c52bb7-0376-4d43-b304-b64890a34c1c https://elasticbeanstalk-health.us-east-1.amazonaws.com '' https://elasticbeanstalk-platform-assets-us-east-1.s3.amazonaws.com/stalks/eb_php74_amazon_linux_2_1.0.1153.0_20210728213922 us-east-1
[2021-08-05T15:46:07.683Z] Started EB Bootstrapping Script.
[2021-08-05T15:46:07.739Z] Received parameters:
TARBALLS =
EB_GEMS =
SIGNAL_URL = https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/arn%3Aaws%3Acloudformation%3Aus-east-1%3A345470085661%3Astack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f/AWSEBInstanceLaunchWaitHandle?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20200528T171102Z&X-Amz-SignedHeaders=host&X-Amz-Expires=86399&X-Amz-Credential=AKIAIIT3CWAIMJYUTISA%2F20200528%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=57c7da0aec730af1b425d1aff68517c333cf9d5432c984d775419b415cac8513
STACK_ID = arn:aws:cloudformation:us-east-1:345470085661:stack/awseb-e-mecfm5qc8z-stack/317924c0-a106-11ea-a8a3-12498e67507f
REGION = us-east-1
GUID =
HEALTHD_GROUP_ID = 65c52bb7-0376-4d43-b304-b64890a34c1c
HEALTHD_ENDPOINT = https://elasticbeanstalk-health.us-east-1.amazonaws.com
PROXY_SERVER =
HEALTHD_PROXY_LOG_LOCATION =
PLATFORM_ASSETS_URL = https://elasticbeanstalk-platform-assets-us-east-1.s3.amazonaws.com/stalks/eb_php74_amazon_linux_2_1.0.1153.0_20210728213922
Is this some corrupted AMI?
It turned out that there was a config script in the .ebextensions directory that was misbehaving.
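If you hit the same symptom, /var/log/cfn-init.log (referenced in the eb-engine error) shows which .ebextensions entry fails. A purely hypothetical example of the kind of entry that can make the PreBuildEbExtension step hang or fail and cause instances to be replaced (names and URL are illustrative):

# .ebextensions/01_prebuild.config (hypothetical)
commands:
  01_wait_for_internal_service:
    # if this command hangs or exits non-zero, cfn-init fails and the whole deployment fails
    command: "curl -sSf https://internal.example.com/bootstrap"
    ignoreErrors: false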
connect-script = /app/connect.sh
disconnect-script = /app/disconnect.sh
I have the above configuration in my ocserv.conf in the docker container, but ocserv fails to execute /app/connect.sh when there is a connection. I can't find the real cause from the following log; has anyone had the same issue?
ocserv[26]: main[test]:xxx.xxx.179.135:57352 user of group 'Route' authenticated (using cookie)
ocserv[29]: main[test]:xxx.xxx.179.135:57352 executing script up /app/connect.sh
ocserv[29]: main[test]:xxx.xxx.179.135:57352 main-user.c:379: Could not execute script /app/connect.sh
ocserv[26]: main[test]:xxx.xxx.179.135:57352 connect-script exit status: 1
ocserv[26]: main[test]:xxx.xxx.179.135:57352 failed authentication attempt for user 'test'
The content of /app/connect.sh:
#!/bin/bash
echo "$(date) [info] User ${USERNAME} Connected - Server: ${IP_REAL_LOCAL} VPN IP: ${IP_REMOTE} Remote IP: ${IP_REAL} Device:${DEVICE}"
Well, I figured it out myself: the docker container I created doesn't have bash, and one solution is to substitute #!/bin/bash with #!/bin/sh.
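For reference, the fixed script only needs its shebang changed; the body is already POSIX-sh compatible:

#!/bin/sh
# /app/connect.sh - same logging as before, but runs under the container's /bin/sh
echo "$(date) [info] User ${USERNAME} Connected - Server: ${IP_REAL_LOCAL} VPN IP: ${IP_REMOTE} Remote IP: ${IP_REAL} Device:${DEVICE}"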
I have a GitLab CI YAML file and 2 jobs. My .gitlab-ci.yaml file is:
variables:
  MSBUILD_PATH: 'C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe'
  SOLUTION_PATH: 'Source/NewProject.sln'

stages:
  - build
  - trigger_IT_service

build_job:
  stage: build
  script:
    - '& "$env:MSBUILD_PATH" "$env:SOLUTION_PATH" /nologo /t:Rebuild /p:Configuration=Debug'

trigger_IT_service_job:
  stage: trigger_IT_service
  script:
    - 'curl http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer'
And this is my trigger_IT_service job report:
Running on DIGITALIZATION...
00:00
Fetching changes with git depth set to 50...
00:05
Reinitialized existing Git repository in D:/GitLab-Runner/builds/c11pExsu/0/personalname/newproject/.git/
Checking out 24be087a as master...
Removing Output/
git-lfs/2.5.2 (GitHub; windows amd64; go 1.10.3; git 8e3c5c93)
Skipping Git submodules setup
$ curl http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer
00:02
StatusCode : 200
StatusDescription : 200
Content : {"status":200,"message":"SAP transfer started. Please
check in db","errorCode":0,"timestamp":"2020-03-25T13:53:05
.722+0300","responseObject":null}
RawContent : HTTP/1.1 200 200
Keep-Alive: timeout=10
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: application/json;charset=UTF-8
Date: Wed, 25 Mar 2020 10:53:05 GMT
Server: Apache
I have to check the "Content" part of this report in the GitLab CI YAML.
If "message" is "SAP transfer started. Please check in db", the pipeline should pass; otherwise it must fail.
Actually my question is: how to parse the HTTP JSON response and pass or fail the job based on it?
Thank you for all your help.
The best way would be to install some tool to parse JSON and use it; there are different examples here.
Given the JSON example from the comment:
{
"status": 200,
"message": "SAP transfer started. Please check in db",
"errorCode": 0,
"timestamp": "2020-03-25T17:06:43.430+0300",
"responseObject": null
}
If you can install python3 on your runner, you could achieve it all with a script:
import requests  # note: this might require an additional install with `pip install requests`

message = requests.get('http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer').json()['message']
if message != 'SAP transfer started. Please check in db':
    print('Invalid message: ' + message)
    exit(1)
else:
    print('Message ok')
So the trigger_IT_service job in your YAML would be:
trigger_IT_service_job:
  stage: trigger_IT_service
  script: >
    python -c "import requests; message = requests.get('http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer').json()['message']; (print('Invalid message: ' + message), exit(1)) if message != 'SAP transfer started. Please check in db' else (print('Message ok'), exit(0))"
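Alternatively, if you can install jq on the runner instead of python3, the same check can be done in shell. This is only a sketch; it assumes jq and a POSIX shell are available, which may not be the case on a Windows/PowerShell runner:

trigger_IT_service_job:
  stage: trigger_IT_service
  script:
    - |
      message=$(curl -Ss http://webapps.xxx.com.tr/dataBus/runTransfer/ctDigiTransfer | jq -r '.message')
      if [ "$message" != "SAP transfer started. Please check in db" ]; then
        echo "Invalid message: $message"
        exit 1
      fi
      echo "Message ok"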
I have a YAML file and a response for it. The response has a message part, and I need the job to succeed when the message is correct; if the message is wrong, the job should fail.
variables:
  NUGET_PATH: 'C:\Tools\Nuget\nuget.exe'
  MSBUILD_PATH: 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Current\Bin\amd64\msbuild.exe'
  SOLUTION_PATH: 'Textbox_ComboBox.sln'

stages:
  - build
  - job1
  - job2

before_script:
  - "cd Source"

build_job:
  stage: build
  except:
    - schedules
  script:
    - '& "$env:NUGET_PATH" restore'
    - '& "$env:MSBUILD_PATH" "$env:SOLUTION_PATH" /nologo /t:Rebuild /p:Configuration=Debug'

job1:
  stage: job1
  script:
    - 'curl adress1'
    - if [ "$message" == "SAP transfer started. Please check in db" ]; then exit 0; else exit 1; fi

job2:
  stage: trigger_SAP_service
  when: delayed
  start_in: 5 minutes
  only:
    - schedules
  script:
    - 'curl adress2'
This is the job output for the YAML file. The job should succeed, because the response message and the message in the if command are the same.
Skipping Git submodules setup
Authenticating with credentials from job payload (GitLab Registry)
$ cd Source
$ curl adress1
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 146 0 146 0 0 877 0 --:--:-- --:--:-- --:--:-- 879
{"status":200,"message":"SAP transfer started. Please check in db","errorCode":0,"timestamp":"2019-10-04T07:59:58.436+0300","responseObject":null}$ if ( [ '$message' == 'SAP transfer started. Please check in db' ] ); then exit 0; else exit 1; fi
ERROR: Job failed: exit code 1
The message variable you use in your condition is empty.
You need to assign the curl response to your message variable:
message=$(curl -Ss adress1)
and then test the content of $message
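Putting the two together, the job1 script could look like this (a sketch; it checks the raw JSON with grep so no extra tooling is needed on the runner):

job1:
  stage: job1
  script:
    - |
      message=$(curl -Ss adress1)
      echo "$message"
      # pass only when the expected text is present in the JSON response
      if echo "$message" | grep -q '"message":"SAP transfer started. Please check in db"'; then
        exit 0
      else
        exit 1
      fi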
The observation below is not always the case, but after accessing the SUT several times over ssh as the root user with the correct password, the python code gets into trouble with:
Apr 25 05:51:56 SUT sshd[31570]: pam_tally2(sshd:auth): user root (0) tally 83, deny 10
Apr 25 05:52:16 SUT sshd[31598]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.10.10.13 user=root
Apr 25 05:52:21 SUT sshd[31568]: error: PAM: Authentication failure for root from 10.10.10.13
Apr 25 05:52:21 SUT sshd[31568]: Connection closed by 10.10.10.13 [preauth]
This is the python code:
import pexpect

COMMAND_PROMPT = '.*:~ #'
SSH_NEWKEY = '(?i)are you sure you want to continue connecting'

def scp(source, dest, password):
    cmd = 'scp ' + source + ' ' + dest
    try:
        child = pexpect.spawn('/bin/bash', ['-c', cmd], timeout=None)
        res = child.expect([pexpect.TIMEOUT, SSH_NEWKEY, COMMAND_PROMPT, '(?i)Password'])
        if res == 0:
            print('TIMEOUT Occurred.')
        if res == 1:
            child.sendline('yes')
            child.expect('(?i)Password')
            child.sendline(password)
            child.expect([pexpect.EOF], timeout=60)
        if res == 2:
            pass
        if res == 3:
            child.sendline(password)
            child.expect([pexpect.EOF], timeout=60)
    except Exception:
        print('File not copied!!!')
        print(str(child))  # was self.logger.error(str(self.child)), which fails in this standalone function
When the ssh is unsuccessful, this is the pexpect printout:
version: 2.3 ($Revision: 399 $)
command: /usr/bin/ssh
args: ['/usr/bin/ssh', 'root@100.100.100.100']
searcher: searcher_re:
0: re.compile(".*:~ #")
buffer (last 100 chars): :
Account locked due to 757 failed logins
Password:
before (last 100 chars): :
Account locked due to 757 failed logins
Password:
after: <class 'pexpect.TIMEOUT'>
match: None
match_index: None
exitstatus: None
flag_eof: False
pid: 2284
child_fd: 5
closed: False
timeout: 30
delimiter: <class 'pexpect.EOF'>
logfile: None
logfile_read: None
logfile_send: None
maxread: 2000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0
delayafterclose: 0.1
delayafterterminate: 0.1
Any clue what it could be? Is anything missing or wrongly configured for PAM authentication on my SUT? The problem is that once the SUT starts showing these PAM failures, the python code will always have the problem, and only a reboot of the SUT seems to help :(
Manually accessing the SUT via ssh root@... always works, even when pexpect can't!!! The account does not seem to be locked, according to:
SUT:~ # passwd -S root
root P 04/24/2017 -1 -1 -1 -1
I have looked into some other questions, but no real solution is mentioned that works with my python code.
Thanks in advance.
My workaround is to modify, for testing purposes, the pam_tally configuration files. It seems that the SUT treats the repeated access as a threat and locks even the root account!
The fix is to remove the entry even_deny_root root_unlock_time=5 from the pam_tally configuration files:
/etc/pam.d/common-account:account required pam_tally2.so deny=10 onerr=fail unlock_time=600 even_deny_root root_unlock_time=5 file=/home/test/faillog
/etc/pam.d/common-auth:auth required pam_tally2.so deny=10 onerr=fail unlock_time=600 even_deny_root root_unlock_time=5 file=/home/test/faillog
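After the change the lines look like this (only even_deny_root root_unlock_time=5 is removed; everything else stays as it was):

/etc/pam.d/common-account:account required pam_tally2.so deny=10 onerr=fail unlock_time=600 file=/home/test/faillog
/etc/pam.d/common-auth:auth required pam_tally2.so deny=10 onerr=fail unlock_time=600 file=/home/test/faillog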
Those changes take effect dynamically; no service restart is needed.
Note: after a reboot those entries will most likely be back!
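If the counter has already tripped (as in the "Account locked due to 757 failed logins" output above), it can also be inspected and reset without a reboot. A sketch, assuming the standard pam_tally2 utility is installed; the --file option matches the non-default counter file used in the configuration above:

# show the current failure count for root
pam_tally2 --user root --file /home/test/faillog
# reset the counter to zero
pam_tally2 --user root --reset --file /home/test/faillog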