SSM Run Command execution timeout is not longer than the default 3600 seconds

I have been working with AWS Systems Manager and I have created a Document to run a command, but it appears there is no way to override the timeout for a Run Command in an SSM Document.
I have changed the execution timeout in the parameters, but it does not work.
I also added a timeoutSeconds to my Document, and it doesn't work either.
This is my Document (I'm using schema version 2.2):
schemaVersion: "2.2"
description: "Runs a Python command"
parameters:
  Params:
    type: "String"
    description: "Params after the python3 keyword."
mainSteps:
  - action: "aws:runShellScript"
    name: "Python3"
    inputs:
      timeoutSeconds: '300000'
      runCommand:
        - "sudo /usr/bin/python3 /opt/python/current/app/{{Params}}"

1: The setting that’s displayed in your screenshot in the Other parameters section is the Delivery Timeout, which is different from the execution timeout.
You must specify the execution timeout value in the Execution Timeout field, if available. Not all SSM documents require that you specify an execution timeout. If a Systems Manager document doesn't require that you explicitly specify an execution timeout value, then Systems Manager enforces the hard-coded default execution timeout.
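For example, with the stock AWS-RunShellScript document, both timeouts can be set explicitly through the CLI; a minimal sketch (the instance ID and values are illustrative):
# executionTimeout is a document parameter (the execution timeout, in seconds);
# --timeout-seconds is the delivery timeout, a separate setting.
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=instanceids,Values=i-0123456789abcdef0" \
  --parameters 'commands=["sleep 600"],executionTimeout=["7200"]' \
  --timeout-seconds 600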
2: In your document, the timeoutSeconds attribute is in the wrong place. It needs to be on the same level as the action.
...
mainSteps:
  - action: "aws:runShellScript"
    timeoutSeconds: 300000
    name: "Python3"
    inputs:
      runCommand:
        - "sudo /usr/bin/python3 /opt/python/current/app/{{Params}}"

timeoutSeconds: '300000'
Isn't this a string rather than an integer?


getting logs from a file with Ops Agent

I have a Python script on a VM that writes logs to a file, and I want to use them in Google Cloud Logging.
I tried this config YAML:
logging:
  receivers:
    syslog:
      type: files
      include_paths:
        - /var/log/messages
        - /var/log/syslog
    etl-error-logs:
      type: files
      include_paths:
        - /home/user/test_logging/err_*
    etl-info-logs:
      type: files
      include_paths:
        - /home/user/test_logging/out_*
  processors:
    etl_log_processor:
      type: parse_regex
      field: message
      regex: "(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\s(?<severity>INFO|ERROR)\s(?<message>.*)"
      time_key: time
      time_format: "%Y-%m-%d %H:%M:%S"
  service:
    pipelines:
      default_pipeline:
        receivers: [syslog]
      error_pipeline:
        receivers: [etl-error-logs]
        processors: [etl_log_processor]
        log_level: error
      info_pipeline:
        receivers: [etl-info-logs]
        processors: [etl_log_processor]
        log_level: info
metrics:
  receivers:
    hostmetrics:
      type: hostmetrics
      collection_interval: 60s
  processors:
    metrics_filter:
      type: exclude_metrics
      metrics_pattern: []
  service:
    pipelines:
      default_pipeline:
        receivers: [hostmetrics]
        processors: [metrics_filter]
      error_pipeline:
        receivers: [hostmetrics]
        processors: [metrics_filter]
      info_pipeline:
        receivers: [hostmetrics]
        processors: [metrics_filter]
And this is an example of the logs: 2021-11-22 11:15:44 INFO testing normal
I didn't fully understand the Google docs, so I created the YAML as best I could with reference to their main example, but I have no idea why it doesn't work.
Environment: GCE VM
You want to use those logs in the GCP Log Viewer: yes
Which docs did you follow: https://cloud.google.com/stackdriver/docs/solutions/agents/ops-agent/configuration#logging-receivers
How did you install the Ops Agent: in GCE I opened each VM instance, went to Observability, and used the option there to install the Ops Agent via Cloud Shell
What logs do you want to save: I want to save all of the logs that are being written to my log file, live.
Specific application logs: it's an ETL process that runs in Python and saves its logs to a local file on the VM
Try out this command; it should show you the (almost) exact error:
sudo journalctl -xe | grep "google_cloud_ops_agent_engine"
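If the config file is malformed, the agent typically fails at startup, so a reasonable first step (a sketch, assuming the default config path) is to restart it and then grep the journal as above:
# Restart the agent so it re-reads /etc/google-cloud-ops-agent/config.yaml
sudo systemctl restart google-cloud-ops-agent
# Then look for startup/config errors
sudo journalctl -xe | grep "google_cloud_ops_agent_engine"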

Is it possible to use Ref function on option_settings in AWS?

I am using Elastic Beanstalk to deploy a worker tier environment using SQS.
In my .ebextensions I have the following file:
option_settings:
  aws:elasticbeanstalk:sqsd:
    WorkerQueueURL:
      Ref: WorkerQueue
    HttpPath: "/sqs/"
    InactivityTimeout: 1650
    VisibilityTimeout: 1680
    MaxRetries: 1
Resources:
  WorkerQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: "tpc-clients-aws-queue"
      VisibilityTimeout: 1680
However, this fails with the following error:
"option_settings" in one of the configuration files failed validation. More details to follow.
Invalid option value: 'Ref=WorkerQueue' (Namespace: 'aws:elasticbeanstalk:sqsd', OptionName: 'WorkerQueueURL'): Value does not satisfy regex: '^$|^http(s)?://.+$' [Valid non empty URL starting with http(s)]
It seems that the AWS CloudFormation Ref function cannot be used in the option_settings. Can someone confirm if this is the case?
I have seen some code snippets here on StackOverflow using intrinsic functions in the option_settings, such as in the mount-config.config of this answer and also on this question. So, are these examples using invalid syntax? Or are there some intrinsic functions or specific resources that can be used in the option_settings?
And lastly, if I cannot use the Ref function, how can I go about this?
Yes, you can use references in .ebextensions, but the syntax is a bit strange. It is shown in the docs here.
You can try something along these lines (note the various quotation marks):
option_settings:
  aws:elasticbeanstalk:sqsd:
    WorkerQueueURL: '`{"Ref" : "WorkerQueue"}`'
    HttpPath: "/sqs/"
    InactivityTimeout: 1650
    VisibilityTimeout: 1680
    MaxRetries: 1
Resources:
  WorkerQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: "tpc-clients-aws-queue"
      VisibilityTimeout: 1680
You can also use ImportValue, if you export the WorkerQueue in outputs.
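A minimal sketch of the ImportValue variant, assuming another stack exports the queue URL under the illustrative export name worker-queue-url:
option_settings:
  aws:elasticbeanstalk:application:environment:
    QUEUE_URL: '`{"Fn::ImportValue" : "worker-queue-url"}`'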
Update
To check the value obtained, you can set it as an environment variable and inspect it in the EB console:
option_settings:
  aws:elasticbeanstalk:application:environment:
    SQS_NAME: '`{"Ref" : "WorkerQueue"}`'
After digging further into this issue, I made some discoveries I would like to share with future readers.
Ref can be used on option_settings
As @Marcin's answer states, the Ref intrinsic function can be used in the option_settings. The syntax is different, though:
'`{"Ref" : "ResourceName"}`'
Using Ref on aws:elasticbeanstalk:application:environment (environment variable)
A use case of the above is storing the queue URL in an environment variable, as follows:
option_settings:
  aws:elasticbeanstalk:application:environment:
    QUEUE_URL: '`{"Ref" : "WorkerQueue"}`'
This will let your .sh script access the URL of the queue.
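For instance, on Amazon Linux 2 platforms a hook script can read the environment property with get-config; a minimal sketch (the path is the standard AL2 location):
#!/bin/bash
# Read the environment property that was set via option_settings
QUEUE_URL="$(/opt/elasticbeanstalk/bin/get-config environment -k QUEUE_URL)"
echo "Worker queue URL: ${QUEUE_URL}"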
Note that if you check the Elastic Beanstalk console (Environment > Config > Software), you won't see the actual value.
Using Ref on aws:elasticbeanstalk:sqsd:WorkerQueueURL
If you try to use the following setting:
option_settings:
  aws:elasticbeanstalk:sqsd:
    WorkerQueueURL: '`{"Ref" : "WorkerQueue"}`'
    HttpPath: "/sqs/"
It will fail:
Invalid option value: '`{"Ref" : "WorkerQueue"}`' (Namespace: 'aws:elasticbeanstalk:sqsd', OptionName: 'WorkerQueueURL'): Value does not satisfy regex: '^$|^http(s)?://.+$' [Valid non empty URL starting with http(s)]
It seems that this configuration option doesn't accept a reference.
Instead of creating a new queue and assigning it to the SQS daemon, you can just update the queue that Elastic Beanstalk creates:
option_settings:
  # SQS daemon will use the default queue created by EB (AWSEBWorkerQueue)
  aws:elasticbeanstalk:sqsd:
    HttpPath: "/sqs/"
Resources:
  # Update the queue created by EB
  AWSEBWorkerQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: "tpc-clients-aws-queue"

How to check if the latest Cloud Run revision is ready to serve

I've been using Cloud Run for a while and the entire user experience is simply amazing!
Currently I'm using Cloud Build to build the container image, push it to GCR, and then create a new Cloud Run revision.
Now I want to call a script to purge caches from the CDN after the latest revision is successfully deployed to Cloud Run; however, the $ gcloud run deploy command can't tell you whether traffic has started pointing to the latest revision.
Is there any command, or an event I can subscribe to, to make sure no traffic is pointing to the old revision, so that I can safely purge all caches?
@Dustin's answer is correct; however, the "status" messages are an indirect result of Route configuration, as those objects are updated separately (you might see a few seconds of delay between them). The status message will still be able to tell you the Revision has been taken out of rotation, if you don't mind this.
To answer this specific question (emphasis mine) using API objects directly:
Is there any command or the event that I can subscribe to to make sure no traffic is pointing to the old revision?
You need to look at Route objects on the API. This is a Knative API (it's available on Cloud Run) but it doesn't have a gcloud command: https://cloud.google.com/run/docs/reference/rest/v1/namespaces.routes
For example, assume you did a 50%-50% traffic split on your Cloud Run service. When you do this, you'll find that your Service object (which you can see on Cloud Console → Cloud Run → YAML tab) has the following spec.traffic field:
spec:
  traffic:
  - revisionName: hello-00002-mob
    percent: 50
  - revisionName: hello-00001-vat
    percent: 50
This is "desired configuration" but it actually might not reflect the status definitively. Changing this field will go and update Route object –which decides how the traffic is splitted.
To see the Route object under the covers (sadly I'll have to use curl here because no gcloud command for this:)
TOKEN="$(gcloud auth print-access-token)"
curl -vH "Authorization: Bearer $TOKEN" \
https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/GCP_PROJECT/routes/SERVICE_NAME
This command will show you the output:
"spec": {
"traffic": [
{
"revisionName": "hello-00002-mob",
"percent": 50
},
{
"revisionName": "hello-00001-vat",
"percent": 50
}
]
},
(which you might notice is identical to the Service's spec.traffic, because it's copied from there). This can tell you definitively which revisions are currently serving traffic for that particular Service.
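If you want to script that check, a minimal sketch that pulls the currently-serving split out of the Route's status with jq (assuming the same endpoint as above):
TOKEN="$(gcloud auth print-access-token)"
curl -sH "Authorization: Bearer $TOKEN" \
  "https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/GCP_PROJECT/routes/SERVICE_NAME" \
  | jq '.status.traffic'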
You can use gcloud run revisions list to get a list of all revisions:
$ gcloud run revisions list --service helloworld
   REVISION          ACTIVE  SERVICE     DEPLOYED                 DEPLOYED BY
✔  helloworld-00009  yes     helloworld  2019-08-17 02:09:01 UTC  email@email.com
✔  helloworld-00008          helloworld  2019-08-17 01:59:38 UTC  email@email.com
✔  helloworld-00007          helloworld  2019-08-13 22:58:18 UTC  email@email.com
✔  helloworld-00006          helloworld  2019-08-13 22:51:18 UTC  email@email.com
✔  helloworld-00005          helloworld  2019-08-13 22:46:14 UTC  email@email.com
✔  helloworld-00004          helloworld  2019-08-13 22:41:44 UTC  email@email.com
✔  helloworld-00003          helloworld  2019-08-13 22:39:16 UTC  email@email.com
✔  helloworld-00002          helloworld  2019-08-13 22:36:06 UTC  email@email.com
✔  helloworld-00001          helloworld  2019-08-13 22:30:03 UTC  email@email.com
You can also use gcloud run revisions describe to get details about a specific revision, which will contain a status field. For example, an active revision:
$ gcloud run revisions describe helloworld-00009
...
status:
  conditions:
  - lastTransitionTime: '2019-08-17T02:09:07.871Z'
    status: 'True'
    type: Ready
  - lastTransitionTime: '2019-08-17T02:09:14.027Z'
    status: 'True'
    type: Active
  - lastTransitionTime: '2019-08-17T02:09:07.871Z'
    status: 'True'
    type: ContainerHealthy
  - lastTransitionTime: '2019-08-17T02:09:05.483Z'
    status: 'True'
    type: ResourcesAvailable
And an inactive revision:
$ gcloud run revisions describe helloworld-00008
...
status:
  conditions:
  - lastTransitionTime: '2019-08-17T01:59:45.713Z'
    status: 'True'
    type: Ready
  - lastTransitionTime: '2019-08-17T02:39:46.975Z'
    message: Revision retired.
    reason: Retired
    status: 'False'
    type: Active
  - lastTransitionTime: '2019-08-17T01:59:45.713Z'
    status: 'True'
    type: ContainerHealthy
  - lastTransitionTime: '2019-08-17T01:59:43.142Z'
    status: 'True'
    type: ResourcesAvailable
You'll specifically want to check the type: Active condition.
This is all available via the Cloud Run REST API as well: https://cloud.google.com/run/docs/reference/rest/v1/namespaces.revisions
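To script that check, a minimal sketch that extracts the Active condition with jq (depending on your setup, flags such as --region or --platform may also be required):
gcloud run revisions describe helloworld-00008 --format=json \
  | jq -r '.status.conditions[] | select(.type == "Active") | .status'
# Prints 'True' while the revision is active, 'False' once it is retired.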
By default, traffic is routed to the latest revision. You can see this in the logs.
Deploying container to Cloud Run service [SERVICE_NAME] in project [YOUR_PROJECT] region [YOUR_REGION]
✓ Deploying... Done.
✓ Creating Revision...
✓ Routing traffic...
Done.
Service [SERVICE_NAME] revision [SERVICE_NAME-00012-yic] has been deployed and is serving 100 percent of traffic at https://SERVICE_NAME-vqg64v3fcq-uc.a.run.app
If you want to be sure, you can explicitly call the update-traffic command:
gcloud run services update-traffic --platform=managed --region=YOUR_REGION --to-latest YOUR_SERVICE

Configuring Concourse CI to use AWS Secrets Manager

I have been trying to figure out how to configure the Docker version of Concourse (https://github.com/concourse/concourse-docker) to use AWS Secrets Manager. I added the following environment variables to the docker-compose file, but from the logs it doesn't look like it ever reaches out to AWS to fetch the creds. Am I missing something, or should this happen automatically when these environment variables are added under environment in the docker-compose file? Here are the docs I have been looking at: https://concourse-ci.org/aws-asm-credential-manager.html
version: '3'
services:
  concourse-db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_PASSWORD: concourse_pass
      POSTGRES_USER: concourse_user
      PGDATA: /database
  concourse:
    image: concourse/concourse
    command: quickstart
    privileged: true
    depends_on: [concourse-db]
    ports: ["9090:8080"]
    environment:
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_EXTERNAL_URL: http://XXX.XXX.XXX.XXX:9090
      CONCOURSE_ADD_LOCAL_USER: "test:test"
      CONCOURSE_MAIN_TEAM_LOCAL_USER: test
      CONCOURSE_WORKER_BAGGAGECLAIM_DRIVER: overlay
      CONCOURSE_AWS_SECRETSMANAGER_REGION: us-east-1
      CONCOURSE_AWS_SECRETSMANAGER_ACCESS_KEY: <XXXX>
      CONCOURSE_AWS_SECRETSMANAGER_SECRET_KEY: <XXXX>
      CONCOURSE_AWS_SECRETSMANAGER_TEAM_SECRET_TEMPLATE: /concourse/{{.Secret}}
      CONCOURSE_AWS_SECRETSMANAGER_PIPELINE_SECRET_TEMPLATE: /concourse/{{.Secret}}
pipeline.yml example:
jobs:
- name: build-ui
  plan:
  - get: web-ui
    trigger: true
  - get: resource-ui
  - task: build-task
    file: web-ui/ci/build/task.yml
  - put: resource-ui
    params:
      repository: updated-ui
      force: true
  - task: e2e-task
    file: web-ui/ci/e2e/task.yml
    params:
      UI_USERNAME: ((ui-username))
      UI_PASSWORD: ((ui-password))
resources:
- name: cf
  type: cf-cli-resource
  source:
    api: https://api.run.pivotal.io
    username: ((cf-username))
    password: ((cf-password))
    org: Blah
- name: web-ui
  type: git
  source:
    uri: git@github.com:blah/blah.git
    branch: master
    private_key: ((git-private-key))
When storing parameters for Concourse pipelines in AWS Secrets Manager, they must follow this syntax:
/concourse/TEAM_NAME/PIPELINE_NAME/PARAMETER_NAME
If you have common parameters that are used across the team in multiple pipelines, use this syntax to avoid creating redundant parameters in Secrets Manager:
/concourse/TEAM_NAME/PARAMETER_NAME
The highest level that is supported is the Concourse team level.
Global parameters are not possible, so these variables in your compose environment will not be supported:
CONCOURSE_AWS_SECRETSMANAGER_TEAM_SECRET_TEMPLATE: /concourse/{{.Secret}}
CONCOURSE_AWS_SECRETSMANAGER_PIPELINE_SECRET_TEMPLATE: /concourse/{{.Secret}}
Unless you want to change the /concourse prefix, these parameters should be left at their defaults.
And when retrieving these parameters in the pipeline, no changes are required in the template. Just pass the PARAMETER_NAME; Concourse will handle the lookup in Secrets Manager according to the team and pipeline name.
...
params:
  UI_USERNAME: ((ui-username))
  UI_PASSWORD: ((ui-password))
...
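For example, to make ((ui-username)) and ((ui-password)) resolvable for a pipeline named web-ui in the main team, the secrets could be created like this (a sketch; team, pipeline, and values are illustrative):
# Pipeline-scoped secret: /concourse/TEAM_NAME/PIPELINE_NAME/PARAMETER_NAME
aws secretsmanager create-secret \
  --name "/concourse/main/web-ui/ui-username" \
  --secret-string "my-user"
# Team-scoped secret, shared across the team's pipelines
aws secretsmanager create-secret \
  --name "/concourse/main/ui-password" \
  --secret-string "my-pass"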

Can we pass parameters to the appspec.yml ApplicationStart hook?

I want to deploy my application to AWS, and I am using CodeDeploy for this.
Following is my appspec.yml file:
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/project
permissions:
  - object: /home/ubuntu/project
    owner: root
    mode: 777
    type:
      - directory
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 900
      runas: root
  AfterInstall:
    - location: ./scripts/after-install.sh
      timeout: 900
  ApplicationStart:
    - location: ./scripts/application-start.sh parameter1 parameter2
      timeout: 900
  ValidateService:
    - location: ./scripts/validate-service.sh
      timeout: 900
I am not able to pass parameters to the scripts.
Currently this is not possible.
As a workaround, you can design your hook scripts to consume system environment variables, which can be defined on an instance at launch (through user data), or you can retrieve the parameters from AWS SSM Parameter Store (especially if they are secrets) using the AWS CLI:
https://docs.aws.amazon.com/systems-manager/latest/userguide/sysman-paramstore-cli.html
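For example, a hook script could pull a value from Parameter Store like this (a sketch; the parameter name is illustrative):
#!/bin/bash
# Fetch a deployment parameter from SSM Parameter Store
PARAM1="$(aws ssm get-parameter \
  --name "/myapp/deploy/param1" \
  --with-decryption \
  --query 'Parameter.Value' \
  --output text)"
echo "param1=${PARAM1}"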
You can also create an if clause in your hook script using the predefined environment variables that CodeDeploy provides, and configure the required values based on the situation.
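A sketch of such a clause, using the DEPLOYMENT_GROUP_NAME variable that CodeDeploy sets for hook scripts (the group names, arguments, and start script path are illustrative):
#!/bin/bash
# CodeDeploy exposes variables such as LIFECYCLE_EVENT, DEPLOYMENT_ID,
# and DEPLOYMENT_GROUP_NAME to hook scripts.
if [ "$DEPLOYMENT_GROUP_NAME" = "staging" ]; then
  ARG="parameter1"
else
  ARG="parameter2"
fi
exec /home/ubuntu/project/scripts/start-app.sh "$ARG"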