How to create a file without any content using CloudFormation Init? - amazon-web-services

I want an empty file to be created by CloudFormation, so I tried providing an empty string in the 'content' field of my CloudFormation template YAML file:
Resources:
  Instance:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          files:
            /var/log/app.log:
              content: ""
              mode: '000646'
I get the following error: File specified without source or content
2023-01-10 14:32:55,132 [DEBUG] Writing content to /var/log/app.log
2023-01-10 14:32:55,132 [ERROR] Error encountered during build of config: File specified without source or content
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 579, in run_config
CloudFormationCarpenter(config, self._auth_config, self.strict_mode).build(worklog)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 267, in build
self._config.files, self._auth_config)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/file_tool.py", line 131, in apply
self._write_file(f, attribs, auth_config)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/file_tool.py", line 221, in _write_file
raise ToolError("File specified without source or content")
cfnbootstrap.construction_errors.ToolError: File specified without source or content
2023-01-10 14:32:55,137 [ERROR] -----------------------BUILD FAILED!------------------------
2023-01-10 14:32:55,137 [ERROR] Unhandled exception during build: File specified without source or content
Traceback (most recent call last):
File "/opt/aws/bin/cfn-init", line 181, in <module>
worklog.build(metadata, configSets, strict_mode)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 137, in build
Contractor(metadata, strict_mode).build(configSets, self)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 567, in build
self.run_config(config, worklog)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 579, in run_config
CloudFormationCarpenter(config, self._auth_config, self.strict_mode).build(worklog)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 267, in build
self._config.files, self._auth_config)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/file_tool.py", line 131, in apply
self._write_file(f, attribs, auth_config)
File "/usr/lib/python3.7/site-packages/cfnbootstrap/file_tool.py", line 221, in _write_file
raise ToolError("File specified without source or content")
cfnbootstrap.construction_errors.ToolError: File specified without source or content
I get the same error if I omit the 'content' field.
Can you tell me the proper way to instruct CloudFormation to create an empty file?

You need to specify content. If you really want to create an empty file, then I suggest using commands instead.

As brushtakopo pointed out, CloudFormation Init doesn't support creating empty files, so the only option appears to be to use a command.
I just wanted to include the corresponding commands, since they weren't included in his answer.
Resources:
  Instance:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          commands:
            a_createLogFile:
              command: "touch /var/log/app.log"
              test: "test ! -e /var/log/app.log"
            b_chmodLogFile:
              command: "chmod 646 /var/log/app.log"
              test: "test -e /var/log/app.log"
Note: The command prefixed with a_ creates the file, and the command prefixed with b_ changes its access permissions. The prefixes ensure the first command runs before the second, since CloudFormation executes commands in alphabetical order:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
The commands are processed in alphabetical order by name.
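If you prefer a single command, install(1) can create the file and set its mode in one step. A minimal sketch (the command name createLogFile is illustrative):
Resources:
  Instance:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          commands:
            createLogFile:
              # install copies /dev/null to the target, creating an empty file with the given mode
              command: "install -m 646 /dev/null /var/log/app.log"
              test: "test ! -e /var/log/app.log"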

Related

AWS ElasticBeanstalk ebextensions command fails during deployment but works fine when I run it manually

I'm using .ebextensions to run some automation commands, but some commands fail without explaining the cause or printing any informative stack trace; when I run the same command manually, it works like a charm with no issues.
the command:
the command:
aws logs put-retention-policy --log-group-name `{"Fn::Join":["/", ["/aws/eb", { "Ref":"AWSEBEnvironmentName" }, "var/log/app-log"]]}` --retention-in-days 7 --region `{"Ref":"AWS::Region"}`
the stack trace:
[ERROR] Command 03 (aws logs put-retention-policy --log-group-name /aws/log-group-name --retention-in-days 7 --region us-west-2) failed
2021-04-19 21:33:50,248 [ERROR] Error encountered during build of prebuild_2_squirrel: Command 03 failed
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 542, in run_config
CloudFormationCarpenter(config, self._auth_config).build(worklog)
File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 260, in build
changes['commands'] = CommandTool().apply(self._config.commands)
File "/usr/lib/python2.7/site-packages/cfnbootstrap/command_tool.py", line 117, in apply
raise ToolError(u"Command %s failed" % name)
ToolError: Command 03 failed
2021-04-19 21:33:50,250 [ERROR] -----------------------BUILD FAILED!------------------------
2021-04-19 21:33:50,250 [ERROR] Unhandled exception during build: Command 03 failed
Traceback (most recent call last):
File "/opt/aws/bin/cfn-init", line 171, in <module>
worklog.build(metadata, configSets)
File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 129, in build
Contractor(metadata).build(configSets, self)
File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 530, in build
self.run_config(config, worklog)
File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 542, in run_config
CloudFormationCarpenter(config, self._auth_config).build(worklog)
File "/usr/lib/python2.7/site-packages/cfnbootstrap/construction.py", line 260, in build
changes['commands'] = CommandTool().apply(self._config.commands)
File "/usr/lib/python2.7/site-packages/cfnbootstrap/command_tool.py", line 117, in apply
raise ToolError(u"Command %s failed" % name)
Has anyone seen this before, or does anyone know what the issue could be?
Thanks in advance.
After checking /var/log/cfn-init-cmd.log, I saw details about the issue: the log group gets created a few seconds later, so I just needed to wait for it.
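Based on that, one workaround is to retry the command until the log group exists. A sketch only (the retry bounds and command name are illustrative, and it assumes the backtick Fn::Join/Ref substitutions resolve the same way inside the loop):
container_commands:
  03_put_retention_policy:
    # retry for up to ~60 seconds while the log group is being created
    command: for i in $(seq 1 12); do aws logs put-retention-policy --log-group-name `{"Fn::Join":["/", ["/aws/eb", { "Ref":"AWSEBEnvironmentName" }, "var/log/app-log"]]}` --retention-in-days 7 --region `{"Ref":"AWS::Region"}` && break; sleep 5; done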

sam build --use-container failed mounting directory

There was no issue building the project a little while back, but it has started throwing the error below.
RuntimeError: Container does not exist. Cannot get logs for this container
Normally this happens when Docker cannot mount the shared directory, but in this case even adding the lambda directory manually in the Docker interface didn't help!
Complete debug log of sam build --use-container
Building function 'SAListManagerUrlLambda'
Fetching lambci/lambda:build-python3.7 Docker container image......
Mounting C:\Users\xxxx\xxxx\xxxx\xxxx\functions\xxxx-xxxx\xxxx-xxxx as /tmp/samcli/source:ro,delegated inside runtime container
Container was not created. Skipping deletion
Sending Telemetry: {'metrics': [{'commandRun': {'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'duration': 1292, 'exitReason': 'RuntimeError', 'exitCode': 255, 'requestId': 'cbfcd29c-16ae-xxxx-xxxx-b9ffec8de75a', 'installationId': 'fece8ccc-cb84-xxxx-xxxx-ac72820ef0c3', 'sessionId': 'e1cbc287-1850-xxxx-xxxx-3a235769f7fb', 'executionEnvironment': 'CLI', 'pyversion': '3.7.6', 'samcliVersion': '0.53.0'}}]}
HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
Traceback (most recent call last):
File "D:\obj\windows-release\37amd64_Release\msi_python\zip_amd64\runpy.py", line 193, in _run_module_as_main
File "D:\obj\windows-release\37amd64_Release\msi_python\zip_amd64\runpy.py", line 85, in _run_code
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\__main__.py", line 12, in <module>
cli(prog_name="sam")
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 782, in main
rv = self.invoke(ctx)
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 610, in invoke
return callback(*args, **kwargs)
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\decorators.py", line 73, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\click\core.py", line 610, in invoke
return callback(*args, **kwargs)
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\telemetry\metrics.py", line 96, in wrapped
raise exception # pylint: disable=raising-bad-type
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\telemetry\metrics.py", line 62, in wrapped
return_value = func(*args, **kwargs)
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\build\command.py", line 129, in cli
mode,
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\commands\build\command.py", line 194, in do_cli
artifacts = builder.build()
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\build\app_builder.py", line 117, in build
function.metadata)
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\build\app_builder.py", line 271, in _build_function
options)
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\lib\build\app_builder.py", line 369, in _build_function_on_container
container.wait_for_logs(stdout=stdout_stream, stderr=stderr_stream)
File "C:\Amazon\AWSSAMCLI\runtime\lib\site-packages\samcli\local\docker\container.py", line 197, in wait_for_logs
raise RuntimeError("Container does not exist. Cannot get logs for this container")
RuntimeError: Container does not exist. Cannot get logs for this container
In my case the reason was different: Action Center's Focus Assist was set to Alarms Only.
This caused the shared-directory notification to fail, which caused the build failure.
So, make sure your Focus Assist is set to OFF.
It seems that many situations can trigger the same error. For more information, the --debug option can be used like this:
sam build --use-container --debug
I can see that you are already using it, because you got extra information like this:
Sending Telemetry: {'metrics': [{'commandRun': {'awsProfileProvided': False, 'debugFlagProvided': True, 'region': '', 'commandName': 'sam build', 'duration': 1292, 'exitReason': 'RuntimeError', 'exitCode': 255, 'requestId': 'cbfcd29c-16ae-xxxx-xxxx-b9ffec8de75a', 'installationId': 'fece8ccc-cb84-xxxx-xxxx-ac72820ef0c3', 'sessionId': 'e1cbc287-1850-xxxx-xxxx-3a235769f7fb', 'executionEnvironment': 'CLI', 'pyversion': '3.7.6', 'samcliVersion': '0.53.0'}}]}
HTTPSConnectionPool(host='aws-serverless-tools-telemetry.us-west-2.amazonaws.com', port=443): Read timed out. (read timeout=0.1)
Traceback (most recent call last):
In my case, I suspected that the error was in sending the telemetry.
My guess is that the build process somehow needs to pass the region, and in my case it is not us-west-2.
Anyway, I disabled telemetry as specified in the documentation, and it now works.
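For reference, the documented switch is the SAM_CLI_TELEMETRY environment variable; on Windows (as in the paths above) that would be, for example:
set SAM_CLI_TELEMETRY=0
sam build --use-container --debug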
In my case, the local disk on my Cloud9 instance was almost full, so I had to delete some of the Docker images that come pre-installed with Cloud9.
To list images and remove one, use:
docker images
docker rmi <image>
This frees up space so that your build will not fail the next time.

Program Fails for Spark_Home at foreachPartition with change in location of database load utility from one module to another

I am working on a Python Spark project. Initially, I wrote a script to load a dataframe into Postgres for a particular client, including a utility function that loads the data:
df.rdd.repartition(self.__max_conn).foreachPartition(
    lambda iterator: load_utils.load_tab_postgres(conn_prop=conn_prop,
                                                  tab_name=<tablename>,
                                                  iterator=iterator))
Initially the entire codebase was in a single module, including the snippet above and load_utils(), and it worked perfectly fine.
Later I had to extract the common code, including load_utils, into a base module that could be used by different client modules. This is when the same code started failing with the error below:
File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 764, in foreachPartition
File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1004, in count
File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 995, in sum
File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 869, in fold
File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 771, in collect
File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 45, in deco
File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 18 in stage 126.0 failed 4 times, most recent failure: Lost task 18.3 in stage 126.0 (TID 24028, tbsatad6r15g24.company.co.us, executor 242): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 98, in main
    command = pickleSer._read_with_length(infile)
  File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 164, in _read_with_length
    return self.loads(obj)
  File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 422, in loads
    return pickle.loads(obj)
  File "build/bdist.linux-x86_64/egg/basemodule/__init__.py", line 12, in <module>
    import basemodule.entitymodule.base
  File "build/bdist.linux-x86_64/egg/basemodule/entitymodule/base.py", line 12, in <module>
  File "build/bdist.linux-x86_64/egg/basemodule/contexts.py", line 17, in <module>
  File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/conf.py", line 104, in __init__
    SparkContext._ensure_initialized()
  File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/context.py", line 245, in _ensure_initialized
    SparkContext._gateway = gateway or launch_gateway()
  File "/opt/cloudera/parcels/CDH-5.13.1-1.cdh5.13.1.p0.2/lib/spark/python/lib/pyspark.zip/pyspark/java_gateway.py", line 48, in launch_gateway
    SPARK_HOME = os.environ["SPARK_HOME"]
  File "/dhcommon/dhpython/python/lib/python2.7/UserDict.py", line 23, in __getitem__
    raise KeyError(key)
KeyError: 'SPARK_HOME'
Below is the spark-submit command used to run the code:
spark-submit --master yarn --deploy-mode cluster \
  --driver-class-path postgresql-42.2.4.jre6.jar \
  --jars spark-csv_2.10-1.4.0.jar,commons-csv-1.4.jar,postgresql-42.2.4.jre6.jar \
  --py-files project.egg driver_file.py
In both scenarios above, the load_utils file containing the load_tab_postgres method is bundled into project.egg.
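The executor-side frames (basemodule/contexts.py triggering pyspark/conf.py's __init__ during unpickling) suggest the base module builds a SparkConf/SparkContext at import time, which then requires SPARK_HOME on the worker. A sketch of one common mitigation, assuming that diagnosis (the names get_spark_context and _sc are illustrative, not from the original code):
# contexts.py (sketch): create the SparkContext lazily rather than at import
# time, so that importing this module on an executor does not launch a gateway.
from pyspark import SparkConf, SparkContext

_sc = None

def get_spark_context():
    # Return a singleton SparkContext, creating it on first use (driver side).
    global _sc
    if _sc is None:
        _sc = SparkContext(conf=SparkConf())
    return _sc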

"xml.sax._exceptions.SAXReaderNotAvailable: No parsers found" when run in jenkins

So I'm working towards automated staging deployments via Jenkins and Ansible. Part of this is using a script from Ansible called ec2.py in order to dynamically retrieve a list of matching servers to deploy to.
When SSH-ing into the Jenkins server and running the script as the jenkins user, the script runs as expected. However, running the script from within Jenkins leads to the following error:
ERROR: Inventory script (ec2/ec2.py) had an execution error: Traceback (most recent call last):
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 1262, in <module>
Ec2Inventory()
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 159, in __init__
self.do_api_calls_update_cache()
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 386, in do_api_calls_update_cache
self.get_instances_by_region(region)
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/ec2/ec2.py", line 417, in get_instances_by_region
reservations.extend(conn.get_all_instances(filters = { filter_key : filter_values }))
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/ec2/connection.py", line 585, in get_all_instances
max_results=max_results)
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/ec2/connection.py", line 681, in get_all_reservations
[('item', Reservation)], verb='POST')
File "/opt/bitnami/apps/jenkins/jenkins_home/jobs/Deploy API/workspace/deploy/.local/lib/python2.7/site-packages/boto/connection.py", line 1181, in get_list
xml.sax.parseString(body, h)
File "/usr/lib/python2.7/xml/sax/__init__.py", line 43, in parseString
parser = make_parser()
File "/usr/lib/python2.7/xml/sax/__init__.py", line 93, in make_parser
raise SAXReaderNotAvailable("No parsers found", None)
xml.sax._exceptions.SAXReaderNotAvailable: No parsers found
I don't know too much about python, so I'm not sure how to debug this issue further.
So it turns out the issue was due to Jenkins overwriting the default LD_LIBRARY_PATH variable. By unsetting that variable before running Python, I was able to make the script work!
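A minimal sketch of that workaround as a Jenkins "Execute shell" build step (--list is the standard flag Ansible passes to dynamic inventory scripts):
# clear the variable Jenkins overrode, then run the inventory script
unset LD_LIBRARY_PATH
python ec2/ec2.py --list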

Elastic Beanstalk: dictionary update sequence element #0 has length 1; 2 is required

We had a working Elastic Beanstalk environment, but after adding a config file, we've started getting this error during deploy:
Error occurred during build: dictionary update sequence element #0 has length 1; 2 is required
The config file looks like this:
# .ebextensions/seed.config
container_commands:
  01_set_tenant_gateway:
    command: rake db:seed:set_tenant_gateway
    leader_only: true
    env: our-env
The log gives us this:
2015-04-09 18:27:48,220 [DEBUG] Running command seed_tenant_gateway
2015-04-09 18:27:48,220 [DEBUG] Generating defaults for command seed_tenant_gateway <<<
2015-04-09 18:27:48,405 [DEBUG] Running test for command seed_tenant_gateway
2015-04-09 18:27:48,406 [ERROR] Unhandled exception during build: dictionary update sequence element #0 has length 1; 2 is required
Traceback (most recent call last):
  File "/opt/aws/bin/cfn-init", line 122, in <module>
    worklog.build(detail.metadata, configSets)
  File "/usr/lib/python2.6/site-packages/cfnbootstrap/construction.py", line 117, in build
    Contractor(metadata).build(configSets, self)
  File "/usr/lib/python2.6/site-packages/cfnbootstrap/construction.py", line 502, in build
    self.run_config(config, worklog)
  File "/usr/lib/python2.6/site-packages/cfnbootstrap/construction.py", line 511, in run_config
    CloudFormationCarpenter(config, self._auth_config).build(worklog)
  File "/usr/lib/python2.6/site-packages/cfnbootstrap/construction.py", line 247, in build
    changes['commands'] = CommandTool().apply(self._config.commands)
  File "/usr/lib/python2.6/site-packages/cfnbootstrap/command_tool.py", line 84, in apply
    testResult = ProcessHelper(test, env=env, cwd=cwd).call()
  File "/usr/lib/python2.6/site-packages/cfnbootstrap/util.py", line 397, in __init__
    self._env = dict(env)
ValueError: dictionary update sequence element #0 has length 1; 2 is required
What might be going on here?
1) Syntax for the env key:
env:
  <variable name>: <variable value>
2) Also, you need to enclose the command in quotes:
command: "<command to run>"
from http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
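Putting both fixes together, the original config would look roughly like this (a sketch only: the variable name RAILS_ENV is an assumption, since the original env: our-env value is not a key-value pair):
# .ebextensions/seed.config
container_commands:
  01_set_tenant_gateway:
    command: "rake db:seed:set_tenant_gateway"
    leader_only: true
    env:
      RAILS_ENV: our-env  # RAILS_ENV is illustrative; substitute your actual variable name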