I'm trying to install a custom-compiled package that I have in S3 as a zip file. I added this to my CloudFormation template:
"sources" : {
"/opt" : "https://s3.amazonaws.com/mybucket/installers/myapp-3.2.1.zip"
},
It downloads and unzips to /opt without issues, but none of the executable files have the "x" permission, e.g. "-rw-r--r-- 1 root root 220378 Dec 4 18:23 myapp".
If I download the zip and unzip it in any directory, the permissions are Ok.
I already read the CloudFormation documentation and there is no clue there.
Can someone help me figure this out? Thanks in advance.
Maybe you can combine "configSets" (to guarantee the execution order) with a "commands" element and write something like:
"AWS::CloudFormation::Init" : {
"configSets" : {
"default" : [ "download", "fixPermissions" ]
},
"download" : {
"sources" : {
"/opt" : "https://s3.amazonaws.com/mybucket/installers/myapp-3.2.1.zip"
}
},
"fixPermissions" : {
"commands" : {
"fixMyAppPermissions" : {
"command" : "chmod +x /opt/myapp-3.2.1/myapp"
}
}
}
}
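If the zip contains more than one executable, the same idea works with a single recursive command instead of one chmod per file. A small sketch of the fix, run here against a throwaway /tmp tree (on the instance you would point find at /opt/myapp-3.2.1; the bin/ layout is only an assumption for illustration):

```shell
# Simulate the extracted tree, then mark every regular file under it
# executable. On the real instance the path would be /opt/myapp-3.2.1.
mkdir -p /tmp/myapp-3.2.1/bin
touch /tmp/myapp-3.2.1/bin/myapp
find /tmp/myapp-3.2.1 -type f -exec chmod +x {} \;
```

In the template, the "command" value would then become the find invocation instead of the single chmod.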
Sources:
https://s3.amazonaws.com/cloudformation-examples/BoostrappingApplicationsWithAWSCloudFormation.pdf
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html
Given the following GCP services:
BigQuery
Cloud Storage
Cloud Shell
What is the easiest way to create a BigQuery table with the following two-column structure?

| Column | Description | Type | Primary key |
| --- | --- | --- | --- |
| tzid | Time zone identifier | STRING | x |
| bndr | Boundaries | GEOGRAPHY | |
For example:

| tzid | bndr |
| --- | --- |
| Africa/Abidjan | POLYGON((-5.440683 4.896553, -5.303699 4.912035, -5.183637 4.923927, ...)) |
| Africa/Accra | POLYGON((-0.136231 11.13951, -0.15175 11.142384, -0.161168 11.14698, ...)) |
| Pacific/Wallis | MULTIPOLYGON(((-178.350043 -14.384951, -178.344628 -14.394109, ...))) |
Download and unzip timezones.geojson.zip from the evan-siroky repository onto your computer.
Coordinates are structured as follows (geojson format):
{
"type": "FeatureCollection",
"features":
[
{
"type":"Feature",
"properties":
{
"tzid":"Africa/Abidjan"
},
"geometry":
{
"type":"Polygon",
"coordinates":[[[-5.440683,4.896553],[-5.303699,4.912035], ...]]]
}
},
{
"type":"Feature",
"properties": ...
}
]
}
BigQuery does not accept GeoJSON for loading tables; it needs newline-delimited JSON (JSONL). The next steps convert the file to JSONL.
Upload the file timezones_geojson.json to Cloud Storage gs://your-bucket/.
Move the file to the Cloud Shell virtual machine:
gsutil mv gs://your-bucket/timezones_geojson.json .
Parse the file timezones_geojson.json, filter on "features", and return one line per element (using the jq command):
cat timezones_geojson.json | jq -c ".features[]" > timezones_jsonl.json
The previous format will be transformed to:
{
"type":"Feature",
"properties":
{
"tzid":"Africa/Abidjan"
},
"geometry":
{
"type":"Polygon",
"coordinates":[[[-5.440683,4.896553],[-5.303699,4.912035], ... ]]]
}
}
{
"type":"Feature",
"properties":...
"geometry":...
}
Move the JSONL file back to Cloud Storage:
gsutil mv timezones_jsonl.json gs://your-bucket/
Load the JSONL file into BigQuery:
bq load --autodetect --source_format=NEWLINE_DELIMITED_JSON --json_extension=GEOJSON your_dataset.timezones gs://your-bucket/timezones_jsonl.json
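Before running bq load, it can help to confirm that every line of the JSONL file parses as standalone JSON, since one malformed line fails the whole load. A minimal local check, sketched here against a throwaway sample file rather than the real timezones_jsonl.json:

```shell
# Each line of a newline-delimited JSON file must parse on its own.
printf '%s\n' '{"tzid":"Africa/Abidjan"}' '{"tzid":"Africa/Accra"}' > sample_jsonl.json
python3 -c '
import json
for line in open("sample_jsonl.json"):
    json.loads(line)  # raises ValueError on the first malformed line
print("ok")
'
```

If every line parses, the script prints "ok"; otherwise the traceback points at the offending line.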
Working through adding some cfn-init to request data from an S3 bucket.
I believe I've got a syntax problem with the cfn-init.exe call from PowerShell but cannot seem to find where. This structure was taken from the Bootstrapping AWS CloudFormation Windows Stacks example. I've also tried adapting the bash structure from the AWS cfn-init documentation, with no success.
"UserData": {"Fn::Base64": {"Fn::Join": ["\n", [
"<powershell>",
...
"cfn-init.exe -v -s", { "Ref" : "AWS::StackName" },
" -r EC2Instance",
"</powershell>"
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config": {
"files" : {
"C:\\chef\\validator.pem" : {
"source" : "https://s3.amazonaws.com/dtcfstorage/validator.pem",
"authentication" : "s3creds"
}
}
},
"AWS::CloudFormation::Authentication" : {
"s3creds" : {
"type" : "S3",
"roleName" : "awss3chefkeyaccess"
}
}
}
}
The cfn-init.exe call is being run but errors out because the arguments are being passed on new lines:
2018/05/21 15:35:08Z: Message: The errors from user scripts: Usage: cfn-init.exe [options]
or: cfn-init.exe [options]
or: cat | cfn-init.exe [options] -
cfn-init.exe: error: -s option requires an argument
cloudinittest : The term 'cloudinittest' is not recognized as the name of a cmdlet, function, script file, or operable
program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At C:\Windows\TEMP\UserScript.ps1:30 char:1
+ cloudinittest
+ ~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (cloudinittest:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
-r : The term '-r' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the
spelling of the name, or if a path was included, verify that the path is correct and try again.
At C:\Windows\TEMP\UserScript.ps1:31 char:2
+ -r EC2Instance
+ ~~
+ CategoryInfo : ObjectNotFound: (-r:String) [], CommandNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
It's because you have joined using \n at the top. Every argument to the Join function will be separated by a newline, even if you type several on the same line!
Therefore, your cfn-init command has been interpreted as:
cfn-init.exe -v -s
stack-name
-r EC2Instance
...
Since the line is broken, the command doesn't get run properly.
You should join with a space character instead. Try replacing the above with this:
{"Fn::Join": [" ", ["cfn-init.exe -v -s", {"Ref": "AWS::StackName"}, "-r EC2Instance"]]}
I am trying to create a Windows VM with the Chef client via an ARM (Azure Resource Manager) template. I found an example template on GitHub:
https://github.com/Azure/azure-quickstart-templates/tree/master/chef-extension-windows-vm
{
"name": "[concat(variables('vmName'),'/',variables('chefClientName'))]",
"type": "Microsoft.Compute/virtualMachines/extensions",
"apiVersion": "2015-05-01-preview",
"location": "[variables('location')]",
"dependsOn": [
"[concat('Microsoft.Compute/virtualMachines/', variables('vmName'))]"
],
"properties": {
"publisher": "Chef.Bootstrap.WindowsAzure",
"type": "ChefClient",
"typeHandlerVersion": "1201.12",
"settings": {
"client_rb": "[parameters('client_rb')]",
"runlist": "[parameters('runlist')]"
},
"protectedSettings": {
"validation_key": "[parameters('validation_key')]"
}
}
}
I deploy this template in PowerShell; the storage account, vNet, IP, NIC, and VM are created successfully, but the Chef extension creation fails with the following error:
New-AzureResourceGroupDeployment : 3:44:51 PM - Resource Microsoft.Compute/virtualMachines/extensions
'myVM/chefExtension' failed with message 'Extension with publisher 'Chef.Bootstrap.WindowsAzure', type 'ChefClient',
and type handler version '1201.12' could not be found in the extension repository.'
At line:1 char:1
+ New-AzureResourceGroupDeployment -Name $deployName -ResourceGroupName $RGName -T ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [New-AzureResourceGroupDeployment], Exception
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.Resources.NewAzureResourceGroupDeploymentCommand
How can I create a VM with Chef via an ARM template?
Thanks.
The failure is caused by a wrong "typeHandlerVersion": "1201.12" is no longer available, but "1207.12" works fine. To get the available extension information, use the following PowerShell command:
Get-AzureVMAvailableExtension | select ExtensionName,Publisher,Version,PublishedDate
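Applied to the template from the question, only the typeHandlerVersion value needs to change:

```json
"properties": {
  "publisher": "Chef.Bootstrap.WindowsAzure",
  "type": "ChefClient",
  "typeHandlerVersion": "1207.12",
  "settings": {
    "client_rb": "[parameters('client_rb')]",
    "runlist": "[parameters('runlist')]"
  },
  "protectedSettings": {
    "validation_key": "[parameters('validation_key')]"
  }
}
```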
We use the MongoDB GridFS plugin to store uploaded files, and it works at first, but uploads usually fail once they reach about 8 MB. Why can't we upload more than 8 MB?
Checking the status in MongoDB, GridFS created two collections:
db.fs.chunks
db.fs.files
Typing the command:
> db.fs.chunks.stats()
{
"ns" : "db.fs.chunks",
"count" : 376,
"size" : 84212168,
"avgObjSize" : 223968.53191489363,
"storageSize" : 84250624,
"numExtents" : 8,
"nindexes" : 2,
"lastExtentSize" : 20594688,
"paddingFactor" : 1,
"flags" : 1,
"totalIndexSize" : 49056,
"indexSizes" : {
"id" : 24528,
"files_id_1_n_1" : 24528
},
"ok" : 1
}
Is there a limit on storageSize?
Thanks to all for the help,
Todd
The following storage limits are in place on CloudFoundry.com:
mysql: 128MB
redis: 16MB
mongo: 240MB
It may be that the connection is timing out during the upload. What actually happens when you try to perform the upload?
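For what it's worth, the storageSize reported by db.fs.chunks.stats() is nowhere near the mongo quota, which also points away from a storage limit. A quick shell sanity check using the numbers above:

```shell
# Compare the reported storageSize (84250624 bytes, from the stats output)
# against the 240 MB mongo quota on CloudFoundry.com.
storage_bytes=84250624
quota_bytes=$((240 * 1024 * 1024))
if [ "$storage_bytes" -lt "$quota_bytes" ]; then
  echo "under quota"
else
  echo "over quota"
fi
```

This prints "under quota", so the 8 MB failures are unlikely to be a database storage issue.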
I am using ExtJS 4.1 and I am deploying my simple HelloExt program on GlassFish V3.1.
I am trying to create a build from Sencha SDK.
I have used the following two commands...
C:\>sencha create jsb -a http://localhost:8080/HelloExt/index.jsp -p appname.jsb3 -v
C:\>sencha build -p appname.jsb3 -v -d .
As per the documentation, it should create an app-all.js file. But where does it create the file?
How can I know if the build was created successfully or not?
Where are the generated JS files?
I searched but cannot find anything like app-all.js.
For more information:
I am using JDK 1.6.0_12 and GlassFish V3.1 application server.
Here is the edited content of the question:
When I try to use the Sencha SDK, it generates a .dpf file in the classpath.
The contents of the .dpf file are as below:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE glassfish-web-app PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 Servlet 3.0//EN" "http://glassfish.org/dtds/glassfish-web-app_3_0-1.dtd">
<glassfish-web-app error-url="">
<context-root>/HelloExt</context-root>
<class-loader delegate="true"/>
<jsp-config>
<property name="keepgenerated" value="true">
<description>Keep a copy of the generated servlet class' java code.</description>
</property>
</jsp-config>
</glassfish-web-app>
Can anyone tell me why it generated a .dpf file here, and why it's not generating the app-all.js file?
Try running the command from inside the app root directory and then using a relative path:
0) open cmd window
1) run in cmd window: "cd C:\[webserver_webapp_root]\[app_name]"
In other words change the cmd directory to the app root. Fill in the bracketed text above with the correct paths.
2) run in cmd window: "sencha create jsb -a index.html -p app.jsb3 -v"
The app.jsb3 should be created in your app's root directory (C:\[webserver_webapp_root]\[app_name]). Open it up and make sure it contains all of your app classes, it should look something like this:
{
"projectName": "Project Name",
"licenseText": "Copyright(c) 2012 Company Name",
"builds": [
{
"name": "All Classes",
"target": "all-classes.js",
"options": {
"debug": true
},
"files": [
{
"clsName": "YourApp.view.Viewport",
"name": "Viewport.js",
"path": "app/view/"
},
// plus a lot more classes...
]
},
{
"name": "Application - Production",
"target": "app-all.js",
"compress": true,
"files": [
{
"path": "",
"name": "all-classes.js"
},
{
"path": "",
"name": "app.js"
}
]
}
],
"resources": []
}
If everything looks fine then you can go onto the next step, if not then there is something wrong with your app directory structure and you need to fix it per Sencha recommended ExtJS application architecture.
You can also use any error messages to help identify the problem.
3) update placeholders ("Project Name", etc) at the top of app.jsb3
4) run in cmd window: "sencha build -p app.jsb3 -d . -v"
The app-all.js file should also be created in the app's root directory. If the cmd window doesn't give any errors before it says "Done Building!" then you are all done. You can now change your index.html script link to point to app-all.js instead of app.js.
If there are errors then you have to fix those and run this again.
Other things you can try:
In response to your last comment: the -p switch parameter should be a .jsb3 file, not .jsb.
Make sure that the web server is running and that your app runs without any errors before you try to use the SDK Tools.
Then try these:
C:\Projects\HelloExt\build\web>sencha create jsb -a index.jsp -p HelloExt.jsb3 -v
C:\Projects\HelloExt>sencha create jsb -a index.jsp -p HelloExt.jsb3 -v
C:\>sencha create jsb -a [actual IP address]:8080/HelloExt/index.jsp -p HelloExt.jsb3 -v
Fill in your actual IP address where the brackets are (not localhost).
This should produce the jsb3 file shown in #2 above then you can move on to step #3 above.