I'm using an array of snippets with the following format:
{
    name: 'response',
    trigger: 'resp|rp',
    path: ['paths', '.', '.', '.'],
    content: [
        '${1:code}:',
        ' description: ${2}',
        ' schema: ${3}',
        '${4}'
    ].join('\n')
},
How can I use a regex for the trigger? I tried the regex key with no luck.
It's not possible to do via the public API (see the register method of snippetManager). You can make it work by accessing snippetNameMap directly, but it would be better to create a feature request on Ace's issue tracker.
I am trying to use stage variables, but I always get this error:
{
  "logref": "some_uid",
  "message": "Invalid stage variable value: null. Please use values with alphanumeric characters and the symbols ' ', -', '.', '_', ':', '/', '?', '&', '=', and ','."
}
My goal is to call SNS from API Gateway without the caller having to specify the TopicArn and the Message in the query string.
So in the Integration Request I am mapping the query string TopicArn to stageVariables.TopicArn (I have tried '$stageVariables.TopicArn' as well).
And then, in the Stage Variables section of the AWS console, I enter the name TopicArn and the value arn:aws:sns:my_region:my_account_id:test-topic.
After deploying my API, I test it from the AWS console and get the same error shown above.
What am I doing wrong? Is this achievable?
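For reference, attaching the stage variable to a deployed stage can also be expressed with boto3; a rough sketch of what the console setup above should be equivalent to (the API ID and stage name are placeholders):

import boto3

apigw = boto3.client("apigateway")

# Attach the TopicArn stage variable to the deployed stage
# ("my_rest_api_id" and "test" are placeholder values).
apigw.update_stage(
    restApiId="my_rest_api_id",
    stageName="test",
    patchOperations=[{
        "op": "replace",
        "path": "/variables/TopicArn",
        "value": "arn:aws:sns:my_region:my_account_id:test-topic",
    }],
)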
We are trying to figure out how to create a Compute Engine template and set information such as passwords via variables at the moment the final instance is created by Deployment Manager, rather than baking it into the base image.
When deploying something from the Marketplace, you can see that passwords are generated by "password.py" and stored as metadata in the VM's template. But I can't find the code that writes this data into the VM's disk image.
Could someone explain how this can be achieved?
Edit:
I found out that startup scripts are able to read the instance's metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata). Is this how it is done in Marketplace click-to-deploy scripts like https://console.cloud.google.com/marketplace/details/click-to-deploy-images/wordpress? Or is there an even better way to accomplish this?
The best way is to use the metadata server.
In a startup script, use this to retrieve all the attributes of your VM:
curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetada
ta/v1/instance/attributes/"
Then, do what you want with the values.
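The same lookup can be done from code; for example, a rough Python equivalent of the curl call above (assuming the requests library is installed on the instance):

import requests

METADATA_URL = "http://metadata.google.internal/computeMetadata/v1/instance/attributes/"
HEADERS = {"Metadata-Flavor": "Google"}  # required, otherwise the metadata server rejects the request

# List the custom attribute keys set on this instance, then fetch each value.
keys = requests.get(METADATA_URL, headers=HEADERS).text.splitlines()
attributes = {key: requests.get(METADATA_URL + key, headers=HEADERS).text for key in keys}
print(attributes)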
Don't forget to delete the secrets from the metadata after use, or change them on the instance. Secrets must stay secret.
By the way, I would recommend that you have a look at another tool: Berglas. Berglas is made by a Google Developer Advocate specialized in security, Seth Vargo. In summary, the principle is:
Bootstrap a bucket with Berglas
Create a secret in this bucket with Berglas
Pass the reference to this secret in your Compute Engine metadata (berglas://<my_bucket>/<my secret name>)
Use Berglas in the startup script to resolve the secret.
All these actions are possible on the command line, so integration in a script is possible.
You can use Python templates; this gives you more flexibility. In your YAML you can call the Python script to fill in the necessary information. From the documentation:
imports:
- path: vm-template.py
resources:
- name: vm-1
  type: vm-template.py
- name: a-new-network
  type: compute.v1.network
  properties:
    routingConfig:
      routingMode: REGIONAL
    autoCreateSubnetworks: true
Where vm-template.py is a Python script:
"""Creates the virtual machine."""
COMPUTE_URL_BASE = 'https://www.googleapis.com/compute/v1/'
def GenerateConfig(unused_context):
"""Creates the first virtual machine."""
resources = [{
'name': 'the-first-vm',
'type': 'compute.v1.instance',
'properties': {
'zone': 'us-central1-f',
'machineType': ''.join([COMPUTE_URL_BASE, 'projects/[MY_PROJECT]',
'/zones/us-central1-f/',
'machineTypes/f1-micro']),
'disks': [{
'deviceName': 'boot',
'type': 'PERSISTENT',
'boot': True,
'autoDelete': True,
'initializeParams': {
'sourceImage': ''.join([COMPUTE_URL_BASE, 'projects/',
'debian-cloud/global/',
'images/family/debian-9'])
}
}],
'networkInterfaces': [{
'network': '$(ref.a-new-network.selfLink)',
'accessConfigs': [{
'name': 'External NAT',
'type': 'ONE_TO_ONE_NAT'
}]
}]
}
}]
return {'resources': resources}
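The context argument (unused in the documentation sample above) is also how deployment-time values can be pushed into the instance metadata, where a startup script can read them back. A rough sketch of just that part; the admin-password property name is a made-up example, and the other instance properties from the sample above are omitted:

def GenerateConfig(context):
  """Sketch: copy a deployment property into the instance metadata."""
  resources = [{
      'name': context.env['name'],
      'type': 'compute.v1.instance',
      'properties': {
          # zone, machineType, disks and networkInterfaces omitted; they would
          # be the same as in the sample above.
          'metadata': {
              'items': [{
                  # A startup script on the VM can read this key back from the
                  # metadata server.
                  'key': 'admin-password',
                  'value': context.properties['admin-password'],
              }],
          },
      },
  }]
  return {'resources': resources}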
Now, for the password, it depends on which OS you are using, Windows or Linux.
On Linux, you can add a startup script which injects an SSH public key.
On Windows, you can first prepare the proper key; see Automate password generation.
I am using a GeoChart (https://developers.google.com/chart/interactive/docs/gallery/geochart) on a website to create a really simple representation of distributions around the world. For example, we have 3 people in the USA, 4 people in Sweden, and so on.
Everything works fine, but my browser sometimes warns me that I need to specify an API key.
Now my question is: Why do I need an API key when I only use static data?
Thank you very much in advance for your answer. Here is my chart configuration:
{
  type: 'GeoChart',
  columnNames: ['Country', 'No'],
  data: [
    ["United States", 2], ["Sweden", 4], ...
  ],
  options: {
    region: 'world',
  }
}
Trying to format my YAML to download a script from an S3 bucket and run it via SSM.
I've tried many different formats, but all the examples seem to be JSON-formatted.
- action: aws:downloadContent
  name: downloadContent
  inputs:
    sourceType: "S3"
    sourceInfo:
      path: https://bucket-name.s3.amazonaws.com/scripts/script.ps1
    destinationPath: "C:\\Windows\\Temp"
Fails with the following message:
standardError": "invalid format in plugin properties map[destinationPath:C:\\Windows\\Temp sourceInfo:map[path:https://bucket-name.s3.amazonaws.com/scripts/script.ps1] sourceType:S3]; \nerror json: cannot unmarshal object into Go struct field DownloadContentPlugin.sourceInfo of type string"
This is what ended up working for me:
- action: aws:downloadContent
  name: downloadContent
  inputs:
    sourceType: S3
    sourceInfo: "{\"path\":\"https://bucket-name.s3.amazonaws.com/scripts/script.ps1\"}"
    destinationPath: "C:\\Windows\\Temp"
I needed that exact JSON syntax embedded in the YAML.
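If you build the document programmatically, the same point applies: sourceInfo has to be serialized to a JSON string before it goes into the YAML. A rough Python sketch (assuming PyYAML is available):

import json
import yaml  # PyYAML

# sourceInfo must be a JSON *string*, not a nested YAML mapping.
source_info = json.dumps({"path": "https://bucket-name.s3.amazonaws.com/scripts/script.ps1"})

step = [{
    "action": "aws:downloadContent",
    "name": "downloadContent",
    "inputs": {
        "sourceType": "S3",
        "sourceInfo": source_info,
        "destinationPath": "C:\\Windows\\Temp",
    },
}]

print(yaml.safe_dump(step))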
Posting a JSON example as well, as we struggled to find working examples in JSON. Hoping this will help someone in the future.
Our error was related to the "sourceInfo" key:
> invalid format in plugin properties map[destinationPath:C:\PATHONTARGETSYSTEM sourceInfo:map[path:https://S3BUCKETNAME.s3.amazonaws.com/SCRIPTNAME.ps1] sourceType:S3]; error json: cannot unmarshal object into Go struct field DownloadContentPlugin.sourceInfo of type string
The solution was ultimately a combination of the wrong S3 URL format and the wrong JSON formatting. It should look like this:
"sourceInfo": "{\"path\": \"https://s3.amazonaws.com/S3BUCKETNAME/SCRIPTNAME.ps1\"}",
I would like to merge the incoming events into one event based on one of the fields.
Input Events:
{
    ID: '123',
    eventType: 'a',
    eventCode: 1
},
{
    ID: '123',
    eventType: 'b',
    eventCode: 2
},
{
    ID: '123',
    eventType: 'c',
    eventCode: 3
}
Expected Output:
{
    ID: '123',
    events: [{
        eventType: 'a',
        eventCode: 1
    },
    {
        eventType: 'b',
        eventCode: 2
    },
    {
        eventType: 'c',
        eventCode: 3
    }]
}
I am grouping the events based on a window of 4. So I need to process the 4 events, merge them, and pass the result on to the next step.
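For illustration, here is the merge I am after, sketched in plain Python over the sample events above (the stream itself would of course be handled by Siddhi):

import json
from collections import defaultdict

events = [
    {"ID": "123", "eventType": "a", "eventCode": 1},
    {"ID": "123", "eventType": "b", "eventCode": 2},
    {"ID": "123", "eventType": "c", "eventCode": 3},
]

# Group the event payloads under their shared ID.
grouped = defaultdict(list)
for event in events:
    grouped[event["ID"]].append({"eventType": event["eventType"],
                                 "eventCode": event["eventCode"]})

merged = [{"ID": event_id, "events": items} for event_id, items in grouped.items()]
print(json.dumps(merged, indent=2))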
Use Case:
I would like to store the generated output in MongoDB or pass it on to an external service.
Is this possible using Siddhi?
NOTE: I see that a similar question has already been asked, but the response is from 5 years ago, and Siddhi has come a long way since then.
You can use the Siddhi app below to achieve your requirement. I have utilized the string extension to do this. But please note the generated output is exactly the one you requested; if you want proper JSON output, you might have to utilize the execution json extension as well. Follow the readme for details on extension usage.
#App:name("testJsonConcat")
#App:description("Description of the plan")
-- Please refer to https://docs.wso2.com/display/SP400/Quick+Start+Guide on getting started with SP editor.
define stream inputStream(id string, eventType string, eventCode int);
partition with (id of inputStream)
begin
from inputStream
select id, str:concat("{eventType: '", eventType, "' , eventCode :",eventCode,"}") as jsonString
insert into #formattedStream;
from #formattedStream#window.lengthBatch(4)
select str:concat("{ ID : '", id, "',events: [", str:groupConcat(jsonString),"]}") as result
insert into concatStream;
end;
from concatStream#log()
select *
insert into temp;