I am trying to update a Lex bot using the Python SDK's put_bot function. I need to pass a checksum value, as specified here. How do I get this value?
So far I have tried the functions below and the checksum values returned from them:
The get_bot with the prod alias
The get_bot_alias function
The get_bot_aliases function
Is the checksum value described in the documentation below the one I need?
From the Lex Model Building Service documentation:
checksum (string) --
Identifies a specific revision of the $LATEST version.
When you create a new bot, leave the checksum field blank. If you specify a checksum you get a BadRequestException exception.
When you want to update a bot, set the checksum field to the checksum of the most recent revision of the $LATEST version. If you don't specify the checksum field, or if the checksum does not match the $LATEST version, you get a PreconditionFailedException exception.
You should first retrieve the checksum of your bot if you want to update it.
You should be able to use the same checksum that is returned from get_bot_aliases().
This is the example response from the get_bot_aliases() function:
{
    'BotAliases': [
        {
            'name': 'string',
            'description': 'string',
            'botVersion': 'string',
            'botName': 'string',
            'lastUpdatedDate': datetime(2015, 1, 1),
            'createdDate': datetime(2015, 1, 1),
            'checksum': 'string',  # <-- checksum here
            'conversationLogs': {
                'logSettings': [
                    {
                        'logType': 'AUDIO'|'TEXT',
                        'destination': 'CLOUDWATCH_LOGS'|'S3',
                        'kmsKeyArn': 'string',
                        'resourceArn': 'string',
                        'resourcePrefix': 'string'
                    },
                ],
                'iamRoleArn': 'string'
            }
        },
    ],
    'nextToken': 'string'
}
If you are trying to update an intent, first call get_intent, save the checksum, and use it in put_intent. If you are using the put_bot API, first call get_bot, save the checksum from that response, and use it in put_bot.
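A minimal sketch of that get_bot/put_bot flow (the bot name 'MyBot' is hypothetical; put_bot replaces the whole definition, so carry over any fields you want to keep):

import boto3

client = boto3.client('lex-models')

# The checksum that put_bot expects is the one on the $LATEST revision.
bot = client.get_bot(name='MyBot', versionOrAlias='$LATEST')

# Re-submit the definition with your changes, passing the checksum through.
# put_bot replaces the whole bot, so copy over every field you want to keep.
client.put_bot(
    name=bot['name'],
    locale=bot['locale'],
    childDirected=bot['childDirected'],
    intents=bot.get('intents', []),
    checksum=bot['checksum'],
)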
I'm trying to use Boto3 to get the number of vulnerabilities from my images in my repositories. I have a list of repository names and image IDs that are getting passed into this function. Based on their documentation, I'm expecting a response like this when I filter for ['imageScanFindings']:
'imageScanFindings': {
    'imageScanCompletedAt': datetime(2015, 1, 1),
    'vulnerabilitySourceUpdatedAt': datetime(2015, 1, 1),
    'findingSeverityCounts': {
        'string': 123
    },
    'findings': [
        {
            'name': 'string',
            'description': 'string',
            'uri': 'string',
            'severity': 'INFORMATIONAL'|'LOW'|'MEDIUM'|'HIGH'|'CRITICAL'|'UNDEFINED',
            'attributes': [
                {
                    'key': 'string',
                    'value': 'string'
                },
            ]
        },
    ],
    ...
What I really need is the 'findingSeverityCounts' number; however, it's not showing up in my response. Here's my code and the response I get:
main.py
import boto3

repo_names = ['cftest/repo1', 'your-repo-name', 'cftest/repo2']
image_ids = ['1.1.1', 'latest', '2.2.2']

def get_vuln_count(repo_names, image_ids):
    container_inventory = []
    client = boto3.client('ecr')
    for n, i in zip(repo_names, image_ids):
        response = client.describe_image_scan_findings(
            repositoryName=n,
            imageId={'imageTag': i}
        )
        findings = response['imageScanFindings']
        print(findings)

get_vuln_count(repo_names, image_ids)
Output
{'findings': []}
The only key that shows up is findings; I was expecting findingSeverityCounts and the other fields in the response, but nothing else appears.
THEORY
I have 3 repositories and an image in each repository that I uploaded. One of my theories is that I'm not getting the other fields, such as findingSeverityCounts, because my images don't have vulnerabilities. I have Inspector set up to scan on push, but the images don't have vulnerabilities, so nothing shows up in the Inspector dashboard. Could that be causing the issue? If so, how would I be able to generate a vulnerability in one of my images to test this out?
My theory was correct: when there are no vulnerabilities, the response completely omits certain values, including the 'findingSeverityCounts' value that I needed.
I created a Docker image using Python 2.7 to generate vulnerabilities in my scan so I could test my script properly. My workaround was to implement this if statement: if there are vulnerabilities it will return them; if there aren't any, 'findingSeverityCounts' is omitted from the response, so I have it return 0 instead of raising a KeyError.
Example Solution:
response = client.describe_image_scan_findings(
    repositoryName=n,
    imageId={'imageTag': i}
)
if 'findingSeverityCounts' in response['imageScanFindings']:
    print(response['imageScanFindings']['findingSeverityCounts'])
else:
    print(0)
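Equivalently, dict.get with a default of 0 avoids the explicit membership test; a minor variation on the same workaround:

# Falls back to 0 when the scan found nothing and the key is omitted.
print(response['imageScanFindings'].get('findingSeverityCounts', 0))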
I made a serverless API backend in the AWS console which uses API Gateway, DynamoDB, and Lambda functions.
Upon creation I can add data to DynamoDB online by entering a JSON document, which looks like this:
{
    "id": "4",
    "k": "key1",
    "v": "value1"
}
But when I try to add this using Postman, putting the above JSON data in the body of the POST message, I get a positive response (i.e. no errors), but only the "id" field is added to the database and not "k" or "v".
What is missing?
I think that you need to check on your Lambda function.
As you are using Postman to make the API calls, the received event's body will be as follows:
{
    'resource': ...,
    ...
    'body': '{\n\t"id": 1,\n\t"name": "ben"\n}',
    'isBase64Encoded': False
}
As you can see:
'body': '{\n\t"id": 1,\n\t"name": "ben"\n}'
For example, I will use Python 3 for this case; what I need to do is load the body from its JSON string so that we are able to use it:
import json

result = json.loads(event['body'])
id = result['id']
name = result['name']
Then put them into DynamoDB:
item = table.put_item(
    Item={
        'id': str(id),
        'name': str(name)
    }
)
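Putting those pieces together, a minimal handler for the question's id/k/v items might look like this (the table name is an assumption):

import json

import boto3

# Hypothetical table name; replace with your own.
table = boto3.resource('dynamodb').Table('my-table')

def lambda_handler(event, context):
    # With API Gateway's proxy integration, the POST body arrives as a JSON string.
    result = json.loads(event['body'])
    table.put_item(
        Item={
            'id': str(result['id']),
            'k': str(result['k']),
            'v': str(result['v'])
        }
    )
    return {
        'statusCode': 200,
        'body': json.dumps({'id': result['id']})
    }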
I am trying to use boto3 to run a set of queries and I don't want to save the data to S3. Instead I just want to get the results and work with them. I am trying to do the following:
import boto3

client = boto3.client('athena')
response = client.start_query_execution(
    QueryString='''SELECT * FROM mytable limit 10''',
    QueryExecutionContext={
        'Database': 'my_db'
    },
    ResultConfiguration={
        'OutputLocation': 's3://outputpath',
    }
)
print(response)
But here I don't want to give ResultConfiguration because I don't want to write the results anywhere. But if I remove the ResultConfiguration parameter, I get the following error:
botocore.exceptions.ParamValidationError: Parameter validation failed:
Missing required parameter in input: "ResultConfiguration"
So it seems that giving an S3 output location for writing is mandatory. What could be the way to avoid this and get the results only in the response?
The StartQueryExecution action indeed requires an S3 output location; the ResultConfiguration parameter is mandatory.
The alternative way to query Athena is using the JDBC or ODBC drivers. You should probably use that method if you don't want to store results in S3.
You will have to specify an S3 temp bucket location whenever running the start_query_execution command. However, you can get a result set (a dict) by running the get_query_results method using the query ID.
The response (dict) will look like this:
{
    'UpdateCount': 123,
    'ResultSet': {
        'Rows': [
            {
                'Data': [
                    {
                        'VarCharValue': 'string'
                    },
                ]
            },
        ],
        'ResultSetMetadata': {
            'ColumnInfo': [
                {
                    'CatalogName': 'string',
                    'SchemaName': 'string',
                    'TableName': 'string',
                    'Name': 'string',
                    'Label': 'string',
                    'Type': 'string',
                    'Precision': 123,
                    'Scale': 123,
                    'Nullable': 'NOT_NULL'|'NULLABLE'|'UNKNOWN',
                    'CaseSensitive': True|False
                },
            ]
        }
    },
    'NextToken': 'string'
}
For more information, see the boto3 client documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/athena.html#Athena.Client.get_query_results
You can then delete all files in the S3 temp bucket you've specified.
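A minimal sketch of that flow, assuming a hypothetical output bucket (production code should also back off and time out while polling):

import time

import boto3

client = boto3.client('athena')

# Start the query; the output bucket here is a hypothetical name.
execution = client.start_query_execution(
    QueryString='SELECT * FROM mytable limit 10',
    QueryExecutionContext={'Database': 'my_db'},
    ResultConfiguration={'OutputLocation': 's3://my-temp-bucket/athena/'},
)
query_id = execution['QueryExecutionId']

# Naive polling loop: wait until the query reaches a terminal state.
while True:
    status = client.get_query_execution(QueryExecutionId=query_id)
    state = status['QueryExecution']['Status']['State']
    if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
        break
    time.sleep(1)

if state == 'SUCCEEDED':
    # Fetch the results directly instead of reading the S3 output file.
    results = client.get_query_results(QueryExecutionId=query_id)
    for row in results['ResultSet']['Rows']:
        print([col.get('VarCharValue') for col in row['Data']])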
You still need to provide an S3 location as temporary storage for Athena to save the data, even though you want to process the data using Python. But you can page through the data as tuples using the pagination API; please refer to the example here. Hope that helps.
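Since the linked example isn't reproduced here, a short sketch of that pagination approach (client and query_id are assumed to come from an earlier start_query_execution call, as in the previous answer's sketch):

# Page through large result sets instead of handling NextToken by hand.
paginator = client.get_paginator('get_query_results')
for page in paginator.paginate(QueryExecutionId=query_id):
    for row in page['ResultSet']['Rows']:
        print(tuple(col.get('VarCharValue') for col in row['Data']))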
I am creating a CloudFront service for my organization. I am trying to create a job where a user can execute a Jenkins job to update a distribution.
I would like the user to be able to input a Distribution ID and then have Jenkins auto-fill a secondary set of parameters. Jenkins would need to grab the configuration for that distribution (via Groovy or other means) to do that auto-fill. The user would then select which configuration options they would like to change and hit submit. The job would then make the requested updates (via a Python script).
Can this be done through some combination of plugins (or any other means)?
// The first input requests the DistributionID from a user
stage 'Input Distribution ID'
def distributionId = input(
    id: 'distributionId', message: "Cloudfront Distribution ID", parameters: [
        [$class: 'TextParameterDefinition',
         description: 'Distribution ID', name: 'DistributionID'],
    ])
echo ("using DistributionID=" + distributionId)

// Second - Sample data; you'd need to get the real data from somewhere here.
// Assume the data will be in distributionData after this.
def map = [
    "1": [name: "1", data: "data_1"],
    "2": [name: "2", data: "data_2"],
    "other": [name: "other", data: "data_other"]
]
def distributionData
if (distributionId in map.keySet()) {
    distributionData = map[distributionId]
} else {
    distributionData = map["other"]
}

// The third stage uses the gathered data, puts it into default values
// and requests another user input.
// The user now has the choice of altering the values or leaving them as-is.
stage 'Configure Distribution'
def userInput = input(
    id: 'userInput', message: 'Change Config', parameters: [
        [$class: 'TextParameterDefinition', defaultValue: distributionData.name,
         description: 'Name', name: 'name'],
        [$class: 'TextParameterDefinition', defaultValue: distributionData.data,
         description: 'Data', name: 'data']
    ])

// Fourth - Now, here's the actual code to alter the Cloudfront Distribution
echo ("Name=" + userInput['name'])
echo ("Data=" + userInput['data'])
Create a new pipeline, copy/paste this into the pipeline script section, and play around with it.
I can easily imagine this code could be implemented in a much better way, but at least it's a start.
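For the Python script that performs the actual update, a minimal boto3 sketch might look like this (the distribution ID and the changed field are illustrative assumptions; update_distribution needs the ETag from get_distribution_config as IfMatch):

import boto3

client = boto3.client('cloudfront')

# Hypothetical distribution ID, e.g. the value collected by the Jenkins input step.
distribution_id = 'E1ABCDEF234567'

# Fetch the current config; the ETag acts like a revision checksum.
current = client.get_distribution_config(Id=distribution_id)
config = current['DistributionConfig']

# Apply whichever changes the user selected; Comment is just an illustration.
config['Comment'] = 'updated by Jenkins'

client.update_distribution(
    Id=distribution_id,
    DistributionConfig=config,
    IfMatch=current['ETag'],
)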
Is there a better way to return the record(s) after DS.Store#pushPayload is called? This is what I'm doing...
var payload = { id: 1, title: "Example" }
store.pushPayload('post', payload);
return store.getById('post', payload.id);
But, with regular DS.Store#push you get the inserted record returned. The only difference between the two, from what I can tell, is that DS.Store#pushPayload serializes the payload data with the correct serializers.
DS.Store#pushPayload is able to take an array of items, not just one, and may contain side-loaded data. It processes a full payload and expects root keys in the payload:
{
    "posts": [{
        "id": 1,
        "title": "title",
        "comments": [1]
    }],
    "comments": [
        // .. and so on ...
    ]
}
DS.Store#push expects a single record which has been normalized and contains no side loaded data (notice there is no root key):
{
    "id": 1,
    "title": "title",
    "comments": [1]
}
For this reason, it makes sense for push to return the record, but for pushPayload to return nothing.
When you use pushPayload, a second lookup via store.find('post', 1) (or store.getById('post', 1)) is the way to go; I don't believe there is a better way.
As of this PR, pushPayload can now return an array of all the records pushed into the store, once the 'ds-pushpayload-return' feature flag has been enabled.
At the moment, this feature isn't available in a standard or beta release-- you'll have to use
"ember-data": "emberjs/data#master",
(i.e. Canary) in your package.json in order to access it. I'm not sure when the feature will be generally available.