Django-wkhtmltopdf crashes on server - django

I have an app using django-wkhtmltopdf. It works locally, but when I run it on a VPS I get the following error.
Command '['wkhtmltopdf', '--allow', 'True',
'--enable-local-file-access', '--encoding', 'utf8',
'--javascript-delay', '2000', '--page-height', '465mm',
'--page-width', '297mm', '--quiet', '/tmp/wkhtmltopdffyknjyrk.html',
'-']' returned non-zero exit status 1.
I'm guessing I need to set some permissions on the VPS to allow the temp file to be created (or set a directory for it), but I'm not sure how to do this.
Within settings.py I have:
WKHTMLTOPDF_CMD_OPTIONS = {
    'quiet': True,
    'allow': '/tmp',
    'javascript-delay': 1000,
    'enable-local-file-access': True,
}
And within the Django view I've got:
cmd_options = {
    # 'window-status': 'ready',
    'javascript-delay': 2000,
    'page-height': '465mm',
    'page-width': '297mm',
    'allow': True,
    # 'T': 0, 'R': 0, 'B': 0, 'L': 0,
}
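Note that `--quiet` appears in the generated command, so wkhtmltopdf's actual error message is being suppressed; exit status 1 only says that something failed. A first debugging step is to drop the quiet option so stderr reaches your logs. A minimal sketch (the helper name `debug_options` is my own):

```python
def debug_options(cmd_options):
    """Return a copy of a wkhtmltopdf option dict with 'quiet' removed,
    so the binary's real error message is no longer suppressed."""
    opts = dict(cmd_options)  # copy; don't mutate the settings dict in place
    opts.pop('quiet', None)
    return opts

# The options from settings.py, minus quiet:
WKHTMLTOPDF_CMD_OPTIONS = debug_options({
    'quiet': True,
    'allow': '/tmp',
    'javascript-delay': 1000,
    'enable-local-file-access': True,
})
print(WKHTMLTOPDF_CMD_OPTIONS)  # no 'quiet' key, other options unchanged
```

Running the exact command from the traceback by hand on the VPS (without `--quiet`) is also worth trying: it usually reveals whether the cause is a missing shared library, an unpatched-Qt build that needs an X server, or permissions on /tmp.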

Related

OpenSearch Dashboards health status red, everything else green

We have an OpenSearch domain on AWS.
Sometimes the cluster status and the OpenSearch Dashboards health status go yellow for a few minutes, which is fine, I guess.
But today the OpenSearch Dashboards health status went red and has been red for a few hours now. Everything else works except the dashboards, which give error 503: {"Message":"Http request timed out connecting"}
Meanwhile, the cluster health API reports:
{
  "cluster_name" : "779754160511:telemetry",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "discovered_master" : true,
  "discovered_cluster_manager" : true,
  "active_primary_shards" : 166,
  "active_shards" : 332,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
I finally tried moving to an instance type with more RAM, but the status is still red.
How can I solve this? Is there a way to restart the domain or debug it in some way?
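The health output above can at least be checked mechanically to confirm the data nodes are fine, which narrows the failure to the Dashboards process itself. A minimal sketch (the helper name `cluster_looks_healthy` is mine; field names are taken from the response above):

```python
def cluster_looks_healthy(health):
    """Return True when a _cluster/health response reports green status
    and no unassigned or initializing shards."""
    return (
        health.get("status") == "green"
        and health.get("unassigned_shards", 0) == 0
        and health.get("initializing_shards", 0) == 0
        and health.get("timed_out") is False
    )

# Values copied from the response in the question:
health = {
    "status": "green",
    "timed_out": False,
    "number_of_nodes": 2,
    "active_shards": 332,
    "initializing_shards": 0,
    "unassigned_shards": 0,
}
print(cluster_looks_healthy(health))  # True here, so the data plane is fine
```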

AWS update request, variable parameters

I am fairly new to the AWS console; I just wrote my own API (with the help of tutorials) and was wondering: can I update my product conditionally? My current params for the call look like:
TableName: "product",
Key: {
    product_id: event.pathParameters.id
},
UpdateExpression: "SET price_buyin = :price_buyin, price_sell = :price_sell, price_grit = :price_grit, note = :note, status_code = :status_code, producttype_id = :producttype_id, handled_by = :handled_by, handled_number = :handled_number, productsize_id = :productsize_id, processed_date = :processed_date",
ExpressionAttributeValues: {
    ":price_buyin": data.price_buyin ? data.price_buyin : 0,
    ":price_sell": data.price_sell ? data.price_sell : 0,
    ":price_grit": data.price_grit ? data.price_grit : 0,
    ":status_code": data.status_code ? data.status_code : 0,
    ":processed_date": data.processed_date ? data.processed_date : 0,
    ":producttype_id": data.producttype_id ? data.producttype_id : 0,
    ":handled_number": data.handled_number ? data.handled_number : 0,
    ":productsize_id": data.productsize_id ? data.productsize_id : 0,
    ":note": data.note ? data.note : "empty",
    ":handled_by": data.handled_by ? data.handled_by : "empty"
},
ReturnValues: "ALL_NEW"
I want to be able to update, let's say, price_buyin, but when I don't add the other values, they automatically become 0. Is there some way to update only price_buyin?
Thank you in advance!
Greetings,
Bram
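One common way to get that behaviour is to build the UpdateExpression dynamically from only the fields present in the request body, so omitted attributes are never touched. A sketch in Python for brevity (the question's handler is JavaScript; the helper name `build_update` is mine):

```python
def build_update(data):
    """Build DynamoDB update parameters covering only the keys actually
    present in `data`, so omitted attributes keep their stored values
    instead of being overwritten with defaults."""
    names, values, sets = {}, {}, []
    for key, val in data.items():
        names["#" + key] = key        # aliases guard against reserved words
        values[":" + key] = val
        sets.append("#{0} = :{0}".format(key))
    return {
        "UpdateExpression": "SET " + ", ".join(sets),
        "ExpressionAttributeNames": names,
        "ExpressionAttributeValues": values,
    }

params = build_update({"price_buyin": 12})
print(params["UpdateExpression"])  # SET #price_buyin = :price_buyin
# params can then be merged with TableName/Key/ReturnValues and
# passed to the update call; only price_buyin is modified.
```

The `#`-aliases (ExpressionAttributeNames) are worth keeping even when not strictly needed, because DynamoDB has a long list of reserved words that would otherwise break the expression.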

AWS Elasticsearch - Query Cache

I am using AWS Elasticsearch and the cluster receives ~600 search queries per second. This causes periodic bursts of 503 Service Unavailable responses from Elasticsearch. As a result, I wanted to turn on the query cache for the index (I verified that it is actually turned on by looking at <ES_DOMAIN>/<INDEX_NAME>).
However, when I check the query cache stats at <ES_DOMAIN>/_stats/query_cache?pretty&human, this is what I get:
"<index_name>" : {
  "primaries" : {
    "query_cache" : {
      "memory_size" : "0b",
      "memory_size_in_bytes" : 0,
      "evictions" : 0,
      "hit_count" : 0,
      "miss_count" : 0
    }
  },
  "total" : {
    "query_cache" : {
      "memory_size" : "0b",
      "memory_size_in_bytes" : 0,
      "evictions" : 0,
      "hit_count" : 0,
      "miss_count" : 0
    }
  }
}
Any suggestions on how I can turn on the cache?
Based on my reading and similar experiences (even after setting index.cache.query.enable: true in the index settings) I can only guess that AWS has disabled query caching, probably by setting indices.cache.query.size: 0% in config/elasticsearch.yml.
UPDATE
After leaving the cluster running for a while and doing some heavy aggregations, I can see that the query_cache is starting to be used, although I'm not sure why I'm not seeing any cache hits:
GET _nodes/stats/indices/query_cache?pretty&human
{
  "cluster_name": "XXXXXXXXXXXX:xxxxxxxxxxx",
  "nodes": {
    "q59YfHDdRQupousO9vh6KQ": {
      "timestamp": 1465589579698,
      "name": "Mongoose",
      "indices": {
        "query_cache": {
          "memory_size": "37.2kb",
          "memory_size_in_bytes": 38151,
          "evictions": 0,
          "hit_count": 0,
          "miss_count": 45
        }
      }
    },
    "K3olMnkkRZW53tTw05UVhA": {
      "timestamp": 1465589579692,
      "name": "Meggan Braddock",
      "indices": {
        "query_cache": {
          "memory_size": "47.3kb",
          "memory_size_in_bytes": 48497,
          "evictions": 0,
          "hit_count": 0,
          "miss_count": 53
        }
      }
    }
  }
}
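The per-node numbers in the UPDATE can be rolled up to see the overall picture. A small sketch (the helper name `cache_summary` is mine) using the counters copied from the response above:

```python
def cache_summary(nodes_stats):
    """Aggregate query_cache counters across the nodes in a
    _nodes/stats/indices/query_cache response and derive a hit ratio."""
    total = {"memory_size_in_bytes": 0, "hit_count": 0, "miss_count": 0}
    for node in nodes_stats["nodes"].values():
        qc = node["indices"]["query_cache"]
        for key in total:
            total[key] += qc[key]
    lookups = total["hit_count"] + total["miss_count"]
    total["hit_ratio"] = total["hit_count"] / lookups if lookups else 0.0
    return total

# Counters from the two nodes in the UPDATE above:
stats = {
    "nodes": {
        "q59YfHDdRQupousO9vh6KQ": {"indices": {"query_cache": {
            "memory_size_in_bytes": 38151, "hit_count": 0, "miss_count": 45}}},
        "K3olMnkkRZW53tTw05UVhA": {"indices": {"query_cache": {
            "memory_size_in_bytes": 48497, "hit_count": 0, "miss_count": 53}}},
    }
}
print(cache_summary(stats))  # 86648 bytes cached, 98 misses, hit_ratio 0.0
```

A non-zero memory_size with zero hits is at least consistent with a cache that is enabled but whose entries are invalidated (for example by ongoing indexing and refreshes) before an identical query repeats.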

SPOCK - All @Shared Variables are NULL

Here is my test class:
import grails.test.mixin.*
import spock.lang.Specification
import spock.lang.Unroll
import spock.lang.*
import groovy.util.logging.Slf4j

@Slf4j
@Mock([PromoRule, PromoCode, SecUser])
@TestFor(PromoService)
class PromoServiceSpec extends Specification {

    @Shared testUser
    @Shared testCode
    @Shared testRule

    def setup() {
    }

    @Unroll
    def 'valid promo codes - #user, #code'() {
        given:
        testRule = new PromoRule(
            name: "ZTEST",
            receiverAmount: 5,
            receiverAmountType: PromoRule.AmountType.DOLLARS,
            senderAmount: 0,
            senderAmountType: PromoRule.AmountType.DOLLARS,
            receiverPointsAmount: null,
            receiverPointsAmountType: null,
            receiverMaxUse: null,
        )
        testRule.save(flush: true, failOnError: true)

        testUser = new SecUser(
            id: 1,
            version: 0,
            accountExpired: false,
            accountLocked: false,
            age: 9000,
            balance: 100,
            dateCreated: new Date(),
            emailVerified: true,
            enabled: true,
            firstName: 'Sir',
            lastName: 'Buttocks',
            lastUpdated: new Date(),
            lockedBalance: 0,
            username: "1",
            staff: false,
            displayName: 'sir_buttocks',
            usernameChosen: true,
            depositMade: true,
            depositOfferRecentlySeen: false,
            pin: null
        )
        testUser.save(flush: true, failOnError: true)

        testCode = new PromoCode(
            rule: testRule,
            code: "3",
            senderId: 1,
        )
        testCode.save(flush: true, failOnError: true)

        expect:
        service.isValidPromoCode(user, code) == value

        where:
        user     | code     || value
        testUser | testCode || true
    }
}
When I run this test, I get the following:
| Failure: valid promo codes - null, null(skillz.PromoServiceSpec)
| Condition not satisfied:
service.isValidPromoCode(user, code) == value
| | | | | |
| false null null | true
skillz.PromoService#20e0e9d5 false
I have tried a ton of different configurations and layouts; all of them get me either a null pointer exception (on the variable itself) or a null value for the variable.
Defining them all as static variables didn't change anything either, same result as using @Shared.
I've tried mocking these as well, but I always get null exceptions when trying to execute .Mock() for the class...
Thanks!!
I'm not exactly sure what you are trying to achieve here, but the where: block is evaluated before (the rest of) the method is first entered, and at that time your shared variables are null. You'd have to set them earlier, e.g. in a setupSpec() (not setup()) method.

Mongo regex index matching - multiple start strings

I want to match multiple start strings in mongo. explain() shows that it's using the indexedfield index for this query:
db.mycol.find({indexedfield:/^startstring/,nonindexedfield:/somesubstring/});
However, the following query for multiple start strings is really slow. When I run explain I get an error. Judging by the faults I can see in mongostat (7k a second) it's scanning the entire collection. It's also alternating between 0% locked and 90-95% locked every few seconds.
db.mycol.find({indexedfield:/^(startstring1|startstring2)/,nonindexedfield:/somesubstring/}).explain();
JavaScript execution failed: error: { "$err" : "assertion src/mongo/db/key.cpp:421" } at src/mongo/shell/query.js:L128
Can anyone shed some light on how I can do this or what is causing the explain error?
UPDATE - more info
Ok, so I managed to get explain to work on the more complex query by limiting the number of results. The difference is this:
For a single prefix, /^BA1/ (yes, they're postcodes):
"cursor" : "BtreeCursor pc_1 multi",
"isMultiKey" : false,
"n" : 10,
"nscannedObjects" : 10,
"nscanned" : 10,
"nscannedObjectsAllPlans" : 19,
"nscannedAllPlans" : 19,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 0,
"nChunkSkips" : 0,
"millis" : 0,
"indexBounds" : {
    "indexedfield" : [
        [
            "BA1",
            "BA2"
        ],
        [
            /^BA1/,
            /^BA1/
        ]
    ]
}
For multiple prefixes, /^(BA1|BA2)/:
"cursor" : "BtreeCursor pc_1 multi",
"isMultiKey" : false,
"n" : 10,
"nscannedObjects" : 10,
"nscanned" : 1075276,
"nscannedObjectsAllPlans" : 1075285,
"nscannedAllPlans" : 2150551,
"scanAndOrder" : false,
"indexOnly" : false,
"nYields" : 5,
"nChunkSkips" : 0,
"millis" : 4596,
"indexBounds" : {
    "indexedfield" : [
        [
            "",
            {
            }
        ],
        [
            /^(BA1|BA2)/,
            /^(BA1|BA2)/
        ]
    ]
}
which doesn't look very good.
$or solves the problem in terms of using the indexes (thanks EddieJamsession). Queries are now lightning fast.
db.mycoll.find({$or: [{indexedfield:/^startstring1/},{indexedfield:/^startstring2/}],nonindexedfield:/somesubstring/})
However, I would still like to do this with a regex if possible so I'm leaving the question open. Not least because I now have to refactor my application to take these types of queries into account.
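Until then, the $or form can be generated from a list of prefixes so the refactoring stays in one place. A sketch in Python/PyMongo style for illustration (the helper name `prefix_filter` is mine); each clause is anchored with ^ so it can still use the index on the field:

```python
import re

def prefix_filter(field, prefixes, extra=None):
    """Build a Mongo filter with one anchored-regex clause per prefix,
    OR'd together so each clause remains index-friendly."""
    query = {"$or": [{field: {"$regex": "^" + re.escape(p)}} for p in prefixes]}
    if extra:
        query.update(extra)  # e.g. the non-indexed substring condition
    return query

q = prefix_filter("indexedfield", ["BA1", "BA2"],
                  extra={"nonindexedfield": {"$regex": "somesubstring"}})
print(q["$or"])  # one anchored clause per prefix
```

re.escape guards against prefixes that happen to contain regex metacharacters; plain postcode fragments like "BA1" pass through unchanged.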