I am getting the error ReferenceError: weakly-referenced object no longer exists in my code. I have tried to debug it, but I don't know why I am getting it.
I am using MongoDB and Python 3.6.10.
Here is my code, please help:
a = 't1'
b = ['v1', 'v2', 'v3']
services = dict()
for value in b:
    record = MyModel.objects.filter(myid=id, a=a, value=value).first()
    keys = record['services'].keys()
    for key in keys:
        key_value = record['services'][key]
        if key in services:
            services[key].extend(key_value)  # Getting the error here, on this line
        else:
            services.update({key: key_value})
print(services)
MyModel documents look like this:
{
    "myid" : "1",
    "a" : "t1",
    "b" : "v1",
    "services" : {
        "service_1" : [
            {
                "serviceid" : "1012",
                "service_type" : "service_1"
            }
        ]
    }
}
{
    "myid" : "1",
    "a" : "t1",
    "b" : "v2",
    "services" : {
        "service_2" : [
            {
                "serviceid" : "1013",
                "service_type" : "service_2"
            }
        ]
    }
}
The code works fine if there is only one value in b, but when the loop iterates a second time and tries to perform services[key].extend(key_value), it raises the error.
I don't think it is related to this piece of code. It may be caused by your DB connector. You may try to close the connection without closing the cursor.
It generally happens when you use a destructor (__del__) with a weakly referenced object: if the referent has already been destroyed by the garbage collector by the time the weak reference is dereferenced, this kind of exception is thrown. You can read more about weakref here.
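For illustration, here is a minimal snippet (no database involved) that reproduces the same error with the standard weakref module:

import weakref

class Record:
    pass

# The temporary Record() has no strong references left once proxy() returns,
# so it is garbage-collected immediately and the proxy is left dangling.
dangling = weakref.proxy(Record())

# Any attribute access now raises:
# ReferenceError: weakly-referenced object no longer exists
dangling.services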
After a lot of trial and error, I found that the code works fine if I first put the values into an empty list, so I updated my code accordingly. I still don't know why the code above raises the error; this is just an alternative to it.
Hope this will help someone facing the same problem.
a = 't1'
b = ['v1', 'v2', 'v3']
services = dict()
for value in b:
    record = MyModel.objects.filter(myid=id, a=a, value=value).first()
    keys = record['services'].keys()
    for key in keys:
        key_value = record['services'][key]
        if key not in services:
            services[key] = list()
        # Extending a plain Python list, instead of the list object returned
        # by the query, avoids the weak-reference error.
        services[key].extend(key_value)
print(services)
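If it's useful, the same workaround can be written a bit more compactly with dict.setdefault, which likewise accumulates into a fresh plain list:

services.setdefault(key, []).extend(key_value)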
I am creating a resource using count. When I use the split function in an output, it gives me an error, while a normal output without split works just fine.
I am running with stack=dev right now. I expect the resource not to be created on the dev stack but to be created on the prod stack, and I am trying to write the code that way.
Below is the piece of code which gives the error:
data "aws_cloudformation_stack" "some_name" {
count = (local.stack == "dev" ? 0 : 1)
name = "${local.stack}_some_name"
}
output "public_alb_subnets" {
value = split(",", "${data.aws_cloudformation_stack.some_name[*].outputs["PublicElbSubnets"]}")
}
It gives me this error:
Error: Invalid function argument

  on managed_alb.tf line 138, in output "public_alb_subnets":
 138:   value = split(",", "${data.aws_cloudformation_stack.some_name[*].outputs["PublicElbSubnets"]}")
    |----------------
    | data.aws_cloudformation_stack.some_name is empty tuple

Invalid value for "str" parameter: string required.
However, the below works:
output "public_alb_security_groups" {
value = [
data.aws_cloudformation_stack.some_name[*].outputs["PublicElbSecurityGroup"],
data.aws_cloudformation_stack.some_name[*].outputs["InternalElbSecurityGroup"]
]
}
I tried many different options from around the web but none of them worked. What am I doing wrong here? Even using count.index or 0 in place of * doesn't work.
You have to make your output conditional as well, based on your dev or prod environment. Note that with count = 1 you also want to index the single instance with [0] rather than splatting a tuple into a string:
output "public_alb_subnets" {
value = length(data.aws_cloudformation_stack.some_name) > 0 ? split(",", "${data.aws_cloudformation_stack.some_name[*].outputs["PublicElbSubnets"]}") : null
}
I'm looking for a way to define a list of SSH keys in a variables file so that I can retrieve them in the Terraform module code for my compute instance, like this:

metadata = {
  ssh-keys = join("\n", [for user, key in var.ssh_keys : "${user}:${key}"])
}
Here is the content of the variables file I wrote to achieve that:
variable "ssh_keys" {
type = "map"
default = {
{
user = "amary"
key = "${file("/Users/nixmind/.ssh/amary.pub")}"
}
{
user = "nixmind"
key = "${file("/Users/nixmind/.ssh/nixmind.pub")}"
}
}
}
But I'm having this error :
Error: Missing attribute value

  on variables.tf line 8, in variable "ssh_keys":

Expected an attribute value, introduced by an equals sign ("=").
I'm not really sure what to do here.
There are a few different problems here. I'll talk about them one at a time.
The first is that your default expression is not using correct map syntax. Here's a corrected version:
variable "ssh_keys" {
type = map(string)
default = {
amary = file("/Users/nixmind/.ssh/amary.pub")
nixmind = file("/Users/nixmind/.ssh/nixmind.pub")
}
}
The second problem is that a variable default value cannot include function calls, so the calls to file above are invalid. There are a few different options here about how to deal with this, but if this is a variable in a root module then I expect it would be most convenient to have the variable be a map of filenames rather than a map of the contents of those files, and then the module itself can read the contents of those files in a later step:
variable "ssh_key_files" {
type = map(string)
default = {
amary = "/Users/nixmind/.ssh/amary.pub"
nixmind = "/Users/nixmind/.ssh/nixmind.pub"
}
}
Your for expression for building the list of "user:key" strings was correct with how you had the variable defined before, but with the adjustment I've made above to use filenames instead of contents we'll need an extra step to actually read the files:
locals {
  ssh_keys = { for u, fn in var.ssh_key_files : u => file(fn) }
}
We can then use local.ssh_keys to get the map from username to key needed for the metadata expression:
metadata = {
  ssh-keys = join("\n", [for user, key in local.ssh_keys : "${user}:${key}"])
}
If you do want this module to accept already-loaded SSH key data rather than filenames then that is possible but the variable will need to be required rather than having a default, because it'll be up to the calling module to load the files.
The definition without the default value will look like this:
variable "ssh_keys" {
type = map(string)
}
Then your calling module, if there is one (that is, if this isn't a root module) can be the one to call file to load those in:
module "example" {
source = "./modules/example"
ssh_keys = {
amary = file("/Users/nixmind/.ssh/amary.pub")
nixmind = file("/Users/nixmind/.ssh/nixmind.pub")
}
}
The above is a reasonable interface for a shared module that will be called from another module like this, but it's not a convenient design for a root module because the "caller" in that case is the person or script running the terraform program, and so providing the data from those files would require reading them outside of Terraform and passing in the results.
My problem was semantic, not syntactic: I can't use a map of maps the way I tried to. Instead I used a list, and the error disappeared.
variable "ssh_keys" {
  type = list(object({
    user = string
    key  = string
  }))
  default = [
    {
      "user" = "amaret93"
      "key"  = "/Users/valerietchala/.ssh/amaret93.pub"
    },
    {
      "user" = "nixmind"
      "key"  = "/Users/valerietchala/.ssh/nixmind.pub"
    }
  ]
}
But Martin's answer above is also a better approach.
I have an app written in C++ with 16 threads which reads the output of wireshark/tshark. Wireshark/tshark dissects pcap files, which are gsm_map signalling captures.
MongoDB is 2.6.7.
The structure I need for my documents is like this.
Note "packet" is an array; it will become apparent why later.
For all who don't know TCAP: the TCAP layer is transaction-oriented, which means all packets include:
Transaction State: begin/continue/end
Origin transaction ID (otid)
Destination transaction ID (dtid)
So for instance, you might see a transaction comprising 3 packets. At the TCAP layer it would look roughly like this example of two packets, one "begin" and one "end":
{
    "_id" : ObjectId("54ccd186b8ea19c89ee8f231"),
    "deleted" : "0",
    "packet" : {
        "datetime" : ISODate("2015-01-31T12:58:11.939Z"),
        "signallingType" : "M2PA",
        "opc" : "326",
        "dpc" : "6406",
        "transState" : "begin",
        "otid" : "M2PA0400435B",
        "dtid" : "",
        "sccpCalling" : "523332075100",
        "sccpCalled" : "523331466304",
        "operation" : "mo-forwardSM (46)",
        ...
    }
}
/* 1 */
{
    "_id" : ObjectId("54ccd1a1b8ea19c89ee8f7c5"),
    "deleted" : "0",
    "packet" : {
        "datetime" : ISODate("2015-01-31T12:58:16.788Z"),
        "signallingType" : "M2PA",
        "opc" : "6407",
        "dpc" : "326",
        "transState" : "end",
        "otid" : "",
        "dtid" : "M2PA0400435B",
        "sccpCalling" : "523331466304",
        "sccpCalled" : "523332075100",
        "operation" : "Not Found",
        ...
    }
}
Because of the network architecture, we're tracing at two (2) points, and the traffic is balanced between these two points. This means we sometimes see "continue"s or "end"s BEFORE a "begin", or an "end" BEFORE a "continue". In short, transactions are not ordered.
Moreover, multiple end-points are "talking" amongst themselves, and transaction IDs might get duplicated: two endpoints could be using the same tid as another two endpoints at the same time. This doesn't happen all the time, but it does happen.
Because of the latter, I also need to use the SCCP layer's "calling" and "called" Global Titles (like phone numbers).
Bear in mind that I don't know which way a given packet is going, so this is what I'm doing:
Whenever I get a new packet, I must find whether the transaction already exists in MongoDB; I'm using an upsert to do this.
I do this by searching for the current packet's otid or dtid in either the otid or dtid of existing packets.
If it does: push the new packet into the existing document.
If it doesn't: create a new document with the packet.
As an example, this is an upsert for an "end" which should find a "begin":
db.runCommand(
{
    update: "packets",
    updates:
    [
        {
            q:
            { $and:
                [
                    {
                        $or: [
                            { "packet.otid":
                                { $in: [ "M2PA042e3918" ] }
                            },
                            { "packet.dtid":
                                { $in: [ "M2PA042e3918" ] }
                            }
                        ]
                    },
                    {
                        $or: [
                            { "packet.sccpCalling":
                                { $in: [ "523332075151", "523331466305" ] }
                            },
                            { "packet.sccpCalled":
                                { $in: [ "523332075151", "523331466305" ] }
                            }
                        ]
                    }
                ]
            },
            u:
            {
                $setOnInsert: {
                    "unique-id": "422984b6-6688-4782-9ba1-852a9fc6db3b", deleted: "0"
                },
                $push: {
                    packet: {
                        datetime: new Date(1422371239182),
                        opc: "327", dpc: "6407",
                        transState: "end",
                        otid: "", dtid: "M2PA042e3918", sccpCalling: "523332075151", ... }
                }
            },
            upsert: true
        }
    ],
    writeConcern: { j: "1" }
}
)
Now, all of this works, until I put it in production.
It seems packets are coming in way too fast and I see lots of:
"ClientCursor::staticYield can't unlock b/c of recursive lock" warnings
I read that we can ignore this warning, but I've found that my upserts DO NOT update the documents! It looks like there's a lock and MongoDB forgets about the update. If I change the upsert to a simple insert, no packets are lost.
I also read this is related to no indexes being used. I have the following index:
"3" : {
"v" : 1,
"key" : {
"packet.otid" : 1,
"packet.dtid" : 1,
"packet.sccpCalling" : 1,
"packet.sccpCalled" : 1
},
"name" : "packet.otid_1_packet.dtid_1_packet.sccpCalling_1_packet.sccpCalled_1",
"ns" : "tracer.packets"
So in conclusion:
1. If this index is not correct, can someone please help me create the correct index?
2. Is it normal for Mongo NOT to update a document if it finds a lock?
Thanks and regards!
David
Why are you storing all of the packets in an array? Normally in this kind of situation it's better to make each packet a document on its own; it's hard to say more without more information about your use case (or, perhaps, more knowledge of all these acronyms you're using :D). Your updates would become inserts and you would not need to do the update query. Instead, some other metadata on a packet would join related packets together so you could reconstruct a transaction or whatever you need to do.
More directly addressing your question, I would use an array field tids to store [otid, dtid] and an array field sccps to store [sccpCalling, sccpCalled], which would make your update query look like
{ "tids" : { "$in" : ["M2PA042e3918"] }, "sccps" : { "$in" : [ "523332075151", "523331466305" ] } }
and amenable to the index { "tids" : 1, "sccps" : 1 }.
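If it helps, here is a rough pymongo sketch of that approach; the database/collection names and the helper function are assumptions for illustration, not part of the original code:

from pymongo import MongoClient

packets = MongoClient()["tracer"]["packets"]  # assumed db/collection names

def upsert_packet(packet):
    # Join keys: whichever transaction ids and global titles the packet carries.
    tids = [t for t in (packet.get("otid"), packet.get("dtid")) if t]
    sccps = [packet["sccpCalling"], packet["sccpCalled"]]
    packets.update_one(
        {"tids": {"$in": tids}, "sccps": {"$in": sccps}},
        {
            "$push": {"packet": packet},  # append this packet to the transaction
            "$addToSet": {                # keep the lookup arrays current
                "tids": {"$each": tids},
                "sccps": {"$each": sccps},
            },
        },
        upsert=True,
    )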
I am using Swift to let my app update an item on DynamoDB. I followed the "- (void)testUpdateItem" method from the example at https://github.com/aws/aws-sdk-ios/blob/master/AWSiOSSDKTests/AWSDynamoDBTests.m#L244. My code is as follows:
var dynamoDB = AWSDynamoDB()
var hashValue: AWSDynamoDBAttributeValue = AWSDynamoDBAttributeValue()
hashValue.S = userID
var updatedValue: AWSDynamoDBAttributeValue = AWSDynamoDBAttributeValue()
updatedValue.S = updatedContent
var updateInput: AWSDynamoDBUpdateItemInput = AWSDynamoDBUpdateItemInput()
updateInput.tableName = kAWSDynamoDBMoEndpointARNTableName
updateInput.key = ["UserID": hashValue]
var valueUpdate: AWSDynamoDBAttributeValueUpdate = AWSDynamoDBAttributeValueUpdate()
valueUpdate.value = updatedValue
valueUpdate.action = AWSDynamoDBAttributeAction.Put
updateInput.attributeUpdates = ["Updated": valueUpdate]
updateInput.returnValues = AWSDynamoDBReturnValue.UpdatedNew
dynamoDB.updateItem(updateInput).waitUntilFinished()
However, when I run the code, it always returns fatal error: unexpectedly found nil while unwrapping an Optional value. Looking into the error, I can see it comes from Swift._fatalErrorMessage (Swift.StaticString, Swift.StaticString, Swift.StaticString, Swift.UInt) -> () with Thread 1: EXC_BREAKPOINT (code=EXC_ARM_BREAKPOINT, subcode=0xe7ffdefe).
I didn't use .continueWithBlock or .continueWithSuccessBlock because neither helps to catch the error.
Any idea why? Thanks!
You need to get an instance of AWSDynamoDB as follows:
let dynamoDB = AWSDynamoDB.defaultDynamoDB()
You are using the default -init method without any parameters, which causes the AWSDynamoDB instance to have no AWSServiceConfiguration.
With the current AWS SDK, I found that this worked well for posting the updateItem call in Swift:
dynamoDB.updateItem(updateInput).continueWithExecutor(AWSExecutor.mainThreadExecutor(), withBlock: { (task: AWSTask!) -> AnyObject! in
    if (task.error == nil) {
        log.debug("task.error == nil")
    }
    return nil
})

// instead of
dynamoDB.updateItem(updateInput).waitUntilFinished()
I have to build a small app in order to show some data from the Google Finance API.
I know that I could study it inside out, but I don't have much time.
The URL http://www.google.com/finance/info?q=MSFT returns this JSON string:
// [ { "id": "358464" ,"t" : "MSFT" ,"e" : "NASDAQ" ,"l" : "24.38" ,"l_cur" : "24.38" ,"ltt":"4:00PM EDT" ,"lt" : "Oct 1, 4:00PM EDT" ,"c" : "-0.11" ,"cp" : "-0.45" ,"ccol" : "chr" ,"el": "24.39" ,"el_cur": "24.39" ,"elt" : "Oct 1, 7:58PM EDT" ,"ec" : "+0.01" ,"ecp" : "0.04" ,"eccol" : "chg" ,"div" : "0.16" ,"yld" : "2.63" } ]
I don't know how to make that string available to a view. I need to "catch it" and show (some of) it in my template. I need something like:
def myview(request):
    URL = 'http://www.google.com/finance/info?q=MSFT'
    mystring = catchfromURL(URL)
    # work with the string
    return render_to_response('page.html', mystring)
Thanks in advance.
That little // at the beginning threw me off too. Here's what you do:
import json
jsonData = json.loads(mystring[3:])
Now, I don't know what any of the encoded data there means, but that's how you can get it as Python objects.
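To connect it back to the view sketch in the question, here is a minimal example, assuming Python 3 and a recent Django; the template name and context key are placeholders:

import json
import urllib.request

from django.shortcuts import render

def myview(request):
    url = 'http://www.google.com/finance/info?q=MSFT'
    raw = urllib.request.urlopen(url).read().decode('utf-8')
    quotes = json.loads(raw[3:])  # strip the leading "// " before parsing
    # quotes is a list with one dict per ticker; hand the first one to the template
    return render(request, 'page.html', {'quote': quotes[0]})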