Testnet transaction is not broadcast/confirmed within 5 hours - blockchain

I'm trying to send a transaction using the Python lib https://github.com/petertodd/python-bitcoinlib
We debugged the REST API that publishes the transaction to the service chain.so, and it was successful; we can also see this TX here:
https://chain.so/tx/BTCTEST/98d9f2bc3f65b0ca81f775f43c2f48b6ffe29fcfa06779c7ab299709ea7fc639
The destination address is our BitPay testnet wallet; the source addresses are generated by python-bitcoinlib using wallet.get_key().
However, we can't find the transaction on any other service. We also tried posting the same transaction to bitaps.com, and we can't see it there either.
Can someone give a clue where to look, what could be wrong, and why the confidence on chain.so is 0%? Can you recommend another service?
The code is very straightforward:
from bitcoinlib.wallets import wallet_create_or_open

wallet = wallet_create_or_open(name='MainWallet', network='testnet', witness_type='segwit')
transaction = wallet.send_to(
    to_address=body.address,
    amount=decimal_to_satoshi(body.amount),
    fee='low', offline=True,
)
transaction.send(offline=False)

Found the issue:
I pasted the raw hex into https://tbtc.bitaps.com/broadcast, and it fails with the error "Broadcast transaction failed: min relay fee not met, 111 < 141 (code 66)".
Based on the error, it seems I am sending tBTC with a fee below the minimum relay fee.
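A minimal sketch of one possible fix, assuming the bitcoinlib wallet API used above: pass an explicit integer fee in satoshis that is at least the node's minimum relay fee (141 satoshis in the error above). The value 200 here is purely illustrative:

transaction = wallet.send_to(
    to_address=body.address,
    amount=decimal_to_satoshi(body.amount),
    fee=200,  # illustrative: any integer fee >= the 141-satoshi minimum relay fee
    offline=True,
)
transaction.send(offline=False)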

Related

Does Cosmos' 'msgMultiSend' guarantee the order of inputs and outputs?

I am a coin wallet developer, and I am currently investigating Cosmos transfers.
Cosmos has msgMultiSend as well as msgSend.
I know that MsgMultiSend sends several transfers using inputs and outputs in the form of arrays.
I wonder whether the order of inputs and outputs is matched one-to-one and guaranteed,
i.e., whether the recipient matching the first sender in inputs is always guaranteed to be the first entry in outputs.
(i.e.
transfer 1 : inputs[0] -> outputs[0]
transfer 2 : inputs[1] -> outputs[1]
...)
In Cosmos 0.45.9 with cosmjs 0.28.11, all inputs of a msgMultiSend must be from the same address. If you have multiple input addresses, you must provide multiple signatures to verify them, and when I try that, the SDK throws BroadcastTxError: Broadcasting transaction failed with code 4 (codespace: sdk). Log: wrong number of signers; expected 1, got 2: unauthorized at CosmWasmClient.broadcastTx. If you use the same address for all inputs, it succeeds. Example on Aura Network Testnet: A070ED2C0557CFED34F48BF009D2E21235E79E07779A80EF49801F5983035F1B (click JSON to view the raw data).
Also, the sum of the token amounts of the inputs must equal the sum of the token amounts of the outputs. If they are not equal, this error is thrown: Broadcasting transaction failed with code 4 (codespace: bank). Log: sum inputs != sum outputs.
You can look at the events data of the transaction to learn more about this typeUrl.
Example:
1 input sent to 19 outputs
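As an illustration, here is a minimal sketch of the MsgMultiSend value shape and the sum invariant described above (the field names follow cosmos.bank.v1beta1.MsgMultiSend; addresses and amounts are placeholders):

# Hypothetical 1-input -> 2-output MsgMultiSend body; addresses are placeholders.
msg_multi_send = {
    "inputs": [
        {"address": "cosmos1sender...", "coins": [{"denom": "uatom", "amount": "300"}]},
    ],
    "outputs": [
        {"address": "cosmos1recipient1...", "coins": [{"denom": "uatom", "amount": "100"}]},
        {"address": "cosmos1recipient2...", "coins": [{"denom": "uatom", "amount": "200"}]},
    ],
}

# The bank module rejects the message unless the totals match exactly.
total_in = sum(int(c["amount"]) for i in msg_multi_send["inputs"] for c in i["coins"])
total_out = sum(int(c["amount"]) for o in msg_multi_send["outputs"] for c in o["coins"])
assert total_in == total_out, "sum inputs != sum outputs"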

Getting an OUT_OF_ENERGY error on a TRON contract call while not all of the provided fee_limit was burned

I am trying to call a method of a smart contract deployed on TRON from program code by calling the Tron node's
/wallet/triggersmartcontract
API.
This worked just fine on the Shasta testnet, but the attempt to execute it on the mainnet failed with an OUT_OF_ENERGY error, even though of the 10 TRX provided for burning, only 2.8 TRX was actually burned to obtain 10000 energy (proof from Tronscan).
The caller's balance is more than 10 TRX, and 0 TRX was sent along with the contract call.
On Shasta, contract calls made by the same code exceeded 10000 energy, and I deliberately made calls from accounts with 0 energy in order to force burning TRX. Everything worked.
Can somebody explain how this happened and how to work around this problem?
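For reference, a minimal sketch of such a call against the public TRON HTTP API (the node URL, addresses, and function selector are placeholders):

import requests

# Hypothetical addresses and selector; fee_limit is denominated in SUN (1 TRX = 1_000_000 SUN).
payload = {
    "owner_address": "TOwnerAddressPlaceholder",
    "contract_address": "TContractAddressPlaceholder",
    "function_selector": "doSomething(uint256)",
    "parameter": "0000000000000000000000000000000000000000000000000000000000000001",
    "fee_limit": 10_000_000,  # 10 TRX cap on the energy burn
    "call_value": 0,          # no TRX sent along with the call
    "visible": True,          # use base58 addresses
}
resp = requests.post("https://api.trongrid.io/wallet/triggersmartcontract", json=payload)
print(resp.json())  # the returned transaction still has to be signed and broadcast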
You can take a look at this page, maybe it's useful:
the TRON documentation
There is something there about out-of-energy errors.

Spamassassin's custom rule (for subject line filtering) doesn't work

I'm setting up SpamAssassin to use along with isbg to filter mail in my IMAP mail account. My ISP already has a pretty good spam filter that adds "[SPAM]" in front of the subject line of each message it detects; thus, I'm setting up a custom rule in SpamAssassin so that it adds a high score to any mail whose Subject line starts with "[SPAM]". My user_prefs file is:
required_score 9
score HTML_COMMENT_8BITS 0
score UPPERCASE_25_50 0
score UPPERCASE_50_75 0
score UPPERCASE_75_100 0
score OBSCURED_EMAIL 0
score SUBJ_ILLEGAL_CHARS 0
header SPAM_FILTRADO Subject =~ /^\s*\[SPAM\]/
score SPAM_FILTRADO 20
And yet, when I feed it a spam message to test it, it doesn't seem to trigger my rule. I feed it an email with this subject line, for example:
Subject: [SPAM] See Drone X Pro in action
And I analyse it in this way:
[paulo@myserver mails]$ spamc -R < spam7.txt
9.3/9.0
Spam detection software, running on the system "myserver", has
identified this incoming email as possible spam. The original message
has been attached to this so you can view it (if it isn't spam) or label
similar future email. If you have any questions, see
##CONTACT_ADDRESS## for details.
Content preview: Big Drone Companies Are Terrified Of This New Drone That Hit
The Market <http://www.fairfood.icu/uisghougw/pjarx44255sweouci/I31AAdtTTKmLsu_A6Dq7ZK_a47Ko45fCRXk7Fr9fqm4/BbYMgcZjieuj_YxMOSmnXetiI6e4Z37yS9H2zVIeHEilOpatuk8V8Mt0EtJDfLLE1llzj6MiwlLzR99DGODekcqeM7kn63lcFcp8fJutAsw>
[...]
Content analysis details: (9.3 points, 9.0 required)
pts rule name description
---- ---------------------- --------------------------------------------------
2.4 DNS_FROM_AHBL_RHSBL RBL: Envelope sender listed in dnsbl.ahbl.org
2.7 RCVD_IN_PSBL RBL: Received via a relay in PSBL
[193.17.4.113 listed in psbl.surriel.com]
-0.0 SPF_PASS SPF: sender matches SPF record
1.3 HTML_IMAGE_ONLY_24 BODY: HTML: images with 2000-2400 bytes of words
0.0 HTML_MESSAGE BODY: HTML included in message
1.6 RCVD_IN_BRBL_LASTEXT RBL: RCVD_IN_BRBL_LASTEXT
[193.17.4.113 listed in bb.barracudacentral.org]
1.3 RDNS_NONE Delivered to internal network by a host with no rDNS
There isn't anything about my rule.
I know that my user_prefs is being loaded because, after the section I pasted above, I have some email addresses set up in a whitelist, and when analysing emails coming from those addresses, Spamassassin correctly detects them.
What's wrong with my rule?
Support for custom rules in the user_prefs file is turned off by default.
You can enable it with the allow_user_rules option in the site-wide configuration.
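A minimal sketch, assuming a typical site-wide configuration location (the exact path varies by distribution):

# In the site-wide configuration, e.g. /etc/spamassassin/local.cf
allow_user_rules 1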

Looking for SLAs pertaining to endpoint responsiveness

Could you please point me at documents detailing the expected responsiveness of the HERE geocoding APIs? I'm after something more concrete than 99.9% availability.
Also, if I'm waiting 40 minutes or 14 hours for a batch job containing a single address to be processed, does that count as a failure against the 99.9%?
You can view the SLA report by logging into developer.here.com and going to https://developer.here.com/sla-report. Batch jobs are POST requests, and the time to complete your request depends on the queue size (there may be other requests waiting) and the batch size, so this doesn't count against the 99.9%.
A single address, as you listed, will take just a few milliseconds. Anything above that, especially 40 minutes, indicates that you are probably not connected. This includes invalid address input, as the result will come back telling you that the address was not found. You can check the status of a previous request using its RequestID, like this:
https://batch.geocoder.ls.hereapi.com/6.2/jobs/E2bc948zBsMCG4QclFKCq3tddWYCsE9g?action=status&apiKey={YOUR_API_KEY}
In general, an address with more address tokens will take longer to return than one with fewer address tokens.
Example:
USA vs 1600 Pennsylvania Ave NW, Washington, DC 20500, USA
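A minimal sketch of polling that status endpoint from Python, assuming the URL shown above (the job ID and API key are placeholders):

import requests

# Hypothetical job ID and API key; the endpoint is taken from the answer above.
JOB_ID = "E2bc948zBsMCG4QclFKCq3tddWYCsE9g"
API_KEY = "YOUR_API_KEY"

resp = requests.get(
    f"https://batch.geocoder.ls.hereapi.com/6.2/jobs/{JOB_ID}",
    params={"action": "status", "apiKey": API_KEY},
)
resp.raise_for_status()
print(resp.text)  # job status document (e.g. submitted / running / completed)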

WSO2 CEP Siddhi Queries

I'm new to Siddhi CEP. Other than the regular docs on WSO2 CEP, can someone point to a good tutorial?
Here are our requirements; please point out some clues on the right way of writing such queries.
We have a single stream of sensor device notifications (an IoT application).
Stream input is via REST-JSON, and the output is also to be formatted as REST-JSON. (I hope this is possible on WSO2 CEP 3.1.)
The kind of execution plan required:
- If a device notification reports usage of sensor 1, then monitor whether a device notification reports usage of sensor 2 within 5 minutes. If found, generate an output stream reporting the composite activity back over REST-JSON.
- If such composite activity is not detected during a time slot in the morning, afternoon, or evening, then generate a warning-event-stream status over REST-JSON. (So how do we find events which did not occur in time?)
- If such composite activity is not found within some time slots in the morning, afternoon, or evening, then report a failure1-event-stream status back over REST-JSON.
This should work day after day, so how will previously processed data get deleted in WSO2 CEP?
Regards,
Amit
The queries can be as follows (these are draft queries and may require slight modifications to get them running).
To detect sensor 1, and then sensor 2 within 5 minutes (assuming sensorStream has sensorId and value attributes), you can simply use a pattern like the following with the 'within' keyword:
from e1=sensorStream[sensorId == '1'] -> e2=sensorStream[sensorId == '2']
select 'composite activity detected' as description, e1.value as sensor1Value, e2.value as sensor2Value
within 5 minutes
insert into compositeActivityStream;
To detect non-occurrences (sensor 1 arrives, but no sensor 2 within 5 minutes), we can use the following two queries:
from sensorStream[sensorId == '1']#window.time(5 minutes)
select *
insert into delayedSensor1Stream for expired-events;
from e1=sensorStream[sensorId == '1'] -> nonOccurringEvent = sensorStream[sensorId == '2'] or delayedEvent=delayedSensor1Stream
select 'id=2 not found' as description, e1.value as id1Value, nonOccurringEvent.sensorId as nonOccurringId
having (not(nonOccurringId instanceof string))
insert into nonOccurrenceStream;
This will detect non-occurrences immediately at the end of the 5 minutes following the arrival of the sensorId == '1' event.
For an explanation of the above logic, have a look at the non-occurrence sample of CEP 4.0.0 (the syntax is a bit different, but it's the same idea).
Now, since you need to generate a report periodically, we need another query. For convenience I assume you need a report every 6 hours (360 minutes) and use a time batch window here. Alternatively, with the new CEP 4.0.0 you can use the 'cron window' to generate this at specific times, which is better for your use case.
from nonOccurrenceStream#window.timeBatch(360 minutes)
select count(id1Value) as nonOccurrenceCount
insert into nonOccurrenceReportsStream for expired-events;
You can use the HTTP input/output adapters and do JSON mappings with JSON builders and formatters for this use case.
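For instance, an incoming sensorStream event delivered over the HTTP/JSON adapter might look like this (a sketch assuming the default WSO2 event JSON envelope; the attribute names match the queries above):

{
  "event": {
    "payloadData": {
      "sensorId": "1",
      "value": 23.4
    }
  }
}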