"ingress_expiry not within expected range" error in Candid UI; in Motoko, the console prompted a breaking change on running dfx deploy

I was following a course tutorial to create a small decentralized bank application using DFINITY's Internet Computer.
The main.mo file is as follows:
import Debug "mo:base/Debug";
import Float "mo:base/Float";
import Time "mo:base/Time";

actor DBank {
  stable var currentValue: Float = 300;
  stable var startTime = Time.now();

  Debug.print(debug_show(currentValue));

  public func topUp(amount: Float) {
    currentValue += amount;
    Debug.print(debug_show(currentValue));
  };

  public func withdraw(amount: Float) {
    let tempValue: Float = currentValue - amount;
    if (tempValue >= 0) {
      currentValue -= amount;
      Debug.print(debug_show(currentValue));
    } else {
      Debug.print("Withdrawal amount is more than Balance.")
    }
  };

  public query func checkBalance(): async Float {
    return currentValue;
  };

  public func compound() {
    let currentTime = Time.now();
    let timeElapsedNS = currentTime - startTime;
    let timeElapsedSec = timeElapsedNS / 1_000_000_000;
    currentValue := currentValue * (1.01 ** Float.fromInt(timeElapsedSec));
  };
}
The Candid interface was working until I changed the data type of currentValue from Nat to Float. It showed a warning when I ran dfx deploy in the terminal:
user***#hp:~/ic-projects/dbank$ dfx deploy
Deploying all canisters.
All canisters have already been created.
Building canisters...
Building frontend...
Installing canisters...
WARNING!
Candid interface compatibility check failed for canister 'dbank'.
You are making a BREAKING change. Other canisters or frontend clients relying on your canister may stop working.
Method checkBalance: func () -> (float64) query is not a subtype of func () -> (nat) query
Do you want to proceed? yes/No
yes
I no longer have the original warning; I tried to recreate it here, so float64 and nat might be interchanged.
On deployment the Candid UI shows this:
An error happened in Candid canister:
Error: Server returned an error:
Code: 400 (Bad Request)
Body: Specified ingress_expiry not within expected range:
Minimum allowed expiry: 2023-02-06 19:04:14.511184689 UTC
Maximum allowed expiry: 2023-02-06 19:09:44.511184689 UTC
Provided expiry: 2023-02-06 19:25:14.339 UTC
Local replica time: 2023-02-06 19:04:14.511189299 UTC
at _.query (http://127.0.0.1:8000/index.js:2:8821)
at async http://127.0.0.1:8000/index.js:2:100976
at async getRemoteDidJs (http://127.0.0.1:8000/index.js:2:266158)
at async Object.fetchActor (http://127.0.0.1:8000/index.js:2:265243)
at async http://127.0.0.1:8000/index.js:2:271946
I wondered whether this could be a problem with the Time module, but couldn't find much about it, as the module's documentation page has been removed.
I looked up the ingress_expiry problem and tried its solutions:
- The local machine's time is correct and in sync with the actual time.
- I'm using WSL 1 in VS Code; it is the latest version, I suppose, as running wsl --update says you have the latest version.
The breaking-change warning in the question description was originally prompted for the checkBalance() function, but I could not find anything worthwhile to include about it.
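For reference, the provided expiry in the error is about 21 minutes ahead of the local replica's clock, which usually points to clock drift between the environment that signs the request and the one running the replica. Since WSL clocks are known to drift, a sketch of a resync-and-restart sequence for that case (assuming the replica runs inside WSL and the WSL clock is the one that drifted):

date -u                          # compare WSL's clock with real UTC
sudo hwclock -s                  # resync the system clock from the hardware clock (may need WSL 2)
dfx stop                         # restart the local replica so it validates against the corrected time
dfx start --clean --background   # note: --clean wipes local replica state
dfx deploy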

How to get local time of different timezone in Postman?

I want to get the local time of the Rome time zone. I didn't find any details on how to use the built-in moment sandbox library in the Postman documentation here: postman_sandbox_api_reference
What I have tried so far:
var moment = require('moment');
console.log(moment().tz(environment.TimeZone).format());
The error it throws: TypeError | moment(...).tz is not a function
Another attempt:
var moment = require('moment-timezone');
console.log(moment().tz(environment.TimeZone).format());
The error it throws: Error | Cannot find module 'moment-timezone'
Where am I going wrong? Can anyone point me in the right direction?
Thanks
Postman only has the moment library built-in and not moment-timezone.
If what you're doing isn't part of the moment docs, it's not going to work.
https://momentjs.com/docs/
As a workaround to get the data, you could use a simple 3rd party API.
Making a request to this endpoint would get you some timezone data that you could use.
http://worldtimeapi.org/api/timezone/Europe/Rome
This could be added to the pm.sendRequest() in the pre-request script to fetch the data you require and use this in another request.
pm.sendRequest("http://worldtimeapi.org/api/timezone/Europe/Rome", function (err, res) {
    pm.globals.set("localTimeRome", res.json().datetime);
});
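The stored global can then be referenced as {{localTimeRome}} in the URL, headers, or body of the request that runs after the pre-request script.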
Actually, you can write a simple function to get local time in other time zones only with moment:
const moment = require('moment');

// fixed UTC offsets per zone (these do not track DST)
const TimeZoneUTCOffsetMapping = {
    'America/Chicago': -6,
    'Europe/Rome': 2,
    'Asia/Shanghai': 8,
    ...
};
const LocalUTCOffset = 8;

function getMomentDisplayInTimeZone(momentObj, timeZone) {
    let timeZoneUTCOffset = TimeZoneUTCOffsetMapping[timeZone];
    if (timeZoneUTCOffset === undefined) {
        throw new Error('No time zone matched');
    }
    return momentObj.add(timeZoneUTCOffset - LocalUTCOffset, 'hour').format('YYYY-MM-DDTkk:mm:ss');
}

console.log(getMomentDisplayInTimeZone(moment(), 'Europe/Rome'));
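Keep in mind that this approach relies on hard-coded offsets, so it silently ignores DST (Rome, for example, is UTC+1 in winter and UTC+2 in summer); the mapping would need to be adjusted by hand whenever the offsets change, which the worldtimeapi workaround above avoids.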

I'm trying to import either GoogleAPIClient or GoogleAPIClientForREST

I'm trying to follow Google's tutorial on making their QuickStart app to learn how to make API calls with Swift. I followed the tutorial completely and ended up with this code
import GoogleAPIClient
import GTMOAuth2
import UIKit

class ViewController: UIViewController {
    private let kKeychainItemName = "Drive API"
    private let kClientID = "592019061169-nmjle7sfv8i8eahplae3cvto2rsj4gev.apps.googleusercontent.com"

    // If modifying these scopes, delete your previously saved credentials by
    // resetting the iOS simulator or uninstall the app.
    private let scopes = [kGTLAuthScopeDriveMetadataReadonly]
    private let service = GTLServiceDrive()
    let output = UITextView()

    // When the view loads, create necessary subviews
    // and initialize the Drive API service
    override func viewDidLoad() {
        super.viewDidLoad()
        output.frame = view.bounds
        output.editable = false
        output.contentInset = UIEdgeInsets(top: 20, left: 0, bottom: 20, right: 0)
        output.autoresizingMask = [.FlexibleHeight, .FlexibleWidth]
        view.addSubview(output);
        if let auth = GTMOAuth2ViewControllerTouch.authForGoogleFromKeychainForName(
            kKeychainItemName,
            clientID: kClientID,
            clientSecret: nil) {
            service.authorizer = auth
        }
    }

    // When the view appears, ensure that the Drive API service is authorized
    // and perform API calls
    override func viewDidAppear(animated: Bool) {
        if let authorizer = service.authorizer,
            let canAuth = authorizer.canAuthorize, canAuth {
            fetchFiles()
        } else {
            presentViewController(
                createAuthController(),
                animated: true,
                completion: nil
            )
        }
    }

    // Construct a query to get names and IDs of 10 files using the Google Drive API
    func fetchFiles() {
        output.text = "Getting files..."
        let query = GTLQueryDrive.queryForFilesList()
        query.pageSize = 10
        query.fields = "nextPageToken, files(id, name)"
        service.executeQuery(
            query,
            delegate: self,
            didFinishSelector: "displayResultWithTicket:finishedWithObject:error:"
        )
    }

    // Parse results and display
    func displayResultWithTicket(ticket : GTLServiceTicket,
                                 finishedWithObject response : GTLDriveFileList,
                                 error : NSError?) {
        if let error = error {
            showAlert("Error", message: error.localizedDescription)
            return
        }
        var filesString = ""
        if let files = response.files(), !files.isEmpty {
            filesString += "Files:\n"
            for file in files as! [GTLDriveFile] {
                filesString += "\(file.name) (\(file.identifier))\n"
            }
        } else {
            filesString = "No files found."
        }
        output.text = filesString
    }

    // Creates the auth controller for authorizing access to Drive API
    private func createAuthController() -> GTMOAuth2ViewControllerTouch {
        let scopeString = scopes.joinWithSeparator(" ")
        return GTMOAuth2ViewControllerTouch(
            scope: scopeString,
            clientID: kClientID,
            clientSecret: nil,
            keychainItemName: kKeychainItemName,
            delegate: self,
            finishedSelector: "viewController:finishedWithAuth:error:"
        )
    }

    // Handle completion of the authorization process, and update the Drive API
    // with the new credentials.
    func viewController(vc : UIViewController,
                        finishedWithAuth authResult : GTMOAuth2Authentication, error : NSError?) {
        if let error = error {
            service.authorizer = nil
            showAlert("Authentication Error", message: error.localizedDescription)
            return
        }
        service.authorizer = authResult
        dismissViewControllerAnimated(true, completion: nil)
    }

    // Helper for showing an alert
    func showAlert(title : String, message: String) {
        let alert = UIAlertController(
            title: title,
            message: message,
            preferredStyle: UIAlertControllerStyle.Alert
        )
        let ok = UIAlertAction(
            title: "OK",
            style: UIAlertActionStyle.Default,
            handler: nil
        )
        alert.addAction(ok)
        presentViewController(alert, animated: true, completion: nil)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}
My problem is that for
import GoogleAPIClient
I get the error "No such module GoogleAPIClient", which seems weird to me since GTMOAuth2 doesn't get an error, even though it's part of the same Pod I think (I'm new to this, so I'm probably butchering the terminology).
From researching the problem, I found that GoogleAPIClientForREST should be substituted for GoogleAPIClient. This document on GitHub says to just use GoogleAPIClientForREST in the code instead of GoogleAPIClient, but I get the same error with that as well.
Then I thought maybe I could re-install the pods with some changes to Google's tutorial. In the tutorial, it says to execute this code in Terminal
$ cat << EOF > Podfile &&
> platform :ios, '7.0'
> use_frameworks!
> target 'QuickstartApp' do
> pod 'GoogleAPIClient/Drive', '~> 1.0.2'
> pod 'GTMOAuth2', '~> 1.1.0'
> end
> EOF
> pod install &&
> open QuickstartApp.xcworkspace
So I thought maybe I could replace GoogleAPIClient with GoogleAPIClientForREST in the terminal code, but that landed me with the same error.
As you can see in the screenshot, the framework is there on the left-hand side, but I'm still getting the "No such module" error.
[Screenshot: Embedded Binaries and Linked Frameworks]
[Screenshot: Search Paths]
I also found some suggestions here that I tried to follow, but I didn't completely understand the explanation. Nevertheless, I tried, and did this (if I did it wrong, please tell me).
So I'm trying to get either GoogleAPIClient or GoogleAPIClientForREST to work. Thank you for your help.
Use this for your Podfile:
platform :ios, '7.0'
use_frameworks!
target 'QuickstartApp' do
    pod 'GoogleAPIClientForREST/Drive', '~> 1.1.1'
    pod 'GTMOAuth2', '~> 1.1.0'
end
Change your import to
import GoogleAPIClientForREST
Then follow the instructions here to migrate the project:
Migrating from GoogleAPIClient to GoogleAPIClientForREST
This mostly involves changing GTL calls to GTLR calls with some word swapping. For example, GTLServiceDrive becomes GTLRDriveService.
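For a flavor of the renaming, here is a minimal before/after sketch for the Drive calls used above (based on the migration guide; the exact type names depend on which service you use):

// Before, with GoogleAPIClient (GTL prefix):
//   let service = GTLServiceDrive()
//   let query = GTLQueryDrive.queryForFilesList()

// After, with GoogleAPIClientForREST (GTLR prefix):
let service = GTLRDriveService()
let query = GTLRDriveQuery_FilesList.query()
query.pageSize = 10
query.fields = "nextPageToken, files(id, name)"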
Regarding framework search paths, this image shows the section you might need to change (note it works for me using the default):
Search paths can be per target, too. Here's an image showing the application target and the framework search paths:
So I followed the Quickstart tutorial exactly as well and was able to get it working. I moved the GoogleAPIClientForREST entry in Framework Search Paths above GTMOAuth2:
[Screenshot: Framework Search Paths]
I ran into an error in the code after successfully including the module and had to change one line, from if (result.files!.count to if (result.files!.count > 0), to get it to build and run.
Of course now, Google has deprecated GTMOAuth2 and replaced it with GTMAppAuth, which renders this app useless.
Although the solution I'm pointing you towards may be for a different library, it should still help: https://stackoverflow.com/a/25874524/5032645. Please try it and let me know if I should simplify it further for you.
First, look at the Pods_QuickstartApp.framework in the Frameworks group of your Quickstart project. If it is still red, as it is on your screenshot, then Xcode didn't build it. If Xcode didn't build the framework, Xcode can't import it for you.
CocoaPods builds a workspace including your app project, plus another project that assembles your individual pod frameworks into a larger framework.
It seems CocoaPods built your workspace, and you did open the workspace instead of the project. That's good.
Check the contents of the file named "Podfile". It should match:
platform :ios, '7.0'
use_frameworks!
target 'QuickstartApp' do
    pod 'GoogleAPIClient/Drive', '~> 1.0.2'
    pod 'GTMOAuth2', '~> 1.1.0'
end
If it doesn't, fix it, exit Xcode, delete the .xcworkspace file, and then run
pod install
from the console. That may fix your dependencies so that Xcode builds the frameworks.
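In other words, something along these lines from the project directory (a sketch; substitute your own project name):

rm -rf QuickstartApp.xcworkspace   # remove the stale workspace
pod install                        # rebuild the Pods project and regenerate the workspace
open QuickstartApp.xcworkspace     # reopen the regenerated workspace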
If you do get it to compile, your problems have just begun. Google has deprecated OAuth authorization from an embedded user-agent.

Meteor regex find() far slower than in MongoDB console

I've been researching a LOT for the past 2 weeks and can't pinpoint the exact reason my Meteor app returns results so slowly.
Currently I have only a single collection in my Mongo database, with around 200,000 documents. To search it, I use Meteor subscriptions on the basis of a given keyword. Here is my query:
db.collection.find({$or: [
    {title: {$regex: ".*java.*", $options: "i"}},
    {company: {$regex: ".*java.*", $options: "i"}}
]})
When I run the above query in the mongo shell, the results are returned instantly. But when I use it in the Meteor client, the results take almost 40 seconds to return from the server. Here is my Meteor client code:
Template.testing.onCreated(function () {
    var instance = this;
    // initialize the reactive variables
    instance.loaded = new ReactiveVar(0);
    instance.limit = new ReactiveVar(20);
    instance.autorun(function () {
        // get the limit
        var limit = instance.limit.get();
        var keyword = Router.current().params.query.k;
        var searchByLocation = Router.current().params.query.l;
        var startDate = Session.get("startDate");
        var endDate = Session.get("endDate");
        // subscribe to the posts publication
        var subscription = instance.subscribe('sub_testing', limit, keyword, searchByLocation, startDate, endDate);
        // if subscription is ready, set limit to newLimit
        $('#searchbutton').val('Searching');
        if (subscription.ready()) {
            $('#searchbutton').val('Search');
            instance.loaded.set(limit);
        } else {
            console.log("> Subscription is not ready yet. \n\n");
        }
    });
    instance.testing = function () {
        return Collection.find({}, {sort: {id: -1}, limit: instance.loaded.get()});
    };
});
And here is my meteor server code:
Meteor.publish('sub_testing', function (limit, keyword, searchByLocation, startDate, endDate) {
    Meteor._sleepForMs(200);
    var pat = ".*" + keyword + ".*";
    var pat2 = ".*" + searchByLocation + ".*";
    return Jobstesting.find({
        $or: [
            {title: {$regex: pat, $options: "i"}},
            {company: {$regex: pat, $options: "i"}},
            {description: {$regex: pat, $options: "i"}},
            {location: {$regex: pat2, $options: "i"}},
            {country: {$regex: pat2, $options: "i"}}
        ],
        $and: [{date_posted: {$gte: endDate, $lt: startDate}}]
    }, {sort: {date_posted: -1}, limit: limit, skip: limit});
});
One point I'd also like to mention: I use "Load More" pagination, and by default the limit parameter gets 20 records. On each "Load More" click I increment the limit parameter by 20, so on the first click it is 20, on the second click 40, and so on...
Any help with where I'm going wrong would be appreciated.
But when I use it in Meteor client, the results take almost 40 seconds to return from server.
You may be misunderstanding how Meteor is accessing your data.
Queries run on the client are processed on the client.
- Meteor.publish: makes data available on the server.
- Meteor.subscribe: downloads that data from the server to the client.
- Collection.find: looks through the data on the client.
If you think the Meteor side is slow, you should time it server side (print time before/after) and file a bug.
If you're implementing a pager, you might try a Meteor method instead (sketched below), or a pager package.
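For illustration only, a minimal sketch of the method approach, reusing the collection and field names from the question (the method name and everything else here is hypothetical):

// server: run the regex search server-side and return only one page of plain documents
Meteor.methods({
    searchJobs: function (keyword, limit) {
        check(keyword, String);
        check(limit, Number);
        var pat = ".*" + keyword + ".*";
        return Jobstesting.find(
            {$or: [
                {title: {$regex: pat, $options: "i"}},
                {company: {$regex: pat, $options: "i"}}
            ]},
            {sort: {date_posted: -1}, limit: limit}
        ).fetch();
    }
});

// client: call the method instead of subscribing
Meteor.call('searchJobs', 'java', 20, function (err, results) {
    if (!err) {
        console.log(results.length + ' results');
    }
});

The results are non-reactive, which is usually acceptable for search, and this avoids shipping the whole matching set into Minimongo on the client.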

Sync Framework - Conflict Resolution Triggers Change, Resulting in Unnecessary Downloads

I'm using Sync Framework v2.1 configured in a hub <--> spoke fashion.
Hub: SQL Server 2012 using SqlSyncProvider.
Spokes: LocalDb 2012 using SqlSyncProvider. Each spoke's database begins as a restored backup from the server, after which PostRestoreFixup is executed against it. In investigating this, I've also tried starting with an empty spoke database whose schema and data are created through provisioning and an initial, download-only sync.
Assume two spokes (A & B) and a central hub (let's call it H). They each have one table with one record and they're all in sync.
Spoke A changes the record and syncs, leaving A & H with identical records.
Spoke B changes the same record and syncs, resulting in a conflict with the change made in step #1. B's record is overwritten with H's, and H's record remains as-is. This is the expected/desired result. However, the SyncOperationStatistics returned by the orchestrator suggest changes are made at H. I've tried both SyncDirectionOrder directions, with these results:
- DownloadAndUpload (H's local_update_peer_timestamp and last_change_datetime are updated) -->
* Download changes total: 1
* Download changes applied: 1
* Download changes failed: 0
* Upload changes total: 1
* Upload changes applied: 1
* Upload changes failed: 0
- UploadAndDownload (H's local_update_peer_timestamp is updated) -->
* Upload changes total: 1
* Upload changes applied: 1
* Upload changes failed: 0
* Download changes total: 1
* Download changes applied: 1
* Download changes failed: 0
And, indeed, when Spoke A syncs again the record is downloaded from H, even though H's record hasn't changed. Why?
The problem arising from this is, for example, if Spoke A makes another change to the record between steps #2 and 3, that change will (falsely) be flagged as a conflict and will be overwritten at step #3.
Here's the pared-down code demonstrating the issue or, rather, my question. Note that I've implemented the provider's ApplyChangeFailed handlers such that the server wins, regardless of the SyncDirectionOrder:
private const string ScopeName = "TestScope";
private const string TestTable = "TestTable";

public static SyncOperationStatistics Synchronize(SyncEndpoint local, SyncEndpoint remote, EventHandler<DbSyncProgressEventArgs> eventHandler)
{
    using (var localConn = new SqlConnection(local.ConnectionString))
    using (var remoteConn = new SqlConnection(remote.ConnectionString))
    {
        // provision the remote server if necessary
        //
        var serverProvision = new SqlSyncScopeProvisioning(remoteConn);
        if (!serverProvision.ScopeExists(ScopeName))
        {
            var serverScopeDesc = new DbSyncScopeDescription(ScopeName);
            var serverTableDesc = SqlSyncDescriptionBuilder.GetDescriptionForTable(TestTable, remoteConn);
            serverScopeDesc.Tables.Add(serverTableDesc);
            serverProvision.PopulateFromScopeDescription(serverScopeDesc);
            serverProvision.Apply();
        }

        // provision locally (localDb), if necessary, bringing down the server's scope
        //
        var clientProvision = new SqlSyncScopeProvisioning(localConn);
        if (!clientProvision.ScopeExists(ScopeName))
        {
            var scopeDesc = SqlSyncDescriptionBuilder.GetDescriptionForScope(ScopeName, remoteConn);
            clientProvision.PopulateFromScopeDescription(scopeDesc);
            clientProvision.Apply();
        }

        // create\initialize the sync providers and go for it...
        //
        using (var localProvider = new SqlSyncProvider(ScopeName, localConn))
        using (var remoteProvider = new SqlSyncProvider(ScopeName, remoteConn))
        {
            localProvider.SyncProviderPosition = SyncProviderPosition.Local;
            localProvider.SyncProgress += eventHandler;
            localProvider.ApplyChangeFailed += LocalProviderOnApplyChangeFailed;

            remoteProvider.SyncProviderPosition = SyncProviderPosition.Remote;
            remoteProvider.SyncProgress += eventHandler;
            remoteProvider.ApplyChangeFailed += RemoteProviderOnApplyChangeFailed;

            var syncOrchestrator = new SyncOrchestrator
            {
                LocalProvider = localProvider,
                RemoteProvider = remoteProvider,
                Direction = SyncDirectionOrder.UploadAndDownload // also an issue with DownloadAndUpload
            };
            return syncOrchestrator.Synchronize();
        }
    }
}

private static void RemoteProviderOnApplyChangeFailed(object sender, DbApplyChangeFailedEventArgs e)
{
    // ignore conflicts at the server
    //
    e.Action = ApplyAction.Continue;
}

private static void LocalProviderOnApplyChangeFailed(object sender, DbApplyChangeFailedEventArgs e)
{
    // server wins, force write at each client
    //
    e.Action = ApplyAction.RetryWithForceWrite;
}
To reiterate, using this code along w/the configuration described at the outset, conflicting rows are, as expected, overwritten on the spoke containing the conflict and the server's version of that row remains as-is (unchanged). However, I'm seeing that each conflict results in an update to the server's xxx_tracking table, specifically the local_update_peer_timestamp and last_change_datetime fields. This, I'm guessing, results in a download to every other spoke even though the server's data hasn't really changed. This seems unnecessary and is, to me, counter-intuitive.

Akka 2.1 Remote: sharing actor across systems

I'm learning about remote actors in Akka 2.1 and I tried to adapt the counter example provided by Typesafe.
I implemented a quick'n'dirty console UI to send ticks, and to quit after asking for (and showing) the current count.
The idea is to start a master node that will run the Counter actor, and some client nodes that will send messages to it through remoting. I'd like to achieve this through configuration and minimal changes to code, so that by changing the configuration alone, local actors could be used instead.
I found this blog entry about a similar problem, where it was necessary that all API calls go through one actor even though many instances are running.
I wrote a similar configuration but I can't get it to work. My current code does use remoting, but it creates a new actor on the master for each new node, and I can't get it to connect to the existing actor without explicitly giving it the path (defeating the point of the configuration). This is not what I want, since state cannot be shared between JVMs this way.
Full runnable code is available through a git repo.
This is my config file:
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
    deployment {
      /counter {
        remote = "akka://ticker@127.0.0.1:2552"
      }
    }
  }
  remote {
    transport = "akka.remote.netty.NettyRemoteTransport"
    log-sent-messages = on
    netty {
      hostname = "127.0.0.1"
    }
  }
}
And full source
import akka.actor._
import akka.pattern.ask
import scala.concurrent.duration._
import akka.util.Timeout
import scala.util._

case object Tick
case object Get

class Counter extends Actor {
  var count = 0
  val id = math.random.toString.substring(2)

  println(s"\nmy name is $id\ni'm at ${self.path}\n")

  def log(s: String) = println(s"$id: $s")

  def receive = {
    case Tick =>
      count += 1
      log(s"got a tick, now at $count")
    case Get =>
      sender ! count
      log(s"asked for count, replied with $count")
  }
}

object AkkaProjectInScala extends App {
  val system = ActorSystem("ticker")

  implicit val ec = system.dispatcher

  val counter = system.actorOf(Props[Counter], "counter")

  def step {
    print("tick or quit? ")
    readLine() match {
      case "tick" => counter ! Tick
      case "quit" => return
      case _ =>
    }
    step
  }

  step

  implicit val timeout = Timeout(5.seconds)
  val f = counter ? Get
  f onComplete {
    case Failure(e) => throw e
    case Success(count) => println("Count is " + count)
  }

  system.shutdown()
}
I used sbt run and in another window sbt run -Dakka.remote.netty.port=0 to run it.
I found out I can use some sort of pattern. Akka remoting only allows deploying on remote systems; I can't find a way to make it look up an actor on the remote system just through configuration (am I mistaken here?).
So I can deploy a "scout" that passes back the ActorRef. Runnable code is available on the original repo under the branch "scout-hack", because this feels like a hack. I would still appreciate a configuration-based solution.
The actor:
case object Fetch

class Scout extends Actor {
  def receive = {
    case Fetch => sender ! AkkaProjectInScala._counter
  }
}
Creating the Counter actor is now lazy:
lazy val _counter = system.actorOf(Props[Counter], "counter")
So it only executes on the master (determined by the port) and can be fetched like this:
val counter: ActorRef = {
  val scout = system.actorOf(Props[Scout], "scout")
  val ref = Await.result(scout ? Fetch, timeout.duration) match {
    case r: ActorRef => r
  }
  scout ! PoisonPill
  ref
}
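Note that Await.result blocks the calling thread until the scout replies or the timeout expires, which is acceptable here since it happens once at startup.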
And the full config:
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
    deployment {
      /scout {
        remote = "akka://ticker@127.0.0.1:2552"
      }
    }
  }
  remote {
    transport = "akka.remote.netty.NettyRemoteTransport"
    log-sent-messages = on
    netty {
      hostname = "127.0.0.1"
    }
  }
}
EDIT: I also found a clean-ish way: check the configuration for a "counterPath" and, if present, use actorFor(path), else create the actor (see the sketch below). Nice, as you can inject the master when running, and the code is much cleaner than with the "scout", but it still has to decide whether to look up or create an actor. I guess this cannot be avoided.
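For concreteness, a minimal sketch of that idea, assuming a custom counterPath key in the configuration (the key name is made up) and the Akka 2.1-era actorFor API:

import com.typesafe.config.ConfigFactory

val config = ConfigFactory.load()

// Look up an existing remote actor if a path is configured,
// otherwise create the actor locally.
val counter: ActorRef =
  if (config.hasPath("counterPath"))
    system.actorFor(config.getString("counterPath")) // e.g. "akka://ticker@127.0.0.1:2552/user/counter"
  else
    system.actorOf(Props[Counter], "counter")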
I tried your git project and it actually works fine, aside from a compilation error, and the fact that you must pass -Dakka.remote.netty.port=0 as a parameter to the JVM when starting the sbt session, not as a parameter to run.
You should also understand that you don't have to start the Counter actor in both processes. In this example it's intended to be created from the client and deployed on the server (port 2552). You don't have to start it on the server. It should be enough to create the actor system on the server for this example.