We had a leak in our code which resulted in a few keys not getting deleted in Redis ElastiCache. We only noticed it when the count grew past 3 million. We have fixed the code around it, but we need to fix Redis as well. We can't FLUSHALL, as that would delete all the keys; we only want to delete keys older than, let's say, 15 days. I found a few commands online, like:
object idletime
del record
However, how can I iterate over 3 million records without getting the system stuck? Please help.
Thank you in advance.
For anyone facing a similar problem: the code below worked for me. It's not very efficient, but it does the job.
// patternNew, scanLimit, idletimeout and recordsDeleted are defined elsewhere in the job
Iterable<String> iter = redissonClient.getKeys().getKeysByPattern(patternNew, scanLimit);
List<String> delList = new ArrayList<>();
for (String key : iter) {
    RBucket<String> bucket = redissonClient.getBucket(key);
    // Idle time in seconds since the key was last read or written
    long idletime = bucket.getIdleTime();
    if (idletime > idletimeout) {
        delList.add(key);
    }
}
if (!delList.isEmpty()) {
    recordsDeleted += delList.size();
    RFuture<Long> count = redissonClient.getKeys().deleteAsync(delList.toArray(new String[0]));
}
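If holding millions of matching keys in one list becomes a problem, a variation is to flush the delete list in fixed-size batches while scanning. This is only a sketch under the same assumptions as above (patternNew, scanLimit and idletimeout defined elsewhere; the batch size of 500 is arbitrary):
int batchSize = 500; // arbitrary; tune for your workload
long recordsDeleted = 0;
List<String> batch = new ArrayList<>(batchSize);
for (String key : redissonClient.getKeys().getKeysByPattern(patternNew, scanLimit)) {
    if (redissonClient.getBucket(key).getIdleTime() > idletimeout) {
        batch.add(key);
    }
    if (batch.size() >= batchSize) {
        // delete(...) is synchronous and returns the number of keys removed
        recordsDeleted += redissonClient.getKeys().delete(batch.toArray(new String[0]));
        batch.clear();
    }
}
if (!batch.isEmpty()) {
    recordsDeleted += redissonClient.getKeys().delete(batch.toArray(new String[0]));
}
This way the process never holds more than one batch of key names in memory, at the cost of more round trips.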
You can configure maxmemory and set an eviction policy, which will delete keys according to that policy once the maxmemory limit is hit.
https://docs.aws.amazon.com/whitepapers/latest/database-caching-strategies-using-redis/evictions.html
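For reference, on a self-managed Redis this is a two-line change (on ElastiCache you would set the maxmemory-policy parameter through the cluster's parameter group instead; volatile-lru below is just one example policy):
CONFIG SET maxmemory 2gb
CONFIG SET maxmemory-policy volatile-lru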
I am currently doing the NEAR Stake Wars challenge and I have an issue. When I created my staking pool, I guess I used a different key than the one I had set as my validator key, so my node was kicked off the network. The solution to this is supposed to be to update the key, but when I do that I get a permissions error. After trying to figure this out for days, I am going to have to just create a new pool and start over. But that kind of sucks, because I had 2000 shardnet tokens, and what I would like to do is get them back, since the more tokens you have the more likely you are to get a seat.
It seems to me (I could be wrong here) that the issue is that the near-cli software does not want to accept a custom key path. I have the private key for the staking pool, so this does not make a lot of sense to me. This is what happens when I try to update the key:
near call luminaryvision.factory.shardnet.near update_staking_key '{"stake_public_key": "ed25519:3EL542RuLDcUDeHXF2dJYfb73nhwAfs9GTXjrknxEqRd"}' --accountId luminaryvision.shardnet.near
"ServerTransactionError: {"index":0,"kind":{"ExecutionError":"Smart contract panicked: panicked at 'assertion failed: `(left == right)`\n left: `AccountId(\"luminaryvision.shardnet.near\")`,\n right: `AccountId(\"luminaryvision\")`: Can only be called by the owner', staking-farm/src/owner.rs:148:9"}}"
Full output here: https://termbin.com/jih0
When I try to unstake, I get a strange error: "amount to unstake must be positive". But you can clearly see my staking pool has the NEAR ...
$ near state luminaryvision.factory.shardnet.near
Account luminaryvision.factory.shardnet.near
{
  amount: '143620184635167341594709',
  block_hash: 'D3spB7dKrRz8tyEsVQFMP1ahqCZ6XGL5cqEKQknVDg66',
  block_height: 1932683,
  code_hash: 'DD428g9eqLL8fWUxv8QSpVFzyHi1Qd16P8ephYCTmMSZ',
  locked: '2048074136681715248000000000',
  storage_paid_at: 0,
  storage_usage: 346690,
  formattedAmount: '0.143620184635167341594709'
}
I tried to give near-cli a custom key path, but it still complains about not being able to find the key. The other thought I had was: can I just use the key that I used for my stake pool as my validator key so that they match? Someone said I cannot do that, but I don't know why not. Is that an option?
Can anyone help me with this? Would be super appreciated.
We have partially moved some of our tables from AWS RDS to AWS Keyspaces to see if we could get better performance on Keyspaces. We have put a lot of work into migrating from MySQL to Keyspaces, and we have been monitoring the system to keep inconsistencies from creeping in. During our monitoring period, we have observed the following warnings, which coincide with high CPU and memory usage.
- DefaultTokenFactoryRegistry - [s0] Unsupported partitioner 'com.amazonaws.cassandra.DefaultPartitioner', token map will be empty.
- DefaultTopologyMonitor - [s0] Control node IPx/IPy:9142 has an entry for itself in system.peers: this entry will be ignored. This is likely due to a misconfiguration; please verify your rpc_address configuration in cassandra.yaml on all nodes in your cluster. (IPx and IPy are Cassandra node IPs.)
- Control node cassandra.{REGION}.amazonaws.com/{IP_1}:9142 has an entry for itself in system.peers: this entry will be ignored. This is likely due to a misconfiguration; please verify your rpc_address configuration in cassandra.yaml on all nodes in your cluster.
Even though these warnings do not appear immediately after we deploy our code, or in the hours that follow, they show up 24-72 hours after the deployment.
What we have done so far:
We have tried all of the connection methods in the AWS Keyspaces Developer Guide: https://docs.aws.amazon.com/keyspaces/latest/devguide/using_java_driver.html
We found an already-open discussion in the AWS forums: https://forums.aws.amazon.com/thread.jspa?messageID=945795
We configured our client as stated by an Amazonian: https://forums.aws.amazon.com/profile.jspa?userID=512911
We have also created an issue on the aws-sigv4-auth-cassandra-java-driver-plugin GitHub repo. You can see the details by following the link https://github.com/aws/aws-sigv4-auth-cassandra-java-driver-plugin/issues/24
We have walked through the DataStax Java driver code to see what's wrong. When we checked the DefaultTopologyMonitor class, we saw that there is a rule which checks whether our access point to AWS Keyspaces, {IP_2}, which resolves from the contact point [cassandra.{REGION}.amazonaws.com:9142], is the control node or not. As this IP address [{IP_2}] exists in system.peers, the control connection logic is always triggered, and its iterations and assignments consume high CPU and create garbage. As we understand it, the contact point should not be listed in system.peers, but we have no way to adjust the system.peers table or to choose the control node; these are all managed by AWS Keyspaces.
Even though it's possible to suppress the warnings by setting the log level to error, the driver still says there is a misconfiguration in cassandra.yaml, which we do not have permission to edit or even view. Is there a way to avoid this warning, or any suggested solution to this issue? Our driver configuration:
datastax-java-driver {
  basic {
    contact-points = ["cassandra.eu-west-1.amazonaws.com:9142"]
    load-balancing-policy {
      class = DefaultLoadBalancingPolicy
      local-datacenter = eu-west-1
    }
    request {
      timeout = 10 seconds
      default-idempotence = true
    }
  }
  advanced {
    auth-provider = {
      class = software.aws.mcs.auth.SigV4AuthProvider
      aws-region = eu-west-1
    }
    ssl-engine-factory {
      class = DefaultSslEngineFactory
      truststore-path = "./cassandra_truststore.jks"
      truststore-password = "XXX"
      keystore-path = "./cassandra_truststore.jks"
      keystore-password = "XXX"
    }
    retry-policy {
      class = com.ABC.DEF.config.cassandra.AmazonKeyspacesRetryPolicy
      max-attempts = 5
    }
    connection {
      pool {
        local {
          size = 9
        }
        remote {
          size = 1
        }
      }
      init-query-timeout = 5 seconds
      max-requests-per-connection = 1024
    }
    reconnect-on-init = true
    heartbeat {
      timeout = 1 seconds
    }
    metadata {
      schema {
        enabled = false
      }
      token-map {
        enabled = false
      }
    }
    control-connection {
      timeout = 1 seconds
    }
  }
}
This is indeed a non-standard, unsupported partitioner: com.amazonaws.cassandra.DefaultPartitioner. Token-aware routing won't work with AWS Keyspaces unless you write your own TopologyMonitor and TokenFactory.
I suggest that you disable token-aware routing completely; see here for instructions.
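Concretely, disabling it means turning off the driver's token metadata, which the configuration in the question already does; for reference, the minimal setting looks like this:
datastax-java-driver {
  advanced.metadata.token-map.enabled = false
}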
The warning is just letting you know that the IP will be filtered out; see the line of code here on GitHub. In Cassandra, the system.peers table contains a list of nodes minus the IP of the control node. In Amazon Keyspaces, the system.peers table also contains the control node's IP. You will see this warning when the driver initiates a connection or when the driver metadata is updated. When using Keyspaces, this warning is expected and will not impact performance. There is a patch that will resolve the warning, but I do not have an ETA to share.
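If you want to see this for yourself, you can query the table from cqlsh against the Keyspaces endpoint (peer and data_center are standard system.peers columns); the endpoint's own IP appears in the result set:
SELECT peer, data_center FROM system.peers;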
I suggest upgrading the Java driver to see if your issue is resolved. You can also download the latest SigV4 plugin, which brings in Java driver 4.13 as a dependency.
<dependency>
    <groupId>software.aws.mcs</groupId>
    <artifactId>aws-sigv4-auth-cassandra-java-driver-plugin</artifactId>
    <version>4.0.5</version>
</dependency>
Here is a sample driver config for reference.
datastax-java-driver {
  basic.contact-points = ["cassandra.us-east-2.amazonaws.com:9142"]
  basic.load-balancing-policy {
    class = DefaultLoadBalancingPolicy
    local-datacenter = us-east-2
  }
  advanced {
    auth-provider = {
      class = software.aws.mcs.auth.SigV4AuthProvider
      aws-region = us-east-2
    }
    ssl-engine-factory {
      class = DefaultSslEngineFactory
      truststore-path = "./src/main/resources/cassandra_truststore.jks"
      truststore-password = "my_password"
      hostname-validation = false
    }
  }
  advanced.metadata.token-map.enabled = false
  advanced.metadata.schema.enabled = false
  advanced.reconnect-on-init = true
  advanced.connection {
    pool {
      local.size = 3
      remote.size = 1
    }
  }
}
I have an Analytics pipeline processor added just before the standard one in the section, to delete duplicate triggered page events before everything is submitted to the database, so I can have unique triggered events; there seems to be a bug on Android/iOS devices that triggers several events within a few seconds of each other.
In this custom pipeline processor I need to get the list of all goals/events the current user has triggered in his session, so I can compare them with the values in the dataset obtained from the args parameter and delete the ones already triggered.
args.DataSet.Tables["PageEvents"] only returns the set about to be submitted to the database, which doesn't help, since it changes each time this pipeline runs. I also tried Sitecore.Analytics.Tracker.Visitor.DataSet, but I get a null value for these properties.
Does anyone know a way to get a list of all goals the user has triggered so far in his session, without requesting it directly from the database?
Some code:
public class CommitUniqueAnalytics : CommitDataSetProcessor
{
    public override void Process(CommitDataSetArgs args)
    {
        Assert.ArgumentNotNull(args, "args");
        var table = args.DataSet.Tables["PageEvents"];
        if (table != null)
        {
            // Sitecore.Analytics.Tracker.Visitor.DataSet.PageEvents - this list is always empty
            ...........
        }
    }
}
I had a similar question.
In Sitecore 7.5 I found that this worked:
Tracker.Current.Session.Interaction.Pages.SelectMany(x => x.PageEvents)
However, I'm a little worried that this will be inefficient if the Pages collection is very large.
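Building on that, here is a rough sketch of how the dedupe could slot into the question's processor. Treat it as an assumption-laden outline: PageEventDefinitionId as both the PageEventData property and the PageEvents column name is my guess, and you would also need to make sure the lookup set doesn't already include the very events in the dataset you are about to filter.
public class CommitUniqueAnalytics : CommitDataSetProcessor
{
    public override void Process(CommitDataSetArgs args)
    {
        Assert.ArgumentNotNull(args, "args");
        var table = args.DataSet.Tables["PageEvents"];
        if (table == null)
        {
            return;
        }

        // Definition IDs of events already triggered this session (per the answer above).
        var alreadyTriggered = new HashSet<Guid>(
            Tracker.Current.Session.Interaction.Pages
                .SelectMany(page => page.PageEvents)
                .Select(ev => ev.PageEventDefinitionId));

        // Drop rows from the outgoing dataset whose event was already triggered.
        foreach (DataRow row in table.Select())
        {
            if (alreadyTriggered.Contains((Guid)row["PageEventDefinitionId"]))
            {
                row.Delete();
            }
        }
    }
}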
I have a web service which will be called by, let us say, about 100,000 users at the same time (within 3 hours). The service reads and updates a SQL database using Entity Framework 4.1. Here is the code:
[WebMethod]
public bool addVotes(string username, string password, int votes)
{
    bool success = false;
    if (Membership.ValidateUser(username, password) == true)
    {
        DbContext context = new DbContext();
        AppUsers user = context.AppUsers.Where(x => x.Username.Equals(username)).FirstOrDefault();
        if (user != null)
        {
            user.Votat += votes;
            context.SaveChanges();
            success = true;
        }
    }
    return success;
}
The web service will be called from Android phones (as I said, maybe 100,000, maybe more, maybe less, but that's not important right now). Is there a deadlock possibility, or a possibility for things to go wrong?
What will happen when reading from the database, and what when updating? As one of the answers noted: I am updating just the Votat field for each user. If there is any problem with this, how do you advise me to correct it?
Thank you in advance :)
This should be fine.
The reason I say that is that, as far as I can tell, the only thing that happens when this method is called on behalf of a user is that the vote count (Votat) in their row in the database is increased. As long as they are only touching their own row, and not any row that might also be touched by one of the 99,999 other users, there is no contention between users, and this should scale well.
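One caveat: if the same user can call addVotes concurrently (say, from two devices), the read-modify-write in the question can lose one of the increments. A sketch of an atomic alternative using EF's raw SQL support; the table and column names are copied from the question's entity and may not match the actual schema:
// Performs the increment in the database itself, so two concurrent calls
// for the same user both apply instead of one overwriting the other.
// Assumes the entity maps to a table named AppUsers.
using (var context = new DbContext())
{
    context.Database.ExecuteSqlCommand(
        "UPDATE AppUsers SET Votat = Votat + {0} WHERE Username = {1}",
        votes, username);
}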
I have an application which I developed about a year ago, and I'm fetching Facebook accounts like this:
facebookClient = new DefaultFacebookClient(access_token);
Connection<CategorizedFacebookType> con = facebookClient.fetchConnection("me/accounts", CategorizedFacebookType.class);
fbAccounts = con.getData();
It worked fine until about a month ago, but now it returns the fbAccounts list empty. Why is that?
I was hoping that moving from restfb-1.6.2.jar to restfb-1.6.9.jar would help, but no luck; it comes up empty on both.
What am I missing?
EDIT, to provide the code for another error I have with this API. The following code used to work:
String id = page.getFbPageID(); // (a valid facebook page id)
FBInsightsDaily daily = new FBInsightsDaily(); // an object holding some insights values
try {
Parameter param = Parameter.with("asdf", "asdf"); // seems like the param is required
JsonObject allValues = facebookClient.executeMultiquery(createQueries(date, id), JsonObject.class, param);
daily.setPageActiveUsersDaily((Integer)(((JsonArray)allValues.opt("page_active_users_daily")).getJsonObject(0)).opt("value"));
...
This throws the following exception:
com.restfb.json.JsonException: JsonArray[0] not found.
at com.restfb.json.JsonArray.get(JsonArray.java:252)
at com.restfb.json.JsonArray.getJsonObject(JsonArray.java:341)
Again, this used to work fine but now throws this.
You need the manage_pages permission from the user to access their list of administered pages - a year ago I'm not sure you did - so check that you're obtaining that permission from your users.
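If you are building the OAuth dialog URL yourself, the fix is to add manage_pages to the scope parameter. A minimal sketch; the app ID and redirect URI below are placeholders, not values from the question:
// Hypothetical example of requesting the manage_pages permission at login time.
String appId = "YOUR_APP_ID";                           // placeholder
String redirectUri = "https://example.com/fb-callback"; // placeholder
String dialogUrl = "https://www.facebook.com/dialog/oauth"
        + "?client_id=" + appId
        + "&redirect_uri=" + redirectUri
        + "&scope=manage_pages";
// Send the user to dialogUrl; the access token you get back can then read me/accounts.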
{edit}
Some of the Insights metrics were also deprecated; the specific values you're checking may no longer exist - https://developers.facebook.com/docs/reference/fql/insights/ should have the details of what is available now.
Try checking your queries manually in the Graph API Explorer, to eliminate any issues in your code and hopefully get more detailed error messages that your SDK may be swallowing.