We use RabbitMQ 3.10.10 (hosted on Amazon AWS) and created some queues with type "classic", auto delete "no", and durability "durable". But all of the queues were deleted after a few weeks and we don't know why, since auto delete was set to "no".
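For reference, the queues are declared roughly like this (a sketch using the pika client; the host and queue name are placeholders, not our real configuration):

# Sketch of how the queues are declared (pika; placeholders throughout).
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host='rabbitmq.example.internal')  # placeholder host
)
channel = connection.channel()

channel.queue_declare(
    queue='orders',                         # placeholder queue name
    durable=True,                           # durability: "durable"
    auto_delete=False,                      # auto delete: "no"
    arguments={'x-queue-type': 'classic'},  # queue type: "classic"
)

connection.close()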
We use many Docker containers for our applications. Could there be a problem when we restart a container?
Does anyone know what the problem was? Some wrong setting? Or is there a default value that causes this?
Thanks for your help!
Related
This is driving me insane, so hopefully someone can help.
I am attempting to upgrade/migrate an Aurora MySQL Serverless instance from V1 to V2 using the process found in the documentation. When I reach step 4...
Restore the snapshot to create a new, provisioned DB cluster running Aurora MySQL version 3 that's compatible with Aurora Serverless v2, for example, 3.02.0.
... the database that results from the restored snapshot is Aurora v2 again, even though the cluster was v3 until the database was created. This means that I can't change it to Serverless V2 (I hate how confusing these version numbers are...).
I've tried several different tiers and types of provisioned databases for the interim copies, and I've tried using the CLI tool in case it was an issue with the GUI, and I get the same result every time.
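For reference, the restore step I'm running is roughly equivalent to the following boto3 calls (the identifiers, region, and instance class are placeholders, not my real values):

# Sketch of the restore step: snapshot of the Serverless v1 cluster restored
# into a provisioned Aurora MySQL v3 cluster, then a provisioned instance added.
import boto3

rds = boto3.client('rds', region_name='us-east-1')  # placeholder region

rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier='aurora-v3-restored',      # placeholder
    SnapshotIdentifier='serverless-v1-snapshot',   # placeholder
    Engine='aurora-mysql',
    EngineVersion='8.0.mysql_aurora.3.02.0',       # Aurora MySQL version 3.02.0
)

rds.create_db_instance(
    DBInstanceIdentifier='aurora-v3-restored-1',   # placeholder
    DBClusterIdentifier='aurora-v3-restored',
    Engine='aurora-mysql',
    DBInstanceClass='db.r6g.large',                # one of the tiers I tried
)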
Has anyone run into this? Am I just missing something? I'm pretty much at a complete loss here, so any help is appreciated.
I'm not entirely sure what happened initially, but trying again on a different day resulted in an error log. Previously, no logs were coming through on the migrated instance. It may have been my impatience then, but at least now I have an answer.
In my case, some corrupted views that were part of legacy code were blocking the migration. If anyone else runs into this, make sure you give the log files time to generate, and look at the upgrade_prechecks.log file to see what the actual errors are.
More information about the logs and how to find them can be found in the official documentation.
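If it helps, the logs can also be pulled without the console; a rough boto3 sketch (the instance identifier is a placeholder, and the exact log file name should be taken from the listing, since it may differ from what I wrote above):

# Sketch: list the RDS log files and download the upgrade precheck log.
import boto3

rds = boto3.client('rds', region_name='us-east-1')  # placeholder region

# List the log files available on the restored instance.
files = rds.describe_db_log_files(DBInstanceIdentifier='aurora-v3-restored-1')  # placeholder
for f in files['DescribeDBLogFiles']:
    print(f['LogFileName'], f['Size'])

# Download the precheck log (use the exact name from the listing above).
portion = rds.download_db_log_file_portion(
    DBInstanceIdentifier='aurora-v3-restored-1',
    LogFileName='upgrade_prechecks.log',
)
print(portion['LogFileData'])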
I am deploying a pipeline to Google Cloud DataFlow using Apache Beam. When I want to deploy a change to the pipeline, I drain the running pipeline and redeploy it. I would like to make this faster. It appears from the logs that on each deploy DataFlow builds up new worker nodes from scratch: I see Linux boot messages going by.
Is it possible to drain the pipeline without tearing down the worker nodes so the next deployment can reuse them?
Rewriting Inigo's answer here:
Answering the original question: no, there's no way to do that. Updating should be the way to go. I was not aware it was marked as experimental (we should probably change that), but the update approach has not changed in the 3 years I have been using Dataflow. As for the special cases where update doesn't work: even if the feature you're asking about existed, the workers would still need the new code, so there really isn't much to save, and update should work in most of the other cases.
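A minimal sketch of what an update-style redeploy looks like with the Beam Python SDK (the project, region, bucket, topics, and job name are placeholders; the job name has to match the running job, and transform names need to stay compatible or be remapped with --transform_name_mapping):

# Sketch: redeploy by updating the running Dataflow job instead of draining it.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=my-project',                # placeholder
    '--region=us-central1',                # placeholder
    '--temp_location=gs://my-bucket/tmp',  # placeholder
    '--streaming',
    '--job_name=my-streaming-job',         # must match the running job's name
    '--update',                            # replace the running job in place
])

with beam.Pipeline(options=options) as p:
    (p
     | 'Read' >> beam.io.ReadFromPubSub(topic='projects/my-project/topics/in')
     | 'Transform' >> beam.Map(lambda msg: msg.upper())  # the changed logic
     | 'Write' >> beam.io.WriteToPubSub(topic='projects/my-project/topics/out'))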
I have tried to reboot an Amazon RDS instance and the status is stuck at "Rebooting". It's been 3 days now and it still shows the same message. I have tried killing all the processes running on the database, but it did not work.
I'm also unable to take a snapshot because of this.
Please suggest a solution.
Image for Reference
Contact AWS support.
Realistically, that is the only way you will get this resolved. Three days implies something has gone wrong with the underlying system, and they will need to get their system engineers involved to resolve it.
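While you wait, you can at least see what RDS itself is reporting; a quick boto3 sketch along these lines (the instance identifier and region are placeholders):

# Check the instance status and the recent RDS events for the stuck instance.
import boto3

rds = boto3.client('rds', region_name='us-east-1')  # placeholder region

instance = rds.describe_db_instances(DBInstanceIdentifier='my-db-instance')  # placeholder
print(instance['DBInstances'][0]['DBInstanceStatus'])

events = rds.describe_events(
    SourceIdentifier='my-db-instance',
    SourceType='db-instance',
    Duration=3 * 24 * 60,  # the last 3 days, in minutes
)
for event in events['Events']:
    print(event['Date'], event['Message'])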
I have an EC2 instance running at AWS with some standard webpages. For a few days now, the server has been replying with "AWS!" instead of delivering the index page. Checking the source code of this page:
<html><body><script type="text/javascript" src="http://gc.kis.v2.scr.kaspersky-labs.com/XXXXX/main.js" charset="UTF-8"></script>AWS!</body></html>
Kaspersky is not installed on this instance. I haven't found any hints on Google so far; maybe someone has had a similar experience and can give me a hint as to why my index page is no longer shown (the code was not changed). Has AWS perhaps undergone a change?
Any hint is much appreciated.
Issue fixed: stdunbar & John pointed in the right direction. The DNS entry was wrong.
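For anyone hitting something similar, a quick way to confirm where the domain actually resolves (the hostname is a placeholder; compare the result with the instance's public or Elastic IP in the EC2 console):

# Check which IP the domain currently resolves to.
import socket

resolved_ip = socket.gethostbyname('www.example.com')  # placeholder hostname
print('DNS resolves to:', resolved_ip)
# Compare this with the instance's public IP / Elastic IP shown in the EC2 console.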
I have a project deployed on an EC2 instance and it is up.
But sometimes when I log in through FTP and transfer the updated build to the EC2 instance, some of my project files go missing.
After a while, that set of files is listed in the same place again.
I can't figure out why this unexpected behavior is happening. Let me know if anyone has faced a similar situation.
Also, can anyone give me a way to see which logins are being made through FTP and SSH on my EC2 instance?
Files don't just randomly go missing on an EC2 instance. I suspect there is something going on and you'll need to diagnose it. There is not enough information here to help you, but I can try to point you in the right direction.
A few things that come to mind are:
What are you running to execute the FTP transfer? If the files appear after some time, are you sure the transfer isn't simply still in progress when you first check and the files show up once it finishes? Are you sure nothing is being cached?
Are you sure your FTP client is connected to the right instance?
Are you sure there are no cron tasks or external entities connecting to the instance and cleaning out a certain directory? You mentioned a build; is this a build agent you're performing this on?
I highly doubt it's this one, but: what type of volume are you working on? EBS? Instance store? Instance store is ephemeral, so stopping/starting the instance can result in data being lost.
Have you tried using scp?
If you're still stumped, please provide more info on your EC2 config and how you're transferring the files.
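As for the side question about auditing FTP and SSH logins, these are the usual places to look (a sketch; the auth log path is an assumption and varies by distro: /var/log/secure on Amazon Linux, /var/log/auth.log on Ubuntu/Debian, and reading it usually requires sudo; FTP logins depend on the daemon, with vsftpd typically logging to /var/log/vsftpd.log):

# Sketch: review recent logins on the instance.
import subprocess

# Login/logout history from the wtmp database (covers SSH sessions).
print(subprocess.run(['last', '-a'], capture_output=True, text=True).stdout)

# Accepted SSH authentications from the auth log (path is an assumption).
AUTH_LOG = '/var/log/secure'  # use /var/log/auth.log on Ubuntu/Debian
with open(AUTH_LOG) as log:
    for line in log:
        if 'Accepted' in line:
            print(line.rstrip())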