WHM cPanel / AWS hosting, this partition is full, how?

I've got this hosting account with AWS, and I had a partition that was getting really full. So I hired someone to create a new partition and he showed me how to migrate a few accounts into the new partition.
I thought this would bring down the amount of data in the near-full partition, but it hasn't.
As you can see from the screenshot, the partition called 'home3' is completely full, and 'home4' (the new partition) is slowly filling up.
I'm assuming home3 is full of backups or something.
How do I clean up home3 without messing with command-line tools? Or do I need to hire a real pro to do this? Is there something in WHM that lets me do a cleanup? Because if there is, I can't find it.

If you are not sure what is stored in the /home3 partition, you should hire a server admin to check it and perform the necessary migration for you. Before you delete any data, you should know whether that data is important or not.
Also, in some cases a server reboot is required after deleting or moving data from a full partition; the actual free space may only be reflected after the reboot (typically because a running process still holds the deleted files open).

Related

Google Cloud SQL - Database instance storage size increased dramatically every day

I have a database instance (MySQL 8) on Google Cloud, and for the last 20 days the instance's storage usage has just kept increasing (approx. 2 GB every single day!).
But I couldn't find out why.
What I have done:
Took a look at the "Point-in-time recovery" option: it's already disabled.
Binary logging is not enabled.
Checked the actual database size: my database is only 10 GB.
No innodb_file_per_table flag is set, so it is left at its default.
Storage usage chart:
Database flags:
The actual database size is 10 GB, but the storage usage is now up to 220 GB! That's a lot of money!
I couldn't resolve this issue; please give me some ideas. Thank you!
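For reference, the checks above can be reproduced with standard MySQL statements like the following sketch (nothing Cloud SQL specific is assumed, only the stock information_schema views):

-- Data + index size per schema, in GB
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
GROUP BY table_schema;

-- Is binary logging on, and how much space do the binlogs occupy?
-- (SHOW BINARY LOGS errors out if binary logging is disabled)
SHOW VARIABLES LIKE 'log_bin';
SHOW BINARY LOGS;

-- Current value of innodb_file_per_table
SHOW VARIABLES LIKE 'innodb_file_per_table';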
I had the same thing happen to me about a year ago. I couldn't determine any root cause of the huge increase in storage size. I restarted the server and the problem stopped. None of my databases experienced any significant increase in size. My best guess is that some runaway process caused the binlog to blow up.
Turns out the problem was in a WordPress theme's function called "related_products", which reads and writes every instance of the products a user comes across (millions per day) and made the database physically blow up.
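If a single table is being inflated like that, listing the largest tables is a quick way to spot it. This is a sketch using only standard information_schema columns:

-- Ten largest tables across all schemas, including space allocated but unused
SELECT table_schema, table_name,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb,
       ROUND(data_free / 1024 / 1024, 1) AS free_mb
FROM information_schema.tables
ORDER BY (data_length + index_length) DESC
LIMIT 10;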

CouchDB with Clouseau plugin is taking more storage than expected

I've been using an AWS instance with CouchDB as a backup (via replication) for my application's IBM Cloudant database.
Everything seems to work fine, but I've been noticing a constant increase in the volume size on the AWS instance (it keeps filling up, with the annoying chore of growing the volume once the partition runs out of space).
Actual use of storage
The data in the screenshot is using almost 250 GB. I would like to know the possible reason for this issue, my guess is that the Clouseau plugin is using more space to enable the search index queries.
As I'm not an expert with this database, could anyone explain why this is happening and how I could mitigate the issue?
My best regards!
If you are only backing up a Cloudant database to a CouchDB instance via replication, you should not need Clouseau enabled.
Clouseau is only required for search indices and if you are not doing queries on your backup database you can disable Clouseau in there. The indices are not backed up in the replication.

Google Cloud hard disk deleted, all data lost

My Google Cloud VM's hard disk got full, so I tried to increase its size. I have done this before, but this time things went differently. I increased the size, but the VM was not picking up the new size, so I stopped the VM. Next thing I know, my VM got deleted and recreated, and my hard disk returned to its previous size with all data lost. It had my database with over 2 months of changes.
I admit I was careless not to back up. But currently my concern is: is there a way to retrieve the data? On Google Cloud, it shows $400 for the Gold Plan, which includes Tech Support. If I can be certain that they will be able to recover the data, I am willing to pay. Does anyone know whether, if I pay the $400, the Google support team will be able to recover the data?
If there are other ways to recover data, kindly let me know.
UPDATE:
Few people have shown interest in investigating this.
This most likely happened because the "Auto-delete boot disk" option is selected by default, which I was not aware of. But even then, I would expect the auto-delete to happen when I delete the VM, not when I simply stop it.
I am attaching screenshot of all activities that happened after I resized the boot partition.
As you can see, I resized the disk at 2:00AM.
After receiving resize successful message, I stopped the VM.
Suddenly, at 2:01, the VM got deleted.
At this point I had not checked the notifications; I simply thought it had stopped. I then started the VM, hoping to see the newly resized disk.
Instead of starting my VM, a new VM was created with a new disk, and all the previous data was lost.
I tried stopping and starting the VM again, but the result was still the same.
UPDATE:
Adding activities before the incident.
It is not possible to recover deleted PDs.
You have no snapshots either?
The disk may have been marked for auto-delete.
However, this disk shouldn't have been deleted when the instance was stopped even if it was marked for auto-delete.
You can also only recover a persistent disk from a snapshot.
In a managed instance group, when you stop an instance, the health check fails and the MIG deletes and recreates the instance if the autoscaler is on. The process is discussed here. I hope that sheds some light, if that is your use case.

Cannot restore lifetimevirtualgood item that was given as a gift

I use this method
soomla::CCStoreInventory::sharedStoreInventory()->giveItem(REMOVE_ADS_ITEM_ID, 1);
to give the player one remove-ads item. After that, the player removes and reinstalls the app, then clicks the Restore Purchases button, but no remove-ads item is restored.
I'm confused: can a given item not be restored, or is there something I missed? Please help.
The restore functionality works by looking up what IAPs the user owns (on the App Store/Google Play/etc.), and gives each non-consumable locally so that Soomla knows about it.
Since you're just giving the item locally, the restore has no idea that the item was ever granted (reinstalling wipes the local data that says it was). It's still only looking at the official stores.
What you can do is sync what items the user owns to the cloud, using a UID, and restore from that. If you want complete control, this is the best bet, but it involves running your own servers and coming up with a way of generating a UID purely from device information, which is not one-size-fits-all. Then you'd give the items locally once you can verify on your server that the same user owns them.
But there is an easier way. Soomla has an official implementation where they do all of this for you: Grow Sync.
Update (May 2016): Soomla is now shutting down Grow Sync, Highway, etc. so you can no longer rely on those services.

Sitecore Database Cleanup Fails

Sitecore 6.6
I'm speaking with Sitecore Support about this as well, but thought I'd reach out to the community too.
We have a custom agent that syncs media on the file system with the media library. It's a new agent and we made the mistake of not monitoring the database size. It should be importing about 8 gigs of data, but the database ballooned to 713 GB in a pretty short amount of time. Turns out the "Blobs" table in both "master" and "web" databases is holding pretty much all of this space.
I attempted to use the "Clean Up Databases" tool from the Control Panel. I only selected one of the databases. This ran for 6 hours before it bombed due to consuming all the available locks on the SQL Server:
Exception: System.Data.SqlClient.SqlException
Message: The instance of the SQL Server Database Engine cannot obtain a LOCK
resource at this time. Rerun your statement when there are fewer active users.
Ask the database administrator to check the lock and memory configuration for
this instance, or to check for long-running transactions.
It then rolled everything back. Note: I increased the SQL and DataProvider timeouts to infinity.
Anyone else deal with something like this? It would be good if I could 'clean up' the databases in smaller chunks to avoid overwhelming the SQL Server.
Thanks!
Thanks for the responses, guys.
I also spoke with support and they were able to provide a SQL script that will clean the Blobs table:
-- Collect the blob IDs that are still referenced from the Fields table
DECLARE @UsableBlobs table(
ID uniqueidentifier
);
INSERT INTO @UsableBlobs
select convert(uniqueidentifier,[Value]) as EmpID from [Fields]
where [Value] != ''
and (FieldId='{40E50ED9-BA07-4702-992E-A912738D32DC}' or FieldId='{DBBE7D99-1388-4357-BB34-AD71EDF18ED3}')

-- Delete unreferenced blobs in chunks of 1000 rows
delete top (1000) from [Blobs]
where [BlobId] not in (select * from @UsableBlobs)
The only change I made to the script was to add the "top (1000)" so that it deleted in smaller chunks. I eventually upped that number to 200,000 and it would run for about an hour at a time.
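If you want the chunked delete to keep running until the Blobs table is clean, a sketch like this (run in the same batch as the @UsableBlobs declaration above, so the table variable is still in scope) wraps it in a loop; each chunk commits on its own, so locks are released between iterations:

-- Repeat the chunked delete until no unreferenced blobs remain
while 1 = 1
begin
    delete top (1000) from [Blobs]
    where [BlobId] not in (select * from @UsableBlobs);

    if @@ROWCOUNT = 0 break;  -- no more orphaned blobs
end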
Regarding cause, we're not quite sure yet. We believe our custom agent was running too frequently, causing the inserts to stack on top of each other.
Also note that there was a Sitecore update that apparently addressed a problem with the Blobs table getting out of control. The update was 6.6, Update 3.
I faced such a problem previously, and we contacted Sitecore Support.
They gave us a Sitecore Support DLL and suggested a Web.config change for the DataProvider -- from the main type="Sitecore.Data.$(database).$(database)DataProvider, Sitecore.Kernel" to the new one.
The reason I am posting on your question is that most of the time taken in our case was in cleaning up blobs, and the DLL they gave us was to speed up the blob cleanup. So I think it might help you too.
Hence, I would suggest contacting Sitecore Support in this case as well; I am sure you will get the best solution for your case.
Hope this helps you!
Regards,
Varun Shringarpure
If you have a staging environment, I would recommend taking a copy of the database and trying to shrink it there. Part of the database size might also be in the transaction log.
If you have a DBA, please get them involved.
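For example, on the staging copy you can check how much of the size sits in the transaction log and then shrink that file. This is a rough sketch, and the database and logical log file names below are placeholders for your own:

-- Log size and percent used for every database on the instance
DBCC SQLPERF(LOGSPACE);

-- Find the logical file names and sizes for the database (placeholder name)
USE Sitecore_Master;
SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;

-- Shrink the log file (placeholder logical name) down to roughly 1 GB
DBCC SHRINKFILE (Sitecore_Master_log, 1024);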