Amazon RDS MySQL tmpdir location - amazon-web-services

We are facing a strange problem. We are running a Magento-based store. In our admin, when we try to view orders, we get the error:
SQLSTATE[HY000]: General error: 126 Incorrect key file for table '/rdsdbdata/tmp/#sql_20b_0.MYI'; try to repair it
After a lot of research, I found that the tmp folder has run out of space.
I executed the command: show variables like '%tmpdir%'
The value returned was: /rdsdbdata/tmp
I SSHed into my server and executed: df -h
This returned:
/dev/xvda1 mounted on /
tmpfs mounted on /dev/shm
/dev/xvdb mounted on /mnt/data
But I could not find the location /rdsdbdata/tmp anywhere, so I'm not able to free up the space.

I SSHed into my server
Not really. Your database is on an RDS instance, which can't be accessed over SSH. You must have SSHed into your web server instead.
RDS provides you with a managed server with MySQL -- and nothing else -- running on it. It's not the machine where you were looking. You can't perform any administration on the underlying server. Everything -- including increasing the amount of allocated storage -- is done through the AWS console or API.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ModifyInstance.MySQL.html
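As a rough sketch, the allocated storage can also be increased with the AWS CLI rather than the console (the instance identifier mystore-db and the 100 GiB target are placeholders; this assumes the CLI is configured with appropriate credentials):
# Increase the allocated storage of the RDS instance (placeholder values)
aws rds modify-db-instance \
    --db-instance-identifier mystore-db \
    --allocated-storage 100 \
    --apply-immediately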

Related

Moving Existing Directories to a new EBS Volume in AWS (Red Hat)

I have an EC2 instance with a 20GB root volume. I attached an additional 200GB volume for partitioning to comply with DISA/NIST 800-53 STIG by creating separate partitions for directories such as /home, /var, and /tmp, as well as others required by company guidelines. Using Red Hat Enterprise Linux 7.5. I rarely ever do this and haven't done it for years so I'm willing to accept stupid solutions.
I've followed multiple tutorials using various methods and get the same result each time. The short version (from my understanding) is that the OS cannot access certain files/directories on these newly mounted partitions.
Example: when I mount, say, /var/log/audit, the audit daemon fails:
"Job for auditd.service failed because the control process exited with error code. See..."
systemctl status auditd says "Failed to start Security Auditing Service". I am also unable to log in via public key when I mount /home, but these types of problems go away when I unmount the new partitions. journalctl -xe reveals "Could not open dir /var/log/audit (Permission denied)".
Permission for each dir is:
drwxr-xr-x root root /var
drwxr-xr-x root root /var/log
drws------ root root /var/log/audit
Which is consistent with the base OS image, which DOES work when the partition isn't mounted.
What I did:
-Created an EC2 instance with 2 volumes (EBS)
-Partitioned the new volume /dev/xvdb
-Formatted the partitions as ext4
-Created /etc/fstab entries for the partitions
-Mounted the partitions to a temp place in /mnt, then copied the contents using rsync -av <source> <dest>
-Unmounted the new partitions and updated fstab to reflect the actual mount locations, e.g. /var/log/audit (roughly as in the command sketch below)
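For reference, a minimal sketch of that procedure for a single directory (device name /dev/xvdb1, staging path, and mount point are illustrative assumptions; this mirrors the steps above rather than fixing the problem):
# Partition and format the new volume (illustrative device/partition names)
sudo parted -s /dev/xvdb mklabel gpt
sudo parted -s /dev/xvdb mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/xvdb1
# Stage the partition, copy the existing data, then unmount
sudo mkdir -p /mnt/staging
sudo mount /dev/xvdb1 /mnt/staging
sudo rsync -av /var/log/audit/ /mnt/staging/
sudo umount /mnt/staging
# Add the permanent fstab entry and mount at the final location
echo "UUID=$(sudo blkid -s UUID -o value /dev/xvdb1) /var/log/audit ext4 defaults 0 2" | sudo tee -a /etc/fstab
sudo mount /var/log/audit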
I've tried:
-Variations such as different disk utilities (e.g. fdisk, parted)
-Using different partition schemes (GPT, DOS [default], Windows basic [default for the root volume, not sure why], etc.)
-Using the root volume from an EC2 instance, detaching it, attaching it to another instance as a 2nd volume, and only repartitioning
Ownership, permissions, and directory structures are exactly the same, as far as I can tell, between the original directory and the directory created on the new partitions.
I know it sounds stupid, but did you try restarting the instance after the new mounts?
The reason I suggest this is that Linux caches the dir/file path-to-inode mapping. When you change a mount, I am not sure whether that cache is invalidated, and that could possibly be the reason for the errors.
Also, though it is for Ubuntu, have a look at: https://help.ubuntu.com/community/Partitioning/Home/Moving

Mount persistent disk with data to VM without formatting

I am trying to mount a persistent disk with data to a VM to use it in Google Datalab. So far no success; ideally, I would like to see my files in the Datalab notebook.
First, I added the disk in VM settings with Read/Write mode.
Second, I ran lsblk to see what disks there are.
Then tried this: sudo mount -o ro /dev/sdd /mnt/disks/z
I got this error:
wrong fs type, bad option, bad superblock on /dev/sdd,
P.S. I used the disk I want to mount on another VM and downloaded some data onto it. It was formatted as an NTFS disk.
Any ideas?
You would need to create a file system, as noted in the following document:
$ mkfs.vfat /dev/sdX2
Note that parted's mkpart only creates a new partition, without creating a new file system on that partition:
Command: mkpart [part-type fs-type name] start end
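As a rough sketch of that suggestion (device and mount point taken from the question; ext4 instead of vfat is an assumption for a Linux data disk, and note that mkfs destroys whatever is already on the disk):
# WARNING: this erases the existing NTFS contents of /dev/sdd
sudo mkfs.ext4 /dev/sdd
sudo mkdir -p /mnt/disks/z
sudo mount /dev/sdd /mnt/disks/z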

Redis telling me "Failed opening .rdb for saving: Permission denied"

I'm running Redis server 2.8.17 on a Debian server 8.5. I'm using Redis as a session store for a Django 1.8.4 application.
I haven't changed the software configuration on my server for a couple of months and everything was working just fine until a week ago when Django began raising the following error:
MISCONF Redis is configured to save RDB snapshots but is currently not able to persist to disk. Commands that may modify the data set are disabled. Please check Redis logs for details...
I checked the redis log and saw this happening about once a second:
1 changes in 900 seconds. Saving...
Background saving started by pid 22213
Failed opening .rdb for saving: Permission denied
Background saving error
I've read these two SO questions 1, 2 but they haven't helped me find the problem.
ps shows that user "redis" is running the server:
redis 26769 ... /usr/bin/redis-server *.6379
I checked my config file for the redis file name and path:
grep ^dir /etc/redis/redis.conf =>
dir /var/lib/redis
grep ^dbfilename /etc/redis/redis.conf =>
dbfilename dump.rdb
The permissions on /var/lib/redis are 755 and it's owned by redis:redis.
The permissions on /var/lib/redis/dump.rdb are 644 and it's owned by redis:redis too.
I also ran strace on the server process:
ps -C redis-server # pid = 26769
sudo strace -p 26769 -o /tmp/strace.out
But when I examine the output, I don't see any errors. In particular I don't see a "Permission denied" error as I would expect.
Also, /var/lib/redis is not an NFS directory.
Does anyone know what else could be causing this? I'd hate to have to stop using Redis. I know I can run the command "set stop-writes-on-bgsave-error no", but that doesn't solve the problem.
This is now happening on a daily basis and the only way I can stop the error is to restart the Redis server.
Thanks.
I just had a similar issue. Despite my config file being correct, when I checked the actual dbfilename and dir in redis-client, they were incorrect.
Run redis-cli and then
CONFIG GET dbfilename, which should return something like:
1) "dbfilename"
2) "dump.rdb"
1) is just the key and 2) is the value. Similarly, CONFIG GET dir should return something like
1) "dir"
2) "/var/lib/redis"
Confirm that these are correct and, if not, set them with CONFIG SET dir /correct/path.
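For completeness, a minimal redis-cli session for this check might look like the following; the CONFIG REWRITE step, which persists the runtime change back into redis.conf, is an extra suggestion here (available since Redis 2.8):
$ redis-cli
127.0.0.1:6379> CONFIG GET dir
1) "dir"
2) "/var/lib/redis"
127.0.0.1:6379> CONFIG SET dir /var/lib/redis
OK
127.0.0.1:6379> CONFIG REWRITE
OK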
Hope this helps!
If you have moved Redis to a new mounted volume, e.g. /mnt/data-01:
sudo vim /etc/systemd/system/redis.service
Set ReadWriteDirectories=-/mnt/data-01
sudo mkdir /mnt/data-01/redis
Set chown and chmod on the new Redis data dir and rdb file, matching the originals:
The permissions on /var/lib/redis are 755 and it's owned by redis:redis
The permissions on /var/lib/redis/dump.rdb are 644 and it's owned by redis:redis
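A minimal sketch of those steps, assuming the unit file path above and /mnt/data-01/redis as the new data directory:
# Grant the service write access to the new volume in the unit file:
#   ReadWriteDirectories=-/mnt/data-01
sudo mkdir -p /mnt/data-01/redis
sudo chown -R redis:redis /mnt/data-01/redis
sudo chmod 755 /mnt/data-01/redis
# Reload systemd and restart Redis so the unit change takes effect
sudo systemctl daemon-reload
sudo systemctl restart redis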
Switch configurations while redis is running
$ redis-cli
127.0.0.1:6379> CONFIG SET dir /data/tmp
127.0.0.1:6379> CONFIG SET dbfilename temp.rdb
127.0.0.1:6379> BGSAVE
Check the Redis log under /var/log/redis/ to verify the save succeeded
Start Redis Server in a directory where Redis has write permissions
The answers above will definitely solve your problem, but here's what's actually going on:
The default location for storing the dump.rdb file is ./ (denoting the current directory). You can verify this in your redis.conf file. Therefore, the directory from which you start the Redis server is where the dump.rdb file will be created and updated.
Since you say your Redis server has been working fine for a while and this just started happening, it seems you have started running the Redis server in a directory where Redis does not have the correct permissions to create the dump.rdb file.
To make matters worse, Redis will probably not allow you to shut down the server until it is able to create the rdb file, to ensure the proper saving of data.
To solve this problem, you must go into the active Redis client environment using redis-cli, update the dir key, and set its value to your project folder or any folder where a non-root user has permission to save. Then run BGSAVE to invoke the creation of the dump.rdb file.
CONFIG SET dir "/hardcoded/path/to/your/project/folder"
BGSAVE
(Now, if you need to save the dump.rdb file in the directory that you started the server in, then you will need to change the permissions on that directory so that Redis can write to it. You can search Stack Overflow for how to do that.)
You should now be able to shut down the Redis server. Note that we hardcoded the path. Hardcoding is rarely a good practice, and I highly recommend starting the Redis server from your project directory and changing the dir key back to ./:
CONFIG SET dir "./"
BGSAVE
That way when you need redis for another project, the dump file will be created in your current project's directory and not in the hardcoded path's project directory.
You can resolve this problem by going into redis-cli:
Type redis-cli in the terminal
Then run config set stop-writes-on-bgsave-error no; that resolved my problem.
Hope it resolves your problem.
Up to Redis 3.2, it shipped with pretty insane defaults that left the port open to the public. In combination with the CONFIG SET command, anybody could easily change your Redis config from outside. If the error started after some time, someone has probably changed your config.
On your local machine check that
telnet SERVER_IP REDIS_PORT
is denied. Otherwise, check your config; you should have the setting
bind 127.0.0.1
enabled.
Depending on the user that runs Redis, you should also check for any damage the intruder may have done.
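A quick way to check the binding on the server itself (a sketch; the config path matches the question, and redis-server as the service name is an assumption based on the Debian package default):
# Show which address Redis is listening on (should be 127.0.0.1:6379)
sudo ss -lntp | grep 6379
# Check the bind directive in the config
grep ^bind /etc/redis/redis.conf
# After adding/fixing "bind 127.0.0.1", restart the service
sudo systemctl restart redis-server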

Cannot access cloudera manager on port 7180

Installing Cloudera Manager on an AWS EC2 instance, following the official instruction:
http://www.cloudera.com/documentation/archive/manager/4-x/4-6-0/Cloudera-Manager-Installation-Guide/cmig_install_on_EC2.html
I successfully ran the .bin package, but when I visit IP:7180, the browser says my access has been denied... Why?
I tried to confirm the status of cm server: service cloudera-scm-server status. At first it said
cloudera-scm-server is dead and pid file exists
The log file mentioned "unknown host ip-10-0-0-110", so I added a mapping between ip-10-0-0-110 and the EC2 instance's **public** IP, then restarted the scm-server service. It then ran normally, but IP:7180 remained inaccessible, saying ERR_CONNECTION_REFUSED. I have uninstalled iptables and turned off my Windows firewall.
After a few minutes, "cloudera-scm-server is dead and pid file exists" appeared again...
Using: tail -40 /var/log/cloudera-scm-server/cloudera-scm-server.out
JAVA_HOME=/usr/lib/jvm/java-7-oracle-cloudera Java HotSpot(TM) 64-Bit
Server VM warning: INFO: os::commit_memory(0x0000000794223000,
319201280, 0) failed; error='Cannot allocate memory' (errno=12)
There is insufficient memory for the Java Runtime Environment to continue.
Native memory allocation (malloc) failed to allocate 319201280 bytes for committing reserved memory.
An error report file with more information is saved as:
/tmp/hs_err_pid5523.log
What type of EC2 instance are you using? The error is pretty descriptive and indicates that CM is unable to allocate memory. Maybe you are using an instance type with too little RAM.
Also - the docs you are referencing are out of date. The latest docs on deploying CDH5 in the cloud can be found here: https://www.cloudera.com/documentation/director/latest/topics/director_get_started_aws.html
These docs also recommend using Cloudera Director which will simplify much of the deployment and configuration of your cluster.
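To confirm whether RAM is the bottleneck, you can check the instance type and available memory on the host (a quick sketch; the metadata URL is the standard EC2 endpoint):
# Instance type reported by the EC2 instance metadata service
curl -s http://169.254.169.254/latest/meta-data/instance-type
# Available memory in MB
free -m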

unable to export HDFS FUSE mount over NFS

I'm using CDH 4.3.0 and I have mounted HDFS using FUSE on my edge node. The FUSE mount point automatically changes the ownership from root:root to hdfs:hadoop.
When I try to export it over NFS, it throws a permissions error. Can anyone help me get around this? I've read somewhere that it works only in kernel versions 2.6.27 and above, and mine is 2.6.18... Is this true?
my mount command gives this output for hdfs fuse:
fuse on /hdfs-root/hdfs type fuse (rw,nosuid,nodev,allow_other,allow_other,default_permissions)
my /etc/exports looks like this:
/hdfs-root/hdfs/user *(fsid=0,rw,wdelay,anonuid=101,anongid=492,sync,insecure,no_subtree_check,no_root_squash)
my /etc/fstab looks like this:
hadoop-fuse-dfs#hdfs:: /hdfs-root/hdfs fuse allow_other,usetrash,rw 2 0
:/hdfs-root/hdfs/user /hdfsbkp nfs rw
The first line is for FUSE, and the second line mounts the NFS-exported HDFS path at /hdfsbkp.
when I run mount -a, I get the following error...
"mount: :/hdfs-root/hdfs/user failed, reason given by server: Permission denied"
Also, I tried to change the ownership of the FUSE mount to root:root, but it wouldn't let me do that either... By the way, we are using Kerberos as the authentication method to access Hadoop...
Any help is really appreciated!!!