In my lab, HBase's archived Write-Ahead Logs (the files under oldWALs) are not being deleted, and the oldWALs directory is quickly growing into the terabytes:
8.1 K 24.4 K /hbase/.hbase-snapshot
0 0 /hbase/.hbck
0 0 /hbase/.tmp
3.6 K 10.7 K /hbase/MasterProcWALs
900.3 M 7.1 G /hbase/WALs
3.4 G 10.3 G /hbase/archive
0 0 /hbase/corrupt
938.7 G 2.8 T /hbase/data
42 84 /hbase/hbase.id
7 14 /hbase/hbase.version
4.9 T 4.9 T /hbase/oldWALs
0 0 /hbase/staging
I tried the options below to clean this up, but with no luck:
Set replication to false on the HBase master and restarted it
Decreased the TTL to 1 second
Confirmed there are no replication peers (checked as shown below)
Restarted the HBase components multiple times
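One way to confirm the "no peers" point is via the HBase shell (a sketch; list_peers is the standard replication command):
hbase shell
# Inside the shell: an empty result means no replication peer is holding oldWALs back.
list_peers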
After I set the properties below, around 1 TB of oldWALs files was deleted:
hbase.cleaner.scan.dir.concurrent.size=1 (default is 0.25)
hbase.oldwals.cleaner.thread.size=10 (default is 2)
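In hbase-site.xml form, that change looks roughly like this (values as above; applied on the master, followed by a restart):
<property>
  <name>hbase.cleaner.scan.dir.concurrent.size</name>
  <!-- default is 0.25 -->
  <value>1</value>
</property>
<property>
  <name>hbase.oldwals.cleaner.thread.size</name>
  <!-- default is 2 -->
  <value>10</value>
</property>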
We have built Ejabberd on AWS EC2 instances and have enabled clustering across the 6 Ejabberd servers in the Tokyo, Frankfurt, and Singapore regions.
The OS, middleware, applications and settings for each EC2 instance are exactly the same.
But currently, the Ejabberd CPUs in the Frankfurt and Singapore regions are overloaded.
The CPU of the Ejabberd nodes in the Tokyo (Japan) region is normal.
Could you please tell me what the suspicious part might be?
You can take a look at the ejabberd log files of the problematic (and the good) nodes; maybe you will find some clue there.
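As a first pass, something like this on each node (log paths assume a default install; adjust them to your deployment):
# Recent activity on this node.
tail -n 200 /var/log/ejabberd/ejabberd.log
# Rough error/crash count, to compare the Tokyo nodes against the overloaded ones.
grep -ciE 'error|crash' /var/log/ejabberd/ejabberd.log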
You can also use the undocumented "ejabberdctl etop" shell command on the problematic nodes. It's similar to "top", but runs inside the Erlang virtual machine that runs ejabberd:
ejabberdctl etop
========================================================================================
ejabberd#localhost 16:00:12
Load: cpu 0 Memory: total 44174 binary 1320
procs 277 processes 5667 code 20489
runq 1 atom 984 ets 5467
Pid Name or Initial Func Time Reds Memory MsgQ Current Function
----------------------------------------------------------------------------------------
<9135.1252.0> caps_requests_cache 2393 1 2816 0 gen_server:loop/7
<9135.932.0> mnesia_recover 480 39 2816 0 gen_server:loop/7
<9135.1118.0> dets:init/2 71 2 5944 0 dets:open_file_loop2
<9135.6.0> prim_file:start/0 63 1 2608 0 prim_file:helper_loo
<9135.1164.0> dets:init/2 56 2 4072 0 dets:open_file_loop2
<9135.818.0> disk_log:init/2 49 2 5984 0 disk_log:loop/1
<9135.1038.0> ejabberd_listener:in 31 2 2840 0 prim_inet:accept0/3
<9135.1213.0> dets:init/2 31 2 5944 0 dets:open_file_loop2
<9135.1255.0> dets:init/2 30 2 5944 0 dets:open_file_loop2
<9135.0.0> init 28 1 3912 0 init:loop/1
========================================================================================
I am unsure if this is the right platform to ask, but hopefully it is :).
I've got a 3-node Ceph setup:
node1
mds.node1 , mgr.node1 , mon.node1 , osd.0 , osd.1 , osd.6
14.2.22
node2
mds.node2 , mon.node2 , osd.2 , osd.3 , osd.7
14.2.22
node3
mds.node3 , mon.node3 , osd.4 , osd.5 , osd.8
14.2.22
For some reason though, when I take one node down, it does not start backfilling/recovery at all. It just reports 3 OSDs down, as below, but does nothing to repair it...
If I run ceph -s I get the output below:
[root@node1 testdir]# ceph -s
cluster:
id: 8932b76b-282b-4385-bee8-5c295af88e74
health: HEALTH_WARN
3 osds down
1 host (3 osds) down
Degraded data redundancy: 30089/90267 objects degraded (33.333%), 200 pgs degraded, 512 pgs undersized
1/3 mons down, quorum node1,node2
services:
mon: 3 daemons, quorum node1,node2 (age 2m), out of quorum: node3
mgr: node1(active, since 48m)
mds: homeFS:1 {0=node1=up:active} 1 up:standby-replay
osd: 9 osds: 6 up (since 2m), 9 in (since 91m)
data:
pools: 4 pools, 512 pgs
objects: 30.09k objects, 144 MiB
usage: 14 GiB used, 346 GiB / 360 GiB avail
pgs: 30089/90267 objects degraded (33.333%)
312 active+undersized
200 active+undersized+degraded
io:
client: 852 B/s rd, 2 op/s rd, 0 op/s wr
[root@node1 testdir]#
The odd thing though: when I boot up my 3rd node again, it does recover and sync. It looks like backfilling works, it just never starts while the node is down...
Is there something that might be causing it?
Update:
What I did notice: if I mark a drive as out, it does recover it... But when the server node is down and the drive is marked as out, it does not recover it at all...
Update 2:
I noticed while experimenting that if the OSD is up but out, it does recover... When the OSD is marked as down, it does not begin to recover at all...
By default, Ceph waits 10 minutes before marking a down OSD as out (mon_osd_down_out_interval). This helps when a server just needs a reboot and returns within 10 minutes; then all is good. If you need a longer maintenance window and you're not sure it will stay under 10 minutes, but the server will eventually return, run ceph osd set noout to prevent unnecessary rebalancing.
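A minimal sketch of that workflow (this assumes a Nautilus-era cluster like the one above; remember to clear the flag afterwards):
# Show the current down->out interval (default 600 seconds = 10 minutes).
ceph config get mon mon_osd_down_out_interval
# Before taking the node down for maintenance, prevent its OSDs from being marked out.
ceph osd set noout
# ... reboot / maintain the node and wait for it to rejoin ...
ceph osd unset noout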
Xoogler in the cloud here. I have a very low-QPS service that serves HTML plus the follow-up resources. So it typically sits idle and then receives something on the order of 20 requests over 5 s, with concurrency well below 10, where the concurrency limit is 80. I observe that clients regularly receive 429s from Cloud Run, typically after periods of service inactivity, even though an instance is still up (so it's not a cold-start problem). This can happen on the first request, but often it's somewhere in the middle of the sequence (i.e. icons and CSS don't load).
The instance is concurrent, responsive and could easily handle the load, but Cloud Run doesn't let it. No other instances are spun up either, although we're not even at the max of 2. This suggests that Cloud Run for some reason estimates that more than 2 instances are needed?
Here's a typical request sequence, redacted from the logs:
... 20 min idle ...
I 2020-03-27T18:21:27.619317Z GET 307 288 B 5 ms
I 2020-03-27T18:21:27.706580Z GET 302 0 B 0 ms
I 2020-03-27T18:21:27.760271Z GET 200 5.83 KiB 5 ms
I 2020-03-27T18:21:27.838066Z GET 200 1.89 KiB 4 ms
I 2020-03-27T18:21:27.882751Z GET 200 1.05 KiB 4 ms
I 2020-03-27T18:21:27.886743Z GET 200 582 B 3 ms
I 2020-03-27T18:21:27.893060Z GET 200 533 B 4 ms
I 2020-03-27T18:21:27.897352Z GET 200 5.35 KiB 4 ms
I 2020-03-27T18:21:27.899086Z GET 200 11.38 KiB 6 ms
I 2020-03-27T18:21:27.905967Z GET 200 22.48 KiB 13 ms
I 2020-03-27T18:21:27.906113Z GET 200 592 B 13 ms
I 2020-03-27T18:21:27.907967Z GET 200 35.08 KiB 14 ms
...500ms...
I 2020-03-27T18:21:28.434846Z GET 200 2.76 MiB 50 ms
I 2020-03-27T18:21:28.465552Z GET 200 2.29 MiB 67 ms <= up to here all resources served from image
...2500ms...
I 2020-03-27T18:21:31.086943Z GET 200 2.95 KiB 706 ms <= IO-bound, talking to backend api
...1600ms...
W 2020-03-27T18:21:32.674973Z GET 429 14 B 0 ms <= !!!
W 2020-03-27T18:21:32.675864Z GET 429 14 B 0 ms <= !!!
W 2020-03-27T18:21:32.676292Z GET 429 14 B 0 ms <= !!!
I 2020-03-27T18:21:32.684265Z GET 200 547 B 6 ms
I 2020-03-27T18:21:32.686695Z GET 200 504 B 9 ms
I 2020-03-27T18:21:32.690580Z GET 200 486 B 12 ms
Conceivably that last group is 6 parallel requests. Why would three be denied and three served? The service is way under capacity. A couple of reloads typically solve the issue.
It really appears to me as if the algorithm vastly overestimates the required resources after a period of inactivity. I'm happy to try a larger max-instances (redeployed with 10 now), but something really seems off with the estimates at the low end of the spectrum. If a max-instances setting of 2 is below what the platform supports, gcloud should probably enforce a higher minimum in the first place.
This is somewhat sad as it impacts people just "trying out" Cloud Run: they observe intermittent errors (partially rendered pages, ...) which are even pinned on the client (4xx), who is certainly not at fault.
Happy to provide more data.
Configuration:
template:
  metadata:
    ...
    annotations:
      ...
      autoscaling.knative.dev/maxScale: '2'
  spec:
    timeoutSeconds: 900
    ...
    containerConcurrency: 80
    containers:
    - ...
      resources:
        limits:
          cpu: 1000m
          memory: 244Mi
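For reference, raising the cap mentioned above was done with something like the following (service name and region are placeholders; the flags are standard gcloud run options):
# Allow up to 10 instances while keeping 80 concurrent requests per instance.
gcloud run services update my-service --region=us-central1 --max-instances=10 --concurrency=80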
This looks like a known issue with Cloud Run; I would recommend starring it to receive notifications and to help expedite its resolution.
Parity doesn't seem to have any documentation on what its console output means. At least none that I've found, which admittedly doesn't mean a whole lot. Can anyone give me a breakdown of the meaning of the following line?
2018-03-09 00:05:12 UTC Syncing #4896969 61ee…bdad 2 blk/s 508 tx/s 16 Mgas/s 645+ 1 Qed #4897616 17/25 peers 4 MiB chain 135 MiB db 42 MiB queue 5 MiB sync RPC: 0 conn, 0 req/s, 182 µs
Thanks.
Why document when you can just read code? (bleh)
2018-03-09 00:05:12 UTC(1) Syncing #4896969(2) 61ee…bdad(3) 2 blk/s(4) 508 tx/s(5) 16 Mgas/s(6) 645+(7) 1(8) Qed #4897616(9) 17/25 peers(10) 4 MiB chain(11) 135 MiB db(12) 42 MiB queue(13) 5 MiB sync(14) RPC: 0 conn(15), 0 req/s(16), 182 µs(17)
1. Timestamp
2. Best block number (latest verified block number)
3. Best block hash
4. Blocks downloaded per second
5. Transactions downloaded per second
6. Millions of gas processed per second
7. Unverified queue size
8. Verified queue size
9. Latest block number
10. Number of active peer nodes/number of total peer nodes
11. Blockchain header cache size
12. Blockchain state cache size
13. Queue cache size
14. Node sync metadata cache size
15. Number of open RPC sessions to your node
16. RPC requests per second
17. Approximate roundtrip ping
The answer to this question is now also included in Parity's FAQ, which provides a comprehensive explanation of the different command line outputs:
What does Parity's command line output mean?
I recently faced a flash overflow problem. After doing some optimization in the code, I saved some flash memory and the software ran successfully. I want to know how much flash memory was saved through my changes. Please let me know how I can check for used flash / available flash memory. Also I want to know how much flash is utilized by a particular function/file.
Below is some information about my development environment:
- AVR microcontroller with 64 KB RAM and 512 KB flash.
- Using FreeRTOS.
- Using the GNU C++ compiler.
- Using an Atmel AVR JTAGICE for programming and debugging.
Please let me know the solution.
Regards,
Jagadeep.
GCC's size program is what you're looking for.
size can be passed the full compiled .elf file. It will, by default, output something like this:
$ size linked-file.elf
text data bss dec hex filename
11228 112 1488 12828 321c linked-file.elf
This is saying:
There are 11228 bytes in the .text "section" of this file. This is generally for functions.
There are 112 bytes of initialized data: global variables in the program with initial values.
There are 1488 bytes of uninitialized data: global variables without initial values.
dec is simply the sum of the previous 3 values: 11228 + 112 + 1488 = 12828.
hex is simply the hexadecimal representation of the dec value: 0x321c == 12828.
For embedded systems, generally dec needs to be smaller than the flash size of your target device (or the available space on the device).
It is generally sufficient to simply watch the dec or text outputs of GCC's size command to monitor the size of your compiled code over time. A large jump in size often indicates a poorly implemented new feature or constexpr values that are not getting compiled away. (Don't forget -ffunction-sections and -fdata-sections; see the sketch below.)
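A rough sketch of the corresponding build flags (file and MCU names are placeholders): each function and data object gets its own section, and the linker then drops the unreferenced ones.
# Compile with per-function/per-data sections, then let the linker garbage-collect them.
avr-g++ -Os -mmcu=atmega2560 -ffunction-sections -fdata-sections -c main.cpp -o main.cpp.o
avr-g++ -Os -mmcu=atmega2560 -Wl,--gc-sections main.cpp.o -o linked-file.elf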
Note: For AVRs, you'll want to use avr-size for checking the linked size of AVR .elf files. avr-size takes an extra argument for the target chip and will automatically calculate the percentage of used flash for your chosen chip.
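For example (the -C and --mcu options ship with the AVR toolchain's patched avr-size; the MCU name here is just a placeholder for your part):
# Prints flash and RAM usage, including percentages, for the chosen MCU.
avr-size -C --mcu=atmega2560 linked-file.elf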
GCC's size also works directly on intermediate object files.
This is particularly useful if you want to check the compiled size of functions.
You should see something like this excerpt:
$ size -A main.cpp.o
main.cpp.o :
section size addr
.group 8 0
.group 8 0
.text 0 0
.data 0 0
.bss 0 0
.text._Z8sendByteh 8 0
.text._ZN3XMC5IOpin7setModeENS0_4ModeE 64 0
.text._ZN7NamSpac6OptionIN5Clock4TimeEEmmEi 76 0
.text.Default_Handler 24 0
.text.HardFault_Handler 16 0
.text.SVC_Handler 16 0
.text.PendSV_Handler 16 0
.text.SysTick_Handler 28 0
.text._Z5errorPKc 8 0
.text._ZN7NamSpac5Motor2goEi 368 0
.text._ZN7NamSpac5Motor3getEv 12 0
.rodata.cst1 1 0
.text.startup.main 632 0
.text._ZN7NamSpac7Program3runEv 380 0
.text._ZN7NamSpac8Position4tickEv 24 0
.text.startup._GLOBAL__sub_I__ZN7NamSpac7displayE 292 0
.init_array 4 0
.bss._ZN5Debug9formatterE 4 0
.rodata._ZL10dispDigits 8 0
.bss.position 4 0
.bss.motorState 4 0
.bss.count 4 0
.rodata._ZL9diameters 20 0
.bss._ZN7NamSpac8diameterE 16 0
.bss._ZN5Debug3pinE 12 0
.bss._ZN7NamSpac7displayE 24 0
.rodata.str1.4 153 0
.rodata._ZL12dispSegments 32 0
.bss._ZL16diametersDisplay 10 0
.bss.loadAggregate 4 0
.bss.startCount 4 0
.bss._ZL15runtimesDisplay 10 0
.bss._ZN7NamSpac7runtimeE 16 0
.bss.startTime 4 0
.rodata._ZL8runtimes 20 0
.comment 111 0
.ARM.attributes 49 0
Total 2494
Please let me know the solution.
Sorry, there's no "the solution"! You've got to go through what's linked into your final ELF and decide whether it was linked by intent or by an unwanted default.
Please let me know how I can check for used flash / available flash memory.
That primarily depends on your actual target hardware platform, so you have to make sure your .text section fits into the flash available there.
Also I want to know how much flash is utilized by a particular function/file.
The nm tool from GNU binutils provides detailed information about any (global) symbol found in an ELF file and the space it occupies in its associated section. You'll just need to grep the results for particular functions/classes/namespaces (best demangled!) to accumulate section-type and symbol-filtered outputs for analysis.
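A minimal sketch of that kind of query (the file name is a placeholder; the options are standard binutils nm / avr-nm flags):
# List symbols with their sizes, sorted by size, with C++ names demangled.
avr-nm --print-size --size-sort --demangle linked-file.elf | tail -n 20
# Filter for one namespace or class, e.g. the NamSpac symbols from the listing above.
avr-nm --print-size --size-sort --demangle linked-file.elf | grep 'NamSpac'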
That's the approach I've been using for a little tool called nmalyzr. Sorry to say, as it stands in the Git repo, it's not really working as intended (I've got working versions that aren't pushed back yet).
In general, it's a good strategy to hunt for code that has #include <iostream> statements (no matter whether std::cout or the like is actually used, static instances are provided!), or unwanted newlib/libstdc++ bindings, e.g. for default exception handling.
Use the size command from binutils on the generated ELF file. As you seem to be using an AVR chip, use avr-size.
To get the size of individual functions, use the nm command from binutils (avr-nm on AVR chips).