What specifies the history length of the Clojure REPL?

Where is it set? I'm using the Lein REPL and the La Clojure REPL, and I can't find where the history length is set.

There is the idea.cycle.buffer.size option in the idea.properties file (located in the bin directory):
---------------------------------------------------------------------
This option controls console cyclic buffer: keeps the console output size not higher than the specified buffer size (Kb)
Older lines are deleted. In order to disable cycle buffer use idea.cycle.buffer.size=disabled
--------------------------------------------------------------------
idea.cycle.buffer.size=1024
Also, in https://stackoverflow.com/a/10793230/151650, Micah mentions the set history-size 10000 setting in the .inputrc file.
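In ~/.inputrc that would be a line like the following (only relevant if your REPL actually goes through readline, e.g. when wrapped with rlwrap):
set history-size 10000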

Related

Displaying large output on terminal window

#include <iostream>
using namespace std;
int main() {
    int a[10000];
    for (int i = 0; i < 10000; i++) {
        a[i] = i;
        cout << a[i] << endl;
    }
}
Suppose this is the code, and I need to see all the output (0-9999) on the terminal screen, but it only displays (9704-9999) at the end.
I want to see all the numbers in the terminal window, but it removes the upper part of the output. I guess I have to change some settings.
Increase the console buffer size. How to do this depends on which terminal you're using. For example, on Windows, conhost.exe is the default console host used by cmd and PowerShell: click the icon in the top-left corner > Properties > Layout and set Screen Buffer Size to a large enough number.
The better solution, though, is to redirect the output to a file, because no one wants to read 10000 lines in the console, and there's no guarantee that the console buffer can hold more than 10000 lines. conhost, for example, supports at most 9999 lines, so you'll lose at least the command you typed and the first output line. Besides, such a long output will often push other commands' output out of the history, which is undesirable.
Either do that from the command line with the redirection operator >
yourapp.exe >output.txt
or save to a file directly from your code.
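For example, a minimal C++ sketch that writes the numbers straight to a file (the filename is arbitrary):

#include <fstream>

int main() {
    std::ofstream out("output.txt");      // write here instead of std::cout
    for (int i = 0; i < 10000; i++) {
        out << i << '\n';
    }
}   // the stream is flushed and closed automatically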

Eclipse CDT & STM32: force predefined program memory

I have an uncommon but, in my eyes, reasonable use case:
I have to build two STM32 firmware images: a boot loader and an application (using the latest Eclipse CDT based IDE from STMicroelectronics, called "STM32CubeIDE").
Because my constraints are mostly low power consumption rather than security, I only need data integrity for the DFU (Device Firmware Upgrade) scenario, and for this I implemented a CRC32 check over the complete FW images. The tricky part is that the firmware itself contains its actual size within a C struct at a fixed offset address 0x200 in code memory (the benefit of this design is that the complete code memory does not have to be transmitted, while the FW is still always protected by the CRC32):
The layout of a firmware is something like this:
<ISR Table> <FW-Header#FixedAddress0x200> <RestFWCode> + CRC32
1. The FW header contains the FW size
2. The complete FW size, which is used by the bootloader to flash the application, is the stored FW size (see 1.) plus the 4 bytes of the appended CRC32
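(In C, such a header might look roughly like this; the struct layout, names and section placement below are only illustrative, not the actual code:)

#include <stdint.h>

/* Hypothetical FW header, placed at offset 0x200 by the linker script */
struct fw_header {
    uint32_t fw_size;       /* actual image size, patched in after the build */
    uint32_t load_address;  /* base address the image is linked for */
};

__attribute__((section(".fw_header"), used))
static const struct fw_header g_fw_header = { 0u, 0x08004000u };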
For my implementation I need to replace a memory area within the "FW Header" with the actual FW size (which is only known after the build process).
For this I made a Python script which patches the binary "*.bin" file, but it seems that Eclipse/GDB uses the ELF file for debugging, which looks much more complicated to patch than the binary image; I found no easy way to replace the actual FW size and append the 4 bytes of the CRC32 there.
Therefore, I thought the easiest way would be to patch the code memory right after the firmware has been loaded by the debugger.
I successfully tested a command-line tool from ST which can manipulate arbitrary memory, even code memory (my code memory starts at 0x08000000, the application is at offset 0x4000, and the FW header at offset 0x200 -> 0x08004200):
ST-LINK_CLI.exe -c SWD -w32 0x08004200 0xAABBCCDD
(see: https://www.st.com/resource/en/user_manual/cd00262073.pdf)
My problem is that I don't know how to trigger this simple EXE call right before the debugger attaches to the MCU... I tried "Debug Configuration" -> "Startup" -> "Run Commands", but without success...
Does anybody know a way how to achieve this?
Running a program before starting a debug session can be done using Eclipse's "Launch group" located under Debug Configurations, e.g. top menu -> Run -> Debug Configurations.
However, before doing that you should go to project properties -> Builders and add your program invocation there: the path to the executable plus its arguments. Make sure the new builder is NOT checked, so that it doesn't run when you build your project. Then go to the Launch Groups described above and create a group that contains the program you've defined in the project Builders section, followed by your regular debug session, which you should already have available in the list.
I am going to recommend a different workflow to achieve the same image format:
<ISR Table> <FW-Header#FixedAddress0x200> <RestFWCode> + CRC32
Once in place, your build process will be:
Build the FW image to produce a raw binary, without a CRC-32 value
Calculate the CRC-32 of the raw binary image with a third-party tool
Insert the calculated CRC-32 into the linker script, then rebuild the FW image
Note: For STM32CubeIDE, you will want to have your own *.elf project that includes the STM32CubeMX project as a static library. Otherwise, STM32CubeMX will overwrite your linker script each time it generates new code.
Since each project tends to have a slightly different linker script, I am going to demonstrate using a .test_crc sector. Insert the following into your *.ld linker script:
.test_crc :
{
    _image_start = .;
    BYTE( 0x31)
    BYTE( 0x32)
    BYTE( 0x33)
    BYTE( 0x34)
    BYTE( 0x35)
    BYTE( 0x36)
    BYTE( 0x37)
    BYTE( 0x38)
    BYTE( 0x39)
    /* FW Image Header */
    LONG( _image_end - _image_start ) /* Size of FW image */
    LONG( _image_start ) /* Base address to load FW image */
    /* Place this at the end of the FW image */
    _image_end = .;
    _IMAGE_CRC = ABSOLUTE(0x0); /* Using CRC-32C (aka zip checksum) */
    /*LONG( (-_IMAGE_CRC) - 1 ) /* Uncomment to append value, comment to calculate new value */
} >FLASH
Add the following as a post-build step in STM32CubeIDE (generates the raw binary image):
arm-none-eabi-objcopy -S -O binary -j .test_crc ${ProjName}.elf ${ProjName}.bin
Now you're ready to test/evaluate the process:
Rebuild your project to generate a *.bin file.
Using a third-party tool, calculate a CRC-32 checksum. I use the 7-Zip command-line interface to generate the CRC-32C value for the *.bin file.
Append the calculated CRC-32C. In the linker script, set the 0x0 in the following _IMAGE_CRC = ABSOLUTE(0x0) to match your calculated value. Uncomment the following:
LONG( (-_IMAGE_CRC) - 1 ) /* Uncomment to append value, comment to calculate new value */
Rebuild your image and run the third-party CRC utility; it should now report 0xFFFFFFFF as the CRC-32C value.
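For example, with the 7-Zip command line the check could look like this (MyProject.bin stands in for your actual ${ProjName}.bin; the h command prints the CRC32 of the file by default):
7z h MyProject.bin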
When you are ready to apply this to your actual FW image, do the following:
Change the post-build step to dump the full binary: arm-none-eabi-objcopy -S -O binary ${ProjName}.elf ${ProjName}.bin
Move _image_start = .; in front of your vector table.
Move the following after your vector table:
/* FW Image Header */
LONG( _image_end - _image_start ) /* Size of FW image */
LONG( _image_start ) /* Base address to load FW image */
Move the following to the end of the last sector:
/* Place this at the end of the FW image */
_image_end = .;
_IMAGE_CRC = ABSOLUTE(0x0); /* Using CRC-32C (aka zip checksum) */
/*LONG( (-_IMAGE_CRC) - 1 ) /* Uncomment to append value, comment to calculate new value */
You may find that you actually do not need the CRC value built into the image and can just append the CRC value to the *.bin, then provide the *.bin to your bootloader. The *.bin will still contain the load address and size of the FW image, plus 4 bytes for the appended CRC value.

FreeType Glyph Metrics Caching of multiple Font sizes

Situation:
I have a project that renders product information onto a given template (custom XML format), then renders and converts it into a custom binary LCD format (steps simplified).
Our customers now want auto-fitting text containers (the customer gives a box of a specific size, and all kinds of strings have to be auto-resized to fit into that container).
For that I have to calculate the width of the string (in FreeType: of each char/glyph) for multiple font sizes (e.g. 100pt doesn't fit, 99pt doesn't fit, 98pt doesn't..., ..., 65pt fits!).
Problem:
The problem is that FreeType takes a lot of time (~20-30 ms) for each auto-fit element, and I have only ~100 ms for my whole application (so when the customer adds 5 more auto-fit elements, it's already guaranteed to exceed ~100 ms).
Attempts:
A self-made font-cache generator which takes a font file and calculates the width of each Unicode character for font sizes from 1pt to 100pt. It then generates C source code from that data, like this:
//
#define COUNT_SIZES 100 // Font-Size 1-100
#define COUNT_CHARS 65536 // Full Unicode Table
int char_sizes[COUNT_SIZES][COUNT_CHARS] =
{
{1,1,2,2,3,1,1,2,2,3,1,2,2,1,2,2,3,1,2,.......// 65536
{2,2,3,3,4,2,1,3,3,4,2,3,3,2,3,3,4,2,3,.......// 65536
{2,3,4,3,5,2,2,4,4,5,2,4,4,2,4,3,5,3,3,.......// 65536
// ...
// 100 font sizes
};
Compiled into a dynamic lib (.so), that is 25 MB in size and takes ~50 ms to "dlload" and ~10 ms to "dlsym" (WAAAAAAY too much!).
The same approach with only the ASCII table (so only 128 of the 65536 characters) compiles into a 58 KB .so file and takes ~500 µs for "dlload" and ~100 µs for "dlsym" (very nice!).
My next attempt would be to integrate the font-cache generator into my project and cache only the glyphs needed for the specific customer (a customer in Europe needs ~500 glyphs, one in Asia (e.g. Traditional Chinese) needs ~2500; these are only examples, I'm not exactly sure, maybe even more are needed).
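Roughly, such an on-demand cache could look like this (a simplified sketch with no error handling; sizes are in points at an assumed 72 dpi):

#include <ft2build.h>
#include FT_FREETYPE_H
#include <map>
#include <utility>

// Caches the horizontal advance (26.6 fixed point, i.e. 1/64 pixel) per (size, charcode).
struct GlyphWidthCache {
    FT_Face face;
    std::map<std::pair<int, FT_ULong>, FT_Pos> cache;

    FT_Pos advance(int sizePt, FT_ULong charcode) {
        auto key = std::make_pair(sizePt, charcode);
        auto it = cache.find(key);
        if (it != cache.end())
            return it->second;                          // cache hit: no FreeType call
        FT_Set_Char_Size(face, 0, sizePt * 64, 72, 72); // size in 1/64 pt, 72 dpi
        FT_Load_Char(face, charcode, FT_LOAD_DEFAULT);
        FT_Pos adv = face->glyph->advance.x;
        cache[key] = adv;
        return adv;
    }
};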
But before I take on that laborious journey :( I wanted to ask whether you know a better way of doing it. A library/project that does just that?
I cannot believe that it's not possible; how else could a browser show lorem ipsum without taking seconds to load? :D
Any idea on how to solve this performance issue?
Any informative link on data caching with extremely fast access to the cache (somewhere <1 ms)?
System Info:
Unix (Ubuntu 16.04), 64-bit
Both x86 AND ARM architectures exist!
I found one possible way using these libraries:
ICU (for unicode)
Freetype (for the Glyphs)
Harfbuzz (for layout)
Github Project:
Harfbuzz-ICU-Freetype
Loose build instructions:
Look at the available options in CMakeLists.txt: option(WITH_XX "DESCRIPT." ON/OFF)
Enable CMake options with -D: cmake -DWITH_ZLIB=ON -DWITH_Harfbuzz=ON ..
mkdir build && cd build && cmake [option [option [...]]] ..
make -j $count_of_cpu_cores && sudo make install
Google for some Harfbuzz Layout tutorials / guides
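Once those are built, measuring the width of a string with HarfBuzz on top of an already sized FreeType face might look roughly like this (a sketch; buffer setup is minimal and error handling is omitted):

#include <hb.h>
#include <hb-ft.h>

// Total horizontal advance of a UTF-8 string, in 1/64 pixel units,
// for an FT_Face that has already been sized with FT_Set_Char_Size.
static long measure_utf8(FT_Face face, const char *text)
{
    hb_font_t   *font = hb_ft_font_create(face, NULL);
    hb_buffer_t *buf  = hb_buffer_create();

    hb_buffer_add_utf8(buf, text, -1, 0, -1);
    hb_buffer_guess_segment_properties(buf);    // direction, script, language
    hb_shape(font, buf, NULL, 0);

    unsigned int count = 0;
    hb_glyph_position_t *pos = hb_buffer_get_glyph_positions(buf, &count);

    long width = 0;
    for (unsigned int i = 0; i < count; ++i)
        width += pos[i].x_advance;

    hb_buffer_destroy(buf);
    hb_font_destroy(font);
    return width;
}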

weka: how to generate libsvm training parameters

I am running libsvm through Weka. Its output accuracy looks good to me, so I am planning to write an SVM model myself. However, Weka didn't generate any training parameters, such as the number of support vectors, so I cannot do anything with it. Searching the web, I found somebody saying it would generate some parameters like the following:
optimization finished, #iter = 27
nu = 0.058475864943863545
obj = -1.871013102744184, rho = -0.19357337828800944
nSV = 9, nBSV = 0
Total nSV = 9
But how come I didn't see any of them? Is there a step that I missed? Please help me. Thanks a lot.
Weka writes the output you mentioned to stderr.
So if you have started weka.sh or weka.bat from a terminal (or "command window" if you are on Windows), you should see that output appear in your terminal window after clicking "classify".
If you want access to this information from scripts, you can redirect the output to a file and read that file in.
Here is how to edit the startup file weka.sh / weka.bat.
Edit this line (it is probably the last line) in order to write log info to a file instead of the terminal window:
java -cp $CP -Xmx8092m weka.gui.GUIChooser 2>>/opt/weka-stable/weka.log &
You can also add a properties file to your home directory for more fine-grained control.
https://weka.wikispaces.com/Properties+file
(You probably can also access information via the Weka Java API somehow, but you did not ask for that)

How to properly debug a binary generated by `go test -c` using GDB?

The go test command has support for the -c flag, described as follows:
-c Compile the test binary to pkg.test but do not run it.
(Where pkg is the last element of the package's import path.)
As far as I understand, generating a binary like this is the way to run it interactively using GDB. However, since the test binary is created by combining the source and test files temporarily in some /tmp/ directory, this is what happens when I run list in gdb:
Loading Go Runtime support.
(gdb) list
42 github.com/<username>/<project>/_test/_testmain.go: No such file or directory.
This means I cannot happily inspect the Go source code in GDB like I'm used to. I know it is possible to force the temporary directory to stay by passing the -work flag to the go test command, but it's still a huge hassle since the binary is not created in that directory, among other things. I was wondering if anyone has found a clean solution to this problem.
Go 1.5 has been released, and there is still no officially sanctioned Go debugger. I haven't had much success using GDB for effectively debugging Go programs or test binaries. However, I have had success using Delve, a non-official debugger that is still undergoing development: https://github.com/derekparker/delve
To run your test code in the debugger, simply install delve:
go get -u github.com/derekparker/delve/cmd/dlv
... and then start the tests in the debugger from within your workspace:
dlv test
From the debugger prompt, you can single-step, set breakpoints, etc.
Give it a whirl!
Unfortunately, this appears to be a known issue that's not going to be fixed. See this discussion:
https://groups.google.com/forum/#!topic/golang-nuts/nIA09gp3eNU
I've seen two solutions to this problem.
1) create a .gdbinit file with a set substitute-path command to
redirect gdb to the actual location of the source. This file could be
generated by the go tool but you'd risk overwriting someone's custom
.gdbinit file and would tie the go tool to gdb which seems like a bad
idea.
2) Replace the source file paths in the executable (which are pointing
to /tmp/...) with the location they reside on disk. This is
straightforward if the real path is shorter than the /tmp/... path.
This would likely require additional support from the compiler /
linker to make this solution more generic.
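The first suggestion corresponds to a .gdbinit line like the following (the paths are only an example; the /tmp path has to match what the binary was actually built with):
set substitute-path /tmp/go-build-example/github.com/<username>/<project> /home/you/go/src/github.com/<username>/<project>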
This spawned the following issue on the Go issue tracker, where the decision ended up being:
https://code.google.com/p/go/issues/detail?id=2881
This is annoying, but it is the least of many annoying possibilities.
As a rule, the go tool should not be scribbling in the source
directories, which might not even be writable, and it shouldn't be
leaving files elsewhere after it exits. There is next to nothing
interesting in _testmain.go. People testing with gdb can break on
testing.Main instead.
Russ
Status: Unfortunate
So, in short, it sucks, and while you can work around it and GDB a test executable, the development team is unlikely to make it as easy as it could be for you.
I'm still new to the golang game, but for what it's worth, basic debugging seems to work.
The list command you're trying to use works as long as you're already stopped at a breakpoint somewhere in your code. For example:
(gdb) b aws.go:54
Breakpoint 1 at 0x61841: file /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.go, line 54.
(gdb) r
Starting program: /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.test
[snip: some warnings about BinaryCache]
Breakpoint 1, github.com/stellar/deliverator/aws.imageIsNewer (latest=0xc2081fe2d0, ami=0xc2081fe3c0, ~r2=false)
at /Users/mat/gocode/src/github.com/stellar/deliverator/aws/aws.go:54
54 layout := "2006-01-02T15:04:05.000Z"
(gdb) list
49 func imageIsNewer(latest *ec2.Image, ami *ec2.Image) bool {
50 if latest == nil {
51 return true
52 }
53
54 layout := "2006-01-02T15:04:05.000Z"
55
56 amiCreationTime, amiErr := time.Parse(layout, *ami.CreationDate)
57 if amiErr != nil {
58 panic(amiErr)
This is just after running the following in the aws subdir of my project:
go test -c
gdb aws.test
As an additional caveat, it does seem very selective about where breakpoints can be placed. It seems like it has to be on an expression, but that conclusion is only from experimentation.
If you're willing to use tools besides GDB, check out godebug. To use it, first install with:
go get github.com/mailgun/godebug
Next, insert a breakpoint somewhere by adding the following statement to your code:
_ = "breakpoint"
Now run your tests with the godebug test command.
godebug test
It supports many of the parameters from the go test command.
-test.bench string
regular expression per path component to select benchmarks to run
-test.benchmem
print memory allocations for benchmarks
-test.benchtime duration
approximate run time for each benchmark (default 1s)
-test.blockprofile string
write a goroutine blocking profile to the named file after execution
-test.blockprofilerate int
if >= 0, calls runtime.SetBlockProfileRate() (default 1)
-test.count n
run tests and benchmarks n times (default 1)
-test.coverprofile string
write a coverage profile to the named file after execution
-test.cpu string
comma-separated list of number of CPUs to use for each test
-test.cpuprofile string
write a cpu profile to the named file during execution
-test.memprofile string
write a memory profile to the named file after execution
-test.memprofilerate int
if >=0, sets runtime.MemProfileRate
-test.outputdir string
directory in which to write profiles
-test.parallel int
maximum test parallelism (default 4)
-test.run string
regular expression to select tests and examples to run
-test.short
run smaller test suite to save time
-test.timeout duration
if positive, sets an aggregate time limit for all tests
-test.trace string
write an execution trace to the named file after execution
-test.v
verbose: print additional output