Colored text console output in Rebol - console-application

In some languages I can put escape codes around text to color it on a Linux console/terminal. However, this does not seem to work in Rebol:
NORMAL: "\e[0m"
RED: "\e[0;31m"
print rejoin ["\e[0;31m" "red text" "\e[0m"]
The above code only produces plain (default-colored) text output:
\e[0;31mred text\e[0m
Can colored text be printed with Rebol on a Linux terminal?

You can use colour codes in much the same way in Rebol/Red.
print "This text is ^[[0;31mred^[[0m text."
#"^[" is the Escape character in Rebol/Red.
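For comparison with the `\e`/`^[` notations above, here are the same sequences in Python, where the escape character is written `\x1b` (a minimal sketch; the colors render only on ANSI-capable terminals):

```python
# "\x1b[0;31m" switches the terminal foreground to red; "\x1b[0m" resets
# all attributes back to the defaults.
RED = "\x1b[0;31m"
RESET = "\x1b[0m"
message = RED + "red text" + RESET
print(message)  # shows "red text" in red on an ANSI-capable terminal
```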
You can, for example, change the prompts in Red with the following codes:
system/console/prompt: "^[[31m^[[5D>>^[(B^[[m "
system/console/result: "^[[32m^[[5D==^[(B^[[m"
In the Ren-C branch of Rebol 3 you can change the prompts with the following (similar) codes:
system/console/prompt: "^[[31m^[[5D>>^[(B^[[m "
system/console/result: "^[[32m^[[5D==^[(B^[[m "

REBOL [
    Title: "colorebol"
    Date: 14-Jul-2013
    File: %colorebol.reb
    Version: 1.0.0
    Purpose: "Enable switching of terminal font colors and backgrounds etc"
    Note: "Includes the clr func for clearing the screen"
]
clr: does [prin "^(page)"]
coloreb: func [
    {Use Fore/Red /Green /Yellow /Blue /Magenta /Cyan /White /Reset and even /Black. Voila! Font-color.
    Similarly Background/Blue etc..., then Style/bright /dim /normal /reset_all and finally Cyclor, which
    randomly picks a font color. It needs some polishing.}
][cyclor print ["this is all i do. that, and provide a help doc-string"] cyclor]
Fore: make object! [
    Colors: ["black" "red" "green" "yellow" "blue" "magenta" "cyan" "white" "reset"]
    BLACK: does [prin "^[[30m"]
    RED: does [prin "^[[31m"]
    GREEN: does [prin "^[[32m"]
    YELLOW: does [prin "^[[33m"]
    BLUE: does [prin "^[[34m"]
    MAGENTA: does [prin "^[[35m"]
    CYAN: does [prin "^[[36m"]
    WHITE: does [prin "^[[37m"]
    RESET: does [prin "^[[39m"]
]
Background: make object! [
    Colors: ["black" "red" "green" "yellow" "blue" "magenta" "cyan" "white" "reset"]
    BLACK: does [prin "^[[40m"]
    RED: does [prin "^[[41m"]
    GREEN: does [prin "^[[42m"]
    YELLOW: does [prin "^[[43m"]
    BLUE: does [prin "^[[44m"]
    MAGENTA: does [prin "^[[45m"]
    CYAN: does [prin "^[[46m"]
    WHITE: does [prin "^[[47m"]
    RESET: does [prin "^[[49m"]
]
Style: make object! [
    Styles: ["bright" "dim" "normal" "reset_all"]
    BRIGHT: does [prin "^[[1m"]
    DIM: does [prin "^[[2m"]
    NORMAL: does [prin "^[[22m"]
    RESET_ALL: does [prin "^[[0m"]
]
cyclor: func [] [fore/(to-word fore/colors/(random/only [2 3 4 5 6 7 8]))]
Put this in your other script files:
do %colorebol.reb
and then use it like so:
col: has [
    "Wrap the colorebol.reb wrappers to reduce visual clutter"
    color /red /green /blue /yellow /cyan /magenta /black /white
][
    if red [color: 'red]
    if green [color: 'green]
    if blue [color: 'blue]
    if yellow [color: 'yellow]
    if cyan [color: 'cyan]
    if magenta [color: 'magenta]
    if black [color: 'black]
    if white [color: 'white]
    if unixy-os? [fore/(color)]
]
;test it:
col/magenta print "magenta" ;(it works). Maybe just modify %colorebol.reb?
I'm not that fluent in Rebol, so I'm sure there is a more concise way, but this has worked very well for me on GNU/Linux. To keep scripts portable I use an OS-detection function, and the colorizing code depends on it.

Related

I want to select an icon from the list, but it always shows some other icon instead of the one at the code point

I am trying to get an icon from the list, but the icon I get in the output is not the one whose code point I entered into the list:
final List<int> points = <int>[
  0xe5d5,
  58837,
];
final Random r = Random();
Icon randomIcon() => Icon(
      IconData(points[r.nextInt(points.length)],
          fontFamily: 'MaterialIcons', matchTextDirection: true),
      color: myColor,
    );
print(r.nextInt(points.length));
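One thing worth checking: the two entries in the list above are the same code point, written once in hex and once in decimal, which a quick sketch confirms:

```python
# 0xe5d5 (hex) and 58837 (decimal) denote the same code point, so the
# list above effectively contains the same icon twice.
points = [0xe5d5, 58837]
print(points[0] == points[1])  # -> True
print(hex(58837))              # -> 0xe5d5
```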

How to remove the white padding/borders that tint::tintHtml adds around cells in a kable table?

I'd like to get rid of the white around each cell. When I knit to html_document there is no such padding.
I'm guessing the tint.css file is responsible for this (https://github.com/eddelbuettel/tint/blob/master/inst/rmarkdown/templates/tintHtml/resources/tint.css).
---
title: "tintHtml() adds padding to kable"
output: tint::tintHtml
---
```{r}
library(kableExtra)
library(magrittr)
```
```{r}
knitr::kable(
  mtcars[1:6, 1:6],
  caption = 'how do I get rid of white padding ?'
) %>%
  row_spec(0, background = "blue", color = "white") %>%
  row_spec(1, background = "green", color = "white")
```
Step one: right-click an element on the web page and select "Inspect Element" to see which rule applies.
Step two: override the default border-spacing property (set by the Bootstrap CSS):
The border-spacing property sets the distance between the borders of
adjacent cells.
Note: This property works only when border-collapse is
separate.
<style>
table {
  border-spacing: unset; /* possible values: inherit, initial, unset or 0 */
}
</style>
Output:
---
title: "tintHtml() adds padding to kable"
output: tint::tintHtml
---
```{r}
library(kableExtra)
library(magrittr)
```
<style>
table {
  border-spacing: unset; /* inherit, initial, unset, 0 */
}
</style>
```{r}
knitr::kable(
  mtcars[1:6, 1:6],
  caption = 'how do I get rid of white padding ?'
) %>%
  row_spec(0, background = "blue", color = "white") %>%
  row_spec(1, background = "green", color = "white")
```

Vision: grabbing the source image URI in each AnnotateImageResponse from batch_annotate_images

Images are uploaded to a Google Cloud Storage bucket. The code below (Python 3) runs fine. I am trying to capture the image_uri for each corresponding AnnotateImageResponse so that I can save the responses in a database together with their image URIs. How can I get the input image_uri for each response? The source image URI is not available in the individual responses. I may be wrong, but I guess that while building requests in the generate_request function I may need to send the image_uri as image_context; however, I cannot find good docs for that. Please help.
Image files uploaded in google buckets are
'39149.7ae80cfb87bb228201e251f3e234ffde.jpg', 'anatomy_e.jpg'
import six
from google.cloud import vision

def generate_request(input_image_uri):
    if isinstance(input_image_uri, six.binary_type):
        input_image_uri = input_image_uri.decode('utf-8')
    source = {'image_uri': input_image_uri}
    image = {'source': source}
    features = [
        {"type_": vision.Feature.Type.LABEL_DETECTION},
        {"type_": vision.Feature.Type.FACE_DETECTION},
        {"type_": vision.Feature.Type.TEXT_DETECTION},
    ]
    request = {"image": image, "features": features}
    return request

def sample_async_batch_annotate_images(input_uri):
    client = vision.ImageAnnotatorClient()
    requests = [
        generate_request(input_uri.format(filename))
        for filename in ['39149.7ae80cfb87bb228201e251f3e234ffde.jpg', 'anatomy_e.jpg']
    ]
    # below response is a BatchAnnotateImagesResponse instance
    response = client.batch_annotate_images(requests=requests)
    for each in response.responses:
        # each response is an AnnotateImageResponse instance
        print("each_response", each)

sample_async_batch_annotate_images('gs://imagevisiontest/{}')
I was a little confused by your sample_async_batch_annotate_images function, since it is named asynchronous but you are not using the asynchronous method of the Vision API. The response from batch_annotate_images does not return a context containing the source image_uri. Also, image_context is only used to refine your image detection, for example by providing language hints or telling the API to search for products.
I suggest using the asynchronous batch annotate method of the API, as its response includes a context (seen at the end of the response) with the source image_uri. The response is also saved to the output_uri specified in the code.
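If you want to stay with the synchronous batch_annotate_images, the responses come back in the same order as the requests, so you can pair each response with the URI you sent (a minimal sketch; the URIs and the string "responses" here are hypothetical stand-ins for real AnnotateImageResponse objects):

```python
# Responses from batch_annotate_images correspond positionally to the
# requests, so zipping the input URI list with response.responses is
# enough to recover the source URI for each result.
uris = ['gs://imagevisiontest/a.jpg', 'gs://imagevisiontest/b.jpg']
responses = ['annotations_for_a', 'annotations_for_b']  # stand-ins
by_uri = dict(zip(uris, responses))
print(by_uri['gs://imagevisiontest/a.jpg'])  # -> annotations_for_a
```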
I processed this image and I renamed it to image_text.jpeg for testing on batch_annotate_images and async_batch_annotate_images:
Code snippet used for batch_annotate_images:
from google.cloud import vision_v1

def sample_batch_annotate_images(
    input_image_uri="gs://your_bucket_here/image_text.jpeg",
):
    client = vision_v1.ImageAnnotatorClient()
    source = {"image_uri": input_image_uri}
    image = {"source": source}
    features = [
        {"type_": vision_v1.Feature.Type.LABEL_DETECTION},
        {"type_": vision_v1.Feature.Type.IMAGE_PROPERTIES},
    ]
    requests = [{"image": image, "features": features}]
    response = client.batch_annotate_images(requests=requests)
    print(response)

sample_batch_annotate_images()
Response from batch_annotate_images:
responses {
  label_annotations {
    mid: "/m/02wzbmj"
    description: "Standing"
    score: 0.9516521096229553
    topicality: 0.9516521096229553
  }
  label_annotations {
    mid: "/m/01mwkf"
    description: "Monochrome"
    score: 0.9407921433448792
    topicality: 0.9407921433448792
  }
  label_annotations {
    mid: "/m/01lynh"
    description: "Stairs"
    score: 0.9399806261062622
    topicality: 0.9399806261062622
  }
  label_annotations {
    mid: "/m/03scnj"
    description: "Line"
    score: 0.9328843951225281
    topicality: 0.9328843951225281
  }
  label_annotations {
    mid: "/m/012yh1"
    description: "Style"
    score: 0.9320641756057739
    topicality: 0.9320641756057739
  }
  label_annotations {
    mid: "/m/03d49p1"
    description: "Monochrome photography"
    score: 0.911144495010376
    topicality: 0.911144495010376
  }
  label_annotations {
    mid: "/m/01g6gs"
    description: "Black-and-white"
    score: 0.9031684994697571
    topicality: 0.9031684994697571
  }
  label_annotations {
    mid: "/m/019sc"
    description: "Black"
    score: 0.8788009881973267
    topicality: 0.8788009881973267
  }
  label_annotations {
    mid: "/m/030zfn"
    description: "Parallel"
    score: 0.8722482919692993
    topicality: 0.8722482919692993
  }
  label_annotations {
    mid: "/m/05wkw"
    description: "Photography"
    score: 0.8370979428291321
    topicality: 0.8370979428291321
  }
  image_properties_annotation {
    dominant_colors {
      colors {
        color {
          red: 195.0
          green: 195.0
          blue: 195.0
        }
        score: 0.4464040696620941
        pixel_fraction: 0.10618651658296585
      }
      colors {
        color {
          red: 117.0
          green: 117.0
          blue: 117.0
        }
        score: 0.16896472871303558
        pixel_fraction: 0.1623961180448532
      }
      colors {
        color {
          red: 13.0
          green: 13.0
          blue: 13.0
        }
        score: 0.12974770367145538
        pixel_fraction: 0.24307478964328766
      }
      colors {
        color {
          red: 162.0
          green: 162.0
          blue: 162.0
        }
        score: 0.11677403748035431
        pixel_fraction: 0.09510618448257446
      }
      colors {
        color {
          red: 89.0
          green: 89.0
          blue: 89.0
        }
        score: 0.08708541840314865
        pixel_fraction: 0.17659279704093933
      }
      colors {
        color {
          red: 225.0
          green: 225.0
          blue: 225.0
        }
        score: 0.05102387070655823
        pixel_fraction: 0.012119113467633724
      }
      colors {
        color {
          red: 64.0
          green: 64.0
          blue: 64.0
        }
        score: 1.7074732738819876e-07
        pixel_fraction: 0.2045244723558426
      }
    }
  }
  crop_hints_annotation {
    crop_hints {
      bounding_poly {
        vertices {
          x: 123
        }
        vertices {
          x: 226
        }
        vertices {
          x: 226
          y: 182
        }
        vertices {
          x: 123
          y: 182
        }
      }
      confidence: 0.4375000298023224
      importance_fraction: 0.794996440410614
    }
  }
}
Code snippet used for async_batch_annotate_images (got the code in Vision API docs):
from google.cloud import vision_v1

def sample_async_batch_annotate_images(
    input_image_uri="gs://your_bucket_here/image_text.jpeg",
    output_uri="gs://your_bucket_here/",
):
    """Perform async batch image annotation."""
    client = vision_v1.ImageAnnotatorClient()
    source = {"image_uri": input_image_uri}
    image = {"source": source}
    features = [
        {"type_": vision_v1.Feature.Type.LABEL_DETECTION},
        {"type_": vision_v1.Feature.Type.IMAGE_PROPERTIES},
    ]
    # Each requests element corresponds to a single image. To annotate more
    # images, create a request element for each image and add it to
    # the array of requests
    requests = [{"image": image, "features": features}]
    gcs_destination = {"uri": output_uri}
    # The max number of responses to output in each JSON file
    batch_size = 2
    output_config = {"gcs_destination": gcs_destination,
                     "batch_size": batch_size}
    operation = client.async_batch_annotate_images(
        requests=requests, output_config=output_config)
    print("Waiting for operation to complete...")
    response = operation.result(90)
    # The output is written to GCS with the provided output_uri as prefix
    gcs_output_uri = response.output_config.gcs_destination.uri
    print("Output written to GCS with prefix: {}".format(gcs_output_uri))

sample_async_batch_annotate_images()
Response from async_batch_annotate_images:
{
  "responses":[
    {
      "labelAnnotations":[
        {
          "mid":"/m/02wzbmj",
          "description":"Standing",
          "score":0.9516521,
          "topicality":0.9516521
        },
        {
          "mid":"/m/01mwkf",
          "description":"Monochrome",
          "score":0.94079214,
          "topicality":0.94079214
        },
        {
          "mid":"/m/01lynh",
          "description":"Stairs",
          "score":0.9399806,
          "topicality":0.9399806
        },
        {
          "mid":"/m/03scnj",
          "description":"Line",
          "score":0.9328844,
          "topicality":0.9328844
        },
        {
          "mid":"/m/012yh1",
          "description":"Style",
          "score":0.9320642,
          "topicality":0.9320642
        },
        {
          "mid":"/m/03d49p1",
          "description":"Monochrome photography",
          "score":0.9111445,
          "topicality":0.9111445
        },
        {
          "mid":"/m/01g6gs",
          "description":"Black-and-white",
          "score":0.9031685,
          "topicality":0.9031685
        },
        {
          "mid":"/m/019sc",
          "description":"Black",
          "score":0.878801,
          "topicality":0.878801
        },
        {
          "mid":"/m/030zfn",
          "description":"Parallel",
          "score":0.8722483,
          "topicality":0.8722483
        },
        {
          "mid":"/m/05wkw",
          "description":"Photography",
          "score":0.83709794,
          "topicality":0.83709794
        }
      ],
      "imagePropertiesAnnotation":{
        "dominantColors":{
          "colors":[
            {
              "color":{
                "red":195,
                "green":195,
                "blue":195
              },
              "score":0.44640407,
              "pixelFraction":0.10618652
            },
            {
              "color":{
                "red":117,
                "green":117,
                "blue":117
              },
              "score":0.16896473,
              "pixelFraction":0.16239612
            },
            {
              "color":{
                "red":13,
                "green":13,
                "blue":13
              },
              "score":0.1297477,
              "pixelFraction":0.24307479
            },
            {
              "color":{
                "red":162,
                "green":162,
                "blue":162
              },
              "score":0.11677404,
              "pixelFraction":0.095106184
            },
            {
              "color":{
                "red":89,
                "green":89,
                "blue":89
              },
              "score":0.08708542,
              "pixelFraction":0.1765928
            },
            {
              "color":{
                "red":225,
                "green":225,
                "blue":225
              },
              "score":0.05102387,
              "pixelFraction":0.0121191135
            },
            {
              "color":{
                "red":64,
                "green":64,
                "blue":64
              },
              "score":1.7074733e-07,
              "pixelFraction":0.20452447
            }
          ]
        }
      },
      "cropHintsAnnotation":{
        "cropHints":[
          {
            "boundingPoly":{
              "vertices":[
                {
                  "x":123
                },
                {
                  "x":226
                },
                {
                  "x":226,
                  "y":182
                },
                {
                  "x":123,
                  "y":182
                }
              ]
            },
            "confidence":0.43750003,
            "importanceFraction":0.79499644
          }
        ]
      },
      "context":{
        "uri":"gs://your_bucket_here/image_text.jpeg"
      }
    }
  ]
}

Need a way to hide marker on apex charts when y-axis has 0 value

I need a way to hide the marker on ApexCharts when the y-axis value is 0.
For instance, this is my dataset:
d = [{x:1, y:0}, {x:2, y:10}]
So when my chart renders, I should see only one marker, at point (2, 10).
If this makes sense, can you please help me with this one?
You can't set a marker's size dynamically based on the y-value, but you can use the markers.discrete option:
markers: {
  size: 6,
  discrete: [
    {
      seriesIndex: 0,
      dataPointIndex: 0,
      size: 0
    },
    {
      seriesIndex: 0,
      dataPointIndex: 1,
      fillColor: '#419EF7',
      strokeColor: '#fff',
      size: 6
    }
  ]
}
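If the zero points are not known in advance, the discrete array can be generated from the data instead of written by hand. The filtering logic is sketched here in Python for brevity (the resulting structure would be serialized into the JavaScript chart options; the variable names are illustrative):

```python
# Build a markers.discrete-style list that hides (size 0) every data
# point whose y-value is 0; all other points keep the default size.
data = [{"x": 1, "y": 0}, {"x": 2, "y": 10}]
discrete = [
    {"seriesIndex": 0, "dataPointIndex": i, "size": 0}
    for i, point in enumerate(data)
    if point["y"] == 0
]
print(discrete)  # -> [{'seriesIndex': 0, 'dataPointIndex': 0, 'size': 0}]
```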
Docs

Kivy: How to use toggle button to lock scaling and translation of an image in scatter layout?

This probably has a very simple solution, but I'm new to Kivy and I can't figure it out:
I use a scatter layout to hold an image in the application so I can scale and move it. However, I would like to implement an option to lock scaling and translation with a toggle button, so that they become available again when I "unpress" the button.
No errors are returned, but scaling still works.
I'm working on this combining Python and a KV file.
py code:
class ScreenStartMapping(Screen):
    image_path = StringProperty('')
    do_scale = BooleanProperty()

    def mapLine(self, *args):
        if args[1] == "down":
            print args[1]
            self.do_scale = False
            print self.do_scale
        elif args[1] == "normal":
            print args[1]
            self.do_scale = True
            print self.do_scale
and the KV file defining the entire screen:
<ScreenStartMapping>:
    GridLayout:
        id: gl
        rows: 2
        ScatterLayout:
            id: sl
            do_rotation: False
            do_scale: toggleMappingMode
            auto_bring_to_front: False
            Image:
                source: root.image_path
                canvas:
                    Line:
        Label:
            size_hint_x: 1
            size_hint_y: 0.1
            canvas.before:
                Color:
                    rgb: .1, .1, .1
                Rectangle:
                    pos: self.pos
                    size: (root.size[0], root.size[1]/10)
    Button:
        background_color: (1.0, 0.0, 0.0, 0.5)
        pos: self.pos
        size: (root.size[0]/5, root.size[1]/10)
        text: "Back"
        font_name: "C:\WINDOWS\Fonts\segoescb.ttf"
        on_press: root.manager.current = "screen1"
    ToggleButton:
        id: toggleMappingMode
        background_color: (0.0, 1.0, 0.0, 0.5)
        pos: (root.size[0]/5, self.pos[1])
        size: (root.size[0]/5, root.size[1]/10)
        text: "Draw line"
        font_name: "C:\WINDOWS\Fonts\segoescb.ttf"
        on_state: root.mapLine(*args)
Thank you very much for your help; I would also appreciate any suggestions on how to make this better.
So, problem solved. As I thought, it was a very basic beginner problem: I didn't know how exactly to access a property of a second widget from the first one.
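For reference, the usual KV idiom for that kind of cross-widget binding (a sketch of the sort of fix described above, not necessarily the asker's exact code) is to reference the other widget by its id and derive a boolean expression from its property, which KV then keeps updated automatically:

```
ScatterLayout:
    # Scaling/translation are enabled only while the toggle button is
    # unpressed ("normal" state); pressing it locks both.
    do_scale: toggleMappingMode.state == "normal"
    do_translation: toggleMappingMode.state == "normal"
```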