I'm trying to add transparency to my images in Ren'Py with screenshot(). Currently I'm using Atom and Visual Studio Code.
I can't get anywhere from memory alone; I've done this before, but I lost all my work earlier this year. I'm attempting to make a character generator for both Ren'Py and RPG Maker.
This is the code I have.
define config.screenshot_crop = (0, 0, sizemex, sizemey)
The idea is to add transparency to the screenshot images so that only the sprites are captured. I can set the sizes, but that's it for now. I just need an idea of what these settings do.
Is there any way to apply or have custom frames (images) around specific X windows?
For example, in xfwm4 and fvwm (window managers) it's possible to give a specific window its own decoration built from different images, one at the bottom, another at the top, etc. There are example decoration themes for both xfwm4 and fvwm.
Obviously, if it were that easy I'd just use one of them; however, I do think a single standalone program could handle it instead of requiring a change of the whole window manager.
I'm currently using dwm, where I can change the colour and thickness of the border, and that's it. If I were better at C I could add a rule to draw images around X windows with a specific WM_CLASS, but that's too much for me right now, so any help is really appreciated.
An alternative solution would be to draw a single image (from a file), bigger than the X window, behind it and make it follow the X window's position, and maybe its size as well (that's harder, so it's not strictly necessary).
I started writing a C++ program to do that, but it may take too much time since C++ is still a new tool for me; I'm also looking at how xfwm4 handles this.
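Here is the kind of thing I had in mind for the fallback approach, as a very rough Xlib sketch. The target window id is passed on the command line (from xwininfo or xdotool), a plain black background stands in for the real image, and the 16-pixel margin is just a guess:

// compile with: g++ frame.cpp -lX11
#include <X11/Xlib.h>
#include <cstdlib>

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    Window root   = DefaultRootWindow(dpy);
    Window target = (Window)strtoul(argv[1], NULL, 0);  // window id from xwininfo/xdotool
    const int margin = 16;                              // how far the frame sticks out

    // An override-redirect window so dwm doesn't try to manage or tile it.
    XSetWindowAttributes attrs;
    attrs.override_redirect = True;
    attrs.background_pixel  = BlackPixel(dpy, DefaultScreen(dpy)); // stand-in for the image
    Window frame = XCreateWindow(dpy, root, 0, 0, 1, 1, 0,
                                 CopyFromParent, InputOutput, (Visual *)CopyFromParent,
                                 CWOverrideRedirect | CWBackPixel, &attrs);
    XMapWindow(dpy, frame);

    // Follow the target: reposition the frame whenever the target window is reconfigured.
    // (Doing one XTranslateCoordinates/XMoveResizeWindow pass here as well would place it
    // immediately instead of waiting for the first move.)
    XSelectInput(dpy, target, StructureNotifyMask);
    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == ConfigureNotify && ev.xconfigure.window == target) {
            int x, y;
            Window child;
            XWindowAttributes wa;
            XGetWindowAttributes(dpy, target, &wa);
            XTranslateCoordinates(dpy, target, root, 0, 0, &x, &y, &child);
            XMoveResizeWindow(dpy, frame, x - margin, y - margin,
                              wa.width + 2 * margin, wa.height + 2 * margin);
            XLowerWindow(dpy, frame);                    // keep the frame behind the target
            XFlush(dpy);
        }
    }
}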
For university, we need to make a game in Unity that is controlled with an Arduino. My idea was a hacking game where the Arduino acts as the 'hacking device' when hacking something in the game. The Arduino would have a screen, and on that screen would be a basic command-line interface that lets me input simple commands to 'hack', but I've been having trouble with displaying text and clearing it.
I've been able to use Unity to send typed characters to the display, as well as a backspace function (pressing backspace removes the last character of the string).
I first had an issue with clearing the old text when writing (calling tft.print doesn't clear any previous text). I was using fillScreen, which was slow. Then I found out setTextColor has a second argument that sets the text's background color; setting it to black essentially erases whatever was underneath as the new text is drawn.
This made it update pretty much instantly when writing to the screen, but I now had a new issue: backspace no longer worked.
My understanding is that when a character is removed, its pixels aren't repainted by setTextColor, leaving it on the screen until a restart or fillScreen is called.
I'm not really sure how to solve this, and Google searches turn up little to no help.
Here's my code for updating the text:
void updateString(char c){
  tft.setTextColor(WHITE, BLACK);
  if(c != '<'){
    //Add new character to end of string
    str.concat(String(c));
  }
  else if(c == '<' && str.length() > 2){
    //Remove last character in string
    str.remove(str.length() - 1);
  }
  //Set cursor back to 0,0
  tft.setCursor(0, 0);
  //Display text
  tft.print(str);
}
I'd appreciate any help.
Maybe use a function similar to tft.clear() each time you refresh the screen, or draw a filled rectangle in the background color over the text so it looks like it has been erased, then rewrite the text.
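A rough sketch of that second suggestion, assuming the text sits in a row at the top of the screen and BLACK is the background color (the 16-pixel row height is a guess):

// Paint over the old text with a background-colored rectangle, then reprint the string.
tft.fillRect(0, 0, tft.width(), 16, BLACK);
tft.setCursor(0, 0);
tft.print(str);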
Sounds like you are using proportionally-spaced fonts instead of the original classic fonts that ship with Adafruit_GFX. Historically, when using the default classic font you could set the background color option of the text to the same color as the background of the screen (usually black). This would overwrite the old screen contents with new data, and it works because characters in the classic font are a uniform size. When using proportionally-spaced fonts, the background color option for the text is disabled by design.
This was presumably done because of memory requirements and speed on slower AVRs. Regardless, when using proportionally-spaced fonts the background color feature won't work, because the non-uniformly sized characters overlap the regions of adjacent characters.
There are two workarounds for this. Adafruit says that in order to replace previously drawn text when using custom fonts you can either:
1. Use getTextBounds() to determine the smallest rectangle that encloses a string, then erase that area using fillRect() prior to drawing the new text; OR
2. Create an offscreen bitmap using GFXcanvas1 for a fixed-size area, draw the text into the GFXcanvas1 object, and then copy it to the screen using drawBitmap().
Note that options 1 and 2 above are alternatives: use one or the other, not both. The second method requires more memory. The first method isn't perfect and produces a small amount of flicker, but it is generally acceptable if coded carefully.
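To make these concrete, here is a minimal, untested sketch of both options, assuming the same global tft and str objects and the WHITE/BLACK constants from your code, with a custom font already selected via setFont(); the cursor positions and canvas size are placeholders:

// Option 1: measure the previously drawn string, erase that rectangle, then reprint.
void redrawText(const String &oldStr, const String &newStr) {
  int16_t bx, by;    // top-left corner of the old text's bounding box
  uint16_t bw, bh;   // width and height of that box
  // With custom fonts the cursor y is the text baseline, so (0, 20) is just a placeholder.
  tft.getTextBounds(oldStr, 0, 20, &bx, &by, &bw, &bh);
  tft.fillRect(bx, by, bw, bh, BLACK);   // wipe only the area the old text occupied
  tft.setCursor(0, 20);
  tft.print(newStr);
}

// Option 2: draw into an off-screen 1-bit canvas, then blit the whole area in one call.
GFXcanvas1 canvas(160, 32);              // fixed-size text area; dimensions are a guess
void redrawTextCanvas(const String &newStr) {
  canvas.fillScreen(0);                  // clear the canvas
  // Call canvas.setFont(...) with the same custom font; custom fonts draw upward from
  // the baseline, so the cursor y sits near the bottom of the canvas.
  canvas.setCursor(0, canvas.height() - 4);
  canvas.print(newStr);
  // Set bits become WHITE and cleared bits become BLACK, so stale pixels are overwritten too.
  tft.drawBitmap(0, 0, canvas.getBuffer(), canvas.width(), canvas.height(), WHITE, BLACK);
}

Note that the second version needs roughly 160 * 32 / 8 = 640 bytes of RAM for the canvas buffer, which is why it is the more memory-hungry option.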
I hope that I have understood the nature of your problem and answered it in a satisfactory manner. If nothing else, at least now you know why custom fonts will not work with the so-called background color feature that works with the original 'classic' Adafruit fonts.
Nikki Cooper
I am currently working on an engine that involves creating 3D objects such as boxes and spheres. My next task is to create 3D text inside it as an object. I am using Visual Studio 2012, OpenGL, Qt, FreeType2, and FTGL in C++ on a Windows 7 computer.
I have been able to import everything fine and get some things to render, but not the way I would like. If I create a box, the text jumps to the box I created. If I create another one, it jumps to that. If I move the camera, it jumps to the center of the scene. On top of that, all the letters in my string appear right on top of each other and I cannot position them.
What I want is for the text to sit at whatever 3D location I give it, to stay on screen (so multiple strings can render at once), and for the letters to actually form words. I have also tried using glRasterPos3f to move it, but it still stays latched onto whatever object it is attached to. I have tried a lot of things to get a decent result but keep running into the same thing. Please give me an idea of what the problem could be. Here is a sample of how I am rendering it:
glPushMatrix();
FTExtrudeFont* font = new FTExtrudeFont("Arial.ttf");
font->Depth(.5);
font->FaceSize(10);
glRasterPos3f(10,10,10);
font->Render("Text");
glPopMatrix();
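For comparison, this is the translate-based placement I have been considering instead. I'm not sure it is the right approach; my assumption is that FTExtrudeFont produces real geometry, so it should follow the modelview matrix rather than the raster position:

// Load the font once (not every frame) and position the string with the modelview matrix.
static FTExtrudeFont font("Arial.ttf");
if (!font.Error()) {
    font.Depth(0.5f);
    font.FaceSize(10);
    glPushMatrix();
    glTranslatef(10.0f, 10.0f, 10.0f);   // the 3D location I want this string at
    font.Render("Text");
    glPopMatrix();
}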
I have modified NeHe's terrain tutorial so that it generates a terrain using Perlin noise instead of loading the static .raw file that comes with the tutorial. I want to specify the parameters for the Perlin noise (frequency, amplitude, number of octaves) before rendering the terrain. In fact, I just want to create a window that takes those three parameters and then dies; I don't need anything else. I normally do my interfaces in GLUT, and I just want this particular app to run this way.
How can I do that? What should be modified in the NeHe project? As I understand it, MFC doesn't have a built-in input box?
I am not familiar with the tutorial, but if you want the input box not to be seen, only to process the parameters, could you just create the window as invisible?
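If a real dialog turns out to be more trouble than it's worth, a plain console prompt before any window is created would do the same job. Here is a minimal sketch, assuming the project is built as (or attached to) a console application; the variable names are placeholders for wherever your Perlin generator reads its parameters:

#include <iostream>

// Placeholder globals: wire these into your Perlin terrain generation however it expects them.
float g_frequency = 1.0f;
float g_amplitude = 1.0f;
int   g_octaves   = 4;

int main(int argc, char **argv) {
    // Ask for the three parameters before any window exists.
    std::cout << "frequency amplitude octaves: ";
    std::cin >> g_frequency >> g_amplitude >> g_octaves;

    // ...then run the usual NeHe/GLUT setup, building the terrain from these values.
    // glutInit(&argc, argv); etc.
    return 0;
}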
I want to control a button using hand motions. For example, in a video frame I create a circle-shaped button. Then, when I move my hand over that circle, I want an mp3 file to start playing, and when I move my hand to another circle, the mp3 stops playing. How can I do this?
I am working on Windows 7 and I use Microsoft Visual Studio 2008.
You have countless options for doing this. Probably the easiest is to do background segmentation and then check whether anything that is not background overlaps the button area. It would respond to any part of your body, not only your hands, but that might not be an issue.
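A rough sketch of that idea, assuming OpenCV 2.x (to match Visual Studio 2008); the button position/radius and the "pressed" threshold are placeholder values you would tune for your camera:

#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);
    cv::BackgroundSubtractorMOG2 bg;              // learns the static background over time
    cv::Point buttonCenter(100, 100);
    int buttonRadius = 40;

    cv::Mat frame, fgMask;
    while (cap.read(frame)) {
        bg(frame, fgMask);                        // foreground mask: anything that isn't background
        cv::threshold(fgMask, fgMask, 200, 255, cv::THRESH_BINARY);  // drop shadow pixels (127)

        // Count how many foreground pixels fall inside the button circle.
        cv::Mat circleMask = cv::Mat::zeros(fgMask.size(), CV_8UC1);
        cv::circle(circleMask, buttonCenter, buttonRadius, cv::Scalar(255), -1);
        cv::Mat overlap = fgMask & circleMask;
        bool pressed = cv::countNonZero(overlap) > 300;   // threshold is an assumption

        // pressed == true is where you would start the mp3; a second circle checked the
        // same way could stop it.
        cv::circle(frame, buttonCenter, buttonRadius,
                   pressed ? cv::Scalar(0, 255, 0) : cv::Scalar(0, 0, 255), 2);
        cv::imshow("frame", frame);
        if (cv::waitKey(30) == 27) break;         // Esc to quit
    }
    return 0;
}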
Another option would be to detect and track your hands based on skin color. For this you need to obtain a histogram of the skin color and then use it with the CamShift tracker. A nice way to obtain the skin color at runtime is to run a face detector (Haar cascade) and take the color from the detected region.
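A rough sketch of that skin-color / CamShift idea, again assuming OpenCV 2.x; the cascade file is the stock frontal-face model shipped with OpenCV, and its path is an assumption:

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);
    cv::CascadeClassifier face;
    face.load("haarcascade_frontalface_default.xml");

    cv::Mat frame, gray, hsv, hue, mask, hist, backproj;
    cv::Rect track;                                 // region currently being tracked
    int histSize = 16;
    float hueRange[] = {0, 180};
    const float *ranges = hueRange;

    while (cap.read(frame)) {
        cv::cvtColor(frame, hsv, CV_BGR2HSV);
        // Keep only reasonably saturated, bright pixels so hue values are meaningful.
        cv::inRange(hsv, cv::Scalar(0, 60, 32), cv::Scalar(180, 255, 255), mask);
        hue.create(hsv.size(), CV_8UC1);
        int ch[] = {0, 0};
        cv::mixChannels(&hsv, 1, &hue, 1, ch, 1);   // copy the hue channel

        if (hist.empty()) {
            // Bootstrap: grab the skin color from a detected face.
            cv::cvtColor(frame, gray, CV_BGR2GRAY);
            std::vector<cv::Rect> faces;
            face.detectMultiScale(gray, faces);
            if (faces.empty()) continue;
            track = faces[0];
            cv::Mat roi(hue, track), maskRoi(mask, track);
            cv::calcHist(&roi, 1, 0, maskRoi, hist, 1, &histSize, &ranges);
            cv::normalize(hist, hist, 0, 255, cv::NORM_MINMAX);
        }

        if (track.area() <= 1) {
            hist.release();                         // lost the target: re-detect next frame
            continue;
        }

        // Back-project the histogram and let CamShift follow the skin-colored blob.
        cv::calcBackProject(&hue, 1, 0, hist, backproj, &ranges);
        backproj &= mask;
        cv::RotatedRect box = cv::CamShift(backproj, track,
            cv::TermCriteria(cv::TermCriteria::EPS | cv::TermCriteria::COUNT, 10, 1));

        // box.center is the tracked position; compare it against your button circles
        // to decide when to start or stop the mp3.
        cv::ellipse(frame, box, cv::Scalar(0, 255, 0), 2);
        cv::imshow("track", frame);
        if (cv::waitKey(30) == 27) break;           // Esc to quit
    }
    return 0;
}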
I'm sure there are hundreds of additional ways to do it.
Also, if you can get your hands on a Kinect camera it could help a lot. Check OpenNI and the MS Kinect SDK to see what they enable you to do.
The first thing you will have to do is create a haar cascade xml file and train it on human hands.