UE4 - I want to make an in-game camera/photography mode and save the pictures - C++

For my game project I want to include a "camera mode".
This means that at the press of a button, the current camera view is saved to an in-game gallery.
After some searching, I only found ways to save a screenshot to disk (BP for saving a screenshot, semi-functional), but I want the picture to remain available in my game, maybe as a texture or in a struct, so I can use it later, say in an in-world picture frame or newspaper.
I did try SceneCaptureComponent2D, but I never really got it working, and searching online gave no satisfactory results.
By the way, I'm fine with C++; I'm just building my current prototype in BP for faster testing and iteration.
I hope you can help me.

I would have commented on your question, but I do not have enough reputation to do so; what I'm providing here is more a hint at how you could do it than a straight solution to your problem.
Check out this repository, which shows how to capture images with C++ from a running application; it is actually meant for recording data.
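In plain C++ terms, the "gallery" part of the question is just owning the captured pixel data yourself. A minimal sketch with hypothetical Photo/Gallery types (the engine-side capture, e.g. a USceneCaptureComponent2D rendering into a UTextureRenderTarget2D, would be what fills Photo::rgba):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical in-memory "photo": raw RGBA pixels plus dimensions.
struct Photo {
    int width = 0;
    int height = 0;
    std::vector<uint8_t> rgba;  // width * height * 4 bytes
};

// Hypothetical gallery: owns every photo taken during the session,
// so the pictures stay available after the screenshot is taken.
class Gallery {
public:
    void Add(Photo photo) { photos_.push_back(std::move(photo)); }
    std::size_t Count() const { return photos_.size(); }
    const Photo& Get(std::size_t i) const { return photos_.at(i); }
private:
    std::vector<Photo> photos_;
};
```

From a buffer like this you could later rebuild an engine texture (in UE4, a UTexture2D) whenever a picture frame or newspaper needs to display it.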

Related

Moving cursor using code and opening folder

This is a very random and maybe a bit strange question that I thought of at 3 AM. I was thinking about how code could make my day-to-day life easier. Every morning I wake up, open Chrome to the Facebook conversation with my boyfriend, and write "good morning". And that's when I thought of this hypothetical project (just out of curiosity, I wouldn't actually use it, haha): making a program that I can just run that does all of this for me.
I could have an HTML file that redirects to the Facebook link (https://www.facebook.com/messages/t/boyfriend_name). But how would I make the code open this file, then move the mouse to where it's supposed to go (the white area where the user inputs the text), then insert the text, then press send?
I'm not asking for any code help, as I can imagine that is too much, but my question is: would this be achievable in C++? (This is what we've been studying at school so far.) If not, what language should I use? Is the idea achievable without a vast knowledge of computer science? If so, do you have any sources on opening files using C++, moving the cursor, etc.?
Note: the OS this would run on is Windows 10.
What you want is possible using AutoIt, and to drive it from C++ you can try AutoItX. With AutoIt it's possible to detect windows, move the mouse and insert text, although a web page is a black box to it, so you'll have to rely on relative pixel coordinates (which might not be very robust).

DirectShow-IMediaDet only extracts the first frame

I've run into a weird effect concerning DirectShow and splitters.
I wasn't able to track it down, so maybe somebody can help?
In my application I extract a few frames from movies.
I do this via DirectShow's IMediaDet interface.
(BTW: it's XP SP3, DirectShow 9.0.)
Everything works fine as long as no media splitter is involved
(a splitter is involved for mp4, mkv, flv, ...).
For codecs I use the K-Lite distribution.
For some time it has shipped two splitters, LAV and Haali;
the Gabest splitter was removed a while ago.
But only with the latter (Haali) activated did everything work fine!
OK - the effect. It's about IMediaDet::GetBitmapBits:
with some (most) media files that use splitters, it always extracts the very first frame.
And with some other files with splitters, this effect only occurs when I
have called get_StreamLength beforehand (although GetBitmapBits should switch
back to bitmap grab mode, as the documentation says).
As said, everything works fine as long as no splitter is involved (mpg, wmv, ...).
Has anyone experienced a similar effect?
Where might the bug be: in DirectShow, in the splitters, in my code?
Any help appreciated ... :-)
Your assumption is not quite correct. IMediaDet::GetBitmapBits builds a filter graph internally and attempts to navigate playback to the position of interest, then starts streaming to get a valid image into its Sample Grabber filter ("BitBucket").
It does not matter whether the splitter is a separate filter or combined with the source. The important part is the graph's ability to seek; a faulty filter might be an obstacle there, even though a snapshot is still taken. This is the symptom you are describing.
For instance, the internal graph might look like this:
there is a dedicated multiplexer there, and the snapshot is taken from the correct position.

NRRD volume offset in 2D renderer

I'm looking into a project of making a scan viewer much like Slice:Drop from the XTK examples. I'm very new to XTK and web programming in general.
I'm using an example file of a brain, but in both Slice:Drop and my own implementation (mostly borrowed from "Lesson 13: I want 2D!"), the scan comes out offset, so the image is cut off. Does anyone know how I can fix this? I was looking for a translate method or something similar in the volume class, but haven't found anything useful so far.
I've tried loading this file in a couple of desktop scan-viewing packages, and it loads fine there (meaning centered, not cut off), so I'm wondering where the problem could be.
Does anyone have any idea what might be going on here?
Cheers
Edit: I just tried the same file in .nii format and it loaded fine, with the scans all centered. Maybe something strange is going on with the NRRD file ...

How do I find a pattern on the screen?

I thought I would try out making a bot to play a game on a website for me. How can I read the pixels of the screen? My best idea so far is basically:
Take a screenshot
Scan the screenshot for other images (bit comparison of one row of the image?)
Click somewhere on the screen depending on which image was found
Loop a few times per second
If this is the best/easiest way to do it: how do I do these things? I know some C++, but I've only worked with CLI programs and text/file I/O so far. If you can think of a better way, please tell me.
Using something like C# you can take a screenshot of the screen and convert the resulting image to a Bitmap to do this, but it seems to me that you'd be better off looking at the HTML page on the wire (look up a tutorial on how HTTP works, or run Wireshark to see how the page is transmitted). That will almost certainly be easier for you.
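For what it's worth, the "scan the screenshot for other images" step in the question amounts to 2D template matching. A minimal exhaustive-search sketch over raw grayscale buffers, with hypothetical Image/findPattern names (a real bot would tolerate noise and compression artifacts, e.g. via OpenCV's matchTemplate, and would need platform-specific code to actually grab the screen):

```cpp
#include <cstdint>
#include <vector>

// A tiny grayscale image: row-major pixels.
struct Image {
    int w = 0, h = 0;
    std::vector<uint8_t> px;  // size w * h
    uint8_t at(int x, int y) const { return px[y * w + x]; }
};

// Exhaustive template match: reports the top-left position of the first
// exact occurrence of `pattern` inside `screen`, or returns false.
bool findPattern(const Image& screen, const Image& pattern,
                 int& outX, int& outY) {
    for (int y = 0; y + pattern.h <= screen.h; ++y)
        for (int x = 0; x + pattern.w <= screen.w; ++x) {
            bool match = true;
            for (int py = 0; match && py < pattern.h; ++py)
                for (int qx = 0; qx < pattern.w; ++qx)
                    if (screen.at(x + qx, y + py) != pattern.at(qx, py)) {
                        match = false;
                        break;
                    }
            if (match) { outX = x; outY = y; return true; }
        }
    return false;
}
```

This is O(screen area x pattern area), which is fine at a few iterations per second on one window, but it is exact matching only; any anti-aliasing or scaling on the page will defeat it.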

Help with an algorithm to dynamically update a text display

First, some backstory:
I'm making what may amount to a "roguelike" game so I can exercise some interesting ideas I've got floating around in my head. The gameplay isn't going to be a dungeon crawl, but in any case the display will be done in a similar fashion, with simple ASCII characters.
Since this is a self-exercise, I endeavor to code most of it myself.
Eventually I'd like the game to be runnable on arbitrarily large game worlds (to the point where I envision having the game networked and spanning many monitors in a computer lab).
Right now I've got some code that can read and write arbitrary sections of a text console, and a simple partitioning system set up so that I can path-find efficiently.
And now the question:
I've run some benchmarks, and the biggest bottleneck is the redrawing of the text consoles.
A game world that large will require an intelligent update of the display; I don't want to re-push my entire game buffer every frame. I need some pointers on how to set things up so that only the sections of the game that have been updated get redrawn (and not just individual characters, as I've got now).
I've been manipulating the Windows console via windows.h, but I'd also be interested in getting it to run on Linux machines over a PuTTY client connected to the server.
I've tried adapting some video-processing routines, since there is nearly a 1:1 ratio between pixels and characters, but I had no luck.
Really, I just want a simple explanation of some of the principles behind it, but some example (pseudo)code would be nice too.
Use curses, or if you need to do it yourself, read about the VTnnn control codes. Both of these work on Windows and on *nix terminals and consoles. You can also consult the NetHack source code for hints. This will let you change characters on the screen only where changes have happened.
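The do-it-yourself version of that idea can be sketched in a few lines: keep the screen you last drew, diff it cell by cell against the screen you want, and emit a VT100 cursor move ("\x1b[row;colH", 1-based) plus the new character only for cells that changed. The names here are hypothetical; curses does the same thing for you, plus smarter batching:

```cpp
#include <sstream>
#include <string>
#include <vector>

// One screen: rows of equal-width character cells.
using Screen = std::vector<std::string>;

// Emit VT100 escape sequences that turn `shown` into `desired`,
// touching only the cells that actually changed. Assumes both
// screens have identical dimensions.
std::string diffRedraw(const Screen& shown, const Screen& desired) {
    std::ostringstream out;
    for (std::size_t y = 0; y < desired.size(); ++y)
        for (std::size_t x = 0; x < desired[y].size(); ++x)
            if (shown[y][x] != desired[y][x])
                out << "\x1b[" << (y + 1) << ';' << (x + 1) << 'H'
                    << desired[y][x];
    return out.str();
}
```

An obvious refinement is to merge runs of adjacent changed cells into one cursor move followed by the whole run, since the escape sequence is longer than the character it positions.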
I won't claim to fully understand this, but I believe it is close to the problem behind James Gosling's legendary Gosling Emacs redisplay code. See his paper, appropriately titled "A Redisplay Algorithm", and also the general string-to-string correction problem.
Having a game world that large will require an intelligent update of the display. I don't want to have to re-push my entire game buffer every frame... I need some pointers on how to set it up so that it only draws sections of the game that have been updated (and not just individual characters as I've got now).
The size of the game world isn't really relevant: all you need to do is work out the visible area for each client and send that data. If you have a typical 80x25 console display, you're going to send just 2 or 3 kilobytes of data each time, even with colour codes and the like added in. This is typical of most online games of this nature: update what the person can see, not everything in the world.
If you want to experiment with ways to cut down what you send, feel free to do that for learning purposes, but we're about 10 years past the point where updating a console display in something approaching real time was inefficient, and it would be a shame to waste time fixing a problem that doesn't need fixing. Note that the paper linked above gives an O(ND) solution, whereas simply sending the entire console is half of O(N), where N is defined as the sum of the lengths of A and B, and D as the size of the minimum edit script.
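The "update what the person can see" step is just slicing the client's window out of the world grid each frame. A small sketch with a hypothetical visibleArea function, clamping the window at the world's edges so the view never reads out of bounds:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// The world as rows of characters; a client sees a viewW x viewH window
// centred on (camX, camY), clamped so it stays inside the world.
std::vector<std::string> visibleArea(const std::vector<std::string>& world,
                                     int camX, int camY,
                                     int viewW, int viewH) {
    const int worldH = static_cast<int>(world.size());
    const int worldW = worldH ? static_cast<int>(world[0].size()) : 0;
    const int x0 = std::clamp(camX - viewW / 2, 0, std::max(0, worldW - viewW));
    const int y0 = std::clamp(camY - viewH / 2, 0, std::max(0, worldH - viewH));
    std::vector<std::string> view;
    for (int y = y0; y < std::min(y0 + viewH, worldH); ++y)
        view.push_back(world[y].substr(x0, viewW));
    return view;
}
```

For an 80x25 view this is at most 25 short string copies per client per frame, which supports the point above: the world can be as large as you like without affecting what goes over the wire.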