I'm a beginner in iOS development.
I want to develop a SwiftUI app (for children) that detects a user drawing a letter (for example "A") over a displayed model (an image of "A"). If the user draws outside the model, a popup tells them to draw inside the model.
I'm not asking for the code, which wouldn't help me learn, but for the frameworks to use to detect the hand/finger gestures and to check whether the model zone is sufficiently filled.
Thanks a lot for your answer,
Regards. Armand.
I'm having classes on computational design at university using AutoCAD, and we are asked to model some drawings. I'd like to know if there's any way to make the image I'm modelling appear on top of the screen while I hold some key, and go back to the model as soon as I release it.
There is no such built-in feature in AutoCAD. You can, however, insert the image in the background, or you can write your own plugin to do that.
1) You can use the Attach command to attach your image in AutoCAD.
2) Once the image is attached, you can create your model on top of it by using the "DR" alias for Draworder and selecting whether the image is placed on top of or behind your model.
3) Hold the middle mouse button to pan around; while you do this, the image will disappear temporarily until you release the button.
Hope that helps!
I'm working on a stream overlay that extracts information from a game (a flight simulator) and displays it on screen. Right now I'm using Qt together with an *.html file to render the overlay. It all works really well, but I want to add some customization options for the users of my overlay software, and I figured the best way would be to render the overlay in QML.
The main part of the overlay is a row that contains around 8 "elements" that display the relevant data.
One thing that should be customizable is the order of the elements in the row, but I really have no idea how to implement this feature. I've seen a few posts and tutorials on how to customize the order in a view using DelegateModel. However, right now it's not a view but QML components inserted into a RowLayout, because they are all different components (e.g. some of the images are actually animated, for which I'm using a component that draws them with Canvas2D). I guess I could figure out a way to store those elements in a model and use the Loader component to display the content in QML, but even then I'm not entirely sure how to store and restore the order of the elements. As far as I can tell, DelegateModel only changes the view and not the underlying model.
Any suggestion or best practice to accomplish my goal would be highly appreciated.
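Edit: to make the model + Loader idea concrete, here is roughly what I'm imagining on the C++ side (the element file names and the settings key are just placeholders): a small list model that holds the QML file name of each element in its current order and saves it with QSettings. A Repeater with a Loader delegate inside the RowLayout would then instantiate them. Not sure this is the right approach, though.

// Minimal sketch only -- "SpeedElement.qml", "AltitudeElement.qml" and the
// "overlay/elementOrder" settings key are hypothetical placeholders.
#include <QAbstractListModel>
#include <QSettings>
#include <QStringList>

class ElementOrderModel : public QAbstractListModel
{
    Q_OBJECT
public:
    explicit ElementOrderModel(QObject *parent = nullptr)
        : QAbstractListModel(parent)
    {
        // Restore a previously saved order, falling back to a default one.
        QSettings settings;
        m_order = settings.value("overlay/elementOrder",
                                 QStringList{"SpeedElement.qml", "AltitudeElement.qml"})
                      .toStringList();
    }

    int rowCount(const QModelIndex &parent = QModelIndex()) const override
    {
        return parent.isValid() ? 0 : m_order.size();
    }

    QVariant data(const QModelIndex &index, int role) const override
    {
        if (!index.isValid() || role != SourceRole)
            return {};
        return m_order.at(index.row());  // the QML file name for this element
    }

    QHash<int, QByteArray> roleNames() const override
    {
        return {{SourceRole, "source"}}; // consumed by Loader { source: model.source }
    }

    // Callable from QML, e.g. when the user reorders elements in a settings dialog.
    Q_INVOKABLE void move(int from, int to)
    {
        if (from == to || from < 0 || to < 0 || from >= m_order.size() || to >= m_order.size())
            return;
        beginMoveRows(QModelIndex(), from, from, QModelIndex(),
                      to > from ? to + 1 : to);
        m_order.move(from, to);
        endMoveRows();
        QSettings().setValue("overlay/elementOrder", m_order); // persist the new order
    }

private:
    enum { SourceRole = Qt::UserRole + 1 };
    QStringList m_order;
};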
I've been programming with the GDK for a few weeks now, and CardScrollView is a pretty nice interface for displaying cards. However, one issue with the UI is showing the user how far along they are in the card stack. In the Mirror API this is handled nicely by the Slider view at the bottom of the screen, as described on the Glass Design page:
https://developers.google.com/glass/design/style/metrics-grids
Unfortunately, I have not been able to get this Slider object to display on the CardScrollView and have instead resorted to a kludgy "1 of n" text.
Is there any way to get this Slider view to display in the GDK?
This is not yet supported by our API but is currently tracked with Issue #256.
For future reference, this feature has already been implemented, as described in the original issue.
You can use the method setHorizontalScrollBarEnabled to show the bar, e.g.
mCardScrollView.setHorizontalScrollBarEnabled(true);
Sorry for my bad English, I'm from Italy. XD
I have made a Frogger game with a simple graphics choice: a grid layout with a lot of updating labels. It works perfectly. I could set icons instead of labels, but now I want to dress it up with something better, and my teacher told me to choose between QGraphicsScene + Item + View, or QPainter + QWidget::paintEvent.
What's the best choice for my case? Can you tell me the essential way to do it, please?
I would use QGraphicsScene. It's a high-level interface for managing many 2D graphics items, so it will be simpler to use. The display logic for your game should be pretty straightforward to implement. (Just keep the graphics scene matched to the current game state.)
You'll still need to handle user inputs by listening to keyboard events. You can implement that at the QMainWindow level.
And of course, you'll need to write the actual game logic.
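Just to give you an idea of the wiring, here is a rough sketch (the image file, window size and grid step are placeholders; I've put the key handling in a QGraphicsView subclass, though the same override could live on your QMainWindow as mentioned above):

// Rough sketch only -- "frog.png", the 480x640 scene and the 40-pixel step
// are placeholders, not part of any real project.
#include <QApplication>
#include <QGraphicsPixmapItem>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QKeyEvent>

class GameView : public QGraphicsView
{
public:
    GameView()
    {
        setFocusPolicy(Qt::StrongFocus);
        auto *scene = new QGraphicsScene(0, 0, 480, 640, this);
        setScene(scene);

        // One item per game object; update their positions as the game state changes.
        m_frog = scene->addPixmap(QPixmap("frog.png"));
        m_frog->setPos(240, 600);
    }

protected:
    void keyPressEvent(QKeyEvent *event) override
    {
        // Translate key presses into game moves, then reflect the new state in the scene.
        const int step = 40;
        switch (event->key()) {
        case Qt::Key_Up:    m_frog->moveBy(0, -step); break;
        case Qt::Key_Down:  m_frog->moveBy(0,  step); break;
        case Qt::Key_Left:  m_frog->moveBy(-step, 0); break;
        case Qt::Key_Right: m_frog->moveBy( step, 0); break;
        default:            QGraphicsView::keyPressEvent(event);
        }
    }

private:
    QGraphicsPixmapItem *m_frog = nullptr;
};

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    GameView view;
    view.show();
    return app.exec();
}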
I am working on a graphics game project in OpenGL and I want to make a front page for the game containing an image, a few buttons and some text. The buttons perform different actions when clicked, e.g. a start button that starts the game. Can anyone please suggest how I can do it?
How can I do it?
Well, by implementing it. OpenGL is not a game engine, nor a scene graph, nor a UI toolkit. It's merely a drawing API providing you the means to draw nice pictures, and that's it. Anything beyond that is the task of either a 3rd party library/toolkit, or your own code, or a combination of both.
A usual approach to modelling this behaviour is to introduce application states. Here is a related question.
You could model your StartScreenState by drawing a plane with buttons using an orthogonal projection and not drawing (or not having initialized yet) the rest. When the player clicks on 'start', you can switch to perspective projection and display game contents.
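To make that concrete, here is a bare-bones sketch using legacy OpenGL and GLUT; the window size, button rectangle and state names are made up, and your windowing/input library may differ:

// Sketch of the application-state idea: an ortho "start screen" with one
// clickable rectangle standing in for a button, then a perspective scene.
#include <GL/glut.h>

enum AppState { StartScreen, Playing };
static AppState state = StartScreen;

static void drawStartScreen()
{
    // 2D overlay: orthographic projection in pixel coordinates (y down).
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, 800, 600, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glColor3f(0.2f, 0.6f, 0.2f);
    glBegin(GL_QUADS);                 // the "start" button
    glVertex2f(300, 250); glVertex2f(500, 250);
    glVertex2f(500, 350); glVertex2f(300, 350);
    glEnd();
}

static void drawGame()
{
    // 3D scene: perspective projection, placeholder triangle for the game content.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 800.0 / 600.0, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f);

    glColor3f(0.8f, 0.3f, 0.3f);
    glBegin(GL_TRIANGLES);
    glVertex3f(-1, -1, 0); glVertex3f(1, -1, 0); glVertex3f(0, 1, 0);
    glEnd();
}

static void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    if (state == StartScreen)
        drawStartScreen();
    else
        drawGame();
    glutSwapBuffers();
}

static void mouse(int button, int buttonState, int mx, int my)
{
    // Hit-test the button rectangle while on the start screen.
    if (state == StartScreen && button == GLUT_LEFT_BUTTON && buttonState == GLUT_DOWN &&
        mx >= 300 && mx <= 500 && my >= 250 && my <= 350) {
        state = Playing;
        glutPostRedisplay();
    }
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(800, 600);
    glutCreateWindow("state sketch");
    glutDisplayFunc(display);
    glutMouseFunc(mouse);
    glutMainLoop();
    return 0;
}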
I don't know that I would even use OpenGL for that. OpenGL is for rendering colored/textured triangles/quads so that you can do tons of stuff graphically. There's no such thing as "load an image at coordinate x,y on the screen"; the equivalent is "draw two triangles with these vertices that make up a rectangle and are textured with this image". That's why I would probably stay away from OpenGL for this: you don't really need any of the awesome features that OpenGL has.
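Just to illustrate, here is roughly what that "two textured triangles" version of drawing an image looks like in legacy OpenGL. This assumes a GL context, an orthographic projection in pixel units, and an already-uploaded texture object, none of which is shown:

// Sketch of what "put an image at x,y" turns into with raw OpenGL
// (legacy immediate mode for brevity).
#include <GL/gl.h>

void drawImageQuad(GLuint texture, float x, float y, float w, float h)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture);

    // Two triangles forming a rectangle; each vertex is paired with a texture coordinate.
    glBegin(GL_TRIANGLES);
    glTexCoord2f(0, 0); glVertex2f(x,     y);
    glTexCoord2f(1, 0); glVertex2f(x + w, y);
    glTexCoord2f(1, 1); glVertex2f(x + w, y + h);

    glTexCoord2f(0, 0); glVertex2f(x,     y);
    glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
    glTexCoord2f(0, 1); glVertex2f(x,     y + h);
    glEnd();

    glDisable(GL_TEXTURE_2D);
}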
A very common UI framework that I believe integrates well with OpenGL, if you really want to use the two together, is Qt. It should make your life easier in terms of UI work. See the wiki and dev page.