ESRI shapefile loads into QGIS in the wrong position: how to reposition all features on a layer (QGIS, shapefile)

I have an AutoCAD DXF file containing a design plan. I converted the DXF to a shapefile with the ogr2ogr tool so that I could import it into QGIS, and the import itself worked quite well.
In QGIS there is a plugin called "OpenLayers" for adding a basemap layer such as Google Maps, OpenStreetMap, etc. After adding the basemap, I noticed that the design plan layer I created is not in the position I want relative to the basemap.
I have tried the drag-and-drop and "Move Feature(s)" tools in QGIS, but they are very slow and make the application stop responding (hang).
Is there a way to quickly move an entire layer to the desired position?

Related

What's the best approach to design this simple React Native AR app?

I'm trying to write a simple AR app in React Native. It should detect 4 predefined markers and draw a rectangle as a boundary on the live camera preview. I'm trying to do the processing in C++ using OpenCV so that the logic of the app lives in one place and is accessible to both Android and iOS.
Here's what I've been thinking:
Write the OS-dependent code to open the camera and get permissions in Java/Objective-C, plus the C++ part that processes each frame.
Call the C++ code (from within the native code) on each frame; it should return, say, coordinates for the markers.
Draw the rectangle on the preview in native code if the 4 markers are found (no idea how to achieve this so far, but I think it will be native code).
Expose that preview (the live preview with the drawn view) to React Native (not sure about that or how to achieve it).
I've looked at the React Native camera component, but it doesn't provide access to frames, and even if that were possible, I'm not sure it would be a good idea to send frames over the bridge between JS and Java/Objective-C.
The problem is that I'm not sure about the performance, or whether this is even possible.
If you know of any React Native library that would help with this, that would be great.
Your steps seem sound. After processing the frame in C++, you will need to set the application properties RCTRootView.appProperties in iOS, and emit an event using RCTDeviceEventEmitter on Android. So, you will need an Objective-C wrapper for your C++ code on iOS and a Java wrapper on Android. In either case, you should be able to use the same React Native code for actually drawing the rectangle on top of the camera preview. You're right that the React Native camera component does not have an API for getting individual frames from the camera, so you'll need to write that code natively for each platform.
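To make that shared C++ layer a bit more concrete, here is a rough, untested sketch of a per-frame processing function. It assumes the markers are ArUco markers and that the opencv_contrib aruco module (pre-OpenCV 4.7 API) is available; the function name and return convention are invented for illustration.

    // marker_detector.cpp: hypothetical shared C++ layer, called from the
    // Java/Objective-C wrapper once per camera frame.
    #include <opencv2/core.hpp>
    #include <opencv2/aruco.hpp>   // opencv_contrib aruco module
    #include <vector>

    // Returns the centre of each detected marker in pixel coordinates of the frame.
    std::vector<cv::Point2f> detectMarkerCentres(const cv::Mat& frameGray)
    {
        cv::Ptr<cv::aruco::Dictionary> dict =
            cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);

        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f>> corners;
        cv::aruco::detectMarkers(frameGray, dict, corners, ids);

        std::vector<cv::Point2f> centres;
        for (const auto& quad : corners) {
            cv::Point2f c(0.f, 0.f);
            for (const auto& p : quad)
                c += p;
            centres.push_back(c * 0.25f);   // average of the four corners
        }
        // The native wrapper would forward these coordinates to React Native
        // (appProperties on iOS, an emitted event on Android) and the rectangle
        // would only be drawn when exactly four markers are present.
        return centres;
    }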

How to create a cursor in X11 from raw data (C++)

I have been searching around about this problem for a while. I am making a cross-platform program, and I have figured out how to load an animated cursor with the Windows API and how to create a cursor at run time from raw bitmap data. However, I can't find good documentation on how to do this in X11 for the Unix/Linux build of my program. I know I need to use the XRender extension functions XRenderCreateCursor and XRenderCreateAnimCursor from this documentation https://www.x.org/releases/X11R7.6/doc/libXrender/libXrender.txt, but I do not know how to use these functions, and the documentation does not show any examples.
Also, the raw image data is in ARGB format, and I want to support the alpha channel with these cursors if possible.
Could someone show me how to use the X11 and XRender (or XCursor) libraries to create a cursor, both static and animated, and possibly how to do it so the cursor can be used with any X11 window?
Thanks!
PS.
I am actually editing an open source cross-platform GUI library that I am using for my program, and I am trying to add this feature to the library, but I am not used to programming with X11.
When it comes to X, nothing is simple.
First, review the specification of the X render extension.
The steps for creating an animated cursor are as follows.
First, you need to create a PICTURE for each frame of the animated cursor, using CreatePicture.
Use CreateCursor to create a CURSOR from each PICTURE. CreateCursor returns a CURSOR handle.
Then you take the list of CURSORs for all of the frames and use CreateAnimCursor to create a single CURSOR representing the animated cursor.
This all comes down to creating a PICTURE for each frame. A PICTURE is created with CreatePicture from a DRAWABLE and a PICTFORMAT. The DRAWABLE would be the PIXMAP holding the actual pixel data for the cursor's frame, and the PICTFORMAT specifies which channels in the pixmap represent the red, green, blue, and alpha components; it must be one of the enumerated PICTFORMATs returned by QueryPictFormats.
For more information, see the aforementioned X render extension specification.
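To make those steps concrete, here is a rough, untested sketch using Xlib and libXrender. The helper names are invented, it assumes the frame data is 32-bit premultiplied ARGB, and a robust implementation would look up a 32-bit TrueColor visual rather than relying on the default one.

    // Sketch only: build ARGB cursor frames with libXrender.
    #include <X11/Xlib.h>
    #include <X11/extensions/Xrender.h>
    #include <vector>

    // Creates one cursor frame from 32-bit premultiplied ARGB pixel data.
    Cursor makeCursorFrame(Display* dpy, unsigned int* argb,
                           unsigned width, unsigned height,
                           unsigned hotX, unsigned hotY)
    {
        Window root = DefaultRootWindow(dpy);

        // 32-bit deep pixmap to hold the frame's pixels.
        Pixmap pix = XCreatePixmap(dpy, root, width, height, 32);
        GC gc = XCreateGC(dpy, pix, 0, nullptr);

        // Wrap the raw pixels in an XImage and copy them into the pixmap.
        XImage* img = XCreateImage(dpy, DefaultVisual(dpy, DefaultScreen(dpy)),
                                   32, ZPixmap, 0,
                                   reinterpret_cast<char*>(argb),
                                   width, height, 32, 0);
        XPutImage(dpy, pix, gc, img, 0, 0, 0, 0, width, height);

        // PICTFORMAT describing ARGB32, a PICTURE over the pixmap, then the CURSOR.
        XRenderPictFormat* fmt = XRenderFindStandardFormat(dpy, PictStandardARGB32);
        Picture pic = XRenderCreatePicture(dpy, pix, fmt, 0, nullptr);
        Cursor cur = XRenderCreateCursor(dpy, pic, hotX, hotY);

        XRenderFreePicture(dpy, pic);
        XFreeGC(dpy, gc);
        XFreePixmap(dpy, pix);
        img->data = nullptr;   // the caller still owns the pixel buffer
        XDestroyImage(img);
        return cur;
    }

    // Combines per-frame cursors into one animated cursor (delay per frame, in ms).
    Cursor makeAnimatedCursor(Display* dpy, const std::vector<Cursor>& frames,
                              unsigned long delayMs)
    {
        std::vector<XAnimCursor> anim(frames.size());
        for (size_t i = 0; i < frames.size(); ++i) {
            anim[i].cursor = frames[i];
            anim[i].delay  = delayMs;
        }
        return XRenderCreateAnimCursor(dpy, static_cast<int>(anim.size()), anim.data());
    }

The returned Cursor can then be attached to any window with XDefineCursor, just like a cursor created the traditional way.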

Is raphael.js paper a real canvas?

I was researching which graphics library to use for a project, and Raphael.js came up as a top contender. However, the sample code and documentation show that Raphael creates a canvas (via the paper variable on the homepage), and then you add things to it. Two months later, a passerby asked some questions about our project, and I explained that we didn't use Raphael (we chose static SVG and D3 instead) because Raphael uses canvases, and our project would have been greatly disadvantaged by using canvases. So, Raphael experts out there: is the canvas in Raphael an actual HTML canvas or not? And where is the relevant documentation hosted, so that you and/or I can send a pull request to explain this better upfront?
No, Raphael's paper is SVG.
It is kind of strange, because the paper object's property is called canvas, but it only contains the SVGAnimatedString.
Fiddle: http://jsfiddle.net/V2DGy/
Raphael uses SVG and VML to create graphics. The property is simply named canvas; it is not a canvas element. In fact, it is the root SVG element associated with that particular paper.
Raphael is quite similar to D3 but is more along the lines of a general graphics library, with the added advantage of compatibility with Internet Explorer 6 through 8 (where it uses VML instead of SVG).
Though the variable name is misleading, Raphael states upfront on its home page that it is an SVG library.
Quoting their website.
Raphaël is a small JavaScript library that should simplify your work with vector graphics on the web. If you want to create your own specific chart or image crop and rotate widget, for example, you can achieve it simply and easily with this library.
Raphaël ['ræfeɪəl] uses the SVG W3C Recommendation and VML as a base for creating graphics. This means every graphical object you create is also a DOM object, so you can attach JavaScript event handlers or modify them later. Raphaël’s goal is to provide an adapter that will make drawing vector art compatible cross-browser and easy.
Raphaël currently supports Firefox 3.0+, Safari 3.0+, Chrome 5.0+, Opera 9.5+ and Internet Explorer 6.0+.
No, it is SVG, which is totally different: a canvas contains a raster picture, while SVG contains vector elements.

Render a vector graphic (.svg) in C++

A friend and I are working on a 2D game where the graphics will be .svg files. We will scale them appropriately by either rasterizing them first or rendering them directly onto a surface (which still requires rasterization at some point).
The problem is, I've been looking all day for a library that will let me take an .svg file and eventually render it in Allegro. As far as I know, this involves rasterizing it into some format that Allegro can read, so that Allegro can render the "flattened" image.
So what are some C++ libraries I could use for taking an .svg file and "flattening" it so I can render it? The library obviously needs to support scaling too, so I can scale the vector graphic and then rasterize it.
I'm using Windows and Visual C++ Express 2010.
I've tried Cairo, but it only allows writing .svg files and doesn't let you read them. I've also looked into librsvg, which works with Cairo, but I had a lot of trouble getting it to work properly on Windows (because it has loads of GNOME dependencies). If you have any guides for getting these to work on Windows, that would be great too.
The wxSVG library allows loading and manipulating SVG files. Qt also has an SVG module.
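If you go the Qt route, a minimal, untested sketch of rasterizing an SVG with the QtSvg module might look like the following (Qt 5 style; the file names are placeholders):

    // Sketch: rasterize an .svg into a QImage with QSvgRenderer (QtSvg module).
    #include <QGuiApplication>
    #include <QImage>
    #include <QPainter>
    #include <QSvgRenderer>

    int main(int argc, char* argv[])
    {
        QGuiApplication app(argc, argv);   // needed before any painting

        QSvgRenderer renderer(QString("sprite.svg"));
        if (!renderer.isValid())
            return 1;

        // Rasterize at twice the document's default size to demonstrate scaling.
        QSize size = renderer.defaultSize() * 2;
        QImage image(size, QImage::Format_ARGB32_Premultiplied);
        image.fill(Qt::transparent);

        QPainter painter(&image);
        renderer.render(&painter);         // scales the SVG to fill the image
        painter.end();

        image.save("sprite.png");          // or hand the pixels to the game engine
        return 0;
    }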
I'm coming a little late to the conversation, but I would suggest you look at NanoSVG, an extremely lightweight SVG renderer that doesn't need Cairo or librsvg. I got NanoSVG compiled and working in a couple of hours. It's very basic, but it gets the job done.
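For reference, the NanoSVG load-and-rasterize flow is roughly the following untested sketch (single-header nanosvg.h and nanosvgrast.h from the NanoSVG repo; the file name and scale factor are placeholders):

    // Sketch: load an .svg with NanoSVG and rasterize it to an RGBA buffer.
    #define NANOSVG_IMPLEMENTATION
    #include "nanosvg.h"
    #define NANOSVGRAST_IMPLEMENTATION
    #include "nanosvgrast.h"

    #include <cstdio>
    #include <vector>

    int main()
    {
        // Parse the SVG; units and DPI only matter if the file uses physical units.
        NSVGimage* image = nsvgParseFromFile("sprite.svg", "px", 96.0f);
        if (!image) { std::fprintf(stderr, "could not open SVG\n"); return 1; }

        const float scale = 2.0f;                       // scale before rasterizing
        const int w = static_cast<int>(image->width  * scale);
        const int h = static_cast<int>(image->height * scale);

        std::vector<unsigned char> rgba(static_cast<size_t>(w) * h * 4);

        NSVGrasterizer* rast = nsvgCreateRasterizer();
        nsvgRasterize(rast, image, 0.0f, 0.0f, scale, rgba.data(), w, h, w * 4);

        // rgba now holds the "flattened" 32-bit image, which can be copied into
        // an ALLEGRO_BITMAP or any other blitting target.

        nsvgDeleteRasterizer(rast);
        nsvgDelete(image);
        return 0;
    }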
https://github.com/sammycage/lunasvg is a nice SVG parsing, rendering, and manipulating library. It is written in pure C++.
The SVG++ library provides advanced support for reading SVG, so rendering SVG with Allegro can be implemented in a reasonable amount of time.
I have recently put together an SVG renderer library in C++:
https://github.com/igagis/svgren
It uses AGG for rendering to an off-screen surface.
It supports gradients and all kinds of shapes.
Personally, I use NanoSVG in my Simple Viewer GL. It lets me easily load and rasterize SVG images in a few lines of code, but the library's SVG support is limited.
With the help of nanosvg and many other C++ SVG parsers, adding SVG rendering capability to your application should be trivial. The recipe is as follows: SVG parser + vector rendering library = trivial SVG rendering. The vector rendering library can be Cairo or a number of other libraries (nanovg comes to mind, along with several other vector-graphics libraries). Here's an example of how to support SVG rendering with a Cairo + FLTK + nanosvg combo. Now, all the SVG parsers, as well as Cairo itself and the other renderers, have bugs and shortcomings, but basic SVG support should never be a problem.
I was looking for a quick way to render an SVG file in a Windows MFC project.
In that case, the Microsoft Web Browser ActiveX control turned out to be an ideal solution.
Here is the result of loading an SVG file using the browser control.

General printing of raster and/or vector images

I'm looking for an API for printing.
Basically, what I want to achieve is to print a set of pixels (a monochromatic bitmap that I store in memory) onto a generic paper format (A4, A5, etc.).
What I think the minimum API would be:
a printer devices list
a printer buffer where I could send my in-memory pixmap (e.g. like the Windows XP printer tasks folder)
some API that would translate SI dimensions into printer resolution, or, following the previous point, map the in-memory pixmap (e.g. 450x250) onto paper at the appropriate resolution
I was considering PostScript, but I have an old LPT-driven LaserJet that probably doesn't support PS.
Currently I'm trying to find something of interest in Qt's QGraphicsView.
http://doc.trolltech.com/4.2/qgraphicsview.html
Well, you got close; look at Printing with Qt. The QPrinter class implements some of what you are looking for. It is implemented as a QPaintDevice, which means that any widget that can render itself on the screen can be printed. This also means you don't need to render to a bitmap to print; you can use Qt widgets or drawing functions for printing.
On a side note, check the version number of the Qt documentation; the latest release of Qt is 4.5, and 4.6 is in beta.
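As a rough, untested sketch of that approach (Qt 4-era API to match the documentation version mentioned above; the bitmap contents and sizes are placeholders):

    // Sketch: print an in-memory monochrome bitmap with QPrinter and QPainter.
    #include <QApplication>
    #include <QImage>
    #include <QPainter>
    #include <QPrintDialog>
    #include <QPrinter>

    int main(int argc, char* argv[])
    {
        QApplication app(argc, argv);

        // Stand-in for the pixmap held in memory; Format_Mono is 1 bit per pixel.
        QImage bitmap(450, 250, QImage::Format_Mono);
        bitmap.setColor(0, qRgb(0, 0, 0));
        bitmap.setColor(1, qRgb(255, 255, 255));
        bitmap.fill(1);   // placeholder content: an all-white page

        QPrinter printer(QPrinter::HighResolution);
        printer.setPaperSize(QPrinter::A4);

        // Let the user pick from the printer devices installed on the system.
        QPrintDialog dialog(&printer);
        if (dialog.exec() != QDialog::Accepted)
            return 0;

        // QPrinter is a QPaintDevice, so ordinary QPainter calls work on it.
        QPainter painter(&printer);
        QRect page = printer.pageRect();

        // Scale the 450x250 pixmap to half the page width, preserving aspect ratio;
        // physical sizes could be derived from printer.resolution() instead.
        int w = page.width() / 2;
        int h = w * bitmap.height() / bitmap.width();
        painter.drawImage(QRect(0, 0, w, h), bitmap);
        painter.end();
        return 0;
    }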
You might want to investigate wxPython for printing. Learning the framework might be a bit of an overhead for you, though! I've had success with it in the past, both on Windows and Linux.
I've also used ReportLab to make PDFs, which are pretty easy to print with a minimum of OS interaction.
I would use PIL to create a BMP file, and then just use the standard OS services to print that file. PIL will accept data in either raster or vector form.