Create colored shape using GDAL library - shapefile

I am trying to create a colored polygon using the GDAL library. I am currently using the Shapefile C Library (shapelib), but it does not allow me to use colors. Is this possible in GDAL? How?
Thanks.

Shapefiles don't store colors.
They only contain the geometry (points, multipoints, lines, polygons, etc.) of the features and their attributes. The color of each feature depends on the software you use to display the shapes. For instance, in ArcGIS Desktop you would use the 'Symbology' dialog box to choose colors, which can depend on the value of one or more attributes. In that case, the symbology can be stored in a separate "layer" file (.lyr).
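If the colors do need to travel with the data, one common workaround is to store them as an ordinary attribute and let the rendering code interpret them. Below is a minimal sketch of such a convention (the `COLOR` column name and the hex-string format are purely hypothetical, not anything GDAL or the shapefile format prescribes):

```cpp
#include <cstdio>
#include <string>

// Hypothetical convention: keep the fill color of each feature as a
// hex string such as "#FF8800" in a "COLOR" attribute column of the
// DBF, and let the viewer parse it back into RGB components.
struct Rgb { unsigned int r, g, b; };

// Encode an RGB triple as the attribute value to store.
std::string encodeColor(const Rgb &c) {
    char buf[8];
    std::snprintf(buf, sizeof(buf), "#%02X%02X%02X", c.r, c.g, c.b);
    return buf;
}

// Parse the stored attribute value back into components for rendering.
Rgb decodeColor(const std::string &s) {
    Rgb c{0, 0, 0};
    std::sscanf(s.c_str(), "#%02x%02x%02x", &c.r, &c.g, &c.b);
    return c;
}
```

Any software that only knows the shapefile spec will simply see a text attribute; the convention is meaningful only to code you control.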

Related

Using Illustrator image for custom animation using C++

I have a vectorized Adobe Illustrator image that I would like to animate using custom XYZ input (in this case simulated plot points that I would like to visualize over time, using a hand-drawn picture/wireframe model) from e.g. a C++ program or perhaps even a JavaScript application. Is there any (fairly straightforward) strategy to achieve this? E.g. using OpenGL or some other (open source) tool?
If you want to draw vectorized images you will need a vector image renderer. The easiest way to do this is to use Flash, since it has vector-drawing support (one of the best) and a really strong scripting language to do all sorts of things (animate stuff based on input, etc.), even 3D graphics.
The hard way of doing this is to use a custom library in C++ to draw vector graphics in OpenGL or DirectX. I can only speak of gameswf (an open-source player for Flash files) or Scaleform. These two have support for SWF files exported by Flash. If you only need a renderer without any animation, then there should be plenty of libraries out there (check out this thread).

pixel text to 3d mesh convert library

Via GDI/GDI+ I can get the text pixels or glyphs; how do I convert them to a 3D mesh? Is there any existing library or source code that can be used?
PS: I know about D3DXCreateText, but I'm using OpenGL...
If you work with OpenGL, you can try FTGL. It allows you to generate different polygon meshes from fonts, including extruded meshes, as well as render them:
http://ftgl.sourceforge.net/docs/html/ftgl-tutorial.html
but I am not sure how portable this library is, especially to OpenGL ES...
Using GDI is definitely not among the best ways to go if you need to obtain glyphs for the text; you could use the FreeType library instead (http://www.freetype.org), which is open source and portable. It can produce both bitmap and vectorized representations of the glyphs. You will have to initialize a single FT_Library instance in your program, which is later used to work with multiple fonts. After loading a font from a file (TrueType, OpenType, PostScript and some other formats) you'll be able to obtain the geometrical parameters of specific characters and use them to create a texture or build primitives with your preferred rendering API, OpenGL included.
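As a small illustration of the vectorized side: the TrueType outlines FreeType hands back are built from line segments and conic (second-order) Bézier arcs, which you typically flatten into line segments before building OpenGL primitives. A self-contained sketch of just the flattening step (the FreeType calls themselves are omitted, and the fixed step count is a simplification; real code often subdivides adaptively):

```cpp
#include <vector>

struct Pt { double x, y; };

// Flatten one conic (quadratic Bezier) arc, as found in TrueType
// outlines, into `steps` straight line segments by sampling
// B(t) = (1-t)^2 * p0 + 2t(1-t) * c + t^2 * p1 for t in [0, 1].
std::vector<Pt> flattenConic(Pt p0, Pt c, Pt p1, int steps) {
    std::vector<Pt> out;
    for (int i = 0; i <= steps; ++i) {
        double t = double(i) / steps;
        double u = 1.0 - t;
        out.push_back({u * u * p0.x + 2 * u * t * c.x + t * t * p1.x,
                       u * u * p0.y + 2 * u * t * c.y + t * t * p1.y});
    }
    return out;
}
```

The resulting point list can be fed straight into a line strip, or into a tessellator if you need filled or extruded geometry.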

OpenGL VBO Loading Font data

I need to draw a VBO consisting of font data, mainly numbers. How do I obtain the data and send it to the VBO?
I know that there is a library called FreeType which should do this, but it produces bitmap fonts, and I do not need bitmaps in my project. I just want polygon data which I can fill with my own color and reposition/scale.
Freetype also does outline fonts, but how do I go about tessellating the outline fonts to create accurate geometry?
Is what I am trying to achieve difficult? Can I find some examples of something similar?
Is what I am trying to achieve difficult?
In the case of rendering crisp fonts at all sizes with proper gamma correction and antialiasing: Yes!
This is actually a subject of active research.
Can I find some examples of something similar?
Just use a ready to use font drawing library for OpenGL, like FTGL.
A solution that could work is to export the font data as XY coordinates with indices from a 3D modeling program, then load this data at startup.
Of course, this does not work when changing fonts and it takes time, but if the font will not change, it does the job.

3D model manipulation for a Desktop Augmented Reality application

I'm working on an Augmented Reality project that uses multiple markers to get positions for 3D models that I'm planning to overlay. (I'm doing this from scratch using OpenCV and I'm not using ARToolkit or any other off the shelf marker detection libraries).
Environment: Visual C++ 2008, Windows 7, Core2Duo 1GB ram, OpenCV 2.3
I want the 3D models to be manipulable by the user, so it will turn into a sort of simulation.
For this I'm planning to use OpenGL. What are your suggestions and recommendations? Can the simulation part be done using OpenGL itself, or will I need something like OpenSceneGraph/ODE/Unity 3D/Ogre 3D?
This is for an academic project so better if I can produce more self-coded system rather than using off-the-shelf products.
It would seem that OpenGL alone is enough for your needs (drawing a model with a specific colour and size).
If you're new to OpenGL, and you are not going to be using it for your future projects, it might be easier to use the old fixed-function pipeline, which already has the lighting and color system ready and doesn't require you to learn how to write shaders.
For your project, you will need a texture to which you copy the image from the camera using glTexSubImage2D(), and which you then draw as the background (or you can use glDrawPixels() in case you don't require any scaling). After that, you need to have your model, complete with normals for lighting. Models can be exported e.g. from Blender or 3DS Max to an ASCII format, which is pretty easy to parse. Then you can draw the model. Colors can be changed using glColor3f() before drawing the model (make sure you don't specify a different color while drawing the model). Positioning of the models is done using matrices. The old OpenGL has some handy and easy-to-use functions for rotating and translating objects. There are also functions for scaling the objects (changing size), so that is covered pretty easily. All you need is to figure out the camera position relative to the marker (which I believe is implemented in OpenCV).
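To illustrate the model-loading step, here is a minimal reader for OBJ-style ASCII data ("v x y z" vertex lines and "f a b c" triangle lines with 1-based indices). Real exports contain more record types (normals, texture coordinates, materials), which this sketch simply skips:

```cpp
#include <sstream>
#include <string>
#include <vector>

// Holds just enough of a model to draw it: positions and triangle indices.
struct Mesh {
    std::vector<float> verts;   // x, y, z triples
    std::vector<int>   tris;    // 0-based index triples
};

// Parse OBJ-style ASCII text: "v x y z" vertices, "f a b c" faces.
Mesh parseAsciiModel(const std::string &text) {
    Mesh m;
    std::istringstream in(text);
    std::string tag;
    while (in >> tag) {
        if (tag == "v") {
            float x, y, z;
            in >> x >> y >> z;
            m.verts.push_back(x); m.verts.push_back(y); m.verts.push_back(z);
        } else if (tag == "f") {
            int a, b, c;
            in >> a >> b >> c;   // OBJ indices are 1-based
            m.tris.push_back(a - 1); m.tris.push_back(b - 1); m.tris.push_back(c - 1);
        } else {
            std::string rest;
            std::getline(in, rest);   // skip unknown record types
        }
    }
    return m;
}
```

The resulting arrays map directly onto glVertexPointer()/glDrawElements() in the fixed-function pipeline.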
If you were to use the forward-compatible OpenGL, you would need to set up vertex buffer objects to contain the model data and write vertex and fragment shaders to shade and display your model. That's somewhat more work, for which you get extended flexibility. But you can use shaders in the old OpenGL as well, if you decide you need them (e.g. for some special effects).
Learning how to use a scene graph or an engine (Ogre) can take some time; I would not recommend it for your task.

Convert a raster image into a polygon using QGIS (or another method)

I want to convert several images into polygon SHP files using QGIS (Quantum GIS 1.6).
I need to do edge detection AND differentiate between several different colors of lines (red, green, yellow and black). I need good edge detection, as my images are scanned at 200 DPI.
I'm open to other suggestions that don't involve QGIS. Could I use Photoshop, or would ArcGIS do a better job of this?
Inkscape has a rather functional vectorizer (Trace Bitmap, IIRC)
http://inkscape.org/doc/tracing/tutorial-tracing.html
Inkscape's native format is SVG (fully vector based). It allows simplification of the resulting paths as well. You could also process the resulting XML automatically.
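As a taste of processing the traced output programmatically: the path data in the resulting SVG is just text, so the simplest absolute commands can be read with a few lines of code. A toy sketch (real Inkscape output also uses relative commands and Bézier segments, which a full reader would have to handle):

```cpp
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Toy reader for the simplest absolute SVG path data, e.g.
// "M 0,0 L 10,0 L 10,10 Z": strip the command letters and commas,
// then read the remaining coordinate pairs.
std::vector<std::pair<double, double>> readSvgPath(std::string d) {
    for (char &ch : d)
        if (ch == ',' || ch == 'M' || ch == 'L' || ch == 'Z') ch = ' ';
    std::vector<std::pair<double, double>> pts;
    std::istringstream in(d);
    double x, y;
    while (in >> x >> y) pts.push_back({x, y});
    return pts;
}
```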