How to implement texturing over a model using pycollada? - python-2.7

I am developing a Python script that generates .DAE (COLLADA) files, along with the associated KML files, for building 3D models of buildings. I have street images of the buildings, i.e. a front-face image of each building, and I need to apply these images as textures to their respective building models. I have been unable to find a suitable way to do this from Python. So far I have succeeded in generating blank cubes and cuboids that can be positioned over the map to represent the buildings; now I need to apply an input image as a texture on the front plane of these models.
Kindly help.
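For reference, pycollada's material classes can wire an image up as a diffuse texture. Below is a minimal sketch for a single textured quad (the front face of one building); the file names, IDs, dimensions, and UV values are placeholders:

    import collada
    import numpy as np

    mesh = collada.Collada()

    # Texture plumbing: image -> surface -> sampler -> map -> effect
    image = collada.material.CImage("front-image", "./front.jpg")  # placeholder path
    surface = collada.material.Surface("front-surface", image)
    sampler = collada.material.Sampler2D("front-sampler", surface)
    tex_map = collada.material.Map(sampler, "UVSET0")
    effect = collada.material.Effect("effect0", [surface, sampler],
                                     "lambert", diffuse=tex_map)
    mat = collada.material.Material("material0", "frontmaterial", effect)
    mesh.images.append(image)
    mesh.effects.append(effect)
    mesh.materials.append(mat)

    # One quad (two triangles) for the front face, plus its UV coordinates
    verts = np.array([0, 0, 0,  10, 0, 0,  10, 0, 8,  0, 0, 8], dtype=np.float64)
    uvs = np.array([0, 0,  1, 0,  1, 1,  0, 1], dtype=np.float64)
    vert_src = collada.source.FloatSource("verts-array", verts, ('X', 'Y', 'Z'))
    uv_src = collada.source.FloatSource("uv-array", uvs, ('S', 'T'))

    geom = collada.geometry.Geometry(mesh, "geometry0", "front", [vert_src, uv_src])
    inputs = collada.source.InputList()
    inputs.addInput(0, 'VERTEX', "#verts-array")
    inputs.addInput(1, 'TEXCOORD', "#uv-array", set="0")

    # Interleaved (vertex index, uv index) per corner; triangles 0-1-2 and 0-2-3
    indices = np.array([0, 0, 1, 1, 2, 2,  0, 0, 2, 2, 3, 3])
    triset = geom.createTriangleSet(indices, inputs, "materialref")
    geom.primitives.append(triset)
    mesh.geometries.append(geom)

    matnode = collada.scene.MaterialNode("materialref", mat,
                                         inputs=[('UVSET0', 'TEXCOORD', '0')])
    geomnode = collada.scene.GeometryNode(geom, [matnode])
    node = collada.scene.Node("node0", children=[geomnode])
    scene = collada.scene.Scene("scene0", [node])
    mesh.scenes.append(scene)
    mesh.scene = scene
    mesh.write("front_textured.dae")

Note that the .dae only references the image by path, so keep the texture file next to the generated .dae when packaging it with the KML.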

Related

Train a model to draw bounding boxes on certain objects in an image?

Is it possible to use GCP Machine Learning products to train a model to draw bounding boxes on certain objects in an image? I'd like to be able to feed labeled images and have it predict where that label would belong.
I think you are looking for something like this, where the Tensorflow machine learning library is used:
https://cloud.google.com/solutions/creating-object-detection-application-tensorflow
A note:
When you say that you want to be able to feed labeled images and have it predict where that label would belong, I assume you mean where that object is present in the image, in terms of bounding-box coordinates. If so, the library should take care of that for you; your job is just to train the network with your labeled images.
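If you go down that route, running a trained detector ends up looking roughly like this. This is a sketch against a SavedModel exported by the TensorFlow Object Detection API; the model path and image shape are placeholders:

    import numpy as np
    import tensorflow as tf

    # Load a detector exported by the TensorFlow Object Detection API
    # ("exported_model/saved_model" is a placeholder path).
    detect_fn = tf.saved_model.load("exported_model/saved_model")

    # Placeholder uint8 image batch; in practice, load your own image here.
    image = np.zeros((1, 640, 640, 3), dtype=np.uint8)
    detections = detect_fn(tf.constant(image))

    boxes = detections["detection_boxes"]      # normalized [ymin, xmin, ymax, xmax]
    scores = detections["detection_scores"]    # confidence per box
    classes = detections["detection_classes"]  # label id per box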

Downloading google street view panoramas programmatically from a URL request

I'm currently trying to download the equirectangular images Google displays in their 360 view of Google street view to an image file so that I may display them in VR in Unreal Engine 4. I've tried a few things -
Requesting the panorama tiles from a constructed URL in the format described in the Street View API. This ends up returning a file-not-found error for any pano ID that isn't the example one outlined. Perhaps I'm using the wrong method of getting a panorama ID? I used the following example to extract the pano ID and plugged that into the URL with tileX = tileY = 0 and a zoom level of 1, to no avail.
I've also tried downloading separate 2D images taken at 90-degree angles but when I go to display them on the inside of a cube, the images are misaligned.
There's a tool called UnrealJS that I've been looking into in order to grab the panorama data and save it off, but my inexperience with Node.js and server-side JS has made this a very confusing, fruitless endeavor. Other programs I've looked into that allow you to extract these panoramic images use canvas tags to request the maps API and then save what Google's API writes to the canvas into a buffer. Is this the way to go? UnrealJS does support a bastardized version of HTML that I may be able to use - this, however, is less than ideal.
Street View panoramas are divided into an equal grid of tiles cut from an equirectangular image. This article explains how to get the URL of every tile. At zoom level 5 (the highest resolution) there are 26 by 13 tiles, each 512x512. All that is left is to download every tile and draw each one onto a large empty image at its respective grid position, as sketched below.
Note: by doing this you will be breaking Google's terms of service
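Here is a rough sketch of that loop in Python using requests and Pillow. The tile URL pattern below is purely illustrative; substitute the real pattern from the article linked above:

    import io

    import requests
    from PIL import Image

    # Hypothetical tile endpoint; use the URL pattern from the article instead.
    TILE_URL = "https://example.com/streetview/tile?panoid={pano}&zoom={zoom}&x={x}&y={y}"

    def stitch_panorama(pano_id, zoom=5, cols=26, rows=13, tile_size=512):
        """Download every tile and paste it into one equirectangular image."""
        panorama = Image.new("RGB", (cols * tile_size, rows * tile_size))
        for y in range(rows):
            for x in range(cols):
                resp = requests.get(TILE_URL.format(pano=pano_id, zoom=zoom, x=x, y=y))
                resp.raise_for_status()
                tile = Image.open(io.BytesIO(resp.content))
                panorama.paste(tile, (x * tile_size, y * tile_size))
        return panorama

    stitch_panorama("YOUR_PANO_ID").save("panorama.jpg")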

How to visualize planes in Qt with C++

I have a dataset of points and my question is: how can I display them?
The points are part of a 3D model.
I have them grouped into planes; that is, for each plane I have all of the points that belong to it.
If I then join all the planes together, I get a 3D model like Hexagon3D.
I can view it with SketchUp; the file is in an XML format.
So I have the model data in a dataset in my code, and my Qt/C++ application can also write it out as a .xml file.
I would like to integrate a viewer into my application, but I don't know which technology would be best.
Could you suggest anything?
I have heard about OpenGL, WebGL, and other libraries such as CGAL.
Thanks

How to set classification colors in GDAL output files

I am using the GDAL C++ library to reclassify raster map images and then create an output image of the new data. However, when I create the new image and open it, the classification values don't seem to have a color defined, so I just get a black image. I can fix this by going into the image properties and setting a color for each of the 10 classification values I'm using, but that is extremely time consuming for the number of maps and trials I am doing.
My question is, is there a way to set metadata info through the GDAL API to define a color for each classification value? Just the name of the right function would be great, I can figure it out from there.
I have tried this using ArcGIS and QuantumGIS, and both have the same problem. Also the file type I am using is Erdas Imagine (called "HFA" in GDAL).
You can use the SetColorTable() method on your raster band. The easiest approach is to fetch the color table from an existing raster with GetColorTable() and pass it to your new raster.
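For reference, building a table from scratch looks roughly like this in GDAL's Python bindings (the C++ calls are analogous); the filename and RGBA colors are placeholders:

    from osgeo import gdal

    # Build a color table for the 10 classification values (colors are placeholders).
    ct = gdal.ColorTable()
    for value, color in enumerate([
            (0, 0, 0, 255), (31, 120, 180, 255), (178, 223, 138, 255),
            # ... one entry per classification value ...
            ]):
        ct.SetColorEntry(value, color)

    ds = gdal.Open("classified.img", gdal.GA_Update)  # Erdas Imagine / HFA file
    band = ds.GetRasterBand(1)
    band.SetRasterColorTable(ct)
    band.SetRasterColorInterpretation(gdal.GCI_PaletteIndex)
    ds = None  # close and flush to disk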

OpenGL animation

If I have a human body 3d model, that I want to animate walking, what is the best way to achieve this? Here are the possible ways I see this being implemented:
Create several models with the legs in different positions and then interpolate between these models.
Load the model into openGL, and somehow figure which vertices correspond to the legs and perform the appropriate transformations.
Implement a skeleton or armature (similar to this: blender animation wiki).
The technique you described in the first option is called morph target animation; it is often used for fine details such as facial animation, or for opening and closing hands.
The second option is procedural or physical animation, which works something like robotics: you give your character's body some forward velocity and calculate what the legs need to do to keep it from falling. You wouldn't do this directly on the vertices, though, but on a skeleton. See the next one.
The third option is skeletal animation, in which you animate a skeleton and the vertices follow it according to a set of rules. Attaching the vertices to the skeleton is called skinning.
I suggest that, after getting the hang of the OpenGL basics (viewing and positioning models in space, the camera, etc.), you start with skeletal animation.
You will need a rigged and animated model from your 3D application of choice. Then you can write an exporter to your own custom format, or pick an existing format you can read from your app. That file format should describe the model, the skeleton, the skinning, and the key frames. Then you read that data in your code to build the mesh and skeleton and animate over the key frames.
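To make that "set of rules" concrete: the standard rule is linear blend skinning, where each vertex is moved by a weighted blend of its influencing bones' transforms. A minimal numpy sketch (array names and shapes are illustrative):

    import numpy as np

    def skin_vertices(rest_verts, bone_matrices, weights, indices):
        """Linear blend skinning.

        rest_verts:    (N, 3) rest-pose vertex positions
        bone_matrices: (B, 4, 4) current bone transforms (rest-to-posed)
        weights:       (N, K) blend weights per vertex, each row summing to 1
        indices:       (N, K) bone index for each weight
        """
        # Homogeneous coordinates so 4x4 matrices apply translation too
        homo = np.hstack([rest_verts, np.ones((len(rest_verts), 1))])  # (N, 4)
        out = np.zeros_like(homo)
        for k in range(weights.shape[1]):
            mats = bone_matrices[indices[:, k]]  # (N, 4, 4) per-vertex bone matrix
            out += weights[:, k, None] * np.einsum('nij,nj->ni', mats, homo)
        return out[:, :3]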
If I were you, I'd download Blender from http://www.blender.org and work through some animation tutorials. For example, this one:
http://wiki.blender.org/index.php/Doc:Tutorials/Animation/BSoD/Character_Animation
Having done that, you can then export your model and animations using e.g. the Ogre exporter. I think this is the latest version, but check to make sure:
http://www.ogre3d.org/tikiwiki/Blender+Exporter&structure=Tools
From there, you just need to write the C++ code to load everything in, interpolate between keyframes, etc. I have code I can show you for this if you're interested.
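For the keyframe part, the core is just interpolating between the two surrounding keys of each animation channel. Sketched here in Python for brevity (rotations are normally stored as quaternions and interpolated with slerp instead of this straight lerp):

    import numpy as np

    def interpolate_keyframes(times, values, t):
        """Linearly interpolate a keyframed channel at time t.

        times:  sorted (K,) keyframe times
        values: (K, D) keyframe values (e.g. translations)
        """
        t = np.clip(t, times[0], times[-1])
        i = np.clip(np.searchsorted(times, t) - 1, 0, len(times) - 2)
        alpha = (t - times[i]) / (times[i + 1] - times[i])
        return (1 - alpha) * values[i] + alpha * values[i + 1]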