I'm currently working on a shapefile viewer in C++ and Qt, using the GDAL/OGR library. This is how I get the EPSG code of my shapefiles:
OGRLayer *layer = dataset->GetLayer(0);
OGRSpatialReference *spatialRef = layer->GetSpatialRef();
From this I read the EPSG number with:
atoi(spatialRef->GetAuthorityCode(NULL));
This works fine for all my shapefiles except one, for which the method always returns null.
I have tried:
spatialRef->GetAuthorityCode("PROJCS");
spatialRef->GetAuthorityCode("GEOGCS");
spatialRef->GetAuthorityName("GEOGCS");
All of these calls return "".
I checked this shapefile in a GIS program (QGIS), and QGIS auto-detects its EPSG as 25830.
My question is this: can the projection information be read with a different method than the one I'm using?
I look forward to your suggestions.
Thanks a lot.
EDIT
This is the content of the .prj file:
PROJCS["ETRS89_UTM_zone_30N",GEOGCS["GCS_ETRS_1989",DATUM["D_ETRS_1989",SPHEROID["GRS_1980",6378137,298.257222101]],PRIMEM["Greenwich",0],UNIT["Degree",0.017453292519943295]],PROJECTION["Transverse_Mercator"],PARAMETER["latitude_of_origin",0],PARAMETER["central_meridian",-3],PARAMETER["scale_factor",0.9996],PARAMETER["false_easting",500000],PARAMETER["false_northing",0],UNIT["Meter",1]]
Something like this should work:
OGRLayer *layer = dataset->GetLayer(0);
layer->ResetReading();
OGRFeature *feat = layer->GetNextFeature();            // release with OGRFeature::DestroyFeature() when done
OGRGeometry *geom = feat->GetGeometryRef();
OGRSpatialReference *spatRef = geom->getSpatialReference();
int EPSG = spatRef->GetEPSGGeogCS();                    // note: EPSG of the geographic CS (ETRS89 -> 4258), not of the projected CS
Hope it helps!
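A possible alternative, given that the .prj shown in the question is ESRI-style WKT without AUTHORITY nodes (which is why GetAuthorityCode() returns null): morph the definition to OGC WKT and ask OGR to identify the EPSG code. This is only a sketch and depends on your GDAL version; layer is assumed to be obtained as in the question.
#include "ogr_spatialref.h"

OGRSpatialReference *srs = layer->GetSpatialRef();
if (srs != nullptr)
{
    OGRSpatialReference candidate(*srs);                   // work on a copy; the layer owns the original
    candidate.morphFromESRI();                             // convert ESRI-style WKT to OGC WKT
    if (candidate.AutoIdentifyEPSG() == OGRERR_NONE)
    {
        int epsg = atoi(candidate.GetAuthorityCode(NULL)); // may yield 25830 (ETRS89 / UTM zone 30N)
    }
}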
I have the following script to open an Abaqus ODB file and get the displacements and coordinates of a specific node set. I can print these to the screen, but I need help writing them to a file (.xlsx, .csv, .dat or .txt) for post-processing. I'm new to scripting with Abaqus, so any help would be greatly appreciated. The code is currently as follows:
from odbAccess import *
from numpy import array
odb = openOdb(path='Test_3.odb')
lastFrame = odb.steps['Step-1'].frames[1]
displacement = lastFrame.fieldOutputs['U']
coords=lastFrame.fieldOutputs['COORD']
NodeSet_x = odb.rootAssembly.instances['CFRP_SKIN_TS-1'].nodeSets['NODE_SET_X_AXIS']
NodeSet_y = odb.rootAssembly.instances['CFRP_SKIN_TS-1'].nodeSets['NODE_SET_Y_AXIS']
centerDisplacement_x = displacement.getSubset(region=NodeSet_x)
NodeCoord_x = coords.getSubset(region=NodeSet_x)
centerDisplacement_y = displacement.getSubset(region=NodeSet_y)
NodeCoord_y = coords.getSubset(region=NodeSet_y)
for v in centerDisplacement_x.values:
    disp_out = v.nodeLabel, v.data[2]
    print (disp_out)
for c in NodeCoord_x.values:
    coord_out = c.nodeLabel, c.data[0], c.data[1], c.data[2]
    print (coord_out)
odb.close()
I think this is just a basic file read/write task, but anyway:
For more details on how to write data to a text file in Python, see the Python documentation on reading and writing files and on string formatting.
The simple lines of code below work for any number of node sets.
node_sets = ['NODE_SET_X_AXIS', 'NODE_SET_Y_AXIS']
for node_set in node_sets:
    fileName = '%s.dat' % node_set
    fout = open(fileName, 'w')
    nset = odb.rootAssembly.instances['CFRP_SKIN_TS-1'].nodeSets[node_set]
    field = odb.steps['Step-1'].frames[1].fieldOutputs['U'].getSubset(region=nset)
    for val in field.values:
        data = val.data
        node_label = val.nodeLabel
        node = odb.rootAssembly.instances['CFRP_SKIN_TS-1'].getNodeFromLabel(label=node_label)
        coords = node.coordinates
        # columns: node label, undeformed x, y, z, then the displacement components
        fout.write('%10d%14.4E%14.4E%14.4E%16.4E%16.4E%16.4E\n' % tuple([node_label] + list(coords) + list(data)))
    fout.close()
This code creates a separate text file for each node set.
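Note that node.coordinates above gives the undeformed nodal coordinates. If you specifically want the COORD field output requested in the question (the current coordinates), a sketch along the same lines, assuming the same instance, step and set names as above, could be:
frame = odb.steps['Step-1'].frames[1]
disp = frame.fieldOutputs['U']
coords = frame.fieldOutputs['COORD']
for node_set in ['NODE_SET_X_AXIS', 'NODE_SET_Y_AXIS']:
    nset = odb.rootAssembly.instances['CFRP_SKIN_TS-1'].nodeSets[node_set]
    u_vals = disp.getSubset(region=nset).values
    c_vals = coords.getSubset(region=nset).values
    fout = open('%s_coord_u.csv' % node_set, 'w')
    fout.write('node,x,y,z,ux,uy,uz\n')
    # the two subsets are assumed to come back in the same node order; match on nodeLabel if in doubt
    for u, c in zip(u_vals, c_vals):
        fout.write('%d,%E,%E,%E,%E,%E,%E\n' % tuple([u.nodeLabel] + list(c.data) + list(u.data)))
    fout.close()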
I want to use C++ to load a TensorFlow model, and I need to know the size of the model's input, which is a placeholder in the model.
I googled this problem, but all I found is this Stack Overflow link:
C++ equivalent of python: tf.Graph.get_tensor_by_name() in Tensorflow?
Although I can get the node, the TensorFlow documentation doesn't tell me how to access its size. Does anyone know something about this?
Thank you so much!
OK, after many attempts I have found a workaround. It may be tricky, but it works well.
First, we can get the placeholder node with the following code:
GraphDef mygd = graph_def.graph_def();
for (int i = 0; i < mygd.node_size(); i++)
{
    if (mygd.node(i).name() == input_name)
    {
        auto node = mygd.node(i);
    }
}
Then, through node_def.pb.h (tensorflow/core/framework/node_def.pb.h), we can get the AttrValue map with code like this:
auto attr = node.attr();
Then, through attr_value.cc (tensorflow/core/framework/attr_value.cc), we can get the shape attribute value with code like this:
tensorflow::AttrValue shape = attr["shape"];
The shape AttrValue is the structure used to store shape information. We can get the detailed information through the function SummarizeAttrValue in tensorflow/core/framework/attr_value_util.h:
string size_summary = SummarizeAttrValue(shape);
This gives the shape in string form, like:
[?,1024]
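If you need the dimensions as numbers rather than as a summary string, a sketch in the same spirit (assuming node is the placeholder NodeDef found above, and that the protobuf headers plus <vector> are included) reads the TensorShapeProto directly:
// Unknown dimensions (the '?' above, usually the batch size) are reported as -1.
const tensorflow::AttrValue &shape_attr = node.attr().at("shape");
const tensorflow::TensorShapeProto &shape_proto = shape_attr.shape();
std::vector<long long> dims;
for (int d = 0; d < shape_proto.dim_size(); ++d)
{
    dims.push_back(shape_proto.dim(d).size());   // e.g. {-1, 1024} for [?,1024]
}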
Hello :) I would like to import the data I marked in red.
After connecting my program to the DB, I executed this line:
m_strE37 = m_command.GetString(37);
Unfortunately, m_strE37 stores "1.6799999999999".
The relevant declarations are as follows:
CString m_strE37;
typedef CCommand<CDynamicStringAccessorW, CRowset> DbCommand;
DbCommand m_command;
I selected that record (row) and tried to get the value using GetString(37), since it is the 37th column.
I am quite new to this kind of DB processing.
Can anyone help me correctly get 1.68?
Thank you a lot in advance!
Try this snippet:
float f = atof(m_strE37);
m_strE37.Format("%3.2f",f);
For Unicode:
float f = atof(CStringA(m_strE37.GetString()));
m_strE37.Format(L"%3.2f",f);
I am trying to use the TensorBoard embeddings page to visualize the results of word2vec. After debugging and digging through a lot of code, I got to the point where TensorBoard runs successfully, reads the configuration file, and reads the TSV files, but the embeddings page does not show any data.
(The page opens, and I can see the menus, items, etc.) This is my config file:
embeddings {
  tensor_name: 'word_embedding'
  metadata_path: 'c:\data\metadata.tsv'
  tensor_path: 'c:\data\tensors2.tsv'
}
What could be the problem?
The tensor file is originally 1 GB in size; if I use that file, the app crashes because of memory. So I copied and pasted one or two pages of the original file into tensors2.tsv and used that file instead. Maybe this is the problem; maybe I need to create more data by copy/paste.
Thanks,
tolga
Try the following code snippet to get a visualized word embedding in TensorBoard. Open TensorBoard with the logdir, then check localhost:6006 to view your embedding.
tensorboard --logdir="visual/1"
# code
import gensim
import numpy as np
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

fname = "word2vec_model_1000"
model = gensim.models.keyedvectors.KeyedVectors.load(fname)

# project part of vocab, max of 100 dimension
max = 1000
w2v = np.zeros((max, 100))
with open("prefix_metadata.tsv", 'w+') as file_metadata:
    for i, word in enumerate(model.wv.index2word[:max]):
        w2v[i] = model.wv[word]
        file_metadata.write(word + '\n')

# define the model without training
sess = tf.InteractiveSession()
with tf.device("/cpu:0"):
    embedding = tf.Variable(w2v, trainable=False, name='prefix_embedding')

tf.global_variables_initializer().run()
path = 'visual/1'
saver = tf.train.Saver()
writer = tf.summary.FileWriter(path, sess.graph)

# adding into projector
config = projector.ProjectorConfig()
embed = config.embeddings.add()
embed.tensor_name = 'prefix_embedding'
embed.metadata_path = 'prefix_metadata.tsv'
projector.visualize_embeddings(writer, config)
saver.save(sess, path + '/prefix_model.ckpt', global_step=max)
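If you prefer to stay with the TSV-based setup from the question instead of a checkpoint, one way to avoid both the 1 GB memory problem and a hand-truncated tensor file is to write a properly truncated, consistent pair of files. A sketch, assuming the same gensim model as above and keeping the first 1000 words:
# write the first N vectors and their words, so tensors2.tsv stays consistent with metadata.tsv
N = 1000
tensor_file = open('tensors2.tsv', 'w')
meta_file = open('metadata.tsv', 'w')
for word in model.wv.index2word[:N]:
    vec = model.wv[word]
    tensor_file.write('\t'.join('%f' % x for x in vec) + '\n')   # one tab-separated row per vector
    meta_file.write(word + '\n')                                 # one label per line; no header for a single column
tensor_file.close()
meta_file.close()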
I need to convert a DICOM image to JPG/PNG and save it using VTK, but the image I produce does not match the original.
I know I need to rescale the image's pixel values to convert it, but I do not know how. Does anyone know how to do the conversion properly?
Below is my code in Python:
from vtk import *
reader = vtkDICOMImageReader()
reader.SetFileName('image.dcm')
reader.Update()
castFilter = vtkImageCast()
castFilter.SetOutputScalarTypeToUnsignedChar()
castFilter.SetInputConnection(reader.GetOutputPort())
castFilter.Update()
writer = vtkJPEGWriter()
writer.SetFileName('output.jpg')
writer.SetInputConnection(castFilter.GetOutputPort())
writer.Write()
DICOM images in the MRI and CT modalities are generally of a short (16-bit) scalar type, and you are casting the image to unsigned char mercilessly.
If you are trying to get a corresponding uchar image, you should be using vtkImageShiftScale, just like the vtkImageCast docs say:
Warning
As vtkImageCast only casts values without rescaling them, its use is not recommended. vtkImageShiftScale is the recommended way to change the type of an image data.
I managed to do the conversion; here is my code:
from vtk import vtkDICOMImageReader
from vtk import vtkImageShiftScale
from vtk import vtkPNGWriter
reader = vtkDICOMImageReader()
reader.SetFileName('image.dcm')
reader.Update()
image = reader.GetOutput()
shiftScaleFilter = vtkImageShiftScale()
shiftScaleFilter.SetOutputScalarTypeToUnsignedChar()
shiftScaleFilter.SetInputConnection(reader.GetOutputPort())
# map the full scalar range of the DICOM onto 0-255
shiftScaleFilter.SetShift(-1.0*image.GetScalarRange()[0])
oldRange = image.GetScalarRange()[1] - image.GetScalarRange()[0]
newRange = 255
shiftScaleFilter.SetScale(newRange/oldRange)
shiftScaleFilter.Update()
writer = vtkPNGWriter()
writer.SetFileName('output.png')
writer.SetInputConnection(shiftScaleFilter.GetOutputPort())
writer.Write()
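If the rescaled image still looks too dark or washed out, note that the min/max rescale above ignores the window/level a DICOM viewer would apply. A rough sketch of an alternative using vtkImageMapToWindowLevelColors follows; the window/level values here are hypothetical (the real ones live in DICOM tags 0028,1050 and 0028,1051, which vtkDICOMImageReader does not expose), and with no lookup table set the filter is assumed to produce unsigned char output suitable for the PNG writer:
from vtk import vtkImageMapToWindowLevelColors

window, level = 400.0, 40.0    # hypothetical values; adjust for your data
wl = vtkImageMapToWindowLevelColors()
wl.SetInputConnection(reader.GetOutputPort())
wl.SetWindow(window)
wl.SetLevel(level)
wl.Update()
writer = vtkPNGWriter()
writer.SetFileName('output_wl.png')
writer.SetInputConnection(wl.GetOutputPort())
writer.Write()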