tensorboard embeddings show no data - tensorboard

I am trying to use the TensorBoard embeddings page to visualize the results of word2vec. After debugging and digging through lots of code, I have reached the point where TensorBoard runs successfully, reads the configuration file, and reads the TSV files, but the embeddings page still shows no data.
(The page opens, and I can see the menus, items, etc.) This is my config file:
embeddings {
tensor_name: 'word_embedding'
metadata_path: 'c:\data\metadata.tsv'
tensor_path: 'c:\data\tensors2.tsv'
}
What could the problem be?
The tensor file is originally 1 GB in size; if I use that file, the app crashes because it runs out of memory. So I copied and pasted one or two pages of the original file into tensors2.tsv and used that file instead. Maybe this is the problem; maybe I need to create more data by copy/paste.
Thanks,
tolga

Try the following code snippet to get a visualized word embedding in TensorBoard. Open TensorBoard with the logdir, then check localhost:6006 to view your embedding.
tensorboard --logdir="visual/1"
import gensim
import numpy as np
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector

fname = "word2vec_model_1000"
model = gensim.models.keyedvectors.KeyedVectors.load(fname)

# Project part of the vocabulary: at most max_words words, 100 dimensions each.
max_words = 1000
w2v = np.zeros((max_words, 100))
with open("prefix_metadata.tsv", 'w+') as file_metadata:
    for i, word in enumerate(model.wv.index2word[:max_words]):
        w2v[i] = model.wv[word]
        file_metadata.write(word + '\n')

# Define the model without training.
sess = tf.InteractiveSession()
with tf.device("/cpu:0"):
    embedding = tf.Variable(w2v, trainable=False, name='prefix_embedding')
tf.global_variables_initializer().run()

path = 'visual/1'
saver = tf.train.Saver()
writer = tf.summary.FileWriter(path, sess.graph)

# Add the embedding to the projector.
config = projector.ProjectorConfig()
embed = config.embeddings.add()
embed.tensor_name = 'prefix_embedding'
embed.metadata_path = 'prefix_metadata.tsv'
projector.visualize_embeddings(writer, config)
saver.save(sess, path + '/prefix_model.ckpt', global_step=max_words)
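After this runs, the visual/1 directory should contain a projector_config.pbtxt (written by projector.visualize_embeddings) alongside the saved checkpoint files; pointing tensorboard --logdir at that directory is what lets the embeddings tab find the data.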

Related

ABAQUS-python - writing ODB results to file

I have the following script to open an ABAQUS ODB file and get the displacements and coordinates of a specific node set. I can get these to print on screen, but I need help writing them to a file (.xlsx, .csv, .dat or .txt) for postprocessing. I'm new to scripting with Abaqus, so any help would be greatly appreciated. The code is currently as follows:
from odbAccess import *
from numpy import array

odb = openOdb(path='Test_3.odb')
lastFrame = odb.steps['Step-1'].frames[1]
displacement = lastFrame.fieldOutputs['U']
coords = lastFrame.fieldOutputs['COORD']
NodeSet_x = odb.rootAssembly.instances['CFRP_SKIN_TS-1'].nodeSets['NODE_SET_X_AXIS']
NodeSet_y = odb.rootAssembly.instances['CFRP_SKIN_TS-1'].nodeSets['NODE_SET_Y_AXIS']
centerDisplacement_x = displacement.getSubset(region=NodeSet_x)
NodeCoord_x = coords.getSubset(region=NodeSet_x)
centerDisplacement_y = displacement.getSubset(region=NodeSet_y)
NodeCoord_y = coords.getSubset(region=NodeSet_y)
for v in centerDisplacement_x.values:
    disp_out = v.nodeLabel, v.data[2]
    print(disp_out)
for c in NodeCoord_x.values:
    coord_out = c.nodeLabel, c.data[0], c.data[1], c.data[2]
    print(coord_out)
odb.close()
This is basic file reading and writing in Python. For more details on writing data to a text file, see the Python documentation on opening and closing files and on output formatting.
The simple lines of code below work for any number of node sets.
node_sets = ['NODE_SET_X_AXIS', 'NODE_SET_Y_AXIS']
for node_set in node_sets:
    # One output file per node set.
    fileName = '%s.dat' % node_set
    fout = open(fileName, 'w')
    nset = odb.rootAssembly.instances['CFRP_SKIN_TS-1'].nodeSets[node_set]
    field = odb.steps['Step-1'].frames[1].fieldOutputs['U'].getSubset(region=nset)
    for val in field.values:
        data = val.data
        node_label = val.nodeLabel
        node = odb.rootAssembly.instances['CFRP_SKIN_TS-1'].getNodeFromLabel(label=node_label)
        coords = node.coordinates
        # One row per node: label, x, y, z, then the displacement components.
        fout.write('%10d%14.4E%14.4E%14.4E%16.4E%16.4E%16.4E\n' % tuple([node_label] + list(coords) + list(data)))
    fout.close()
This code creates a separate text file for each node set.
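If you would rather have .csv output (one of the formats mentioned in the question), here is a minimal sketch of the same loop using Python's csv module; it assumes the same open odb object and the same instance and node-set names as the code above:
import csv

node_sets = ['NODE_SET_X_AXIS', 'NODE_SET_Y_AXIS']
for node_set in node_sets:
    with open('%s.csv' % node_set, 'w') as fout:
        writer = csv.writer(fout)
        # Header row: node label, coordinates, displacement components.
        writer.writerow(['node', 'x', 'y', 'z', 'ux', 'uy', 'uz'])
        nset = odb.rootAssembly.instances['CFRP_SKIN_TS-1'].nodeSets[node_set]
        field = odb.steps['Step-1'].frames[1].fieldOutputs['U'].getSubset(region=nset)
        for val in field.values:
            node = odb.rootAssembly.instances['CFRP_SKIN_TS-1'].getNodeFromLabel(label=val.nodeLabel)
            writer.writerow([val.nodeLabel] + list(node.coordinates) + list(val.data))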

rmarkdown::render loop stalls after a few iterations

I am trying to generate a large number of HTML files using rmarkdown::render in a loop, with parameterized reports.
After generating a number of files, it stalls and I have to restart RStudio. I can generate each individual file on its own, and it does not stall at the same file each time I run the loop.
There is no error message, which makes it hard for me to debug.
I have tried the following, none of which helped:
Closing all other programs and reducing the memory used.
Adding knitr::knit_meta(clean = TRUE) before render.
Adding clean = TRUE inside render.
Calling render with callr::r.
Including rm([[data]]); gc() at the end of the .Rmd file that is called by render.
Any other ideas of how to try and solve this issue?
This is taken from R Markdown: The Definitive Guide, where an example is used to render multiple files. Every file gets its own name based on its region and year parameters.
render_report = function(region, year) {
  rmarkdown::render(
    "MyDocument.Rmd", params = list(
      region = region,
      year = year
    ),
    output_file = paste0("Report-", region, "-", year, ".html")
  )
}
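Since you already tried callr::r, one way to combine it with this function is to run each render in a fresh child process, so any state or memory knitr leaks dies with the process after every file. A minimal sketch, with illustrative region and year values:
regions <- c("North", "South")
years <- c(2019, 2020)
for (region in regions) {
  for (year in years) {
    # Each call renders one report in a clean R session.
    callr::r(render_report, args = list(region = region, year = year))
  }
}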

Google Cloud Video Intelligence API in Python - Unable to run object tracking on multiple videos in a folder

I'm trying to run object tracking on a folder containing multiple videos. There are 5 videos in my bucket, and the documentation here suggests using the wildcard (*) operator. However, when I run the entire script, only 1 video gets annotated, not all 5 videos in the folder. Also, response2.json does not get created at the output_uri in my GCS bucket.
To identify multiple videos, a video URI may include wildcards in the object-id. Supported wildcards: '*' to match 0 or more characters; '?' to match 1 character.
https://googleapis.dev/python/videointelligence/latest/gapic/v1/types.html
This is what I've done in the input_uri part of the code:
gcs_uri = 'gs://video_intel/*'
If you check the screenshot, it shows the bucket ID name and multiple videos in the same folder.
Can anyone please help with this? Thanks.
Full script:
import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'poc-video-intelligence-da5d4d52cb97.json'

"""Object tracking in a video stored on GCS."""
from google.cloud import videointelligence

video_client = videointelligence.VideoIntelligenceServiceClient()
features = [videointelligence.enums.Feature.OBJECT_TRACKING]
gcs_uri = 'gs://video_intel/*'
output_uri = 'gs://video_intel/response2.json'
operation = video_client.annotate_video(input_uri=gcs_uri, features=features, output_uri=output_uri)
print("\nProcessing video for object annotations.")
result = operation.result(timeout=300)
print("\nFinished processing.\n")

# The first result is retrieved because a single video was processed.
object_annotations = result.annotation_results[0].object_annotations

for object_annotation in object_annotations:
    print("Entity description: {}".format(object_annotation.entity.description))
    if object_annotation.entity.entity_id:
        print("Entity id: {}".format(object_annotation.entity.entity_id))
    print(
        "Segment: {}s to {}s".format(
            object_annotation.segment.start_time_offset.seconds
            + object_annotation.segment.start_time_offset.nanos / 1e9,
            object_annotation.segment.end_time_offset.seconds
            + object_annotation.segment.end_time_offset.nanos / 1e9,
        )
    )
    print("Confidence: {}".format(object_annotation.confidence))

    # Here we print only the bounding box of the first frame in the segment.
    frame = object_annotation.frames[0]
    box = frame.normalized_bounding_box
    print(
        "Time offset of the first frame: {}s".format(
            frame.time_offset.seconds + frame.time_offset.nanos / 1e9
        )
    )
    print("Bounding box position:")
    print("\tleft  : {}".format(box.left))
    print("\ttop   : {}".format(box.top))
    print("\tright : {}".format(box.right))
    print("\tbottom: {}".format(box.bottom))
    print("\n")
Please modify gcs_uri = 'gs://video_intel/*' to gcs_uri = 'gs://video_intel/*.*'.
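Note that when the input_uri wildcard matches several videos, result.annotation_results should contain one entry per video, so the results loop would iterate over all of them instead of reading only index 0. A minimal sketch, assuming the same result object as in the script above:
for annotation_result in result.annotation_results:
    # Each entry corresponds to one matched input video.
    print("Results for {}:".format(annotation_result.input_uri))
    for object_annotation in annotation_result.object_annotations:
        print("Entity description: {}".format(object_annotation.entity.description))
        print("Confidence: {}".format(object_annotation.confidence))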

Use metadata in filename when saving harbor input Liquidsoap

So I have an instance of Liquidsoap, which I am using to stream to an Icecast server.
I'd like to record any live broadcasts that take place automatically, which I am now doing, and it is working well.
What I'd like to do is use the metadata (specifically the song name) of the live show when creating the MP3 archive.
#!/usr/local/bin/liquidsoap
set("log.file",true)
set("log.file.path","/var/log/liquidsoap/radiostation.log")
set("log.stdout",true)
set("log.level",3)
#-------------------------------------
set("harbor.bind_addr","0.0.0.0")
#-------------------------------------
backup_playlist = playlist("/home/radio/playlists/playlist.pls",conservative=true,reload_mode="watch")
output.dummy(fallible=true,backup_playlist)
#-------------------------------------
live_dj = input.harbor(id="live",port=9000,password="XXX", "live")
date = '%m-%d-%Y'
time = '%H:%M:%S'
output.file(%mp3, "/var/www/recorded-shows/#{Title} - Recorded On #{date} At #{time}.mp3", live_dj, fallible=true)
#time_stamp = '%m-%d-%Y, %H:%M:%S'
#output.file(%mp3, "/var/www/recorded-shows/live_dj_#{time_stamp}.mp3", live_dj, fallible=true)
#-------------------------------------
on_fail = single("/home/radio/fallback/Enei -The Moment Feat DRS.mp3")
#-------------------------------------
source = fallback(track_sensitive=false,
                  [live_dj, backup_playlist, on_fail])
# We output the stream to icecast
output.icecast(%mp3,id="icecast",
mount="myradio.mp3",
host="localhost", password="XXX",
icy_metadata="true",description="cool radio",
url="http://myradio.fm",
source)
I have added #{Title} where I would like my song title to appear, but sadly I am unable to get this to populate.
My DJs use BUTT, and the show title is sent as part of their connection, so the data should be available before recording.
Any advice is much appreciated!
This is far from being as easy as it seems.
The title metadata is dynamic, and thus not available as a variable at script initialization.
The filename argument of output.file is compiled when the script is initialized.
A solution consists of:
Defining a variable reference, title, to populate with live metadata
Outputting to a temporary file
Renaming the file on close, using the on_close argument of output.file (in this case, we can just prepend the title)
This gives the following code (on a Linux box; replace mv with ren on Windows):
date = '%m-%d-%Y'
time = '%H:%M:%S'

# Title is a reference
title = ref ""

# Populate it with live metadata
def get_title(m)
  title := m['title']
end
live_dj = on_metadata(get_title, live_dj)

# Rename the file on close
def on_close(filename)
  # Generate the new file name
  new_filename = "#{path.dirname(filename)}/#{!title} - #{basename(filename)}"
  # Rename the file
  system("mv '#{filename}' '#{new_filename}'")
end

output.file(on_close=on_close, %mp3, "/var/www/recorded-shows/Recorded On #{date} At #{time}.mp3", live_dj, fallible=true)
I tested a similar scenario and it works just fine. Just beware that this will create a new file every time a DJ disconnects or updates the title. Also keep in mind that the timestamps will be resolved by output.file.
This is based on the following example from a Liquidsoap developer: https://github.com/savonet/liquidsoap/issues/661#issuecomment-439935854

How to read each page of a Word document?

I know the doc.Save() function saves all pages in one HTML file.
The doc.RenderToScale() function saves each page to an independent image file.
But I want to read or save each page as an independent HTML file. I have no idea how to do this; can you help me?
You can use the following code sample to convert each page to HTML or any other format supported by Aspose.Words.
String srcDoc = Common.DATA_DIR + "src.docx";
String dstDoc = Common.DATA_DIR + "dst {PAGE_NO}.html";

Document doc = new Document(srcDoc);
LayoutCollector layoutCollector = new LayoutCollector(doc);
// This will build the layout model and collect the necessary information.
doc.updatePageLayout();

// Split the nodes in the document into separate pages.
DocumentPageSplitter splitter = new DocumentPageSplitter(layoutCollector);

// Save each page to disk as a separate document.
for (int page = 1; page <= doc.getPageCount(); page++)
{
    Document pageDoc = splitter.getDocumentOfPage(page);
    pageDoc.save(dstDoc.replace("{PAGE_NO}", page + ""));
}
It depends on 3 other classes, which you can find in this zip file.
I work with Aspose as a Developer Evangelist.