I am trying to explore model tuning through the TensorBoard Profile tab, and I generated the log files with the TensorBoard callback as shown below.
log_dir="logs/profile/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir,
histogram_freq=1, profile_batch = 3)
model.fit(train_data,
steps_per_epoch=20,
epochs=10,
callbacks=[tensorboard_callback])
It generated the following files in my Colab session. I then downloaded these files to my local PC to view them in TensorBoard, but nothing is displayed in the Profile tab; all the other tabs show information.
logs/profile/
logs/profile/20190907-130136/
logs/profile/20190907-130136/train/
logs/profile/20190907-130136/train/events.out.tfevents.1567861315.340ae5d21d3b.profile-empty
logs/profile/20190907-130136/train/events.out.tfevents.1567861301.340ae5d21d3b.119.129998.v2
logs/profile/20190907-130136/train/plugins/
logs/profile/20190907-130136/train/plugins/profile/
logs/profile/20190907-130136/train/plugins/profile/2019-09-07_13-01-55/
logs/profile/20190907-130136/train/plugins/profile/2019-09-07_13-01-55/local.trace
The script is located at https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/r2/tensorboard_profiling_keras.ipynb
I wanted to attach the files, but there is no option to attach files here. Can anyone please help me understand why the profile info from this script is not displayed in the Profile tab of TensorBoard on my local PC?
Apparently you need to use Chrome to view the profile information: https://github.com/tensorflow/tensorboard/issues/2874
I think you must install the TensorBoard profiler plugin (tensorboard-plugin-profile), following its installation guide.
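If that is the issue, something like the following should be enough (a minimal sketch, assuming a local pip-managed TensorBoard and the log directory from the question):
pip install -U tensorboard_plugin_profile
tensorboard --logdir logs/profile
# then open TensorBoard in Chrome and select the Profile tab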
I have created a Pub/Sub-triggered Cloud Function in the Console, and I want to upload a folder with my project through the Console (not the terminal) every time I have an update.
I use Python.
The docs say I should find a button to upload a ZIP, but there is nothing like that:
https://cloud.google.com/functions/docs/deploying/console
My questions are:
How do I upload my project from the Console? I can see the default source code in the Console.
Do I need to name my entry file main.py or index.py?
Do I need to set up a requirements.txt file myself? I can't see one in my project on my machine.
You have to click the 'Edit' button to edit the function; then, in the 'Source' tab, to the left of the source there is a drop-down where you can choose "Upload Zip".
Doing this in the terminal seems to be easier:
gcloud functions deploy FUNCTION_NAME
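For a Python Pub/Sub function, the full command would look roughly like the sketch below (the function, topic, and entry-point names are placeholders). Note that for Python the entry file must be named main.py, and dependencies go in a requirements.txt file next to it:
gcloud functions deploy my_function \
    --runtime python37 \
    --trigger-topic my_topic \
    --entry-point handle_message \
    --source .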
I've written code that works for pushing images with Appium locally on both Android and iOS devices.
The images are in the Appium project's /src/main/resources/images folder:
String basePath = System.getProperty("user.dir");
String imagePath = "/src/main/resources/images/testImage.jpeg";
File imageToPush = new File(basePath + imagePath);
The problem is that when this code runs on AWS I cannot find the images (and don't know how or where to find them).
I've tried multiple ways to construct the basePath, but so far with no success.
According to the Appium push file documentation, you can specify the directory on the device where the files should be placed.
You can try pushing the file with the Android SD card directory or the iOS application data folder as the destination path; the files will then be accessible from those locations on the device.
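With the Java client, the push itself might look like the sketch below; the /sdcard/Download destination is just an Android example (use an app-sandbox path on iOS), and the driver is assumed to be an already-configured session:
import io.appium.java_client.android.AndroidDriver;
import java.io.File;
import java.io.IOException;

public class ImagePusher {
    // Minimal sketch: push a bundled test image to the device so it can be found at a known path.
    public static void pushTestImage(AndroidDriver driver) throws IOException {
        File imageToPush = new File(System.getProperty("user.dir")
                + "/src/main/resources/images/testImage.jpeg");
        driver.pushFile("/sdcard/Download/testImage.jpeg", imageToPush);
    }
}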
Alternatively, you can use the Add Extra Data feature to supply the images as a zip file when you schedule a run in the console, or provide an extraDataPackageArn in the schedule-run CLI command. The extraDataPackageArn is generated by the create-upload CLI command, so zip up the images you want to push to the device and run create-upload on that archive to obtain the ARN; a sketch of this flow follows the FAQ quote below.
You can then access the extra data in the Android SD card or the iOS application data folder, according to https://aws.amazon.com/device-farm/faqs/:
Q: I’d like to supply media or other data for my app to consume. How do I do that?
You can provide a .zip archive up to 4 GB in size. On Android, it will be extracted to the root of external memory; on iOS, to your app’s sandbox.
For Android expansion files (OBB), we will automatically place the file into the location appropriate to the OS version.
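A rough sketch of that CLI flow (all ARNs, file names, and test parameters below are placeholders; check the aws devicefarm CLI reference for the exact arguments your run needs):
# 1. Register the zip as extra data and note the upload ARN and pre-signed URL it returns
aws devicefarm create-upload \
    --project-arn <project ARN> \
    --name images.zip --type EXTERNAL_DATA
# 2. Upload the actual file to the pre-signed URL
curl -T images.zip "<upload URL from the previous step>"
# 3. Reference the upload ARN when scheduling the run
aws devicefarm schedule-run \
    --project-arn <project ARN> --app-arn <app ARN> --device-pool-arn <device pool ARN> \
    --test type=APPIUM_JAVA_TESTNG,testPackageArn=<test package ARN> \
    --configuration extraDataPackageArn=<extra-data upload ARN>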
I just created a model on ML Engine with:
gcloud ml-engine models create test_model --enable-logging
I went into the GUI and created a version. I'm hitting this model for predictions but where do I go in the GUI to find the logs for online predictions?
Thanks!
The logs can be found in StackDriver Logging:
Go to https://console.cloud.google.com
Click on the "hamburger" icon in the top left
Find the "Logging" option under "STACKDRIVER"
Click on "Logs" (you can get directly here with a link similar to: https://console.cloud.google.com/logs/viewer?project=my_project, just subsituate your actual project name)
Locate the drop down menu that allows you to select your logs.
Hover over "Cloud ML Model Version
Either click the model you're interested in or hover over it, if you want to select a specific version
(Optional) If selecting a specific version, click on it.
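If you prefer the command line, something like the sketch below should return the same entries with gcloud; the resource type and label names here are my assumption, so if they don't match, pick "Cloud ML Model Version" in the viewer and copy the exact filter it shows:
gcloud logging read \
    'resource.type="ml_model" AND resource.labels.model_id="test_model"' \
    --project my_project --limit 20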
That said, I'll file a feature request to have a link in a more convenient place alongside the model and/or version.
I finally managed to install Oracle APEX 5.1.2, but I have a problem with creating a workspace. Whenever I try to do so, at the end I get an error:
I tried to create the workspace with the following values:
The strange thing is that when I choose Yes for the Reuse Existing Schema option, no schemas are listed. Is it possible that APEX somehow doesn't have access to manage schemas?
I am using APEX with ORDS. On the home page I am told that I have 1 workspace and 1 schema.
I've tried:
Using strong passwords, as mentioned here
Changing the provisioning type to Request: the effect is the same. If a user requests a workspace and I accept it, I get the exact same error.
Enabling OMF with the parameter DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata' -> no *.dbf files are created in that directory, either before or after the change.
The root cause of this problem was that APEX ended up installed both in CDB$ROOT and, as a result, in PDB1. I uninstalled APEX from the root container, recompiled invalid objects with the @utlrp.sql script as in this tutorial, and installed APEX again, only in PDB1. The workspace was then created successfully.
I had the same problem (APEX 18.1 / ORDS) in a database without a CDB configured. The solution in my case was to run the @apex_rest_config.sql script.
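Roughly, that script is run as SYS from the directory where the APEX distribution was unzipped (the path below is a placeholder):
$ cd /path/to/apex          # directory containing apex_rest_config.sql
$ sqlplus sys as sysdba
SQL> @apex_rest_config.sql   -- prompts for the APEX_LISTENER and APEX_REST_PUBLIC_USER passwords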
After that, the workspace was created without any problem.
If you don't want to reinstall apex to move it from the CDB to the PDB I suggest you try setting PDB mapping in your ords config file.
https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/20.2/aelig/configuring-REST-data-services.html#GUID-694B2F89-CE4F-4AB0-88E2-EB35D03DEC3C
I did it by adding
<entry key="db.serviceNameSuffix"></entry>
to the end of my defaults.xml (you can find its location by running
$ java -jar ords.war configdir ).
Then access apex with /yourpdb in the path: e.g.
http://server:port/ords/pdb1
This will run APEX from that PDB instead of from the CDB and will create the workspace there; that should work OK. It did for me.
I had the same problem on Oracle 12c, and following this link solved it. The problem is that users can't create a workspace in the CDB, so you must switch the session container to your PDB with the following steps:
$root> cd ~/TEMP/apex
$root> sqlplus
Enter user-name: sys as sysdba
Enter password:
SQL> exec dbms_xdb.sethttpport(0);             /* free the XDB HTTP port in the root container */
SQL> alter session set container=YOURAPPEXPDB; /* switch to the PDB that hosts APEX */
SQL> exec dbms_xdb.sethttpport(8181);          /* enable the XDB HTTP port inside the PDB */
SQL> alter system register;                    /* re-register the services with the listener */
-- then install Oracle APEX again
To remove Oracle APEX I used this link; it worked perfectly for me.
I'm currently putting together a rudimentary automated build process using Node for our iOS apps. I'd like to programmatically extract provisioning profile UUIDs from Appcelerator's 'ti info' CLI command.
Something along the lines of this fictional idea:
var my_app_profile_ID = output_from.ti.info('com.mydomain.myapp', 'distribution');
...which would consult 'ti info' and return the distribution provisioning profile ID (or adhoc if specified) for the given app ID.
Does such a thing exist / can someone suggest a way to achieve this please?
You can get the output of ti info as JSON:
$ ti info -t ios -o json
This should get you the info to determine the provisioning profile.
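A minimal Node sketch of that idea. The ios.provisioning.distribution / adhoc keys and the uuid / appId fields are assumptions about the JSON layout, so dump the output once and adjust the property paths to what your ti version actually returns:
const { execSync } = require('child_process');

// type is 'distribution' or 'adhoc'; appId is e.g. 'com.mydomain.myapp'
function getProfileUUID(appId, type) {
  const raw = execSync('ti info -t ios -o json', { encoding: 'utf8', maxBuffer: 10 * 1024 * 1024 });
  const info = JSON.parse(raw);
  const profiles = (info.ios && info.ios.provisioning && info.ios.provisioning[type]) || [];
  // Exact match on the app id; wildcard ('*') profiles would need extra handling.
  const match = profiles.find(p => p.appId === appId);
  return match ? match.uuid : null;
}

console.log(getProfileUUID('com.mydomain.myapp', 'distribution'));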