I would like to embed an activity graph created by hgactivity inside my hgweb web interface. What's the best way to do so?
Here's a screenshot of a hgactivity graph:
It shows the number of commits through time to a Mercurial repository.
The difficulty you'll have is where to put the chart so it can be served. If you're okay with having a standard view that everyone sees, you could use a cron job to run hg activity and save the image under a standard filename alongside the hgweb static files (the CSS, etc.). Then just tweak your hgweb template to include an img tag that references that image file. As long as your cron job overwrites the file periodically (daily? hourly?), you'll be good to go.
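As an illustration only (the repository path, schedule, and output location below are placeholders, not part of the original setup), a crontab entry along these lines would regenerate the image every hour:

# Hypothetical crontab entry: rebuild the activity graph hourly.
# /srv/hg/myrepo and the static path are placeholders for your own layout.
0 * * * * hg -R /srv/hg/myrepo activity --filename /usr/share/mercurial/templates/static/activity/myrepo.png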
If you need something more dynamic (user-specific queries, particular date ranges, etc.), you might want to look at (my) hg chart extension. It's not as full-featured as hg activity, but it does have the advantage of emitting Google Chart API URLs rather than image files. Example:
https://chart.apis.google.com/chart?cht=lxy&chs=400x400&chd=e:AAAKAaAjAtA6BHBQBaBkBtB3CACKCUChCqC0C9DHDRDaDkDuD3EBEOEXEhExE7FIFRFbFlFuF4GBGOGeGyG7HFHOHbHlHyIFIVIiIyI8JMJcJlJyJ8KcK2LGLWL8MQMwNDNTNgNqNzOAONOaOjOtO3PAPKPUPdPnPwP6QEQNQXQhQqQ0Q-RORXRnR0SBSLSUSeSrS0S-TITRTeTuT7UIUVUeUoU1VFVPVbVoVyV8WFWPWYWiWsW1W.XJXSXcXmXvX5YGYSYfYpYzY8ZGZTZcZpZzZ8aGaQaZajata6bDbNbWbgbwcDcQcacjc0c9dHdQdadkdtd3eBeNeae3fEfOfXfnf0gOgegug4hBhVhhhrh1h-iIiSibiliyjFjVjlj.kSkckpk1lClSlflvmDmMmWmfmpmznAnJnTncnmnwn5oDoNoWogoqo2pApKpTpdpnpwp9qHqUqdqnq3rArRrkr0r-sKsXshsqs0tLtbtkt0uEuRuou7vFvOvYvivrv4wFwPwfwowyw7xFxPxYxlxvx4yFyVyfypyyy8zGzPzZzizsz20D0M0W0g0p0z081J1T1d1m1w152J2g3Q3q3z4E4Q4g4t5B5K5U5k5u536B6R6r677E7R7h707-8O8b8l8x879F9S9b9o9y97-P-f-o-y-8.F.P.Y.i.s.1..,VnFsKVETK.eWNyCaLTTrSnBdN.MKMVTTHuL8SLLBAbENHZD.HrE8CEKSC1G1H9CiSeJiMb..ItFLFDmnDBIhMKCVFcDbFaCAOuNUEsBtepD3DuBTA6DfGjBoDdDLAuHpAVFWEjI5CYCzAtGWGqFTAhfrDFGxHbFVNZBjE7EBAbDjEaK2CjJXAnHeDpFyGhRSD2OWGJajC.KGHreDISCqGtKVHUCZKbFtCHhId8GrB2EpHRJqItR5A5OSSrOJHgDpKmBHA4D2C1BbE4KBHbCtFHKQW7QpQuKRJDMSEGfDDrDZAeB2VqEPGkHlFHJrHuFFJ-IcB5DQFaGZAaArATA4AJALDaBmCTCkCoAlEtAkEPHpCwE.ETGbFfC9BZJtMJBNBwBPCZHzA3CEAUEiCBBqPdcDIwLnPjFPH3B9S-GNFbDqDaOfdOKcGDKaHeK8IODGJdDXCUCdHADbBQDKCIB1DGAzDCWKLREaCGAFAeA7DEPCA0BZC5FSc0OTC9N7ANKGDGQMEPPfN.BSFHBwJeHiH-FvJlXxEuF1K-M0COEbHHDfB-FKA-TpaADISdHoXiMUMGETE2HnBFBqIYAVATAWA2F5DOEELxNmElS-EDBFFRBBHaEFAyE2AbI9SHDKDSDSFqBtCyFQFZFeBCHhAuCKAibPDlCjXXMRDYKXCq&chxt=y,x&chxl=1:%7c05/03/05%7c03/17/06%7c01/30/07%7c12/15/07%7c10/29/08&chxr=0,0,7166
which looks like:
Then there are no files to save or serve. You tweak your template to invoke a little code that runs hg chart, inserts the URL into the page's HTML, and lets Google create and serve the image.
I came up with the following setup:
Add a folder named activity to the template's static directory.
Add a changegroup hook called activity to the hgweb config:
[hooks]
changegroup.activity = hg activity --filename /usr/share/mercurial/templates/static/activity/${PWD##*/}.png
The ${PWD##*/} will be replaced by the folder name of the repository (a hook script runs in the root of the repository).
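For clarity, this is what that parameter expansion does in a shell (the value shown is illustrative):

# ${PWD##*/} strips the longest prefix matching "*/", leaving the last path component.
repo_path=/srv/hg/myrepo      # example value of $PWD inside the hook
echo "${repo_path##*/}"       # prints: myrepo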
Upon triggering (a push or pull of one or more changesets), an activity graph is placed in the static/activity folder of the (default) template folder.
Now you can add the following HTML to the template page of your choice:
<img src="{staticurl}/activity/{repo}.png"/>
This will load the most recent activity graph for the current repository.
Caveat:
You need at least one push after activation of this hook before the image is created.
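If you'd rather not wait for that first push, one workaround (a sketch only; the paths are placeholders) is to seed the image once by hand with the same command the hook runs:

# Run once from the root of the repository to create the initial graph.
cd /srv/hg/myrepo
hg activity --filename /usr/share/mercurial/templates/static/activity/myrepo.png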
I started a project that has this built in. You can see a demo at http://hg.python-works.com; it's Pylons-based and has activity graphs.
Related
By default, GitLab adds the issue ID from the branch name to the merge request description; see Merge requests to close issues:
Merge requests to close issues
To create a merge request to close an issue when it’s merged, you can either:
Add a note in the MR description.
In the issue, select Create a merge request. Then, you can either:
Create a new branch and a draft merge request in one action. The branch is named issuenumber-title by default, but you can choose any name, and GitLab verifies that it’s not already in use. The merge request inherits the milestone and labels of the issue, and is set to automatically close the issue when it is merged.
Create a new branch only, with its name starting with the issue number.
But I want to use a custom merge request description template; see Create a merge request template:
Create a merge request template
Similarly to issue templates, create a new Markdown (.md) file inside the .gitlab/merge_request_templates/ directory in your repository. Commit and push to your default branch.
Research
GitLab Flavored Markdown doesn't contain any markup for the issue ID from the branch name.
Markdown Style Guide for about.GitLab.com doesn't contain any markup for the issue ID from the branch name.
GitLab quick actions doesn't contain any action for the issue ID from the branch name.
Question
How can I add the issue ID from the branch name to the merge request description template?
I was looking at a similar problem; I see that Push Options can be a solution for this:
https://docs.gitlab.com/ee/user/project/push_options.html
for example:
merge_request.title="<title>" Set the title of the merge request.
merge_request.description="<description>" Set the description of the merge request.
It is true that this must be set up by the developer on the client side, but with git hooks and shell scripts, the content can be anything!
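As a sketch of that idea (the branch naming convention 123-fix-login and the remote name origin are assumptions), a small script could derive the issue ID from the branch name and pass it along as a push option:

# Derive the issue ID from a branch named like 123-fix-login (assumed convention).
BRANCH=$(git rev-parse --abbrev-ref HEAD)
ISSUE_ID=${BRANCH%%-*}
# Create the merge request on push and put "Closes #<id>" in its description.
git push -o merge_request.create \
         -o merge_request.description="Closes #${ISSUE_ID}" \
         origin "$BRANCH"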
This may be a basic question, but I cannot figure out the answer. I have a simple Postman collection that is run through Newman:
newman run testPostman.json -r htmlextra
That generates a nice dynamic HTML report of the test run.
How can I then share that with someone else, e.g. via email? The HTML report is just opened through a local URL, and I can't figure out how to save it so it stays in its dynamic state. Right-clicking and Save As .html saves the file, but you lose the ability to click around in it.
I realize that I can change the export path so it saves to some shared drive somewhere, but aside from that is there any other way?
It has already been saved to newman/ in the current working directory; there's no need to 'Save As' again. You can zip it and send it via email.
If you want to change the location of the generated report, check this.
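For example, if your version of the htmlextra reporter supports its export option (the output path below is a placeholder), you can write the report straight to a shared location:

# Assumed htmlextra export option; writes the report to the given path instead of newman/.
newman run testPostman.json -r htmlextra \
  --reporter-htmlextra-export /shared/reports/testPostman-report.html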
I trained an XGBoost model using AI Platform, as described here.
Now I have the option in the Console to download the model, as follows (but not to deploy it, since "Only models trained with built-in algorithms can be deployed from this page"). So, I click to download.
However, in the bucket the only file I see is a tar, as follows.
That tar (directory tree follows) holds only some training code, not a model.bst, model.pkl, model.joblib, or other such model file.
Where do I find model.bst or the like, which I can deploy?
EDIT:
Following the answer below, we see that the "Download model" button is misleading, as it sends us to the job directory, not the output directory (which is set arbitrarily in the code); the model is at census_data_20210527_215945/model.bst:
from google.cloud import storage
import datetime

# Upload the trained model file to a timestamped folder in the bucket.
bucket = storage.Client().bucket(BUCKET_ID)
blob = bucket.blob('{}/{}'.format(
    datetime.datetime.now().strftime('census_%Y%m%d_%H%M%S'),
    model))
blob.upload_from_filename(model)
Only built-in algorithms automatically store the model in Google Cloud Storage.
In your case, you have a custom training application, so you have to take care of saving the model yourself.
Referring to your example, this is implemented as shown here.
The model is uploaded to Google Cloud Storage using the cloud storage client.
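As a rough sketch of the follow-up steps (the bucket, model, version, and runtime values below are placeholders, not taken from your job), you can locate the saved file with gsutil and then deploy it manually, since AI Platform prediction expects a Cloud Storage directory containing a file named model.bst, model.pkl, or model.joblib:

# List the output folder written by the training code to confirm model.bst is there.
gsutil ls gs://YOUR_BUCKET/census_data_20210527_215945/
# Create a model resource and deploy the saved file as a version (all names are placeholders).
gcloud ai-platform models create census_xgb --regions=us-central1
gcloud ai-platform versions create v1 \
    --model=census_xgb \
    --origin=gs://YOUR_BUCKET/census_data_20210527_215945/ \
    --framework=xgboost \
    --runtime-version=2.5 \
    --python-version=3.7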
I have two separate servers: one is a CD server and one is a CM server. I upload images on the CM server and publish them. In the web database I can see the images under the Media Library item,
but they aren't displayed on the CD server (e.g. on the website); it indicates that the images were not found. Please help me figure out how I can solve this problem, or whether I need some configuration for that.
Many thanks.
Sitecore media items can carry the actual media file either as:
Blob in the database - everything works automatically OOB
Files on the file system - one needs to configure either WebDeploy, or DFS
Database resources are costly, you might not want to waste them on something that can be achieved by free tools.
Since WebDeploy by default locates modified files by comparing file hashes between source and target, it will become slower after a while.
You might have uploaded the image into the media library as a file. As such, the image is stored as a file on the file system. To verify this, your image item in the media library will have a path value set in its 'File Path' field. Such files have to be moved to the file system of the CD server as well.
If you upload your images in bulk, you can store them as blobs in the database by default, rather than as files on the file system, by using the following setting:
<setting name="Media.UploadAsFiles" value="false"/>
I've submitted a training job to the cloud using the RESTful API and see in the console logs that it completed successfully. In order to deploy the model and use it for predictions I have saved the final model using tf.train.Saver().save() (according to the how-to guide).
When running locally, I can find the graph files (export-* and export-*.meta) in the working directory. When running on the cloud, however, I don't know where they end up. The API doesn't seem to have a parameter for specifying this; it's not in the bucket with the trainer app, and I can't find any temporary buckets on Cloud Storage created by the job.
When you set up your Cloud ML environment you set up a bucket for this purpose. Have you looked in there?
https://cloud.google.com/ml/docs/how-tos/getting-set-up
Edit (for future record): As Robert mentioned in the comments, you'll want to pass the output location to the job as an argument. A couple of things to be mindful of (see the sketch after this list):
Use a unique output location per job, so one job doesn't clobber the outputs of another.
The recommendation is to specify the parent output path, and use it to contain the exported model in a subpath called 'model', as well as organizing other outputs like checkpoints and summaries within that path. That makes it easier to manage all the outputs.
While not required, I'll also suggest staging the training code in a packages subpath of the output, which helps correlate the source with the outputs it produces.
Finally(!), also keep in mind that when you use hyperparameter tuning, you'll need to append the trial ID to the output path for outputs produced by individual runs.
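A hedged sketch of that layout follows (the bucket, job name, package path, and the --output-path argument name are all assumptions, and the current gcloud command is used here only as an illustration of passing the location as a job argument):

# Submit a job with a unique, per-job output location passed as a user argument.
JOB_NAME=census_$(date +%Y%m%d_%H%M%S)
OUTPUT_PATH=gs://my-ml-bucket/jobs/$JOB_NAME
gcloud ai-platform jobs submit training "$JOB_NAME" \
    --region us-central1 \
    --staging-bucket gs://my-ml-bucket \
    --package-path trainer/ \
    --module-name trainer.task \
    -- \
    --output-path "$OUTPUT_PATH"
# Inside the trainer, write the exported model to $OUTPUT_PATH/model,
# checkpoints to $OUTPUT_PATH/checkpoints, summaries to $OUTPUT_PATH/summaries,
# and optionally stage the training package under $OUTPUT_PATH/packages;
# with hyperparameter tuning, append the trial id, e.g. $OUTPUT_PATH/<trial_id>/model.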