I want to run multiple Graph Engine instances on the same machine by listening on different ports.
Please take a look here: https://www.graphengine.io/docs/manual/Config/config-v2.html#server
When defining multiple servers on the same machine, you can put the assemblies into different folders. In the config file, you can then identify each of them with a combination of address and assembly path.
There are two binding rules for an instance:
The Endpoint property matches one of the network interfaces of the machine on which the Graph Engine instance is running.
If AssemblyPath is specified, it matches the directory where the running Graph Engine instance resides.
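As a rough sketch (based on the linked config 2.0 documentation; the element and attribute names here are assumptions to be double-checked against it), two instances on one machine could be declared along these lines, with each instance's assemblies deployed to its own folder:

<Trinity ConfigVersion="2.0">
  <Cluster>
    <!-- Instance A: listens on port 5304, binaries deployed to C:\GE\InstanceA -->
    <Server Endpoint="192.168.0.10:5304" AssemblyPath="C:\GE\InstanceA\" />
    <!-- Instance B: same machine, different port, binaries in C:\GE\InstanceB -->
    <Server Endpoint="192.168.0.10:5305" AssemblyPath="C:\GE\InstanceB\" />
  </Cluster>
</Trinity>

Each running instance then binds to the Server entry whose Endpoint matches one of its network interfaces and whose AssemblyPath matches the directory it was started from.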
Also please check the version of your GraphEngine.Core package. Only the latest package supports the new configuration file format.
I'll give a bit of background on the setup we have and why. Currently my friend and I want to collaborate on an Unreal Engine project. To do this I've set up an Amazon Lightsail instance running Windows Server. I've then installed Perforce onto this server and added two users. Both of us are able to connect to this server from our local machines (great, I thought!). Our goal was to attach two 'virtual' disks of 32 GB to this server via Lightsail's storage option. I've formatted these disks and they are detected as disks D and E on the server. Our goal was to have two depots, one on disk E and one on disk D, the reason being that the C disk is only 20 GB (12 GB free after Windows).
I have tried multiple things (not got much hair left after this) to map the created depots to each disk, but have had little success and need your wisdom!
I've followed the process indicated in this support guide (https://community.perforce.com/s/article/2559) via CMD, and I've also changed the depot storage location in P4Admin on the server (via RDP) to the virtual disks D and E respectively.
An example change is from //UE_WIP/... to D:/UE_WIP/... (I have created folders UE_WIP and UE_LIVE on each disk).
When I open up P4V on my local machine I'm able to happily connect (as per the screenshot) and set up my workspace on my local machine (it detects both depots). This is where we're getting stuck. I then open a new Unreal Engine project, save it to the following local directory, E:/DELETE/Perforce/Test/, and open up source control (see image 04). This is great; it detects the workspace and everything connects to the server.
When I click submit to source control I get 'Failed Checking Source Control'. When I try adding via P4V, manually marking the new content folder for add, I get 'file(s) not in client view'.
All we want is the ability to send an Unreal Engine project up to either the WIP depot or the LIVE depot. To resolve this, do we need:
Two different workspaces (one set up for LIVE and one for WIP)?
Do we need to add some local folders to our directory? E:/DELETE/Perforce/UE_WIP & E:/DELETE/Perforce/UE_LIVE?
Do we need to tweak something on the Perforce Server?
Do we need to tweak something in Unreal Engine?
Any and all help would be massively appreciated.
Best,
Ben
https://imgur.com/a/aaMPTvI - Image gallery of issues
Your screenshots don't show how (or if?) you set up your local workspace (i.e. the thing that tells Perforce where the files are on your local workstation).
See: https://www.perforce.com/perforce/r13.1/manuals/p4v/Defining_a_client_view.html
The Perforce server acts as a layer of abstraction between the backend storage (i.e. the depots you've set up) and the client machines where you actually do your work. The location of the depot files doesn't matter at all to the client (any more than, say, the backend filesystem of a web server matters to your web browser); all that matters is how you set up the workspace, which is a simple matter of "here's where my local files are" (the Root) and "here's how my local paths map to depot paths" (the View).
You get the "file not in view" error if you try to add a local file to the depot and it's not in the View you've defined. The fix there is generally to simply fix the Root and/or View to accurately describe where your local files are. One View can easily map to multiple depots (as long as they're on a single server).
(edit)
Specifically, in your case, all of the files you're trying to add are under the path:
E:\DELETE\Perforce\Test\Saved\...
Since you've set up your workspace as:
Client: bsmith
Root: E:\DELETE\Perforce\bsmith
View:
//WIP/... //bsmith/WIP/...
//LIVE/... //bsmith/LIVE/...
then your bsmith workspace consists of these two local paths:
E:\DELETE\Perforce\bsmith\WIP\...
E:\DELETE\Perforce\bsmith\LIVE\...
The files you're trying to add aren't even under your Root, much less under either of the View mappings. That's what the "not in client view" error messages mean.
If you want to add the files where they are, modify your Root and View so that you define your workspace as being where your files are; if you want to have the files in one of the local directories above that you've already defined as being where your workspace lives, you'll have to move them there. If you put your files in bsmith\WIP, then when you add them they'll go to the WIP depot; if you put them in bsmith\LIVE, then they'll go to the LIVE depot, per your View.
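For example (just a sketch reusing your existing client and depot names), changing only the Root like this:

Client: bsmith
Root:   E:\DELETE\Perforce\Test
View:
    //WIP/...  //bsmith/WIP/...
    //LIVE/... //bsmith/LIVE/...

would make your workspace consist of E:\DELETE\Perforce\Test\WIP\... and E:\DELETE\Perforce\Test\LIVE\..., so saving the Unreal project under Test\WIP would send it to the WIP depot, and saving it under Test\LIVE would send it to the LIVE depot.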
Either way, once they're in your workspace, you can add them to the depot. Simple as that!
I need to move more than 50 compute instances from a Google Cloud project to another one, and I was wondering if there's some tool that can take care of this.
Ideally, the needed steps could be the following (I'm omitting regions and zones for the sake of simplicity):
Get all instances in source project
For each instance get machine sizing and the list of attached disks
For each disk create a disk-image
Create a new instance, with the same machine sizing, in the target project, using the first disk-image as source
Attach remaining disk-images to new instance (in the same order they were created)
I've been looking at both Terraform and Ansible, but I have the feeling that neither of them supports creating disk images, meaning that I could only use them for the last two steps.
I'd like to avoid writing a shell script because it doesn't seem like a robust option, but I can't find tools that can help me do the whole process either.
Just as a side note, I'm doing this because I need to change the subnet for all my machines, and it seems you can't do that on already-created machines; you need to clone them to change the network.
There is no tool provided by GCP to migrate instances from one project to another.
I was able to find, however, an Ansible module to create images.
In Ansible:
You can specify the “source_disk” when creating a “gcp_compute_image” as mentioned here
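Something along these lines (a rough sketch, not tested; the project, credentials, and disk reference are placeholders you'd fill in, and the exact parameters should be checked against the module docs):

- name: Create an image from an instance's disk
  gcp_compute_image:
    name: my-instance-disk-0-image
    source_disk: "{{ my_disk }}"   # e.g. the result of a registered gcp_compute_disk task
    project: my-source-project
    auth_kind: serviceaccount
    service_account_file: /path/to/sa-key.json
    state: present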
Frederic
I am just looking into using GCP for cloud computing. So far I have been using AWS and the boto3 library, and I was trying to use the Google Python client API for launching instances.
So an example I came across was from their docs here. The instance machine type is specified as:
machine_type = "zones/%s/machineTypes/n1-standard-1" % zone
and then it is passed to the configuration as:
config = {
'name': name,
'machineType': machine_type,
....
I wonder how one goes about specifying machines with GPUs and custom RAM, processors, etc. from the Python API?
The Python API is basically a wrapper around the REST API, so in the example code you are using, the config object is being built using the same schema as would be passed in the insert request.
Reading that document shows that the guestAccelerators structure is the relevant one for GPUs.
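For example, something like this could be added to the config dictionary from the question (a sketch, not tested; the accelerator type name is a placeholder, so check which types are available in your zone):

config = {
    'name': name,
    'machineType': machine_type,
    # Request one GPU. The accelerator type here is a placeholder; list the
    # types available in your zone via the acceleratorTypes API or gcloud.
    'guestAccelerators': [{
        'acceleratorType': 'zones/%s/acceleratorTypes/nvidia-tesla-t4' % zone,
        'acceleratorCount': 1,
    }],
    # GPU instances cannot live-migrate, so host maintenance must terminate them.
    'scheduling': {
        'onHostMaintenance': 'TERMINATE',
        'automaticRestart': True,
    },
    # ... disks, networkInterfaces, etc. as in the original example
}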
Custom RAM and CPUs are more interesting. There is a format for specifying a custom machine type name (you can see it in the gcloud documentation for creating a machine type). The format is:
[GENERATION]custom-[NUMBER_OF_CPUs]-[RAM_IN_MB]
Generation refers to the "n1" or "n2" in the predefined names. For n1, this block is empty, for n2, the prefix is "n2-". That said, experimenting with gcloud seems to indicate that "n1-" as a prefix also works as you would expect.
So, for a 1-CPU n1 machine with 5 GB of RAM, it would be custom-1-5120. This is what you would replace the n1-standard-1 in your example with.
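In the snippet from the question, that would look like:

machine_type = "zones/%s/machineTypes/custom-1-5120" % zone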
You are, of course, subject to the limits on how a custom machine can be specified, such as the fact that RAM must be a multiple of 256 MB.
Finally, there's a neat little feature at the bottom of the console "create instance" page:
Clicking on the relevant link will show you the exact REST object you need to create the machine you have defined in the console at that very moment, so it can be very useful to see how a particular parameter is used.
You can create a Compute Engine instance using the Compute Engine API. Specifically, we can use the insert API request. This accepts a JSON payload in a REST request that describes the desired VM instance. A full specification of the request is found in the docs. It includes:
machineType - specs of different (common) machines including CPUs and memory
disks - specs of disks to be added including size and type
guestAccelerators - specs for GPUs to add
many more options ...
You can also create a template describing the machine structure you want, and simplify the creation of an instance by naming the template to use, thereby abstracting the configuration details out of code and into configuration.
Beyond using REST requests (which can be issued from Python), you also have the capability to create Compute Engine instances from:
GCP Console - web interface
gcloud - command line (which I suspect can also be driven from within Python)
Deployment Manager - configuration driven deployment which includes Python as a template language
Terraform - popular environment for creating Infrastructure as Code environments
I am building out a Sitecore farm with multiple Content Delivery servers. In the current process, I stand up the CD server and go through the manual steps of commenting out connection strings and enabling or disabling config files, as detailed here, for each virtual machine/CD server:
https://doc.sitecore.net/Sitecore%20Experience%20Platform/xDB%20configuration/Configure%20a%20content%20delivery%20server
But since I have multiple servers, is there any sort of global configuration file where I could dictate the settings I want (essentially a settings template for CD servers), or a tool where I could load my desired settings/template for which config files are enabled/disabled, etc.? I have used the SIM tool for instance installation, but I'm unsure if it offers the loading of a pre-determined "template" for a CD server.
It just seems inefficient to have to stand up a server and then configure each one manually versus a more automated process (e.g. akin to Sitecore Azure, but in this case I need to install the VMs on-prem).
There's nothing directly in Sitecore to achieve what you want. Depending on what tools you are using, though, there are some options to reach that goal.
Visual Studio / Build Server
You can make use of SlowCheetah config transforms to configure non-web.config files such as ConnectionStrings and AppSettings. You will need a different build profile for each environment you wish to create a build for, and add the appropriate config transforms and overrides. SlowCheetah is available as a NuGet package to add to your projects, and also as a Visual Studio plugin which provides additional tooling to help add the transforms.
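For instance, a transform for a CD build profile might remove the master connection string (a sketch; the file follows whatever naming your build profiles use, e.g. ConnectionStrings.CD.config):

<connectionStrings xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <!-- CD servers don't need the master connection string -->
  <add name="master" xdt:Transform="Remove" xdt:Locator="Match(name)" />
</connectionStrings>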
Continuous Deployment
If you are using a continuous deployment tool like Octopus Deploy then you can substitute variables in files on a per environment and machine role basis (e.g. CM vs CD). You also have the ability to write custom PowerShell steps to modify/transform/delete files as required. Since this can also run on a machine role basis you can write a step to remove unnecessary connection strings (master, reporting, tracking.history) on CD environments as well as delete the other files specified in the Sitecore Configuration Guide.
Sitecore Config Overrides
Anything within the <sitecore> node in web.config can be modified and patched using the Include File Patching facilities built into Sitecore. If you have certain settings which need to be modified or deleted for a CD environment, you can create a CD-specific override, which I place in /website/App_Config/Include/z.ProjectName/WebCD and use a post-deployment PowerShell script in Octopus Deploy to delete this folder on CM environments. There are examples of patches within the Include folder, such as SwitchToMaster.config. In theory you could write a patch file to remove all the config sections mentioned in the deployment guide, but it would be easier to write a PowerShell step to delete these instead.
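As a small illustration (a hypothetical patch, not one from the guide), a file dropped into that WebCD folder can override or delete individual elements like this:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <settings>
      <!-- hypothetical CD-only value for an existing setting -->
      <setting name="MySite.SomeSetting">
        <patch:attribute name="value">cd-specific-value</patch:attribute>
      </setting>
    </settings>
  </sitecore>
</configuration>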
I tend to use all the above to aid in deploying to various environments for different server roles (CM vs CD).
Strongly recommend you take a look at Desired State Configuration, which will do exactly what you're talking about. You need to set up the actual configuration at least once, of course, but then it can be deployed to as many machines as you'd like. Changes to the config automatically flow to all machines built from it, and any changes made directly to the machines (referred to as configuration drift) are automatically corrected. This can be combined with Azure, which now has the capability to act as a "pull server" through its Automation features.
There's a lot of reading to do to get up to speed with this feature-set but it will solve your problem.
This is not a Sitecore tool per se.
I'm interested in having my mesos-slave instances inherit attributes from the tags of the EC2 instance each slave is running on. After some searching, I don't think such a setup exists. I would like to write one and contribute it back to the community.
Our slaves are running Ubuntu and we're using the mesos packages from the mesosphere repo. This creates a beautiful mesos-init-wrapper that allows mesos configuration (command line arguments) to be represented as files in /etc/mesos-slave/ or /etc/mesos/. I want to write a script which will:
Use the ec2 API to get the instance tags (see here)
Generate corresponding files in /etc/mesos/attributes/
Run this script at an early run-level (rough sketch below)
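Roughly, I'm picturing something like the following (just a sketch; the attributes directory and the one-file-per-tag layout are assumptions based on how mesos-init-wrapper reads its config directories):

import os
import boto3
import requests

# Instance ID and region come from the EC2 instance metadata service.
instance_id = requests.get(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2).text
az = requests.get(
    "http://169.254.169.254/latest/meta-data/placement/availability-zone", timeout=2).text
region = az[:-1]  # strip the trailing zone letter, e.g. "us-east-1a" -> "us-east-1"

# Look up the tags attached to this instance.
ec2 = boto3.client("ec2", region_name=region)
tags = ec2.describe_tags(
    Filters=[{"Name": "resource-id", "Values": [instance_id]}])["Tags"]

# Write one file per tag: the file name is the attribute key, the content is the value.
attr_dir = "/etc/mesos/attributes"  # assumed location, per the directories mentioned above
os.makedirs(attr_dir, exist_ok=True)
for tag in tags:
    with open(os.path.join(attr_dir, tag["Key"]), "w") as f:
        f.write(tag["Value"])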
Mesos community folks: is this the right way to go? Is it reasonable to build an implementation that is tied to mesos-init-wrapper?
Thanks!
Advait
mesos-aws-tags is now available here: https://github.com/goguardian/mesos-aws-tags