There is no neo4j.template file - google-cloud-platform

I've set up a VM instance for Neo4j in GCP. I want to modify the configuration according to the official Neo4j document below, but there is no file named neo4j.template under the /etc/neo4j folder. There are only the neo4j.conf and pre-neo4j.sh files.
Is this something we have to create on our own after creating the VM instance?
https://neo4j.com/developer/neo4j-cloud-vms/#vm-config

Related

How to retrieve heapdump in PCF using SMB

I need the -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath options in the PCF manifest yml to create a heap dump on OutOfMemory.
I understand I can use SMB or NFS in the VM args, but how do I retrieve the heap dump file when the app goes OutOfMemory and is not accessible?
Kindly help.
I need the -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath options in the PCF manifest yml to create a heap dump on OutOfMemory
You don't need to set these options. The Java buildpack will take care of this for you. By default, it installs a jvmkill agent which will automatically do this.
https://github.com/cloudfoundry/java-buildpack/blob/main/docs/jre-open_jdk_jre.md#jvmkill
In addition, the jvmkill agent is smart enough that if you bind a SMB or NFS volume service to your application, it will automatically save the heap dumps to that location. From the doc link above...
If a Volume Service with the string heap-dump in its name or tag is bound to the application, terminal heap dumps will be written with the pattern <CONTAINER_DIR>/<SPACE_NAME>-<SPACE_ID[0,8]>/<APPLICATION_NAME>-<APPLICATION_ID[0,8]>/<INSTANCE_INDEX>--<INSTANCE_ID[0,8]>.hprof
The key is that you name the bound volume service appropriately, i.e. the name must contain the string heap-dump.
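As a rough sketch of that with the cf CLI, assuming an NFS volume service broker is available in your marketplace (the service offering, plan, share path, and mount settings below are placeholders to adapt, not the exact names in your environment):
```
# Create an NFS volume service whose *name* contains "heap-dump"
# (offering/plan names and the share path are assumptions; check `cf marketplace`).
cf create-service nfs Existing my-heap-dump-volume \
  -c '{"share": "nfs-server.example.com/export/heap-dumps"}'

# Bind it to the app; jvmkill should then write terminal heap dumps to the mount.
cf bind-service my-app my-heap-dump-volume \
  -c '{"uid": "1000", "gid": "1000", "mount": "/var/heap-dumps"}'

# Restage so the binding takes effect.
cf restage my-app
```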
You may also do the same thing with non-terminal heap dumps using the Java Memory Agent that the Java buildpack can install for you upon request.
I understand I can use SMB or NFS in the VM args, but how do I retrieve the heap dump file when the app goes OutOfMemory and is not accessible.
To retrieve the heap dumps you need to somehow access the file server. I say "somehow" because it entirely depends on what you are allowed to do in your environment.
You may be permitted to mount the SMB/NFS volume directly to your PC. You could then access the files directly (a minimal mount sketch follows this list).
You may be able to retrieve the files through some other protocol like HTTP or FTP or SFTP.
You may be able to mount the SMB or NFS volume to another application, perhaps using the static file buildpack, to serve up the files for you.
You may need to request the files from an administrator with access.
Your best bet is to talk with the admin for your SMB or NFS server. They can inform you about the options that are available to you in your environment.
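For the first option (mounting the volume directly to your PC), a minimal sketch, assuming you have network access and credentials; every hostname and path below is a placeholder:
```
# NFS: mount the export locally, then copy the dumps off.
mkdir -p ~/heap-dumps
sudo mkdir -p /mnt/heap-dumps
sudo mount -t nfs nfs-server.example.com:/export/heap-dumps /mnt/heap-dumps

# SMB alternative (needs cifs-utils; credentials are placeholders).
# sudo mount -t cifs //smb-server.example.com/heap-dumps /mnt/heap-dumps \
#   -o username=myuser,password=mypass

# Collect the .hprof files for analysis in a tool such as Eclipse MAT.
find /mnt/heap-dumps -name '*.hprof' -exec cp {} ~/heap-dumps/ \;
```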

Can I run Cloud Build on my own VM instances

Cloud Build uses a worker pool of VMs, and that pool is not able to access my on-prem Compute Engine resources. So, is there any way to run Cloud Build on my own VM, or any other solution for this?
While waiting for the custom worker-pool feature you mentioned in your previous question to become available to the public, you can use the custom builder remote-builder.
You'll need to first build the builder image that you'll then be able to use in your Cloud Build steps. When using the remote-builder image, the following will happen:
A temporary SSH key will be created in your Container Builder workspace
An instance will be launched with your configured flags
The workspace will be copied to the remote instance
Your command will be run inside that instance's workspace
The workspace will be copied back to your Container Builder workspace
The build steps using this builder image will therefore run on a VM instance in your project's network and will be able to access other resources, provided your network configuration allows it.
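Building the builder image could look roughly like this; the remote-builder builder is published in the community builders repo (verify the path, as it may have moved):
```
# Build the remote-builder image and push it to your project's Container Registry.
git clone https://github.com/GoogleCloudPlatform/cloud-builders-community.git
cd cloud-builders-community/remote-builder
gcloud builds submit --config=cloudbuild.yaml .
# The resulting gcr.io/$PROJECT_ID/remote-builder image can then be referenced
# as the `name` of a step in your own cloudbuild.yaml.
```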
Edit: The cos image used in the example cloudbuild.yaml file seems to include it, so you'd be able to run it directly. In case you'd like to customize your instances with specific software, you have several options:
you can create an instance template (based on a custom image that includes the software or with a startup script that will install it at boot time) and specify that instance template in INSTANCE_ARGS in your cloudbuild.yaml.
you can use a standard image and just pass the startup script installing the software as INSTANCE_ARGS (a minimal startup-script sketch follows this list).
you can install it within a shell script executed in your build step.
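For the startup-script options above, a minimal sketch of such a script, assuming a Debian-based image; the packages installed are only examples of what a build might need:
```
#!/bin/bash
# startup.sh - passed via INSTANCE_ARGS or baked into an instance template.
set -euo pipefail
apt-get update
apt-get install -y docker.io git make
systemctl enable --now docker
```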
Why can't you just fix the access issue? You can configure Cloud Build to create build workers within the VPC of your cloud infrastructure:
See the following video, which explains how this works:
https://youtu.be/IUKCbq1WNWc?t=820
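If private worker pools are already available to you, the setup might look roughly like this; pool name, region, and network are placeholders, and the exact flags have changed over time, so check the current gcloud reference:
```
# Create a Cloud Build private worker pool peered with your VPC.
gcloud builds worker-pools create my-private-pool \
  --region=us-central1 \
  --peered-network=projects/my-project/global/networks/my-vpc

# Run builds on that pool so they can reach resources inside the network.
gcloud builds submit --config=cloudbuild.yaml --region=us-central1 \
  --worker-pool=projects/my-project/locations/us-central1/workerPools/my-private-pool
```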
Hope this helps.

Using Google Cloud Shell Editor on an instance

I am using the Beta version of Google's newer file browser along with the web-based shell window to access my Google Cloud instance (https://cloud.google.com/shell/docs/features#code_editor).
I want to use the new file editor. When it initially loads, it shows the files in my dev shell instance, but when I boot up the actual instance I want to work in, the files shown are still those from my persistent storage.
Can I get this window to show the files on the instance, so I can edit them on the fly?
As you can see in the screenshot below, the files shown in the top-left window do not match those in the active directory on the instance. Can I tell the file browser to look at the instance?
No, unfortunately you cannot view/edit files on the remote instance to which you are connecting. Think of Google Cloud Shell as your workstation in the cloud, with the web editor running right on that workstation: when you connect to a remote machine, you cannot see its filesystem directly.
You could, however, install a web editor on your remote instance. Google Cloud Shell uses the open-source Orion editor, which comes pre-installed on the Cloud Shell VM.
You can run VS Code in your browser locally with a connection to the remote Google Cloud VM instance. You need to download code-server; the repo supplies a binary release. After downloading, you can install it on the GCP VM instance and run VS Code in your browser.
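A rough sketch of that setup, assuming you use code-server's install script; keep it bound to localhost and reach it over an SSH tunnel rather than exposing the port publicly (the VM name and zone below are placeholders):
```
# On the GCP VM: install and start code-server.
curl -fsSL https://code-server.dev/install.sh | sh
code-server --bind-addr 127.0.0.1:8080

# From your local machine: forward the port, then open http://localhost:8080
gcloud compute ssh my-vm --zone=us-central1-a -- -L 8080:localhost:8080
```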
Hope this blog and video will also help.

Creating a DC/OS service with artifacts from HDFS

I'm trying to create DC/OS services that download artifacts (custom config files, etc.) from HDFS. I was using a simple FTP server for this before, but I wanted to use HDFS. Using "hdfs://" in the artifact URI is allowed, but it doesn't work correctly.
The artifact fetch ends with an error because there's no "hadoop" command. Weird. I read that I need to provide my own Hadoop for it.
So I downloaded Hadoop and set up the necessary variables in /etc/profile. I can run "hadoop" without any problem when SSHing to the node, but the service still ends with the same error.
It seems that the environment variables configured in the service are only applied after the artifact fetch, because they don't help at all. Also, it looks like services completely ignore the /etc/profile file.
So my question is: how do I set everything up so my service can fetch artifacts stored on HDFS?
The Mesos fetcher supports local Hadoop clients; please check your agent configuration, and in particular your --hadoop_home setting.
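On a DC/OS agent that could look roughly like the sketch below; the drop-in config file and systemd unit name are assumptions about a typical DC/OS install, so check your version's docs (on plain Mesos you would pass --hadoop_home directly to mesos-agent):
```
# On each agent node: point the Mesos fetcher at the Hadoop client install.
echo "MESOS_HADOOP_HOME=/opt/hadoop" | sudo tee -a /var/lib/dcos/mesos-slave-common

# Restart the agent so the fetcher picks up the new setting.
sudo systemctl restart dcos-mesos-slave
```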

Environment-specific application.properties in a Spring Boot application

I'm trying to automate the process of deploying code using GitHub and a Jenkins job to deploy my Spring Boot application on AWS.
I want to know where I should place the application.properties file, given that I'm deploying a war file on Tomcat and don't want this file to be pushed to GitHub, as it may contain database credentials that should not be exposed.
Should I put a separate application-prod.properties file in Tomcat (AWS) so that my war file will be independent of these properties?
See my answer here.
In a nutshell, you externalise the properties and then pass one or more profiles that will activate one or more Spring Configuration classes. Each Configuration class will load one or more property files. In your case, if you only have one environment, you can just create a configuration file for one profile.
Then, on your AWS instance, you will deploy the configuration file separately. At runtime, you will need to pass the active profile(s) to your Spring Boot application. You can do this by passing the VM argument: -Dspring.profiles.active=[your-profile]
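For a war deployed on Tomcat, one common place to set that (together with an external config directory kept out of git) is a setenv.sh in Tomcat's bin directory; the profile name and path below are placeholders, and spring.config.additional-location assumes Spring Boot 2.x:
```
# $CATALINA_BASE/bin/setenv.sh
# Activate the prod profile and point Spring Boot at config living outside the war.
export JAVA_OPTS="$JAVA_OPTS \
  -Dspring.profiles.active=prod \
  -Dspring.config.additional-location=file:/etc/myapp/config/"
```
With that in place, /etc/myapp/config/application-prod.properties can hold the database credentials on the AWS instance and never needs to be committed.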
How about using spring-cloud-starter-config instead of local properties?
If you use spring-cloud-starter-config, all configuration should be loaded from your config center instead of being read locally.
Even if you have multiple different environments, spring-cloud-starter-config can handle them with different profiles.
What's more, spring-cloud-starter-config can use local environment variables too.
By the way, the only local resource you need could be bootstrap.yml if you are using spring-cloud-starter-config.
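For illustration, a minimal bootstrap.yml for a config client might look like the sketch below (written from the shell only to keep the example self-contained; the application name, profile, and config server URI are placeholders, and newer Spring Cloud versions may require the spring-cloud-starter-bootstrap dependency for bootstrap.yml to be read):
```
cat > src/main/resources/bootstrap.yml <<'EOF'
spring:
  application:
    name: my-app
  profiles:
    active: prod
  cloud:
    config:
      uri: http://config-server.example.com:8888
EOF
```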
I hope this helps!