vTiger cron jobs not triggering workflows. How can I troubleshoot?

I'm running a Vtiger instance on a cloud server.
All was well until about two weeks ago when the workflows stopped sending.
I have verified that the cron jobs are executing. However, this is a shared hosting environment so as far as I know I don't have access to the cron logs.
I have checked that the Bash script that triggers the job has the right permissions. And yet, it is still not running.
Does anybody have experience with vTiger and has successfully troubleshot this? Judging by the vTiger community forums, this seems to be a recurring problem.

You can fix this issue by modifying vtigercron.php. Check the following link and replace your current file with the one there:
https://gist.github.com/hamidrabiei/2dc80e603b8f9cf28c9639f314dd74e6
After the replacement you can run the workflows via a wget command and set up a new cron job:
*/5 * * * * wget --spider "http://yourvtigerurl.com/vtigercron.php" >/dev/null 2>&1

Try running the PHP script from the command line. Just go to your vtiger directory, run php -f vtigercron.php and see what the output is.
Sometimes shared hosting companies change PHP versions without notifying their users. It can be the case that the PHP used for the CLI (command line) was changed and that the vtigercron.php script doesn't work with that particular PHP version.
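As a quick sketch of that check (the vtiger path below is a placeholder for wherever your install lives):
# Check which PHP version the CLI uses; it may differ from the web server's PHP.
php -v
# Run the cron entry point by hand from the vtiger root and look at the output.
cd /path/to/vtiger
php -f vtigercron.php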

Related

Run a python script on an EC2 instance without uploading the python file to the instance

I currently have an EC2 instance running with directories of tenants and files in those directories. I have to run a Python script to go into those directories, find out how much memory each tenant is using and how many files each tenant has, and send that information to a Grafana dashboard. I have completed the Python script, and it will go into each tenant, calculate what I need, and send that information to the Grafana dashboard if I manually upload the Python script to the instance and run it from the command line.
The goal is to automate this process so the script will run every 15 minutes without ever having to upload the python script to the instance. There is a lot of red tape and I am unable to make the script a part of the AMI when the instance is launched, and I haven't found any examples of people trying to do this before.
First of all, is what I am trying to do even possible? Ideally I'd like to run the script from a Lambda, because that would make it very easy to schedule every 15 minutes, and my dependencies for the Python script would be very easy to put into place. Suggestions have been brought up about using CodeDeploy, but I don't know enough about it to know how that would help.
I have created a Python script that works and will run on an EC2 instance if it is uploaded and run from the command line, but I haven't been able to run the script "remotely" as I would like to.
If you are doing specific tasks against the EC2 instance's file system or memory, running the script as a Lambda is not going to work. Instead, look into using SSM (AWS Systems Manager) to run a remote script on an EC2 instance.
https://aws.amazon.com/getting-started/hands-on/remotely-run-commands-ec2-instance-systems-manager/
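A rough sketch of that approach, assuming the SSM agent is running on the instance; the instance ID, bucket, and script name below are placeholders. Keep the script in S3, have SSM Run Command pull it down and execute it, and trigger that command on a 15-minute schedule (for example from an EventBridge rule invoking a small Lambda).
# Pull the script from S3 and run it on the instance via SSM Run Command.
# The instance needs an instance profile allowing SSM plus s3:GetObject.
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --instance-ids "i-0123456789abcdef0" \
  --comment "Collect tenant usage for Grafana" \
  --parameters '{"commands":["aws s3 cp s3://my-scripts-bucket/tenant_report.py /tmp/tenant_report.py","python3 /tmp/tenant_report.py"]}'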

AWS Batch Failing to launch Dockerfile - standard_init_linux.go:219: exec user process caused: exec format error

I am attempting to use AWS Batch to launch a Linux server, which will in essence perform the fetch-and-go example included within AWS (download a shell script from S3 and run it).
Does AWS Batch work at all for anyone?
The AWS fetch_and_go example always fails, even after I followed someone else's guide online which mimicked the AWS example.
I have tried creating Dockerfiles for amazonlinux:latest and ubuntu:20.04 with numerous RUN and CMD variations.
The scripts always seem to fail with the error:
standard_init_linux.go:219: exec user process caused: exec format error
I thought at first this was related to my deployment access rights, maybe within the amazonlinux image, so I have played with chmod 777, chmod +x, etc. on the .sh file.
The final nail in the coffin: my current Dockerfile is literally just:
FROM ubuntu:20.04
I launch this using AWS Batch with no command or parameters passed through, and it still fails with the same error. This is almost hinting to me that there is either a setup issue with my AWS Batch (where I'm using default wizard settings, except changing to an a1.medium server) or that AWS Batch has some major issues.
Has anyone had any success with AWS Batch launching their own Dockerfiles ? Could they share their examples and/or setup parameters?
Thank you in advance.
A1 instances are ARM-based, first-generation Graviton CPUs. It is highly likely the image you are trying to run is expecting an x86 CPU (Intel or AMD). Any instance class with a "g" in it ("c6g" or "m6g") is Graviton2, which is also ARM-based and will not work for the default examples.
You can test whether a specific container will run by launching an A1 instance yourself and running the container (after installing docker). My guess is that you will get the same error. Running on Intel or AMD instances should work.
To leverage Batch with ARM your containerized application will need to work on ARM. If you point me to the exact example, I can give more details on how to adjust to run on A1 or Graviton2 instances.
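A minimal sketch of that test (the image name is a placeholder):
# On the A1 instance: aarch64 here means the host is ARM.
uname -m
# Check which architecture the image was built for.
docker inspect --format '{{.Architecture}}' my-batch-image:latest
# Running an amd64-only image on ARM reproduces the "exec format error".
docker run --rm my-batch-image:latest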
I had the same issue, and it was because I built the image locally on my M1 Mac.
Try adding --platform linux/amd64 to your docker build command before pushing, if this is your case.
In addition to the other comment: you can create multi-arch images yourself, which will provide the correct architecture.
https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
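A minimal sketch of a multi-arch build with Docker Buildx (registry and tag are placeholders, and Buildx is assumed to be available):
# Create a builder that can target multiple platforms, then build and push
# a single tag containing both amd64 and arm64 variants.
docker buildx create --use
docker buildx build --platform linux/amd64,linux/arm64 \
  -t myregistry/my-batch-image:latest --push .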

Check which ebextensions have been run

I have an environment in Amazon Elastic Beanstalk that is proving problematic. I'm trying to check which ebextensions have run because there is an oddity where I see a logrotate conf file created but the contents are not what I've written.
Does anyone know which log file I can find that information in? I've tried /var/log/eb-engine.log, but that doesn't seem to have anything about running ebextensions. eb-activity and eb-commandprocessor are mentioned in the docs, but they don't exist on the instance.
Platform: Ruby 2.6 running on 64bit Amazon Linux 2/3.0.3
As Amazon Linux 2 is still being worked on, some log files aren't available yet, or so says AWS Support.
However, you can see which ebextensions have been run by looking in cfn-init.log and cfn-init-cmd.log.
I was searching for the file names rather than the command names, so I couldn't see where the results were being logged.
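For anyone else looking, a quick way to inspect those logs on the instance (paths as on Amazon Linux 2 Elastic Beanstalk hosts):
# Each ebextensions command and its output is logged here.
sudo less /var/log/cfn-init-cmd.log
# Higher-level record of which config sets and commands cfn-init ran.
sudo grep -iE "command|config" /var/log/cfn-init.log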

How to start PostgREST as a ubuntu service in a cloud server?

According to the PostgREST docs, we can start postgrest with:
postgrest postgres://user:pass@host:port/db -a anon_user [other flags]
It is fine to run it locally; however, how do I register it as a system service and run it on a cloud server?
The basic idea is to have an init (shell) script, just like for every other system service.
This script no longer works because the structure of the GitHub repository changed and some files are not there anymore, but you can get a basic idea of what needs to happen:
https://github.com/ruslantalpa/blogdemo/blob/master/provision/postgrest.sh
The init scripts can be found here:
https://github.com/begriffs/postgrest/tree/5d904dfd66c75133f2383eefbfa8b152a669625e/debian
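On a current Ubuntu the same idea is normally expressed as a systemd unit rather than an init script. A minimal sketch, assuming the binary sits at /usr/local/bin/postgrest and (for newer PostgREST versions) reads a config file at /etc/postgrest.conf; both paths and the postgrest user are placeholders:
# Write the unit file (assumes a 'postgrest' system user exists).
sudo tee /etc/systemd/system/postgrest.service > /dev/null <<'EOF'
[Unit]
Description=PostgREST API server
After=network.target postgresql.service

[Service]
ExecStart=/usr/local/bin/postgrest /etc/postgrest.conf
Restart=always
User=postgrest

[Install]
WantedBy=multi-user.target
EOF

# Start now and enable at boot.
sudo systemctl daemon-reload
sudo systemctl enable --now postgrest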

WebHCat on Amazon's EMR?

Is it possible or advisable to run WebHCat on an Amazon Elastic MapReduce cluster?
I'm new to this technology and I was wondering if it was possible to use WebHCat as a REST interface to run Hive queries. The cluster in question is running Hive.
I wasn't able to get it working, but WebHCat is actually installed by default on Amazon's EMR instances.
To get it running you have to do the following:
chmod u+x /home/hadoop/hive/hcatalog/bin/hcat
chmod u+x /home/hadoop/hive/hcatalog/sbin/webhcat_server.sh
export TEMPLETON_HOME=/home/hadoop/.versions/hive-0.11.0/hcatalog/
export HCAT_PREFIX=/home/hadoop/.versions/hive-0.11.0/hcatalog/
/home/hadoop/hive/hcatalog/sbin/webhcat_server.sh start
You can then confirm that it's running on port 50111 using curl:
curl -i http://localhost:50111/templeton/v1/status
To hit 50111 on other machines you have to open the port up in the EC2 EMR security group.
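For reference, that rule can also be added from the AWS CLI (the security group ID and CIDR are placeholders):
# Allow inbound access to WebHCat's port from a trusted network range only.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 50111 \
  --cidr 203.0.113.0/24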
You then have to configure the users you are going to "proxy" when you run queries in HCatalog. I didn't actually save this configuration, but it is outlined in the WebHCat documentation. I wish they had some concrete examples there, but basically I ended up configuring the local 'hadoop' user as the one that runs the queries; not the most secure thing to do, I am sure, but I was just trying to get it up and running.
Attempting a query then gave me this error:
{"error":"Server IPC version 9 cannot communicate with client version 4"}
The workaround was to switch away from the latest EMR image (3.0.4 with Hadoop 2.2.0) to a Hadoop 1.0 image (2.4.2 with Hadoop 1.0.3).
I then hit another issue where it couldn't find the Hive jar properly. After struggling with the configuration some more, I decided I had dumped enough time into trying to get this to work and decided to communicate with Hive directly (using RBHive for Ruby and JDBC for the JVM).
To answer my own question: it is possible to run WebHCat on EMR, but it's not documented at all (Googling led me nowhere, which is why I created this question in the first place; it's currently the first hit when you search "WebHCat EMR"), and the WebHCat documentation leaves a lot to be desired. Getting it to work seems like a pain, though my hope is that by writing up the initial steps someone will come along, take it the rest of the way, and post a complete answer.
I did not test it, but it should be doable.
EMR allows you to customise the bootstrap actions, i.e. the scripts run when the nodes are started. You can use bootstrap actions to install additional software and to change the configuration of applications on the cluster.
See more details at http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-plan-bootstrap.html.
I would create a shell script to install WebHCat and test it on a regular EC2 instance first (outside the context of EMR, just as a test to ensure your script is OK).
You can use EC2's user-data properties to test your script, typically:
#!/bin/bash
curl http://path_to_your_install_script.sh | sh
Then, once you know the script is working, make it available to the cluster in an S3 bucket and follow these instructions to include your script as a custom bootstrap action of your cluster (a CLI sketch follows below).
--Seb
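For illustration, a bootstrap action can be attached when the cluster is created via the AWS CLI; the bucket, script name, release label and instance type below are placeholders:
# Create an EMR cluster with Hive and run the install script on each node at startup.
aws emr create-cluster \
  --name "webhcat-test" \
  --release-label emr-5.36.0 \
  --applications Name=Hive \
  --instance-type m5.xlarge \
  --instance-count 3 \
  --use-default-roles \
  --bootstrap-actions Name="Install WebHCat",Path="s3://my-bucket/install-webhcat.sh"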