I have HDFS-HA (NameNode high availability) set up in my Hadoop cluster (using Apache Ambari).
Now I have a scenario in which my ambari-server machine (which also hosts one NameNode, the active/primary one) went offline, so my other NameNode (the standby) became active and was running, but after some time it went offline too for some reason. By that I mean the services were offline; I was unable to do any operation. What if I have to manually start the services that are normally started using Ambari?
I mean using the command line or something similar.
Services can be started from the command line, but in an Ambari environment they typically should not be. This is because Ambari does more than just start the service when you issue the start/restart command for any given service: it also makes sure the most up-to-date configuration is written to each node, along with various other housekeeping tasks.
You can look at the logs in Ambari when you start/restart a service to see exactly what Ambari does with respect to writing the configuration, the other housekeeping, and the exact command used to start/restart the given service.
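If you do need to script a start without the web UI, the supported route is Ambari's REST API rather than bypassing Ambari entirely, since it drives the same configuration push and housekeeping. A rough sketch using Python's requests library (the host, cluster name, and admin credentials are placeholders for your own values):

import requests

# Placeholders: your Ambari host, cluster name, and admin credentials.
ambari = "http://ambari-host.example.com:8080"
auth = ("admin", "admin")
headers = {"X-Requested-By": "ambari"}  # header required by the Ambari API

# Ask Ambari to bring HDFS to the STARTED state; Ambari then performs
# its usual configuration distribution before starting the daemons.
body = {
    "RequestInfo": {"context": "Start HDFS via REST API"},
    "Body": {"ServiceInfo": {"state": "STARTED"}},
}
resp = requests.put(ambari + "/api/v1/clusters/MyCluster/services/HDFS",
                    json=body, auth=auth, headers=headers)
print(resp.status_code, resp.text)

Of course, this only helps once the ambari-server machine itself is back online.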
I am working on a project where I can see that all of the DAGs are queued up and not moving (for approximately 24 hours or more).
It looks like the scheduler is broken, but I need to confirm that.
So here are my questions:
How can I tell if the scheduler is broken?
How do I reset my Airflow (web server) scheduler?
I'm hoping for some help on how to reset Airflow schedulers.
The answer will depend a lot on how you are running Airflow (standalone, in Docker, Astro CLI, managed solution...?).
If your scheduler is broken, the Airflow UI will usually show you how long it has been since the last scheduler heartbeat.
There is also an API endpoint for a scheduler health check at http://localhost:8080/health (if Airflow is running locally).
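For example, a quick scripted check against that endpoint (a sketch, assuming Airflow 2.x with the webserver on localhost:8080):

import requests

# Query Airflow's health endpoint and report on the scheduler.
health = requests.get("http://localhost:8080/health").json()
scheduler = health["scheduler"]
print("scheduler status:", scheduler["status"])
print("latest heartbeat:", scheduler["latest_scheduler_heartbeat"])

If the status is anything other than "healthy", or the last heartbeat is old, the scheduler is in trouble.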
Check the scheduler logs. By default they are written under $AIRFLOW_HOME/logs/scheduler.
You might also want to look at how to do health checks in Airflow in general.
In terms of resetting it, it is usually best to restart the scheduler, and again this will depend on how you started it in the first place. If you are using a standalone instance with the processes in the foreground, simply press Ctrl+C or close the terminal to stop it. If you are running Airflow in Docker, restart the container; with the Astro CLI, there is astro dev restart.
I am attempting to take my existing Cloud Composer environment and connect to a remote SQL database (Azure SQL). I've been banging my head against this for a few days and I'm hoping someone can point out where my problem lies.
Following the documentation found here, I've spun up a GKE service and SQL proxy workload. I then created a new Airflow connection as shown here, using the full name of the service, azure-sqlproxy-service.
I test run one of my DAG tasks and get the following:
Unable to connect: Adaptive Server is unavailable or does not exist
Unsure of the issue, I decided to remote directly into one of the workers, whitelist that IP on the remote DB firewall, and try to connect to the server. With no command-line MSSQL client installed, I launched Python on the worker and attempted to connect to the database with the following:
import pymssql
connection = pymssql.connect(host='database.url.net', user='sa', password='password', database='database')
From which I get the same error as above, with both the service name and the remote IP entered as host. Even ignoring the service/proxy, shouldn't this Airflow worker be able to reach the remote database? I can ping websites, but checking the remote logs, the DB doesn't show any failed logins. With the generic error and not many ideas on what to do next, I'm stuck. A few Google results have suggested switching libraries, but I'm not quite sure how to do that, or whether I even need to, within Airflow.
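(For reference, a raw TCP check from the worker, independent of any SQL driver, would be something like the following; 1433 is the default SQL Server port.)

import socket

# Can the worker even open a TCP connection to the database port?
with socket.create_connection(("database.url.net", 1433), timeout=5) as sock:
    print("TCP connection succeeded:", sock.getpeername())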
What troubleshooting steps could I take next to get at least a single worker communicating with the DB before moving on to the service/proxy?
After much pain I've found that Cloud Composer uses Ubuntu 18.04, which currently breaks pymssql, as described here:
https://github.com/pymssql/pymssql/issues/687
I tried downgrading to 2.1.4 with no success. Needing to get this done, I've followed the instructions outlined in this post to use pyodbc instead:
Google Composer- How do I install Microsoft SQL Server ODBC drivers on environments
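For reference, the pyodbc equivalent of the pymssql call above looks roughly like this (a sketch: the driver name assumes the Microsoft ODBC driver installed per the linked post, and the server and credentials are the same placeholders as before):

import pyodbc

# Assumes the Microsoft ODBC driver is installed as in the linked post.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=database.url.net;"
    "DATABASE=database;"
    "UID=sa;"
    "PWD=password"
)
cursor = conn.cursor()
cursor.execute("SELECT 1")  # simple round-trip to confirm connectivity
print(cursor.fetchone())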
I am currently running JMeter on 5 local VMs, of which one acts as master and 4 as slaves. I want to move them to Amazon servers. Can anyone suggest a step-by-step configuration method? I searched the internet and couldn't find documentation with full clarity. Or can anyone share a link to good documentation on this?
JMeter version: 3.2
My requirements are:
1 master and 4 slaves.
the master should have a Linux GUI because I need the JMeter GUI to run the test, since we are analyzing the data in real time as it runs.
First of all, double check that you have looked for instructions well enough; for example, there is the JMeter ec2 Script project, which automates the installation and configuration of JMeter remote engines.
In general, the process doesn't differ from configuring JMeter in distributed mode locally; Amazon EC2 instances are basically the same machines as local ones and require the same configuration steps. Just make sure to open the following ports (sample jmeter.properties entries follow this list):
1099
the port you define as server.rmi.localport
the ports you define as client.rmi.localport
This has to be done both in the Linux firewall and in the AWS Security Groups.
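For reference, the corresponding jmeter.properties entries would look something like this (the port numbers are examples, not requirements):

# on every machine (1099 is the default RMI registry port):
server_port=1099
# on the slaves:
server.rmi.localport=4000
# on the master:
client.rmi.localport=4001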
Check out the following material:
Remote Testing
JMeter Distributed Testing Step-by-step
JMeter Distributed Testing with Docker
Load Testing with Jmeter and Amazon EC2
Is there a way to get metrics for the Jetty server, for example the queue length, processing time, etc.?
I looked at the QueuedThreadPool class and traced the calling chain for getQueueSize, but I didn't find where it gets exposed.
Thanks.
"Queue Length" is a very vague term and can apply to many many different things.
As for seeing the metrics for Jetty, enable JMX, restart your server, and take a look.
Go to your ${jetty.base} and add the jmx module:
$ cd /path/to/mybase
$ java -jar /path/to/jetty-dist/start.jar --add-to-start=jmx
Once you have restarted, use the jmc command that comes with your JDK (or, if you are using an ancient JDK, jconsole) and attach to the running Jetty process to see the various things Jetty exposes for monitoring.
I'm still cheap.
I have a software development environment which is a bog-standard Ubuntu 11.04 plus a pile of updates from Canonical. I would like to set it up such that I can use an Amazon EC2 instance for the 2 hours per week when I need to do full system testing on a server "in the wild".
Is there a way to set up an Amazon EC2 server image (Ubuntu 11.04) so that whenever I fire it up, it starts, automatically downloads code updates (or, conversely, accepts git push updates), and is then ready for me to fire up an instance of the application server? Is it also possible to tie that server to a URL (e.g. ec2.1.mydomain.com) so that I can hit my web app with a browser?
Furthermore, is there a way that I can run a command-line utility to fire up my instance when I'm ready to test, and to shut it down when I'm done? Using this model, I could allocate one or more development servers to each developer and only pay for them when they are being used.
Yes, yes and more yes. Here are some good things to google/hunt down on SO and SF:
--ec2 command line tools (a Python start/stop sketch follows this list),
--making your own AMIs from running instances (to save tedious and time-consuming startup gumf),
--Route 53 APIs for doing DNS magic,
--Ubuntu cloud-init for startup scripts,
--32-bit micro instances are your friend for dev work, as they fall in the free usage tier
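If you'd rather drive the start/stop from Python than from the raw EC2 command line tools, a minimal modern sketch with boto3 (the AWS SDK for Python; the region and instance ID are placeholders) looks like:

import boto3

# Placeholders: your region and your dev instance's ID.
ec2 = boto3.client("ec2", region_name="us-east-1")
instance = "i-0123456789abcdef0"

# Fire up the dev server and wait until it is actually running.
ec2.start_instances(InstanceIds=[instance])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance])
print("dev server is up")

# ...and shut it down again when you're done testing, so you stop paying.
ec2.stop_instances(InstanceIds=[instance])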
All of what James said is good. If you're looking for something requiring less technical know-how and research, I'd also consider:
juju (sudo apt-get install -y juju). This lets you start up a series of instances. Basic tutorial is here: https://juju.ubuntu.com/docs/user-tutorial.html