I am using docker-elk from GitHub and running the docker-elk containers; my logs are showing up in Kibana.
Now I want to use Filebeat instead of logstash-forwarder in docker-elk. For that I took elastic/beats from GitHub and built a Docker image, which is now included in my docker-compose.yml.
When I run the containers, Logstash and Elasticsearch are running, but Filebeat exits with code 0.
This is my docker-compose.yml
elasticsearch:
  image: elasticsearch:latest
  command: elasticsearch -Des.network.host=0.0.0.0
  ports:
    - "9200:9200"
logstash:
  image: logstash:2.0
  command: logstash agent --config /etc/logstash/conf.d/ -l /var/log/logstash/logstash.log --debug
  volumes:
    - ./logstash/config:/etc/logstash/conf.d
    - ./logstash/patterns/nginx:/etc/logstash/patterns/nginx
  ports:
    - "5000:5000"
  links:
    - elasticsearch
kibana:
  build: kibana/
  volumes:
    - ./kibana/config/kibana.yml:/opt/kibana/config/kibana.yml
  ports:
    - "5601:5601"
  links:
    - elasticsearch
beats:
  image: pavankuamr/beats
  volumes:
    - ./logstash/beats:/etc/filebeat
    - /var/log/nginx:/var/log/nginx
  links:
    - logstash
    - elasticsearch
  environment:
    - ES_HOST=elasticsearch
    - LS_HOST=logstash
    - LS_TCP_PORT=5044
This is my filebeat.yml
filebeat:
  prospectors:
    - paths:
        - /var/log/nginx/access.log
      input_type: log
  registry_file: /var/lib/filebeat/registry
  config_dir: /etc/filebeat/conf.d
elasticsearch:
  enabled: false
  hosts: ["localhost:9200"]
logstash:
  # The Logstash hosts
  enabled: true
  hosts: ["localhost:5044"]
This is my logstash.conf
input {
  beats {
    port => 5044
    type => "logs"
  }
  file {
    type => "nginx"
    start_position => "beginning"
    path => [ "/var/log/nginx/access.log" ]
  }
  file {
    type => "nginxerror"
    start_position => "beginning"
    path => [ "/var/log/nginx/error.log" ]
  }
}
filter {
  if [type] == "nginx" {
    grok {
      patterns_dir => "/etc/logstash/patterns"
      match => { "message" => "%{NGINX_ACCESS}" }
      remove_tag => ["_grokparsefailure"]
      add_tag => ["nginx_access"]
    }
    geoip {
      source => "remote_addr"
    }
  }
  if [type] == "nginxerror" {
    grok {
      patterns_dir => "/etc/logstash/patterns"
      match => { "message" => "%{NGINX_ERROR}" }
      remove_tag => ["_grokparsefailure"]
      add_tag => ["nginx_error"]
    }
  }
}
output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
In the logstash output section of filebeat.yml, change hosts: ["localhost:5044"] to hosts: ["logstash:5044"]. Inside the beats container, localhost refers to the container itself, so Filebeat has to address the linked logstash service by its link hostname.
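With the configuration shown above, that section of filebeat.yml would then look like this (the rest stays the same):
logstash:
  # The Logstash hosts
  enabled: true
  hosts: ["logstash:5044"]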
I have a question about dicts and lists.
What I want to achieve is to have the key and value from a separate list saved as a fact/hostvar for each matching host.
I'm getting a list from a Confluence API that looks like this (abbreviated):
[
  {
    "title": "MACHINE1",
    "_links": {
      "tinyui": "/x/1234"
    }
  },
  {
    "title": "MACHINE2",
    "_links": {
      "tinyui": "/x/5678"
    }
  },
  {
    "title": "MACHINE3",
    "_links": {
      "tinyui": "/x/9876"
    }
  }
]
What worked to get each individual item (just to debug, and show that the loop itself works) is:
- name: DEBUG specific item in list of get_children.json.results
  debug:
    msg: "{{ item.title }} {{ item._links.tinyui }}"
  loop: "{{ get_children.json.results }}"
  delegate_to: 127.0.0.1
Ansible Output (here: output for only one machine):
"msg": "MACHINE1 /x/1234"
Machine Hostnames:
machine1
machine2
machine3
Yes, they are lowercase in my inventory while they are uppercase in the API output above, but I guess a simple item.title|lower would do fine.
How can I now match item.title with ansible_hostname and save the above API output as a fact for each machine?
For clarification: item.title|lower == ansible_hostname
I hope it is clear what I want to achieve; thanks to everyone in advance :)
EDIT: Thanks to both answers I managed to get it to work. Using '(?i)^'+VAR+'$' and some other conditional checks you guys posted definitely helped. :)
In a nutshell, given the inventories/tinyui/main.yml inventory:
---
all:
  hosts:
    machine1:
    machine2:
    machine3:
    i.do.not.exist:
The following tinyui.yml playbook:
---
- hosts: all
  gather_facts: false
  vars:
    # In real life, this is returned by your API call
    get_children:
      json:
        results: [
          {
            "title": "MACHINE1",
            "_links": {
              "tinyui": "/x/1234"
            }
          },
          {
            "title": "MACHINE2",
            "_links": {
              "tinyui": "/x/5678"
            }
          },
          {
            "title": "MACHINE3",
            "_links": {
              "tinyui": "/x/9876"
            }
          }
        ]
    # This won't be defined before you call the API which
    # returns and registers the correct result. If there is
    # no match for host in the returned json, '!no uri!' will
    # be returned below. Adapt with a default uri if needed
    tinyui: "{{ get_children.json.results
                | selectattr('title', '==', inventory_hostname | upper)
                | map(attribute='_links.tinyui')
                | default(['!no uri!'], true) | first }}"
  tasks:
    # In real life, you would have called your API
    # and registered the result in `get_children` e.g.
    # - name: get info from confluence
    #   uri:
    #     url: <confluence api endpoint url>
    #     <more parameters here>
    #   run_once: true
    #   delegate_to: localhost
    #   register: get_children
    - name: Display tinyui for host
      debug:
        msg: "tinyui for host {{ inventory_hostname }} is {{ tinyui }}"
Gives:
$ ansible-playbook -i inventories/tinyui/ tinyui.yml
PLAY [all] ***********************************************************************************************************************
TASK [Display tinyui for host] ***************************************************************************************************
ok: [machine1] => {
"msg": "tinyui for host machine1 is /x/1234"
}
ok: [machine2] => {
"msg": "tinyui for host machine2 is /x/5678"
}
ok: [machine3] => {
"msg": "tinyui for host machine3 is /x/9876"
}
ok: [i.do.not.exist] => {
"msg": "tinyui for host i.do.not.exist is !no uri!"
}
PLAY RECAP ***********************************************************************************************************************
i.do.not.exist : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
machine1 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
machine2 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
machine3 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
You can filter the list of dictionaries by title, taking out the right dict.
You can do this with the following line:
"{{ hostsdata | selectattr('title', 'match', '(?i)^'+host+'$') | first }}"
With selectattr you filter your list of dicts by title, which must match '(?i)^'+host+'$'.
(?i) is the inline ignore-case flag for the pattern match; it is concatenated with the hostname, whose case therefore does not matter. ^...$ requires the whole string to match, from start to end.
Since selectattr returns a list as result, you can use first to take out the first element of the list.
Instead of using (?i) you can also set the ignorecase parameter of the match test, which looks like this:
"{{ hostsdata | selectattr('title', 'match', '^'+host+'$', ignorecase=true) | first }}"
Both variants work equivalently.
Entire playbook:
---
- hosts: localhost
  gather_facts: no
  vars:
    hostsdata:
      - {
          "title": "MACHINE1",
          "_links": {
            "tinyui": "/x/1234"
          }
        }
      - {
          "title": "MACHINE2",
          "_links": {
            "tinyui": "/x/5678"
          }
        }
      - {
          "title": "MACHINE3",
          "_links": {
            "tinyui": "/x/9876"
          }
        }
  tasks:
    - debug:
        var: hostsdata
    - name: Pick out host specific dict
      set_fact:
        machine_data: "{{ hostsdata | selectattr('title', 'match', '(?i)'+host) | first }}"
      vars:
        host: machine3
    - debug:
        var: machine_data
    - debug:
        msg: "{{ machine_data.title }} {{ machine_data._links.tinyui }}"
Resulting output:
TASK [debug] ***********************************************************************************************************
ok: [localhost] => {
"hostsdata": [
{
"_links": {
"tinyui": "/x/1234"
},
"title": "MACHINE1"
},
{
"_links": {
"tinyui": "/x/5678"
},
"title": "MACHINE2"
},
{
"_links": {
"tinyui": "/x/9876"
},
"title": "MACHINE3"
}
]
}
TASK [Pick out host specific dict] *************************************************************************************
ok: [localhost]
TASK [debug] ***********************************************************************************************************
ok: [localhost] => {
"machine_data": {
"_links": {
"tinyui": "/x/9876"
},
"title": "MACHINE3"
}
}
TASK [debug] ***********************************************************************************************************
ok: [localhost] => {
"msg": "MACHINE3 /x/9876"
}
To filter multiple machines, here is another example:
- debug:
    msg: "{{ md.title }} {{ md._links.tinyui }}"
  when: md | length
  vars:
    md: "{{ hostsdata | selectattr('title', 'match', '(?i)^'+item+'$') | first | default('') }}"
  with_items:
    - MachINE1
    - MACHINE2
    - machine3
    - unknown
Add a default('') and a when: to skip a non-existent hostname.
Output:
TASK [debug] ***********************************************************************************************************
ok: [localhost] => (item=MachINE1) => {
"msg": "MACHINE1 /x/1234"
}
ok: [localhost] => (item=MACHINE2) => {
"msg": "MACHINE2 /x/5678"
}
ok: [localhost] => (item=machine3) => {
"msg": "MACHINE3 /x/9876"
}
skipping: [localhost] => (item=unknown)
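If you want to save the matching dict as a fact on each of your real hosts instead of looping on localhost, a sketch along the same lines (assuming, as stated in the question, that item.title|lower equals ansible_hostname, and that the play runs against those hosts) would be:
- name: Save host specific dict as a fact
  set_fact:
    machine_data: "{{ hostsdata | selectattr('title', 'match', '(?i)^' + ansible_hostname + '$') | first | default({}) }}"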
Convert the titles to lowercase
titles: "{{ get_children.json.results|
map(attribute='title')|
map('lower')|
map('community.general.dict_kv', 'title')|
list }}"
gives
titles:
- title: machine1
- title: machine2
- title: machine3
Replace the titles with the lowercase ones and create a dictionary
title_links: "{{ get_children.json.results|
zip(titles)|
map('combine')|
items2dict(key_name='title', value_name='_links') }}"
gives
title_links:
  machine1:
    tinyui: /x/1234
  machine2:
    tinyui: /x/5678
  machine3:
    tinyui: /x/9876
Put these declarations, for example, into the group_vars
shell> cat group_vars/all/title_links.yml
titles: "{{ get_children.json.results|
map(attribute='title')|
map('lower')|
map('community.general.dict_kv', 'title')|
list }}"
title_links: "{{ get_children.json.results|
zip(titles)|
map('combine')|
items2dict(key_name='title', value_name='_links') }}"
Now, you can use the dictionary. For example, given the inventory
shell> cat hosts
10.1.0.11 ansible_hostname=machine1
10.1.0.12 ansible_hostname=machine2
10.1.0.13 ansible_hostname=machine3
the playbook
- hosts: all
  gather_facts: false
  vars:
    get_children:
      json:
        results:
          - _links: {tinyui: /x/1234}
            title: MACHINE1
          - _links: {tinyui: /x/5678}
            title: MACHINE2
          - _links: {tinyui: /x/9876}
            title: MACHINE3
  tasks:
    - debug:
        msg: "My links: {{ title_links[ansible_hostname] }}"
gives (abridged)
TASK [debug] *********************************************************
ok: [10.1.0.11] =>
msg: 'My links: {''tinyui'': ''/x/1234''}'
ok: [10.1.0.12] =>
msg: 'My links: {''tinyui'': ''/x/5678''}'
ok: [10.1.0.13] =>
msg: 'My links: {''tinyui'': ''/x/9876''}'
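If you also want to persist it as a fact/hostvar on each host, as the question asks, a minimal sketch of an extra task for the playbook above (the fact name my_links is arbitrary):
    - name: Save the links for this host as a fact
      set_fact:
        my_links: "{{ title_links[ansible_hostname] }}"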
I am trying to get Cube.js production mode working in a Docker container, but I am getting:
Error: connect ECONNREFUSED 127.0.0.1:6379 at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1159:16)
I have my redis:
docker run -e ALLOW_EMPTY_PASSWORD=yes --name my-redis -p 6379:6379 -d redis
I have my cubestore:
docker run -p 3030:3030 cubejs/cubestore
my cube.js:
module.exports = {
  jwt: {
    key: 'key',
  },
  contextToAppId: ({ securityContext }) =>
    `CUBEJS_APP_${securityContext.username}`,
  scheduledRefreshContexts: async () => [
    {
      securityContext: {
        username: 'public'
      },
    },
    {
      securityContext: {
        username: 'obondar',
      },
    },
  ],
};
.env
CUBEJS_DB_HOST=server
CUBEJS_DB_PORT=5432
CUBEJS_DB_NAME=db
CUBEJS_DB_USER=name
CUBEJS_DB_PASS=password
CUBEJS_DB_TYPE=postgres
CUBEJS_SCHEDULED_REFRESH_DEFAULT=true
CUBEJS_API_SECRET=key
CUBEJS_CUBESTORE_HOST=localhost
What am I missing?
Can someone help, please?
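In case it helps, here is a sketch of how I think the containers should be wired together (service names are mine, and I am assuming the cubejs/cube image and the CUBEJS_REDIS_URL setting apply to the version I'm running; with plain docker run, localhost inside the Cube.js container does not reach the Redis container):
version: "2.4"
services:
  redis:
    image: redis
  cubestore:
    image: cubejs/cubestore
  cube:
    image: cubejs/cube
    env_file: .env
    environment:
      - CUBEJS_REDIS_URL=redis://redis:6379   # reach Redis by service name, not 127.0.0.1
      - CUBEJS_CUBESTORE_HOST=cubestore       # instead of localhost
    ports:
      - "4000:4000"
    depends_on:
      - redis
      - cubestore
Is something like this the right way to make Cube.js see Redis?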
I just finished my project in Strapi and deployed it on AWS. When I open my public IPv4 address on port 1337 it says 'server is running successfully', but when I try to log in to the admin panel it just keeps spinning and never shows the panel.
server.js
module.exports = ({ env }) => ({
  host: env('HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  cron: { enabled: true },
  url: env('URL', 'http://localhost'),
  admin: {
    auth: {
      secret: env('ADMIN_JWT_SECRET', 'MY_JWT_SECRET'),
    },
  },
});
I got a working configuration locally, running tests with Node and WebdriverIO.
But I want to automate this with AWS Device Farm.
const webdriverIO = require("webdriverio");
const opts = {
  path: '/wd/hub',
  port: 4723,
  capabilities: {
    platformName: "Android",
    platformVersion: "9",
    deviceName: "emulator-5554",
    app: __dirname + "/app-debug.apk",
    appPackage: "com.ayga.cooktop",
    appActivity: ".MainActivity",
    automationName: "UiAutomator2"
  }
};
const main = async () => {
  const client = await webdriverIO.remote(opts);
  // .... actual tests
};
void main();
My build file:
version: 0.1
phases:
  install:
    commands:
      - nvm install 12.16.1
      - echo "Navigate to test package directory"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - npm install *.tgz
      - export APPIUM_VERSION=1.14.2
      - avm $APPIUM_VERSION
      - ln -s /usr/local/avm/versions/$APPIUM_VERSION/node_modules/.bin/appium /usr/local/avm/versions/$APPIUM_VERSION/node_modules/appium/bin/appium.js
  pre_test:
    commands:
      - echo "Start appium server"
      - >-
        appium --log-timestamp
        --default-capabilities "{\"deviceName\": \"$DEVICEFARM_DEVICE_NAME\", \"platformName\":\"$DEVICEFARM_DEVICE_PLATFORM_NAME\",
        \"app\":\"$DEVICEFARM_APP_PATH\", \"udid\":\"$DEVICEFARM_DEVICE_UDID\", \"platformVersion\":\"$DEVICEFARM_DEVICE_OS_VERSION\",
        \"chromedriverExecutable\":\"$DEVICEFARM_CHROMEDRIVER_EXECUTABLE\"}"
        >> $DEVICEFARM_LOG_DIR/appiumlog.txt 2>&1 &
      - >-
        start_appium_timeout=0;
        while [ true ];
        do
          if [ $start_appium_timeout -gt 60 ];
          then
            echo "appium server never started in 60 seconds. Exiting";
            exit 1;
          fi;
          grep -i "Appium REST http interface listener started on 0.0.0.0:4723" $DEVICEFARM_LOG_DIR/appiumlog.txt >> /dev/null 2>&1;
          if [ $? -eq 0 ];
          then
            echo "Appium REST http interface listener started on 0.0.0.0:4723";
            break;
          else
            echo "Waiting for appium server to start. Sleeping for 1 second";
            sleep 1;
            start_appium_timeout=$((start_appium_timeout+1));
          fi;
        done;
  test:
    commands:
      - echo "Navigate to test source code"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH/node_modules/*
      - echo "Start Appium Node test"
      - node index.js
  post_test:
    commands:
artifacts:
  - $DEVICEFARM_LOG_DIR
My test runs, but it was not able to find the APK.
How do I tell AWS Device Farm about the apk, deviceName, etc., instead of the hard-coded values below?
const opts = {
  path: '/wd/hub',
  port: 4723,
  capabilities: {
    platformName: "Android",
    platformVersion: "9",
    deviceName: "emulator-5554",
    app: __dirname + "/app-debug.apk",
    appPackage: "com.ayga.cooktop",
    appActivity: ".MainActivity",
    automationName: "UiAutomator2"
  }
};
You have to set the capabilities from the environment variables that AWS Device Farm provides, like below.
const apkInfo = {
  appPackage: "com.ayga.cooktopbt",
  appActivity: "com.ayga.cooktop.MainActivity",
  automationName: "UiAutomator2"
};
const awsOptions = {
  path: '/wd/hub',
  port: 4723,
  capabilities: {
    ...apkInfo,
    platformName: process.env.DEVICEFARM_DEVICE_PLATFORM_NAME,
    deviceName: process.env.DEVICEFARM_DEVICE_NAME,
    app: process.env.DEVICEFARM_APP_PATH,
  }
};
I am trying to deploy a meteor app to an AWS server, but am getting this message:
Started TaskList: Configuring App
[52.41.84.125] - Pushing the Startup Script
nodemiral:sess:52.41.84.125 copy file - src: /Users/Olivia/.nvm/versions/node/v7.8.0/lib/node_modules/mup/lib/modules/meteor/assets/templates/start.sh, dest: /opt/CanDu/config/start.sh, vars: {"appName":"CanDu","useLocalMongo":0,"port":80,"bind":"0.0.0.0","logConfig":{"opts":{"max-size":"100m","max-file":10}},"docker":{"image":"abernix/meteord:base","imageFrontendServer":"meteorhacks/mup-frontend-server","imagePort":80},"nginxClientUploadLimit":"10M"} +0ms
[52.41.84.125] x Pushing the Startup Script: FAILED Failure
Previously I had been able to deploy using mup, but now I am getting this message. The only major thing I've changed is the Python path in my .noderc. I am also able to SSH into my amazon server directly from the terminal. My mup file is:
module.exports = {
  servers: {
    one: {
      host: '##.##.##.###',
      username: 'ec2-user',
      pem: '/Users/Olivia/.ssh/oz-pair.pem'
      // password:
      // or leave blank for authenticate from ssh-agent
    }
  },
  meteor: {
    name: 'CanDu',
    path: '/Users/Olivia/repos/bene_candu_v2',
    servers: {
      one: {}
    },
    buildOptions: {
      serverOnly: true,
      mobileSettings: {
        public: {
          "astronomer": {
            "appId": "<key>",
            "disableUserTracking": false,
            "disableRouteTracking": false,
            "disableMethodTracking": false
          },
          "googleMaps": "<key>",
          "facebook": {
            "permissions": ["email", "public_profile", "user_friends"]
          }
        },
      },
    },
    env: {
      ROOT_URL: 'http://ec2-##-##-##-###.us-west-2.compute.amazonaws.com',
      MONGO_URL: 'mongodb://. . .'
    },
    /*ssl: {
      crt: '/opt/keys/server.crt', // this is a bundle of certificates
      key: '/opt/keys/server.key', // this is the private key of the certificate
      port: 443, // 443 is the default value and it's the standard HTTPS port
      upload: false
    },*/
    docker: {
      image: 'abernix/meteord:base'
    },
    deployCheckWaitTime: 60
  }
};
I have checked to make sure there are no trailing commas, and have tried increasing the wait time, etc. The error message I'm getting is pretty unhelpful. Does anyone have any insight? Thank you so much!