How to stop sending /var/log/syslog from Filebeat to Logstash

Below is my filebeat.yml file, which should send logs only from the path /home/ubuntu/logs/test-app/path.log mentioned below. But it is sending all the logs, including /var/log/syslog and /var/log/auth.log. Please clarify how I can avoid sending the system logs.
filebeat.yml
filebeat.inputs:
  - type: syslog
    enabled: false
  - type: log
    enabled: true
    paths:
      - /home/ubuntu/logs/test-app/path.log
logging:
  level: info
  to_files: true
  to_syslog: false
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
output.logstash:
  hosts: ["ip:5044"]

Check whether you have the system module enabled:
filebeat modules list | head
cat /etc/filebeat/modules.d/system.yml
Also, use the filestream input instead of the log input, as the latter will be deprecated:
https://www.elastic.co/guide/en/beats/filebeat/8.2/filebeat-input-filestream.html
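If the system module turns out to be enabled, disabling it should stop /var/log/syslog and /var/log/auth.log from being shipped. A minimal sketch (standard Filebeat CLI, but verify against your installed version):
filebeat modules disable system
sudo systemctl restart filebeat
And a hedged sketch of an equivalent filestream input for the same path, assuming Filebeat 7.17+/8.x syntax (the id value is illustrative; filestream inputs in 8.x expect a unique id):
filebeat.inputs:
  - type: filestream
    id: test-app-log
    enabled: true
    paths:
      - /home/ubuntu/logs/test-app/path.log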

Related

Data Prepper Pipelines + OpenSearch Trace Analytics

I'm using the latest version of AWS OpenSearch, but when I go to the Trace Analytics dashboard it does not show the traces sent by Data Prepper.
Manually OpenTelemetry-instrumented application
Data Prepper is running in Docker (opensearchproject/data-prepper:latest)
OpenSearch is running on the latest version
Sample Configuration
data-prepper-config.yaml
ssl: false
pipelines.yaml
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"
raw-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  processor:
    - otel_trace_raw:
  sink:
    - opensearch:
        hosts: [ "https://opensearch-domain" ]
        username: "admin"
        password: "admin"
        index_type: trace-analytics-raw
service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  processor:
    - service_map_stateful:
  sink:
    - opensearch:
        hosts: ["https://opensearch-domain"]
        username: "admin"
        password: "admin"
        index_type: trace-analytics-service-map
remote-collector.yaml
...
exporters:
  otlp/data-prepper:
    endpoint: data-prepper-address:21890
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/data-prepper]
When I go to the Query Workbench and run the query SELECT * FROM otel-v1-apm-span, I get the list of received trace spans. But I can't see any charts on the Trace Analytics dashboard (both Traces and Services); it's just an empty dashboard.
I'm also getting a warning:
WARN org.opensearch.dataprepper.plugins.processor.oteltrace.OTelTraceRawProcessor - Missing trace group for SpanId: xxxxxxxxxxxx
The traceGroupFields are also empty.
"traceGroupFields": {
"endTime": null,
"durationInNanos": null,
"statusCode": null
}
Is there something wrong with my setup? Any help is appreciated.
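A hedged diagnostic sketch rather than a confirmed fix: the Traces and Services views rely on both trace-analytics indices, so it is worth confirming that the service-map pipeline is writing too, assuming the documented default index names otel-v1-apm-span-* and otel-v1-apm-service-map:
curl -k -u admin:admin "https://opensearch-domain/_cat/indices/otel-v1-apm-*?v"
If otel-v1-apm-service-map is missing or empty, the Services page can stay blank even though raw spans are queryable. As for the warning, trace group fields are taken from the root span of each trace, so "Missing trace group for SpanId" usually indicates that the root span for that trace was never received (or arrived too late), which would also explain the null traceGroupFields.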

How to solve Ansible unknown URL?

roles/tasks/main.yml code
- name: Download and unpack node exporter binary to /usr/local/bin
  unarchive:
    src: https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
    dest: /usr/local/bin/
    remote_src: yes
    extra_opts: [--strip-components=1]
    owner: "ec2-user"
    group: "ec2-user"
node-exporter.yml code
---
- hosts: node-exporter
  become: false
  gather_facts: false
  roles:
    - roles
error message
fatal: [ip]: FAILED! => {"changed": false, "msg": "Failure downloading https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz, Request failed: <urlopen error [Errno -2] Name or service not known>"}
If I run "ansible -m ping node-exporter" I receive pong, and "ping www.google.com" works fine. But this code does not work.
Please help me solve this problem or recommend alternative code.
(I use Amazon Linux.)
Something weird is happening there. Your role code has https://github.com/ but the error mentions ssh://github.com; are you sure that you are running the latest version of your code, or something like that?
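A hedged diagnostic sketch, since the underlying error ([Errno -2] Name or service not known) is a DNS-resolution failure: with remote_src: yes, the managed node downloads the archive itself, so that node must be able to resolve github.com. You can check name resolution on the targets directly:
ansible node-exporter -m command -a "getent hosts github.com"
If that fails, fix DNS or outbound access on the instance (for example VPC DNS settings, or a NAT gateway for private subnets). Alternatively, download on the control node and push the file, as in this sketch (paths are illustrative):
- name: Download node exporter on the control node
  delegate_to: localhost
  get_url:
    url: https://github.com/prometheus/node_exporter/releases/download/v1.2.2/node_exporter-1.2.2.linux-amd64.tar.gz
    dest: /tmp/node_exporter.tar.gz

- name: Unpack the locally downloaded archive onto the target
  unarchive:
    src: /tmp/node_exporter.tar.gz
    dest: /usr/local/bin/
    extra_opts: [--strip-components=1]
    owner: "ec2-user"
    group: "ec2-user"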

Promtail Endpoint Failure At /loki/api/v1/push On EC2 Instance Via Docker

I am using an AWS instance and I am trying to run Promtail in order to fetch logs and forward them to the Loki server. Promtail, Loki and Grafana are run through Docker.
The Loki server is running on port 3100, Promtail on 3400 and Grafana on 8001. Since it is an AWS platform, what needs to be done so that it stops throwing an error at the http://43.206.43.87:3100/loki/api/v1/push endpoint?
Here is my promtail-config.yaml
server:
  http_listen_port: 3400
  grpc_listen_port: 0
positions:
  filename: /tmp/positions.yaml
clients:
  - url: http://43.206.43.87:3100/loki/api/v1/push
scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - 43.206.43.87
        labels:
          job: varlogs
          __path__: /var/log/*log
Here is my loki-config.yaml
auth_enabled: false
server:
  http_listen_port: 3100
  grpc_listen_port: 0
common:
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    instance_addr: 43.206.43.87
    kvstore:
      store: inmemory
schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h
ruler:
  alertmanager_url: http://localhost:9093
Please help me out
All I had to do was change the loki-config.yaml file so that it reads
instance_addr: localhost
and everything worked.
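For clarity, a hedged sketch of the relevant part of loki-config.yaml after that change (everything else stays as in the question):
common:
  path_prefix: /tmp/loki
  storage:
    filesystem:
      chunks_directory: /tmp/loki/chunks
      rules_directory: /tmp/loki/rules
  replication_factor: 1
  ring:
    instance_addr: localhost
    kvstore:
      store: inmemory
Presumably the Loki container cannot use the host's public IP (43.206.43.87) as its ring address, which leaves the ring unhealthy and makes pushes to /loki/api/v1/push fail; localhost works because the single-binary ring only needs to be reachable from within the process itself.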

Error creating peer channel Amazon Managed Blockchain Hyperledger Fabric v1.4

I hope someone can help me with the following problem.
I am using Amazon Managed Blockchain with the Hyperledger Fabric v1.4 framework, and I followed this documentation: https://docs.aws.amazon.com/managed-blockchain/latest/hyperledger-fabric-dev/get-started-create-channel.html.
This is the error I get when I try to create the channel with that command line:
Command line:
docker exec cli peer channel create -c mychannel -f /opt/home/mychannel.pb -o $ORDERER --cafile /opt/home/managedblockchain-tls-chain.pem --tls
Error:
2022-01-17 10:34:47.356 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
Error: got unexpected status: BAD_REQUEST -- error validating channel creation transaction for new channel 'mychannel', could not succesfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied
The admin certificate is in a folder "admin-msp".
My configtx.yaml (I did not get any error executing a previous step with "docker exec cli configtxgen -outputCreateChannelTx /opt/home/mychannel.pb -profile OneOrgChannel -channelID mychannel --configPath /opt/home/"):
Organizations:
  - &Org1
    Name: m-Q37N3LRUKNFDXBZ7GARMYFBYIE
    ID: m-Q37N3LRUKNFDXBZ7GARMYFBYIE
    Policies: &Org1Policies
      Readers:
        Type: Signature
        Rule: "OR('Org1.member')"
        # If your MSP is configured with the new NodeOUs, you might
        # want to use a more specific rule like the following:
        # Rule: "OR('Org1.admin', 'Org1.peer', 'Org1.client')"
      Writers:
        Type: Signature
        Rule: "OR('Org1.member')"
        # If your MSP is configured with the new NodeOUs, you might
        # want to use a more specific rule like the following:
        # Rule: "OR('Org1.admin', 'Org1.client')"
      Admins:
        Type: Signature
        Rule: "OR('Org1.admin')"
    # MSPDir is the filesystem path which contains the MSP configuration.
    MSPDir: /opt/home/admin-msp
    # AnchorPeers defines the location of peers which can be used for
    # cross-org gossip communication. Note, this value is only encoded in
    # the genesis block in the Application section context.
    AnchorPeers:
      - Host: 127.0.0.1
        Port: 7051

Capabilities:
  Channel: &ChannelCapabilities
    V1_4_3: true
    V1_3: false
    V1_1: false
  Orderer: &OrdererCapabilities
    V1_4_2: true
    V1_1: false
  Application: &ApplicationCapabilities
    V1_4_2: true
    V1_3: false
    V1_2: false
    V1_1: false

Channel: &ChannelDefaults
  Policies:
    # Who may invoke the 'Deliver' API
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    # Who may invoke the 'Broadcast' API
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    # By default, who may modify elements at this config level
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"
  Capabilities:
    <<: *ChannelCapabilities

Application: &ApplicationDefaults
  Policies: &ApplicationDefaultPolicies
    LifecycleEndorsement:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Endorsement:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"
  Capabilities:
    <<: *ApplicationCapabilities

Profiles:
  OneOrgChannel:
    <<: *ChannelDefaults
    Consortium: AWSSystemConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - <<: *Org1
My docker-compose-cli.yaml file:
version: '2'
services:
  cli:
    container_name: cli
    image: hyperledger/fabric-tools:1.4
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - FABRIC_LOGGING_SPEC=info # Set logging level to debug for more verbose logging
      - CORE_PEER_ID=cli
      - CORE_CHAINCODE_KEEPALIVE=10
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/home/managedblockchain-tls-chain.pem
      - CORE_PEER_LOCALMSPID=$Member
      - CORE_PEER_MSPCONFIGPATH=/opt/home/admin-msp
      - CORE_PEER_ADDRESS=$MyPeerNodeEndpoint
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash
    volumes:
      - /var/run/:/host/var/run/
      - /home/ec2-user/fabric-samples/chaincode:/opt/gopath/src/github.com/
      - /home/ec2-user:/opt/home
Thanks in advance :).
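A hedged diagnostic sketch rather than a confirmed fix: in Fabric 1.4, when the MSP does not use NodeOUs, a channel-creation transaction only satisfies the 'Admins' signature policy if the signing certificate is also present in the MSP's admincerts folder. It may be worth checking from inside the cli container that admin-msp contains an admincerts entry matching the certificate in signcerts (paths follow the question's volume mounts):
docker exec cli ls /opt/home/admin-msp
docker exec cli sh -c 'for f in /opt/home/admin-msp/signcerts/* /opt/home/admin-msp/admincerts/*; do echo "$f"; openssl x509 -noout -fingerprint -in "$f"; done'
If the fingerprints differ, or admincerts is missing, copying the enrolled admin certificate into admin-msp/admincerts and re-running the peer channel create command is a common way to get past this "0 sub-policies were satisfied" error.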

How to correctly use dynamic inventories with Ansible?

I am trying to apply initial configuration and software installation to a newly created AWS EC2 instance using Ansible. If I run my playbooks independently, it works just as I want. However, if I try to automate it into a single playbook using two imports, it doesn't work (probably because the dynamic inventory can't pick up the newly created IP address?)...
Running together:
[WARNING]: Could not match supplied host pattern, ignoring:
aws_region_eu_central_1
PLAY [variables from dynamic inventory] ****************************************
skipping: no hosts matched
Running separately:
TASK [Gathering Facts] *********************************************************
[WARNING]: Platform linux on host XX.XX.XX.XX is using the discovered Python
interpreter at /usr/bin/python, but future installation of another Python
interpreter could change the meaning of that path. See https://docs.ansible.com
/ansible/2.10/reference_appendices/interpreter_discovery.html for more
information.
ok: [XX.XX.XX.XX]
This is my main playbook:
- import_playbook: server-setup.yml
- import_playbook: server-configuration.yml
server-setup.yml:
---
# variables from dynamic inventory
- name: variables from dynamic inventory
  remote_user: ec2-user
  hosts: localhost
  roles:
    - ec2-instance
server-configuration.yml:
---
# variables from dynamic inventory
- name: variables from dynamic inventory
  remote_user: ec2-user
  become: true
  become_method: sudo
  become_user: root
  ignore_unreachable: true
  hosts: aws_region_eu_central_1
  gather_facts: false
  pre_tasks:
    - pause:
        minutes: 5
  roles:
    - { role: epel, sudo: true }
    - { role: nodejs, sudo: true }
This is my ansible.cfg file:
[defaults]
inventory = test_aws_ec2.yaml
private_key_file = master-key.pem
enable_plugins = aws_ec2
host_key_checking = False
pipelining = True
log_path = ansible.log
roles_path = /roles
forks = 1000
and finally my hosts.ini:
[local]
localhost ansible_python_interpreter=/usr/local/bin/python3
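A hedged sketch of one likely fix: the aws_ec2 inventory is parsed once when the run starts, so a host created mid-run is not in it yet. Ansible's meta: refresh_inventory task forces the inventory to be re-read, so inserting a play between the two imports (play name is illustrative) may let the second playbook match the new instance:
- import_playbook: server-setup.yml

- name: re-read the aws_ec2 inventory so the new instance appears
  hosts: localhost
  gather_facts: false
  tasks:
    - meta: refresh_inventory

- import_playbook: server-configuration.yml
Alternatively, the play that creates the instance can use add_host to place the new IP into the aws_region_eu_central_1 group for the remainder of the run.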