Replicate netplan nameserver configuration for CentOS 7

First off, a warning: I'm junior level with little experience using CentOS.
I'm running a Puppet environment with a few different machines. Some example modules I'm running are consul and puppet-dns. For the Ubuntu machines I have used netplan to configure my DNS clients.
DNS server machine:
include dns::server

# Forwarders
dns::server::options { '/etc/bind/named.conf.options':
  dnssec_enable     => false,
  dnssec_validation => 'no',
  forwarders        => [ 'IP1' ],
}

dns::zone { 'consul':
  zone_type       => 'forward',
  forward_policy  => 'only',
  allow_forwarder => [ '127.0.0.1 port 8600' ],
}
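For reference, the zone stanza this should render into named.conf looks roughly like the following (a sketch; the exact output depends on the puppet-dns module version). Anything under .consul gets forwarded to the local Consul agent's DNS interface on port 8600:

zone "consul" IN {
    type forward;
    forward only;
    forwarders { 127.0.0.1 port 8600; };
};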
DNS Client setup
/^(Debian|Ubuntu)$/: {
  class { 'netplan':
    config_file   => '/etc/netplan/50-cloud-init.yaml',
    ethernets     => {
      'ens3' => {
        'dhcp4'       => true,
        'nameservers' => {
          'search'    => ['node.consul'],
          'addresses' => [ "${dir_ip}" ],
        },
      },
    },
    netplan_apply => true,
  }
}
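With ${dir_ip} substituted, the rendered /etc/netplan/50-cloud-init.yaml comes out along these lines (a sketch; <dir_ip> stands in for the DNS server's address):

network:
  version: 2
  ethernets:
    ens3:
      dhcp4: true
      nameservers:
        search: [node.consul]
        addresses: [<dir_ip>]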
In order to replicate this on CentOS 7, I came across ifcfg files
(/etc/sysconfig/network-scripts/ifcfg-ens3); however, I am not sure how to replicate the result from above within one of these files. Does anyone have experience with this?
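The closest I could piece together from the ifcfg docs is something like this (a sketch, assuming the interface is ens3 and <dir_ip> stands in for the DNS server's address; PEERDNS=no keeps the DHCP lease from overwriting the DNS entries), but I'm not sure it's right:

# /etc/sysconfig/network-scripts/ifcfg-ens3
DEVICE=ens3
BOOTPROTO=dhcp
ONBOOT=yes
# Use these DNS settings instead of the ones from the DHCP lease
PEERDNS=no
DNS1=<dir_ip>
DOMAIN=node.consul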

After some reading, I decided to edit /etc/resolv.conf with the help of the Puppet module saz-resolv_conf:
class { 'resolv_conf':
  nameservers => ["${dir_ip}"],
  searchpath  => ['node.consul'],
}
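With ${dir_ip} filled in, this renders an /etc/resolv.conf along these lines (a sketch):

search node.consul
nameserver <dir_ip>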
I was a bit skeptical about this at first, since the file contained automatically generated entries from OpenStack; however, everything is working as expected.


How to create a distribution on CloudFront using Paws

My goal is to create a basic CloudFront distribution using the Paws SDK. So far I have been unable to get past an error 400 with the following configuration:
use Paws;
use Data::Printer;

my $cloudfront = Paws->service('CloudFront');
my $CreateDistributionResult = $cloudfront->CreateDistribution(
  DistributionConfig => {
    CallerReference => "1578211502",
    Origins         => {
      Quantity => 1,
      Items    => [{
        DomainName => "foo.s3-website-ap-southeast-2.amazonaws.com",
        Id         => 'S3-Website-foo.s3-website-ap-southeast-2.amazonaws.com',
      }],
    },
    DefaultCacheBehavior => {
      ForwardedValues => {
        Cookies     => { Forward => 'none' },
        QueryString => 0,
      },
      TargetOriginId => 'S3-Website-foo.s3-website-ap-southeast-2.amazonaws.com',
      TrustedSigners => {
        Enabled  => 0,
        Quantity => 0,
      },
      ViewerProtocolPolicy => 'redirect-to-https',
      MinTTL               => 0,
    },
    Comment => "",
    Enabled => 1,
  },
);
p $CreateDistributionResult;
The above is the complete set of only the required fields as defined in the API documentation here and here. However, when I run it, it crashes with the following:
[foo#bar~]# perl aws.pl
Paws::CloudFront is not stable / supported / entirely developed at /root/perl5/lib/perl5/Paws/CloudFront.pm line 2.
Bad Request
Trace begun at /root/perl5/lib/perl5/Paws/Net/RestXMLResponse.pm line 24
Paws::Net::RestXMLResponse::process('Paws::Net::RestXMLResponse=HASH(0x2f275b8)', 'Paws::CloudFront::CreateDistribution=HASH(0x2fbe6e0)', 'Paws::Net::APIResponse=HASH(0x30c0ec0)') called at /root/perl5/lib/perl5/Paws/Net/Caller.pm line 46
Paws::Net::Caller::caller_to_response('Paws::Net::Caller=HASH(0x16d7bb8)', 'Paws::CloudFront=HASH(0x2a615f8)', 'Paws::CloudFront::CreateDistribution=HASH(0x2fbe6e0)', 'Paws::Net::APIResponse=HASH(0x30c0ec0)') called at /root/perl5/lib/perl5/Paws/Net/RetryCallerRole.pm line 19
Paws::Net::RetryCallerRole::do_call('Paws::Net::Caller=HASH(0x16d7bb8)', 'Paws::CloudFront=HASH(0x2a615f8)', 'Paws::CloudFront::CreateDistribution=HASH(0x2fbe6e0)') called at /root/perl5/lib/perl5/Paws/CloudFront.pm line 49
Paws::CloudFront::CreateDistribution('Paws::CloudFront=HASH(0x2a615f8)', 'DistributionConfig', 'HASH(0x2f9c500)') called at aws.pl line 6
What is a correct minimal call that would work here?
You don't have anything in the Comment argument. Please try passing a value there.
AWS is a bit stingy with these kinds of things. Also, please let me know whether or not that fixes the issue.
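If an empty Comment is indeed the problem, the fix would be a one-line change in the DistributionConfig (the string value here is just a hypothetical example; any non-empty string should do):

    Comment => "Created via Paws",  # instead of Comment => ""
    Enabled => 1,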

Push data directly from Filebeat to the AWS ES managed service

My issue is that I am trying to stream data from Filebeat to AWS Elasticsearch.
I approached this by providing the AWS endpoint in the Beats output entry.
I tried both port 80 and port 443, to no avail.
I checked this post, and from it I gather that it is possible to push directly to AWS, but I still cannot figure out how.
It would be really helpful if any of you has been through this and could shed some light!
Thank you!
Turns out it was a problem with permissions.
Make sure that the logs Filebeat is trying to stream have the same permissions as filebeat.yml.
You can simply issue a chmod 777 on both files.
Finally, make sure to append :443 to the AWS ES endpoint.
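In filebeat.yml, the output section then looks something like this (a sketch; the endpoint hostname is a placeholder):

output.elasticsearch:
  # AWS ES managed endpoint, with :443 appended
  hosts: ["https://my-domain-abc123.eu-west-1.es.amazonaws.com:443"]
  protocol: "https"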
I was using version 7.10 of Filebeat and Logstash.
The blog below helped me a lot. The steps are as follows:
Open filebeat.yml in any editor of your choice, from /etc/filebeat/ on Linux or C:\Program Files\filebeat-7.10.0 on Windows:
filebeat:
  inputs:
    - paths:
        - E:/nginx-1.20.1/logs/*.log
      input_type: log

filebeat.config.modules:
  enabled: true
  path: ${path.config}/modules.d/*.yml

output:
  logstash:
    hosts: ["localhost:5044"]
Logstash Configuration
input {
  beats {
    port => 5044
    ssl  => false
  }
}
filter {
  grok {
    match     => [ "message", "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}" ]
    overwrite => [ "message" ]
  }
  mutate {
    convert => ["response", "integer"]
    convert => ["bytes", "integer"]
    convert => ["responsetime", "float"]
  }
  geoip {
    source  => "clientip"
    target  => "geoip"
    add_tag => [ "nginx-geoip" ]
  }
  date {
    match        => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    remove_field => [ "timestamp" ]
  }
  useragent {
    source => "agent"
  }
}
output {
  elasticsearch {
    hosts              => ["https://arun-learningsubway-ybalglooophuhyjmik3zmkmiq4.ap-south-1.es.amazonaws.com:443"]
    index              => "arun_nginx"
    document_type      => "%{[@metadata][type]}"
    user               => "myusername"
    password           => "mypassword"
    manage_template    => false
    template_overwrite => false
    ilm_enabled        => false
  }
}

CloudTrail logs to AWS Elasticsearch

I am attempting to get CloudTrail logs from multiple AWS accounts from S3 into Elasticsearch. Things appeared to be working on and off, until now, when everything ground to a halt. The error shown is below:
[2018-10-16T21:33:42,096][ERROR][logstash.outputs.elasticsearch] Attempted to send a bulk request to elasticsearch, but no there are no living connections in the connection pool. Perhaps Elasticsearch is unreachable or down? {:error_message=>"No Available connections", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError", :will_retry_in_seconds=>8}
[2018-10-16T21:33:44,406][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://vpc-sec-dummytext.eu-west-1.es.amazonaws.com:443/, :path=>"/"}
[2018-10-16T21:33:44,430][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"https://vpc-sec-dummytext.eu-west-1.es.amazonaws.com:443/"}
[2018-10-16T21:33:51,426][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>413, :url=>"https://vpc-sec-dummytext.eu-west-1.es.amazonaws.com:443/_bulk"}
Also, here is my Logstash config, as I am using Logstash to do the ingestion:
input {
  s3 {
    bucket          => "dummy-s3"
    region          => "eu-west-1"
    type            => "cloudtrail"
    sincedb_path    => "/tmp/logstash/cloudtrail"
    exclude_pattern => "/CloudTrail-Digest/"
    interval        => 120
    codec           => "json"
  }
}
filter {
  if [type] == "cloudtrail" {
    json {
      source => "message"
    }
    split {
      field   => "Records"
      add_tag => "splitted"
    }
    if ("splitted" in [tags]) {
      date {
        match        => ["eventTime", "ISO8601"]
        remove_tag   => ["splitted"]
        remove_field => ["timestamp"]
      }
    }
    geoip {
      source  => "[Records][sourceIPAddress]"
      target  => "geoip"
      add_tag => ["cloudtrail-geoip"]
    }
    mutate {
      gsub => [
        "eventSource", "\.amazonaws\.com$", "",
        "apiVersion", "_", "-"
      ]
    }
  }
}
output {
  elasticsearch {
    hosts              => ["vpc-sec-dummytext.eu-west-1.es.amazonaws.com:443"]
    ssl                => true
    index              => "cloudtrail-%{+YYYY.MM.dd}"
    doc_as_upsert      => true
    template_overwrite => true
  }
  stdout {
    codec => rubydebug
  }
}
When Logstash is started or restarted on the Ubuntu EC2 instance, logs are ingested for a few minutes, then ingestion stops.
Any help will really be appreciated.

Yii2 Web Service not returning a single row from database

I've made a web service using the Yii2 basic template. I have a table called 'ely_usuario'. When I call it with:
http://localhost/basic/web/index.php/ely-usuario/
it works fine and returns all the rows in the ely_usuario table,
but when I try to get just one record, for example:
http://localhost/basic/web/index.php/ely-usuario/29
it doesn't work and shows me a not-found page. I've made the model class using Gii.
Here's my controller:
<?php

namespace app\controllers;

use yii\rest\ActiveController;

class ElyUsuarioController extends ActiveController
{
    public $modelClass = 'app\models\ElyUsuario';
}
My configs:
'urlManager' => [
    'enablePrettyUrl' => true,
    'enableStrictParsing' => false,
    'showScriptName' => false,
    'rules' => [
        ['class' => 'yii\rest\UrlRule', 'controller' => 'ely-usuario'],
    ],
],
Another weird thing you might have noticed is that 'enableStrictParsing' is false; the Yii2 guide says it should be true, but for me it only works with false.
Thanks
You need to change the code in your configs. I hope you will get an idea from the following code. It works fine for me!
'urlManager' => [
    'enablePrettyUrl' => true,
    'showScriptName' => false,
    'rules' => [
        '<controller:(ely-usuario)>/<action>/<id:\d+>' => '<controller>/<action>',
    ],
],
And in your controller, please check your specific action. It should be coded like this:
public function actionTransactions($id = null)
{
    if ($id != null) {
        // retrieve a single row
    } else {
        // retrieve multiple rows
    }
}
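For the single-row branch, a minimal sketch of what that could look like with an ActiveRecord lookup (assuming the app\models\ElyUsuario model from the question and an integer primary key; the exception mirrors what the built-in view action of yii\rest\ActiveController does):

public function actionTransactions($id = null)
{
    if ($id !== null) {
        // Retrieve a single row by primary key, or 404 if it does not exist.
        $model = \app\models\ElyUsuario::findOne($id);
        if ($model === null) {
            throw new \yii\web\NotFoundHttpException("No record found with id $id.");
        }
        return $model;
    }
    // Retrieve all rows.
    return \app\models\ElyUsuario::find()->all();
}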
Also, please check this link for reference:
Why RESTfull API request to view return 404 in Yii2?
I hope it helps!

How to resolve Recoverable fatal error: Object of class Drupal\Core\Link could not be converted to string in Drupal\Component\Utility\Xss::filter?

I'm following the How to Configure SimpleSAMLphp for Drupal 8 on Acquia instructions. I'm at the bottom, where it says: "SimpleSAMLphp_auth module settings. I personally recommend to store configuration for SimpleSAMLphp_auth module settings in settings.php." Once I copied the code from that snippet into my settings.php file (pasted at the bottom) and pushed it to Acquia, I got this error when I tried to log in via the dev.mysite.com/user URL.
The website encountered an unexpected error. Please try again later. Recoverable fatal error: Object of class Drupal\Core\Link could not be converted to string in Drupal\Component\Utility\Xss::filter() (line 67 of core/lib/Drupal/Component/Utility/Xss.php).
The code shown below is what I have in my settings.php file.
$config['simplesamlphp_auth.settings'] = [
  // Basic settings.
  'activate' => TRUE, // Enable or disable SAML login.
  'auth_source' => 'default-sp',
  'login_link_display_name' => 'Login with your SSO account',
  'register_users' => TRUE,
  'debug' => FALSE,
  // Local authentication.
  'allow' => [
    'default_login' => TRUE,
    'set_drupal_pwd' => TRUE,
    'default_login_users' => '',
    'default_login_roles' => [
      'authenticated' => FALSE,
      'administrator' => 'administrator',
    ],
  ],
  'logout_goto_url' => '',
  // User info and syncing.
  // `unique_id` is specified in Transient format; otherwise this should be `UPN`.
  // Please talk to your SSO administrators about which format you should be using.
  'unique_id' => 'http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn',
  'user_name' => 'uid',
  'mail_attr' => 'mail',
  'sync' => [
    'mail' => FALSE,
    'user_name' => FALSE,
  ],
];
If I comment out this whole block of code in my settings.php file, then I can log in to my dev.mysite.com/user Drupal site. One other thing I'm not clear on: do I check the "Activate authentication via SimpleSAMLphp" option first, then copy the code snippet to my settings.php file and push to Acquia, or the other way around?
Any help is much appreciated.
It seems that updating to version 8.x-3.0-rc2 resolves the error above. However, it looks like it introduces another issue: "This site can't be reached", with the site redirected to port 80 instead.