Why does this use of the DSC File resource with a shared folder only succeed once? - windows-server-2012-r2

I am testing the use of the DSC File resource for copying a directory of files from a shared folder to another machine.
My problem is that this works once, but running the same code a second time fails. If I restart the target machine, the script again runs correctly but fails on the second attempt.
Can anyone tell me why this is and whether I need to be doing something differently?
The machines I am using are called:
"S1" => Server 2012 R2 (This has the shared folder and user setup for read access)
"S2" => Virtual Server 2012 R2 running on S1 (This is the target machine)
The script I am running is this:
$ConfigurationData = @{
    AllNodes = @(
        @{
            NodeName = "*"
            PSDscAllowPlainTextPassword = $true
        }
        @{
            NodeName = "S2"
        }
    )
}
Configuration Test {
    param (
        [Parameter(Mandatory=$true)]
        [PSCredential]$credential
    )
    Node $AllNodes.NodeName {
        File DirectoryCopy {
            DestinationPath = "C:\Shared\Files"
            SourcePath = "\\S1\Shared\Files"
            Ensure = "Present"
            Credential = $credential
            Type = "Directory"
            Recurse = $true
        }
    }
}
$username = "dscUser"
$password="dscPassword!"|ConvertTo-SecureString -AsPlainText -Force
$credential = New-Object System.Management.Automation.PsCredential("$username",$password)
Test -OutputPath "C:\Scripts" -ConfigurationData $ConfigurationData -Credential $credential
Start-DscConfiguration -ComputerName S2 -path "C:\Scripts" -Verbose -Wait
The output of running this twice is as follows:
PS C:\repo> C:\Scripts\Test.ps1
Directory: C:\Scripts
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 16/10/2015 11:12 1646 S2.mof
VERBOSE: Perform operation 'Invoke CimMethod' with following parameters, ''methodName' = SendConfigurationApply,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' = ro
ot/Microsoft/Windows/DesiredStateConfiguration'.
VERBOSE: An LCM method call arrived from computer S1 with user sid S-1-5-21-1747786857-595474378-2516325245-500.
VERBOSE: [S2]: LCM: [ Start Set ]
VERBOSE: [S2]: LCM: [ Start Resource ] [[File]DirectoryCopy]
VERBOSE: [S2]: LCM: [ Start Test ] [[File]DirectoryCopy]
VERBOSE: [S2]: [[File]DirectoryCopy] Building file list from cache.
VERBOSE: [S2]: LCM: [ End Test ] [[File]DirectoryCopy] in 0.2500 seconds.
VERBOSE: [S2]: LCM: [ Start Set ] [[File]DirectoryCopy]
VERBOSE: [S2]: [[File]DirectoryCopy] Building file list from cache.
VERBOSE: [S2]: LCM: [ End Set ] [[File]DirectoryCopy] in 0.2660 seconds.
VERBOSE: [S2]: LCM: [ End Resource ] [[File]DirectoryCopy]
VERBOSE: [S2]: LCM: [ End Set ]
VERBOSE: [S2]: LCM: [ End Set ] in 0.6720 seconds.
VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 1.59 seconds
PS C:\repo> C:\Scripts\Test.ps1
Directory: C:\Scripts
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a--- 16/10/2015 11:13 1646 S2.mof
VERBOSE: Perform operation 'Invoke CimMethod' with following parameters, ''methodName' = SendConfigurationApply,'className' = MSFT_DSCLocalConfigurationManager,'namespaceName' = ro
ot/Microsoft/Windows/DesiredStateConfiguration'.
VERBOSE: An LCM method call arrived from computer S1 with user sid S-1-5-21-1747786857-595474378-2516325245-500.
VERBOSE: [S2]: LCM: [ Start Set ]
VERBOSE: [S2]: LCM: [ Start Resource ] [[File]DirectoryCopy]
VERBOSE: [S2]: LCM: [ Start Test ] [[File]DirectoryCopy]
VERBOSE: [S2]: [[File]DirectoryCopy] An error occurs when accessing the network share with the specified credential. Please make sure the credential is c
orrect and the network share is accessible. Note that Credential should not be specified with the local path.
VERBOSE: [S2]: [[File]DirectoryCopy] The related file/directory is: \\S1\Shared\Files.
A specified logon session does not exist. It may already have been terminated. An error occurs when accessing the network share with the specified credential. Please make sure
the credential is correct and the network share is accessible. Note that Credential should not be specified with the local path. The related file/directory is: \\S1\Shared\Files.
+ CategoryInfo : NotSpecified: (:) [], CimException
+ FullyQualifiedErrorId : Windows System Error 1312
+ PSComputerName : S2
VERBOSE: [S2]: LCM: [ End Set ]
LCM failed to move one or more resources to their desired state.
+ CategoryInfo : NotSpecified: (root/Microsoft/...gurationManager:String) [], CimException
+ FullyQualifiedErrorId : MI RESULT 1
+ PSComputerName : S2
VERBOSE: Operation 'Invoke CimMethod' complete.
VERBOSE: Time taken for configuration job to complete is 3.027 seconds
Any help with this is appreciated, as it's driving me nuts.
Thanks.

I think I have found the answer.
When specifying the username, I should have used 'S1\dscUser' instead of 'dscUser'.
These machines are not in a domain.
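For anyone else hitting this, a minimal sketch of the corrected credential construction (my assumption being that, in a workgroup, the account has to be qualified with the computer that owns it, here S1):
# Machine-qualified account name; the bare "dscUser" produced Windows System Error 1312
# ("A specified logon session does not exist") on the second run.
$username = "S1\dscUser"
$password = "dscPassword!" | ConvertTo-SecureString -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential($username, $password)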

Related

Elastic Beanstalk post install script won't execute correctly

I want to publish a .NET Core application to Elastic Beanstalk, where it will run on Windows Server. I want to make some changes to the IIS settings, more precisely the Queue Length of the Application Pool.
I have an aws-windows-deployment-manifest.json file with the following content:
{
    "manifestVersion": 1,
    "deployments": {
        "aspNetCoreWeb": [
            {
                "name": "my-dotnet-core-app",
                "scripts": {
                    "postInstall": {
                        "file": "SetupScripts/setupAppPool.ps1"
                    }
                }
            }
        ]
    }
}
Inside the setupAppPool.ps1 script is the following content:
Import-Module WebAdministration
$defaultAppPool = Get-ItemProperty IIS:\AppPools\DefaultAppPool
#$defaultAppPool.PSPath
Write-Host "Display Queue Length before change: " -NoNewline
(Get-ItemProperty IIS:\AppPools\DefaultAppPool\).queueLength
#Value changed here
Set-ItemProperty -Path $defaultAppPool.PSPath -Name queueLength -Value 3000
Write-Host "Display Queue Length after change: " -NoNewline
(Get-ItemProperty IIS:\AppPools\DefaultAppPool\).queueLength
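(For reference only: the same queue-length change could probably also be made through appcmd.exe, which does not depend on the IIS: PowerShell drive provider used above; the path and switch below are my assumptions, so verify them on the instance.)
# Hypothetical fallback: set the DefaultAppPool queue length via appcmd instead of the IIS: provider
& "$env:windir\System32\inetsrv\appcmd.exe" set apppool "DefaultAppPool" /queueLength:3000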
If it is a simple script like hostname, it executes with no problem; however, this one fails with the following error:
2022-07-12 17:41:25,025 [INFO] Running config InfoTask-TailLogs
AWS.DeploymentCommands.2022.07.12-17.40.30.log:
Starting deployment for my-dotnet-core-app of type AspNetCoreWeb
Parameters:
appBundle: .
iisPath: /
iisWebSite: Default Web Site
Starting restart of my-dotnet-core-app
---------- Executing command "C:\Windows\system32\iisreset.exe /restart" ----------
---------- CWD "" ----------
Attempting stop...
Internet services successfully stopped
Attempting start...
Internet services successfully restarted
---------- Command complete with exit code 0 ----------
Starting ASP.NET Core web deployment my-dotnet-core-app at C:\inetpub\AspNetCoreWebApps\my-dotnet-core-app with IIS path Default Web Site/
Copying C:\staging\. to C:\inetpub\AspNetCoreWebApps\my-dotnet-core-app
Removing existing application from IIS
Adding application to IIS
Commit changes to IIS
---------- Executing command "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy unrestricted -NonInteractive -NoProfile -Command "& { & \"C:\staging\SetupScripts/setupAppPool.ps1\"; exit $LastExitCode }" " ----------
---------- CWD "C:\inetpub\AspNetCoreWebApps\my-dotnet-core-app" ----------
Get-ItemProperty : Cannot retrieve the dynamic parameters for the cmdlet. Retrieving the COM class factory for
component with CLSID {688EEEE5-6A7E-422F-B2E1-6AF00DC944A6} failed due to the following error: 80040154 Class not
registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).
At C:\staging\SetupScripts\setupAppPool.ps1:2 char:19
+ $defaultAppPool = Get-ItemProperty IIS:\AppPools\DefaultAppPool
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Get-ItemProperty], ParameterBindingException
+ FullyQualifiedErrorId : GetDynamicParametersException,Microsoft.PowerShell.Commands.GetItemPropertyCommand
Get-ItemProperty : Cannot retrieve the dynamic parameters for the cmdlet. Retrieving the COM class factory for
component with CLSID {688EEEE5-6A7E-422F-B2E1-6AF00DC944A6} failed due to the following error: 80040154 Class not
registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).
At C:\staging\SetupScripts\setupAppPool.ps1:7 char:2
+ (Get-ItemProperty IIS:\AppPools\DefaultAppPool\).queueLength
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Get-ItemProperty], ParameterBindingException
+ FullyQualifiedErrorId : GetDynamicParametersException,Microsoft.PowerShell.Commands.GetItemPropertyCommand
Set-ItemProperty : Cannot bind argument to parameter 'Path' because it is null.
At C:\staging\SetupScripts\setupAppPool.ps1:10 char:24
+ Set-ItemProperty -Path $defaultAppPool.PSPath -Name queueLength -Valu ...
+ ~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidData: (:) [Set-ItemProperty], ParameterBindingValidationException
+ FullyQualifiedErrorId : ParameterArgumentValidationErrorNullNotAllowed,Microsoft.PowerShell.Commands.SetItemProp
ertyCommand
Get-ItemProperty : Cannot retrieve the dynamic parameters for the cmdlet. Retrieving the COM class factory for
component with CLSID {688EEEE5-6A7E-422F-B2E1-6AF00DC944A6} failed due to the following error: 80040154 Class not
registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).
At C:\staging\SetupScripts\setupAppPool.ps1:13 char:2
+ (Get-ItemProperty IIS:\AppPools\DefaultAppPool\).queueLength
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Get-ItemProperty], ParameterBindingException
+ FullyQualifiedErrorId : GetDynamicParametersException,Microsoft.PowerShell.Commands.GetItemPropertyCommand
Display Queue Length before change: Display Queue Length after change:
---------- Command complete with exit code 0 ----------
Error messages running the command: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy unrestricted -NonInteractive -NoProfile -Command "& { & \"C:\staging\SetupScripts/setupAppPool.ps1\"; exit $LastExitCode }"
Get-ItemProperty : Cannot retrieve the dynamic parameters for the cmdlet. Retrieving the COM class factory for
component with CLSID {688EEEE5-6A7E-422F-B2E1-6AF00DC944A6} failed due to the following error: 80040154 Class not
registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).
At C:\staging\SetupScripts\setupAppPool.ps1:2 char:19
+ $defaultAppPool = Get-ItemProperty IIS:\AppPools\DefaultAppPool
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Get-ItemProperty], ParameterBindingException
+ FullyQualifiedErrorId : GetDynamicParametersException,Microsoft.PowerShell.Commands.GetItemPropertyCommand
Get-ItemProperty : Cannot retrieve the dynamic parameters for the cmdlet. Retrieving the COM class factory for
component with CLSID {688EEEE5-6A7E-422F-B2E1-6AF00DC944A6} failed due to the following error: 80040154 Class not
registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).
At C:\staging\SetupScripts\setupAppPool.ps1:7 char:2
+ (Get-ItemProperty IIS:\AppPools\DefaultAppPool\).queueLength
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Get-ItemProperty], ParameterBindingException
+ FullyQualifiedErrorId : GetDynamicParametersException,Microsoft.PowerShell.Commands.GetItemPropertyCommand
Set-ItemProperty : Cannot bind argument to parameter 'Path' because it is null.
At C:\staging\SetupScripts\setupAppPool.ps1:10 char:24
+ Set-ItemProperty -Path $defaultAppPool.PSPath -Name queueLength -Valu ...
+ ~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidData: (:) [Set-ItemProperty], ParameterBindingValidationException
+ FullyQualifiedErrorId : ParameterArgumentValidationErrorNullNotAllowed,Microsoft.PowerShell.Commands.SetItemProp
ertyCommand
Get-ItemProperty : Cannot retrieve the dynamic parameters for the cmdlet. Retrieving the COM class factory for
component with CLSID {688EEEE5-6A7E-422F-B2E1-6AF00DC944A6} failed due to the following error: 80040154 Class not
registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)).
At C:\staging\SetupScripts\setupAppPool.ps1:13 char:2
+ (Get-ItemProperty IIS:\AppPools\DefaultAppPool\).queueLength
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Get-ItemProperty], ParameterBindingException
+ FullyQualifiedErrorId : GetDynamicParametersException,Microsoft.PowerShell.Commands.GetItemPropertyCommand
---------- Executing command "C:\Windows\system32\iisreset.exe /start" ----------
---------- CWD "" ----------
Attempting start...
Internet services successfully started
---------- Command complete with exit code 0 ----------
AWSCommandWrapper.log:
How can I solve this problem?

Using a (Zustand) function mock with Jest results in "TypeError: create is not a function"

I'm following the Zustand wiki to implement testing, but the provided solution is not working for a basic app-rendering test. My project is built on top of the Electron React Boilerplate project.
Here's the full error. Jest is using Node with experimental-vm-modules because I followed the Jest docs to support ESM modules.
$ cross-env NODE_OPTIONS=--experimental-vm-modules jest
(node:85003) ExperimentalWarning: VM Modules is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
jest-haste-map: Haste module naming collision: myproject
The following files share their name; please adjust your hasteImpl:
* <rootDir>/package.json
* <rootDir>/src/package.json
FAIL src/__tests__/App.test.tsx
● Test suite failed to run
TypeError: create is not a function
12 | }
13 |
> 14 | const useNotifs = create<NotifsState>(
| ^
15 | // devtools(
16 | (set) => ({
17 | notifStore: notifsDefault.notifStore,
at src/state/notifs.ts:14:19
at src/state/notifs.ts:14:19
at TestScheduler.scheduleTests (node_modules/@jest/core/build/TestScheduler.js:333:13)
at runJest (node_modules/@jest/core/build/runJest.js:387:19)
at _run10000 (node_modules/@jest/core/build/cli/index.js:408:7)
Test Suites: 1 failed, 1 total
Tests: 0 total
Snapshots: 0 total
Time: 11.882 s
Ran all test suites.
error Command failed with exit code 1.
At the top of the notifs.ts file, Zustand is imported normally with import create from 'zustand'.
Jest config in package.json:
...
"moduleNameMapper": {
"\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$": "<rootDir>/config/mocks/fileMock.js",
"\\.(css|less|sass|scss)$": "identity-obj-proxy",
"zustand": "<rootDir>/src/__mocks__/zustand.js",
},
"transformIgnorePatterns": [
"node_modules/(?!(zustand)/)",
"<rootDir>/src/node_modules/"
],
"moduleDirectories": [
"node_modules",
"src/node_modules"
],
"moduleFileExtensions": [
"js",
"jsx",
"ts",
"tsx",
"json"
],
"moduleDirectories": [
"node_modules",
"src/node_modules"
],
"extensionsToTreatAsEsm": [
".ts",
".tsx"
],
...
I have left the ./src/__mocks__/zustand.js file exactly the same as from the Zustand wiki Testing page. I receive the same error whether or not I have zustand in the transformIgnorePatterns.
My Babel configuration includes [require('@babel/plugin-proposal-class-properties'), { loose: true }], in the plugins section, and output.library.type is 'commonjs2'
My tsconfig.json has compilerOptions.module set to "CommonJS", and the project's package.json "type" field is set to "commonjs".
Dependency versions:
"#babel/core": "^7.12.9",
"#babel/preset-env": "^7.12.7",
"#babel/preset-react": "^7.12.7",
"#babel/preset-typescript": "^7.12.7",
"#babel/register": "^7.12.1",
"#babel/plugin-proposal-class-properties": "^7.12.1",
"#testing-library/jest-dom": "^5.11.6",
"#testing-library/react": "^11.2.2",
"babel-jest": "^27.0.6",
"babel-loader": "^8.2.2",
"jest": "^27.0.6",
"regenerator-runtime": "^0.13.9",
"source-map-support": "^0.5.19",
"typescript": "^4.0.5",
"webpack": "^5.5.1",
"zustand": "^3.5.5"
I don't know what else could be relevant, just let me know if anything else is needed. Any and all help appreciated, thanks for your time.
To do this you should use the actual store of your app:
const initialStoreState = useStore.getState()
beforeEach(() => {
    useStore.setState(initialStoreState, true)
})
useStore.setState({ me: memberMockData, isAdmin: true })
The documentation seems off, so don't follow it.
Using jest 28.0.0-alpha.0 will simply resolve the issue.
I think the problem is that zustand uses '.mjs' as the entry point.
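(If you want to try the pre-release route mentioned above, the install would be something along the lines of the following; the exact tag is taken from the answer and may have moved on since:)
yarn add --dev jest@28.0.0-alpha.0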

Newman / Postman - Cannot replace value for a key in the Environment JSON, from command line

I'm new to both Postman and Newman.
I have created my simple test which uses the Environment Variables JSON for some properties values.
It runs fine when the value for this key is hardcoded in the environment.json, but it fails when I try to pass/replace the value for the key from the command line.
I do not have a global variables JSON and, if possible, would prefer not to use one.
Here is my command-line:
newman run "C:\Users\Automation\Postman\postman_autotest.json" --folder "AUTO" --global-var "client_secret=XXXX" --environment "C:\Users\Automation\Postman\postman_environment.json"
This value is essential for the API to work/connect, so I'm getting a 400 error back.
Here is this key in the environment.json:
{
    "id": "673a4256-f5a1-7497-75aa-9e47b1dbad4a",
    "name": "Postman Env Vars",
    "values": [
        {
            "key": "client_secret",
            "value": "",
            "description": {
                "content": "",
                "type": "text/plain"
            },
            "enabled": true
        }
    ],
    "_postman_variable_scope": "environment",
    "_postman_exported_at": "2019-04-03T20:31:04.829Z",
    "_postman_exported_using": "Postman/6.7.4"
}
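(For context, the collection's requests would consume this variable through Postman's double-brace syntax; the request body below is just a hypothetical example:)
{
    "client_secret": "{{client_secret}}"
}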
Just a thought... You can use a wrapper PowerShell script to replace the value at runtime and then delete the file.
[CmdletBinding()]
Param (
    [Parameter(Mandatory)]
    [string]$Secret
)
$envFile = "C:\Users\Automation\Postman\postman_environment.json"
$envFileWithKey = "C:\Users\Automation\Postman\postman_environment_w_key.json"
$json = Get-Content $envFile -Raw | ConvertFrom-Json
# Set the value (not the key name) of the client_secret entry
$json.values[0].value = $Secret
ConvertTo-Json $json -Depth 10 | Out-File $envFileWithKey -Force
newman run "C:\Users\Automation\Postman\postman_autotest.json" --folder "AUTO" --environment $envFileWithKey
Remove-Item -Path $envFileWithKey
Then just:
.\RunAutomation.ps1 -Secret "this_is_a_secret_sshhhhhh"
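(As an aside, and assuming a reasonably recent newman release, the CLI also has an --env-var switch that overrides a single environment value without writing a temporary file, e.g.:)
newman run "C:\Users\Automation\Postman\postman_autotest.json" --folder "AUTO" --environment "C:\Users\Automation\Postman\postman_environment.json" --env-var "client_secret=this_is_a_secret_sshhhhhh"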

Logstash conf error - amazon_es

I am trying, for the first time, to configure my logstash.conf file with an output to amazon_es.
My whole logstash.conf file is here:
input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
        # The user we wish to execute our statement as
        jdbc_user => "root"
        jdbc_password => "root"
        # The path to our downloaded jdbc driver
        jdbc_driver_library => "/mnt/c/Users/xxxxxxxx/mysql-connector-java-5.1.45/mysql-connector-java-5.1.45-bin.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        # our query
        statement => "SELECT * FROM testtable"
    }
}
output {
    amazon_es {
        hosts => ["search-xxxxx.eu-west-3.es.amazonaws.com"]
        region => "eu-west-3"
        aws_access_key_id => 'xxxxxxxxxxxxxxxxxxxxxx'
        aws_secret_access_key => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
        index => "test-migrate"
        document_type => "data"
    }
}
I have 3 elements selected from my database, but the first time I run the script, only the first element is indexed in Elasticsearch. The second time I run it, all 3 elements are indexed. I get the error each time I run Logstash with this conf file.
EDIT 2:
[2018-02-08T14:31:18,270][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/modules/fb_apache/configuration"}
[2018-02-08T14:31:18,279][DEBUG][logstash.plugins.registry] Adding plugin to the registry {:name=>"fb_apache", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x47c515a1 @module_name="fb_apache", @directory="/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/modules/fb_apache/configuration", @kibana_version_parts=["6", "0", "0"]>}
[2018-02-08T14:31:18,286][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/modules/netflow/configuration"}
[2018-02-08T14:31:18,287][DEBUG][logstash.plugins.registry] Adding plugin to the registry {:name=>"netflow", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x6f1a5910 @module_name="netflow", @directory="/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/modules/netflow/configuration", @kibana_version_parts=["6", "0", "0"]>}
[2018-02-08T14:31:18,765][DEBUG][logstash.runner ] -------- Logstash Settings (* means modified) ---------
[2018-02-08T14:31:18,765][DEBUG][logstash.runner ] node.name: "DEVFE-AMT"
[2018-02-08T14:31:18,766][DEBUG][logstash.runner ] *path.config: "logstash.conf"
[2018-02-08T14:31:18,766][DEBUG][logstash.runner ] path.data: "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/data"
[2018-02-08T14:31:18,767][DEBUG][logstash.runner ] modules.cli: []
[2018-02-08T14:31:18,768][DEBUG][logstash.runner ] modules: []
[2018-02-08T14:31:18,768][DEBUG][logstash.runner ] modules_setup: false
[2018-02-08T14:31:18,768][DEBUG][logstash.runner ] config.test_and_exit: false
[2018-02-08T14:31:18,769][DEBUG][logstash.runner ] config.reload.automatic: false
[2018-02-08T14:31:18,769][DEBUG][logstash.runner ] config.reload.interval: 3000000000
[2018-02-08T14:31:18,769][DEBUG][logstash.runner ] config.support_escapes: false
[2018-02-08T14:31:18,770][DEBUG][logstash.runner ] metric.collect: true
[2018-02-08T14:31:18,770][DEBUG][logstash.runner ] pipeline.id: "main"
[2018-02-08T14:31:18,771][DEBUG][logstash.runner ] pipeline.system: false
[2018-02-08T14:31:18,771][DEBUG][logstash.runner ] pipeline.workers: 8
[2018-02-08T14:31:18,771][DEBUG][logstash.runner ] pipeline.output.workers: 1
[2018-02-08T14:31:18,772][DEBUG][logstash.runner ] pipeline.batch.size: 125
[2018-02-08T14:31:18,772][DEBUG][logstash.runner ] pipeline.batch.delay: 50
[2018-02-08T14:31:18,772][DEBUG][logstash.runner ] pipeline.unsafe_shutdown: false
[2018-02-08T14:31:18,772][DEBUG][logstash.runner ] pipeline.java_execution: false
[2018-02-08T14:31:18,773][DEBUG][logstash.runner ] pipeline.reloadable: true
[2018-02-08T14:31:18,773][DEBUG][logstash.runner ] path.plugins: []
[2018-02-08T14:31:18,773][DEBUG][logstash.runner ] config.debug: false
[2018-02-08T14:31:18,776][DEBUG][logstash.runner ] *log.level: "debug" (default: "info")
[2018-02-08T14:31:18,783][DEBUG][logstash.runner ] version: false
[2018-02-08T14:31:18,784][DEBUG][logstash.runner ] help: false
[2018-02-08T14:31:18,784][DEBUG][logstash.runner ] log.format: "plain"
[2018-02-08T14:31:18,786][DEBUG][logstash.runner ] http.host: "127.0.0.1"
[2018-02-08T14:31:18,793][DEBUG][logstash.runner ] http.port: 9600..9700
[2018-02-08T14:31:18,793][DEBUG][logstash.runner ] http.environment: "production"
[2018-02-08T14:31:18,794][DEBUG][logstash.runner ] queue.type: "memory"
[2018-02-08T14:31:18,796][DEBUG][logstash.runner ] queue.drain: false
[2018-02-08T14:31:18,804][DEBUG][logstash.runner ] queue.page_capacity: 67108864
[2018-02-08T14:31:18,809][DEBUG][logstash.runner ] queue.max_bytes: 1073741824
[2018-02-08T14:31:18,822][DEBUG][logstash.runner ] queue.max_events: 0
[2018-02-08T14:31:18,823][DEBUG][logstash.runner ] queue.checkpoint.acks: 1024
[2018-02-08T14:31:18,836][DEBUG][logstash.runner ] queue.checkpoint.writes: 1024
[2018-02-08T14:31:18,837][DEBUG][logstash.runner ] queue.checkpoint.interval: 1000
[2018-02-08T14:31:18,846][DEBUG][logstash.runner ] dead_letter_queue.enable: false
[2018-02-08T14:31:18,854][DEBUG][logstash.runner ] dead_letter_queue.max_bytes: 1073741824
[2018-02-08T14:31:18,859][DEBUG][logstash.runner ] slowlog.threshold.warn: -1
[2018-02-08T14:31:18,868][DEBUG][logstash.runner ] slowlog.threshold.info: -1
[2018-02-08T14:31:18,873][DEBUG][logstash.runner ] slowlog.threshold.debug: -1
[2018-02-08T14:31:18,885][DEBUG][logstash.runner ] slowlog.threshold.trace: -1
[2018-02-08T14:31:18,887][DEBUG][logstash.runner ] keystore.classname: "org.logstash.secret.store.backend.JavaKeyStore"
[2018-02-08T14:31:18,896][DEBUG][logstash.runner ] keystore.file: "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/config/logstash.keystore"
[2018-02-08T14:31:18,896][DEBUG][logstash.runner ] path.queue: "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/data/queue"
[2018-02-08T14:31:18,911][DEBUG][logstash.runner ] path.dead_letter_queue: "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/data/dead_letter_queue"
[2018-02-08T14:31:18,911][DEBUG][logstash.runner ] path.settings: "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/config"
[2018-02-08T14:31:18,926][DEBUG][logstash.runner ] path.logs: "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/logs"
[2018-02-08T14:31:18,926][DEBUG][logstash.runner ] --------------- Logstash Settings -------------------
[2018-02-08T14:31:18,998][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-02-08T14:31:19,067][DEBUG][logstash.agent ] Setting up metric collection
[2018-02-08T14:31:19,147][DEBUG][logstash.instrument.periodicpoller.os] Starting {:polling_interval=>5, :polling_timeout=>120}
[2018-02-08T14:31:19,293][DEBUG][logstash.instrument.periodicpoller.jvm] Starting {:polling_interval=>5, :polling_timeout=>120}
[2018-02-08T14:31:19,422][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-02-08T14:31:19,429][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-02-08T14:31:19,453][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[2018-02-08T14:31:19,464][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[2018-02-08T14:31:19,519][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.0"}
[2018-02-08T14:31:19,537][DEBUG][logstash.agent ] Starting agent
[2018-02-08T14:31:19,565][DEBUG][logstash.agent ] Starting puma
[2018-02-08T14:31:19,580][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9600}
[2018-02-08T14:31:19,654][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>["/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/CONTRIBUTORS", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/Gemfile", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/Gemfile.lock", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/LICENSE", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/NOTICE.TXT", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/bin", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/config", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/data", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/lib", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/logs", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/logstash-core", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/logstash-core-plugin-api", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/modules", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/tools", "/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/vendor"]}
[2018-02-08T14:31:19,658][DEBUG][logstash.api.service ] [api-service] start
[2018-02-08T14:31:19,662][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/mnt/c/Users/anthony.maffert/l/logstash-6.2.0/logstash.conf"}
[2018-02-08T14:31:19,770][DEBUG][logstash.agent ] Converging pipelines state {:actions_count=>1}
[2018-02-08T14:31:19,776][DEBUG][logstash.agent ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:main}
[2018-02-08T14:31:19,948][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-02-08T14:31:21,157][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"jdbc", :type=>"input", :class=>LogStash::Inputs::Jdbc}
[2018-02-08T14:31:21,557][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"plain", :type=>"codec", :class=>LogStash::Codecs::Plain}
[2018-02-08T14:31:21,580][DEBUG][logstash.codecs.plain ] config LogStash::Codecs::Plain/@id = "plain_32fc0754-0187-437b-9d4d-2611eaba9a45"
[2018-02-08T14:31:21,581][DEBUG][logstash.codecs.plain ] config LogStash::Codecs::Plain/@enable_metric = true
[2018-02-08T14:31:21,581][DEBUG][logstash.codecs.plain ] config LogStash::Codecs::Plain/@charset = "UTF-8"
[2018-02-08T14:31:21,612][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@jdbc_connection_string = "jdbc:mysql://localhost:3306/testdb"
[2018-02-08T14:31:21,613][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@jdbc_user = "root"
[2018-02-08T14:31:21,616][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@jdbc_password = <password>
[2018-02-08T14:31:21,623][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@jdbc_driver_library = "/mnt/c/Users/anthony.maffert/Desktop/DocumentsUbuntu/mysql-connector-java-5.1.45/mysql-connector-java-5.1.45-bin.jar"
[2018-02-08T14:31:21,624][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@jdbc_driver_class = "com.mysql.jdbc.Driver"
[2018-02-08T14:31:21,631][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@statement = "SELECT * FROM testtable"
[2018-02-08T14:31:21,633][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@id = "ff7529f734e0813846bc8e3b2bcf0794d99ff5cb61b947e0497922b083b3851a"
[2018-02-08T14:31:21,647][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@enable_metric = true
[2018-02-08T14:31:21,659][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@codec = <LogStash::Codecs::Plain id=>"plain_32fc0754-0187-437b-9d4d-2611eaba9a45", enable_metric=>true, charset=>"UTF-8">
[2018-02-08T14:31:21,663][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@add_field = {}
[2018-02-08T14:31:21,663][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@jdbc_paging_enabled = false
[2018-02-08T14:31:21,678][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@jdbc_page_size = 100000
[2018-02-08T14:31:21,679][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@jdbc_validate_connection = false
[2018-02-08T14:31:21,693][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@jdbc_validation_timeout = 3600
[2018-02-08T14:31:21,694][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@jdbc_pool_timeout = 5
[2018-02-08T14:31:21,708][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@sequel_opts = {}
[2018-02-08T14:31:21,708][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@sql_log_level = "info"
[2018-02-08T14:31:21,715][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@connection_retry_attempts = 1
[2018-02-08T14:31:21,716][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@connection_retry_attempts_wait_time = 0.5
[2018-02-08T14:31:21,721][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@parameters = {}
[2018-02-08T14:31:21,723][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@last_run_metadata_path = "/home/maffer_a/.logstash_jdbc_last_run"
[2018-02-08T14:31:21,731][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@use_column_value = false
[2018-02-08T14:31:21,731][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@tracking_column_type = "numeric"
[2018-02-08T14:31:21,745][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@clean_run = false
[2018-02-08T14:31:21,746][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@record_last_run = true
[2018-02-08T14:31:21,808][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@lowercase_column_names = true
[2018-02-08T14:31:21,808][DEBUG][logstash.inputs.jdbc ] config LogStash::Inputs::Jdbc/@columns_charset = {}
[2018-02-08T14:31:21,830][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"stdout", :type=>"output", :class=>LogStash::Outputs::Stdout}
[2018-02-08T14:31:21,893][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"json_lines", :type=>"codec", :class=>LogStash::Codecs::JSONLines}
[2018-02-08T14:31:21,901][DEBUG][logstash.codecs.jsonlines] config LogStash::Codecs::JSONLines/@id = "json_lines_e27ae5ff-5352-4061-9415-c75234fafc91"
[2018-02-08T14:31:21,902][DEBUG][logstash.codecs.jsonlines] config LogStash::Codecs::JSONLines/@enable_metric = true
[2018-02-08T14:31:21,902][DEBUG][logstash.codecs.jsonlines] config LogStash::Codecs::JSONLines/@charset = "UTF-8"
[2018-02-08T14:31:21,905][DEBUG][logstash.codecs.jsonlines] config LogStash::Codecs::JSONLines/@delimiter = "\n"
[2018-02-08T14:31:21,915][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/@codec = <LogStash::Codecs::JSONLines id=>"json_lines_e27ae5ff-5352-4061-9415-c75234fafc91", enable_metric=>true, charset=>"UTF-8", delimiter=>"\n">
[2018-02-08T14:31:21,924][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/@id = "4fb47c5631fa87c6a839a6f476077e9fa55456c479eee7251568f325435f3bbc"
[2018-02-08T14:31:21,929][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/@enable_metric = true
[2018-02-08T14:31:21,939][DEBUG][logstash.outputs.stdout ] config LogStash::Outputs::Stdout/@workers = 1
[2018-02-08T14:31:23,217][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"amazon_es", :type=>"output", :class=>LogStash::Outputs::AmazonES}
[2018-02-08T14:31:23,287][DEBUG][logstash.codecs.plain ] config LogStash::Codecs::Plain/@id = "plain_673a059d-4236-4f10-ba64-43ee33e050e4"
[2018-02-08T14:31:23,288][DEBUG][logstash.codecs.plain ] config LogStash::Codecs::Plain/@enable_metric = true
[2018-02-08T14:31:23,288][DEBUG][logstash.codecs.plain ] config LogStash::Codecs::Plain/@charset = "UTF-8"
[2018-02-08T14:31:23,294][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@hosts = ["search-XXXXXXXXXXXXXX.eu-west-3.es.amazonaws.com"]
[2018-02-08T14:31:23,294][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@region = "eu-west-3"
[2018-02-08T14:31:23,295][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@aws_access_key_id = "XXXXXXXXXXX"
[2018-02-08T14:31:23,295][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@aws_secret_access_key = "XXXXXXXXXXXXX"
[2018-02-08T14:31:23,296][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@index = "test-migrate"
[2018-02-08T14:31:23,299][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@document_type = "data"
[2018-02-08T14:31:23,299][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@id = "7c6401c2f72c63f8d359a42a2f440a663303cb2cbfefff8fa32d64a6f571a527"
[2018-02-08T14:31:23,306][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@enable_metric = true
[2018-02-08T14:31:23,310][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@codec = <LogStash::Codecs::Plain id=>"plain_673a059d-4236-4f10-ba64-43ee33e050e4", enable_metric=>true, charset=>"UTF-8">
[2018-02-08T14:31:23,310][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@workers = 1
[2018-02-08T14:31:23,310][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@manage_template = true
[2018-02-08T14:31:23,317][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@template_name = "logstash"
[2018-02-08T14:31:23,325][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@template_overwrite = false
[2018-02-08T14:31:23,326][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@port = 443
[2018-02-08T14:31:23,332][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@protocol = "https"
[2018-02-08T14:31:23,333][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@flush_size = 500
[2018-02-08T14:31:23,335][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@idle_flush_time = 1
[2018-02-08T14:31:23,340][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@action = "index"
[2018-02-08T14:31:23,341][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@path = "/"
[2018-02-08T14:31:23,341][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@max_retries = 3
[2018-02-08T14:31:23,341][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@retry_max_items = 5000
[2018-02-08T14:31:23,342][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@retry_max_interval = 5
[2018-02-08T14:31:23,342][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@doc_as_upsert = false
[2018-02-08T14:31:23,342][DEBUG][logstash.outputs.amazones] config LogStash::Outputs::AmazonES/@upsert = ""
[2018-02-08T14:31:23,426][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-02-08T14:31:23,476][DEBUG][logstash.outputs.amazones] Normalizing http path {:path=>"/", :normalized=>"/"}
[2018-02-08T14:31:23,791][INFO ][logstash.outputs.amazones] Automatic template management enabled {:manage_template=>"true"}
[2018-02-08T14:31:23,835][INFO ][logstash.outputs.amazones] Using mapping template {:template=>{"template"=>"logstash-*", "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "omit_norms"=>true}, "dynamic_templates"=>[{"message_field"=>{"match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fields"=>{"raw"=>{"type"=>"string", "index"=>"not_analyzed", "ignore_above"=>256}}}}}], "properties"=>{"@version"=>{"type"=>"string", "index"=>"not_analyzed"}, "geoip"=>{"type"=>"object", "dynamic"=>true, "properties"=>{"location"=>{"type"=>"geo_point"}}}}}}}}
[2018-02-08T14:31:24,480][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-02-08T14:31:24,482][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-02-08T14:31:25,242][ERROR][logstash.outputs.amazones] Failed to install template: [400] {"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"No handler for type [string] declared on field [@version]"}],"type":"mapper_parsing_exception","reason":"Failed to parse mapping [_default_]: No handler for type [string] declared on field [@version]","caused_by":{"type":"mapper_parsing_exception","reason":"No handler for type [string] declared on field [@version]"}},"status":400}
[2018-02-08T14:31:25,246][INFO ][logstash.outputs.amazones] New Elasticsearch output {:hosts=>["search-XXXXXXXXXXXX.eu-west-3.es.amazonaws.com"], :port=>443}
[2018-02-08T14:31:25,619][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x42da9cf8 run>"}
[2018-02-08T14:31:25,712][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
Thu Feb 08 14:31:26 GMT 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
[2018-02-08T14:31:26,716][INFO ][logstash.inputs.jdbc ] (0.008417s) SELECT version()
[2018-02-08T14:31:26,858][INFO ][logstash.inputs.jdbc ] (0.002332s) SELECT count(*) AS `count` FROM (SELECT * FROM testtable) AS `t1` LIMIT 1
[2018-02-08T14:31:26,863][DEBUG][logstash.inputs.jdbc ] Executing JDBC query {:statement=>"SELECT * FROM testtable", :parameters=>{:sql_last_value=>2018-02-08 14:23:01 UTC}, :count=>3}
[2018-02-08T14:31:26,873][INFO ][logstash.inputs.jdbc ] (0.000842s) SELECT * FROM testtable
[2018-02-08T14:31:27,022][DEBUG][logstash.inputs.jdbc ] Closing {:plugin=>"LogStash::Inputs::Jdbc"}
[2018-02-08T14:31:27,023][DEBUG][logstash.pipeline ] filter received {"event"=>{"@timestamp"=>2018-02-08T14:31:26.918Z, "personid"=>4004, "city"=>"Cape Town", "@version"=>"1", "firstname"=>"Richard", "lastname"=>"Baron"}}
[2018-02-08T14:31:27,023][DEBUG][logstash.pipeline ] filter received {"event"=>{"@timestamp"=>2018-02-08T14:31:26.919Z, "personid"=>4003, "city"=>"Cape Town", "@version"=>"1", "firstname"=>"Sharon", "lastname"=>"McWell"}}
[2018-02-08T14:31:27,023][DEBUG][logstash.pipeline ] filter received {"event"=>{"@timestamp"=>2018-02-08T14:31:26.890Z, "personid"=>4005, "city"=>"Cape Town", "@version"=>"1", "firstname"=>"Jaques", "lastname"=>"Kallis"}}
[2018-02-08T14:31:27,032][DEBUG][logstash.pipeline ] output received {"event"=>{"@timestamp"=>2018-02-08T14:31:26.918Z, "personid"=>4004, "city"=>"Cape Town", "@version"=>"1", "firstname"=>"Richard", "lastname"=>"Baron"}}
[2018-02-08T14:31:27,035][DEBUG][logstash.pipeline ] output received {"event"=>{"@timestamp"=>2018-02-08T14:31:26.890Z, "personid"=>4005, "city"=>"Cape Town", "@version"=>"1", "firstname"=>"Jaques", "lastname"=>"Kallis"}}
[2018-02-08T14:31:27,040][DEBUG][logstash.pipeline ] output received {"event"=>{"@timestamp"=>2018-02-08T14:31:26.919Z, "personid"=>4003, "city"=>"Cape Town", "@version"=>"1", "firstname"=>"Sharon", "lastname"=>"McWell"}}
[2018-02-08T14:31:27,047][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x42da9cf8 sleep>"}
[2018-02-08T14:31:27,053][DEBUG][logstash.pipeline ] Shutting down filter/output workers {:pipeline_id=>"main", :thread=>"#<Thread:0x42da9cf8 run>"}
[2018-02-08T14:31:27,062][DEBUG][logstash.pipeline ] Pushing shutdown {:pipeline_id=>"main", :thread=>"#<Thread:0x3f1899bb@[main]>worker0 run>"}
[2018-02-08T14:31:27,069][DEBUG][logstash.pipeline ] Pushing shutdown {:pipeline_id=>"main", :thread=>"#<Thread:0x41529ca4@[main]>worker1 run>"}
[2018-02-08T14:31:27,070][DEBUG][logstash.pipeline ] Pushing shutdown {:pipeline_id=>"main", :thread=>"#<Thread:0x1c56e6d6@[main]>worker2 run>"}
[2018-02-08T14:31:27,083][DEBUG][logstash.pipeline ] Pushing shutdown {:pipeline_id=>"main", :thread=>"#<Thread:0x2f767b45@[main]>worker3 sleep>"}
[2018-02-08T14:31:27,083][DEBUG][logstash.pipeline ] Pushing shutdown {:pipeline_id=>"main", :thread=>"#<Thread:0x2017b165@[main]>worker4 run>"}
[2018-02-08T14:31:27,098][DEBUG][logstash.pipeline ] Pushing shutdown {:pipeline_id=>"main", :thread=>"#<Thread:0x65923ecd@[main]>worker5 sleep>"}
[2018-02-08T14:31:27,099][DEBUG][logstash.pipeline ] Pushing shutdown {:pipeline_id=>"main", :thread=>"#<Thread:0x1714b839@[main]>worker6 run>"}
[2018-02-08T14:31:27,113][DEBUG][logstash.pipeline ] Pushing shutdown {:pipeline_id=>"main", :thread=>"#<Thread:0xcbee48c@[main]>worker7 run>"}
[2018-02-08T14:31:27,116][DEBUG][logstash.pipeline ] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<Thread:0x3f1899bb@[main]>worker0 run>"}
{"@timestamp":"2018-02-08T14:31:26.919Z","personid":4003,"city":"Cape Town","@version":"1","firstname":"Sharon","lastname":"McWell"}
{"@timestamp":"2018-02-08T14:31:26.918Z","personid":4004,"city":"Cape Town","@version":"1","firstname":"Richard","lastname":"Baron"}
{"@timestamp":"2018-02-08T14:31:26.890Z","personid":4005,"city":"Cape Town","@version":"1","firstname":"Jaques","lastname":"Kallis"}
[2018-02-08T14:31:27,153][DEBUG][logstash.pipeline ] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<Thread:0x41529ca4@[main]>worker1 run>"}
[2018-02-08T14:31:27,158][DEBUG][logstash.pipeline ] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<Thread:0x1c56e6d6@[main]>worker2 run>"}
[2018-02-08T14:31:27,200][DEBUG][logstash.outputs.amazones] Flushing output {:outgoing_count=>1, :time_since_last_flush=>1.927723, :outgoing_events=>{nil=>[["index", {:_id=>nil, :_index=>"test-migrate", :_type=>"data", :_routing=>nil}, #<LogStash::Event:0x1bacf548>]]}, :batch_timeout=>1, :force=>nil, :final=>nil}
[2018-02-08T14:31:27,207][DEBUG][logstash.pipeline ] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<Thread:0x2f767b45@[main]>worker3 sleep>"}
[2018-02-08T14:31:27,251][DEBUG][logstash.instrument.periodicpoller.os] Stopping
[2018-02-08T14:31:27,271][DEBUG][logstash.instrument.periodicpoller.jvm] Stopping
[2018-02-08T14:31:27,273][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Stopping
[2018-02-08T14:31:27,281][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Stopping
[2018-02-08T14:31:27,356][DEBUG][logstash.agent ] Shutting down all pipelines {:pipelines_count=>1}
[2018-02-08T14:31:27,362][DEBUG][logstash.agent ] Converging pipelines state {:actions_count=>1}
[2018-02-08T14:31:27,363][DEBUG][logstash.agent ] Executing action {:action=>LogStash::PipelineAction::Stop/pipeline_id:main}
[2018-02-08T14:31:27,385][DEBUG][logstash.pipeline ] Stopping inputs {:pipeline_id=>"main", :thread=>"#<Thread:0x42da9cf8 sleep>"}
[2018-02-08T14:31:27,389][DEBUG][logstash.inputs.jdbc ] Stopping {:plugin=>"LogStash::Inputs::Jdbc"}
[2018-02-08T14:31:27,399][DEBUG][logstash.pipeline ] Stopped inputs {:pipeline_id=>"main", :thread=>"#<Thread:0x42da9cf8 sleep>"}
You should try to add the index template yourself. Copy this ES 6.x template to your local file system and then add the template setting to your amazon_es output; it should work:
amazon_es {
    hosts => ["search-xxxxx.eu-west-3.es.amazonaws.com"]
    region => "eu-west-3"
    aws_access_key_id => 'xxxxxxxxxxxxxxxxxxxxxx'
    aws_secret_access_key => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
    index => "test-migrate"
    document_type => "data"
    template => '/path/to/template.json'
}
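(Background on the 400: Elasticsearch 6.x no longer accepts the legacy "string" mapping type, which is what the plugin's built-in template uses for @version in the debug log above, hence "No handler for type [string]". The fragment below only illustrates the kind of 6.x-compatible mapping the linked template contains; it is not the full official file:)
{
    "index_patterns": ["logstash-*"],
    "settings": { "index.refresh_interval": "5s" },
    "mappings": {
        "_default_": {
            "properties": {
                "@version": { "type": "keyword" },
                "@timestamp": { "type": "date" }
            }
        }
    }
}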

Azure VM Extensions

I'm executing the following:
Set-AzureRmVMExtension `
    -VMName 'servername' `
    -ResourceGroupName 'rgname' `
    -Name 'JoinAD' `
    -ExtensionType 'JsonADDomainExtension' `
    -Publisher 'Microsoft.Compute' `
    -TypeHandlerVersion '1.0' `
    -Location 'West Europe' `
    -Settings @{'Name' = 'domain.com'; 'OUPath' = 'OU=Server 2012 R2,OU=Servers,DC=domain,DC=com'; 'User' = 'domain.com\username'; 'Restart' = 'true'; 'Options' = 3} `
    -ProtectedSettings @{'Password' = 'password'}
and get this error:
Set-AzureRmVMExtension : Long running operation failed with status
'Failed'. StartTime: 18.04.2016 18:03:30 EndTime: 18.04.2016 18:04:50
OperationID: 76825458-6c50-404d-bb1a-b27c722b1760 Status: Failed
ErrorCode: VMExtensionProvisioningError ErrorMessage: VM has reported
a failure when processing extension 'JoinAD'. Error message: "Join
completed for Domain 'ddomain.com'". At line:1 char:1
+ Set-AzureRmVMExtension `
+ ~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : CloseError: (:) [Set-AzureRmVMExtension], ComputeCloudException
+ FullyQualifiedErrorId : Microsoft.Azure.Commands.Compute.SetAzureVMExtensionCommand
What am I missing?
I kept having trouble with the extension, so I opted to perform the domain join with PowerShell's Add-Computer instead, without the extension (see the sketch below).
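A minimal sketch of that Add-Computer approach, reusing the domain, OU, and account from the question (run on the VM itself):
# Prompt for the domain account, then join the domain and restart
$cred = Get-Credential 'domain.com\username'
Add-Computer -DomainName 'domain.com' `
    -OUPath 'OU=Server 2012 R2,OU=Servers,DC=domain,DC=com' `
    -Credential $cred `
    -Restart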
One possible cause might be that NSG configurations block connectivity to the internet and, with that, to services running in the Azure datacenter.