Error SAS Viya: FAILED! => {"msg": "'dict object' has no attribute 'sas_all'"}

fatal: [localhost]: FAILED! => {"msg": "'dict object' has no attribute 'sas_all'"}
I get this error when I run the command:
ansible-playbook site.yml
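This error usually means a template or task looks up a group or hostvars key named sas_all that does not exist in the inventory the playbook was run against. A hedged, minimal inventory sketch under that assumption (the group layout and host name are placeholders, not the actual SAS Viya inventory):

```ini
# Hypothetical inventory sketch: if the playbook resolves
# groups['sas_all'], that group must be defined here.
[sas_all:children]
sas_hosts

[sas_hosts]
host1.example.com
```

Checking that site.yml is pointed at the intended inventory file (-i inventory.ini) is the first thing to verify.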

Using notebook (Databricks) show error java.io.IOException: Error getting access token from metadata server

I'm using https://community.cloud.databricks.com/ (notebook). When I try to access GCP Storage through PySpark with the command below:
df = spark.read.format("csv").load("gs://test-gcs-doc-bucket-pr/test")
Error:
java.io.IOException: Error getting access token from metadata server at: 169.254.169.xxx/computeMetadata/v1/instance/service-accounts/default/token
Databricks Spark Configuration:
spark.hadoop.fs.gs.auth.client_id "10"
spark.hadoop.fs.gs.auth.auth_uri "https://accounts.google.com/o/oauth2/auth"
spark.databricks.delta.preview.enabled true
spark.hadoop.google.cloud.auth.service.account.enable true
spark.hadoop.fs.gs.auth.service.account.email "test-gcs.iam.gserviceaccount.com"
spark.hadoop.fs.gs.auth.token_uri "https://oauth2.googleapis.com/token"
spark.hadoop.fs.gs.project_id "oval-replica-9999999"
spark.hadoop.fs.gs.auth.service.account.private_key "--BEGIN"
spark.hadoop.fs.gs.auth.service.account.private_key_id "3f869c98d389bb28c5b13a0e31785e73d8b"
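The metadata-server lookup in the error is the GCS connector's fallback when it cannot assemble service-account credentials from the configured properties, so the individual auth fields above are worth double-checking. One hedged alternative, using the connector's documented keyfile option instead of the per-field properties (the path is a placeholder):

```
spark.hadoop.google.cloud.auth.service.account.enable true
spark.hadoop.google.cloud.auth.service.account.json.keyfile /dbfs/FileStore/keys/test-gcs.json
```

This points the connector at the full service-account JSON key, avoiding transcription mistakes in the split-out client_id/private_key fields.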

AWS Instance get removed from SSM Managed instances after SSM Agent update

I'm able to launch an instance and have it become managed by Systems Manager. That is, until it automatically updates the SSM Agent. The update is successful, updating from 3.0.161.0 to 3.0.854.0. The OS is Ubuntu 20.04.
I can still connect to the instance using the Session Manager despite it not appearing in the Managed Instances list.
Not sure if this is relevant, but here is the /var/log/amazon/ssm/errors.log:
2021-03-21 03:56:50 ERROR [GetDocumentState # docmanager.go.147] [ssm-agent-worker] [MessagingDeliveryService] encountered error with message json: cannot unmarshal string into Go struct field Configuration.InstancePluginsInformation.Configuration.Preconditions of type contracts.PreconditionArgument while reading Interim state of command from file - 28785d7e-bfaf-414b-bd02-fc3a1746610b
2021-03-21 03:56:50 ERROR [GetDocumentState # docmanager.go.147] [ssm-agent-worker] [OfflineService] encountered error with message json: cannot unmarshal string into Go struct field Configuration.InstancePluginsInformation.Configuration.Preconditions of type contracts.PreconditionArgument while reading Interim state of command from file - 28785d7e-bfaf-414b-bd02-fc3a1746610b
2021-03-21 03:56:51 ERROR [Process # backend.go.139] [ssm-document-worker] [28785d7e-bfaf-414b-bd02-fc3a1746610b] [DataBackend] failed to unmarshal plugin config: json: cannot unmarshal object into Go struct field Configuration.InstancePluginsInformation.Configuration.Preconditions of type string
2021-03-21 03:56:51 ERROR [Messaging # messaging.go.145] [ssm-document-worker] [28785d7e-bfaf-414b-bd02-fc3a1746610b] messaging pipeline process datagram encountered error: json: cannot unmarshal object into Go struct field Configuration.InstancePluginsInformation.Configuration.Preconditionsof type string
2021-03-21 03:56:50 ERROR [GetDocumentState # docmanager.go.147] [ssm-agent-worker] [MessageGatewayService] encountered error with message json: cannot unmarshal string into Go struct field Configuration.InstancePluginsInformation.Configuration.Preconditions of type contracts.PreconditionArgument while reading Interim state of command from file - 28785d7e-bfaf-414b-bd02-fc3a1746610b
(the same three GetDocumentState errors repeat again at 03:56:50 and 03:56:51)
Any idea what could be the cause of this issue?
I believe this was a bug on AWS's side. I use Terraform, and with the exact same configuration I had tested yesterday, it now works as expected.

Ansible error - implementation error: unknown type string requested for name

I am getting the "unknown type string" error when I execute the Ansible playbook:
implementation error: unknown type string requested for name
I am trying to display my name using an Ansible playbook. The module behind it is written in Python.
---
- name: Test hello module
  hosts: localhost
  tasks:
    - name: run the hello module
      hello:
        name: 'Test'
      register: helloout
    - name: dump test output
      debug:
        msg: '{{ helloout }}'
#!/usr/bin/python
from ansible.module_utils.basic import *


def main():
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(required=True, type='string')
        ),
        supports_check_mode=False
    )
    name = module.params['name']
    module.exit_json(changed=False, meta=name)


if __name__ == '__main__':
    main()
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAY [Test hello module] ****************************************************************************************************************************
TASK [Gathering Facts] ******************************************************************************************************************************
ok: [localhost]
TASK [run the hello module] *************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "implementation error: unknown type string requested for name"}
PLAY RECAP ******************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
In the AnsibleModule() argument argument_spec, the type you are looking for is actually str, not string:
module = AnsibleModule(
    argument_spec=dict(
        name=dict(required=True, type='str')
    ),
    supports_check_mode=False
)
You can see the list of accepted type specifications for the argument in the documentation.
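Ansible resolves each type name in argument_spec to a checker function and raises exactly this error for names it does not recognize. A minimal sketch of that lookup (not Ansible's actual implementation; the table and function names here are simplified illustrations):

```python
# Hypothetical sketch of a type-name lookup like the one behind this error.
# Only the short names in the table are recognized; 'string' is not one of them.
CHECKERS = {
    'str': str,
    'int': int,
    'float': float,
    'bool': bool,
    'list': list,
    'dict': dict,
}


def check_argument(name, value, wanted):
    """Coerce `value` to the type registered under `wanted`, or raise."""
    checker = CHECKERS.get(wanted)
    if checker is None:
        raise TypeError(
            "implementation error: unknown type %s requested for %s"
            % (wanted, name)
        )
    return checker(value)
```

With this shape, check_argument('name', 'Test', 'str') succeeds, while 'string' falls through the table and raises the error seen in the play output.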

Logstash Output Amazon ES Error

I'm using Logstash 2.3.4 and Amazon Elasticsearch Service (2.3).
My config:
input {
  jdbc {
    # MySQL JDBC connection string to our database
    jdbc_connection_string => "jdbc:mysql://awsmigration.XXXXXXXX.ap-southeast-1.rds.amazonaws.com:3306/table_receipt?zeroDateTimeBehavior=convertToNull&autoReconnect=true&useSSL=false"
    # The user we wish to execute our statement as
    jdbc_user => "XXXXXXXX"
    jdbc_password => "XXXXXXXX"
    # The path to our downloaded JDBC driver
    jdbc_driver_library => "/opt/logstash/drivers/mysql-connector-java-5.1.39/mysql-connector-java-5.1.39-bin.jar"
    # The name of the driver class for MySQL
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    # our query
    statement => "SELECT * from Receipt"
    jdbc_paging_enabled => true
    jdbc_page_size => 200
  }
}
output {
  #stdout { codec => json_lines }
  amazon_es {
    hosts => ["search-XXXXXXXX.ap-southeast-1.es.amazonaws.com"]
    region => "ap-southeast-1"
    index => "slurp_receipt"
    document_type => "Receipt"
    document_id => "%{uid}"
  }
}
After running the command
bin/logstash agent -f db.conf
I got this error:
Attempted to send a bulk request to Elasticsearch configured at '["https://search-XXXXXXXX.ap-southeast-1.es.amazonaws.com:443"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:client_config=>{:hosts=>["https://search-slurp-wjgudsrlz66esh6hyrijaagamu.ap-southeast-1.es.amazonaws.com:443"], :region=>"ap-southeast-1", :aws_access_key_id=>nil, :aws_secret_access_key=>nil, :transport_options=>{:request=>{:open_timeout=>0, :timeout=>60}, :proxy=>nil}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::AWS, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false, :http=>{:scheme=>"https", :user=>nil, :password=>nil, :port=>443}}, :error_message=>"undefined method `credentials' for nil:NilClass", :error_class=>"NoMethodError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.1.36/lib/aws-sdk-core/signers/v4.rb:24:in `initialize'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es/aws_v4_signer_impl.rb:36:in `signer'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es/aws_v4_signer_impl.rb:48:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/faraday-0.9.2/lib/faraday/rack_builder.rb:139:in `build_response'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/faraday-0.9.2/lib/faraday/connection.rb:377:in `run_request'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es/aws_transport.rb:49:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:257:in `perform_request'", 
"/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es/aws_transport.rb:45:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/client.rb:128:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.18/lib/elasticsearch/api/actions/bulk.rb:90:in `bulk'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es/http_client.rb:53:in `bulk'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es.rb:321:in `submit'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es.rb:318:in `submit'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es.rb:351:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/vendor/local_gems/b0f0ff24/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es.rb:311:in `receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/outputs/base.rb:83:in `multi_receive'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/outputs/base.rb:83:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:114:in 
`multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:301:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:301:in `output_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:232:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:201:in `start_workers'"], :level=>:error}
How can I solve this problem?
Thank you.
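One hedged reading of the backtrace: the AWS v4 signer is handed nil credentials (the dump shows :aws_access_key_id=>nil and the NoMethodError fires inside the signer's initialize). The amazon_es output plugin accepts explicit credential options, so a sketch of the output block with them set (the key values are placeholders):

```
amazon_es {
  hosts => ["search-XXXXXXXX.ap-southeast-1.es.amazonaws.com"]
  region => "ap-southeast-1"
  aws_access_key_id => "XXXXXXXX"
  aws_secret_access_key => "XXXXXXXX"
  index => "slurp_receipt"
  document_type => "Receipt"
  document_id => "%{uid}"
}
```

If you prefer not to inline keys, the same credentials can come from the environment or an instance profile, as long as the AWS SDK's default chain can actually find them on this machine.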

Error when Cognito tries to refresh credentials by itself

I am getting an error that looks like this, but I don't really know why.
Up until this point I was using Amazon's end-to-end implementation of developer authentication. Everything seems to work, but as soon as I try to use DynamoDB I get this error.
AWSiOSSDKv2 [Error] AWSCredentialsProvider.m line:528 | __40-[AWSCognitoCredentialsProvider refresh]_block_invoke352 | Unable to refresh. Error is [Error Domain=com.amazonaws.service.cognitoidentity.DeveloperAuthenticatedIdentityProvider Code=0 "(null)"]
The request failed. Error: [Error Domain=com.amazonaws.service.cognitoidentity.DeveloperAuthenticatedIdentityProvider Code=0 "(null)"]
Any help?
UPDATE 1: LOG OUTPUT FROM COGNITOSYNCDEMO
I removed the information I thought should be private and replaced it with [redacted info].
2016-02-19 15:32:42.594 CognitoSyncDemo[2895:67542] initializing clients...
2016-02-19 15:32:43.028 CognitoSyncDemo[2895:67542] json: { "identityPoolId": "[redacted info]", "identityId": "[redacted info]", "token": "[redacted info]",}
2016-02-19 15:32:43.056 CognitoSyncDemo[2895:67542] Error in registering for remote notifications. Error: Error Domain=NSCocoaErrorDomain Code=3010 "REMOTE_NOTIFICATION_SIMULATOR_NOT_SUPPORTED_NSERROR_DESCRIPTION" UserInfo={NSLocalizedDescription=REMOTE_NOTIFICATION_SIMULATOR_NOT_SUPPORTED_NSERROR_DESCRIPTION}
2016-02-19 15:32:54.449 CognitoSyncDemo[2895:67542] AWSiOSSDKv2 [Debug] AWSCognitoSQLiteManager.m line:1455 | -[AWSCognitoSQLiteManager filePath] | Local database is: /Users/MrMacky/Library/Developer/CoreSimulator/Devices/29BB1E0D-538D-4167-9069-C02A0628F1B3/data/Containers/Data/Application/1A86E139-5484-4F29-A3FD-25F81DE055EB/Documents/CognitoData.sqlite3
2016-02-19 15:32:54.451 CognitoSyncDemo[2895:67542] AWSiOSSDKv2 [Debug] AWSCognitoSQLiteManager.m line:221 | __39-[AWSCognitoSQLiteManager getDatasets:]_block_invoke | query = 'SELECT Dataset, LastSyncCount, LastModified, ModifiedBy, CreationDate, DataStorage, RecordCount FROM CognitoMetadata WHERE IdentityId = ?'
2016-02-19 15:33:00.946 CognitoSyncDemo[2895:67542] json: { "identityPoolId": "[redacted info]", "identityId": "[redacted info]", "token": "[redacted info]",}
2016-02-19 15:33:00.947 CognitoSyncDemo[2895:67542] AWSiOSSDKv2 [Error] AWSCognitoService.m line:215 | __36-[AWSCognito refreshDatasetMetadata]_block_invoke180 | Unable to list datasets: Error Domain=com.amazon.cognito.AWSCognitoErrorDomain Code=-4000 "(null)"
Looking at the exception, it looks like you are trying to do push sync from the iOS Simulator. You cannot receive remote notifications on the Simulator.