OpenStack Keystone: "The service catalog is empty." after issuing any command - CentOS 7

I've just finished installing Keystone by following the OpenStack docs: Install and configure Keystone.
I'm running CentOS 7 and OpenStack Stein.
I've noticed that the recipe on this page is missing a step (keystone-manage db_sync, which creates the tables in the keystone DB, and keystone-manage bootstrap ... well, I'd like to know exactly what that command does!).
Well, issuing the command:
openstack domain create --description "An Example Domain" example
(as documented in Create a domain, projects, users, and roles) I get the following message:
The service catalog is empty.
Issuing the command openstack domain list gives the same result.
In /var/log/keystone.log I have:
WARNING keystone.access_rules_config.backends.json [-] No config file found for access rules, application credential access rules will be unavailable.: IOError: [Errno 2] No such file or directory: '/etc/keystone/access_rules.json'
In /var/log/httpd/keystone_access.log every request logged while running these commands has a 201 (Created) or 200 status.
So I'm clearly missing something in my configuration. I've already googled around with no results; any help would be greatly appreciated.
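For what it's worth, "The service catalog is empty." usually means no services or endpoints are registered in the catalog yet, and registering the Identity service is exactly what keystone-manage bootstrap does: it creates the admin project, user, and role, the RegionOne region, and the three Identity endpoints. A minimal sketch, assuming your controller's hostname is controller and substituting your own admin password:
# Populates the catalog with the Identity service and its endpoints
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
After that, make sure the OS_AUTH_URL you export for the openstack client points at the same URL before re-running openstack domain list.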

Related

Unable to connect to Huggingface from EC2 instance

I am running Python code on an EC2 instance, where I load a Huggingface model using the from_pretrained() method. I get the error
OSError: Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-config.json' to download pretrained model configuration file.
while trying to initialize the reader. To get around this, I downloaded the file manually and provided the local JSON path. That worked fine, but then I see issues in loading the tokenizer too.
OSError: Couldn't reach server at '{}' to download vocabulary files.
I think my EC2 network settings are not correct, which is why I am unable to connect to the external Huggingface repository.
I tried relaxing the inbound rules for the EC2 instance to IP version | Type | Protocol | Port range | Destination => IPv4 | All traffic | All | All | 0.0.0.0/0, but even that doesn't help. The outbound rules are already IPv4 | All traffic | All | All | 0.0.0.0/0.
I also tried creating an IAM role with the AmazonS3ReadOnlyAccess policy and attaching it to the EC2 instance, but I am still getting the same error.
Could someone point out what needs to be done to solve this? Thanks.
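Note that security groups are stateful, so inbound rules don't affect outbound downloads at all; what usually matters is the outbound rule (already open here), the subnet's route to the internet (internet gateway or NAT gateway), and DNS. The IAM role also makes no difference here, since the files are fetched over plain public HTTPS rather than via the S3 API. Two quick checks you can run from the instance itself, nothing Huggingface-specific assumed:
# Does DNS resolve, and can we reach the host over HTTPS?
nslookup s3.amazonaws.com
curl -I https://s3.amazonaws.com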
Here is how I fixed this issue.
I installed pyopenssl like this:
!pip install pyopenssl
Then I restarted the terminal and re-ran the code, and it fixed the issue for me.
Your network might be using a proxy; this might help. Replace foo.bar:3128 and foo.bar:4012 with your actual proxy before loading the pipeline:
import os
# requests (used by transformers for downloads) honors these variables
os.environ["HTTP_PROXY"] = "http://foo.bar:3128"
os.environ["HTTPS_PROXY"] = "http://foo.bar:4012"
from transformers import pipeline
qt_ans = pipeline('question-answering')

ERROR: gcloud crashed (ServerNotFoundError): Unable to find the server at www.googleapis.com

I am trying to sign in to the Cloud SDK with the command gcloud auth login, and I select my Google account in the browser. After I click Allow, the terminal says:
ERROR: gcloud crashed (ServerNotFoundError): Unable to find the server at www.googleapis.com
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
And when I run gcloud info --run-diagnostics, it also stops with the error:
ERROR: Reachability Check failed.
Cannot reach https://www.googleapis.com/auth/cloud-platform (ServerNotFoundError)
Network connection problems may be due to proxy or firewall settings.
My config is the default one, without any modifications.
I had been able to sign in to the Cloud SDK with no issues for a long time.
I am on Windows 10.
I tried signing in with both the Cloud SDK Shell and the Windows terminal, both as administrator and not.
How do I fix this error?
UPDATE:
I ran the tracert -4 www.googleapis.com command (and also with -6), and this is the result:
Unable to resolve target system name www.googleapis.com.
I am working from home, and I don't know what a network proxy is; I might be accidentally using one.
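Since tracert already shows a name-resolution failure, one quick way to narrow this down is to compare the system resolver with a public one; if the second lookup below succeeds, the configured DNS server (or a proxy/VPN intercepting DNS) is the likely culprit. 8.8.8.8 is Google's public DNS:
nslookup www.googleapis.com
nslookup www.googleapis.com 8.8.8.8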
You may have enabled a proxy with gcloud; use gcloud config list to see the proxy settings.
To unset the proxy use gcloud config unset proxy/[param], where the params are address, port, etc.
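For example, the settings live under gcloud's proxy/* properties:
gcloud config list   # look for a [proxy] section
gcloud config unset proxy/type
gcloud config unset proxy/address
gcloud config unset proxy/port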
You need to log in to the gcloud SDK first using this command
gcloud auth login
It will open a Google sign-in page in the browser. Select your account and then you will get a confirmation in your command line that you have been authenticated. Then try what you wanted to do.
I faced the same issue when connected to a VPN. I disconnected from the VPN, ran the command below, and it worked.
gcloud auth login

AWS: ERROR: Pre-processing of application version xxx has failed and Some application versions failed to process. Unable to continue deployment

Hi, I am trying to deploy a Node application from Cloud9 to Elastic Beanstalk, but I keep getting the error below.
Starting environment deployment via CodeCommit
--- Waiting for Application Versions to be pre-processed ---
ERROR: Pre-processing of application version app-491a-200623_151654 has failed.
ERROR: Some application versions failed to process. Unable to continue deployment.
I have attached an image of the IAM roles that I have. Any solutions?
Go to your console and open the Elastic Beanstalk console. Delete both the application and the environment. Then in your terminal run
eb init   # follow the instructions
eb create --single   # follow the instructions
That should fix the error, which is caused by application versions stuck in a failed state. If you want to check those, run
aws elasticbeanstalk describe-application-versions
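To list only the versions that are stuck, you can filter with a JMESPath query (the 'Failed' status string is taken from the Elastic Beanstalk API reference; adjust if your CLI output differs):
aws elasticbeanstalk describe-application-versions \
  --query "ApplicationVersions[?Status=='Failed'].[ApplicationName,VersionLabel]" \
  --output table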
I was searching for this answer as a result of watching a YouTube tutorial on how to pass the AWS Certified Developer Associate exam. If anyone else gets this error as a result of that tutorial, delete the 002_node_command.config file created in the tutorial and commit that change, as that file is what causes the error.
A failure in the pre-processing phase may be caused by an invalid manifest, configuration, or .ebextensions file.
If you deploy an (invalid) application version using eb deploy with the preprocess option enabled, the details of the error will not be revealed.
You can remove the --process flag and enable the verbose option to improve the error output.
In my case I deploy using this command:
eb deploy -l "XXX" -p
and it can return a failure when I mess around with .ebextensions:
ERROR: Pre-processing of application version xxx has failed.
ERROR: Some application versions failed to process. Unable to continue deployment.
With that result I can't figure out what is wrong,
but deploying without -p (or --process) and adding the -v (verbose) flag:
eb deploy -l "$deployname" -v
it returns something more useful:
Uploading: [##################################################] 100% Done...
INFO: Creating AppVersion xxx
ERROR: InvalidParameterValueError - The configuration file .ebextensions/16-my_custom_config_file.config in application version xxx contains invalid YAML or JSON.
YAML exception: Invalid Yaml: while scanning a simple key
in 'reader', line 6, column 1:
(... details of the error ...)
, JSON exception: Invalid JSON: Unexpected character (#) at position 0.. Update the configuration file.
Now I can fix the problem.
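Since the root cause was malformed YAML, it can save a deploy cycle to validate each .ebextensions file locally first. A minimal check using PyYAML (pip install pyyaml); it isn't the exact parser Beanstalk uses, but it catches most syntax errors, and the file name below is the one from the error output:
python3 -c "import yaml; yaml.safe_load(open('.ebextensions/16-my_custom_config_file.config')); print('valid YAML')"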

Filebeat and AWS Elasticsearch - Not Working

I have solid experience working with Elasticsearch: I have worked with version 2.4 and am now trying to learn the newer versions.
I am trying to set up Filebeat to send my Apache and system logs to my Elasticsearch endpoint. To save time, I launched a single-node t2.medium instance on the AWS Elasticsearch Service under a public domain, and I attached an access policy that allows everyone to access the cluster.
The AWS Elasticsearch instance is up and running healthy.
I launched an Ubuntu (18.04) server, downloaded the Filebeat tarball, and made the following configuration in filebeat.yml:
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://my-public-test-domain.ap-southeast-1.es.amazonaws.com:443"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
I enabled the required modules:
filebeat modules enable system apache
Then, as per the Filebeat documentation, I changed the ownership of the filebeat.yml file and started Filebeat with the following commands:
sudo chown root filebeat.yml
sudo ./filebeat -e
When I started Filebeat, I ran into the following permission and ownership issues:
Error loading config from file '/home/ubuntu/beats/filebeat-7.2.0-linux-x86_64/modules.d/system.yml', error invalid config: config file ("/home/ubuntu/beats/filebeat-7.2.0-linux-x86_64/modules.d/system.yml") must be owned by the user identifier (uid=0) or root
To resolve this, I changed the ownership of the files that were throwing errors.
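Something like the following, with the path taken from the error message above:
sudo chown root /home/ubuntu/beats/filebeat-7.2.0-linux-x86_64/modules.d/system.yml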
When I restarted Filebeat, I started facing the following issue:
Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license: unauthorized access, could not connect to the xpack endpoint, verify your credentials
Going through this link, I found that to work with AWS Elasticsearch I need the OSS version of Beats.
So I downloaded the OSS version of Filebeat from this link and followed the same procedure as above, but still no luck. Now I am facing the following errors:
Error 1:
Attempting to reconnect to backoff(elasticsearch(https://my-public-test-domain.ap-southeast-1.es.amazonaws.com:443)) with 12 reconnect attempt(s)
Error 2:
Failed to connect to backoff(elasticsearch(https://my-public-test-domain.ap-southeast-1.es.amazonaws.com:443)): Connection marked as failed because the onConnect callback failed: 1 error: Error loading pipeline for fileset system/auth: This module requires an Elasticsearch plugin that provides the geoip processor. Please visit the Elasticsearch documentation for instructions on how to install this plugin. Response body: {"error":{"root_cause":[{"type":"parse_exception","reason":"No processor type exists with name [geoip]","header":{"processor_type":"geoip"}}],"type":"parse_exception","reason":"No processor type exists with name [geoip]","header":{"processor_type":"geoip"}},"status":400}
From the second error I understand that the geoip plugin is not available, which is why I'm facing this error.
What else needs to be done to get this working?
Has anyone been able to successfully connect Beats to AWS Elasticsearch?
What other steps could I take to mitigate the above issue?
Environment details:
AWS Elasticsearch Version : 6.7
File Beat : 7.2.0
First, you need to use the OSS version of Filebeat with AWS ES: https://www.elastic.co/downloads/beats/filebeat-oss
Second, AWS Elasticsearch does not provide the GeoIP module, so you will need to edit the pipelines of any default modules you want to use and make sure geoip is removed or commented out.
For example, in /usr/share/filebeat/module/system/auth/ingest/pipeline.json (that's the path when installed from the deb package; your path will differ, of course), comment out:
{
  "geoip": {
    "field": "source.ip",
    "target_field": "source.geo",
    "ignore_failure": true
  }
},
Repeat the same for the apache module.
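If you want to be sure you caught every pipeline that references geoip, a recursive grep over the module directory (same deb layout assumed as above) lists the files to edit:
grep -rl '"geoip"' /usr/share/filebeat/module/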
I've spent hours trying to make the Filebeat IIS module work with AWS Elasticsearch; I kept getting the ingest-geoip error, and the steps below fixed the issue.
For Windows IIS logs going to AWS Elasticsearch, remove geoip from the Filebeat module configuration in these files:
C:\Program Files (x86)\filebeat\module\iis\access\ingest\default.json
C:\Program Files (x86)\filebeat\module\iis\access\manifest.yml
C:\Program Files (x86)\filebeat\module\iis\error\ingest\default.json
C:\Program Files (x86)\filebeat\module\iis\error\manifest.yml
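One more caveat: by default Filebeat does not overwrite ingest pipelines that are already loaded into Elasticsearch, so if a geoip-laden pipeline was installed before you edited the files, delete it so Filebeat re-creates the clean one. The DELETE below assumes Filebeat's filebeat-<version>-<module>-<fileset>-<pipeline> naming convention; list the actual names first:
curl -XGET "https://my-public-test-domain.ap-southeast-1.es.amazonaws.com:443/_ingest/pipeline/filebeat-*"
curl -XDELETE "https://my-public-test-domain.ap-southeast-1.es.amazonaws.com:443/_ingest/pipeline/filebeat-7.2.0-iis-access-default"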

Tensorboard Unavailable: Error executing an HTTP request: libcurl code 6

When I try to run TensorBoard with a logdir in Google Cloud Storage, I get the following error (with various retry attempts):
Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Couldn't resolve host 'metadata'
I have previously run gcloud auth and I'm confident that I am authenticated correctly, because I can read from the given logdir by running
gsutil ls gs://path/to/logdir
which works as expected.
Any idea how to proceed so that I can run tensorboard against this logdir?
This was happening because the GOOGLE_APPLICATION_CREDENTIALS environment variable was not set.
It seems gsutil was authenticated fine via the gcloud auth ... command, but TensorBoard also needed the GOOGLE_APPLICATION_CREDENTIALS environment variable to be set, pointing to the key file.
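For context, the unresolvable host 'metadata' is the GCE metadata server, which only exists on Google Cloud VMs; when no explicit credentials are found, the GCS filesystem used by TensorBoard falls back to querying it. A minimal sketch (the key path is hypothetical; point it at your own service-account key file):
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/my-service-account.json"
tensorboard --logdir gs://path/to/logdir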