How to fix DSpace OAI not showing special characters (diacritics)?

My DSpace install is working fine: the metadata is stored and displayed correctly in any browser, and the database has been confirmed as UTF-8. The problem is that the OAI protocol renders accented letters and diacritics (áéíóúüÜñÑ etc.) as ?, e.g. Dise?o instead of Diseño, and all the entities that harvest our metadata report this problem. If you would like to see for yourselves, this is the link: http://repositorio.puce.edu.ec/oai/request?verb=Identify
I can't find any file that sets the encoding for the OAI protocol, nor any other solution to this problem.

Based on this thread: http://dspace.2283337.n4.nabble.com/OAI-tp4681419.html, you have to set -Dfile.encoding=UTF-8 in JAVA_OPTS. Then do a clean and force a rebuild of your OAI index as @terrywb mentioned (i.e. bin/dspace oai clean-cache and bin/dspace oai import -c -o).
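On a Tomcat-based install, the whole procedure could look like the following. This is only a minimal sketch: the setenv.sh location is an assumption about your Tomcat layout, and the dspace commands are run from the DSpace installation directory.
# add the encoding flag to Tomcat's JAVA_OPTS (e.g. in $CATALINA_HOME/bin/setenv.sh)
export JAVA_OPTS="$JAVA_OPTS -Dfile.encoding=UTF-8"
# restart Tomcat, then clear and rebuild the OAI index
bin/dspace oai clean-cache
bin/dspace oai import -c -o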

When you run bin/dspace oai import -c, make sure you are running it with a UTF-8 locale, for example LC_ALL=en_US.UTF-8. This is mentioned in the documentation and also filed as an issue: https://jira.duraspace.org/browse/DS-2033
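The locale can be set just for that one command; a small sketch, assuming en_US.UTF-8 is available on your system:
LC_ALL=en_US.UTF-8 bin/dspace oai import -c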

Have you set URIEncoding in Tomcat's server.xml?
<Connector connectionTimeout="20000" port="8080" protocol="HTTP/1.1"
redirectPort="8443" URIEncoding="UTF-8" />
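After restarting Tomcat you can spot-check the result from the command line. A quick sketch, assuming curl is available and the host and port match your install:
curl -s 'http://localhost:8080/oai/request?verb=Identify' | head -n 5
# the first line should declare encoding="UTF-8" and accented characters should come through intact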


WSO2 vault-lookup XPath expression returns empty/blank

When I evaluate a simple expression like wso2:vault-lookup('my-token'), it returns empty, even though my-token is present in the vault.
Has anyone encountered this problem before? Is there any workaround you can suggest?
EI version: 6.2.0
Can you try the following approach?
1. Run ciphertool.sh with the following command:
bin/ciphertool.sh -Dorg.wso2.CipherTransformation=RSA/ECB/OAEPwithSHA1andMGF1Padding
2. Enter the plain-text value which you need to encrypt, and copy the encrypted value.
3. Navigate to the Carbon console and expand the registry browse section.
4. Go to the following path, which is where the registry holds the secure vault properties and their values:
/_system/config/repository/components/secure-vault
5. Inside the secure vault, create a new property with a name and paste the encrypted value which you acquired in the first step.
6. Try to get the property from the mediation sequence, as in the sketch below:
wso2:vault-lookup('prop-name')
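For reference, here is a minimal sketch of how that lookup might sit inside a mediation sequence; the sequence name and the log step are illustrative only:
<sequence xmlns="http://ws.apache.org/ns/synapse" name="VaultLookupDemo">
    <!-- pull the secret from the secure vault into a context property -->
    <property name="secret" expression="wso2:vault-lookup('prop-name')" scope="default"/>
    <!-- log only the length, to confirm the lookup worked without leaking the value -->
    <log level="custom">
        <property name="secret-length" expression="fn:string-length($ctx:secret)"/>
    </log>
</sequence>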
Since you have confirmed that this works as expected in a vanilla EI server, can you compare the configurations in the secret-conf.properties file, located in the [EI_HOME]/conf/security directory, between the existing server and the vanilla pack (where this is working)?
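A quick way to spot differences between the two files; a sketch in which both install paths are placeholders:
diff /opt/existing-ei/conf/security/secret-conf.properties /opt/vanilla-ei/conf/security/secret-conf.properties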

Multiple lifecycles with s3cmd

I want to have multiple lifecycle rules for the many folders in my bucket.
This seems easy if I use the web interface, but this has to be an automated process, so, at least in my case, it must use s3cmd.
It works fine when I use:
s3cmd expire ...
But somehow, every time I run this, my previous lifecycle rule gets overwritten.
There's an issue on GitHub:
https://github.com/s3tools/s3cmd/issues/863
My question is: is there another way?
You made me realize I had the exact same problem. Another way to access the expire rules with s3cmd is to show the lifecycle configuration of the bucket:
s3cmd getlifecycle s3://bucketname
This way you get some XML-formatted text:
<?xml version="1.0" ?>
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>RULEIDENTIFIER</ID>
    <Prefix>PREFIX</Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>NUMBEROFDAYS</Days>
    </Expiration>
  </Rule>
  <Rule>
    <ID>RULEIDENTIFIER2</ID>
    <Prefix>PREFIX2</Prefix>
    <Status>Enabled</Status>
    <Expiration>
      <Days>NUMBEROFDAYS2</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
If you put that text in a file, changing the appropriate fields (use identifiers of your choice, set the prefixes you want and the number of days until expiration), you can then use the following command (replacing FILE with the path of the file containing the rules):
s3cmd setlifecycle FILE s3://bucketname
That should work; in my case I now see several rules when I execute the getlifecycle command, although I do not know yet whether the objects actually expire.
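Putting it together, the whole round trip might look like this; a sketch in which s3://bucketname and the edit step are placeholders:
s3cmd getlifecycle s3://bucketname > lifecycle.xml
# edit lifecycle.xml and add a second <Rule> block as shown above
s3cmd setlifecycle lifecycle.xml s3://bucketname
# verify that both rules are now present
s3cmd getlifecycle s3://bucketname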

Error provisioning namespace: ORA-20001 Request could not be processed in Oracle APEX

I finally managed to install Oracle APEX 5.1.2, but I have a problem with creating a workspace. Whenever I try to do so, at the end I get an error:
I tried to create this workspace with the following values:
The strange thing is that when I choose Yes for the Reuse Existing Schema option, no schemas are listed. Is it possible that APEX somehow doesn't have access to managing schemas?
I am using APEX with ORDS. On the home page I am told that I have 1 workspace and 1 schema.
I've tried:
Using strong passwords, as mentioned here
Changing the provisioning type to request: the effect is the same. If a user requests a workspace and I accept it, I get the exact same error.
Enabling OMF with the parameter DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata' -> *.dbf files are not created in that directory either before or after the change.
The root cause of this problem was installing APEX on both CDB$ROOT and PDB1. I uninstalled APEX from the root, ran the @utlrp.sql repair script as in this tutorial, and installed APEX again, but only on PDB1. The workspace was then created successfully.
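To check where APEX ended up installed, you can query the registry in each container. A sketch, assuming a SYSDBA session on a multitenant 12c-style database with a PDB named PDB1:
alter session set container = CDB$ROOT;
select comp_id, version, status from dba_registry where comp_id = 'APEX';
alter session set container = PDB1;
select comp_id, version, status from dba_registry where comp_id = 'APEX';
-- after the fix, APEX should be listed only in the PDB, not in CDB$ROOT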
I had the same problem (APEX 18.1/ORDS) in a database without a CDB configured. The solution in my case was to run the @apex_rest_config.sql script.
After that, the workspace is created without any problem.
If you don't want to reinstall APEX to move it from the CDB to the PDB, I suggest you try setting the PDB mapping in your ORDS config file:
https://docs.oracle.com/en/database/oracle/oracle-rest-data-services/20.2/aelig/configuring-REST-data-services.html#GUID-694B2F89-CE4F-4AB0-88E2-EB35D03DEC3C
I did it by adding
<entry key="db.serviceNameSuffix"></entry>
to the end of my defaults.xml (you can find its location by running
$ java -jar ords.war configdir ).
Then access APEX with /yourpdb in the path, e.g.:
http://server:port/ords/pdb1
This will run APEX from that PDB instead of from the CDB, and it will create the workspace in there, which should work. It did for me.
I had the same problem on Oracle 12c; following this link solved it. The problem is that users can't create a workspace in the CDB, so you must switch the session container to the PDB using the following steps:
$root> cd ~/TEMP/apex
$root> sqlplus
Enter user-name: sys as sysdba
Enter password:
SQL> exec dbms_xdb.sethttpport(0);    /* disable the XDB HTTP port in the root container */
SQL> alter session set container=YOURAPPEXPDB;
SQL> exec dbms_xdb.sethttpport(8181); /* re-enable it inside the PDB */
SQL> alter system register;
-- then install Oracle APEX again
To remove Oracle APEX I used this link; it worked perfectly for me.

PROC HTTP with an HTTPS URL

So, I want to use the Google URL Shortener API via PROC HTTP. When I run this code
filename req "D:\input.txt";
filename resp "D:\output.txt";
proc http
  url="https://www.googleapis.com/urlshortener/v1/url"
  method="POST"
  in=req
  ct="application/JSON"
  out=resp;
run;
(where D:\input.txt looks like {"longUrl": "http://www.myurl.com"} ), everything works great on my home SAS Base 9.3. But at work, on EG 4.3, I get:
NOTE: The SAS System stopped processing this step because of errors.
and there is no way to debug it. After googling, I found that I have to set a Java system option like this:
-jreoptions (-Djavax.net.ssl.trustStore=full-path-to-the-trust-store -Djavax.net.ssl.trustStorePassword=trustStorePassword)
But where can I get "the certificate of the service to be trusted" and the password for it?
Edit: As I noted in the comments below, my work SAS is installed on a server, so I don't have direct access to its configuration. Also, it isn't a good idea to change the server's config. So I googled some more and found a nice solution using cURL, without the X command (because it is blocked in my EG). The equivalent syntax is:
filename test pipe 'curl -X POST -d @D:\input.txt https://www.googleapis.com/urlshortener/v1/url --header "Content-Type:application/json"';
data _null_;
  infile test missover lrecl=32000;
  input;
  file resp;
  put _infile_;
run;
Hope it helps someone.
Where to get the certificate
Open the URL that you want the certificate from in Chrome. Click on the lock icon in the URL bar, click on the "details" tab and then click on "Save as file" in the bottom right. You will need to know which trust store you are going to use at this stage; see the following step.
The password and trust store are defined by you. A trust store is in most cases nothing more than an encrypted zip file. There are a lot of tools out there that allow you to create a trust store, encrypt it and import certificates into it. The choice will depend on what OS you are using. There are some Java-based tools that are OS independent, for example Portecle. It allows you to define various trust stores on different OSes and administer them remotely.
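If you prefer the JDK's built-in tooling, keytool can create the trust store and import the certificate in one step. A sketch; the file names, alias and password are placeholders:
keytool -import -alias myservice -file service-cert.cer -keystore truststore.jks -storepass myStorePassword
The resulting truststore.jks path and myStorePassword are then the values to pass to -Djavax.net.ssl.trustStore and -Djavax.net.ssl.trustStorePassword.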
Regards,
Vasilij

Custom endpoint in AWS PowerShell

I am trying to use AWS PowerShell with Eucalyptus.
I can do this with the AWS CLI using the --endpoint-url parameter.
Is it possible to set the endpoint URL in AWS PowerShell?
Can I create a custom region with my own endpoint URL in AWS PowerShell?
--UPDATE--
Newer versions of the AWS Tools for Windows PowerShell (I'm running 3.1.66.0 according to Get-AWSPowerShellVersion) have an optional -EndpointUrl parameter on the relevant commands.
Example:
Get-EC2Instance -EndpointUrl https://somehostnamehere
Additionally, the bug mentioned in the original answer below has been fixed.
Good stuff!
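The parameter is available on other service cmdlets as well; for example, listing S3 buckets against a custom endpoint might look like this (the hostname is a placeholder, and I'm assuming the S3 cmdlets in your version expose the same parameter):
Get-S3Bucket -EndpointUrl https://somehostnamehere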
--ORIGINAL ANSWER--
TL;DR
Download the default endpoint config file from here: https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/endpoints.json
Customize it. Example:
{
  "version": 2,
  "endpoints": {
    "*/*": {
      "endpoint": "your_endpoint_here"
    }
  }
}
After importing the AWSPowerShell module, tell the SDK to use your customized endpoint config. Example:
[Amazon.AWSConfigs]::EndpointDefinition = "path to your customized Amazon.endpoints.json here"
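In context, a full session might look like this; a sketch in which the config path and the trailing cmdlet are placeholders:
Import-Module AWSPowerShell
[Amazon.AWSConfigs]::EndpointDefinition = "C:\aws\endpoints.json"
# subsequent cmdlets now resolve their endpoints from the customized file
Get-EC2Instance -Region us-east-1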
Note: there is a bug in the underlying SDK that prevents endpoints that have a path component from being signed correctly. The bug affects this solution as well as the solution @HyperAnthony proposed.
Additional Info
Reading through the .NET SDK docs, I stumbled across a section revealing that one can globally set the region rules given a file: http://docs.aws.amazon.com/AWSSdkDocsNET/latest/V2/DeveloperGuide/net-dg-config-other.html#config-setting-awsendpointdefinition
Unfortunately, I couldn't find anywhere that documents the format of such a file.
I then spelunked through the AWSSDK.Core.dll code and found where the SDK loads the file (see the LoadEndpointDefinitions() method at https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/RegionEndpoint.cs).
Reading through the code, if a file isn't explicitly specified via AWSConfigs.EndpointDefinition, it ultimately loads the file from an embedded resource (i.e. https://github.com/aws/aws-sdk-net/blob/master/sdk/src/Core/endpoints.json).
I don't believe that it is. This list of common parameters (which can be used with all AWS PowerShell cmdlets) does not include a service URL; it seems instead to opt for a simple Region string that sets the service URL based on a set of known regions.
This AWS .NET Development forum post suggests that you can set the Service URL on a .NET SDK config object, if you're interested in a possible alternative in PowerShell. Here's an example usage from that thread:
$config = New-Object Amazon.EC2.AmazonEC2Config
$config.ServiceURL = "https://ec2.us-west-1.amazonaws.com"
$client = [Amazon.AWSClientFactory]::CreateAmazonEC2Client($accessKeyID, $secretKeyID, $config)
It looks like you can use it with most config objects when setting up a client. Here are some examples that have the ServiceURL property; I would imagine this is on almost all AWS config objects:
AmazonEC2Config
AmazonS3Config
AmazonRDSConfig
Older versions of the documentation (for v1) noted that this property will be ignored if the RegionEndpoint is set. I'm not sure if this is still the case with v2.