I am trying to set up Jetty to use NSS as its cryptographic engine. I have gotten to the point where the server starts, but any client that tries to connect just hangs in the browser.
The setup process / code I am following is below (32-bit Java 1.6 JVM on Windows).
NSS Database Creation
modutil.exe -create -dbdir C:\nssdb
modutil.exe -fips true -dbdir C:\nssdb
modutil.exe -changepw "NSS FIPS 140-2 Certificate DB" -dbdir C:\nssdb
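To verify the database really ended up in FIPS mode before wiring it into Java (a sanity check; -chkfips should report that FIPS mode is enabled):
modutil.exe -chkfips true -dbdir C:\nssdb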
Load NSS into Java
// PKCS#11 configuration telling the SunPKCS11 provider where NSS lives
String config = "name = NSS\n";
config += "nssLibraryDirectory = C:\\nss\\lib\n";
config += "nssSecmodDirectory = C:\\nssdb\n";
config += "nssDbMode = readWrite\n";
config += "nssModule = fips";
InputStream stream = new ByteArrayInputStream(config.getBytes("UTF-8"));
Provider nss = new sun.security.pkcs11.SunPKCS11(stream);
Security.addProvider(nss);
// Find where SunJSSE currently sits so the NSS-backed replacement can be
// re-inserted at the same position (insertProviderAt is 1-based)
int sunJssePosition = -1;
int currentIndex = 0;
for (Provider provider : Security.getProviders()) {
    if ("SunJSSE".equals(provider.getName())) {
        sunJssePosition = currentIndex + 1;
        break;
    }
    currentIndex++;
}
Security.removeProvider("SunJSSE");
// Re-create SunJSSE in FIPS mode, backed by the NSS PKCS#11 provider
Provider sunJsse = new com.sun.net.ssl.internal.ssl.Provider(nss);
if (sunJssePosition == -1) {
    Security.addProvider(sunJsse);
} else {
    Security.insertProviderAt(sunJsse, sunJssePosition);
}
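A quick way to confirm the provider order after the swap (purely a debugging aid):
for (Provider p : Security.getProviders()) {
    System.out.println(p.getName());
}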
NSS Self Sign Certificate Generation
C:\nss\bin\certutil.exe -S -n 127.0.0.1 -x -t "u,u,u" -s "CN=127.0.0.1, OU=Foo, O=Bar, L=City, ST=NY, C=US" -m 25001 -d C:\nssdb
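Listing the certificate back out is a quick way to confirm it landed in the database:
C:\nss\bin\certutil.exe -L -d C:\nssdb
C:\nss\bin\certutil.exe -L -n 127.0.0.1 -d C:\nssdb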
Jetty Startup
KeyStore ks = KeyStore.getInstance("PKCS11");
ks.load(null, "SuperSecret".toCharArray()); // KeyStore.load() takes a char[], not a String
//Start setting up Jetty
Server server = new Server();
SslContextFactory sslContextFactory = new SslContextFactory();
//sslContextFactory.setKeyStoreProvider("SunPKCS11-NSS");
sslContextFactory.setKeyStore(ks);
//sslContextFactory.setKeyStorePassword(new String("SuperSecret"));
SslSelectChannelConnector sslConnector = new SslSelectChannelConnector(sslContextFactory);
sslConnector.setPort(443);
server.addConnector(sslConnector);
WebAppContext context = new WebAppContext();
//Blah Blah Blah, setup Jetty
server.setHandler(context);
server.start();
server.join();
Any ideas?
Edit: This seems extremely odd, but I can access the server just fine using Internet Explorer. Firefox seems to be the one having an issue.
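For anyone debugging something similar: watching the handshake from the client side can show whether it stalls during cipher negotiation. A quick probe, assuming OpenSSL is available:
openssl s_client -connect 127.0.0.1:443 -tls1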
I have solved the issue. It turns out there are severe bugs in the Java 6 SSL implementation. The solution? Switch to Java 7!
Related
I have a SaaS-based Django app. When a customer asks to use my software, I want to auto-provision a new droplet and auto-deploy the app there, and save the info (IP, customer name, database info, etc.) in my database.
This is my Terraform script, and it is working very well; the database is now running on the droplet:
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

provider "digitalocean" {
  token = "dop_v1_60f33a1<MyToken>a363d033"
}

resource "digitalocean_droplet" "web" {
  image    = "ubuntu-18-04-x64"
  name     = "web-1"
  region   = "nyc3"
  size     = "s-1vcpu-1gb"
  ssh_keys = ["93:<The SSH finger print>::01"]

  connection {
    host        = self.ipv4_address
    user        = "root"
    type        = "ssh"
    private_key = file("/home/py/.ssh/id_rsa") # it works
    timeout     = "2m"
  }

  provisioner "remote-exec" {
    inline = [
      "export PATH=$PATH:/usr/bin",
      # install docker-compose
      # install docker
      # clone my github repo
      "docker-compose up --build -d"
    ]
  }
}
When I run the commands, it should create a new droplet and a new database instance, and connect the database to my Django .env file.
Everything should be created automatically. Can anyone help me with how to do this?
Or is my approach wrong? In this situation, what would be the best solution?
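For the managed-database half, a minimal sketch of what this could look like with the same provider (the cluster name, engine/version, and the /opt/app/.env path are my assumptions, not from your setup):

resource "digitalocean_database_cluster" "db" {
  name       = "customer-db" # assumption: derive this per customer
  engine     = "pg"
  version    = "13"
  size       = "db-s-1vcpu-1gb"
  region     = "nyc3"
  node_count = 1
}

# Inside the droplet resource, before the remote-exec provisioner, a file
# provisioner can render the connection info into the app's .env:
provisioner "file" {
  content     = "DATABASE_URL=${digitalocean_database_cluster.db.uri}\n"
  destination = "/opt/app/.env"
}

The droplet IP and the cluster credentials are then available as resource attributes, so the script that drives terraform apply can read them back with terraform output -json and store them in your own database.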
I have a machine with NixOS (provisioned using Terraform, config), and I want to connect to it using deployment.targetHost = ipAddress and deployment.targetEnv = "none".
But I can't get NixOps to use the ./secrets/stage_ssh_key SSH key.
This is not working (it is actually not documented; I found it here: https://github.com/NixOS/nixops/blob/d4e5b779def1fc9e7cf124930d0148e6bd670051/nixops/backends/none.py#L33-L35):
{
  stage =
    { pkgs, ... }:
    {
      deployment.targetHost = (import ./nixos-generated/stage.nix).terraform.ip;
      deployment.targetEnv = "none";
      deployment.none.sshPrivateKey = builtins.readFile ./secrets/stage_ssh_key;
      deployment.none.sshPublicKey = builtins.readFile ./secrets/stage_ssh_key.pub;
      deployment.none.sshPublicKeyDeployed = true;
      environment.systemPackages = with pkgs; [
        file
      ];
    };
}
nixops ssh stage results in being asked for a password; expected: login without a password.
nixops ssh stage -i ./secrets/stage_ssh_key works as expected; no password is asked for.
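One workaround, as a sketch (plain ssh-agent usage, nothing NixOps-specific): load the generated key into an agent so nixops finds it without -i:

eval $(ssh-agent)
ssh-add ./secrets/stage_ssh_key
nixops ssh stage   # should now log in without a password prompt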
How to reproduce:
download the repo
rm -rf secrets/*
add AWS keys in secrets/aws.nix:
{
  EC2_ACCESS_KEY = "XXXX";
  EC2_SECRET_KEY = "XXXX";
}
nix-shell
make generate_stage_ssh_key
terraform apply
make nixops_create
nixops deploy asks for a password
I'm trying to run an Akka Streams application, but I get an exception when running it on Linux.
When I run it on Windows under the debugger, it works.
I tried both of these commands:
java -jar ./myService.jar -Dconfig.resource=/opt/myservice/conf/application.conf
java -jar ./myService.jar -Dconfig.file=/opt/myService/conf/application.conf
But I get the following exception:
No configuration setting found for key 'akka.stream'
My application.conf file:
akka {
  event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]
  loglevel = "DEBUG"
  actor {
    debug {
      # enable function of LoggingReceive, which is to log any received message at DEBUG level
      receive = on
    }
  }
  stream {
    # Default materializer settings
    materializer {
      max-input-buffer-size = 16
      dispatcher = ""
      subscription-timeout {
        mode = cancel
        timeout = 5s
      }
      # Enable additional troubleshooting logging at DEBUG log level
      debug-logging = off
      # Maximum number of elements emitted in batch if downstream signals large demand
      output-burst-limit = 1000
      auto-fusing = on
      # Those stream elements which have explicit buffers (like mapAsync, mapAsyncUnordered,
      # buffer, flatMapMerge, Source.actorRef, Source.queue, etc.) will preallocate a fixed
      # buffer upon stream materialization if the requested buffer size is less than this
      max-fixed-buffer-size = 1000000000
      sync-processing-limit = 1000
      debug {
        fuzzing-mode = off
      }
    }
    blocking-io-dispatcher = "akka.stream.default-blocking-io-dispatcher"
    default-blocking-io-dispatcher {
      type = "Dispatcher"
      executor = "thread-pool-executor"
      throughput = 1
      thread-pool-executor {
        fixed-pool-size = 16
      }
    }
  }
  # configure overrides to ssl-configuration here (to be used by akka-streams,
  # and akka-http – i.e. when serving https connections)
  ssl-config {
    protocol = "TLSv1.2"
  }
}
ssl-config {
  logger = "com.typesafe.sslconfig.akka.util.AkkaLoggerBridge"
}
I've added:
println(system.settings.config)
but the printed config has no stream section.
Can you assist?
The syntax for the java command line is:
java [options] -jar filename [args]
The ordering matters: you must set any options before the -jar option. Everything after the jar file name is passed to your main method as program arguments, so the JVM never saw your -Dconfig.file setting.
So in your case:
java -Dconfig.file=/opt/myService/conf/application.conf -jar ./myService.jar
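If you cannot change the launch command, another option is to load the file programmatically and hand it to the actor system yourself. A sketch, assuming Typesafe Config is on the classpath and the path above is correct:

import java.io.File
import com.typesafe.config.ConfigFactory
import akka.actor.ActorSystem

val config = ConfigFactory
  .parseFile(new File("/opt/myService/conf/application.conf"))
  .withFallback(ConfigFactory.load()) // keeps reference.conf defaults such as akka.stream
  .resolve()
val system = ActorSystem("myService", config)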
[I am re-editing this question to reflect my latest tests]
I am trying to upgrade my akka / play 2.3 application from
"org.reactivemongo" %% "play2-reactivemongo" % "0.11.7.play23"
to
"org.reactivemongo" %% "play2-reactivemongo" % "0.11.11-play23"
Compilation goes fine but at run-time I get the following error:
[ERROR] -- NettyTransport(akka://reactivemongo)
failed to bind to /127.0.0.1:2552, shutting down Netty transport
...
Caused by: org.jboss.netty.channel.ChannelException: Failed to bind to: /127.0.0.1:2552
The Akka part of application.conf reads as follows:
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
    mailbox {
      requirements {
        "akka.dispatch.BoundedMessageQueueSemantics" = bounded-mailbox
      }
    }
  }
  remote {
    enabled-transports = ["akka.remote.netty.tcp"]
    netty.tcp {
      hostname = "127.0.0.1"
      port = 2552
    }
  }
}
The exception is raised when trying to instantiate the reactivemongo driver
val driver = new reactivemongo.api.MongoDriver()
This suggests that the Mongo driver is using Akka under the hood and is binding to the same address as my main application. And indeed, if I edit my application.conf and change akka.remote.netty.tcp.port from 2552 to 2553, I get the following exception:
[ERROR] -- NettyTransport(akka://reactivemongo)
failed to bind to /127.0.0.1:2553, shutting down Netty transport
In previous versions of ReactiveMongo, instantiating the driver started a new actor system by default, so maybe version 0.11.11 tries to reuse the existing system's configuration?
I have tried to modify the akka port used by the driver as follows:
val customConf = ConfigFactory.parseString("""
  akka {
    remote {
      netty.tcp.port = 4711
    }
  }
""")
val typesafeConfig: com.typesafe.config.Config = ConfigFactory.load(customConf)
val driver = new reactivemongo.api.MongoDriver(Some(typesafeConfig))
But this does not work: the new port is not taken into account, and I keep getting the same error:
[ERROR] -- NettyTransport(akka://reactivemongo)
failed to bind to /127.0.0.1:2552, shutting down Netty transport
Actually, ReactiveMongo loads its configuration from the key 'mongo-async-driver'.
So adding the following lets you configure ReactiveMongo's underlying Akka system:
val customConf = ConfigFactory.parseString("""
  mongo-async-driver {
    akka {
      loglevel = WARNING
      remote {
        enabled-transports = ["akka.remote.netty.tcp"]
        netty.tcp {
          hostname = "127.0.0.1"
          port = 4711
        }
      }
    }
  }
""")
I am using the Zehon FTP utility on a ColdFusion 9 server. When I FTP files, it creates one directory, transfers about 16 files, and then fails with:
com.zehon.exception.FileTransferException: org.apache.commons.vfs.FileSystemException: Could not connect to FTP server on "ftpservername.com".
Here's the code:
<cfscript>
    // FTP server information
    host = "#getSiteList.TMS_FTPADDRESS#";
    username = "#getSiteList.TMS_USERNAME#";
    password = "#getSiteList.TMS_PASSWORD#";
    /* sendingFolder = Folder whose content is to be uploaded recursively
     * to the FTP server.
     */
    sendingFolder = "#local_Folder#";
    /* Forward slash / = root dir of FTP server.
     * If you wish to FTP to privateDir under the root, for example,
     * then set destFolder to "/privateDir".
     */
    destFolder = "/#parent_Folder#";
    FTP = createObject("java", "com.zehon.ftp.FTP");
    thisBatchTransferProgressDefault = createObject("java", "com.zehon.BatchTransferProgressDefault").init();
    FileTransferStatus = createObject("java", "com.zehon.FileTransferStatus");
    try {
        status = FTP.sendFolder(sendingFolder, destFolder, thisBatchTransferProgressDefault, host, username, password);
        if (FileTransferStatus.SUCCESS is status) {
            writeOutput(sendingFolder & " got ftp-ed successfully to folder " & destFolder);
        } else if (FileTransferStatus.FAILURE is status) {
            writeOutput("Failed to ftp to folder " & destFolder);
        }
    } catch (any e) {
        writeOutput(e.message);
    }
</cfscript>
I tried this on two different servers and hit the same issue. If I use FileZilla or CFFTP, I can transfer all my files (CFFTP has issues creating subfolders, which is why I moved away from it, and we want our customers to use the web app to FTP files, not a desktop client). Has anyone else experienced this? If so, was a solution found? Thanks.
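In case it is a transient connection drop mid-batch, a crude retry wrapper around the same sendFolder call could at least confirm that. A sketch only (the retry count and pause are assumptions, and sendFolder re-sends the whole folder on each attempt):

// retry the transfer a few times before giving up (3 tries / 10s pause are assumptions)
attempts = 0;
status = FileTransferStatus.FAILURE;
while (attempts LT 3 AND status NEQ FileTransferStatus.SUCCESS) {
    attempts++;
    try {
        status = FTP.sendFolder(sendingFolder, destFolder, thisBatchTransferProgressDefault, host, username, password);
    } catch (any e) {
        sleep(10000); // ColdFusion's built-in sleep(), in milliseconds
    }
}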