I have a problem: I can't generate the certificates on AWS EC2 (Amazon Linux).
I'm trying to execute this command over SSH: docker run --rm -p 3000:3000 -p 3001:3001 -e ASPNETCORE_HTTPS_PORT=https://+:3001 -e ASPNETCORE_ENVIRONMENT="Development" -e ASPNETCORE_URLS=https://+:3001 $MY ECR CONTAINER HERE$
I also tried: docker run --rm -p 3000:3000 -p 3001:3001 -e ASPNETCORE_HTTPS_PORT=https://+:3001 -e ASPNETCORE_ENVIRONMENT="Development" -e ASPNETCORE_URLS=https://+:3001 -v ASPNETCORE_Kestrel__Certificates__Default__Password=$MY PW$* -v ASPNETCORE_Kestrel__Certificates__Default__Path=%USERPROFILE%/aspnet/https/aspnetapp.pfx $MY CONTAINER$
(Screenshots attached: the SSH error, my Dockerfile, my launchSettings.json, `dotnet --info` output on the AWS Linux host, and AWS Certificate Manager.)
It works perfectly over HTTP on port 80, but HTTPS on port 443 does not; the Docker container needs a certificate. What do I need to do to generate this certificate on AWS Linux?
Edit:
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
      Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {f37427eb-3dc8-4d33-9177-92caadc2c880} may be persisted to storage in unencrypted form.
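Side note: that first warning can be avoided by persisting the Data Protection key ring outside the container, for example with a bind mount. A minimal sketch, assuming a hypothetical host path /opt/aspnet-keys and image name:

# Persist the ASP.NET Core Data Protection key ring on the host so keys
# survive container restarts (host path and image name are examples).
docker run --rm -p 3001:3001 \
  -v /opt/aspnet-keys:/root/.aspnet/DataProtection-Keys \
  my-ecr-image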
After a lot of searching I found the following answers, and my project is now live.
1º I edited my Program.cs so that it uses HTTPS redirection and HSTS, and configured the forwarded headers. The code follows.
// Requires: using Microsoft.AspNetCore.HttpOverrides;
builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
});
app.UseSwagger();
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json",
        "Api Documentation for MyLandingApp");
});
app.UseHsts();
app.UseHttpsRedirection();
app.UseCors("MyLandingAppPolicy");
app.UseForwardedHeaders();
// Redirect any request that is not HTTPS, either directly or via the
// load balancer's X-Forwarded-Proto header, to its HTTPS equivalent.
app.Use(async (context, next) =>
{
    if (context.Request.IsHttps || context.Request.Headers["X-Forwarded-Proto"] == Uri.UriSchemeHttps)
    {
        await next();
    }
    else
    {
        string queryString = context.Request.QueryString.HasValue ? context.Request.QueryString.Value : string.Empty;
        var https = "https://" + context.Request.Host + context.Request.Path + queryString;
        context.Response.Redirect(https);
    }
});
app.UseAuthentication();
app.UseAuthorization();
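A quick way to sanity-check that middleware through the load balancer's HTTP listener (the hostname below is a placeholder): a request without the X-Forwarded-Proto header should be redirected to the https:// URL, while one carrying the header should reach the app.

# Expect a redirect with a Location: https://... header.
curl -i http://my-api.example.com/some/path

# Expect the request to pass through to the app (no redirect).
curl -i -H "X-Forwarded-Proto: https" http://my-api.example.com/some/path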
2º I added the HTTPS port to my appsettings.json:
"https_port": 3001,
3º I changed my Dockerfile to create a self-signed certificate and enable HTTPS on docker run. (Dockerfile screenshot attached.)
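The screenshot isn't reproduced here, but the self-signed-certificate part of such a Dockerfile typically boils down to shell commands like these (a sketch under assumptions: paths, subject, and password are placeholders, and Kestrel is pointed at the PFX via its standard configuration keys):

# Generate a self-signed certificate and bundle it as a PFX for Kestrel
# (paths, subject, and password are placeholders).
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout /https/key.pem -out /https/cert.pem
openssl pkcs12 -export \
  -inkey /https/key.pem -in /https/cert.pem \
  -out /https/aspnetapp.pfx -passout pass:changeit

# Point Kestrel at the certificate (set as ENV in the Dockerfile
# or with -e on docker run):
export ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
export ASPNETCORE_Kestrel__Certificates__Default__Password=changeit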
4º I changed the docker container execution string and removed the HTTP port, which I wouldn't use anyway; I'll explain why later.
docker run --rm -p 3001:3001 -e ASPNETCORE_HTTPS_PORT=https://+:3001 -e ASPNETCORE_ENVIRONMENT="Production" -e ASPNETCORE_URLS=https://+:3001 $MY CONTAINER IN ESR$
5º I configured the load balancer like this: one listener on HTTP 80 and one on HTTPS 443 (screenshots attached).
But there's a trick to it: you need to create the target group pointing to the main server, then take its private IP and create a new target group (target group screenshot attached).
With this you will have configured both the redirection and the certificate for your API. Remember that the HTTPS 443 listener needs a valid certificate.
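For reference, a hypothetical AWS CLI equivalent of that target-group and listener setup (every ARN and ID below is a placeholder; the certificate comes from AWS Certificate Manager):

# Target group that forwards to the instance's private IP on port 3001.
aws elbv2 create-target-group \
  --name api-https-tg --protocol HTTPS --port 3001 \
  --target-type ip --vpc-id vpc-0123456789abcdef0

# HTTPS:443 listener on the load balancer with an ACM certificate.
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/app/my-lb/abc123 \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=arn:aws:acm:REGION:ACCOUNT:certificate/abc-123 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/api-https-tg/abc123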
Related
I currently have an AWS server set up with Docker to run the Keycloak Docker container. For SSL/TLS, an AWS load balancer is configured to point https/443 traffic to the container, which receives it over 8080, terminating the TLS connection on said load balancer.
When creating the container with the following command, I am able to browse to and log into the Keycloak service via the server's IP address.
docker run --name keycloak -v keybase-storage -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=TempAdminPassword jboss/keycloak
However, if I try to log into the server by browsing to the URL, I am redirected to http://default-host:8080/auth/admin/ and the browser shows a connection error page.
When trying to find a solution, I found how to pass Java options to the container when it is first run; using the resources from this page, I used the following command to start the container (URL replaced for privacy concerns):
docker run --name keycloak -v keybase-storage -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=TempAdminPassword -e JAVA_OPTS_APPEND="-Dkeycloak.frontendUrl=https://sso.IntendedURL.com" jboss/keycloak
However, this yields the same results when trying to browse to the page.
The main clue I have to go on right now is this line near the end of the output of the previously shown docker run command, which reads as follows:
19:23:00,039 INFO [org.wildfly.extension.undertow] (ServerService Thread Pool -- 67) WFLYUT0021: Registered web context: '/auth' for server 'default-server'
What I believe I need to do now is either change the config of the docker container after it has been created (I have been unable to edit files using docker exec, so this is less likely) or pass a Java option into the run command when the container is first started.
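For example, a variant I have not tried yet, using what I understand to be the jboss/keycloak image's documented environment variables for the frontend URL and proxy address forwarding:

# Untested variant: set the frontend URL and enable proxy address
# forwarding behind the TLS-terminating load balancer.
docker run --name keycloak -v keybase-storage -p 8080:8080 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=TempAdminPassword \
  -e KEYCLOAK_FRONTEND_URL=https://sso.IntendedURL.com/auth \
  -e PROXY_ADDRESS_FORWARDING=true \
  jboss/keycloak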
Please let me know if you have any questions or if I can provide any other information.
Thank you.
Environment information:
Operating system: Amazon Linux 2
Docker version: 19.03.13-ce, build 4484c46
Keycloak version: 12.0.1 (WildFly Core 13.0.3.Final)
My Dockerfile is:
FROM nginx
I start a container on AWS with docker run -d --name ng_ex -p 8082:80 nginx, and:
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6489cbb430b9 nginx "nginx -g 'daemon of…" 22 minutes ago Up 22 minutes 0.0.0.0:8082->80/tcp ng_ex
And inside a container:
service nginx status
[ ok ] nginx is running.
But when I try to send a request through the browser to my.ip.address:8082 I get a timeout error instead of the Nginx welcome page. What is my mistake and how do I fix it?
If you're on a VM on AWS, you must set up your security group to allow connections on port 8082, either from the whole internet or only from your IP/proxy IP (the timeout likely comes from this).
Then my.ip.address:8082 should work.
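For example, with the AWS CLI (a sketch; the security-group ID is a placeholder, and you'd restrict the CIDR to your own IP rather than 0.0.0.0/0 where possible):

# Open port 8082 in the instance's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8082 --cidr 0.0.0.0/0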
If you're inside your VM, get the container IP:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
Then curl <container IP>:80 (inside the Docker network the container listens on 80; the 8082 mapping only applies on the host).
If it's still not working, confirm that your image EXPOSEs port 80 at build time.
We are trying to use Stub Runner Boot to have a stub server and stubs available on Nexus (and local repo).
Locally I more or less sorted it out, with the help of other questions I submitted.
But now, I think I will face another problem and I'm stuck again... We are going to deploy the stub server to our PCF for smoke testing.
We can happily say curl pcf_host/stubs and it will respond with the list of configured stubs and port numbers.
But the stubs will be running on some port (which we can even make static by configuring the stub server), and I don't think we'll be able to call PCF on a port other than 443 (or perhaps 80), or can we?
Now that I've written all of this, I'm starting to realise that the problem is more related to PCF than to SCC, and I must say that my knowledge of PCF is even smaller than my knowledge of SCC.
Would appreciate if someone could help. Thanks.
Very good that you asked this question :)
We already solved this problem in Spring Cloud Pipelines. You can read more about it at https://cloud.spring.io/spring-cloud-pipelines/single/spring-cloud-pipelines.html#_3_8_enable_stubs_for_smoke_tests .
To put it briefly, you need to open ports and allow multiple port bindings for your app. You also have to generate routes. We already do that here: https://github.com/spring-cloud/spring-cloud-pipelines/blob/master/common/src/main/bash/pipeline-cf.sh#L577
Let me copy the main part of that code over here
# FUNCTION: addMultiplePortsSupport {{{
# Adds multiple ports support for Stub Runner Boot
# Uses [PAAS_TEST_SPACE_PREFIX], [ENVIRONMENT] env vars
#
# $1 - Stub Runner name
# $2 - IDs of stubs to be downloaded
# $3 - path to Stub Runner manifest
function addMultiplePortsSupport() {
    local stubRunnerName="${1}"
    local stubrunnerIds="${2}"
    local pathToManifest="${3}"
    local appName
    appName=$(retrieveAppName)
    local hostname
    hostname="$(hostname "${stubRunnerName}" "${ENVIRONMENT}" "${pathToManifest}")"
    hostname="${hostname}-${appName}"
    echo "Hostname for ${stubRunnerName} is ${hostname}"

    local testSpace="${PAAS_TEST_SPACE_PREFIX}-${appName}"
    local domain
    domain="$( getDomain "${hostname}" )"
    echo "Domain for ${stubRunnerName} is ${domain}"

    # APPLICATION_HOSTNAME and APPLICATION_DOMAIN will be used for stub registration. Stub Runner Boot
    # will use these environment variables to hardcode the hostname of the stubs
    setEnvVar "${stubRunnerName}" "APPLICATION_HOSTNAME" "${hostname}"
    setEnvVar "${stubRunnerName}" "APPLICATION_DOMAIN" "${domain}"

    local previousIfs="${IFS}"
    local listOfPorts=""
    local appGuid
    appGuid="$( "${CF_BIN}" curl "/v2/apps?q=name:${stubRunnerName}" -X GET | jq '.resources[0].metadata.guid' | sed 's/^"\(.*\)"$/\1/' )"
    echo "App GUID for ${stubRunnerName} is ${appGuid}"

    # Build a comma-separated list of the ports declared in the stub IDs.
    IFS="," read -ra vals <<< "${stubrunnerIds}"
    for stub in "${vals[@]}"; do
        echo "Parsing ${stub}"
        local port
        port=${stub##*:}
        if [[ "${listOfPorts}" == "" ]]; then
            listOfPorts="${port}"
        else
            listOfPorts="${listOfPorts},${port}"
        fi
    done
    echo "List of added ports: [${listOfPorts}]"

    # Allow the app to bind to 8080 plus every stub port.
    "${CF_BIN}" curl "/v2/apps/${appGuid}" -X PUT -d "{\"ports\":[8080,${listOfPorts}]}"
    echo "Successfully updated the list of ports for [${stubRunnerName}]"

    # Create one route per stub port and map it to that port.
    IFS="," read -ra vals <<< "${stubrunnerIds}"
    for stub in "${vals[@]}"; do
        echo "Parsing ${stub}"
        local port
        port=${stub##*:}
        local newHostname="${hostname}-${port}"
        echo "Creating route with hostname [${newHostname}]"
        "${CF_BIN}" create-route "${testSpace}" "${domain}" --hostname "${newHostname}"
        local routeGuid
        routeGuid="$( "${CF_BIN}" curl -X GET "/v2/routes?q=host:${newHostname}" | jq '.resources[0].metadata.guid' | sed 's/^"\(.*\)"$/\1/' )"
        echo "GUID of the new route is [${routeGuid}]. Will update the mapping for port [${port}]"
        "${CF_BIN}" curl "/v2/route_mappings" -X POST -d "{ \"app_guid\": \"${appGuid}\", \"route_guid\": \"${routeGuid}\", \"app_port\": ${port} }"
        echo "Successfully updated the new route mapping for port [${port}]"
    done
    IFS="${previousIfs}"
} # }}}
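A hypothetical invocation, assuming the helper functions the script calls (retrieveAppName, getDomain, setEnvVar, hostname) and the CF_BIN/ENVIRONMENT/PAAS_TEST_SPACE_PREFIX variables are already defined by the surrounding pipeline scripts:

# Hypothetical invocation; each stub ID carries its port after the last colon.
export ENVIRONMENT="TEST"
export PAAS_TEST_SPACE_PREFIX="sc-pipelines-test"
export CF_BIN="cf"
addMultiplePortsSupport "my-stubrunner" \
  "com.example:beer-api-producer:+:stubs:10000,com.example:order-api:+:stubs:10001" \
  "manifest.yml"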
I've created a Docker container on an AWS server which runs an SSH service.
I relied on the following example: https://docs.docker.com/engine/examples/running_ssh_service/ and added my own logic to the Dockerfile.
When trying to log in to the container remotely, I get the password prompt, but the password I set for the SSH user does not work. Trying the exact same password over a local SSH connection (from within the AWS server to 127.0.0.1 -p exported_SSH_port) works perfectly.
Any ideas?
There's a little bug in the Docker docs:
You should change
RUN sed -i 's/PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
To
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config
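To double-check that the substitution actually took effect, something along these lines should work (the container name is a placeholder):

# Inspect the effective setting inside the running container.
docker exec my-sshd grep PermitRootLogin /etc/ssh/sshd_config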
When I try to run a local ESP I get this error:
ERROR:Fetching service config failed(status code 403, reason Forbidden, url ***)
I have a newly created service account; this account works fine with the gcloud CLI.
System: macOS Sierra with Docker for Mac.
This is the command that I use to start the container:
docker run -d --name="esp" --net="host" -v ~/Downloads:/esp gcr.io/endpoints-release/endpoints-runtime:1.0 -s 2017-02-07r5 -v echo.endpoints.****.cloud.goog -p 8082 -a localhost:9000 -k /esp/serviceaccount.json
UPDATE:
I found the error: I had set the version as the service name and the service name as the version.
Now I get no error, but it still doesn't work. This is the console output from the container. As far as I can see everything is fine, but I can't call the proxy at localhost:8082/***
INFO:Constructing an access token with scope https://www.googleapis.com/auth/service.management.readonly
INFO:Service account email: aplha-api@****.iam.gserviceaccount.com
INFO:Refreshing access_token
INFO:Fetching the service configuration from the service management service
nginx: [warn] Using trusted CA certificates file: /etc/nginx/trusted-ca-certificates.crt
This is the corrected command:
docker run -d --name="esp-user-api" --net="host" -v ~/Downloads:/esp gcr.io/endpoints-release/endpoints-runtime:1.0 -s echo.endpoints.***.cloud.goog -v 2017-02-07r5 -p 8082 -a localhost:9000 -k /esp/serviceaccount.json
Aron, I assume:
(1) you are following this user guide: https://cloud.google.com/endpoints/docs/running-esp-localdev
(2) And you do have a backend running on localhost:9000
Have you issued a curl request to localhost:8082/*** as suggested in that user guide? Does the curl command get stuck, or does it return any error message?
If you don't have a local backend running yet, I would recommend following the user guide above to run a local backend. Note that this guide will instruct you to run it on port 8080, so you'll need to change your docker run command from "-a localhost:9000" to "-a localhost:8080" as well.
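A quick sanity check against the echo sample from that guide might look like this (the path and API key are placeholders for whatever your service actually exposes):

# Send a test request through the ESP proxy to the echo backend.
curl -d '{"message": "hello world"}' \
  -H "Content-Type: application/json" \
  "http://localhost:8082/echo?key=YOUR_API_KEY"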
Also, please note this user guide is for a Linux environment; we haven't tried this setup on a Mac yet. We have noticed some users get it working on Windows Docker with extra work, by setting the backend to the IP of the Docker NIC. Note "-a" is short for "--backend".
See https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/google-cloud-endpoints/4sRaSkigPiU/KY8g46NSBgAJ