We are trying to use Stub Runner Boot to have a stub server and stubs available on Nexus (and local repo).
Locally I more or less sorted it out, with the help of other questions I submitted.
But now I think I'm facing another problem and I'm stuck again: we are going to deploy the stub server to our PCF for smoke testing.
We can happily say curl pcf_host/stubs and it will respond with the list of configured stubs and port numbers.
But the stubs will be running on some port (which we can even make static by configuring the stub server), and I don't think we'll be able to call PCF on a port other than 443 (or perhaps 80), or can we?
Now that I've written all of this, I'm starting to realise that the problem is more related to PCF than to SCC, and I must say that my knowledge of PCF is even smaller than my knowledge of SCC.
I would appreciate it if someone could help. Thanks.
Very good that you asked this question :)
We have already solved this problem in Spring Cloud Pipelines. You can read more about it at https://cloud.spring.io/spring-cloud-pipelines/single/spring-cloud-pipelines.html#_3_8_enable_stubs_for_smoke_tests .
To put it briefly, you need to open the ports and allow multiple port binding for your app, and you also have to generate the routes. We already do that here: https://github.com/spring-cloud/spring-cloud-pipelines/blob/master/common/src/main/bash/pipeline-cf.sh#L577
Let me copy the main part of that code over here:
# FUNCTION: addMultiplePortsSupport {{{
# Adds multiple ports support for Stub Runner Boot
# Uses [PAAS_TEST_SPACE_PREFIX], [ENVIRONMENT] env vars
#
# $1 - Stub Runner name
# $2 - IDs of stubs to be downloaded
# $3 - path to Stub Runner manifest
function addMultiplePortsSupport() {
    local stubRunnerName="${1}"
    local stubrunnerIds="${2}"
    local pathToManifest="${3}"
    local appName
    appName=$(retrieveAppName)
    local hostname
    hostname="$(hostname "${stubRunnerName}" "${ENVIRONMENT}" "${pathToManifest}")"
    hostname="${hostname}-${appName}"
    echo "Hostname for ${stubRunnerName} is ${hostname}"

    local testSpace="${PAAS_TEST_SPACE_PREFIX}-${appName}"
    local domain
    domain="$( getDomain "${hostname}" )"
    echo "Domain for ${stubRunnerName} is ${domain}"

    # APPLICATION_HOSTNAME and APPLICATION_DOMAIN will be used for stub registration. Stub Runner Boot
    # will use these environment variables to hardcode the hostname of the stubs
    setEnvVar "${stubRunnerName}" "APPLICATION_HOSTNAME" "${hostname}"
    setEnvVar "${stubRunnerName}" "APPLICATION_DOMAIN" "${domain}"

    local previousIfs="${IFS}"
    local listOfPorts=""
    local appGuid
    appGuid="$( "${CF_BIN}" curl "/v2/apps?q=name:${stubRunnerName}" -X GET | jq '.resources[0].metadata.guid' | sed 's/^"\(.*\)"$/\1/' )"
    echo "App GUID for ${stubRunnerName} is ${appGuid}"

    # Build the comma-separated list of ports from the stub ids (each id ends with ":<port>")
    IFS="," read -ra vals <<< "${stubrunnerIds}"
    for stub in "${vals[@]}"; do
        echo "Parsing ${stub}"
        local port
        port=${stub##*:}
        if [[ "${listOfPorts}" == "" ]]; then
            listOfPorts="${port}"
        else
            listOfPorts="${listOfPorts},${port}"
        fi
    done
    echo "List of added ports: [${listOfPorts}]"

    # Allow the app to bind to 8080 plus every stub port
    "${CF_BIN}" curl "/v2/apps/${appGuid}" -X PUT -d "{\"ports\":[8080,${listOfPorts}]}"
    echo "Successfully updated the list of ports for [${stubRunnerName}]"

    # Create one route per stub port and map it to that port
    IFS="," read -ra vals <<< "${stubrunnerIds}"
    for stub in "${vals[@]}"; do
        echo "Parsing ${stub}"
        local port
        port=${stub##*:}
        local newHostname="${hostname}-${port}"
        echo "Creating route with hostname [${newHostname}]"
        "${CF_BIN}" create-route "${testSpace}" "${domain}" --hostname "${newHostname}"
        local routeGuid
        routeGuid="$( "${CF_BIN}" curl -X GET "/v2/routes?q=host:${newHostname}" | jq '.resources[0].metadata.guid' | sed 's/^"\(.*\)"$/\1/' )"
        echo "GUID of the new route is [${routeGuid}]. Will update the mapping for port [${port}]"
        "${CF_BIN}" curl "/v2/route_mappings" -X POST -d "{ \"app_guid\": \"${appGuid}\", \"route_guid\": \"${routeGuid}\", \"app_port\": ${port} }"
        echo "Successfully updated the new route mapping for port [${port}]"
    done
    IFS="${previousIfs}"
} # }}}
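For reference, a minimal sketch of how this function might be invoked from a deployment script. The app name, stub ids and manifest path below are illustrative, not taken from the pipeline, and it assumes the rest of pipeline-cf.sh is sourced so that retrieveAppName, getDomain, setEnvVar and CF_BIN exist:
# Hypothetical invocation; stub ids follow the groupId:artifactId:version:classifier:port convention
export ENVIRONMENT="TEST"
export PAAS_TEST_SPACE_PREFIX="sc-pipelines-test"
addMultiplePortsSupport "stubrunner-my-app" \
    "com.example:beer-api-producer:1.0.0:stubs:10001,com.example:payment-producer:1.0.0:stubs:10002" \
    "manifest-test.yml"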
I have a problem: I can't generate the certificates on AWS EC2.
Linux AWS
I'm trying to execute this command over SSH: docker run --rm -p 3000:3000 -p 3001:3001 -e ASPNETCORE_HTTPS_PORT=https://+:3001 -e ASPNETCORE_ENVIRONMENT="Development" -e ASPNETCORE_URLS=https://+:3001 $MY ECR CONTAINER HERE$
I also tried docker run --rm -p 3000:3000 -p 3001:3001 -e ASPNETCORE_HTTPS_PORT=https://+:3001 -e ASPNETCORE_ENVIRONMENT="Development" -e ASPNETCORE_URLS=https://+:3001 -v ASPNETCORE_Kestrel__Certificates__Default__Password=$MY PW$* -v ASPNETCORE_Kestrel__Certificates__Default__Path=%USERPROFILE%/aspnet/https/aspnetapp.pfx $MY CONTAINER$
(Screenshots attached: the error over SSH, my Dockerfile, my launch settings, dotnet info on the Linux AWS box, and AWS Certificate Manager.)
It works perfectly over HTTP on port 80, but to enable HTTPS on port 443 the Docker container needs a certificate.
What do I need to do to generate this certificate on AWS Linux?
Edit:
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
      Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
      No XML encryptor configured. Key {f37427eb-3dc8-4d33-9177-92caadc2c880} may be persisted to storage in unencrypted form.
After a lot of searching I found the following answers and my project is now live.
1º I edited my Program.cs so that it uses HTTPS redirection and HSTS, and configured the forwarded headers.
Here is the code:
// Requires: using Microsoft.AspNetCore.HttpOverrides;
builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders = ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
});
app.UseSwagger();
app.UseSwaggerUI(c =>
{
    c.SwaggerEndpoint("/swagger/v1/swagger.json",
        "Api Documentation for MyLandingApp");
});

app.UseHsts();
app.UseHttpsRedirection();
app.UseCors("MyLandingAppPolicy");
app.UseForwardedHeaders();

// Redirect any request that is not already HTTPS, either directly or via the X-Forwarded-Proto header
app.Use(async (context, next) =>
{
    if (context.Request.IsHttps || context.Request.Headers["X-Forwarded-Proto"] == Uri.UriSchemeHttps)
    {
        await next();
    }
    else
    {
        string queryString = context.Request.QueryString.HasValue ? context.Request.QueryString.Value : string.Empty;
        var https = "https://" + context.Request.Host + context.Request.Path + queryString;
        context.Response.Redirect(https);
    }
});

app.UseAuthentication();
app.UseAuthorization();
2º I added this to my appsettings.json:
"https_port": 3001,
3º I changed my Dockerfile to create a self-signed certificate and enable HTTPS on docker run.
(Dockerfile attached as a screenshot.)
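Since the actual Dockerfile was only shared as an image, here is a minimal sketch of the kind of certificate-generation step it might contain, assuming the .NET SDK's dev-certs tool is available in the build image; the path and password are placeholders, not taken from the original Dockerfile:
# Hypothetical: generate a self-signed development certificate inside the image
dotnet dev-certs https -ep /https/aspnetapp.pfx -p "<MY_PW>"
# Kestrel is then pointed at it via environment variables at run time, e.g.:
#   -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
#   -e ASPNETCORE_Kestrel__Certificates__Default__Password=<MY_PW>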
4º I changed the docker run command and removed the HTTP port, which I wouldn't use anyway; I'll explain why later.
docker run --rm -p 3001:3001 -e ASPNETCORE_HTTPS_PORT=https://+:3001 -e ASPNETCORE_ENVIRONMENT="Production" -e ASPNETCORE_URLS=https://+:3001 $MY CONTAINER IN ESR$
5º I configured the load balancer like this:
HTTP 80 - Load Balancer http80
HTTPS 443 - Load Balancer https443
But there's a trick to it...
You need to create the target group pointing to the main server, then take its private IP and create a new target group.
(Target group screenshot.)
With this you will have the redirection and certificate configuration done for your API.
Remember that the HTTPS 443 listener needs a valid certificate.
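The load balancer steps above were only shared as screenshots; a rough sketch of the equivalent AWS CLI calls, purely as an illustration (all names, ARNs and IPs below are placeholders and the exact setup may differ):
# Hypothetical equivalents of the console steps above (all identifiers are placeholders)
aws elbv2 create-target-group \
    --name api-https-tg --protocol HTTPS --port 3001 \
    --vpc-id vpc-0123456789abcdef0 --target-type ip

aws elbv2 register-targets \
    --target-group-arn arn:aws:elasticloadbalancing:...:targetgroup/api-https-tg/... \
    --targets Id=10.0.1.50

# The 443 listener needs a valid ACM certificate
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/... \
    --protocol HTTPS --port 443 \
    --certificates CertificateArn=arn:aws:acm:...:certificate/... \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/api-https-tg/...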
We have some test, dev and CI servers that we have set up as long-running persistent spot instances mapped to specific domains using Route 53. This works great - we get the savings and we can allocate these without too much concern about cost - but every now and then we lose an instance due to availability. When it comes back, it comes back with a different IP address, which breaks the route.
Is there a good way to have these instances automatically remap to the new IP address when they come back online (usually within a minute or two)?
Caution: I'm not convinced this approach is working after all. While I can confirm everything runs as I expected it to, the new routes didn't get set up correctly after these machines were assigned new spot instances. I'm not sure whether this is because some service is not started by the time this script runs or whether Amazon specifically prohibits this behavior. I'd be curious to hear what others have found.
--
N.B. The right answer here might well be using Elastic IP addresses, which as I understand it give you a single static IP address, avoiding this issue altogether. I've not done the cost calculation on this, but it might well be cheaper than the solution offered below.
What we ended up coming up with is a script that uses the AWS instance metadata and cli to make a route53 call upon reboot. This did NOT work on our old Ubuntu 14.04 instances but appears to on our newer Ubuntu 20.04 instances.
Here's how it works:
We built a little script called setupRoute53.sh that knows how to make a single call to route53.
We added a job to cron to run this on each reboot.
(Bonus) We also created an Ansible script that adds an extra crontab line for each virtual host we're running locally - we reverse proxy multiple services using nginx (see the multi-host sketch after the crontab below).
We're currently running this within the ubuntu user - the crontab looks like this:
# m h dom mon dow command
@reboot /home/ubuntu/setupRoute53.sh example.com test.example.com
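For that multi-host bonus, the crontab simply carries one @reboot line per hostname. A sketch with purely illustrative hostnames:
# m h dom mon dow command
@reboot /home/ubuntu/setupRoute53.sh example.com test.example.com
@reboot /home/ubuntu/setupRoute53.sh example.com dev.example.com
@reboot /home/ubuntu/setupRoute53.sh example.com ci.example.com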
And setupRoute53.sh looks like this:
#!/bin/sh
# setupRoute53.sh
export ROOT_DOMAIN="$1"
export ROUTE53_HOSTNAME="$2"
export IP="$3"

if [ -z "$1" ]; then
    echo "Usage: $0 <route53 domain> <this hostname> [<ip>]";
    echo "";
    echo "Example: $0 tokenoftrust.com test.tokenoftrust.com";
    echo;
    exit;
fi

if [ -z "$3" ]; then
    echo "IP not given...trying EC2 metadata...";
    IP=$( curl http://169.254.169.254/latest/meta-data/public-ipv4 )
fi

echo "Updating $ROUTE53_HOSTNAME to : $IP"

HOSTED_ZONE_ID=$( aws route53 list-hosted-zones-by-name | grep -B 1 -e "$ROOT_DOMAIN" | sed 's/.*hostedzone\/\([A-Za-z0-9]*\)\".*/\1/' | head -n 1 )
echo "Hosted zone being modified: $HOSTED_ZONE_ID"

# Build the change batch, substituting the real IP and hostname into the template
INPUT_JSON=$(echo '{
  "Comment": "Update the A record set",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "HOSTNAME",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [
          {
            "Value": "127.0.0.1"
          }
        ]
      }
    }
  ]
}' | sed "s/127\.0\.0\.1/$IP/" | sed "s/HOSTNAME/$ROUTE53_HOSTNAME/" )

# http://docs.aws.amazon.com/cli/latest/reference/route53/change-resource-record-sets.html
# We want to use the string variable command so put the file contents (batch-changes file) in the following JSON
INPUT_JSON="{ \"ChangeBatch\": $INPUT_JSON }"

aws route53 change-resource-record-sets --hosted-zone-id "$HOSTED_ZONE_ID" --cli-input-json "$INPUT_JSON"

exit 0;
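A quick manual test before relying on the cron entry, assuming the AWS CLI is configured and the instance role is allowed to call Route 53 (domain, hostname and IP are illustrative):
chmod +x /home/ubuntu/setupRoute53.sh
# Look up the public IP via EC2 metadata:
/home/ubuntu/setupRoute53.sh example.com test.example.com
# Or pass an explicit IP as the third argument:
/home/ubuntu/setupRoute53.sh example.com test.example.com 203.0.113.10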
When I spin up my AWS machine, the first thing I do is run hostnamectl set-hostname myhost.test.com, but then when I install and run Puppet, it picks up standard-1-ami.test.com as the cert name. standard-1-ami is the name of my AMI.
Where is it getting this name from on the OS?
I have this issue as well. Every time I build a new machine without setting the hostname in a user-data script, I run into it. I have noticed that the initial hostname is cached somewhere in memory.
Here's how I fix it:
Hostname: new_host ; IP: 192.168.10.50 ; DomainName: inside.myhouse.com
hostnamectl set-hostname new_host
echo "192.168.10.50 new_host.inside.myhouse.com new_host" >> /etc/hosts
echo "new_host" > /etc/hostname
service network restart
These three places are where the hostname "lives" or can be retrieved from.
To validate my configs, I run these 3 commands:
$ hostname
new_host
$ hostname -f
new_host.inside.myhouse.com
$ hostname -i
192.168.10.50
Note that, if your prompt is set to display your hostname, it may not change until you log back in. If the hostname and hostname -f commands return the right values, you can run Puppet and it should use the correct hostname.
BTW: I use Red Hat. YMMV.
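Since the problem only appears when the hostname isn't set early enough, one way to bake the fix in is an EC2 user-data script that applies the same three changes at first boot. A minimal sketch, reusing the illustrative hostname, IP and domain from above:
#!/bin/bash
# Hypothetical user-data sketch: set the hostname before Puppet (or anything else) first runs.
hostnamectl set-hostname new_host
echo "192.168.10.50 new_host.inside.myhouse.com new_host" >> /etc/hosts
echo "new_host" > /etc/hostname
# On Red Hat-style images; newer systemd-based images may not have this service.
service network restart || true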
I'm writing a zero-downtime deploy script for Django.
When a deploy is made, it creates a new Django server with the most recent code version, writes a new nginx config pointing to the right socket, and reloads nginx.
How do I find out when the old nginx workers have finished serving all of their connections, so I can drop the old Django server?
Is looking at the worker PIDs the best option? I can't use the nginx status URL because the old config stops receiving connections.
Additionally, there's another problem. Django is my backend, and nginx also proxies to a Node server that serves the client. Is it possible to look at the active connections of a single upstream? Otherwise I'll have to wait for all connections on the frontend to finish too.
Well, in case anyone comes looking for it, I ended up with this solution:
[...]
# Run API
echo ":: Starting new server"
pm2 start ./server.sh --name api-$PRJ > /dev/null
# Copy nginx config
#echo ":: Swap nginx config"
rm ~/nginx/api.atados.com.br.*
cp ~/deploy/api/nginx.conf ~/nginx/api.atados.com.br.$PRJ.conf
perl -pi -e 's/{PRJ}/$PRJ/g' ~/nginx/api.atados.com.br.$PRJ.conf
# Grab the PIDs of the current nginx workers (the sed drops the last matching line, assumed to be the grep process itself)
workers=`ps -aux | grep "nginx: worker" | sed "$ d" | awk '{print $2}'`
# Reload nginx
echo ":: Reloading nginx and waiting for old connections to drop."
sudo service nginx reload > /dev/null
# Wait for workers to die
for job in $workers
do
    while [ -e /proc/$job ]
    do
        sleep 1
    done
done
# Close old server
pm2 list | grep api | awk '{print $2}' | grep -v $PRJ | xargs pm2 delete > /dev/null
[...]
I'm able to run the Rails server directly on the production server, but the site is not served from the domain name.
I have a Rails app deployed on an EC2 instance with Unicorn and nginx; both are running.
If I run rails s RAILS_ENV=production, it runs properly on the IP address at port 3000, but when I enter the hostname www.example.com, it shows error 522 (connection timed out).
Any help on this? I tried googling but nothing helped.
This may work:
rails s -e production -b your_host_ip -p 80
OR
rails s -e production -b 0.0.0.0 -p 80
OR
rails s -e production -b domain -p 80
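If none of these help, it may be worth confirming that nginx (or whatever should answer on the domain) is actually listening and reachable on ports 80/443 from outside. A quick check, offered only as a suggestion (the IP is a placeholder):
# On the instance: confirm something is listening on 80/443
sudo ss -tlnp | grep -E ':80|:443'
# From your own machine: a timeout here usually points at the security group or firewall rather than Rails
curl -v --max-time 10 http://203.0.113.10/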