Child process exits unexpectedly inside a Windows-based Docker container when spawned through Node.js - c++

I am trying to spawn a new process using Node.js in the following way:
import * as childProcess from "child_process";

// Inside the class that owns this._childProcess:
public async executeProcess() {
    this._childProcess = childProcess.spawn("C:\\app\\SampleApp.exe");
}
I then build a Docker image based on mcr.microsoft.com/windows/servercore:ltsc2016 using the following Dockerfile:
ARG core=mcr.microsoft.com/windows/servercore:ltsc2016
ARG target=mcr.microsoft.com/windows/servercore:ltsc2016
FROM $core as download
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';
$ProgressPreference = 'SilentlyContinue';"]
ENV NODE_VERSION 10.16.0
RUN Invoke-WebRequest $('https://nodejs.org/dist/v{0}/node-v{0}-win-x64.zip' -f $env:NODE_VERSION) -OutFile 'node.zip' -UseBasicParsing ; \
Expand-Archive node.zip -DestinationPath C:\ ; \
Rename-Item -Path $('C:\node-v{0}-win-x64' -f $env:NODE_VERSION) -NewName 'C:\nodejs'
FROM $target
ENV NPM_CONFIG_LOGLEVEL info
COPY --from=download /nodejs /Bridge
ARG VS_OUT_DIR=.
WORKDIR /
ADD ${VS_OUT_DIR} ./app
WORKDIR /app
SHELL [ "powershell", "-Command"]
RUN Get-ChildItem -Path C:/Bridge -Recurse -Force
RUN Get-ChildItem Env:
SHELL ["cmd", "/C"]
ENTRYPOINT ["c:/nodejs/node.exe", "./lib/app.js"]

Related

How can I install Oracle Database 18c XE into a Windows Docker container?

I'm not able to install Oracle Database 18c Express Edition into a Windows Docker container.
The Oracle silent setup (documented here) reports success, but no installation is being performed. The destination directory (C:\OracleXE\) is empty. And, of course, nothing is installed.
What am I doing wrong here?
This is my Dockerfile
# escape=`
FROM mcr.microsoft.com/windows:20H2
USER ContainerAdministrator
COPY / /O18c
WORKDIR /O18c
SHELL ["PowerShell", "-Command"]
RUN New-Item 'C:\db-data' -ItemType Directory; New-LocalUser -Name OracleAdministrator -NoPassword -UserMayNotChangePassword -AccountNeverExpires; Set-LocalUser -Name OracleAdministrator -PasswordNeverExpires:$True; $adm = (Get-LocalGroup | Where-Object {$_.Name.IndexOf('Admin') -eq 0}).Name; Add-LocalGroupMember -Group $adm -Member OracleAdministrator
USER OracleAdministrator
RUN ./Setup.exe /s /v"RSP_FILE=C:\O18c\XEInstall.rsp" /v"/L*v C:\O18c\setup.log" /v"/qn"
EXPOSE 1521 5550 3389
VOLUME C:\db-data
ENTRYPOINT PowerShell
This is my XEInstall.rsp file
#Do not leave any parameter with empty value
#Install Directory location, username can be replaced with current user
INSTALLDIR=C:\OracleXE\
#Database password, All users are set with this password, Remove the value once installation is complete
PASSWORD=foobar123!
#If listener port is set to 0, available port will be allocated starting from 1521 automatically
LISTENER_PORT=0
#If EM express port is set to 0, available port will be used starting from 5550 automatically
EMEXPRESS_PORT=0
#Specify char set of the database
CHAR_SET=AL32UTF8
This is my directory structure:
This is my docker build command:
docker build -f .\Dockerfile .\OracleXE184_Win64\
Apparently, the Oracle setup doesn't work when launched from PowerShell.
When run from the standard command prompt, the setup installs fine.
This is my working Dockerfile
# escape=`
FROM mcr.microsoft.com/windows:20H2
USER ContainerAdministrator
COPY / /O18c
WORKDIR /O18c
RUN PowerShell -Command "New-Item 'C:\db-data' -ItemType Directory; New-LocalUser -Name OracleAdministrator -NoPassword -UserMayNotChangePassword -AccountNeverExpires; Set-LocalUser -Name OracleAdministrator -PasswordNeverExpires:$True; $adm = (Get-LocalGroup | Where-Object {$_.Name.IndexOf('Admin') -eq 0}).Name; Add-LocalGroupMember -Group $adm -Member OracleAdministrator;"
USER OracleAdministrator
RUN setup.exe /s /v"RSP_FILE=C:\O18c\XEInstall.rsp" /v"/L*v C:\O18c\setup.log" /v"/qn"
RUN PowerShell -Command "Get-ChildItem; Get-ChildItem \ -Attributes Directory;"
EXPOSE 1521 5550 3389
VOLUME C:\db-data
ENTRYPOINT PowerShell
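A quick way to confirm the fix (not part of the original answer) is to rebuild the image and inspect the install directory and the tail of the setup log; the image tag below is hypothetical:

# Hypothetical tag added to the original build command.
docker build -t oracle-xe-win -f .\Dockerfile .\OracleXE184_Win64\

# The working Dockerfile uses a shell-form ENTRYPOINT, so override it for a one-off check.
docker run --rm --entrypoint powershell oracle-xe-win -Command 'Get-ChildItem C:\OracleXE; Get-Content C:\O18c\setup.log -Tail 20'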

PowerShell does not recognize AWS CLI installed in the same script

I have installed the AWS CLI using a PowerShell script:
$command = "[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12"
Invoke-Expression $command
Invoke-WebRequest -Uri "https://awscli.amazonaws.com/AWSCLIV2.msi" -Outfile C:\AWSCLIV2.msi
$arguments = "/i `"C:\AWSCLIV2.msi`" /quiet"
Start-Process msiexec.exe -ArgumentList $arguments -Wait
aws --version
When I try to run aws --version, it gives the error below:
aws : The term 'aws' is not recognized as the name of a cmdlet, function,
script file, or operable program. Check the spelling of the name, or if a path
was included, verify that the path is correct and try again.
At line:1 char:1
+ aws
+ ~~~
I was able to fix this by adding the line below after installing the AWS CLI:
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine")
Complete code:
$command = "[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12"
Invoke-Expression $command
Invoke-WebRequest -Uri "https://awscli.amazonaws.com/AWSCLIV2.msi" -Outfile C:\AWSCLIV2.msi
$arguments = "/i `"C:\AWSCLIV2.msi`" /quiet"
Start-Process msiexec.exe -ArgumentList $arguments -Wait
$env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine")
aws --version
aws s3 ls
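As background (not stated in the original answer): the MSI writes the new PATH entry to the machine scope in the registry, while the PowerShell session that launched it keeps the PATH it read at startup, which is why reloading the variable works. Calling the executable by its full path is an alternative; a small sketch, assuming the default AWS CLI v2 install directory:

# Assumes the default install location chosen by the AWS CLI v2 MSI;
# adjust the path if you installed to a different directory.
& "C:\Program Files\Amazon\AWSCLIV2\aws.exe" --version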

AWS CLI on Windows Docker

I am working on ASP.NET applications. The current plan is to set up a CI pipeline in AWS with ECS. So far I have created two stages.
Scenario:
I have an ASP.NET Web API application built on .NET Framework 4.6.2. I need to set up a containerized CI/CD pipeline using AWS CodePipeline.
My initial focus is to set up CI only, which includes automating the source build, unit tests, and uploading the build to an ECR repository as a Docker image. This image will be used in the next stage to deploy to ECS.
Current Progress and Problem Description:
Stage 1: Source
This stage was configured successfully with the GitHub webhook.
Stage 2: Build
I used the Microsoft image mcr.microsoft.com/dotnet/framework/sdk:4.7.2 as the build environment image.
This is working fine for build and unit testing.
Now I need to build the Docker image and push it to the ECR repository. I created a repository and ran the commands locally. I also enabled Docker support in my application, so a docker-compose file is there as well.
I am stuck here, as I don't know how to continue.
My current buildspec file consists of the following:
version: 0.2
env:
  variables:
    PROJECT: aspnetapp
    DOTNET_FRAMEWORK: 4.6.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=727003307347.dkr.ecr.eu-west-1.amazonaws.com/ecr-repo-axiapp-cloud
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=aspnetapi
  build:
    commands:
      - echo Build started on `date`
      - nuget restore
      - msbuild $env:PROJECT.sln /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK /p:Configuration=Release /p:DeployIisAppPath="Default Web Site" /p:PackageAsSingleFile=false /p:OutDir=C:\codebuild\artifacts\
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
artifacts:
  files:
    - '**/*'
  base-directory: C:\codebuild\artifacts\
While running, I got an error on aws --version. From this, I understand that the AWS CLI needs to be installed on the build server. For that, I am creating a custom Docker image. I found an article and am following it for this:
https://aws.amazon.com/blogs/devops/extending-aws-codebuild-with-custom-build-environments-for-the-net-framework/
From the article, I have the following Dockerfile:
# escape=`
FROM microsoft/dotnet-framework:4.7.2-runtime
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
#Install NuGet CLI
ENV NUGET_VERSION 4.4.1
RUN New-Item -Type Directory $Env:ProgramFiles\NuGet; `
Invoke-WebRequest -UseBasicParsing https://dist.nuget.org/win-x86-commandline/v$Env:NUGET_VERSION/nuget.exe -OutFile $Env:ProgramFiles\NuGet\nuget.exe
#Install AWS CLI
RUN Invoke-WebRequest -UseBasicParsing https://s3.amazonaws.com/aws-cli/AWSCLI64PY3.msi -OutFile AWSCLI64PY3.msi
# Install VS Test Agent
RUN Invoke-WebRequest -UseBasicParsing https://download.visualstudio.microsoft.com/download/pr/12210068/8a386d27295953ee79281fd1f1832e2d/vs_TestAgent.exe -OutFile vs_TestAgent.exe; `
Start-Process vs_TestAgent.exe -ArgumentList '--quiet', '--norestart', '--nocache' -NoNewWindow -Wait; `
Remove-Item -Force vs_TestAgent.exe; `
# Install VS Build Tools
Invoke-WebRequest -UseBasicParsing https://download.visualstudio.microsoft.com/download/pr/12210059/e64d79b40219aea618ce2fe10ebd5f0d/vs_BuildTools.exe -OutFile vs_BuildTools.exe; `
# Installer won't detect DOTNET_SKIP_FIRST_TIME_EXPERIENCE if ENV is used, must use setx /M
setx /M DOTNET_SKIP_FIRST_TIME_EXPERIENCE 1; `
Start-Process vs_BuildTools.exe -ArgumentList '--add', 'Microsoft.VisualStudio.Workload.MSBuildTools', '--add', 'Microsoft.VisualStudio.Workload.NetCoreBuildTools', '--add', 'Microsoft.VisualStudio.Workload.WebBuildTools;includeRecommended', '--quiet', '--norestart', '--nocache' -NoNewWindow -Wait; `
Remove-Item -Force vs_buildtools.exe; `
Remove-Item -Force -Recurse \"${Env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\"; `
Remove-Item -Force -Recurse ${Env:TEMP}\*; `
Remove-Item -Force -Recurse \"${Env:ProgramData}\Package Cache\"
# Set PATH in one layer to keep image size down.
RUN setx /M PATH $(${Env:PATH} `
+ \";${Env:ProgramFiles}\NuGet\" `
+ \";${Env:ProgramFiles(x86)}\Microsoft Visual Studio\2017\TestAgent\Common7\IDE\CommonExtensions\Microsoft\TestWindow\" `
+ \";${Env:ProgramFiles(x86)}\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\")
# Install Targeting Packs
RUN @('4.0', '4.5.2', '4.6.2', '4.7.2') `
| %{ `
Invoke-WebRequest -UseBasicParsing https://dotnetbinaries.blob.core.windows.net/referenceassemblies/v${_}.zip -OutFile referenceassemblies.zip; `
Expand-Archive -Force referenceassemblies.zip -DestinationPath \"${Env:ProgramFiles(x86)}\Reference Assemblies\Microsoft\Framework\.NETFramework\"; `
Remove-Item -Force referenceassemblies.zip; `
}
I tried to download the AWS CLI using the command above.
If my understanding is right, please help me update the Dockerfile to install the AWS CLI.
In the same way, do I also need to install Docker for Windows on the build server?
I have done this. Please find the updated Dockerfile:
# escape=`
FROM microsoft/dotnet-framework:4.7.2-runtime
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
#Install NuGet CLI
ENV NUGET_VERSION 4.4.1
RUN New-Item -Type Directory $Env:ProgramFiles\NuGet; `
Invoke-WebRequest -UseBasicParsing https://dist.nuget.org/win-x86-commandline/v$Env:NUGET_VERSION/nuget.exe -OutFile $Env:ProgramFiles\NuGet\nuget.exe
#Install AWS CLI
RUN Invoke-WebRequest -UseBasicParsing https://s3.amazonaws.com/aws-cli/AWSCLI64PY3.msi -OutFile AWSCLI64PY3.msi; `
Start-Process "msiexec.exe" -ArgumentList '/i', 'AWSCLI64PY3.msi', '/qn', '/norestart' -Wait -NoNewWindow; `
Remove-Item -Force AWSCLI64PY3.msi; `
# Install VS Test Agent
Invoke-WebRequest -UseBasicParsing https://download.visualstudio.microsoft.com/download/pr/12210068/8a386d27295953ee79281fd1f1832e2d/vs_TestAgent.exe -OutFile vs_TestAgent.exe; `
Start-Process vs_TestAgent.exe -ArgumentList '--quiet', '--norestart', '--nocache' -NoNewWindow -Wait; `
Remove-Item -Force vs_TestAgent.exe; `
# Install VS Build Tools
Invoke-WebRequest -UseBasicParsing https://download.visualstudio.microsoft.com/download/pr/12210059/e64d79b40219aea618ce2fe10ebd5f0d/vs_BuildTools.exe -OutFile vs_BuildTools.exe; `
# Installer won't detect DOTNET_SKIP_FIRST_TIME_EXPERIENCE if ENV is used, must use setx /M
setx /M DOTNET_SKIP_FIRST_TIME_EXPERIENCE 1; `
Start-Process vs_BuildTools.exe -ArgumentList '--add', 'Microsoft.VisualStudio.Workload.MSBuildTools', '--add', 'Microsoft.VisualStudio.Workload.NetCoreBuildTools', '--add', 'Microsoft.VisualStudio.Workload.WebBuildTools;includeRecommended', '--quiet', '--norestart', '--nocache' -NoNewWindow -Wait; `
Remove-Item -Force vs_buildtools.exe; `
Remove-Item -Force -Recurse \"${Env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\"; `
Remove-Item -Force -Recurse ${Env:TEMP}\*; `
Remove-Item -Force -Recurse \"${Env:ProgramData}\Package Cache\"
# Set PATH in one layer to keep image size down.
RUN setx /M PATH $(${Env:PATH} `
+ \";${Env:ProgramFiles}\NuGet\" `
+ \";${Env:ProgramFiles(x86)}\Microsoft Visual Studio\2017\TestAgent\Common7\IDE\CommonExtensions\Microsoft\TestWindow\" `
+ \";${Env:ProgramFiles(x86)}\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\")
# Install Targeting Packs
RUN @('4.0', '4.5.2', '4.6.2', '4.7.2') `
| %{ `
Invoke-WebRequest -UseBasicParsing https://dotnetbinaries.blob.core.windows.net/referenceassemblies/v${_}.zip -OutFile referenceassemblies.zip; `
Expand-Archive -Force referenceassemblies.zip -DestinationPath \"${Env:ProgramFiles(x86)}\Reference Assemblies\Microsoft\Framework\.NETFramework\"; `
Remove-Item -Force referenceassemblies.zip; `
}
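Not part of the original post, but a quick sanity check of the resulting image is to build it and run aws --version in a fresh container, since the machine-level PATH written during the build is only picked up by new sessions; the image tag below is hypothetical:

# Hypothetical tag for the custom CodeBuild image.
docker build -t custom-dotnet-build .

# A fresh container session reads the updated machine PATH, so aws should resolve without a full path.
docker run --rm custom-dotnet-build powershell -Command "aws --version"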

Jenkinsfile to automatically deploy to EKS

How do I pass my AWS credentials when I am running a Jenkins job?
Taking this as an example: https://github.com/PaulMaddox/amazon-eks-kubectl
$ docker run -v ~/.aws:/home/kubectl/.aws -e CLUSTER=demo maddox/kubectl get services
The above works on my laptop, but I want to pass the AWS credentials in the pipeline. I have AWS configured in my Jenkins credentials. I also have a Bitbucket repo which contains a Jenkinsfile and a YAML file for the "service" and "deployment".
The way I do it now is to run kubectl create -f filename.yaml and it deploys to EKS. I just want to do the same thing but automate it with a Jenkinsfile; any suggestions on how to do it, either with kubectl or with Helm?
In your Jenkinsfile you should include a similar section:
stage('Deploy on Dev') {
    node('master') {
        withEnv(["KUBECONFIG=${JENKINS_HOME}/.kube/dev-config", "IMAGE=${ACCOUNT}.dkr.ecr.us-east-1.amazonaws.com/${ECR_REPO_NAME}:${IMAGETAG}"]) {
            sh "sed -i 's|IMAGE|${IMAGE}|g' k8s/deployment.yaml"
            sh "sed -i 's|ACCOUNT|${ACCOUNT}|g' k8s/service.yaml"
            sh "sed -i 's|ENVIRONMENT|dev|g' k8s/*.yaml"
            sh "sed -i 's|BUILD_NUMBER|01|g' k8s/*.yaml"
            sh "kubectl apply -f k8s"
            DEPLOYMENT = sh(
                script: 'cat k8s/deployment.yaml | yq -r .metadata.name',
                returnStdout: true
            ).trim()
            echo "Creating k8s resources..."
            sleep 180
            DESIRED = sh(
                script: "kubectl get deployment/$DEPLOYMENT | awk '{print \$2}' | grep -v DESIRED",
                returnStdout: true
            ).trim()
            CURRENT = sh(
                script: "kubectl get deployment/$DEPLOYMENT | awk '{print \$3}' | grep -v CURRENT",
                returnStdout: true
            ).trim()
            if (DESIRED.equals(CURRENT)) {
                currentBuild.result = "SUCCESS"
                return
            } else {
                error("Deployment Unsuccessful.")
                currentBuild.result = "FAILURE"
                return
            }
        }
    }
}
This section will be responsible for automating the deployment process.
I hope it helps.

Running supervisord in AWS Environment

I'm working on adding Django Channels to my Elastic Beanstalk environment, but I am running into trouble configuring supervisord. Specifically, in /.ebextensions I have a file channels.config with this code:
container_commands:
  01_copy_supervisord_conf:
    command: "cp .ebextensions/supervisord/supervisord.conf /opt/python/etc/supervisord.conf"
  02_reload_supervisord:
    command: "supervisorctl -c /opt/python/etc/supervisord.conf reload"
This errors on the second command with the following error message, reported through the Elastic Beanstalk CLI:
Command failed on instance. Return code: 1 Output: error: <class
'FileNotFoundError'>, [Errno 2] No such file or directory:
file: /opt/python/run/venv/local/lib/python3.4/site-
packages/supervisor/xmlrpc.py line: 562.
container_command 02_reload_supervisord in
.ebextensions/channels.config failed.
My guess would be that supervisor didn't install correctly, but because command 1 copies the files without an error, I'm inclined to think supervisor is indeed installed and that the issue is with the container command. Has anyone implemented supervisor in an AWS environment and can see where I'm going wrong?
You should be careful about Python versions and exact installation paths.
Here is how I did it; maybe it can help:
packages:
  yum:
    python27-setuptools: []
container_commands:
  01-supervise:
    command: ".ebextensions/supervise.sh"
Here is the supervise.sh
#!/bin/bash
if [ "${SUPERVISE}" == "enable" ]; then
export HOME="/root"
export PATH="/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin"
easy_install supervisor
cat <<'EOB' > /etc/init.d/supervisord
# Source function library
. /etc/rc.d/init.d/functions
# Source system settings
if [ -f /etc/sysconfig/supervisord ]; then
. /etc/sysconfig/supervisord
fi
# Path to the supervisorctl script, server binary,
# and short-form for messages.
supervisorctl=${SUPERVISORCTL-/usr/bin/supervisorctl}
supervisord=${SUPERVISORD-/usr/bin/supervisord}
prog=supervisord
pidfile=${PIDFILE-/var/run/supervisord.pid}
lockfile=${LOCKFILE-/var/lock/subsys/supervisord}
STOP_TIMEOUT=${STOP_TIMEOUT-60}
OPTIONS="${OPTIONS--c /etc/supervisord.conf}"
RETVAL=0
start() {
    echo -n $"Starting $prog: "
    daemon --pidfile=${pidfile} $supervisord $OPTIONS
    RETVAL=$?
    echo
    if [ $RETVAL -eq 0 ]; then
        touch ${lockfile}
        $supervisorctl $OPTIONS status
    fi
    return $RETVAL
}
stop() {
    echo -n $"Stopping $prog: "
    killproc -p ${pidfile} -d ${STOP_TIMEOUT} $supervisord
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -rf ${lockfile} ${pidfile}
}
reload() {
    echo -n $"Reloading $prog: "
    LSB=1 killproc -p $pidfile $supervisord -HUP
    RETVAL=$?
    echo
    if [ $RETVAL -eq 7 ]; then
        failure $"$prog reload"
    else
        $supervisorctl $OPTIONS status
    fi
}
restart() {
    stop
    start
}
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status -p ${pidfile} $supervisord
        RETVAL=$?
        [ $RETVAL -eq 0 ] && $supervisorctl $OPTIONS status
        ;;
    restart)
        restart
        ;;
    condrestart|try-restart)
        if status -p ${pidfile} $supervisord >&/dev/null; then
            stop
            start
        fi
        ;;
    force-reload|reload)
        reload
        ;;
    *)
        echo $"Usage: $prog {start|stop|restart|condrestart|try-restart|force-reload|reload}"
        RETVAL=2
esac
exit $RETVAL
EOB
chmod +x /etc/init.d/supervisord
cat <<'EOB' > /etc/sysconfig/supervisord
# Configuration file for the supervisord service
#
# Author: Jason Koppe <jkoppe@indeed.com>
# original work
# Erwan Queffelec <erwan.queffelec@gmail.com>
# adjusted to new LSB-compliant init script
# make sure elasticbeanstalk PARAMS are being passed through to supervisord
. /opt/elasticbeanstalk/support/envvars
# WARNING: change these wisely! for instance, adding -d, --nodaemon
# here will lead to a very undesirable (blocking) behavior
#OPTIONS="-c /etc/supervisord.conf"
PIDFILE=/var/run/supervisord/supervisord.pid
#LOCKFILE=/var/lock/subsys/supervisord.pid
# Path to the supervisord binary
SUPERVISORD=/usr/local/bin/supervisord
# Path to the supervisorctl binary
SUPERVISORCTL=/usr/local/bin/supervisorctl
# How long should we wait before forcefully killing the supervisord process ?
#STOP_TIMEOUT=60
# Remove this if you manage number of open files in some other fashion
#ulimit -n 96000
EOB
mkdir -p /var/run/supervisord/
chown webapp: /var/run/supervisord/
cat <<'EOB' > /etc/supervisord.conf
[unix_http_server]
file=/tmp/supervisor.sock
chmod=0777
[supervisord]
logfile=/var/app/support/logs/supervisord.log
logfile_maxbytes=0
logfile_backups=0
loglevel=warn
pidfile=/var/run/supervisord/supervisord.pid
nodaemon=false
nocleanup=true
user=webapp
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[program:process-ipn-api-gpsfsoft]
command = -- command that u want to run ---
directory = /var/app/current/
user = webapp
autorestart = true
startsecs = 0
numprocs = 10
process_name = -- process name that u want ---
EOB
# this is now a little tricky, not officially documented, so might break but it is the cleanest solution
# first before the "flip" is done (e.g. switch between ondeck vs current) lets stop supervisord
echo -e '#!/usr/bin/env bash\nservice supervisord stop' > /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh
chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/00_stop_supervisord.sh
# then right after the webserver is reloaded, we can start supervisord again
echo -e '#!/usr/bin/env bash\nservice supervisord start' > /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
chmod +x /opt/elasticbeanstalk/hooks/appdeploy/enact/99_z_start_supervisord.sh
fi
PS: You have to define SUPERVISE as enable in the Elastic Beanstalk environment properties to get this to run.
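For reference (not from the original answer), that environment property can also be set from the command line with the AWS CLI; a minimal sketch, using a hypothetical environment name:

# Hypothetical environment name; sets the property that supervise.sh checks for.
aws elasticbeanstalk update-environment `
    --environment-name my-django-env `
    --option-settings "Namespace=aws:elasticbeanstalk:application:environment,OptionName=SUPERVISE,Value=enable"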