How can I install Oracle Database 18c XE into a Windows Docker container?

I'm not able to install Oracle Database 18c Express Edition into a Windows Docker container.
The Oracle silent setup (documented here) reports success, but no installation is actually performed: the destination directory (C:\OracleXE\) stays empty and nothing is installed.
What am I doing wrong here?
This is my Dockerfile:
# escape=`
FROM mcr.microsoft.com/windows:20H2
USER ContainerAdministrator
COPY / /O18c
WORKDIR /O18c
SHELL ["PowerShell", "-Command"]
RUN New-Item 'C:\db-data' -ItemType Directory; New-LocalUser -Name OracleAdministrator -NoPassword -UserMayNotChangePassword -AccountNeverExpires; Set-LocalUser -Name OracleAdministrator -PasswordNeverExpires:$True; $adm = (Get-LocalGroup | Where-Object {$_.Name.IndexOf('Admin') -eq 0}).Name; Add-LocalGroupMember -Group $adm -Member OracleAdministrator
USER OracleAdministrator
RUN ./Setup.exe /s /v"RSP_FILE=C:\O18c\XEInstall.rsp" /v"/L*v C:\O18c\setup.log" /v"/qn"
EXPOSE 1521 5550 3389
VOLUME C:\db-data
ENTRYPOINT PowerShell
This is my XEInstall.rsp file:
#Do not leave any parameter with empty value
#Install Directory location, username can be replaced with current user
INSTALLDIR=C:\OracleXE\
#Database password, All users are set with this password, Remove the value once installation is complete
PASSWORD=foobar123!
#If listener port is set to 0, available port will be allocated starting from 1521 automatically
LISTENER_PORT=0
#If EM express port is set to 0, available port will be used starting from 5550 automatically
EMEXPRESS_PORT=0
#Specify char set of the database
CHAR_SET=AL32UTF8
This is my directory structure (screenshot omitted; the Dockerfile sits alongside the OracleXE184_Win64\ folder that holds the extracted setup files).
This is my docker build command:
docker build -f .\Dockerfile .\OracleXE184_Win64\

Apparently, the Oracle setup doesn't work when launched from PowerShell.
When run from the standard command prompt (cmd, Docker's default Windows shell), setup installs fine.
This is my working Dockerfile:
# escape=`
FROM mcr.microsoft.com/windows:20H2
USER ContainerAdministrator
COPY / /O18c
WORKDIR /O18c
RUN PowerShell -Command "New-Item 'C:\db-data' -ItemType Directory; New-LocalUser -Name OracleAdministrator -NoPassword -UserMayNotChangePassword -AccountNeverExpires; Set-LocalUser -Name OracleAdministrator -PasswordNeverExpires:$True; $adm = (Get-LocalGroup | Where-Object {$_.Name.IndexOf('Admin') -eq 0}).Name; Add-LocalGroupMember -Group $adm -Member OracleAdministrator;"
USER OracleAdministrator
RUN setup.exe /s /v"RSP_FILE=C:\O18c\XEInstall.rsp" /v"/L*v C:\O18c\setup.log" /v"/qn"
RUN PowerShell -Command "Get-ChildItem; Get-ChildItem \ -Attributes Directory;"
EXPOSE 1521 5550 3389
VOLUME C:\db-data
ENTRYPOINT PowerShell
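Since the silent installer can report success even when nothing was installed (as the question shows), a quick sanity check inside the built image doesn't hurt. A minimal sketch, run via docker run ... PowerShell; treating a non-empty C:\OracleXE as success is an assumption based on the INSTALLDIR in the rsp file:
# Sketch: sanity-check the install from inside the container.
# C:\OracleXE comes from INSTALLDIR in XEInstall.rsp; empty means the install failed.
if ((Get-ChildItem 'C:\OracleXE' -ErrorAction SilentlyContinue).Count -eq 0) {
    Write-Warning 'C:\OracleXE is empty - inspect C:\O18c\setup.log'
}
# The verbose MSI log records the overall result; 'value 3' means failure.
Select-String -Path 'C:\O18c\setup.log' -Pattern 'Return value' | Select-Object -Last 1
# A successful XE install registers Oracle Windows services.
Get-Service 'Oracle*' | Format-Table Name, Status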

Related

Rename EC2 hostname using user-data does not work

I am trying to rename the hostname to a specific one (TEAM-CNTNR) using the user-data script, but after the EC2 instance comes online and I connect to it (via Session Manager), the hostname is still the random one that the EC2 service assigns, such as EC2AMAZ-VHAGRNV.
Am I missing something? This is my user-data script:
<powershell>
Import-Module ECSTools
[Environment]::SetEnvironmentVariable("ECS_ENABLE_AWSLOGS_EXECUTIONROLE_OVERRIDE", $TRUE, "Machine")
Initialize-ECSAgent -Cluster "${cluster_name}" -EnableTaskIAMRole -LoggingDrivers '["json-file","awslogs"]' -EnableTaskENI
# rename the instance hostname so that it works with the gMSA account
Rename-Computer -NewName "TEAM-CNTNR" -Force
## instance-domain-join code here. Omitted for brevity
# Perform the domain join
Add-Computer -DomainName "$domain_name.$domain_tld" -OUPath "OU=Computers,OU=enrcloud,DC=enr,DC=cloud" -ComputerName "$hostname" -Credential $credential -Passthru -Verbose -Restart
</powershell>
<runAsLocalSystem>true</runAsLocalSystem>
It was revealed to me that the Add-Computer command has a -NewName parameter that will:
join the machine to the new domain
and change its name at the same time
Add-Computer ... -NewName "MyComputer" ...
Reference: Microsoft site
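Putting that together, the rename and the domain join collapse into a single Add-Computer call in the user-data. A minimal sketch reusing the variables from the question; the OU path and $credential are carried over from it as-is:
<powershell>
# Sketch: join the domain and rename the machine in one step.
# $domain_name, $domain_tld and $credential are assumed to be set as in the question.
Add-Computer -DomainName "$domain_name.$domain_tld" `
    -NewName "TEAM-CNTNR" `
    -OUPath "OU=Computers,OU=enrcloud,DC=enr,DC=cloud" `
    -Credential $credential -Verbose -Restart
</powershell>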

Child process exits unexpectedly inside a Windows-based Docker container when spawned through Node.js

I am trying to spawn a new process using Node.js in the following way:
public async executeProcess() {
    this._childProcess = childProcess.spawn("C:\\app\\SampleApp.exe");
}
Later, I build a Docker container based on mcr.microsoft.com/windows/servercore:ltsc2016 using the following Dockerfile:
ARG core=mcr.microsoft.com/windows/servercore:ltsc2016
ARG target=mcr.microsoft.com/windows/servercore:ltsc2016
FROM $core as download
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';
$ProgressPreference = 'SilentlyContinue';"]
ENV NODE_VERSION 10.16.0
RUN Invoke-WebRequest $('https://nodejs.org/dist/v{0}/node-v{0}-win-x64.zip' -f $env:NODE_VERSION) -OutFile 'node.zip' -UseBasicParsing ; \
Expand-Archive node.zip -DestinationPath C:\ ; \
Rename-Item -Path $('C:\node-v{0}-win-x64' -f $env:NODE_VERSION) -NewName 'C:\nodejs'
FROM $target
ENV NPM_CONFIG_LOGLEVEL info
COPY --from=download /nodejs /Bridge
ARG VS_OUT_DIR=.
WORKDIR /
ADD ${VS_OUT_DIR} ./app
WORKDIR /app
SHELL [ "powershell", "-Command"]
RUN Get-ChildItem -Path C:/Bridge -Recurse -Force
RUN Get-ChildItem Env:
SHELL ["cmd", "/C"]
ENTRYPOINT ["c:/nodejs/node.exe", "./lib/app.js"]

AWS CLI on Windows Docker

I am working on ASP.NET applications. The current plan is to set up a CI pipeline in AWS with ECS. So far I have created two stages.
Scenario:
I have an ASP.NET Web API application built on .NET Framework 4.6.2. I need to set up a containerized CI/CD pipeline using AWS CodePipeline.
My initial focus is to set up CI only, which includes automating the source build, unit tests, and uploading the build to an ECR repository as a Docker image. This will be used in the next stage to deploy to ECS.
Current Progress and Problem Description:
Stage 1: Source
This stage configured successfully with the GitHub webhook.
Stage 2: Build
I used the Microsoft image (mcr.microsoft.com/dotnet/framework/sdk:4.7.2) as the build environment.
This is working fine for build and unit testing.
Now I need to build the Docker image and push it to the ECR repository. I created a repository and ran the commands locally. I also enabled Docker support in my application, so a docker-compose file is there as well.
I am stuck here, as I don't have any idea how to continue.
My current buildspec file consists of the following:
version: 0.2
env:
  variables:
    PROJECT: aspnetapp
    DOTNET_FRAMEWORK: 4.6.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws --version
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION --no-include-email)
      - REPOSITORY_URI=727003307347.dkr.ecr.eu-west-1.amazonaws.com/ecr-repo-axiapp-cloud
      - COMMIT_HASH=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_TAG=aspnetapi
  build:
    commands:
      - echo Build started on `date`
      - nuget restore
      - msbuild $env:PROJECT.sln /p:TargetFrameworkVersion=v$env:DOTNET_FRAMEWORK /p:Configuration=Release /p:DeployIisAppPath="Default Web Site" /p:PackageAsSingleFile=false /p:OutDir=C:\codebuild\artifacts\
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker images...
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
artifacts:
  files:
    - '**/*'
  base-directory: C:\codebuild\artifacts\
While running, I got an error on aws --version. From this I understand that the AWS CLI needs to be installed on the build server. For that, I am creating a custom Docker image. I found an article and am following it:
https://aws.amazon.com/blogs/devops/extending-aws-codebuild-with-custom-build-environments-for-the-net-framework/
From the article, I have the following Dockerfile:
# escape=`
FROM microsoft/dotnet-framework:4.7.2-runtime
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
#Install NuGet CLI
ENV NUGET_VERSION 4.4.1
RUN New-Item -Type Directory $Env:ProgramFiles\NuGet; `
Invoke-WebRequest -UseBasicParsing https://dist.nuget.org/win-x86-commandline/v$Env:NUGET_VERSION/nuget.exe -OutFile $Env:ProgramFiles\NuGet\nuget.exe
#Install AWS CLI
RUN Invoke-WebRequest -UseBasicParsing https://s3.amazonaws.com/aws-cli/AWSCLI64PY3.msi -OutFile AWSCLI64PY3.msi
# Install VS Test Agent
RUN Invoke-WebRequest -UseBasicParsing https://download.visualstudio.microsoft.com/download/pr/12210068/8a386d27295953ee79281fd1f1832e2d/vs_TestAgent.exe -OutFile vs_TestAgent.exe; `
Start-Process vs_TestAgent.exe -ArgumentList '--quiet', '--norestart', '--nocache' -NoNewWindow -Wait; `
Remove-Item -Force vs_TestAgent.exe; `
# Install VS Build Tools
Invoke-WebRequest -UseBasicParsing https://download.visualstudio.microsoft.com/download/pr/12210059/e64d79b40219aea618ce2fe10ebd5f0d/vs_BuildTools.exe -OutFile vs_BuildTools.exe; `
# Installer won't detect DOTNET_SKIP_FIRST_TIME_EXPERIENCE if ENV is used, must use setx /M
setx /M DOTNET_SKIP_FIRST_TIME_EXPERIENCE 1; `
Start-Process vs_BuildTools.exe -ArgumentList '--add', 'Microsoft.VisualStudio.Workload.MSBuildTools', '--add', 'Microsoft.VisualStudio.Workload.NetCoreBuildTools', '--add', 'Microsoft.VisualStudio.Workload.WebBuildTools;includeRecommended', '--quiet', '--norestart', '--nocache' -NoNewWindow -Wait; `
Remove-Item -Force vs_buildtools.exe; `
Remove-Item -Force -Recurse \"${Env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\"; `
Remove-Item -Force -Recurse ${Env:TEMP}\*; `
Remove-Item -Force -Recurse \"${Env:ProgramData}\Package Cache\"
# Set PATH in one layer to keep image size down.
RUN setx /M PATH $(${Env:PATH} `
+ \";${Env:ProgramFiles}\NuGet\" `
+ \";${Env:ProgramFiles(x86)}\Microsoft Visual Studio\2017\TestAgent\Common7\IDE\CommonExtensions\Microsoft\TestWindow\" `
+ \";${Env:ProgramFiles(x86)}\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\")
# Install Targeting Packs
RUN @('4.0', '4.5.2', '4.6.2', '4.7.2') `
| %{ `
Invoke-WebRequest -UseBasicParsing https://dotnetbinaries.blob.core.windows.net/referenceassemblies/v${_}.zip -OutFile referenceassemblies.zip; `
Expand-Archive -Force referenceassemblies.zip -DestinationPath \"${Env:ProgramFiles(x86)}\Reference Assemblies\Microsoft\Framework\.NETFramework\"; `
Remove-Item -Force referenceassemblies.zip; `
}
I tried to download the AWS CLI using the command above.
If my understanding is right, please help me update the Dockerfile to install the AWS CLI.
In the same way, do I also need to install Docker for Windows on the build server?
I have done this. Please find the updated Dockerfile:
# escape=`
FROM microsoft/dotnet-framework:4.7.2-runtime
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
#Install NuGet CLI
ENV NUGET_VERSION 4.4.1
RUN New-Item -Type Directory $Env:ProgramFiles\NuGet; `
Invoke-WebRequest -UseBasicParsing https://dist.nuget.org/win-x86-commandline/v$Env:NUGET_VERSION/nuget.exe -OutFile $Env:ProgramFiles\NuGet\nuget.exe
#Install AWS CLI
RUN Invoke-WebRequest -UseBasicParsing https://s3.amazonaws.com/aws-cli/AWSCLI64PY3.msi -OutFile AWSCLI64PY3.msi; `
Start-Process "msiexec.exe" -ArgumentList '/i', 'AWSCLI64PY3.msi', '/qn', '/norestart' -Wait -NoNewWindow; `
Remove-Item -Force AWSCLI64PY3.msi; `
# Install VS Test Agent
Invoke-WebRequest -UseBasicParsing https://download.visualstudio.microsoft.com/download/pr/12210068/8a386d27295953ee79281fd1f1832e2d/vs_TestAgent.exe -OutFile vs_TestAgent.exe; `
Start-Process vs_TestAgent.exe -ArgumentList '--quiet', '--norestart', '--nocache' -NoNewWindow -Wait; `
Remove-Item -Force vs_TestAgent.exe; `
# Install VS Build Tools
Invoke-WebRequest -UseBasicParsing https://download.visualstudio.microsoft.com/download/pr/12210059/e64d79b40219aea618ce2fe10ebd5f0d/vs_BuildTools.exe -OutFile vs_BuildTools.exe; `
# Installer won't detect DOTNET_SKIP_FIRST_TIME_EXPERIENCE if ENV is used, must use setx /M
setx /M DOTNET_SKIP_FIRST_TIME_EXPERIENCE 1; `
Start-Process vs_BuildTools.exe -ArgumentList '--add', 'Microsoft.VisualStudio.Workload.MSBuildTools', '--add', 'Microsoft.VisualStudio.Workload.NetCoreBuildTools', '--add', 'Microsoft.VisualStudio.Workload.WebBuildTools;includeRecommended', '--quiet', '--norestart', '--nocache' -NoNewWindow -Wait; `
Remove-Item -Force vs_buildtools.exe; `
Remove-Item -Force -Recurse \"${Env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\"; `
Remove-Item -Force -Recurse ${Env:TEMP}\*; `
Remove-Item -Force -Recurse \"${Env:ProgramData}\Package Cache\"
# Set PATH in one layer to keep image size down.
RUN setx /M PATH $(${Env:PATH} `
+ \";${Env:ProgramFiles}\NuGet\" `
+ \";${Env:ProgramFiles(x86)}\Microsoft Visual Studio\2017\TestAgent\Common7\IDE\CommonExtensions\Microsoft\TestWindow\" `
+ \";${Env:ProgramFiles(x86)}\Microsoft Visual Studio\2017\BuildTools\MSBuild\15.0\Bin\")
# Install Targeting Packs
RUN @('4.0', '4.5.2', '4.6.2', '4.7.2') `
| %{ `
Invoke-WebRequest -UseBasicParsing https://dotnetbinaries.blob.core.windows.net/referenceassemblies/v${_}.zip -OutFile referenceassemblies.zip; `
Expand-Archive -Force referenceassemblies.zip -DestinationPath \"${Env:ProgramFiles(x86)}\Reference Assemblies\Microsoft\Framework\.NETFramework\"; `
Remove-Item -Force referenceassemblies.zip; `
}
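One caveat with the msiexec step above: a machine-level PATH change written by the installer is not visible to the PowerShell session that ran it, so aws may only resolve in a later RUN layer. A minimal sketch for such a verification layer; the install directory used as a fallback is the assumed AWSCLI64PY3 default:
# Sketch: verify the AWS CLI in a fresh layer, where the updated machine PATH is loaded.
if (-not (Get-Command aws.exe -ErrorAction SilentlyContinue)) {
    # Fall back to the MSI's assumed default install location.
    $env:PATH += ';C:\Program Files\Amazon\AWSCLI'
}
aws --version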

Best way to automatically move backups of a web server to an AWS server

I have a web server that produces .tar.gz backup files that I want to automatically transfer to an AWS server.
To accomplish this, I have tried to write a bash script on the AWS server that automatically checks the web server for a new backup and copies it if it is more recent (preserving timestamps).
Is there an easier or more robust way to go about this?
Am I correct in my FTP script syntax?
# Credentials to access other machine
HOST=xxxxxx
USER=xxxxx
PASSWD=xxxxxxx
# path to the remoteBackups
remoteBackups=/home/ubuntu/testBackups
# Loops indefinitely
#while [[ true ]]
#do
# FTP to remote host and get the name most recent backup
ftp -inv $HOST<<-EOT
user $USER $PASSWD
#Store name of most recent backup to FILE
# does this work, or will it just save it to a variable FILE on the remote machine?
FILE=`ls -t ~/Desktop/backups/*.tar.gz | head -1`
bye
EOT
# For testing
echo $FILE
# Copy (preserving modification dates) the file to the local remoteBackups folder on the AWS server
#scp -p -i <.pem> $FILE $remoteBackups
# Get the most recent back up from both directories
latestLocal=`ls -t ~/intranetBackups/*.tar.gz | head -1`
latestRemote=`ls -t $remoteBackups/*.tar.gz | head -1`
# For testing
echo $latestLocal
echo $latestRemote
# If the backup from the remote is newer, then save it to backups and sleep for 15 days
if [[ $latestLocal -ot $latestRemote ]]
then
echo Transferring backup from $latestRemote to $latestLocal
sleep 15d
else
echo No new backup file found
sleep 1d
fi
# If there are more than 20 backups delete the oldest
if [[ `ls -1 ~/intranetBackups | wc -l` -ge 20 ]]
then
rm `ls -t ~/intranetBackups | tail -1`
echo removed the oldest backup
else
echo no file to be removed
fi
#done

Replace username/password authentication with keypair on an existing Linux AMI

I have an AMI which needs a username/password for login via SSH. I want to create new AMIs from it, into which I can log in with any newly created key pair.
Any suggestions?
I'm not sure what AMI allows username/password login, but when you create an instance from an AMI, you need to specify a key pair.
That key will be ADDED to the authorized_keys for the default user (ec2-user for Amazon Linux, ubuntu for the Ubuntu AMI, etc).
Why don't you just add the users/passwords to the instance and then build your AMI from there? Then you can change your /etc/ssh/sshd_config and permit password logins with: PasswordAuthentication yes. By the way, username/password authentication is not recommended for servers in the cloud because of man-in-the-middle attacks (use it at your own risk).
Not sure if I understand the question fully, but if you want to change the behavior of the instance when it boots up, I suggest you look at experimenting with cloud-init. The configuration in the instance is under /etc/cloud/cloud.cfg. For example, on Ubuntu the default says something like this:
user: ubuntu
disable_root: 1
preserve_hostname: False
...
If you want to change the default user, you can change it there:
user: <myuser>
disable_root: 1
preserve_hostname: False
...
The simplest way to do this is by adding the following snippet to /etc/rc.local or its equivalent.
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
if [ ! -d /root/.ssh ] ; then
mkdir -p /root/.ssh
chmod 0700 /root/.ssh
fi
# Fetch public key using HTTP
curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/aws-key 2>/dev/null
if [ $? -eq 0 ] ; then
cat /tmp/aws-key >> /root/.ssh/authorized_keys
chmod 0600 /root/.ssh/authorized_keys
fi
rm -f /tmp/aws-key
# or fetch public key using the file in the ephemeral store:
if [ -e /mnt/openssh_id.pub ] ; then
cat /mnt/openssh_id.pub >> /root/.ssh/authorized_keys
chmod 0600 /root/.ssh/authorized_keys
fi